
Conversation

reinecke
Collaborator

@reinecke reinecke commented Oct 8, 2019

This is meant to be an alternate approach to the one @apetrynet took in PR #536 as a point of discussion. This resolves #69. I wanted to try an approach that more explicitly broke out image sequence parameters and had awareness of the timing associated with frames. This also avoids using format strings in URLs, which could become ambiguous or need complex escaping logic if a URL encoding includes a % (e.g. %20 for a space).

The image sequence is expressed with:

  • target_url_base - everything leading up to the file name in the target_url
  • name_prefix - everything in the file name leading up to the frame number
  • name_suffix - everything after the frame number in the file name
  • start_frame - first frame number used in file names
  • frame_step - step between frame numbers in file names (every other frame is a step of 2)
  • rate - frame rate (as a double) if every frame in the sequence were played back (ignoring skipped frames)
  • frame_zero_padding - number of digits frame numbers are zero-padded to (e.g. frame 10 with a pad of 4 would be 0010)
  • missing_frame_policy - enum ImageSequenceReference.MissingFramePolicy {error, hold, black} hinting how a consuming app should behave when an image whose url is returned is missing from disk

An example for 24fps media with one sample per frame, numbered 1-1000, at the path /show/sequence/shot/sample_image_sequence.%04d.exr might be:

{
  "available_range": {
    "start_time": {
      "value": 0,
      "rate": 24
    },
    "duration": {
      "value": 1000,
      "rate": 24
    }
  },
  "start_frame": 1,
  "frame_step": 1,
  "rate": 24,
  "target_url_base": "file:///show/sequence/shot/",
  "name_prefix": "sample_image_sequence.",
  "name_suffix": ".exr",
  "frame_zero_padding": 4
}

The same duration sequence but with only every 2nd frame available in the sequence would be:

{
  "available_range": {
    "start_time": {
      "value": 0,
      "rate": 24
    },
    "duration": {
      "value": 1000,
      "rate": 24
    }
  },
  "start_frame": 1,
  "frame_step": 2,
  "rate": 24,
  "target_url_base": "file:///show/sequence/shot/",
  "name_prefix": "sample_image_sequence.",
  "name_suffix": ".exr",
  "frame_zero_padding": 4
}

A target url is generated using the equivalent of the following python format string:
f"{target_url_base}{name_prefix}{(start_frame + (sample_number * frame_step)):0{frame_zero_padding}d}{name_suffix}"

Negative start_frame is also handled. The above example with a start_frame of -1 would yield the first three target urls as:

  • file:///show/sequence/shot/sample_image_sequence.-0001.exr
  • file:///show/sequence/shot/sample_image_sequence.-0000.exr
  • file:///show/sequence/shot/sample_image_sequence.0001.exr
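As a rough illustration, the generation rule could be sketched in plain Python like this (build_target_url is a hypothetical helper, not part of the proposal; the negative padding shown matches the -0001 example above, where the minus sign is not counted toward the pad, rather than Python's own "{:04d}" behavior):

```python
def build_target_url(target_url_base, name_prefix, name_suffix,
                     start_frame, frame_step, frame_zero_padding,
                     sample_number):
    # Frame number for the given sample, honoring the step.
    frame = start_frame + (sample_number * frame_step)
    # Zero-pad the absolute value so frame -1 with a pad of 4 becomes
    # "-0001" (Python's "{:04d}" would instead produce "-001").
    sign = "-" if frame < 0 else ""
    number = f"{sign}{abs(frame):0{frame_zero_padding}d}"
    return f"{target_url_base}{name_prefix}{number}{name_suffix}"
```

For example, build_target_url("file:///show/sequence/shot/", "sample_image_sequence.", ".exr", 1, 2, 4, 3) would produce the URL for frame 7 of the on-2s sequence.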

Benefits of this approach include:

  • The ability to derive the times each image maps to
  • Avoids having to support and potentially implement a rich format string system in OTIO

Downsides:

  • A bit heavier than using format strings
  • Not as flexible/extensible as format strings

Questions:

  • Should number_of_images_in_sequence be a property or method in python?
  • Should the negative behavior match the python format string {:04d} and make a number section of -001 rather than -0001?
  • Should I go ahead and implement image_number_for_time? This would be useful for people building time-based playback engines, but it would also take the opinion that a frame starts at a given time and holds until the next frame's start time.

References:

TODO:

  • Implement frame_for_time
  • Implement frame_range_for_time_range
  • Implement abstract_target_url
  • Validate choices of C++ data types (mostly int vs. long in a number of places)

Co-authored by: @apetrynet

@mikekoetter
Contributor

I like that it avoids having to choose a format string standard.
It should make switching between said standards easier.

@apetrynet
Contributor

My C++ foo is a bit weak, so please excuse "stupid/obvious" questions :)

The approach with breaking up the filename and providing zero padding etc. seems like a good idea.
As you say, it is a bit heavy providing all that info when creating an instance, but it avoids OTIO having to enforce a certain string format. Which is good.

Will this support available_range not being the same frame numbers as start_value + length of sequence like discussed in #69 or #536?

For your questions:

  • I think your property names are good, but we could consider changing start_value and value_step to start_frame and frame_step. To me it seems a tad clearer and more consistent with regards to frame_duration or frame_rate (whichever you go for).
  • I'm having a hard time deciding whether number_of_images_in_sequence should be a method or property. On one hand it makes sense exposing it as a property since it won't take any arguments as a method. On the other hand, the length of the attribute name makes me think it should be a method for some reason.
  • frame_duration vs frame_rate. I'm not sure I completely understand what you're saying here.

@reinecke
Collaborator Author

reinecke commented Oct 8, 2019

@apetrynet Thanks for taking a look! My C++ is pretty rusty too so don't worry :)

Will this support available_range not being the same frame numbers as start_value + length of sequence like discussed in #69 or #536?

This approach de-couples frame time from the number it's assigned in the url completely except for the assumption that you have available_range.duration / frame_duration number of frames. So if your available_range.start_time is RationalTime(3600, 1) and the first frame on disk is my_sequence_frame.0000.exr, you simply set start_value = 0 and it will generate frame urls accordingly.

I think your property names are good, but we could consider changing start_value and value_step to start_frame and frame_step. To me it seems a tad clearer and more consistent with regards to frame_duration or frame_rate (whichever you go for).

I think I agree with you. In general with media I avoid referring to frames explicitly because many of the container formats support behavior that will break models involving constant frame rates, but in this case a constant frame rate is a base assumption.

frame_duration vs frame_rate. I'm not sure I completely understand what your saying here.

My explanation was a bit fuzzy on that. If you have an image sequence at 24 fps where you've rendered every frame (frame numbers 1, 2, 3, 4, etc.) this means you have an implicit frame duration of 1/24th of a second. To set this on the image sequence reference you'd do this:

reference.frame_duration = RationalTime(1, 24)
reference.value_step = 1

If you have the same sequence, but you've only generated every other frame (frame numbers 1, 3, 5, etc.) you have an implicit frame duration of 2/24ths of a second. To set this on the image sequence reference you'd do this:

reference.frame_duration = RationalTime(2, 24)
reference.value_step = 2

While the terminology should keep that pretty clear, I worry that using frame_duration doesn't quite match up with people's mental model. If we used frame_rate or fps instead, then you could alter just value_step and the frame duration could be implied by that. This behavior also fits really nicely with your proposed renaming of value_step to frame_step.

The reason I took the approach of using frame_duration was that 1. it felt like it decoupled timing from frame numbering more and 2. Duration could be stored as a RationalTime rather than serializing some other kind of fraction. I think point 1 isn't as relevant given the assumption of constant frame rate for this object, and point 2 came from laziness more than anything.
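To make the trade-off concrete, here is a tiny sketch of how the rate + frame_step model implies the per-image duration (implied_frame_duration is purely illustrative, not a proposed API):

```python
from fractions import Fraction

def implied_frame_duration(rate, frame_step):
    # With a rate + frame_step model the duration each image covers is
    # implied: frame_step frames at the sequence's playback rate.
    # Every frame at 24fps -> 1/24 s; every other frame -> 2/24 s.
    return Fraction(frame_step, int(rate))
```

Under this model only frame_step changes between the two examples above, and the frame duration falls out of it.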

@jminor
Collaborator

jminor commented Oct 8, 2019

This does a good job of capturing the important data. I do worry that it is quite verbose, but it's only a vague concern - it's probably fine.

One question though: do we really need to model the value_step? That doesn't affect the overall length, and it invites the question of random missing frames, or other sparse variations like first/middle/last.

@reinecke
Collaborator Author

reinecke commented Oct 9, 2019

The reason for modeling value_step is that it lets you signal what frames are expected to exist and can avoid the cost of requesting frames that we know don’t exist.
An argument could be made that what is done about missing frames in a sequence is up to the consumer to decide, though.

@apetrynet
Contributor

This approach de-couples frame time from the number it's assigned in the url completely except for the assumption that you have available_range.duration / frame_duration number of frames. So if your available_range.start_time is RationalTime(3600, 1) and the first frame on disk is my_sequence_frame.0000.exr, you simply set start_value = 0 and it will generate frame urls accordingly.

OK, I see. But what I seem to be missing here is an easy way of mapping the renumbered frames on disk to the source_range of the clip like I did in the trimmed_range/map_source_range_to_frame_range combo. Calling target_url_for_image_number with numbers from source_range will potentially yield faulty results if source/available_range.start_time is RationalTime(3600, 1) and value_start = 0 won't it?
I'd have to write a function for calculating the offset in my application.

Regarding value_step vs frame_duration. If I understand you correctly it feels a bit redundant to keep both.

An argument could be made that what is done about missing frames in a sequence are up to the consumer to decide though.

I'm actually leaning towards skipping the value_step altogether and letting the handling of missing frames be up to the consumers. I like the idea of the step, but since we'd need to implement handling of missing frames in the consumer in any case, removing it would help simplify ImageSequenceReference instancing.

@meshula
Collaborator

meshula commented Oct 10, 2019

I would like to raise my use case again, of a sequence of images captured at a variable framerate, such as me pushing the shutter button on my camera. In this case, not only is the frame rate variable, but I expect the duration to be implicit, like a time filling slug.

A related issue would be a missing frame in a rendered test sequence. e.g. A simulation might fail intermittently, but we would still like to use the previous successful frame as a placeholder without needing to duplicate it on disk to satisfy a regex.

I'm not spotting in either of these proposals how an irregular framerate can be accommodated, or hold-frame behavior specified.

I'm fine if this were to be handled in a different way, in a different schema holding a list of frames and times, but I'm curious if the use cases can be met in the context of the current proposals?

@apetrynet
Contributor

Hi @meshula!
Please correct me if I'm not answering any of your questions.

I would like to raise my use case again, of a sequence of images captured at a variable framerate, such as me pushing the shutter button on my camera. In this case, not only is the frame rate variable, but I expect the duration to be implicit, like a time filling slug.

My view of the ImageSequenceReference is the same as a regular ExternalReference. The difference being that ImageSequenceReference points to several source files while ExternalReference points to a single file. The available_range should reflect the full range of the source files available, either based on time code or frame numbers. How the frames are recorded shouldn't really matter, as there's most likely an intended playback speed. Even in cases where you increase/decrease the exposure frequency in the middle of a shot, the playback speed should stay the same, producing a "slow-mo or fast-forward" effect.

As I read your initial use case again I got a bit confused. By "I expect the duration to be implicit, like a time filling slug", do you mean that you'd like to record the actual time between exposures, producing a sequence that only contains a few images but whose media reference has a total duration reflecting the time you used to shoot them?
If so, this could be solved by producing a small edit with your frames and using the rendered image sequence as a media_reference.

A related issue would be a missing frame in a rendered test sequence. e.g. A simulation might fail intermittently, but we would still like to use the previous successful frame as a placeholder without needing to duplicate it on disk to satisfy a regex.

As I mentioned in an earlier comment, I think this should be up to each consumer to handle. It would be no different than dealing with a corrupt frame in a .mov file, for instance. Since (I believe I read this in another thread, though I couldn't find it) it's not in the goals of OTIO to become a media player, I don't think we need to solve this in the media_reference.

I'm not spotting in either of these proposals how an irregular framerate can be accommodated, or hold-frame behavior specified.

I believe frame-holds and adjustment of playback speed should be editorial decisions and be reflected in the timeline or clip parts of the .otio file. I don't think ExternalReference has any support for this either. Does it?

@jminor
Collaborator

jminor commented Oct 10, 2019 via email

@jminor
Collaborator

jminor commented Oct 10, 2019 via email

@apetrynet
Contributor

We could combine image formats and video container formats in "format", and image encoding/compression with video codec into "encoding" in an attempt to allow these to be treated uniformly. Is this a good idea to lump them together?

I like your summary and this sounds like a good idea, but I think most software will pull the information they need from the files themselves and handle them thereafter. At least for EXR and DPX files, the metadata will contain a lot of info that helps applications read them correctly, along the lines of what video formats do.
I therefore (with the exception of available_range) don't think this additional data should be mandatory.

If some frames are missing (on 2s, or missing frame 51) then a policy can specify what to do with the gaps (hold last known frame, use nearest, show black, show error, etc.) without needing to alter the duration or rate of anything. That is, an image sequence at 24 fps, but on 2s, should still (in my view) be treated as 24fps, not 12fps.

The more I think about it, I really don't see why OTIO should have to deal with missing frames and material on 2's etc. on a "low level". In my experience the software reading image sequences has its own set of policies to deal with that, be it black frames, hold nearest frame, etc. So I really like your idea of a policy enum to hold the most common alternatives: MissingFramePolicy.black or MissingFramePolicy.hold and so on. That way we could store the intended behavior and pass it on to other applications. I also share your view regarding a sequence on 2's being treated as 24fps.
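Such an enum could be as small as this sketch (member names from the discussion; the values and the Python enum shape are illustrative, not the final API):

```python
import enum

class MissingFramePolicy(enum.Enum):
    # Raise an error so consumers notice I/O problems (bad permissions,
    # failed downloads) instead of silently skipping frames.
    error = 0
    # Hold the last successfully found frame.
    hold = 1
    # Substitute black for the missing frame.
    black = 2
```

Serializing just the member name would be enough to pass the intended behavior on to other applications.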

Another thing about the media reference actively reacting to missing files: you should be able to serialize an .otio file with "offline media" without it getting stored as a MissingReference.

In practice, most systems I've seen try hard to avoid allowing per-frame variation in format, resolution, bit depth, channels, etc. When variations do come up, this is usually considered invalid. Per-frame data window variation is more common, but seems out of scope for OTIO. So we're left with just per-frame duration... How common is this in our industry?

This is my impression as well. Some just stick to presenting the images as the first frame decoded was.

Also, I forgot to propose this alternate way to deal with varying durations of frames: Use a Track of Clips, each of which points to a single image, thus allowing each Clip to hold the duration of that image. This moves it up into the realm of composition instead of media. For relatively few images, e.g. the storyboards for a film, this model works well. For loads and loads of images, it might not.

Yeah, this is more or less the same workflow I tried to convey in my previous comment. :)

@meshula
Collaborator

meshula commented Oct 12, 2019

This suggestion by josh works for me:

If some frames are missing (on 2s, or missing frame 51) then a policy can specify what to do with the gaps (hold last known frame, use nearest, show black, show error, etc.) without needing to alter the duration or rate of anything. That is, an image sequence at 24 fps, but on 2s, should still (in my view) be treated as 24fps, not 12fps.

The use case that prompted me was a series of astrophotographs where the shutter is triggered at optically optimal moments, but which still have a sense of real time associated with the shutter time; not an editorial choice, but a choice reflecting the nature of the data. A track of clips seems heavyweight for this, in terms of very low information density in the resulting json file, but an explicit frame rate of, say, 10ms with a "hold last known frame" policy would solve this perfectly and satisfy my desire for recording a minimum of data. To be sure, astrophotography is not obviously in the intended use domain for OTIO, but folks doing planetarium shows and the like are definitely closely adjacent to us and use our industry's tools. I use vfx tools and libs in my astro work, and TBH there's my true motivation ;)

@reinecke
Collaborator Author

Good discussion all around on this!

The design that backed this PR came from the analysis that, in the use cases we're targeting, frame sequences:

  1. Are collections of media samples
  2. Lack metadata to map a given sample to its time
  3. Have a constant frame rate

Given OTIO is all about timing information, my goal was to encode metadata allowing the media reference to provide a consuming application with the URL for "on-disk" assets it would need to render a given time range.

I really like the previous suggestions about having an enum to specify missing frame behavior preference. My one concern with this approach is strictly what constitutes a "missing" frame. I think this implicit behavior about whether or not certain assets exist could result in a lot of odd error cases.
One hypothetical case could be where you've rendered every other frame in a 24fps sequence but a few of them landed on the network store with the wrong permissions. You might specify the sequence as "hold last known frame" expecting to get a 12fps version of the clip back. In this case you could get back a video that is a bit all over the place with the application raising no errors. These types of I/O problems are even more pronounced in cloud environments where you're either pre-downloading or on-demand downloading frames. Since networks are inherently unreliable, it's very important to capture errors so you can retry. For these cases I think it would be really nice if we were explicit to the application using the frames about exactly what source assets to expect so it can error appropriately.

As an aside, a related use case that comes to mind is "chunked" files, where each file contains a sub-range of the source's available range. This is typically done to conform to some arbitrary file size limit; RED R3D files are a common case. If you think of frame files as "chunked" files where the time range they contain is the duration of a single frame, then these two concepts are interchangeable. However, I think the best idea would be to model that case separately as something like ChunkedSequenceReference.

@apetrynet
Contributor

I see your concern from a producer of frames point of view that you'd like to make sure your content is played back as you intended and not treated as corrupt. Also your concern about the combination of faulty permissions and a sequence on 2's.
A solution might be using a combination of frame_step and enum so we say that our media should be treated like a sequence from frame A to Z incremented by frame_step and either error out (MissingFramePolicy.error) or fill in the blanks with whatever the enum dictates. This however requires that the frame_step is constant. Doesn't it? (I think it should)

Another approach could be something along the lines of what Hiero does (actually deprecated now), which is the concept of fragments. Similar to number_of_images_in_sequence() but more generic; it can potentially hold single video files, images, or chunked media (for when we handle that). The Hiero approach holds objects for all frames though, which might bloat the .otio file quite a bit.

As an aside, a use case that comes to mind that is related is "chunked" files. This is where each file contains a sub-range of the source's avaliable range. This is typically done to conform to some arbitrary file size limit, RED R3D files are a common case. If you think of frame files as "chunked" files where the time range they contain is the duration of a single frame then these two concepts are interchangeable. However I think the best idea would be to model that case separately as something like ChunkedSequenceReference.

I've been thinking of this myself and came to the same conclusion, that this is a third kind of media reference.

@codecov-io

codecov-io commented Oct 31, 2019

Codecov Report

Merging #602 into master will increase coverage by 1.36%.
The diff coverage is 85.93%.


@@            Coverage Diff             @@
##           master     #602      +/-   ##
==========================================
+ Coverage   81.67%   83.04%   +1.36%     
==========================================
  Files          72       74       +2     
  Lines        2729     2860     +131     
==========================================
+ Hits         2229     2375     +146     
+ Misses        500      485      -15
Flag Coverage Δ
#py27 82.95% <84.37%> (+1.29%) ⬆️
#py36 83.02% <85.93%> (+1.36%) ⬆️
#py37 83.02% <85.93%> (+1.36%) ⬆️
Impacted Files Coverage Δ
src/opentimelineio/typeRegistry.cpp 85.29% <100%> (+0.14%) ⬆️
...entimelineio-bindings/otio_serializableObjects.cpp 94.95% <100%> (+0.43%) ⬆️
src/opentimelineio/imageSequenceReference.h 62.5% <62.5%> (ø)
src/opentimelineio/imageSequenceReference.cpp 91.42% <91.42%> (ø)
src/opentime/rationalTime.cpp 85.85% <0%> (+0.44%) ⬆️
src/opentimelineio/serializableObject.h 88.11% <0%> (+0.99%) ⬆️
src/opentimelineio/serialization.cpp 83.61% <0%> (+1%) ⬆️
src/opentimelineio/errorStatus.cpp 38.46% <0%> (+3.84%) ⬆️
... and 3 more


…id complicating them with unicode string oddities.
@reinecke reinecke changed the title [WIP] Initial implementation of ImageSequenceReference Initial implementation of ImageSequenceReference Oct 31, 2019
@reinecke reinecke marked this pull request as ready for review October 31, 2019 00:43
@apetrynet
Contributor

Hi!
I finally got some time to play around with this and I'm really excited about getting support for Image Sequences in OTIO!

However, I have a few suggestions for renaming some attributes as they feel a bit inconsistent and lengthy when writing code.
We've touched this earlier in the thread, but now I have some hands on experience :)

When writing a few examples I found myself having to look up the source code or "dir()" the media_reference quite a few times since I had a hard time remembering what the different attributes were called.

As you can see I've tried to use frame as a common term.

  • start_value -> start_frame
  • value_step -> frame_step (I mixed up step_value and value_step a lot:)
  • image_number_zero_padding -> frame_padding

When it comes to the method names I'm wondering if we should try to use the term frame as much as possible here as well.
I know there's an inconsistency between me preaching about frames while liking the name ImageSequenceReference, but since image sequences are a known term in our industry and most people instinctively know that a frame is an image at a certain number in a sequence, I believe it makes sense to refer to frames rather than image numbers.
That said, I get why methods like target_url_for_image_number() are named like this, so I'm really torn here. I'll come back with comments on this if I manage to sort the thoughts in my head.

I've also had a discussion with @reinecke on slack about introducing the additional concept of a frame_range which is similar to available_range, but is based on the start_frame attribute. In addition we discussed how we are to ask for a subset of this frame_range similar to how source_range is to available_range, either by itself or through clip.trimmed_range()

Thanks!

@reinecke
Collaborator Author

reinecke commented Nov 8, 2019

Good suggestions on renaming those three attributes, I agree the suggested terminology feels more intuitive.

The core of what @apetrynet and I came to was that it's really useful for adapters to not only query the URLs for frames, but also get information about how to reconstruct those frame sequences into their native programs.

Specifically, the gap here is that it would be useful to be able to get:

  • First frame number to last frame number of the sequence on disk, always treating frame_step as 1 (In this case, first frame number is equivalent to start_frame on the reference)
  • First frame number and last frame number from the original range as trimmed by a clip's source_range

So, for example, imagine an ImageSequenceReference representing a time range of one hour to one hour and 2 seconds (TimeRange(RationalTime(86400, 24), RationalTime(48, 24))), running at 24fps with a start frame of 1.
The entire frame range would be 1-48
If you wanted frames trimmed to a source_range of TimeRange(RationalTime(86424, 24), RationalTime(24, 24)), the trimmed frame range would be 25-48.

The design problem here is that we want to make sure we:

  1. Don't entangle RationalTime rate with frame rate
  2. Allow ImageSequenceReference to own the logic of mapping times to on-disk frames

A possible solution would be to add methods:

  • int last_frame()
  • int frame_for_time(RationalTime t)

This would allow getting the full sequence range by doing:

(ref.start_frame, ref.last_frame())

And getting a trimmed frame range by doing:

(ref.frame_for_time(clip.source_range.start_time), ref.frame_for_time(clip.source_range.end_time_exclusive()))

I think the first example is reasonable, the second example feels a touch heavy. The benefit of the frame_for_time approach is that it provides maximal functionality with minimal API footprint. To relieve the pain in python we could add a convenience method:

frame_range_for_time_range(range: TimeRange) -> Tuple[int, int]

And return the frame range for the provided time range as a tuple.
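To make the mapping concrete, here is a rough sketch of the logic these methods could encapsulate (plain Python with times in seconds rather than RationalTime, purely illustrative of the constant-rate assumption; the last frame here is computed from the range's inclusive end time):

```python
def frame_for_time(start_frame, rate, available_start_seconds, time_seconds):
    # Map a time in seconds to an on-disk frame number, assuming a constant
    # rate and that available_range.start_time corresponds to start_frame.
    return start_frame + round((time_seconds - available_start_seconds) * rate)

def frame_range_for_time_range(start_frame, rate, available_start_seconds,
                               range_start_seconds, range_duration_seconds):
    # Inclusive (first_frame, last_frame) for the given time range; the
    # last frame comes one frame duration before the exclusive end.
    first = frame_for_time(start_frame, rate, available_start_seconds,
                           range_start_seconds)
    last = frame_for_time(start_frame, rate, available_start_seconds,
                          range_start_seconds + range_duration_seconds
                          - 1.0 / rate)
    return (first, last)
```

With the example above (start_frame 1, 24fps, available_range starting at one hour = 3600s), the full 2-second range maps to frames (1, 48), and a source_range starting at 3601s with a 1s duration maps to (25, 48).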

Here is where, in the rv adapter, these tools might be used: https://github.com/PixarAnimationStudios/OpenTimelineIO/blob/e20d1e0e03d7dd016b5cfda2b40181ff10940ab9/contrib/opentimelineio_contrib/adapters/extern_rv.py#L282-L303

Thoughts?

@jminor
Collaborator

jminor commented Nov 8, 2019

I like the general idea that ImageSequenceReference works natively in frames, but the rest of OTIO stays general. That will feel very comfortable to people working with frames.

In your trimmed frame range example, shouldn't the last frame be inclusive? (ref.frame_for_time(clip.source_range.start_time), ref.frame_for_time(clip.source_range.end_time_inclusive()))
Also does that imply that the rate of source_range has to match the frame rate?

@apetrynet
Contributor

Hi @reinecke
I thought of yet another feature that would be great to squeeze in here.

abstract_target_url(), which takes one argument, symbol.
The method returns an assembled path similar to target_url_for_image_number(), but with the frame number abstracted by the given symbol.

mr.abstract_target_url(symbol='@@@@') -> /path/to/source/image_sequence.@@@@.exr
mr.abstract_target_url(symbol='%04d') -> /path/to/source/image_sequence.%04d.exr
..and so on..

Since it's "impossible" to agree on what symbol is best suited for all, I suggest that there is no default symbol and that the method errors out if the user doesn't provide one at call time.
The name of the method and its argument may of course be something else. It's just the name I initially thought of.

Having a method like this in ImageSequenceReference would help us keep DRY and avoid end users having to implement a "join" function for every adapter and/or use case that require frame numbers being abstracted.
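In its simplest form it could be something like this sketch (a free function standing in for the proposed method; the no-default-symbol behavior is as suggested above):

```python
def abstract_target_url(target_url_base, name_prefix, name_suffix, symbol):
    # No default symbol on purpose: the caller must pick the placeholder
    # convention ("@@@@", "%04d", "####", ...) their target app expects.
    if not symbol:
        raise ValueError("a frame-number placeholder symbol is required")
    return f"{target_url_base}{name_prefix}{symbol}{name_suffix}"
```

So abstract_target_url("/path/to/source/", "image_sequence.", ".exr", "%04d") would give the "%04d"-style path from the example above.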

@reinecke
Collaborator Author

I like the abstract target URL idea a lot! I'll add it to my todo list.

@reinecke reinecke changed the title [WIP] Initial implementation of ImageSequenceReference Initial implementation of ImageSequenceReference Nov 19, 2019
@reinecke
Collaborator Author

I no longer have any TODOs on the PR. Ready for feedback for final acceptance.

…rence to raise an exception with more details about what went wrong
@apetrynet
Contributor

Nice! Good job @reinecke ! I'll compile and try it out again as soon as possible.

@apetrynet
Contributor

@reinecke I really like this!
It feels a lot more consistent now and a lot more intuitive to create an instance etc. Seems to work really well :)

In addition to the docstring suggestion above, I have a thought. (might be silly though)
Would it be an idea to make sure target_url_base ends with a separator ("os.sep") if one is left out and a path is provided? Alternatively, make sure to add a separator at assemble time in methods like target_url_for_image_number and abstract_target_url when needed?

What I predict will happen in a lot of use cases is:

path, filename = os.path.split('/path/to/image_sequence.1001.ext')  # -> ('/path/to', 'image_sequence.1001.ext') No separator "/" after dirname

mr = otio.schema.ImageSequenceReference(
    target_url_base=path,  # Oh, oh!
    name_prefix='image_sequence.',  # imagine these also being split by a regex or something :)
    name_suffix='.ext',
    start_frame=1001
)
seq = mr.abstract_target_url(symbol='%04d')  # -> '/path/toimage_sequence.%04d.ext' 

It would be nice not having to always do something like path + os.sep, or write code to make sure you end up with a separator in target_url_base.
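The normalization itself would be tiny; something like this sketch (ensure_trailing_separator is a hypothetical helper, and it deliberately uses "/" rather than os.sep since target_url_base is meant to be part of a URL):

```python
def ensure_trailing_separator(target_url_base):
    # Append "/" when missing so joining with name_prefix can't silently
    # glue the directory onto the file name, as in the example above.
    if target_url_base and not target_url_base.endswith("/"):
        return target_url_base + "/"
    return target_url_base
```

The open question is just where this runs: in the constructor, or at assemble time inside the URL-building methods.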

@jminor
Collaborator

jminor commented Nov 21, 2019

I don't think os.sep is the right thing to use in an URL. I was going to suggest using os.path.join or urllib.parse.urljoin but neither of those seems to do the right thing either. This runs head on into the fact that we're not very strict about whether target_url is an URL or a file path.

@reinecke
Collaborator Author

reinecke commented Nov 21, 2019

Unfortunately urllib is a bit less robust than os.path. I can try using pathlib.PosixPath, that should at least be correct for URLs.

@apetrynet
Contributor

Isn't this a part of the C++ code as well? At least target_url_for_image_number. Are we able to use these libraries there too?

@reinecke
Collaborator Author

Oh, good call. All the URL building is C++ with the exception of abstract_target_url - I opted to minimize the C++ url footprint for that one because that's mostly a pipeline integrator question and less likely to apply for application integrators.

@apetrynet
Contributor

So to unify URL handling, should we consider either moving abstract_target_url into C++ or wrapping target_url_for_image_number in Python?

@reinecke
Collaborator Author

reinecke commented Dec 7, 2019

Finally got around to another pass on this. I believe I've addressed all the open issues.

@apetrynet
Contributor

Great! I'll give it a spin before Thursday.

@jminor jminor changed the base branch from master to imagesequence December 13, 2019 21:21
@reinecke reinecke merged commit 053d871 into AcademySoftwareFoundation:imagesequence Dec 18, 2019
@ssteinbach ssteinbach added this to the Public Beta 13 milestone Mar 17, 2020
reinecke added a commit that referenced this pull request May 28, 2020
Added ImageSequenceReference MediaReference subclass schema.
reinecke added a commit that referenced this pull request Jun 30, 2020
* Initial implementation of ImageSequenceReference (#602)

Added ImageSequenceReference MediaReference subclass schema.

* Implement ImageSequenceReference in extern_rv  (#633)

* Added support for ImageSequenceReference to example RV plugin (#637)

* Added support for ImageSequenceReference to the example OTIO RV reader plugin

* Switched to using opentime.to_frames to compute rv's in and out frames

Co-authored-by: Daniel Flehner Heen <[email protected]>
Co-authored-by: Robyn Rindge <[email protected]>