video
A fairly standard system for loading and processing videos, built around the Python interface to OpenCV. It is a node system where you create nodes and set their parameters, inputs and outputs. A manager object then arranges for the data to flow through the nodes by evaluating them in the correct order. I primarily use it to extract words from videos to feed to topic models, which is provided directly by the script record_words.py. Implementation is mostly pure Python with scipy and OpenCV, but some modules include C code that they will compile and use, and there is even some use of OpenCL, for when speed really matters. Includes very basic visualisation support. There are many, many test scripts from which to figure out how to use the system.
Of note is the background subtraction module, which is my own algorithm: 'Background Subtraction with Dirichlet Processes' by Tom SF Haines & Tao Xiang, ECCV 2012. My tests show it to be the best background subtraction algorithm available, at least at the time of writing. That said, its shadow handling is poor and it does not compensate for camera shake.
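
As a rough sketch of the style (node class names follow the module descriptions below; the constructor arguments are assumptions and may differ from the actual code):

from video import Manager, ReadCV, ViewCV

man = Manager()

vid = ReadCV('input.avi')      # source node - reads an avi file via OpenCV
view = ViewCV('input')         # sink node - shows frames in a window
view.source(0, vid)            # wire the reader's default output to the viewer's input
man.add([vid, view])

man.run(real_time = True)      # evaluates the nodes in dependency order, frame by frame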
If you are reading readme.txt then you can generate documentation by running make_doc.py
Contains the following files:
video.py
- Imports everything; typically you just include this in your code.
manager.py
- Contains the manager object, which arranges for the nodes to update in the correct order, and provides various convenience methods.
video_node.py
- The video node interface that all of the provided nodes inherit from and implement.
black.py
- Generates an entirely black video sequence.
read_cv.py
- Uses OpenCV to read in an avi file.
read_cv_cam.py
- Uses OpenCV to read from a webcam.
read_cv_is.py
- Uses OpenCV to read in an image sequence as a video.
seq.py
- Provides a sequence node that allows you to concatenate video files.
seq_make.py
- Helper that automatically generates a sequence node given a directory of numbered video files.
frame_crop.py
- Adjusts the length of a video, by cutting frames from the beginning and/or end.
half.py
- Halves the resolution of a video.
step_scale.py
- Scales a video in integer scales.
reflect.py
- Reflects a video; choice of x axis, y axis or both.
write_cv.py
- Writes an avi file to disk.
write_frames_cv.py
- Writes a video to disk as a sequence of images.
write_frame_cv.py
- Writes a specified list of frames to disk as separate image files.
view_cv.py
- Shows a video on the screen in a window. If the user sends a keyboard or mouse button press to this window it stops, which will typically stop everything.
view_pygame.py
- Shows a video on screen, fullscreen, using pygame.
record.py
- Provides a node that will save a video stream to disk.
play.py
- Reads from disk the file generated by a Record node and acts as though it is the node that Record saved to disk.
remap.py
- Remaps multiple input nodes to generate a specific set of outputs. Typically used with the Record node to decide exactly what is saved to disk.
play_words.py
- A script that plays back the file generated by the record_words.py script.
deinterlace_ev.py
- Overly complicated de-interlacing algorithm.
colour_bias.py
- Converts the colour space to a luminance/chromaticity based one.
light_correct_ms.py
- Corrects for variations in light source brightness using mean shift.
backsub_dp.py
- The background subtraction code. (Support files = backsub_dp_c.c, backsub_dp_cl.c, backsub_dp_cl.cl)
opticalflow_lk.py
- Lukas & Kanade optical flow algorithm.
five_word.py
- Given optical flow and a foreground mask this generates the '5-words on a grid' features often used with topic models to analyse video.
clip_mask.py
- Clips a mask in the sense of keeping it the same size but zeroing out all areas that are too close to the edge.
mask_flow.py
- Applies a mask to an optical flow field, zeroing out areas outside the mask. Simple way of using background subtraction to clean up optical flow.
mask_from_colour.py
- Generates masks from colour video based on exact colour matches.
mask_sabs.py
- Converts the ground truth of the SABS background subtraction test to masks that can be used to analyse results.
mask_stats.py
- Outputs assorted statistics comparing the difference between two masks, e.g. f-measure.
combine_grid.py
- Given multiple video streams as input combines them using a grid layout into a single video.
render_difference.py
- Renders the absolute difference between two input video streams, exaggerating it if requested.
render_flow.py
- Renders an optical flow video to an rgb video.
render_mask.py
- Renders a mask video to a colour video, including the ability to provide videos as the foreground and background.
render_word.py
- Renders the words generated by FiveWord.
test_backsub_dp.py
- Test the background subtraction algorithm.
test_cam_cv.py
- Test reading from a webcam with OpenCV.
test_deinterlace_ev.py
- Test deinterlacing.
test_five_word.py
- Test the 5 word parsing of a video - a basic feature extraction technique.
test_half.py
- Test halving the resolution of a video.
test_light_correct_ms.py
- Test correcting for changes in lighting.
test_opticalflow_lk.py
- Test the optical flow implementation.
test_read_cv.py
- Test reading from a file using OpenCV.
test_reflect.py
- Test reflecting a video image.
test_view_cv.py
- Test visualisation using the OpenCV window system.
test_write_cv.py
- Test writing an .avi file with OpenCV.
record_words.py
- A script that reads in a video file and converts it to the standard '5-words in a grid' feature set that is so often used when using topic models for behavioural analysis.
bgs.py
- A script that when given a video file runs background subtraction and outputs a sequence of frames containing the generated foreground/background masks.
bgs_demo.py
- A real time background subtraction from the webcam demo that I was planning on running at ECCV 2012, except they refused to provide me with electricity.
readme.txt
- This file, which is included in the html documentation.
make_doc.py
- Builds the html documentation.
MODE_RGB
Indicates a connection between nodes that uses a rgb colour stream, for normal video.
MODE_MASK
Indicates a connection between nodes that uses a binary stream, for communicating masks.
MODE_FLOW
Indicates a connection between nodes that uses a pair of floating point numbers, for communicating optical flow.
MODE_WORD
Indicates a connection between nodes that uses a discrete assignment - often used to indicate some kind of labeling.
MODE_FLOAT
Indicates a connection between nodes that uses a float for each pixel - many uses.
MODE_MATRIX
Indicates a connection between nodes that sends matrices.
MODE_OTHER
Indicates a connection between nodes of an unknown type.
mode_to_string
A dictionary indexed by MODE_ variables that provides human readable descriptions.
five_word_colours
Default colours to use with the FiveWord and RenderWord classes.
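
A small hedged sketch of using these when wiring nodes by hand - only mode_to_string and the VideoNode methods documented below are assumed:

from video import mode_to_string

def connect(node_a, node_b, out_channel = 0, in_channel = 0):
    # Refuse to wire an output to an input whose mode disagrees, with a readable error.
    if node_a.outputMode(out_channel) != node_b.inputMode(in_channel):
        raise ValueError('mode mismatch: %s -> %s' % (mode_to_string[node_a.outputMode(out_channel)], mode_to_string[node_b.inputMode(in_channel)]))
    node_b.source(in_channel, node_a, out_channel)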
num_to_seq(fn, loader)
Given a filename of the form 'directory/start#end' finds all files that match the given form, where # is an arbitrary number. The files are sorted into numerical order, and each is loaded using the provided loader (ReadCV for instance - the constructor should take a single filename); a Seq object is then created. In effect this turns a directory of numbered video files into a single video.
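
For example (the directory and file names here are made up):

from video import num_to_seq, ReadCV

# Matches clips/part0.avi, clips/part1.avi, ... and returns a Seq covering them all in order.
vid = num_to_seq('clips/part#.avi', ReadCV)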
Simple class that manages a bunch of objects of type VideoNode - it is given these objects and then provides a nextFrame method. This method calls the nextFrame method of each object, but does so in an order that satisfies the dependencies. For convenience it also provides a run method for use with the ViewVideo objects - it calls the cv.WaitKey function as well as nextFrame, and optionally keeps the framerate correct - this makes simple visualisations constructed with ReadVideo objects easy to do. It also manages the OpenCL context and queue, in the event that you are optimising the video processing, so that frames can be passed between nodes without leaving the graphics card - the useCL parameter allows OpenCL optimisation to be switched off.
add(self, videos)
Videos can be a list-like object of ReadVideo things or an actual ReadVideo object. Adds them to the manager.
getCL(self)
Returns the object that needs to be passed to OpenCL supporting nodes if you want them to dance quickly.
haveCL(self)
Returns True if OpenCL is available, False otherwise.
nextFrame(self)
Calls nextFrame for all contained videos, in a dependency-satisfying order, returning True only if all calls return True.
run(self, real_time = True, quiet = False, callback = None, profile = False)
Helper method that runs the node system of this Manager object until one of the nodes says to stop. real_time - if True it tries to run in real time; this should typically be True if you are visualising the output, otherwise False to go as fast as possible. quiet - if True it doesn't print any status output to the console. callback - a function that is called every frame; it is given no parameters. profile - if True a file profile.csv will be saved to disk, containing a simple profile of how much time was used by each node in the graph.
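
A hedged sketch of driving a graph with these methods (the Black and ViewCV constructor arguments, and useCL being a constructor parameter, are assumptions):

from video import Manager, Black, ViewCV

man = Manager()                        # pass useCL = False here to switch off OpenCL, as noted above

black = Black(320, 240, 25.0, 250)     # width, height, fps, frame count - argument order is a guess
view = ViewCV('black test')
view.source(0, black)
man.add([black, view])                 # add accepts a single node or a list-like of nodes

def every_frame():
    pass                               # hypothetical per-frame hook

# Run flat out, without console output, writing per-node timings to profile.csv.
man.run(real_time = False, quiet = True, callback = every_frame, profile = True)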
Interface for a video processing object that provides the next frame on demand, as a numpy array that is always indexed from the top right of the frame.
dependencies(self)
Returns a list of video objects that this video object is dependent on - the nextframe method must be called on all of these prior to it being called on this, otherwise strange stuff will happen. The list is allowed to include duplicates.
fetch(self, channel = 0)
Returns the requested channel, as a numpy array. You cannot assume that the returned object is persistent, i.e. it might be the same object returned each time, but with different contents. The optional channel parameter indicates which output to get. fetch can be called multiple times for a channel between each call to nextFrame.
fps(self)
Returns the frames per second of the video, as a floating point value.
frameCount(self)
Returns the number of times you can call nextFrame before it starts returning None.
height(self)
Returns the height of the video.
inputCount(self)
Returns the number of inputs.
inputMode(self, channel = 0)
Returns the required mode for the given input.
inputName(self, channel = 0)
Returns a human readable description of the given input.
nextFrame(self)
Moves to the next frame, returning True if there is now a set of next frames that can be extracted using the fetch method, and False if not. Typically False means we are out of data, as an error would lead to an exception being thrown. Must not be called until the object has been set up - i.e. all inputs have been set, and any other object-specific configuration has been done.
outputCount(self)
Returns the number of outputs - a video object is allowed to have multiple outputs. The output in position 0 is the default and often the only one.
outputMode(self, channel = 0)
Returns one of the modes, which indicates the format of the data provided by fetch. The optional channel parameter indicates which output it refers to. Most of these do not bother with multiple outputs.
outputName(self, channel = 0)
Returns a string indicating what the output in question is - arbitrary and for human consumption only.
source(self, toChannel, video, videoChannel = 0)
Sets a video as an input to this video object, into channel toChannel, optionally specifying which channel to extract from video as videoChannel.
width(self)
Returns the width of the video.
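
To make the interface concrete, here is a hedged sketch of a minimal source node that emits a constant mid-grey frame. It assumes nodes inherit from a VideoNode base class in video_node.py and that rgb frames are height x width x 3 float numpy arrays; check the provided nodes before relying on either detail.

import numpy
from video import VideoNode, MODE_RGB

class Grey(VideoNode):
    """Source node that outputs the same mid-grey frame every call."""
    def __init__(self, width, height, fps = 25.0, frames = 250):
        self._width = width
        self._height = height
        self._fps = fps
        self._frames = frames
        self._done = 0
        self._frame = numpy.ones((height, width, 3), dtype=numpy.float32) * 0.5

    def width(self): return self._width
    def height(self): return self._height
    def fps(self): return self._fps
    def frameCount(self): return self._frames

    def dependencies(self): return []              # a pure source - nothing has to run before it
    def inputCount(self): return 0                 # no inputs, so source() is never called on it

    def outputCount(self): return 1
    def outputMode(self, channel = 0): return MODE_RGB
    def outputName(self, channel = 0): return 'constant grey frame'

    def nextFrame(self):
        if self._done >= self._frames: return False
        self._done += 1
        return True

    def fetch(self, channel = 0):
        return self._frame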
Dummy node that generates a black video feed of a given size, length and framerate.
dependencies(self)
Returns a list of video objects that this video object is dependent on - the nextframe method must be called on all of these prior to it being called on this, otherwise strange stuff will happen. The list is allowed to include duplicates.
fetch(self, channel = 0)
Returns the requested channel, as a numpy array. You cannot assume that the returned object is persistent, i.e. it might be the same object returned each time, but with different contents. The optional channel parameter indicates which output to get. fetch can be called multiple times for a channel between each call to nextFrame.
fps(self)
Returns the frames per second of the video, as a floating point value.
frameCount(self)
Returns the number of times you can call nextFrame before it starts returning None.
height(self)
Returns the height of the video.
inputCount(self)
Returns the number of inputs.
inputMode(self, channel = 0)
Returns the required mode for the given input.
inputName(self, channel = 0)
Returns a human readable description of the given input.
nextFrame(self)
Moves to the next frame, returning True if there is now a set of next frames that can be extracted using the fetch method, and False if not. Typically False means we are out of data, as an error would lead to an exception being thrown. Must not be called until the object has been set up - i.e. all inputs have been set, and any other object-specific configuration has been done.
outputCount(self)
Returns the number of outputs - a video object is allowed to have multiple outputs. The output in position 0 is the default and often the only one.
outputMode(self, channel = 0)
Returns one of the modes, which indicates the format of the data provided by fetch. The optional channel parameter indicates which output it refers to. Most of these do not bother with multiple outputs.
outputName(self, channel = 0)
Returns a string indicating what the output in question is - arbitrary and for human consumption only.
source(self, toChannel, video, videoChannel = 0)
Sets a video as an input to this video object, into channel toChannel, optionally specifying which channel to extract from video as videoChannel.
width(self)
Returns the width of the video.
Simple wrapper around OpenCV's video reading interface - limited compatibility but easy for reading lots of video files.
dependencies(self)
Returns a list of video objects that this video object is dependent on - the nextframe method must be called on all of these prior to it being called on this, otherwise strange stuff will happen. The list is allowed to include duplicates.
fetch(self, channel = 0)
Returns the requested channel, as a numpy array. You cannot assume that the returned object is persistent, i.e. it might be the same object returned each time, but with different contents. The optional channel parameter indicates which output to get. fetch can be called multiple times for a channel between each call to nextFrame.
fps(self)
Returns the frames per second of the video, as a floating point value.
frameCount(self)
Returns the number of times you can call nextFrame before it starts returning None.
height(self)
Returns the height of the video.
inputCount(self)
Returns the number of inputs.
inputMode(self, channel = 0)
Returns the required mode for the given input.
inputName(self, channel = 0)
Returns a human readable description of the given input.
nextFrame(self)
Moves to the next frame, returning True if there is now a set of next frames that can be extracted using the fetch method, and False if not. Typically False means we are out of data, as an error would lead to an exception being thrown. Must not be called until the object has been set up - i.e. all inputs have been set, and any other object-specific configuration has been done.
outputCount(self)
Returns the number of outputs - a video object is allowed to have multiple outputs. The output in position 0 is the default and often the only one.
outputMode(self, channel = 0)
Returns one of the modes, which indicates the format of the data provided by fetch. The optional channel parameter indicates which output it refers to. Most of these do not bother with multiple outputs.
outputName(self, channel = 0)
Returns a string indicating what the output in question is - arbitrary and for human consumption only.
source(self, toChannel, video, videoChannel = 0)
Sets a video as an input to this video object, into channel toChannel, optionally specifying which channel to extract from video as videoChannel.
width(self)
Returns the width of the video.
Presents an image sequence as a video, using OpenCV's image loading routines to load in the image files as needed.
dependencies(self)
Returns a list of video objects that this video object is dependent on - the nextframe method must be called on all of these prior to it being called on this, otherwise strange stuff will happen. The list is allowed to include duplicates.
fetch(self, channel = 0)
Returns the requested channel, as a numpy array. You cannot assume that the returned object is persistent, i.e. it might be the same object returned each time, but with different contents. The optional channel parameter indicates which output to get. fetch can be called multiple times for a channel between each call to nextFrame.
fps(self)
Returns the frames per second of the video, as a floating point value.
frameCount(self)
Returns the number of times you can call nextFrame before it starts returning None.
height(self)
Returns the height of the video.
inputCount(self)
Returns the number of inputs.
inputMode(self, channel = 0)
Returns the required mode for the given input.
inputName(self, channel = 0)
Returns a human readable description of the given input.
nextFrame(self)
Moves to the next frame, returning True if there is now a set of next frames that can be extracted using the fetch method, and False if not. Typically False means we are out of data, as an error would lead to an exception being thrown. Must not be called until the object has been set up - i.e. all inputs have been set, and any other object-specific configuration has been done.
outputCount(self)
Returns the number of outputs - a video object is allowed to have multiple outputs. The output in position 0 is the default and often the only one.
outputMode(self, channel = 0)
Returns one of the modes, which indicates the format of the data provided by fetch. The optional channel parameter indicates which output it refers to. Most of these do not bother with multiple outputs.
outputName(self, channel = 0)
Returns a string indicating what the output in question is - arbitrary and for human consumption only.
source(self, toChannel, video, videoChannel = 0)
Sets a video as an input to this video object, into channel toChannel, optionally specifying which channel to extract from video as videoChannel.
width(self)
Returns the width of the video.
Defines a video created by appending several videos - effectively pretends they are one big video. This can theoretically result in details such as frame rate and size changing as the video proceeds, though that would typically be avoided as most other nodes do not handle this scenario. Breaks some of the rules as the input videos can not be part of the manager due to the unusual calling strategy.
dependencies(self)
Returns a list of video objects that this video object is dependent on - the nextframe method must be called on all of these prior to it being called on this, otherwise strange stuff will happen. The list is allowed to include duplicates.
fetch(self, channel = 0)
Returns the requested channel, as a numpy array. You cannot assume that the returned object is persistent, i.e. it might be the same object returned each time, but with different contents. The optional channel parameter indicates which output to get. fetch can be called multiple times for a channel between each call to nextFrame.
fps(self)
Returns the frames per second of the video, as a floating point value.
frameCount(self)
Returns the number of times you can call nextFrame before it starts returning None.
height(self)
Returns the height of the video.
inputCount(self)
Returns the number of inputs.
inputMode(self, channel = 0)
Returns the required mode for the given input.
inputName(self, channel = 0)
Returns a human readable description of the given input.
nextFrame(self)
Moves to the next frame, returning True if there is now a set of next frames that can be extracted using the fetch method, and False if not. Typically False means we are out of data, as an error would lead to an exception being thrown. Must not be called until the object has been set up - i.e. all inputs have been set, and any other object-specific configuration has been done.
outputCount(self)
Returns the number of outputs - a video object is allowed to have multiple outputs. The output in position 0 is the default and often the only one.
outputMode(self, channel = 0)
Returns one of the modes, which indicates the format of the data provided by fetch. The optional channel parameter indicates which output it refers to. Most of these do not bother with multiple outputs.
outputName(self, channel = 0)
Returns a string indicating what the output in question is - arbitrary and for human consumption only.
source(self, toChannel, video, videoChannel = 0)
Sets a video as an input to this video object, into channel toChannel, optionally specifying which channel to extract from video as videoChannel.
width(self)
Returns the width of the video.
Provides a simple wrapper to shorten a video - on the first call it skips frames till it gets to the indicated starting frame, then it stops after the given number of frames has been reached. The video being shortened must not be part of the manager.
dependencies(self)
Returns a list of video objects that this video object is dependent on - the nextframe method must be called on all of these prior to it being called on this, otherwise strange stuff will happen. The list is allowed to include duplicates.
fetch(self, channel = 0)
Returns the requested channel, as a numpy array. You cannot assume that the returned object is persistent, i.e. it might be the same object returned each time, but with different contents. The optional channel parameter indicates which output to get. fetch can be called multiple times for a channel between each call to nextFrame.
fps(self)
Returns the frames per second of the video, as a floating point value.
frameCount(self)
Returns the number of times you can call nextFrame before it starts returning None.
height(self)
Returns the height of the video.
inputCount(self)
Returns the number of inputs.
inputMode(self, channel = 0)
Returns the required mode for the given input.
inputName(self, channel = 0)
Returns a human readable description of the given input.
nextFrame(self)
Moves to the next frame, returning True if there is now a set of next frames that can be extracted using the fetch method, and False if not. Typically False means we are out of data, as an error would lead to an exception being thrown. Must not be called until the object has been set up - i.e. all inputs have been set, and any other object-specific configuration has been done.
outputCount(self)
Returns the number of outputs - a video object is allowed to have multiple outputs. The output in position 0 is the default and often the only one.
outputMode(self, channel = 0)
Returns one of the modes, which indicates the format of the data provided by fetch. The optional channel parameter indicates which output it refers to. Most of these do not bother with multiple outputs.
outputName(self, channel = 0)
Returns a string indicating what the output in question is - arbitrary and for human consumption only.
source(self, toChannel, video, videoChannel = 0)
Sets a video as an input to this video object, into channel toChannel, optionally specifying which channel to extract from video as videoChannel.
width(self)
Returns the width of the video.
Given a colour video stream halves its resolution in each dimension - as simple a node as you can get really. Requires that the input have even dimensions!
dependencies(self)
Returns a list of video objects that this video object is dependent on - the nextframe method must be called on all of these prior to it being called on this, otherwise strange stuff will happen. The list is allowed to include duplicates.
fetch(self, channel = 0)
Returns the requested channel, as a numpy array. You cannot assume that the returned object is persistent, i.e. it might be the same object returned each time, but with different contents. The optional channel parameter indicates which output to get. fetch can be called multiple times for a channel between each call to nextFrame.
fps(self)
Returns the frames per second of the video, as a floating point value.
frameCount(self)
Returns the number of times you can call nextFrame before it starts returning None.
height(self)
Returns the height of the video.
inputCount(self)
Returns the number of inputs.
inputMode(self, channel = 0)
Returns the required mode for the given input.
inputName(self, channel = 0)
Returns a human readable description of the given input.
nextFrame(self)
Moves to the next frame, returning True if there is now a set of next frames that can be extracted using the fetch method, and False if not. Typically False means we are out of data, as an error would lead to an exception being thrown. Must not be called until the object has been set up - i.e. all inputs have been set, and any other object-specific configuration has been done.
outputCount(self)
Returns the number of outputs - a video object is allowed to have multiple outputs. The output in position 0 is the default and often the only one.
outputMode(self, channel = 0)
Returns one of the modes, which indicates the format of the data provided by fetch. The optional channel parameter indicates which output it refers to. Most of these do not bother with multiple outputs.
outputName(self, channel = 0)
Returns a string indicating what the output in question is - arbitrary and for human consumption only.
source(self, toChannel, video, videoChannel = 0)
Sets a video as an input to this video object, into channel toChannel, optionally specifying which channel to extract from video as videoChannel.
width(self)
Returns the width of the video.
Scales a video up, by an integer number of repetitions of each pixel.
dependencies(self)
Returns a list of video objects that this video object is dependent on - the nextframe method must be called on all of these prior to it being called on this, otherwise strange stuff will happen. The list is allowed to include duplicates.
fetch(self, channel = 0)
Returns the requested channel, as a numpy array. You cannot assume that the returned object is persistent, i.e. it might be the same object returned each time, but with different contents. The optional channel parameter indicates which output to get. fetch can be called multiple times for a channel between each call to nextFrame.
fps(self)
Returns the frames per second of the video, as a floating point value.
frameCount(self)
Returns the number of times you can call nextFrame before it starts returning None.
height(self)
Returns the height of the video.
inputCount(self)
Returns the number of inputs.
inputMode(self, channel = 0)
Returns the required mode for the given input.
inputName(self, channel = 0)
Returns a human readable description of the given input.
nextFrame(self)
Moves to the next frame, returning True if there is now a set of next frames that can be extracted using the fetch method, and False if not. Typically False means we are out of data, as an error would lead to an exception being thrown. Must not be called until the object has been set up - i.e. all inputs have been set, and any other object-specific configuration has been done.
outputCount(self)
Returns the number of outputs - a video object is allowed to have multiple outputs. The output in position 0 is the default and often the only one.
outputMode(self, channel = 0)
Returns one of the modes, which indicates the format of the data provided by fetch. The optional channel parameter indicates which output it refers to. Most of these do not bother with multiple outputs.
outputName(self, channel = 0)
Returns a string indicating what the output in question is - arbitrary and for human consumption only.
source(self, toChannel, video, videoChannel = 0)
Sets a video as an input to this video object, into channel toChannel, optionally specifying which channel to extract from video as videoChannel.
width(self)
Returns the width of the video.
Given a colour video stream reflects it in some combination of the x and y dimensions.
dependencies(self)
Returns a list of video objects that this video object is dependent on - the nextframe method must be called on all of these prior to it being called on this, otherwise strange stuff will happen. The list is allowed to include duplicates.
fetch(self, channel = 0)
Returns the requested channel, as a numpy array. You cannot assume that the returned object is persistent, i.e. it might be the same object returned each time, but with different contents. The optional channel parameter indicates which output to get. fetch can be called multiple times for a channel between each call to nextFrame.
fps(self)
Returns the frames per second of the video, as a floating point value.
frameCount(self)
Returns the number of times you can call nextFrame before it starts returning None.
height(self)
Returns the height of the video.
inputCount(self)
Returns the number of inputs.
inputMode(self, channel = 0)
Returns the required mode for the given input.
inputName(self, channel = 0)
Returns a human readable description of the given input.
nextFrame(self)
Moves to the next frame, returning True if there is now a set of next frames that can be extracted using the fetch method, and False if not. Typically False means we are out of data, as an error would lead to an exception being thrown. Must not be called until the object has been set up - i.e. all inputs have been set, and any other object-specific configuration has been done.
outputCount(self)
Returns the number of outputs - a video object is allowed to have multiple outputs. The output in position 0 is the default and often the only one.
outputMode(self, channel = 0)
Returns one of the modes, which indicates the format of the data provided by fetch. The optional channel parameter indicates which output it refers to. Most of these do not bother with multiple outputs.
outputName(self, channel = 0)
Returns a string indicating what the output in question is - arbitrary and for human consumption only.
source(self, toChannel, video, videoChannel = 0)
Sets a video as an input to this video object, into channel toChannel, optionally specifying which channel to extract from video as videoChannel.
width(self)
Returns the width of the video.
Simple video writing class - turns a video stream into a file on the hard drive. codec defaults to motion JPEG (MJPG), but another good choice is XVID, especially for larger files. Other options exist, depending on the system.
dependencies(self)
Returns a list of video objects that this video object is dependent on - the nextframe method must be called on all of these prior to it being called on this, otherwise strange stuff will happen. The list is allowed to include duplicates.
fetch(self, channel = 0)
Returns the requested channel, as a numpy array. You cannot assume that the returned object is persistent, i.e. it might be the same object returned each time, but with different contents. The optional channel parameter indicates which output to get. fetch can be called multiple times for a channel between each call to nextFrame.
fps(self)
Returns the frames per second of the video, as a floating point value.
frameCount(self)
Returns the number of times you can call nextFrame before it starts returning None.
height(self)
Returns the height of the video.
inputCount(self)
Returns the number of inputs.
inputMode(self, channel = 0)
Returns the required mode for the given input.
inputName(self, channel = 0)
Returns a human readable description of the given input.
nextFrame(self)
Moves to the next frame, returning True if there is now a set of next frames that can be extracted using the fetch method, and False if not. Typically False means we are out of data, as an error would lead to an exception being thrown. Must not be called until the object has been set up - i.e. all inputs have been set, and any other object-specific configuration has been done.
outputCount(self)
Returns the number of outputs - a video object is allowed to have multiple outputs. The output in position 0 is the default and often the only one.
outputMode(self, channel = 0)
Returns one of the modes, which indicates the format of the data provided by fetch. The optional channel parameter indicates which output it refers to. Most of these do not bother with multiple outputs.
outputName(self, channel = 0)
Returns a string indicating what the output in question is - arbitrary and for human consumption only.
source(self, toChannel, video, videoChannel = 0)
Sets a video as an input to this video object, into channel toChannel, optionally specifying which channel to extract from video as videoChannel.
width(self)
Returns the width of the video.
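
A hedged usage sketch for the video writing node described above (the filename and codec constructor arguments are assumptions based on the description):

from video import Manager, ReadCV, Half, WriteCV

man = Manager()

vid = ReadCV('input.avi')
small = Half()
small.source(0, vid)

out = WriteCV('output.avi', codec = 'XVID')   # MJPG is the stated default; XVID suits larger files
out.source(0, small)

man.add([vid, small, out])
man.run(real_time = False)                    # nothing to visualise, so run as fast as possible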
Saves a video file to disk as a sequence of image files.
dependencies(self)
Returns a list of video objects that this video object is dependent on - the nextframe method must be called on all of these prior to it being called on this, otherwise strange stuff will happen. The list is allowed to include duplicates.
fetch(self, channel = 0)
Returns the requested channel, as a numpy array. You cannot assume that the returned object is persistent, i.e. it might be the same object returned each time, but with different contents. The optional channel parameter indicates which output to get. fetch can be called multiple times for a channel between each call to nextFrame.
fps(self)
Returns the frames per second of the video, as a floating point value.
frameCount(self)
Returns the number of times you can call nextFrame before it starts returning None.
height(self)
Returns the height of the video.
inputCount(self)
Returns the number of inputs.
inputMode(self, channel = 0)
Returns the required mode for the given input.
inputName(self, channel = 0)
Returns a human readable description of the given input.
nextFrame(self)
Moves to the next frame, returning True if there is now a set of next frames that can be extracted using the fetch method, and False if not. Typically False means we are out of data, as an error would lead to an exception being thrown. Must not be called until the object has been set up - i.e. all inputs have been set, and any other object-specific configuration has been done.
outputCount(self)
Returns the number of outputs - a video object is allowed to have multiple outputs. The output in position 0 is the default and often the only one.
outputMode(self, channel = 0)
Returns one of the modes, which indicates the format of the data provided by fetch. The optional channel parameter indicates which output it refers to. Most of these do not bother with multiple outputs.
outputName(self, channel = 0)
Returns a string indicating what the output in question is - arbitrary and for human consumption only.
source(self, toChannel, video, videoChannel = 0)
Sets a video as an input to this video object, into channel toChannel, optionally specifying which channel to extract from video as videoChannel.
width(self)
Returns the width of the video.
You provide this node with a list of pairs of (zero indexed) frame numbers and file names - it then saves those particular frames to disk using the OpenCV image writing functions.
dependencies(self)
Returns a list of video objects that this video object is dependent on - the nextframe method must be called on all of these prior to it being called on this, otherwise strange stuff will happen. The list is allowed to include duplicates.
fetch(self, channel = 0)
Returns the requested channel, as a numpy array. You cannot assume that the returned object is persistent, i.e. it might be the same object returned each time, but with different contents. The optional channel parameter indicates which output to get. fetch can be called multiple times for a channel between each call to nextFrame.
fps(self)
Returns the frames per second of the video, as a floating point value.
frameCount(self)
Returns the number of times you can call nextFrame before it starts returning None.
height(self)
Returns the height of the video.
inputCount(self)
Returns the number of inputs.
inputMode(self, channel = 0)
Returns the required mode for the given input.
inputName(self, channel = 0)
Returns a human readable description of the given input.
nextFrame(self)
Moves to the next frame, returning True if there is now a set of next frames that can be extracted using the fetch method, and False if not. Typically False means we are out of data, as an error would lead to an exception being thrown. Must not be called until the object has been set up - i.e. all inputs have been set, and any other object-specific configuration has been done.
outputCount(self)
Returns the number of outputs - a video object is allowed to have multiple outputs. The output in position 0 is the default and often the only one.
outputMode(self, channel = 0)
Returns one of the modes, which indicates the format of the data provided by fetch. The optional channel parameter indicates which output it refers to. Most of these do not bother with multiple outputs.
outputName(self, channel = 0)
Returns a string indicating what the output in question is - arbitrary and for human consumption only.
source(self, toChannel, video, videoChannel = 0)
Sets a video as an input to this video object, into channel toChannel, optionally specifying which channel to extract from video as videoChannel.
width(self)
Returns the width of the video.
Simple convenience wrapper around OpenCV's windows for displaying frames - ties in correctly with the Manager object's run method.
dependencies(self)
Returns a list of video objects that this video object is dependent on - the nextframe method must be called on all of these prior to it being called on this, otherwise strange stuff will happen. The list is allowed to include duplicates.
fetch(self, channel = 0)
Returns the requested channel, as a numpy array. You cannot assume that the returned object is persistent, i.e. it might be the same object returned each time, but with different contents. The optional channel parameter indicates which output to get. fetch can be called multiple times for a channel between each call to nextFrame.
fps(self)
Returns the frames per second of the video, as a floating point value.
frameCount(self)
Returns the number of times you can call nextFrame before it starts returning None.
height(self)
Returns the height of the video.
inputCount(self)
Returns the number of inputs.
inputMode(self, channel = 0)
Returns the required mode for the given input.
inputName(self, channel = 0)
Returns a human readable description of the given input.
move(self, x, y)
Allows you to set where the window is on your computer screen - useful if you have several of these.
nextFrame(self)
Moves to the next frame, returning True if there is now a set of next frames that can be extracted using the fetch method, and False if not. Typically False means we are out of data, as an error would lead to an exception being thrown. Must not be called until the object has been set up - i.e. all inputs have been set, and any other object-specific configuration has been done.
outputCount(self)
Returns the number of outputs - a video object is allowed to have multiple outputs. The output in position 0 is the default and often the only one.
outputMode(self, channel = 0)
Returns one of the modes, which indicates the format of the data provided by fetch. The optional channel parameter indicates which output it refers to. Most of these do not bother with multiple outputs.
outputName(self, channel = 0)
Returns a string indicating what the output in question is - arbitrary and for human consumption only.
source(self, toChannel, video, videoChannel = 0)
Sets a video as an input to this video object, into channel toChannel, optionally specifying which channel to extract from video as videoChannel.
width(self)
Returns the width of the video.
An output node for visualising frames - uses pygame and runs in fullscreen - implemented for demo purposes.
dependencies(self)
Returns a list of video objects that this video object is dependent on - the nextframe method must be called on all of these prior to it being called on this, otherwise strange stuff will happen. The list is allowed to include duplicates.
fetch(self, channel = 0)
Returns the requested channel, as a numpy array. You cannot assume that the returned object is persistent, i.e. it might be the same object returned each time, but with different contents. The optional channel parameter indicates which output to get. fetch can be called multiple times for a channel between each call to nextFrame.
fps(self)
Returns the frames per second of the video, as a floating point value.
frameCount(self)
Returns the number of times you can call nextFrame before it starts returning None.
height(self)
Returns the height of the video.
inputCount(self)
Returns the number of inputs.
inputMode(self, channel = 0)
Returns the required mode for the given input.
inputName(self, channel = 0)
Returns a human readable description of the given input.
nextFrame(self)
Moves to the next frame, returning True if there is now a set of next frames that can be extracted using the fetch method, and False if not. Typically False means we are out of data, as an error would lead to an exception being thrown. Must not be called until the object has been set up - i.e. all inputs have been set, and any other object-specific configuration has been done.
outputCount(self)
Returns the number of outputs - a video object is allowed to have multiple outputs. The output in position 0 is the default and often the only one.
outputMode(self, channel = 0)
Returns one of the modes, which indicates the format of the data provided by fetch. The optional channel parameter indicates which output it refers to. Most of these do not bother with multiple outputs.
outputName(self, channel = 0)
Returns a string indicating what the output in question is - arbitrary and for human consumption only.
source(self, toChannel, video, videoChannel = 0)
Sets a video as an input to this video object, into channel toChannel, optionally specifying which channel to extract from video as videoChannel.
width(self)
Returns the width of the video.
Records the state of a single node, so it can be replayed at a later date. This consists of saving the data to a file, without loss. Records all channels of a node - you can use Remap to get something strange. Uses bzip compression and python serialisation - hardly sophisticated. Partners with Play, which reads the file back in and spits it out, such that it appears identical to the node given to the constructor.
dependencies(self)
Returns a list of video objects that this video object is dependent on - the nextframe method must be called on all of these prior to it being called on this, otherwise strange stuff will happen. The list is allowed to include duplicates.
fetch(self, channel = 0)
Returns the requested channel, as a numpy array. You cannot assume that the returned object is persistent, i.e. it might be the same object returned each time, but with different contents. The optional channel parameter indicates which output to get. fetch can be called multiple times for a channel between each call to nextFrame.
fps(self)
Returns the frames per second of the video, as a floating point value.
frameCount(self)
Returns the number of times you can call nextFrame before it starts returning None.
height(self)
Returns the height of the video.
inputCount(self)
Returns the number of inputs.
inputMode(self, channel = 0)
Returns the required mode for the given input.
inputName(self, channel = 0)
Returns a human readable description of the given input.
nextFrame(self)
Moves to the next frame, returning True if there is now a set of next frames that can be extracted using the fetch method, and False if not. Typically False means we are out of data, as an error would lead to an exception being thrown. Must not be called until the object has been set up - i.e. all inputs have been set, and any other object-specific configuration has been done.
outputCount(self)
Returns the number of outputs - a video object is allowed to have multiple outputs. The output in position 0 is the default and often the only one.
outputMode(self, channel = 0)
Returns one of the modes, which indicates the format of the data provided by fetch. The optional channel parameter indicates which output it refers to. Most of these do not bother with multiple outputs.
outputName(self, channel = 0)
Returns a string indicating what the output in question is - arbitrary and for human consumption only.
source(self, toChannel, video, videoChannel = 0)
Sets a video as an input to this video object, into channel toChannel, optionally specifying which channel to extract from video as videoChannel.
width(self)
Returns the width of the video.
Plays back a file that has been saved by the Record object. Has an identical output interface to the node that was fed into Record, meaning it can be used identically.
dependencies(self)
Returns a list of video objects that this video object is dependent on - the nextframe method must be called on all of these prior to it being called on this, otherwise strange stuff will happen. The list is allowed to include duplicates.
fetch(self, channel = 0)
Returns the requested channel, as a numpy array. You cannot assume that the returned object is persistent, i.e. it might be the same object returned each time, but with different contents. The optional channel parameter indicates which output to get. fetch can be called multiple times for a channel between each call to nextFrame.
fps(self)
Returns the frames per second of the video, as a floating point value.
frameCount(self)
Returns the number of times you can call nextFrame before it starts returning None.
height(self)
Returns the height of the video.
inputCount(self)
Returns the number of inputs.
inputMode(self, channel = 0)
Returns the required mode for the given input.
inputName(self, channel = 0)
Returns a human readable description of the given input.
nextFrame(self)
Moves to the next frame, returning True if there is now a set of next frames that can be extracted using the fetch method, and False if not. Typically False means we are out of data, as an error would lead to an exception being thrown. Must not be called until the object has been set up - i.e. all inputs have been set, and any other object-specific configuration has been done.
outputCount(self)
Returns the number of outputs - a video object is allowed to have multiple outputs. The output in position 0 is the default and often the only one.
outputMode(self, channel = 0)
Returns one of the modes, which indicates the format of the data provided by fetch. The optional channel parameter indicates which output it refers to. Most of these do not bother with multiple outputs.
outputName(self, channel = 0)
Returns a string indicating what the output in question is - arbitrary and for human consumption only.
source(self, toChannel, video, videoChannel = 0)
Sets a video as an input to this video object, into channel toChannel, optionally specifying which channel to extract from video as videoChannel.
width(self)
Returns the width of the video.
This remaps channels, potentially combining multiple video sources and consequently generating an arbitrary VideoNode object that can involve any data the user decides. All the standard rules of VideoNode are sustained, as long as all the input videos share resolution and fps. Primarily used when saving node results to disk, to save the precise set of feeds you want.
dependencies(self)
Returns a list of video objects that this video object is dependent on - the nextframe method must be called on all of these prior to it being called on this, otherwise strange stuff will happen. The list is allowed to include duplicates.
fetch(self, channel = 0)
Returns the requested channel, as a numpy array. You cannot assume that the returned object is persistent, i.e. it might be the same object returned each time, but with different contents. The optional channel parameter indicates which output to get. fetch can be called multiple times for a channel between each call to nextFrame.
fps(self)
Returns the frames per second of the video, as a floating point value.
frameCount(self)
Returns the number of times you can call nextFrame before it starts returning None.
height(self)
Returns the height of the video.
inputCount(self)
Returns the number of inputs.
inputMode(self, channel = 0)
Returns the required mode for the given input.
inputName(self, channel = 0)
Returns a human readable description of the given input.
nextFrame(self)
Moves to the next frame, returning True if there is now a set of next frames that can be extracted using the fetch method, and False if not. Typically False means we are out of data, as an error would lead to an exception being thrown. Must not be called until the object has been set up - i.e. all inputs have been set, and any other object-specific configuration has been done.
outputCount(self)
Returns the number of outputs - a video object is allowed to have multiple outputs. The output in position 0 is the default and often the only one.
outputMode(self, channel = 0)
Returns one of the modes, which indicates the format of the data provided by fetch. The optional channel parameter indicates which output it refers to. Most of these do not bother with multiple outputs.
outputName(self, channel = 0)
Returns a string indicating what the output in question is - arbitrary and for human consumption only.
source(self, toChannel, video, videoChannel = 0)
Sets a video as an input to this video object, into channel toChannel, optionally specifying which channel to extract from video as videoChannel.
width(self)
Returns the width of the video.
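
A hedged sketch of the Record / Remap / Play workflow described above (class names follow the module descriptions; the constructor arguments and the way Remap is wired are assumptions):

from video import Manager, ReadCV, BackSubDP, Remap, Record

# Pass one: compute the foreground mask once and cache it to disk.
man = Manager()
vid = ReadCV('input.avi')
bs = BackSubDP()
bs.source(0, vid)

keep = Remap()                     # choose exactly which channels end up in the file
keep.source(0, bs)                 # here just the default output of the background subtraction

rec = Record('mask.rec', keep)     # Record saves every channel of the node it is given
man.add([vid, bs, keep, rec])
while man.nextFrame(): pass

# In a later run, Play('mask.rec') then behaves exactly like the recorded node, so the
# expensive background subtraction never has to be repeated.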
Does exactly as specified - it deinterlaces. Not real time however, in fact it's bloody slow - uses a system of making multiple estimates based on different assumptions and then taking a 'vote' as to which estimate has the most support. By default fast mode is on, which uses a multi-dimensional median rather than a fancy falloff function - this makes it about 3 times faster for only a slight loss in quality (just about real time on low-resolution video, if you're not doing anything else).
dependencies(self)
Returns a list of video objects that this video object is dependent on - the nextframe method must be called on all of these prior to it being called on this, otherwise strange stuff will happen. The list is allowed to include duplicates.
fetch(self, channel = 0)
Returns the requested channel, as a numpy array. You cannot assume that the returned object is persistent, i.e. it might be the same object returned each time, but with different contents. The optional channel parameter indicates which output to get. fetch can be called multiple times for a channel between each call to nextFrame.
fps(self)
Returns the frames per second of the video, as a floating point value.
frameCount(self)
Returns the number of frames for the video.
height(self)
Returns the height of the video.
inputCount(self)
Returns the number of inputs.
inputMode(self, channel = 0)
Returns the required mode for the given input.
inputName(self, channel = 0)
Returns a human readable description of the given input.
nextFrame(self)
Moves to the next frame, returning True if there is now a set of next frames that can be extracted using the fetch method, and False if not. Typically False means we are out of data, as an error would lead to an exception being thrown. Must not be called until the object has been set up - i.e. all inputs have been set, and any other object-specific configuration has been done.
outputCount(self)
Returns the number of outputs - a video object is allowed to have multiple outputs. The output in position 0 is the default and often the only one.
outputMode(self, channel = 0)
Returns one of the modes, which indicates the format of the data provided by fetch. The optional channel parameter indicates which output it refers to. Most of these do not bother with multiple outputs.
outputName(self, channel = 0)
Returns a string indicating what the output in question is - arbitrary and for human consumption only.
source(self, toChannel, video, videoChannel = 0)
Sets a video as an input to this video object, into channel toChannel, optionally specifying which channel to extract from video as videoChannel.
width(self)
Returns the width of the video.
This converts rgb colour to a luminance/chromaticity colour space - nothing special, specific or well defined. Its trick is that you can choose the scale of the luminance channel, to adjust distance in the space to emphasise differences in colour or lightness. Luminance is put into the red channel with chromaticity in green and blue. Done such that the volume of the colour space always remains 1.
dependencies(self)
Returns a list of video objects that this video object is dependent on - the nextframe method must be called on all of these prior to it being called on this, otherwise strange stuff will happen. The list is allowed to include duplicates.
fetch(self, channel = 0)
Returns the requested channel, as a numpy array. You cannot assume that the returned object is persistent, i.e. it might be the same object returned each time, but with different contents. The optional channel parameter indicates which output to get. fetch can be called multiple times for a channel between each call to nextFrame.
fps(self)
Returns the frames per second of the video, as a floating point value.
frameCount(self)
Returns the number of times you can call nextFrame before it starts returning None.
height(self)
Returns the height of the video.
inputCount(self)
Returns the number of inputs.
inputMode(self, channel = 0)
Returns the required mode for the given input.
inputName(self, channel = 0)
Returns a human readable description of the given input.
nextFrame(self)
Moves to the next frame, returning True if there is now a set of next frames that can be extracted using the fetch method, and False if not. Typically False means we are out of data, as an error would lead to an exception being thrown. Must not be called until the object has been set up - i.e. all inputs have been set, and any other object-specific configuration has been done.
outputCount(self)
Returns the number of outputs - a video object is allowed to have multiple outputs. The output in position 0 is the default and often the only one.
outputMode(self, channel = 0)
Returns one of the modes, which indicates the format of the data provided by fetch. The optional channel parameter indicates which output it refers to. Most of these do not bother with multiple outputs.
outputName(self, channel = 0)
Returns a string indicating what the output in question is - arbitrary and for human consumption only.
source(self, toChannel, video, videoChannel = 0)
Sets a video as an input to this video object, into channel toChannel, optionally specifying which channel to extract from video as videoChannel.
width(self)
Returns the width of the video.
Does the exact opposite of ColourBias, assuming you provide the exact same parameters.
dependencies(self)
Returns a list of video objects that this video object is dependent on - the nextframe method must be called on all of these prior to it being called on this, otherwise strange stuff will happen. The list is allowed to include duplicates.
fetch(self, channel = 0)
Returns the requested channel, as a numpy array. You cannot assume that the returned object is persistent, i.e. it might be the same object returned each time, but with different contents. The optional channel parameter indicates which output to get. fetch can be called multiple times for a channel between each call to nextFrame.
fps(self)
Returns the frames per second of the video, as a floating point value.
frameCount(self)
Returns the number of times you can call nextFrame before it starts returning None.
height(self)
Returns the height of the video.
inputCount(self)
Returns the number of inputs.
inputMode(self, channel = 0)
Returns the required mode for the given input.
inputName(self, channel = 0)
Returns a human readable description of the given input.
nextFrame(self)
Moves to the next frame, returning True if there is now a set of next frames that can be extracted using the fetch method, and False if not. Typically False means we are out of data, as an error would lead to an exception being thrown. Must not be called until the object has been set up - i.e. all inputs have been set, and any other object-specific configuration has been done.
outputCount(self)
Returns the number of outputs - a video object is allowed to have multiple outputs. The output in position 0 is the default and often the only one.
outputMode(self, channel = 0)
Returns one of the modes, which indicates the format of the data provided by fetch. The optional channel parameter indicates which output it refers to. Most of these do not bother with multiple outputs.
outputName(self, channel = 0)
Returns a string indicating what the output in question is - arbitrary and for human consumption only.
source(self, toChannel, video, videoChannel = 0)
Sets a video as an input to this video object, into channel toChannel, optionally specifying which channel to extract from video as videoChannel.
width(self)
Returns the width of the video.
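
A hedged sketch of the round trip (the luminance scale argument is a guess; the point, from the two descriptions above, is that the second node must be given the same parameters as the first):

from video import ColourBias, ColourUnBias

to_lc = ColourBias(0.5)            # compress luminance so chromatic differences dominate distances
to_lc.source(0, rgb_node)          # rgb_node is any node with an rgb output

# ... nodes that want to work in the luminance/chromaticity space go here ...

back = ColourUnBias(0.5)           # identical parameter, so the conversion is exactly undone
back.source(0, to_lc)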
This node estimates the lighting change between the current frame and the previous frame, providing several outputs that communicate this change to later nodes. The change is estimated as a per-channel multiplicative constant, with an estimate obtained from each pixel; mean shift is then used to find the mode, as in the paper 'Time-Delayed Correlation Analysis for Multi-Camera Activity Understanding' by Loy, Xiang and Gong. There is no guarantee that values will remain within [0,1] afterwards. It also has a mode of operation where, instead of the previous frame, it fetches a frame from elsewhere - it does not declare a dependency on that other source, so anything can happen - the primary aim being to allow a loop with a background subtraction node, so that it uses the current, or previous, background estimate. Requires that the input colour model come from the colour_bias node, as it makes assumptions that depend on that model.
dependencies(self)
Returns a list of video objects that this video object is dependent on - the nextFrame method must be called on all of these prior to it being called on this, otherwise strange stuff will happen. The list is allowed to include duplicates.
fetch(self, channel = 0)
Returns the requested channel, as a numpy array. You cannot assume that the object is persistent, i.e. it might be the same object returned each time, but with different contents. The optional channel parameter indicates which output to get. fetch can be called multiple times for a channel between each call to nextFrame.
fps(self)
Returns the frames per second of the video, as a floating point value.
frameCount(self)
Returns the number of times you can call nextFrame before it starts returning None.
height(self)
Returns the height of the video.
inputCount(self)
Returns the number of inputs.
inputMode(self, channel = 0)
Returns the required mode for the given input.
inputName(self, channel = 0)
Returns a human readable description of the given input.
nextFrame(self)
Moves to the next frame, returning True if there is now a set of next frames that can be extracted using the fetch command, and False if not. Typically False means we are out of data, as an error would lead to an exception being thrown. Must not be called until the object is set up - i.e. all inputs have been set, and any other object-specific actions performed.
outputCount(self)
Returns the number of outputs - a video object is allowed to have multiple outputs. The output in position 0 is the default and often the only one.
outputMode(self, channel = 0)
Returns one of the modes, which indicates the format of the entity returned by fetch. The optional channel parameter indicates which output to report for. Most nodes do not bother with multiple outputs.
outputName(self, channel = 0)
Returns a string indicating what the output in question is - arbitrary and for human consumption only.
source(self, toChannel, video, videoChannel = 0)
Sets a video as input to the video object, in channel toChannel, optionally including which channel to extract from video as videoChannel.
width(self)
Returns the width of the video.
A background subtraction algorithm, implemented as a video reader interface that eats another video reader. Uses a per-pixel mixture model, specifically a Dirichlet process over Gaussian distributions - it uses Gibbs sampling with the Gaussians collapsed out. It is an implementation of the paper 'Background Subtraction with Dirichlet Processes' by Tom SF Haines & Tao Xiang, ECCV 2012. (A minimal wiring sketch follows the method list below.)
dependencies(self)
Returns a list of video objects that this video object is dependent on - the nextFrame method must be called on all of these prior to it being called on this, otherwise strange stuff will happen. The list is allowed to include duplicates.
fetch(self, channel = 0)
Returns the requested channel, as a numpy array. You cannot assume that the object is persistent, i.e. it might be the same object returned each time, but with different contents. The optional channel parameter indicates which output to get. fetch can be called multiple times for a channel between each call to nextFrame.
fps(self)
Returns the frames per second of the video, as a floating point value.
frameCount(self)
Returns the number of times you can call nextFrame before it starts returning None.
height(self)
Returns the height of the video.
inputCount(self)
Returns the number of inputs.
inputMode(self, channel = 0)
Returns the required mode for the given input.
inputName(self, channel = 0)
Returns a human readable description of the given input.
nextFrame(self)
Moves to the next frame, returning True if there is now a set of next frames that can be extracted using the fetch command, and False if not. Typically False means we are out of data, as an error would lead to an exception being thrown. Must not be called until the object is set up - i.e. all inputs have been set, and any other object-specific actions performed.
outputCount(self)
Returns the number of outputs - a video object is allowed to have multiple outputs. The output in position 0 is the default and often the only one.
outputMode(self, channel = 0)
Returns one of the modes, which indicates the format of the entity returned by fetch. The optional channel parameter indicates which output to report for. Most nodes do not bother with multiple outputs.
outputName(self, channel = 0)
Returns a string indicating what the output in question is - arbitrary and for human consumption only.
setAutoPrior(self, mult = 1.0)
Sets the automatic prior, where it updates the prior based on the distribution of the current frame. mult is how much to multiply the variance by, to soften the distribution a bit. Call it with mult set to None to disable it - by default it is on with a value of 1.
setBP(self, threshold = 0.6, half_life = 0.9, iters = 6)
Sets the main parameters for the belief propagation step, the first of which is the probability threshold above which a pixel is considered foreground. Note that it is converted into a prior, and that due to the regularisation terms this is anything but hard. half_life is the colourmetric distance at which the probability of two pixels being different reaches 50:50, whilst iters is how many iterations to run, and is used only for controlling the computational cost. This BP post-processing step can be switched off by setting iters to 0, though the threshold will still be used to binarise the probabilities.
setCompCount(self, count_mass = 0.1)
Sets the amount of probability mass used when calculating the component count - required because using all of the probability mass would give you infinity, which is not what you are really after.
setConComp(self, threshold = 0)
Allows you to run connected components after the BP step. You provide the number of pixels below which a foreground segment is terminated. By default it is set to 0, i.e. off.
setDP(self, comp = 8, conc = 0.01, cap = 128.0, weight = 1.0)
Sets the parameters for the DP, specifically the number of components, the concentration parameter and the certainty cap, which limits how much weight can be found in the DP. Also a multiplier for the weight of a sample when combined. Because the concentration is frame rate dependent it is actually set assuming 30fps, and converted to whatever the video actually is. Same for the weight parameter.
setExtraBP(self, cert_limit = 0.005, change_limit = 1e-05, min_same_prob = 0.975, change_mult = 3.0)
Sets minor BP parameters that you are unlikely to want to touch - specifically limits on how certain it can be that a pixel is background/foreground and that two pixels are the same/different, a parameter to influence the distance scaling so that probabilities never drop below a certain value, plus a term to reweight their relative strengths. All except min_same_prob are set assuming a video resolution of 320x240, and adjusted to whatever the resolution actually is.
setHackDP(self, smooth = 0.0, sd_mult = 0.6, min_weight = 0.0001)
Sets some parameters that hack the DP, to help maintain stability. Specifically smooth is an assumption about noise in each sample, used to stop the distributions from ever getting too narrow, whilst min_weight is a minimum influence that a sample can have on the DP, to inhibit overconfidence. This last one is subject to frame rate adjustments - it is set under the assumption of 30 frames per second.
setLumOnly(self, lum_only = True)
If True then the algorithm will only use the luminance channel, if False all 3 channels. Set True if the input is a greyscale image, such as obtained from infra-red for instance. Must be set before the algorithm starts, defaults to colour.
setOnlyCL(self, minSize = 32, maxLayers = 8, itersPerLevel = 6)
Sets parameters that only affect the OpenCL version - minSize is the minimum size of a layer of the BP hierarchy, for both dimensions; maxLayers is the maximum number of layers of the BP hierarchy allowed; itersPerLevel is how many iterations are done at each level of the BP hierarchy, except for the last, which is run for the full iteration count.
setPrior(self, weight = 1.0, mean = [0.5, 0.5, 0.5], sd = [0.5, 0.5, 0.5])
Sets the parameters for the student-t distribution prior for the pixel values.
setRecParam(self)
Sets it to use recommended parameters - I basically fill in this method with whatever I have found to be a good compromise for many data sets (or, more accurately, the defaults for the methods it calls). These are OpenCL only - the BP iteration count is too low for the C version. Combine these with the colour conversion with lum_weight set to 0.7 and noise_floor set to 0.05. Note this is called automatically on initialisation, so typically you don't need to call it.
source(self, toChannel, video, videoChannel = 0)
Sets a video as input to the video object, in channel to Channel, optionally including which channel to extract from video as videoChannel.
width(self)
Returns the width of the video.
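A minimal sketch of setting the background subtracter up; BackSubDP is a placeholder class name, colour_biased stands for an upstream node providing the required colour model, and the setter values simply echo the defaults documented above (plus one example override):

    bs = BackSubDP()                 # placeholder name for this node
    bs.source(0, colour_biased)      # input must use the colour_bias colour model
    bs.setBP(threshold = 0.6, half_life = 0.9, iters = 6)   # defaults, per setBP
    bs.setDP(comp = 8, conc = 0.01, cap = 128.0)             # defaults, per setDP
    bs.setConComp(threshold = 64)    # example: drop foreground blobs under 64 pixels

    # nextFrame must already have been called on colour_biased (and its own
    # dependencies) before each call on bs.
    if bs.nextFrame():
        mask = bs.fetch(0)           # assuming channel 0 carries the foreground mask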
Optical flow using Lucas & Kanade - has a pyramid and only does one iteration per pyramid level by default. Uses a median filter for regularisation. Simple, not horrifically slow but obviously nothing amazing - basically the original algorithm for translation only.
dependencies(self)
Returns a list of video objects that this video object is dependent on - the nextFrame method must be called on all of these prior to it being called on this, otherwise strange stuff will happen. The list is allowed to include duplicates.
fetch(self, channel = 0)
Returns the requested channel, as a numpy array. You cannot assume that the object is persistent, i.e. it might be the same object returned each time, but with different contents. The optional channel parameter indicates which output to get. fetch can be called multiple times for a channel between each call to nextFrame.
fps(self)
Returns the frames per second of the video, as a floating point value.
frameCount(self)
Returns the number of times you can call nextFrame before it starts returning None.
height(self)
Returns the height of the video.
inputCount(self)
Returns the number of inputs.
inputMode(self, channel = 0)
Returns the required mode for the given input.
inputName(self, channel = 0)
Returns a human readable description of the given input.
nextFrame(self)
Moves to the next frame, returning True if there is now a set of next frames that can be extracted using the fetch command, and False if not. Typically False means we are out of data, as an error would lead to an exception being thrown. Must not be called until the object is set up - i.e. all inputs have been set, and any other object-specific actions performed.
outputCount(self)
Returns the number of outputs - a video object is allowed to have multiple outputs. The output in position 0 is the default and often the only one.
outputMode(self, channel = 0)
Returns one of the modes, which indicates the format of the entity returned by fetch. The optional channel parameter indicates which output to report for. Most nodes do not bother with multiple outputs.
outputName(self, channel = 0)
Returns a string indicating what the output in question is - arbitrary and for human consumption only.
source(self, toChannel, video, videoChannel = 0)
Sets a video as input to the video object, in channel toChannel, optionally including which channel to extract from video as videoChannel.
width(self)
Returns the width of the video.
Quantises a video stream into five words per location, specifically 4 directions and no motion. Divides the image into a grid of locations and takes a simple vote in each grid cell, with a threshold to decide the difference between moving and not. Has 2 inputs - a flow field from the optical flow and a mask from the background subtraction. Assumes that the grid size is a multiple of the dimensions. (A minimal wiring sketch follows the method list below.)
dependencies(self)
Returns a list of video objects that this video object is dependent on - the nextFrame method must be called on all of these prior to it being called on this, otherwise strange stuff will happen. The list is allowed to include duplicates.
fetch(self, channel = 0)
Returns the requested channel, as a numpy array. You cannot assume that the object is persistent, i.e. it might be the same object returned each time, but with different contents. The optional channel parameter indicates which output to get. fetch can be called multiple times for a channel between each call to nextFrame.
fps(self)
Returns the frames per second of the video, as a floating point value.
frameCount(self)
Returns the number of times you can call nextFrame before it starts returning None.
height(self)
Returns the height of the video.
inputCount(self)
Returns the number of inputs.
inputMode(self, channel = 0)
Returns the required mode for the given input.
inputName(self, channel = 0)
Returns a human readable description of the given input.
nextFrame(self)
Moves to the next frame, returning True if there is now a set of next frames that can be extracted using the fetch command, and False if not. Typically False means we are out of data, as an error would lead to an exception being thrown. Must not be called until the object is set up - i.e. all inputs have been set, and any other object-specific actions performed.
outputCount(self)
Returns the number of outputs - a video object is allowed to have multiple outputs. The output in position 0 is the default and often the only one.
outputMode(self, channel = 0)
Returns one of the modes, which indicates the format of the entity returned by fetch. The optional channel parameter indicates which output to report for. Most nodes do not bother with multiple outputs.
outputName(self, channel = 0)
Returns a string indicating what the output in question is - arbitrary and for human consumption only.
source(self, toChannel, video, videoChannel = 0)
Sets a video as input to the video object, in channel toChannel, optionally including which channel to extract from video as videoChannel.
suggestedColours(self)
None
width(self)
Returns the width of the video.
wordCount(self)
None
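The wiring mentioned above might look like the following sketch; LucasKanade and FiveWord are placeholder class names, and the channel assignments are assumptions - check inputName on the real node to confirm which input expects the flow field and which the mask:

    flow = LucasKanade()             # placeholder optical flow node
    flow.source(0, reader)
    words = FiveWord()               # placeholder word-quantisation node
    words.source(0, flow)            # assumed: input 0 = optical flow field
    words.source(1, backsub, 0)      # assumed: input 1 = mask from background subtraction

    # After nextFrame has been called on reader, backsub, flow and words (in
    # dependency order), fetch the per-cell word indices:
    grid = words.fetch(0)
    print(words.wordCount(), grid.shape)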
Simple class that zeros out all areas of a mask outside a given box, in terms of displacements from the edges. Good for removing an area from a video stream that we do not want analysed, such as the sky, or a body of water.
dependencies(self)
Returns a list of video objects that this video object is dependent on - the nextFrame method must be called on all of these prior to it being called on this, otherwise strange stuff will happen. The list is allowed to include duplicates.
fetch(self, channel = 0)
Returns the requested channel, as a numpy array. You cannot assume that the object is persistent, i.e. it might be the same object returned each time, but with different contents. The optional channel parameter indicates which output to get. fetch can be called multiple times for a channel between each call to nextFrame.
fps(self)
Returns the frames per second of the video, as a floating point value.
frameCount(self)
Returns the number of times you can call nextFrame before it starts returning None.
height(self)
Returns the height of the video.
inputCount(self)
Returns the number of inputs.
inputMode(self, channel = 0)
Returns the required mode for the given input.
inputName(self, channel = 0)
Returns a human readable description of the given input.
nextFrame(self)
Moves to the next frame, returning True if there is now a set of next frames that can be extracted using the fetch command, and False if not. Typically False means we are out of data, as an error would lead to an exception being thrown. Must not be called until the object is set up - i.e. all inputs have been set, and any other object-specific actions performed.
outputCount(self)
Returns the number of outputs - a video object is allowed to have multiple outputs. The output in position 0 is the default and often the only one.
outputMode(self, channel = 0)
Returns one of the modes, which indicates the format of the entity returned by fetch. The optional channel parameter indicates which output to report for. Most nodes do not bother with multiple outputs.
outputName(self, channel = 0)
Returns a string indicating what the output in question is - arbitrary and for human consumption only.
source(self, toChannel, video, videoChannel = 0)
Sets a video as input to the video object, in channel toChannel, optionally including which channel to extract from video as videoChannel.
width(self)
Returns the width of the video.
Takes as input an optical flow field and a mask - zeros out all optical flow vectors that fall outside the mask. Primarily exists to allow the results of an optical flow algorithm and a background subtraction algorithm to be combined to get a 'better', or at least cleaner, result.
dependencies(self)
Returns a list of video objects that this video object is dependent on - the nextFrame method must be called on all of these prior to it being called on this, otherwise strange stuff will happen. The list is allowed to include duplicates.
fetch(self, channel = 0)
Returns the requested channel, as a numpy array. You cannot assume that the object is persistent, i.e. it might be the same object returned each time, but with different contents. The optional channel parameter indicates which output to get. fetch can be called multiple times for a channel between each call to nextFrame.
fps(self)
Returns the frames per second of the video, as a floating point value.
frameCount(self)
Returns the number of times you can call nextFrame before it starts returning None.
height(self)
Returns the height of the video.
inputCount(self)
Returns the number of inputs.
inputMode(self, channel = 0)
Returns the required mode for the given input.
inputName(self, channel = 0)
Returns a human readable description of the given input.
nextFrame(self)
Moves to the next frame, returning True if there is now a set of next frames that can be extracted using the fetch command, and False if not. Typically False means we are out of data, as an error would lead to an exception being thrown. Must not be called until the object is set up - i.e. all inputs have been set, and any other object-specific actions performed.
outputCount(self)
Returns the number of outputs - a video object is allowed to have multiple outputs. The output in position 0 is the default and often the only one.
outputMode(self, channel = 0)
Returns one of the modes, which indicates the format of the entity returned by fetch. The optional channel parameter indicates which output to report for. Most nodes do not bother with multiple outputs.
outputName(self, channel = 0)
Returns a string indicating what the output in question is - arbitrary and for human consumption only.
source(self, toChannel, video, videoChannel = 0)
Sets a video as input to the video object, in channel toChannel, optionally including which channel to extract from video as videoChannel.
width(self)
Returns the width of the video.
This converts a colour video stream into a pair of masks - you provide a list of exact colours, indicating which are background and which are foreground. This provides the main mask; any area whose colour is not one of the known colours is assumed to be for ignoring, so a second mask is created that is only True where an exact match has been achieved. By 'exact' it operates under the assumption of 255 levels per channel.
dependencies(self)
Returns a list of video objects that this video object is dependent on - the nextFrame method must be called on all of these prior to it being called on this, otherwise strange stuff will happen. The list is allowed to include duplicates.
fetch(self, channel = 0)
Returns the requested channel, as a numpy array. You cannot assume that the object is persistent, i.e. it might be the same object returned each time, but with different contents. The optional channel parameter indicates which output to get. fetch can be called multiple times for a channel between each call to nextFrame.
fps(self)
Returns the frames per second of the video, as a floating point value.
frameCount(self)
Returns the number of times you can call nextFrame before it starts returning None.
height(self)
Returns the height of the video.
inputCount(self)
Returns the number of inputs.
inputMode(self, channel = 0)
Returns the required mode for the given input.
inputName(self, channel = 0)
Returns a human readable description of the given input.
nextFrame(self)
Moves to the next frame, returning True if there is now a set of next frames that can be extracted using the fetch command, and False if not. Typically False means we are out of data, as an error would lead to an exception being thrown. Must not be called until the object is set up - i.e. all inputs have been set, and any other object-specific actions performed.
outputCount(self)
Returns the number of outputs - a video object is allowed to have multiple outputs. The output in position 0 is the default and often the only one.
outputMode(self, channel = 0)
Returns one of the modes, which indicates the format of the entity returned by fetch. The optional channel parameter indicates which output to report for. Most nodes do not bother with multiple outputs.
outputName(self, channel = 0)
Returns a string indicating what the output in question is - arbitrary and for human consumption only.
source(self, toChannel, video, videoChannel = 0)
Sets a video as input to the video object, in channel toChannel, optionally including which channel to extract from video as videoChannel.
width(self)
Returns the width of the video.
Designed to generate the correct masking and validity information given the ground truth data of the 'Stuttgart Artificial Background Subtraction' dataset. Basically binarises the input, with black being background and every other colour being foreground, before using an erode on both channels, such that the pixels that change are marked as invalid for scoring. Outputs two masks - one indicating foreground/background, another indicating where it should be scored.
dependencies(self)
Returns a list of video objects that this video object is dependent on - the nextFrame method must be called on all of these prior to it being called on this, otherwise strange stuff will happen. The list is allowed to include duplicates.
fetch(self, channel = 0)
Returns the requested channel, as a numpy array. You cannot assume that the object is persistent, i.e. it might be the same object returned each time, but with different contents. The optional channel parameter indicates which output to get. fetch can be called multiple times for a channel between each call to nextFrame.
fps(self)
Returns the frames per second of the video, as a floating point value.
frameCount(self)
Returns the number of times you can call nextFrame before it starts returning None.
height(self)
Returns the height of the video.
inputCount(self)
Returns the number of inputs.
inputMode(self, channel = 0)
Returns the required mode for the given input.
inputName(self, channel = 0)
Returns a human readable description of the given input.
nextFrame(self)
Moves to the next frame, returning True if there is now a set of next frames that can be extracted using the fetch command, and False if not. Typically False means we are out of data, as an error would lead to an exception being thrown. Must not be called until the object is set up - i.e. all inputs have been set, and any other object-specific actions performed.
outputCount(self)
Returns the number of outputs - a video object is allowed to have multiple outputs. The output in position 0 is the default and often the only one.
outputMode(self, channel = 0)
Returns one of the modes, which indicates the format of the entity returned by fetch. The optional channel parameter indicates which output to report for. Most nodes do not bother with multiple outputs.
outputName(self, channel = 0)
Returns a string indicating what the output in question is - arbitrary and for human consumption only.
source(self, toChannel, video, videoChannel = 0)
Sets a video as input to the video object, in channel toChannel, optionally including which channel to extract from video as videoChannel.
width(self)
Returns the width of the video.
Calculates various statistics based on two input masks, one being an estimate, the other ground truth. Can also take a validity mask, which indicates the areas where scores are to be calculated. Statistics are stored per frame, and can be queried at any time whilst running or after running. They are basically a per-frame confusion matrix, but interfaces are provided to get the recall, the precision and the f-measure, as defined by the paper 'Evaluation of Background Subtraction Techniques for Video Surveillance' by S. Brutzer, B. Hoferlin and G. Heidemann, to give one example. Averages can also be obtained over ranges of frames. Be warned that the frame indices are zero based. (A short usage sketch follows the method list below.)
dependencies(self)
Returns a list of video objects that this video object is dependent on - the nextFrame method must be called on all of these prior to it being called on this, otherwise strange stuff will happen. The list is allowed to include duplicates.
fetch(self, channel = 0)
Returns the requested channel, as a numpy array. You cannot assume that the object is persistent, i.e. it might be the same object returned each time, but with different contents. The optional channel parameter indicates which output to get. fetch can be called multiple times for a channel between each call to nextFrame.
fps(self)
Returns the frames per second of the video, as a floating point value.
frameCount(self)
Returns the number of times you can call nextFrame before it starts returning None.
framesAvaliable(self)
Returns the number of frames available.
getConfusion(self, frame)
Returns the confusion matrix of a specific frame.
getConfusionTotal(self, start, end)
Given an inclusive frame range returns the sum of the confusion matrices over that range.
getFMeasure(self, frame)
Returns the f-measure, which is the harmonic mean of the recall and precision, i.e. 2 * recall * precision / (recall + precision).
getFMeasureAvg(self, start, end)
Given an inclusive frame range returns the average of the f-measure for that range.
getFMeasureTotal(self, start, end)
Returns the f-measure by summing the confusion matrix over the entire range and then calculating.
getPrecision(self, frame)
Given a frame number returns that frame's precision.
getRecall(self, frame)
Given a frame returns the recall for that frame.
height(self)
Returns the height of the video.
inputCount(self)
Returns the number of inputs.
inputMode(self, channel = 0)
Returns the required mode for the given input.
inputName(self, channel = 0)
Returns a human readable description of the given input.
nextFrame(self)
Moves to the next frame, returning True if there is now a set of next frames that can be extracted using the fetch command, and False if not. Typically False means we are out of data, as an error would lead to an exception being thrown. Must not be called until the object is set up - i.e. all inputs have been set, and any other object-specific actions performed.
outputCount(self)
Returns the number of outputs - a video object is allowed to have multiple outputs. The output in position 0 is the default and often the only one.
outputMode(self, channel = 0)
Returns one of the modes, which indicates the format of the entity returned by fetch. The optional channel parameter indicates which output to report for. Most nodes do not bother with multiple outputs.
outputName(self, channel = 0)
Returns a string indicating what the output in question is - arbitrary and for human consumption only.
source(self, toChannel, video, videoChannel = 0)
Sets a video as input to the video object, in channel toChannel, optionally including which channel to extract from video as videoChannel.
width(self)
Returns the width of the video.
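A short sketch of querying the statistics; MaskStats is a placeholder class name, and which input takes the estimate, the ground truth and the validity mask is an assumption - use inputName to check against the real node:

    stats = MaskStats()              # placeholder name for this node
    stats.source(0, backsub)         # assumed: input 0 = estimated mask
    stats.source(1, truth, 0)        # assumed: input 1 = ground truth mask
    stats.source(2, truth, 1)        # assumed: input 2 = validity mask

    # ... run the pipeline, calling nextFrame in dependency order ...

    last = stats.framesAvaliable() - 1                    # note the method's spelling
    print('f-measure, last frame:', stats.getFMeasure(last))
    print('f-measure, whole run: ', stats.getFMeasureTotal(0, last))   # inclusive range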
Calculates the stats required by the changedetection.net website for analysing a background subtraction algorithm, given the data in the format they provide.
dependencies(self)
Returns a list of video objects that this video object is dependent on - the nextFrame method must be called on all of these prior to it being called on this, otherwise strange stuff will happen. The list is allowed to include duplicates.
fetch(self, channel = 0)
Returns the requested channel, as a numpy array. You cannot assume that the object is persistent, i.e. it might be the same object returned each time, but with different contents. The optional channel parameter indicates which output to get. fetch can be called multiple times for a channel between each call to nextFrame.
fps(self)
Returns the frames per second of the video, as a floating point value.
frameCount(self)
Returns the number of times you can call nextFrame before it starts returning None.
getCon(self)
Returns the confusion matrix - [truth, guess], 0=background, 1=foreground.
getFMeasure(self)
None
getFalseNegRate(self)
None
getFalsePosRate(self)
None
getFalsePosRateShadow(self)
None
getPercentWrong(self)
None
getPrecision(self)
None
getRecall(self)
None
getSpecficity(self)
None
height(self)
Returns the height of the video.
inputCount(self)
Returns the number of inputs.
inputMode(self, channel = 0)
Returns the required mode for the given input.
inputName(self, channel = 0)
Returns a human readable description of the given input.
nextFrame(self)
Moves to the next frame, returning True if there is now a set of next frames that can be extracted using the fetch command, and False if not. Typically False means we are out of data, as an error would lead to an exception being thrown. Must not be called until the object is set up - i.e. all inputs have been set, and any other object-specific actions performed.
outputCount(self)
Returns the number of outputs - a video object is allowed to have multiple outputs. The output in position 0 is the default and often the only one.
outputMode(self, channel = 0)
Returns one of the modes, which indicates the format of the entity returned by fetch. The optional channel parameter indicates which output to report for. Most nodes do not bother with multiple outputs.
outputName(self, channel = 0)
Returns a string indicating what the output in question is - arbitrary and for human consumption only.
source(self, toChannel, video, videoChannel = 0)
Sets a video as input to the video object, in channel toChannel, optionally including which channel to extract from video as videoChannel.
width(self)
Returns the width of the video.
Given multiple MODE_RGB streams as input this combines them into a single output, arranged as a grid. Resizes appropriately and handles gaps. (A minimal wiring sketch follows the method list below.)
dependencies(self)
Returns a list of video objects that this video object is dependent on - the nextFrame method must be called on all of these prior to it being called on this, otherwise strange stuff will happen. The list is allowed to include duplicates.
fetch(self, channel = 0)
Returns the requested channel, as a numpy array. You cannot assume that the object is persistent, i.e. it might be the same object returned each time, but with different contents. The optional channel parameter indicates which output to get. fetch can be called multiple times for a channel between each call to nextFrame.
fps(self)
Returns the frames per second of the video, as a floating point value.
frameCount(self)
Returns the number of times you can call nextFrame before it starts returning None.
height(self)
Returns the height of the video.
inputCount(self)
Returns the number of inputs.
inputMode(self, channel = 0)
Returns the required mode for the given input.
inputName(self, channel = 0)
Returns a human readable description of the given input.
nextFrame(self)
Moves to the next frame, returning True if there is now a set of next frames that can be extracted using the fetch command, and False if not. Typically False means we are out of data, as an error would lead to an exception being thrown. Must not be called until the object is set up - i.e. all inputs have been set, and any other object-specific actions performed.
outputCount(self)
Returns the number of outputs - a video object is allowed to have multiple outputs. The output in position 0 is the default and often the only one.
outputMode(self, channel = 0)
Returns one of the modes, which indicates the format of the entity returned by fetch. The optional channel parameter indicates which output to report for. Most nodes do not bother with multiple outputs.
outputName(self, channel = 0)
Returns a string indicating what the output in question is - arbitrary and for human consumption only.
source(self, toChannel, video, videoChannel = 0)
Sets a video as input to the video object, in channel toChannel, optionally including which channel to extract from video as videoChannel.
width(self)
Returns the width of the video.
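A sketch of using the grid node for side-by-side visualisation; CombineGrid and ViewCV are placeholder class names, and mask_rgb/flow_rgb stand for nodes that have already rendered a mask and a flow field into MODE_RGB:

    grid = CombineGrid()             # placeholder name for this node
    grid.source(0, reader)           # original footage in cell 0
    grid.source(1, mask_rgb)         # rendered foreground mask in cell 1
    grid.source(2, flow_rgb)         # rendered optical flow in cell 2 (gaps are fine)

    view = ViewCV()                  # placeholder viewer node
    view.source(0, grid)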
Renders the absolute difference between two images, with a multiplicative constant to make small differences visible.
dependencies(self)
Returns a list of video objects that this video object is dependent on - the nextFrame method must be called on all of these prior to it being called on this, otherwise strange stuff will happen. The list is allowed to include duplicates.
fetch(self, channel = 0)
Returns the requested channel, as a numpy array. You cannot assume that the object is persistent, i.e. it might be the same object returned each time, but with different contents. The optional channel parameter indicates which output to get. fetch can be called multiple times for a channel between each call to nextFrame.
fps(self)
Returns the frames per second of the video, as a floating point value.
frameCount(self)
Returns the number of times you can call nextFrame before it starts returning None.
height(self)
Returns the height of the video.
inputCount(self)
Returns the number of inputs.
inputMode(self, channel = 0)
Returns the required mode for the given input.
inputName(self, channel = 0)
Returns a human readable description of the given input.
nextFrame(self)
Moves to the next frame, returning True if there is now a set of next frames that can be extracted using the fetch command, and False if not. Typically False means we are out of data, as an error would lead to an exception being thrown. Must not be called until the object is set up - i.e. all inputs have been set, and any other object-specific actions performed.
outputCount(self)
Returns the number of outputs - a video object is allowed to have multiple outputs. The output in position 0 is the default and often the only one.
outputMode(self, channel = 0)
Returns one of the modes, which indicates the format of the entity returned by fetch. The optional channel parameter indicates which output to report for. Most nodes do not bother with multiple outputs.
outputName(self, channel = 0)
Returns a string indicating what the output in question is - arbitrary and for human consumption only.
source(self, toChannel, video, videoChannel = 0)
Sets a video as input to the video object, in channel toChannel, optionally including which channel to extract from video as videoChannel.
width(self)
Returns the width of the video.
Renders a MODE_FLOW into a MODE_RGB, using the standard conversion with HSV space, where H becomes direction and S speed, such that it is white when there is no motion and gets more colour as speed increases, whilst the actual colour represents the direction of motion. There are two speed representations - linear and asymptotic.
dependencies(self)
Returns a list of video objects that this video object is dependent on - the nextFrame method must be called on all of these prior to it being called on this, otherwise strange stuff will happen. The list is allowed to include duplicates.
fetch(self, channel = 0)
Returns the requested channel, as a numpy array. You cannot assume that the object is persistent, i.e. it might be the same object returned each time, but with different contents. The optional channel parameter indicates which output to get. fetch can be called multiple times for a channel between each call to nextFrame.
fps(self)
Returns the frames per second of the video, as a floating point value.
frameCount(self)
Returns the number of times you can call nextFrame before it starts returning None.
height(self)
Returns the height of the video.
inputCount(self)
Returns the number of inputs.
inputMode(self, channel = 0)
Returns the required mode for the given input.
inputName(self, channel = 0)
Returns a human readable description of the given input.
nextFrame(self)
Moves to the next frame, returning True if there is now a set of next frames that can be extracted using the fetch command, and False if not. Typically False means we are out of data, as an error would lead to an exception being thrown. Must not be called until the object is set up - i.e. all inputs have been set, and any other object-specific actions performed.
outputCount(self)
Returns the number of outputs - a video object is allowed to have multiple outputs. The output in position 0 is the default and often the only one.
outputMode(self, channel = 0)
Returns one of the modes, which indicates the format of the entity returned by fetch. The optional channel parameter indicates which output to report for. Most nodes do not bother with multiple outputs.
outputName(self, channel = 0)
Returns a string indicating what the output in question is - arbitrary and for human consumption only.
source(self, toChannel, video, videoChannel = 0)
Sets a video as input to the video object, in channel toChannel, optionally including which channel to extract from video as videoChannel.
width(self)
Returns the width of the video.
This class converts a MODE_MASK into a MODE_RGB, with various effects. This includes combining an image and setting a background colour.
dependencies(self)
Returns a list of video objects that this video object is dependent on - the nextFrame method must be called on all of these prior to it being called on this, otherwise strange stuff will happen. The list is allowed to include duplicates.
fetch(self, channel = 0)
Returns the requested channel, as a numpy array. You cannot assume that the object is persistent, i.e. it might be the same object returned each time, but with different contents. The optional channel parameter indicates which output to get. fetch can be called multiple times for a channel between each call to nextFrame.
fps(self)
Returns the frames per second of the video, as a floating point value.
frameCount(self)
Returns the number of times you can call nextFrame before it starts returning None.
height(self)
Returns the height of the video.
inputCount(self)
Returns the number of inputs.
inputMode(self, channel = 0)
Returns the required mode for the given input.
inputName(self, channel = 0)
Returns a human readable description of the given input.
nextFrame(self)
Moves to the next frame, returning True if there is now a set of next frames that can be extracted using the fetch command, and False if not. Typically False means we are out of data, as an error would lead to an exception being thrown. Must not be called until the object is set up - i.e. all inputs have been set, and any other object-specific actions performed.
outputCount(self)
Returns the number of outputs - a video object is allowed to have multiple outputs. The output in position 0 is the default and often the only one.
outputMode(self, channel = 0)
Returns one of the modes, which indicates the format of the entity returned by fetch. The optional channel parameter indicates which output to report for. Most nodes do not bother with multiple outputs.
outputName(self, channel = 0)
Returns a string indicating what the output in question is - arbitrary and for human consumption only.
source(self, toChannel, video, videoChannel = 0)
Sets a video as input to the video object, in channel toChannel, optionally including which channel to extract from video as videoChannel.
width(self)
Returns the width of the video.
Renders a grid of words, using a provided colour scheme; the 'no word' state is automatically rendered as black.
dependencies(self)
Returns a list of video objects that this video object is dependent on - the nextFrame method must be called on all of these prior to it being called on this, otherwise strange stuff will happen. The list is allowed to include duplicates.
fetch(self, channel = 0)
Returns the requested channel, as a numpy array. You cannot assume that the object is persistent, i.e. it might be the same object returned each time, but with different contents. The optional channel parameter indicates which output to get. fetch can be called multiple times for a channel between each call to nextFrame.
fps(self)
Returns the frames per second of the video, as a floating point value.
frameCount(self)
Returns the number of times you can call nextFrame before it starts returning None.
height(self)
Returns the height of the video.
inputCount(self)
Returns the number of inputs.
inputMode(self, channel = 0)
Returns the required mode for the given input.
inputName(self, channel = 0)
Returns a human readable description of the given input.
nextFrame(self)
Moves to the next frame, returning True if there is now a set of next frames that can be extracted using the fetch command, and False if not. Typically False means we are out of data, as an error would lead to an exception being thrown. Must not be called until the object is set up - i.e. all inputs have been set, and any other object-specific actions performed.
outputCount(self)
Returns the number of outputs - a video object is allowed to have multiple outputs. The output in position 0 is the default and often the only one.
outputMode(self, channel = 0)
Returns one of the modes, which indicates the format of the entity returned by fetch. The optional channel parameter indicates which output to report for. Most nodes do not bother with multiple outputs.
outputName(self, channel = 0)
Returns a string indicating what the output in question is - arbitrary and for human consumption only.
source(self, toChannel, video, videoChannel = 0)
Sets a video as input to the video object, in channel toChannel, optionally including which channel to extract from video as videoChannel.
width(self)
Returns the width of the video.