filterlist1.js
1 lines (1 loc) · 686 KB
var filterList = [{"filtergroup":["acompressor"],"info":"\nA compressor is mainly used to reduce the dynamic range of a signal.\nModern music in particular is often compressed at a high ratio to\nimprove the overall loudness. This is done to grab the listener's\nattention, \"fatten\" the sound and bring more \"power\" to the track.\nIf a signal is compressed too much, it may sound dull or \"dead\"\nafterwards, or it may start to \"pump\" (which could be a powerful effect\nbut can also destroy a track completely).\nThe right compression is the key to a professional sound and is\nthe high art of mixing and mastering. Because of its complex settings\nit may take a long time to get the right feeling for this kind of effect.\n\nCompression is done by detecting the volume above a chosen level\n@code{threshold} and dividing it by the factor set with @code{ratio}.\nSo if you set the threshold to -12dB and your signal reaches -6dB, a ratio\nof 2:1 will result in a signal at -9dB. Because an exact manipulation of\nthe signal would distort the waveform, the reduction can be\nlevelled over time. This is done by setting \"Attack\" and \"Release\".\n@code{attack} determines how long the signal has to rise above the threshold\nbefore any reduction occurs, and @code{release} sets the time the signal\nhas to fall below the threshold before the reduction is eased again.\nSignals shorter than the chosen attack time will be left untouched.\nThe overall reduction of the signal can be made up for afterwards with the\n@code{makeup} setting. So compressing the peaks of a signal by about 6dB and\nraising the makeup by the same amount results in a signal twice as loud as the\nsource. For a softer entry into compression, the @code{knee} flattens the\nhard edge at the threshold over the range of the chosen decibels.\n\n","options":[{"names":["level_in"],"info":"Set input gain. Default is 1. 
Range is between 0.015625 and 64.\n\n"},{"names":["mode"],"info":"Set mode of compressor operation. Can be @code{upward} or @code{downward}.\nDefault is @code{downward}.\n\n"},{"names":["threshold"],"info":"If a signal of the stream rises above this level, it will affect the gain\nreduction.\nBy default it is 0.125. Range is between 0.00097563 and 1.\n\n"},{"names":["ratio"],"info":"Set a ratio by which the signal is reduced. 1:2 means that if the level\nrose 4dB above the threshold, it will be only 2dB above after the reduction.\nDefault is 2. Range is between 1 and 20.\n\n"},{"names":["attack"],"info":"Amount of milliseconds the signal has to rise above the threshold before gain\nreduction starts. Default is 20. Range is between 0.01 and 2000.\n\n"},{"names":["release"],"info":"Amount of milliseconds the signal has to fall below the threshold before\nreduction is decreased again. Default is 250. Range is between 0.01 and 9000.\n\n"},{"names":["makeup"],"info":"Set the amount by which the signal will be amplified after processing.\nDefault is 1. Range is from 1 to 64.\n\n"},{"names":["knee"],"info":"Curve the sharp knee around the threshold to enter gain reduction more softly.\nDefault is 2.82843. Range is between 1 and 8.\n\n"},{"names":["link"],"info":"Choose whether the @code{average} level across all channels of the input stream\nor the louder (@code{maximum}) channel of the input stream affects the\nreduction. Default is @code{average}.\n\n"},{"names":["detection"],"info":"Choose whether the exact signal is taken (@code{peak}) or an RMS one\n(@code{rms}). Default is @code{rms}, which is usually smoother.\n\n"},{"names":["mix"],"info":"How much of the compressed signal to use in the output. Default is 1.\nRange is between 0 and 1.\n\n"}]},{"filtergroup":["acontrast"],"info":"Simple audio dynamic range compression/expansion filter.\n\n","options":[{"names":["contrast"],"info":"Set contrast. Default is 33. 
Allowed range is between 0 and 100.\n\n"}]},{"filtergroup":["acopy"],"info":"\nCopy the input audio source unchanged to the output. This is mainly useful for\ntesting purposes.\n\n","options":[]},{"filtergroup":["acrossfade"],"info":"\nApply a cross fade from one input audio stream to another.\nThe cross fade is applied for the specified duration near the end of the\nfirst stream.\n\n","options":[{"names":["nb_samples","ns"],"info":"Specify the number of samples for which the cross fade effect has to last.\nAt the end of the cross fade effect the first input audio will be completely\nsilent. Default is 44100.\n\n"},{"names":["duration","d"],"info":"Specify the duration of the cross fade effect. See\n@ref{time duration syntax,,the Time duration section in the ffmpeg-utils(1) manual,ffmpeg-utils}\nfor the accepted syntax.\nBy default the duration is determined by @var{nb_samples}.\nIf set, this option is used instead of @var{nb_samples}.\n\n"},{"names":["overlap","o"],"info":"Whether the end of the first stream should overlap with the start of the\nsecond stream. 
Default is enabled.\n\n"},{"names":["curve1"],"info":"Set the curve for the cross fade transition of the first stream.\n\n"},{"names":["curve2"],"info":"Set the curve for the cross fade transition of the second stream.\n\nFor a description of available curve types see the @ref{afade} filter description.\n\n","examples":"@subsection Examples\n\n@itemize\n@item\nCross fade from one input to another:\n@example\nffmpeg -i first.flac -i second.flac -filter_complex acrossfade=d=10:c1=exp:c2=exp output.flac\n@end example\n\n@item\nCross fade from one input to another but without overlapping:\n@example\nffmpeg -i first.flac -i second.flac -filter_complex acrossfade=d=10:o=0:c1=exp:c2=exp output.flac\n@end example\n@end itemize\n\n"}]},{"filtergroup":["acrossover"],"info":"Split the audio stream into several bands.\n\nThis filter splits the audio stream into two or more frequency ranges.\nSumming all streams back will give a flat output.\n\n","options":[{"names":["split"],"info":"Set split frequencies. Those must be positive and increasing.\n\n"},{"names":["order"],"info":"Set filter order, can be @var{2nd}, @var{4th} or @var{8th}.\nDefault is @var{4th}.\n\n"}]},{"filtergroup":["acrusher"],"info":"\nReduce audio bit resolution.\n\nThis filter is a bit crusher with enhanced functionality. A bit crusher\nis used to audibly reduce the number of bits an audio signal is sampled\nwith. This doesn't change the bit depth at all; it just produces the\neffect. 
Material reduced in bit depth sounds harsher and more \"digital\".\nThis filter can even round to continuous values instead of discrete\nbit depths.\nAdditionally, it has a DC offset which results in different crushing of\nthe lower and the upper half of the signal.\nAn anti-aliasing setting can produce \"softer\" crushing sounds.\n\nAnother feature of this filter is the logarithmic mode.\nThis setting switches from linear distances between bits to logarithmic ones.\nThe result is a much more \"natural\" sounding crusher which, for example,\ndoesn't gate low signals. The human ear has a logarithmic perception,\nso this kind of crushing is much more pleasant.\nLogarithmic crushing can also be anti-aliased.\n\n","options":[{"names":["level_in"],"info":"Set level in.\n\n"},{"names":["level_out"],"info":"Set level out.\n\n"},{"names":["bits"],"info":"Set bit reduction.\n\n"},{"names":["mix"],"info":"Set mixing amount.\n\n"},{"names":["mode"],"info":"Can be linear: @code{lin} or logarithmic: @code{log}.\n\n"},{"names":["dc"],"info":"Set DC.\n\n"},{"names":["aa"],"info":"Set anti-aliasing.\n\n"},{"names":["samples"],"info":"Set sample reduction.\n\n"},{"names":["lfo"],"info":"Enable LFO. By default disabled.\n\n"},{"names":["lforange"],"info":"Set LFO range.\n\n"},{"names":["lforate"],"info":"Set LFO rate.\n\n"}]},{"filtergroup":["acue"],"info":"\nDelay audio filtering until a given wallclock timestamp. See the @ref{cue}\nfilter.\n\n","options":[]},{"filtergroup":["adeclick"],"info":"Remove impulsive noise from input audio.\n\nSamples detected as impulsive noise are replaced by interpolated samples using\nautoregressive modelling.\n\n","options":[{"names":["w"],"info":"Set window size, in milliseconds. Allowed range is from @code{10} to\n@code{100}. Default value is @code{55} milliseconds.\nThis sets the size of the window which will be processed at once.\n\n"},{"names":["o"],"info":"Set window overlap, in percentage of window size. 
Allowed range is from\n@code{50} to @code{95}. Default value is @code{75} percent.\nSetting this to a very high value increases impulsive noise removal but makes\nthe whole process much slower.\n\n"},{"names":["a"],"info":"Set autoregression order, in percentage of window size. Allowed range is from\n@code{0} to @code{25}. Default value is @code{2} percent. This option also\ncontrols the quality of interpolated samples, using neighbouring good samples.\n\n"},{"names":["t"],"info":"Set threshold value. Allowed range is from @code{1} to @code{100}.\nDefault value is @code{2}.\nThis controls the strength of the impulsive noise which is going to be removed.\nThe lower the value, the more samples will be detected as impulsive noise.\n\n"},{"names":["b"],"info":"Set burst fusion, in percentage of window size. Allowed range is @code{0} to\n@code{10}. Default value is @code{2}.\nIf any two samples detected as noise are spaced less than this value apart,\nthen any sample between them will also be detected as noise.\n\n"},{"names":["m"],"info":"Set overlap method.\n\nIt accepts the following values:\n@item a\nSelect overlap-add method. Even non-interpolated samples are slightly\nchanged with this method.\n\n@item s\nSelect overlap-save method. Non-interpolated samples remain unchanged.\n\nDefault value is @code{a}.\n\n"}]},{"filtergroup":["adeclip"],"info":"Remove clipped samples from input audio.\n\nSamples detected as clipped are replaced by interpolated samples using\nautoregressive modelling.\n\n","options":[{"names":["w"],"info":"Set window size, in milliseconds. Allowed range is from @code{10} to @code{100}.\nDefault value is @code{55} milliseconds.\nThis sets the size of the window which will be processed at once.\n\n"},{"names":["o"],"info":"Set window overlap, in percentage of window size. Allowed range is from @code{50}\nto @code{95}. Default value is @code{75} percent.\n\n"},{"names":["a"],"info":"Set autoregression order, in percentage of window size. 
Allowed range is from\n@code{0} to @code{25}. Default value is @code{8} percent. This option also controls\nthe quality of interpolated samples, using neighbouring good samples.\n\n"},{"names":["t"],"info":"Set threshold value. Allowed range is from @code{1} to @code{100}.\nDefault value is @code{10}. Higher values make clip detection less aggressive.\n\n"},{"names":["n"],"info":"Set size of histogram used to detect clips. Allowed range is from @code{100} to @code{9999}.\nDefault value is @code{1000}. Higher values make clip detection less aggressive.\n\n"},{"names":["m"],"info":"Set overlap method.\n\nIt accepts the following values:\n@item a\nSelect overlap-add method. Even non-interpolated samples are slightly changed\nwith this method.\n\n@item s\nSelect overlap-save method. Non-interpolated samples remain unchanged.\n\nDefault value is @code{a}.\n\n"}]},{"filtergroup":["adelay"],"info":"\nDelay one or more audio channels.\n\nSamples in delayed channels are filled with silence.\n\nThe filter accepts the following option:\n\n","options":[{"names":["delays"],"info":"Set the list of delays in milliseconds for each channel, separated by '|'.\nUnused delays will be silently ignored. If the number of given delays is\nsmaller than the number of channels, the remaining channels will not be delayed.\nIf you want to delay by an exact number of samples, append 'S' to the number.\nIf you want instead to delay in seconds, append 's' to the number.\n\n"},{"names":["all"],"info":"Use the last set delay for all remaining channels. 
Disabled by default.\nIf enabled, this option changes how the @code{delays} option is interpreted.\n\n","examples":"@subsection Examples\n\n@itemize\n@item\nDelay the first channel by 1.5 seconds, the third channel by 0.5 seconds and leave\nthe second channel (and any other channels that may be present) unchanged.\n@example\nadelay=1500|0|500\n@end example\n\n@item\nDelay the second channel by 500 samples, the third channel by 700 samples and leave\nthe first channel (and any other channels that may be present) unchanged.\n@example\nadelay=0|500S|700S\n@end example\n\n@item\nDelay all channels by the same number of samples:\n@example\nadelay=delays=64S:all=1\n@end example\n@end itemize\n\n"}]},{"filtergroup":["aderivative","aintegral"],"info":"\nCompute the derivative/integral of an audio stream.\n\nApplying both filters one after another produces the original audio.\n\n","options":[]},{"filtergroup":["aecho"],"info":"\nApply echoing to the input audio.\n\nEchoes are reflected sound and can occur naturally amongst mountains\n(and sometimes large buildings) when talking or shouting; digital echo\neffects emulate this behaviour and are often used to help fill out the\nsound of a single instrument or vocal. The time difference between the\noriginal signal and the reflection is the @code{delay}, and the\nloudness of the reflected signal is the @code{decay}.\nMultiple echoes can have different delays and decays.\n\nA description of the accepted parameters follows.\n\n","options":[{"names":["in_gain"],"info":"Set input gain of the reflected signal. Default is @code{0.6}.\n\n"},{"names":["out_gain"],"info":"Set output gain of the reflected signal. Default is @code{0.3}.\n\n"},{"names":["delays"],"info":"Set the list of time intervals in milliseconds between the original signal and\nthe reflections, separated by '|'. 
Allowed range for each @code{delay} is @code{(0 - 90000.0]}.\nDefault is @code{1000}.\n\n"},{"names":["decays"],"info":"Set the list of loudness values of the reflected signals, separated by '|'.\nAllowed range for each @code{decay} is @code{(0 - 1.0]}.\nDefault is @code{0.5}.\n\n","examples":"@subsection Examples\n\n@itemize\n@item\nMake it sound as if there are twice as many instruments as are actually playing:\n@example\naecho=0.8:0.88:60:0.4\n@end example\n\n@item\nIf the delay is very short, it sounds like a (metallic) robot playing music:\n@example\naecho=0.8:0.88:6:0.4\n@end example\n\n@item\nA longer delay will sound like an open air concert in the mountains:\n@example\naecho=0.8:0.9:1000:0.3\n@end example\n\n@item\nSame as above but with one more mountain:\n@example\naecho=0.8:0.9:1000|1800:0.3|0.25\n@end example\n@end itemize\n\n"}]},{"filtergroup":["aemphasis"],"info":"The audio emphasis filter creates or restores material directly taken from LPs or\nemphasized CDs using different filter curves. E.g. to store music on vinyl, the\nsignal first has to be altered by a filter to even out the disadvantages of\nthis recording medium.\nOnce the material is played back, the inverse filter has to be applied to\nundo the distortion of the frequency response.\n\n","options":[{"names":["level_in"],"info":"Set input gain.\n\n"},{"names":["level_out"],"info":"Set output gain.\n\n"},{"names":["mode"],"info":"Set filter mode. For restoring material use @code{reproduction} mode, otherwise\nuse @code{production} mode. Default is @code{reproduction} mode.\n\n"},{"names":["type"],"info":"Set filter type. Selects the medium. 
Can be one of the following:\n\n@item col\nselect Columbia.\n@item emi\nselect EMI.\n@item bsi\nselect BSI (78RPM).\n@item riaa\nselect RIAA.\n@item cd\nselect Compact Disc (CD).\n@item 50fm\nselect 50µs (FM).\n@item 75fm\nselect 75µs (FM).\n@item 50kf\nselect 50µs (FM-KF).\n@item 75kf\nselect 75µs (FM-KF).\n\n"}]},{"filtergroup":["aeval"],"info":"\nModify an audio signal according to the specified expressions.\n\nThis filter accepts one or more expressions (one for each channel),\nwhich are evaluated and used to modify a corresponding audio signal.\n\nIt accepts the following parameters:\n\n","options":[{"names":["exprs"],"info":"Set the '|'-separated expressions list for each separate channel. If\nthe number of input channels is greater than the number of\nexpressions, the last specified expression is used for the remaining\noutput channels.\n\n"},{"names":["channel_layout","c"],"info":"Set output channel layout. If not specified, the channel layout is\nspecified by the number of expressions. If set to @samp{same}, it will\nuse by default the same input channel layout.\n\nEach expression in @var{exprs} can contain the following constants and functions:\n\n@item ch\nchannel number of the current expression\n\n@item n\nnumber of the evaluated sample, starting from 0\n\n@item s\nsample rate\n\n@item t\ntime of the evaluated sample expressed in seconds\n\n@item nb_in_channels\n@item nb_out_channels\ninput and output number of channels\n\n@item val(CH)\nthe value of input channel with number @var{CH}\n\nNote: this filter is slow. 
For faster processing you should use a\ndedicated filter.\n\n","examples":"@subsection Examples\n\n@itemize\n@item\nHalf volume:\n@example\naeval=val(ch)/2:c=same\n@end example\n\n@item\nInvert phase of the second channel:\n@example\naeval=val(0)|-val(1)\n@end example\n@end itemize\n\n@anchor{afade}\n"}]},{"filtergroup":["afade"],"info":"\nApply fade-in/out effect to input audio.\n\nA description of the accepted parameters follows.\n\n","options":[{"names":["type","t"],"info":"Specify the effect type, can be either @code{in} for fade-in, or\n@code{out} for a fade-out effect. Default is @code{in}.\n\n"},{"names":["start_sample","ss"],"info":"Specify the number of the start sample for starting to apply the fade\neffect. Default is 0.\n\n"},{"names":["nb_samples","ns"],"info":"Specify the number of samples for which the fade effect has to last. At\nthe end of the fade-in effect the output audio will have the same\nvolume as the input audio, at the end of the fade-out transition\nthe output audio will be silence. Default is 44100.\n\n"},{"names":["start_time","st"],"info":"Specify the start time of the fade effect. Default is 0.\nThe value must be specified as a time duration; see\n@ref{time duration syntax,,the Time duration section in the ffmpeg-utils(1) manual,ffmpeg-utils}\nfor the accepted syntax.\nIf set this option is used instead of @var{start_sample}.\n\n"},{"names":["duration","d"],"info":"Specify the duration of the fade effect. 
See\n@ref{time duration syntax,,the Time duration section in the ffmpeg-utils(1) manual,ffmpeg-utils}\nfor the accepted syntax.\nAt the end of the fade-in effect the output audio will have the same\nvolume as the input audio, at the end of the fade-out transition\nthe output audio will be silence.\nBy default the duration is determined by @var{nb_samples}.\nIf set this option is used instead of @var{nb_samples}.\n\n"},{"names":["curve"],"info":"Set curve for fade transition.\n\nIt accepts the following values:\n@item tri\nselect triangular, linear slope (default)\n@item qsin\nselect quarter of sine wave\n@item hsin\nselect half of sine wave\n@item esin\nselect exponential sine wave\n@item log\nselect logarithmic\n@item ipar\nselect inverted parabola\n@item qua\nselect quadratic\n@item cub\nselect cubic\n@item squ\nselect square root\n@item cbr\nselect cubic root\n@item par\nselect parabola\n@item exp\nselect exponential\n@item iqsin\nselect inverted quarter of sine wave\n@item ihsin\nselect inverted half of sine wave\n@item dese\nselect double-exponential seat\n@item desi\nselect double-exponential sigmoid\n@item losi\nselect logistic sigmoid\n@item nofade\nno fade applied\n\n","examples":"@subsection Examples\n\n@itemize\n@item\nFade in first 15 seconds of audio:\n@example\nafade=t=in:ss=0:d=15\n@end example\n\n@item\nFade out last 25 seconds of a 900 seconds audio:\n@example\nafade=t=out:st=875:d=25\n@end example\n@end itemize\n\n"}]},{"filtergroup":["afftdn"],"info":"Denoise audio samples with FFT.\n\nA description of the accepted parameters follows.\n\n","options":[{"names":["nr"],"info":"Set the noise reduction in dB, allowed range is 0.01 to 97.\nDefault value is 12 dB.\n\n"},{"names":["nf"],"info":"Set the noise floor in dB, allowed range is -80 to -20.\nDefault value is -50 dB.\n\n"},{"names":["nt"],"info":"Set the noise type.\n\nIt accepts the following values:\n@item w\nSelect white noise.\n\n@item v\nSelect vinyl noise.\n\n@item s\nSelect shellac 
noise.\n\n@item c\nSelect custom noise, defined in @code{bn} option.\n\nDefault value is white noise.\n\n"},{"names":["bn"],"info":"Set custom band noise for every one of 15 bands.\nBands are separated by ' ' or '|'.\n\n"},{"names":["rf"],"info":"Set the residual floor in dB, allowed range is -80 to -20.\nDefault value is -38 dB.\n\n"},{"names":["tn"],"info":"Enable noise tracking. By default is disabled.\nWith this enabled, noise floor is automatically adjusted.\n\n"},{"names":["tr"],"info":"Enable residual tracking. By default is disabled.\n\n"},{"names":["om"],"info":"Set the output mode.\n\nIt accepts the following values:\n@item i\nPass input unchanged.\n\n@item o\nPass noise filtered out.\n\n@item n\nPass only noise.\n\nDefault value is @var{o}.\n\n@subsection Commands\n\nThis filter supports the following commands:\n@item sample_noise, sn\nStart or stop measuring noise profile.\nSyntax for the command is : \"start\" or \"stop\" string.\nAfter measuring noise profile is stopped it will be\nautomatically applied in filtering.\n\n@item noise_reduction, nr\nChange noise reduction. Argument is single float number.\nSyntax for the command is : \"@var{noise_reduction}\"\n\n@item noise_floor, nf\nChange noise floor. Argument is single float number.\nSyntax for the command is : \"@var{noise_floor}\"\n\n@item output_mode, om\nChange output mode operation.\nSyntax for the command is : \"i\", \"o\" or \"n\" string.\n\n"}]},{"filtergroup":["afftfilt"],"info":"Apply arbitrary expressions to samples in frequency domain.\n\n","options":[{"names":["real"],"info":"Set frequency domain real expression for each separate channel separated\nby '|'. Default is \"re\".\nIf the number of input channels is greater than the number of\nexpressions, the last specified expression is used for the remaining\noutput channels.\n\n"},{"names":["imag"],"info":"Set frequency domain imaginary expression for each separate channel\nseparated by '|'. 
Default is \"im\".\n\nEach expression in @var{real} and @var{imag} can contain the following\nconstants and functions:\n\n@item sr\nsample rate\n\n@item b\ncurrent frequency bin number\n\n@item nb\nnumber of available bins\n\n@item ch\nchannel number of the current expression\n\n@item chs\nnumber of channels\n\n@item pts\ncurrent frame pts\n\n@item re\ncurrent real part of frequency bin of current channel\n\n@item im\ncurrent imaginary part of frequency bin of current channel\n\n@item real(b, ch)\nReturn the value of real part of frequency bin at location (@var{bin},@var{channel})\n\n@item imag(b, ch)\nReturn the value of imaginary part of frequency bin at location (@var{bin},@var{channel})\n\n"},{"names":["win_size"],"info":"Set window size. Allowed range is from 16 to 131072.\nDefault is @code{4096}\n\n"},{"names":["win_func"],"info":"Set window function. Default is @code{hann}.\n\n"},{"names":["overlap"],"info":"Set window overlap. If set to 1, the recommended overlap for selected\nwindow function will be picked. 
Default is @code{0.75}.\n\n","examples":"@subsection Examples\n\n@itemize\n@item\nLeave almost only low frequencies in audio:\n@example\nafftfilt=\"'real=re * (1-clip((b/nb)*b,0,1))':imag='im * (1-clip((b/nb)*b,0,1))'\"\n@end example\n\n@item\nApply robotize effect:\n@example\nafftfilt=\"real='hypot(re,im)*sin(0)':imag='hypot(re,im)*cos(0)':win_size=512:overlap=0.75\"\n@end example\n\n@item\nApply whisper effect:\n@example\nafftfilt=\"real='hypot(re,im)*cos((random(0)*2-1)*2*3.14)':imag='hypot(re,im)*sin((random(1)*2-1)*2*3.14)':win_size=128:overlap=0.8\"\n@end example\n@end itemize\n\n@anchor{afir}\n"}]},{"filtergroup":["afir"],"info":"\nApply an arbitrary Frequency Impulse Response filter.\n\nThis filter is designed for applying long FIR filters,\nup to 60 seconds long.\n\nIt can be used as component for digital crossover filters,\nroom equalization, cross talk cancellation, wavefield synthesis,\nauralization, ambiophonics, ambisonics and spatialization.\n\nThis filter uses the second stream as FIR coefficients.\nIf the second stream holds a single channel, it will be used\nfor all input channels in the first stream, otherwise\nthe number of channels in the second stream must be same as\nthe number of channels in the first stream.\n\nIt accepts the following parameters:\n\n","options":[{"names":["dry"],"info":"Set dry gain. This sets input gain.\n\n"},{"names":["wet"],"info":"Set wet gain. This sets final output gain.\n\n"},{"names":["length"],"info":"Set Impulse Response filter length. Default is 1, which means whole IR is processed.\n\n"},{"names":["gtype"],"info":"Enable applying gain measured from power of IR.\n\nSet which approach to use for auto gain measurement.\n\n@item none\nDo not apply any gain.\n\n@item peak\nselect peak gain, very conservative approach. 
This is the default value.\n\n@item dc\nselect DC gain, limited application.\n\n@item gn\nselect gain-to-noise approach; this is the most popular one.\n\n"},{"names":["irgain"],"info":"Set gain to be applied to IR coefficients before filtering.\nAllowed range is 0 to 1. This gain is applied after any gain applied with the @var{gtype} option.\n\n"},{"names":["irfmt"],"info":"Set format of the IR stream. Can be @code{mono} or @code{input}.\nDefault is @code{input}.\n\n"},{"names":["maxir"],"info":"Set the maximum allowed Impulse Response filter duration in seconds. Default is 30 seconds.\nAllowed range is 0.1 to 60 seconds.\n\n"},{"names":["response"],"info":"Show the IR frequency response, magnitude (magenta), phase (green) and group delay (yellow) in an additional video stream.\nBy default it is disabled.\n\n"},{"names":["channel"],"info":"Set for which IR channel to display the frequency response. By default the first channel\nis displayed. This option is used only when @var{response} is enabled.\n\n"},{"names":["size"],"info":"Set video stream size. This option is used only when @var{response} is enabled.\n\n"},{"names":["rate"],"info":"Set video stream frame rate. This option is used only when @var{response} is enabled.\n\n"},{"names":["minp"],"info":"Set minimal partition size used for convolution. Default is @var{8192}.\nAllowed range is from @var{8} to @var{32768}.\nLower values decrease latency at the cost of higher CPU usage.\n\n"},{"names":["maxp"],"info":"Set maximal partition size used for convolution. Default is @var{8192}.\nAllowed range is from @var{8} to @var{32768}.\nLower values may increase CPU usage.\n\n","examples":"@subsection Examples\n\n@itemize\n@item\nApply reverb to a stream using a mono IR file as second input, complete command using ffmpeg:\n@example\nffmpeg -i input.wav -i middle_tunnel_1way_mono.wav -lavfi afir output.wav\n@end example\n@end itemize\n\n@anchor{aformat}\n"}]},{"filtergroup":["aformat"],"info":"\nSet output format constraints for the input audio. 
The framework will\nnegotiate the most appropriate format to minimize conversions.\n\nIt accepts the following parameters:\n","options":[{"names":["sample_fmts"],"info":"A '|'-separated list of requested sample formats.\n\n"},{"names":["sample_rates"],"info":"A '|'-separated list of requested sample rates.\n\n"},{"names":["channel_layouts"],"info":"A '|'-separated list of requested channel layouts.\n\nSee @ref{channel layout syntax,,the Channel Layout section in the ffmpeg-utils(1) manual,ffmpeg-utils}\nfor the required syntax.\n\nIf a parameter is omitted, all values are allowed.\n\nForce the output to either unsigned 8-bit or signed 16-bit stereo\n@example\naformat=sample_fmts=u8|s16:channel_layouts=stereo\n@end example\n\n"}]},{"filtergroup":["agate"],"info":"\nA gate is mainly used to reduce lower parts of a signal. This kind of signal\nprocessing reduces disturbing noise between useful signals.\n\nGating is done by detecting the volume below a chosen level @var{threshold}\nand dividing it by the factor set with @var{ratio}. The bottom of the noise\nfloor is set via @var{range}. Because an exact manipulation of the signal\nwould cause distortion of the waveform the reduction can be levelled over\ntime. This is done by setting @var{attack} and @var{release}.\n\n@var{attack} determines how long the signal has to fall below the threshold\nbefore any reduction will occur and @var{release} sets the time the signal\nhas to rise above the threshold to reduce the reduction again.\nShorter signals than the chosen attack time will be left untouched.\n\n","options":[{"names":["level_in"],"info":"Set input level before filtering.\nDefault is 1. Allowed range is from 0.015625 to 64.\n\n"},{"names":["mode"],"info":"Set the mode of operation. Can be @code{upward} or @code{downward}.\nDefault is @code{downward}. 
If set to @code{upward} mode, higher parts of signal\nwill be amplified, expanding dynamic range in upward direction.\nOtherwise, in case of @code{downward} lower parts of signal will be reduced.\n\n"},{"names":["range"],"info":"Set the level of gain reduction when the signal is below the threshold.\nDefault is 0.06125. Allowed range is from 0 to 1.\nSetting this to 0 disables reduction and then filter behaves like expander.\n\n"},{"names":["threshold"],"info":"If a signal rises above this level the gain reduction is released.\nDefault is 0.125. Allowed range is from 0 to 1.\n\n"},{"names":["ratio"],"info":"Set a ratio by which the signal is reduced.\nDefault is 2. Allowed range is from 1 to 9000.\n\n"},{"names":["attack"],"info":"Amount of milliseconds the signal has to rise above the threshold before gain\nreduction stops.\nDefault is 20 milliseconds. Allowed range is from 0.01 to 9000.\n\n"},{"names":["release"],"info":"Amount of milliseconds the signal has to fall below the threshold before the\nreduction is increased again. Default is 250 milliseconds.\nAllowed range is from 0.01 to 9000.\n\n"},{"names":["makeup"],"info":"Set amount of amplification of signal after processing.\nDefault is 1. Allowed range is from 1 to 64.\n\n"},{"names":["knee"],"info":"Curve the sharp knee around the threshold to enter gain reduction more softly.\nDefault is 2.828427125. Allowed range is from 1 to 8.\n\n"},{"names":["detection"],"info":"Choose if exact signal should be taken for detection or an RMS like one.\nDefault is @code{rms}. Can be @code{peak} or @code{rms}.\n\n"},{"names":["link"],"info":"Choose if the average level between all channels or the louder channel affects\nthe reduction.\nDefault is @code{average}. 
Can be @code{average} or @code{maximum}.\n\n"}]},{"filtergroup":["aiir"],"info":"\nApply an arbitrary Infinite Impulse Response filter.\n\nIt accepts the following parameters:\n\n","options":[{"names":["z"],"info":"Set numerator/zeros coefficients.\n\n"},{"names":["p"],"info":"Set denominator/poles coefficients.\n\n"},{"names":["k"],"info":"Set channels gains.\n\n"},{"names":["dry_gain"],"info":"Set input gain.\n\n"},{"names":["wet_gain"],"info":"Set output gain.\n\n"},{"names":["f"],"info":"Set coefficients format.\n\n@item tf\ntransfer function\n@item zp\nZ-plane zeros/poles, cartesian (default)\n@item pr\nZ-plane zeros/poles, polar radians\n@item pd\nZ-plane zeros/poles, polar degrees\n\n"},{"names":["r"],"info":"Set kind of processing.\nCan be @code{d} - direct or @code{s} - serial cascading. Default is @code{s}.\n\n"},{"names":["e"],"info":"Set filtering precision.\n\n@item dbl\ndouble-precision floating-point (default)\n@item flt\nsingle-precision floating-point\n@item i32\n32-bit integers\n@item i16\n16-bit integers\n\n"},{"names":["mix"],"info":"How much to use filtered signal in output. Default is 1.\nRange is between 0 and 1.\n\n"},{"names":["response"],"info":"Show IR frequency response, magnitude(magenta), phase(green) and group delay(yellow) in additional video stream.\nBy default it is disabled.\n\n"},{"names":["channel"],"info":"Set for which IR channel to display frequency response. By default is first channel\ndisplayed. This option is used only when @var{response} is enabled.\n\n"},{"names":["size"],"info":"Set video stream size. This option is used only when @var{response} is enabled.\n\nCoefficients in @code{tf} format are separated by spaces and are in ascending\norder.\n\nCoefficients in @code{zp} format are separated by spaces and order of coefficients\ndoesn't matter. 
Coefficients in @code{zp} format are complex numbers with @var{i}\nimaginary unit.\n\nDifferent coefficients and gains can be provided for every channel, in such case\nuse '|' to separate coefficients or gains. Last provided coefficients will be\nused for all remaining channels.\n\n","examples":"@subsection Examples\n\n@itemize\n@item\nApply 2 pole elliptic notch at around 5000Hz for 48000 Hz sample rate:\n@example\naiir=k=1:z=7.957584807809675810E-1 -2.575128568908332300 3.674839853930788710 -2.57512875289799137 7.957586296317130880E-1:p=1 -2.86950072432325953 3.63022088054647218 -2.28075678147272232 6.361362326477423500E-1:f=tf:r=d\n@end example\n\n@item\nSame as above but in @code{zp} format:\n@example\naiir=k=0.79575848078096756:z=0.80918701+0.58773007i 0.80918701-0.58773007i 0.80884700+0.58784055i 0.80884700-0.58784055i:p=0.63892345+0.59951235i 0.63892345-0.59951235i 0.79582691+0.44198673i 0.79582691-0.44198673i:f=zp:r=s\n@end example\n@end itemize\n\n"}]},{"filtergroup":["alimiter"],"info":"\nThe limiter prevents an input signal from rising over a desired threshold.\nThis limiter uses lookahead technology to prevent your signal from distorting.\nIt means that there is a small delay after the signal is processed. Keep in mind\nthat the delay it produces is the attack time you set.\n\n","options":[{"names":["level_in"],"info":"Set input gain. Default is 1.\n\n"},{"names":["level_out"],"info":"Set output gain. Default is 1.\n\n"},{"names":["limit"],"info":"Don't let signals above this level pass the limiter. Default is 1.\n\n"},{"names":["attack"],"info":"The limiter will reach its attenuation level in this amount of time in\nmilliseconds. 
Default is 5 milliseconds.\n\n"},{"names":["release"],"info":"Come back from limiting to attenuation 1.0 in this amount of milliseconds.\nDefault is 50 milliseconds.\n\n"},{"names":["asc"],"info":"When gain reduction is always needed ASC takes care of releasing to an\naverage reduction level rather than reaching a reduction of 0 in the release\ntime.\n\n"},{"names":["asc_level"],"info":"Select how much the release time is affected by ASC, 0 means nearly no changes\nin release time while 1 produces higher release times.\n\n"},{"names":["level"],"info":"Auto level output signal. Default is enabled.\nThis normalizes audio back to 0dB if enabled.\n\nDepending on picked setting it is recommended to upsample input 2x or 4x times\nwith @ref{aresample} before applying this filter.\n\n"}]},{"filtergroup":["allpass"],"info":"\nApply a two-pole all-pass filter with central frequency (in Hz)\n@var{frequency}, and filter-width @var{width}.\nAn all-pass filter changes the audio's frequency to phase relationship\nwithout changing its frequency to amplitude relationship.\n\n","options":[{"names":["frequency","f"],"info":"Set frequency in Hz.\n\n"},{"names":["width_type","t"],"info":"Set method to specify band-width of filter.\n@item h\nHz\n@item q\nQ-Factor\n@item o\noctave\n@item s\nslope\n@item k\nkHz\n\n"},{"names":["width","w"],"info":"Specify the band-width of a filter in width_type units.\n\n"},{"names":["mix","m"],"info":"How much to use filtered signal in output. 
Default is 1.\nRange is between 0 and 1.\n\n"},{"names":["channels","c"],"info":"Specify which channels to filter, by default all available are filtered.\n\n"},{"names":["normalize","n"],"info":"Normalize biquad coefficients, by default is disabled.\nEnabling it will normalize magnitude response at DC to 0dB.\n\n@subsection Commands\n\nThis filter supports the following commands:\n@item frequency, f\nChange allpass frequency.\nSyntax for the command is : \"@var{frequency}\"\n\n@item width_type, t\nChange allpass width_type.\nSyntax for the command is : \"@var{width_type}\"\n\n@item width, w\nChange allpass width.\nSyntax for the command is : \"@var{width}\"\n\n@item mix, m\nChange allpass mix.\nSyntax for the command is : \"@var{mix}\"\n\n"}]},{"filtergroup":["aloop"],"info":"\nLoop audio samples.\n\n","options":[{"names":["loop"],"info":"Set the number of loops. Setting this value to -1 will result in infinite loops.\nDefault is 0.\n\n"},{"names":["size"],"info":"Set maximal number of samples. Default is 0.\n\n"},{"names":["start"],"info":"Set first sample of loop. Default is 0.\n\n@anchor{amerge}\n"}]},{"filtergroup":["amerge"],"info":"\nMerge two or more audio streams into a single multi-channel stream.\n\n","options":[{"names":["inputs"],"info":"Set the number of inputs. Default is 2.\n\n\nIf the channel layouts of the inputs are disjoint, and therefore compatible,\nthe channel layout of the output will be set accordingly and the channels\nwill be reordered as necessary. 
If the channel layouts of the inputs are not\ndisjoint, the output will have all the channels of the first input, then all\nthe channels of the second input, in that order, and the channel layout of\nthe output will be the default value corresponding to the total number of\nchannels.\n\nFor example, if the first input is in 2.1 (FL+FR+LF) and the second input\nis FC+BL+BR, then the output will be in 5.1, with the channels in the\nfollowing order: a1, a2, b1, a3, b2, b3 (a1 is the first channel of the\nfirst input, b1 is the first channel of the second input).\n\nOn the other hand, if both inputs are in stereo, the output channels will be\nin the default order: a1, a2, b1, b2, and the channel layout will be\narbitrarily set to 4.0, which may or may not be the expected value.\n\nAll inputs must have the same sample rate and format.\n\nIf inputs do not have the same duration, the output will stop with the\nshortest.\n\n","examples":"@subsection Examples\n\n@itemize\n@item\nMerge two mono files into a stereo stream:\n@example\namovie=left.wav [l] ; amovie=right.mp3 [r] ; [l] [r] amerge\n@end example\n\n@item\nMultiple merges assuming 1 video stream and 6 audio streams in @file{input.mkv}:\n@example\nffmpeg -i input.mkv -filter_complex \"[0:1][0:2][0:3][0:4][0:5][0:6] amerge=inputs=6\" -c:a pcm_s16le output.mkv\n@end example\n@end itemize\n\n"}]},{"filtergroup":["amix"],"info":"\nMixes multiple audio inputs into a single output.\n\nNote that this filter only supports float samples (the @var{amerge}\nand @var{pan} audio filters support many formats). 
If the @var{amix}\ninput has integer samples then @ref{aresample} will be automatically\ninserted to perform the conversion to float samples.\n\nFor example\n@example\nffmpeg -i INPUT1 -i INPUT2 -i INPUT3 -filter_complex amix=inputs=3:duration=first:dropout_transition=3 OUTPUT\n@end example\nwill mix 3 input audio streams to a single output with the same duration as the\nfirst input and a dropout transition time of 3 seconds.\n\nIt accepts the following parameters:\n","options":[{"names":["inputs"],"info":"The number of inputs. If unspecified, it defaults to 2.\n\n"},{"names":["duration"],"info":"How to determine the end-of-stream.\n\n@item longest\nThe duration of the longest input. (default)\n\n@item shortest\nThe duration of the shortest input.\n\n@item first\nThe duration of the first input.\n\n\n"},{"names":["dropout_transition"],"info":"The transition time, in seconds, for volume renormalization when an input\nstream ends. The default value is 2 seconds.\n\n"},{"names":["weights"],"info":"Specify weight of each input audio stream as sequence.\nEach weight is separated by space. By default all inputs have same weight.\n\n"}]},{"filtergroup":["amultiply"],"info":"\nMultiply first audio stream with second audio stream and store result\nin output audio stream. 
Multiplication is done by multiplying each\nsample from first stream with sample at same position from second stream.\n\nWith this element-wise multiplication one can create amplitude fades and\namplitude modulations.\n\n","options":[]},{"filtergroup":["anequalizer"],"info":"\nHigh-order parametric multiband equalizer for each channel.\n\nIt accepts the following parameters:\n","options":[{"names":["params"],"info":"\nThis option string is in format:\n\"c@var{chn} f=@var{cf} w=@var{w} g=@var{g} t=@var{f} | ...\"\nEach equalizer band is separated by '|'.\n\n@item chn\nSet channel number to which equalization will be applied.\nIf input doesn't have that channel the entry is ignored.\n\n@item f\nSet central frequency for band.\nIf input doesn't have that frequency the entry is ignored.\n\n@item w\nSet band width in hertz.\n\n@item g\nSet band gain in dB.\n\n@item t\nSet filter type for band, optional, can be:\n\n@table @samp\n@item 0\nButterworth, this is default.\n\n@item 1\nChebyshev type 1.\n\n@item 2\nChebyshev type 2.\n\n"},{"names":["curves"],"info":"With this option activated frequency response of anequalizer is displayed\nin video stream.\n\n"},{"names":["size"],"info":"Set video stream size. Only useful if curves option is activated.\n\n"},{"names":["mgain"],"info":"Set max gain that will be displayed. Only useful if curves option is activated.\nSetting this to a reasonable value makes it possible to display gain which is derived from\nneighbour bands which are too close to each other and thus produce higher gain\nwhen both are activated.\n\n"},{"names":["fscale"],"info":"Set frequency scale used to draw frequency response in video output.\nCan be linear or logarithmic. 
Default is logarithmic.\n\n"},{"names":["colors"],"info":"Set the color for each channel curve which is going to be displayed in the video stream.\nThis is a list of color names separated by spaces or by '|'.\nUnrecognised or missing colors will be replaced by white.\n\n","examples":"@subsection Examples\n\n@itemize\n@item\nLower gain by 10 dB at central frequency 200 Hz and width 100 Hz\nfor the first 2 channels using a Chebyshev type 1 filter:\n@example\nanequalizer=c0 f=200 w=100 g=-10 t=1|c1 f=200 w=100 g=-10 t=1\n@end example\n@end itemize\n\n@subsection Commands\n\nThis filter supports the following commands:\n@table @option\n@item change\nAlter existing filter parameters.\nSyntax for the command is : \"@var{fN}|f=@var{freq}|w=@var{width}|g=@var{gain}\"\n\n@var{fN} is the existing filter number, starting from 0; if no such filter is available\nan error is returned.\n@var{freq} sets the new frequency parameter.\n@var{width} sets the new width parameter in hertz.\n@var{gain} sets the new gain parameter in dB.\n\nA full filter invocation with asendcmd may look like this:\nasendcmd=c='4.0 anequalizer change 0|f=200|w=50|g=1',anequalizer=...\n@end table\n\n"}]},{"filtergroup":["anlmdn"],"info":"\nReduce broadband noise in audio samples using the Non-Local Means algorithm.\n\nEach sample is adjusted by looking for other samples with similar contexts. This\ncontext similarity is defined by comparing their surrounding patches of size\n@option{p}. Patches are searched in an area of @option{r} around the sample.\n\n","options":[{"names":["s"],"info":"Set denoising strength. Allowed range is from 0.00001 to 10. Default value is 0.00001.\n\n"},{"names":["p"],"info":"Set patch radius duration. Allowed range is from 1 to 100 milliseconds.\nDefault value is 2 milliseconds.\n\n"},{"names":["r"],"info":"Set research radius duration. 
Allowed range is from 2 to 300 milliseconds.\nDefault value is 6 milliseconds.\n\n"},{"names":["o"],"info":"Set the output mode.\n\nIt accepts the following values:\n@item i\nPass input unchanged.\n\n@item o\nPass noise filtered out.\n\n@item n\nPass only noise.\n\nDefault value is @var{o}.\n\n"},{"names":["m"],"info":"Set smooth factor. Default value is @var{11}. Allowed range is from @var{1} to @var{15}.\n\n@subsection Commands\n\nThis filter supports the following commands:\n@item s\nChange denoise strength. Argument is single float number.\nSyntax for the command is : \"@var{s}\"\n\n@item o\nChange output mode.\nSyntax for the command is : \"i\", \"o\" or \"n\" string.\n\n"}]},{"filtergroup":["anlms"],"info":"Apply Normalized Least-Mean-Squares algorithm to the first audio stream using the second audio stream.\n\nThis adaptive filter is used to mimic a desired filter by finding the filter coefficients that\nrelate to producing the least mean square of the error signal (difference between the desired,\n2nd input audio stream and the actual signal, the 1st input audio stream).\n\nA description of the accepted options follows.\n\n","options":[{"names":["order"],"info":"Set filter order.\n\n"},{"names":["mu"],"info":"Set filter mu.\n\n"},{"names":["eps"],"info":"Set the filter eps.\n\n"},{"names":["leakage"],"info":"Set the filter leakage.\n\n"},{"names":["out_mode"],"info":"It accepts the following values:\n@item i\nPass the 1st input.\n\n@item d\nPass the 2nd input.\n\n@item o\nPass filtered samples.\n\n@item n\nPass difference between desired and filtered samples.\n\nDefault value is @var{o}.\n\n","examples":"@subsection Examples\n\n@itemize\n@item\nOne of many usages of this filter is noise reduction, input audio is filtered\nwith same samples that are delayed by fixed amount, one such example for stereo audio is:\n@example\nasplit[a][b],[a]adelay=32S|32S[a],[b][a]anlms=order=128:leakage=0.0005:mu=.5:out_mode=o\n@end example\n@end itemize\n\n@subsection 
Commands\n\nThis filter supports the same commands as options, excluding option @code{order}.\n\n"}]},{"filtergroup":["anull"],"info":"\nPass the audio source unchanged to the output.\n\n","options":[]},{"filtergroup":["apad"],"info":"\nPad the end of an audio stream with silence.\n\nThis can be used together with @command{ffmpeg} @option{-shortest} to\nextend audio streams to the same length as the video stream.\n\nA description of the accepted options follows.\n\n","options":[{"names":["packet_size"],"info":"Set silence packet size. Default value is 4096.\n\n"},{"names":["pad_len"],"info":"Set the number of samples of silence to add to the end. After the\nvalue is reached, the stream is terminated. This option is mutually\nexclusive with @option{whole_len}.\n\n"},{"names":["whole_len"],"info":"Set the minimum total number of samples in the output audio stream. If\nthe value is longer than the input audio length, silence is added to\nthe end, until the value is reached. This option is mutually exclusive\nwith @option{pad_len}.\n\n"},{"names":["pad_dur"],"info":"Specify the duration of samples of silence to add. See\n@ref{time duration syntax,,the Time duration section in the ffmpeg-utils(1) manual,ffmpeg-utils}\nfor the accepted syntax. Used only if set to non-zero value.\n\n"},{"names":["whole_dur"],"info":"Specify the minimum total duration in the output audio stream. See\n@ref{time duration syntax,,the Time duration section in the ffmpeg-utils(1) manual,ffmpeg-utils}\nfor the accepted syntax. Used only if set to non-zero value. 
If the value is longer than\nthe input audio length, silence is added to the end, until the value is reached.\nThis option is mutually exclusive with @option{pad_dur}.\n\nIf neither the @option{pad_len} nor the @option{whole_len} nor @option{pad_dur}\nnor @option{whole_dur} option is set, the filter will add silence to the end of\nthe input stream indefinitely.\n\n","examples":"@subsection Examples\n\n@itemize\n@item\nAdd 1024 samples of silence to the end of the input:\n@example\napad=pad_len=1024\n@end example\n\n@item\nMake sure the audio output will contain at least 10000 samples, pad\nthe input with silence if required:\n@example\napad=whole_len=10000\n@end example\n\n@item\nUse @command{ffmpeg} to pad the audio input with silence, so that the\nvideo stream will always be the shortest and will be converted in full\nin the output file when using the @option{shortest}\noption:\n@example\nffmpeg -i VIDEO -i AUDIO -filter_complex \"[1:0]apad\" -shortest OUTPUT\n@end example\n@end itemize\n\n"}]},{"filtergroup":["aphaser"],"info":"Add a phasing effect to the input audio.\n\nA phaser filter creates a series of peaks and troughs in the frequency spectrum.\nThe positions of the peaks and troughs are modulated so that they vary over time, creating a sweeping effect.\n\nA description of the accepted parameters follows.\n\n","options":[{"names":["in_gain"],"info":"Set input gain. Default is 0.4.\n\n"},{"names":["out_gain"],"info":"Set output gain. Default is 0.74.\n\n"},{"names":["delay"],"info":"Set delay in milliseconds. Default is 3.0.\n\n"},{"names":["decay"],"info":"Set decay. Default is 0.4.\n\n"},{"names":["speed"],"info":"Set modulation speed in Hz. Default is 0.5.\n\n"},{"names":["type"],"info":"Set modulation type. 
Default is triangular.\n\nIt accepts the following values:\n@item triangular, t\n@item sinusoidal, s\n\n"}]},{"filtergroup":["apulsator"],"info":"\nAudio pulsator is something between an autopanner and a tremolo,\nbut it can produce funny stereo effects as well. Pulsator changes the volume\nof the left and right channels based on an LFO (low frequency oscillator) with\ndifferent waveforms and shifted phases.\nThis filter has the ability to define an offset between the left and right\nchannels. An offset of 0 means that both LFO shapes match each other.\nThe left and right channels are altered equally - a conventional tremolo.\nAn offset of 50% means that the shape of the right channel is exactly shifted\nin phase (or moved backwards about half of the frequency) - pulsator acts as\nan autopanner. At 1 both curves match again. Every setting in between moves the\nphase shift gaplessly between all stages and produces some \"bypassing\" sounds with\nsine and triangle waveforms. The closer you set the offset to 1 (starting from\n0.5), the faster the signal passes from the left to the right speaker.\n\n","options":[{"names":["level_in"],"info":"Set input gain. By default it is 1. Range is [0.015625 - 64].\n\n"},{"names":["level_out"],"info":"Set output gain. By default it is 1. Range is [0.015625 - 64].\n\n"},{"names":["mode"],"info":"Set waveform shape the LFO will use. Can be one of: sine, triangle, square,\nsawup or sawdown. Default is sine.\n\n"},{"names":["amount"],"info":"Set modulation. Define how much of the original signal is affected by the LFO.\n\n"},{"names":["offset_l"],"info":"Set left channel offset. Default is 0. Allowed range is [0 - 1].\n\n"},{"names":["offset_r"],"info":"Set right channel offset. Default is 0.5. Allowed range is [0 - 1].\n\n"},{"names":["width"],"info":"Set pulse width. Default is 1. Allowed range is [0 - 2].\n\n"},{"names":["timing"],"info":"Set possible timing mode. Can be one of: bpm, ms or hz. 
Default is hz.\n\n"},{"names":["bpm"],"info":"Set bpm. Default is 120. Allowed range is [30 - 300]. Only used if timing\nis set to bpm.\n\n"},{"names":["ms"],"info":"Set ms. Default is 500. Allowed range is [10 - 2000]. Only used if timing\nis set to ms.\n\n"},{"names":["hz"],"info":"Set frequency in Hz. Default is 2. Allowed range is [0.01 - 100]. Only used\nif timing is set to hz.\n\n@anchor{aresample}\n"}]},{"filtergroup":["aresample"],"info":"\nResample the input audio to the specified parameters, using the\nlibswresample library. If none are specified then the filter will\nautomatically convert between its input and output.\n\nThis filter is also able to stretch/squeeze the audio data to make it match\nthe timestamps or to inject silence / cut out audio to make it match the\ntimestamps, do a combination of both or do neither.\n\nThe filter accepts the syntax\n[@var{sample_rate}:]@var{resampler_options}, where @var{sample_rate}\nexpresses a sample rate and @var{resampler_options} is a list of\n@var{key}=@var{value} pairs, separated by \":\". 
See the\n@ref{Resampler Options,,\"Resampler Options\" section in the\nffmpeg-resampler(1) manual,ffmpeg-resampler}\nfor the complete list of supported options.\n\n@subsection Examples\n\n@itemize\n@item\nResample the input audio to 44100Hz:\n@example\naresample=44100\n@end example\n\n@item\nStretch/squeeze samples to the given timestamps, with a maximum of 1000\nsamples per second compensation:\n@example\naresample=async=1000\n@end example\n@end itemize\n\n","options":[]},{"filtergroup":["areverse"],"info":"\nReverse an audio clip.\n\nWarning: This filter requires memory to buffer the entire clip, so trimming\nis suggested.\n\n@subsection Examples\n\n@itemize\n@item\nTake the first 5 seconds of a clip, and reverse it.\n@example\natrim=end=5,areverse\n@end example\n@end itemize\n\n","options":[]},{"filtergroup":["arnndn"],"info":"\nReduce noise from speech using Recurrent Neural Networks.\n\nThis filter accepts the following options:\n\n","options":[{"names":["model","m"],"info":"Set train model file to load. This option is always required.\n\n"}]},{"filtergroup":["asetnsamples"],"info":"\nSet the number of samples per each output audio frame.\n\nThe last output packet may contain a different number of samples, as\nthe filter will flush all the remaining samples when the input audio\nsignals its end.\n\n","options":[{"names":["nb_out_samples","n"],"info":"Set the number of frames per each output audio frame. The number is\nintended as the number of samples @emph{per each channel}.\nDefault value is 1024.\n\n"},{"names":["pad","p"],"info":"If set to 1, the filter will pad the last audio frame with zeroes, so\nthat the last frame will contain the same number of samples as the\nprevious ones. 
Default value is 1.\n\nFor example, to set the number of per-frame samples to 1234 and\ndisable padding for the last frame, use:\n@example\nasetnsamples=n=1234:p=0\n@end example\n\n"}]},{"filtergroup":["asetrate"],"info":"\nSet the sample rate without altering the PCM data.\nThis will result in a change of speed and pitch.\n\n","options":[{"names":["sample_rate","r"],"info":"Set the output sample rate. Default is 44100 Hz.\n\n"}]},{"filtergroup":["ashowinfo"],"info":"\nShow a line containing various information for each input audio frame.\nThe input audio is not modified.\n\nThe shown line contains a sequence of key/value pairs of the form\n@var{key}:@var{value}.\n\nThe following values are shown in the output:\n\n","options":[{"names":["n"],"info":"The (sequential) number of the input frame, starting from 0.\n\n"},{"names":["pts"],"info":"The presentation timestamp of the input frame, in time base units; the time base\ndepends on the filter input pad, and is usually 1/@var{sample_rate}.\n\n"},{"names":["pts_time"],"info":"The presentation timestamp of the input frame in seconds.\n\n"},{"names":["pos"],"info":"The position of the frame in the input stream, or -1 if this information is\nunavailable and/or meaningless (for example in case of synthetic audio).\n\n"},{"names":["fmt"],"info":"The sample format.\n\n"},{"names":["chlayout"],"info":"The channel layout.\n\n"},{"names":["rate"],"info":"The sample rate for the audio frame.\n\n"},{"names":["nb_samples"],"info":"The number of samples (per channel) in the frame.\n\n"},{"names":["checksum"],"info":"The Adler-32 checksum (printed in hexadecimal) of the audio data. 
For planar\naudio, the data is treated as if all the planes were concatenated.\n\n"},{"names":["plane_checksums"],"info":"A list of Adler-32 checksums for each data plane.\n\n"}]},{"filtergroup":["asoftclip"],"info":"Apply audio soft clipping.\n\nSoft clipping is a type of distortion effect where the amplitude of a signal is saturated\nalong a smooth curve, rather than the abrupt shape of hard-clipping.\n\nThis filter accepts the following options:\n\n","options":[{"names":["type"],"info":"Set type of soft-clipping.\n\nIt accepts the following values:\n@item tanh\n@item atan\n@item cubic\n@item exp\n@item alg\n@item quintic\n@item sin\n\n"},{"names":["param"],"info":"Set additional parameter which controls sigmoid function.\n\n"}]},{"filtergroup":["asr"],"info":"Automatic Speech Recognition\n\nThis filter uses PocketSphinx for speech recognition. To enable\ncompilation of this filter, you need to configure FFmpeg with\n@code{--enable-pocketsphinx}.\n\nIt accepts the following options:\n\n","options":[{"names":["rate"],"info":"Set sampling rate of input audio. 
Default is @code{16000}.\nThis needs to match the speech models; otherwise you will get poor results.\n\n"},{"names":["hmm"],"info":"Set dictionary containing acoustic model files.\n\n"},{"names":["dict"],"info":"Set pronunciation dictionary.\n\n"},{"names":["lm"],"info":"Set language model file.\n\n"},{"names":["lmctl"],"info":"Set language model set.\n\n"},{"names":["lmname"],"info":"Set which language model to use.\n\n"},{"names":["logfn"],"info":"Set output for log messages.\n\nThe filter exports recognized speech as the frame metadata @code{lavfi.asr.text}.\n\n@anchor{astats}\n"}]},{"filtergroup":["astats"],"info":"\nDisplay time domain statistical information about the audio channels.\nStatistics are calculated and displayed for each audio channel and,\nwhere applicable, an overall figure is also given.\n\nIt accepts the following options:\n","options":[{"names":["length"],"info":"Short window length in seconds, used for peak and trough RMS measurement.\nDefault is @code{0.05} (50 milliseconds). Allowed range is @code{[0.01 - 10]}.\n\n"},{"names":["metadata"],"info":"\nSet metadata injection. All the metadata keys are prefixed with @code{lavfi.astats.X},\nwhere @code{X} is the channel number starting from 1 or the string @code{Overall}. 
Default is\ndisabled.\n\nAvailable keys for each channel are:\nDC_offset\nMin_level\nMax_level\nMin_difference\nMax_difference\nMean_difference\nRMS_difference\nPeak_level\nRMS_peak\nRMS_trough\nCrest_factor\nFlat_factor\nPeak_count\nBit_depth\nDynamic_range\nZero_crossings\nZero_crossings_rate\nNumber_of_NaNs\nNumber_of_Infs\nNumber_of_denormals\n\nand for Overall:\nDC_offset\nMin_level\nMax_level\nMin_difference\nMax_difference\nMean_difference\nRMS_difference\nPeak_level\nRMS_level\nRMS_peak\nRMS_trough\nFlat_factor\nPeak_count\nBit_depth\nNumber_of_samples\nNumber_of_NaNs\nNumber_of_Infs\nNumber_of_denormals\n\nFor example, a full key looks like this: @code{lavfi.astats.1.DC_offset} or\nthis: @code{lavfi.astats.Overall.Peak_count}.\n\nFor a description of what each key means, read below.\n\n"},{"names":["reset"],"info":"Set the number of frames after which stats are going to be recalculated.\nDefault is disabled.\n\n"},{"names":["measure_perchannel"],"info":"Select the entries which need to be measured per channel. The metadata keys can\nbe used as flags, default is @option{all} which measures everything.\n@option{none} disables all per channel measurement.\n\n"},{"names":["measure_overall"],"info":"Select the entries which need to be measured overall. 
The metadata keys can\nbe used as flags, default is @option{all} which measures everything.\n@option{none} disables all overall measurement.\n\n\nA description of each shown parameter follows:\n\n@item DC offset\nMean amplitude displacement from zero.\n\n@item Min level\nMinimal sample level.\n\n@item Max level\nMaximal sample level.\n\n@item Min difference\nMinimal difference between two consecutive samples.\n\n@item Max difference\nMaximal difference between two consecutive samples.\n\n@item Mean difference\nMean difference between two consecutive samples.\nThe average of each difference between two consecutive samples.\n\n@item RMS difference\nRoot Mean Square difference between two consecutive samples.\n\n@item Peak level dB\n@item RMS level dB\nStandard peak and RMS level measured in dBFS.\n\n@item RMS peak dB\n@item RMS trough dB\nPeak and trough values for RMS level measured over a short window.\n\n@item Crest factor\nStandard ratio of peak to RMS level (note: not in dB).\n\n@item Flat factor\nFlatness (i.e. consecutive samples with the same value) of the signal at its peak levels\n(i.e. either @var{Min level} or @var{Max level}).\n\n@item Peak count\nNumber of occasions (not the number of samples) that the signal attained either\n@var{Min level} or @var{Max level}.\n\n@item Bit depth\nOverall bit depth of audio. Number of bits used for each sample.\n\n@item Dynamic range\nMeasured dynamic range of audio in dB.\n\n@item Zero crossings\nNumber of points where the waveform crosses the zero level axis.\n\n@item Zero crossings rate\nRate of Zero crossings and number of audio samples.\n\n"}]},{"filtergroup":["atempo"],"info":"\nAdjust audio tempo.\n\nThe filter accepts exactly one parameter, the audio tempo. If not\nspecified then the filter will assume nominal 1.0 tempo. Tempo must\nbe in the [0.5, 100.0] range.\n\nNote that tempo greater than 2 will skip some samples rather than\nblend them in. 
If for any reason this is a concern it is always\npossible to daisy-chain several instances of atempo to achieve the\ndesired product tempo.\n\n@subsection Examples\n\n@itemize\n@item\nSlow down audio to 80% tempo:\n@example\natempo=0.8\n@end example\n\n@item\nTo speed up audio to 300% tempo:\n@example\natempo=3\n@end example\n\n@item\nTo speed up audio to 300% tempo by daisy-chaining two atempo instances:\n@example\natempo=sqrt(3),atempo=sqrt(3)\n@end example\n@end itemize\n\n@subsection Commands\n\nThis filter supports the following commands:\n","options":[{"names":["tempo"],"info":"Change filter tempo scale factor.\nSyntax for the command is : \"@var{tempo}\"\n\n"}]},{"filtergroup":["atrim"],"info":"\nTrim the input so that the output contains one continuous subpart of the input.\n\nIt accepts the following parameters:\n","options":[{"names":["start"],"info":"Timestamp (in seconds) of the start of the section to keep. I.e. the audio\nsample with the timestamp @var{start} will be the first sample in the output.\n\n"},{"names":["end"],"info":"Specify time of the first audio sample that will be dropped, i.e. 
the\naudio sample immediately preceding the one with the timestamp @var{end} will be\nthe last sample in the output.\n\n"},{"names":["start_pts"],"info":"Same as @var{start}, except this option sets the start timestamp in samples\ninstead of seconds.\n\n"},{"names":["end_pts"],"info":"Same as @var{end}, except this option sets the end timestamp in samples instead\nof seconds.\n\n"},{"names":["duration"],"info":"The maximum duration of the output in seconds.\n\n"},{"names":["start_sample"],"info":"The number of the first sample that should be output.\n\n"},{"names":["end_sample"],"info":"The number of the first sample that should be dropped.\n\n@option{start}, @option{end}, and @option{duration} are expressed as time\nduration specifications; see\n@ref{time duration syntax,,the Time duration section in the ffmpeg-utils(1) manual,ffmpeg-utils}.\n\nNote that the first two sets of the start/end options and the @option{duration}\noption look at the frame timestamp, while the _sample options simply count the\nsamples that pass through the filter. So start/end_pts and start/end_sample will\ngive different results when the timestamps are wrong, inexact or do not start at\nzero. Also note that this filter does not modify the timestamps. If you wish\nto have the output timestamps start at zero, insert the asetpts filter after the\natrim filter.\n\nIf multiple start or end options are set, this filter tries to be greedy and\nkeep all samples that match at least one of the specified constraints. To keep\nonly the part that matches all the constraints at once, chain multiple atrim\nfilters.\n\nThe defaults are such that all the input is kept. 
So it is possible to set e.g.\njust the end values to keep everything before the specified time.\n\nExamples:\n@itemize\n"},{"names":[],"info":"Drop everything except the second minute of input:\n@example\nffmpeg -i INPUT -af atrim=60:120\n@end example\n\n"},{"names":[],"info":"Keep only the first 1000 samples:\n@example\nffmpeg -i INPUT -af atrim=end_sample=1000\n@end example\n\n@end itemize\n\n"}]},{"filtergroup":["axcorrelate"],"info":"Calculate normalized cross-correlation between two input audio streams.\n\nResulting samples are always between -1 and 1 inclusive.\nIf the result is 1 it means the two input samples are highly correlated in that selected segment.\nA result of 0 means they are not correlated at all.\nIf the result is -1 it means the two input samples are out of phase, which means they cancel each\nother.\n\n","options":[{"names":["size"],"info":"Set size of segment over which cross-correlation is calculated.\nDefault is 256. Allowed range is from 2 to 131072.\n\n"},{"names":["algo"],"info":"Set algorithm for cross-correlation. Can be @code{slow} or @code{fast}.\nDefault is @code{slow}. The fast algorithm assumes mean values over any given segment\nare always zero and thus needs far fewer calculations.\nThis is generally not true, but is valid for typical audio streams.\n\n","examples":"@subsection Examples\n\n@itemize\n@item\nCalculate correlation between channels in stereo audio stream:\n@example\nffmpeg -i stereo.wav -af channelsplit,axcorrelate=size=1024:algo=fast correlation.wav\n@end example\n@end itemize\n\n"}]},{"filtergroup":["bandpass"],"info":"\nApply a two-pole Butterworth band-pass filter with central\nfrequency @var{frequency}, and (3dB-point) band-width @var{width}.\nThe @var{csg} option selects a constant skirt gain (peak gain = Q)\ninstead of the default: constant 0dB peak gain.\nThe filter rolls off at 6dB per octave (20dB per decade).\n\n","options":[{"names":["frequency","f"],"info":"Set the filter's central frequency. 
Default is @code{3000}.\n\n"},{"names":["csg"],"info":"Constant skirt gain if set to 1. Defaults to 0.\n\n"},{"names":["width_type","t"],"info":"Set method to specify band-width of filter.\n@item h\nHz\n@item q\nQ-Factor\n@item o\noctave\n@item s\nslope\n@item k\nkHz\n\n"},{"names":["width","w"],"info":"Specify the band-width of a filter in width_type units.\n\n"},{"names":["mix","m"],"info":"How much to use filtered signal in output. Default is 1.\nRange is between 0 and 1.\n\n"},{"names":["channels","c"],"info":"Specify which channels to filter, by default all available are filtered.\n\n"},{"names":["normalize","n"],"info":"Normalize biquad coefficients, by default is disabled.\nEnabling it will normalize magnitude response at DC to 0dB.\n\n@subsection Commands\n\nThis filter supports the following commands:\n@item frequency, f\nChange bandpass frequency.\nSyntax for the command is : \"@var{frequency}\"\n\n@item width_type, t\nChange bandpass width_type.\nSyntax for the command is : \"@var{width_type}\"\n\n@item width, w\nChange bandpass width.\nSyntax for the command is : \"@var{width}\"\n\n@item mix, m\nChange bandpass mix.\nSyntax for the command is : \"@var{mix}\"\n\n"}]},{"filtergroup":["bandreject"],"info":"\nApply a two-pole Butterworth band-reject filter with central\nfrequency @var{frequency}, and (3dB-point) band-width @var{width}.\nThe filter rolls off at 6dB per octave (20dB per decade).\n\n","options":[{"names":["frequency","f"],"info":"Set the filter's central frequency. Default is @code{3000}.\n\n"},{"names":["width_type","t"],"info":"Set method to specify band-width of filter.\n@item h\nHz\n@item q\nQ-Factor\n@item o\noctave\n@item s\nslope\n@item k\nkHz\n\n"},{"names":["width","w"],"info":"Specify the band-width of a filter in width_type units.\n\n"},{"names":["mix","m"],"info":"How much to use filtered signal in output. 
Default is 1.\nRange is between 0 and 1.\n\n"},{"names":["channels","c"],"info":"Specify which channels to filter, by default all available are filtered.\n\n"},{"names":["normalize","n"],"info":"Normalize biquad coefficients, by default is disabled.\nEnabling it will normalize magnitude response at DC to 0dB.\n\n@subsection Commands\n\nThis filter supports the following commands:\n@item frequency, f\nChange bandreject frequency.\nSyntax for the command is : \"@var{frequency}\"\n\n@item width_type, t\nChange bandreject width_type.\nSyntax for the command is : \"@var{width_type}\"\n\n@item width, w\nChange bandreject width.\nSyntax for the command is : \"@var{width}\"\n\n@item mix, m\nChange bandreject mix.\nSyntax for the command is : \"@var{mix}\"\n\n"}]},{"filtergroup":["bass","lowshelf"],"info":"\nBoost or cut the bass (lower) frequencies of the audio using a two-pole\nshelving filter with a response similar to that of a standard\nhi-fi's tone-controls. This is also known as shelving equalisation (EQ).\n\n","options":[{"names":["gain","g"],"info":"Give the gain at 0 Hz. Its useful range is about -20\n(for a large cut) to +20 (for a large boost).\nBeware of clipping when using a positive gain.\n\n"},{"names":["frequency","f"],"info":"Set the filter's central frequency and so can be used\nto extend or reduce the frequency range to be boosted or cut.\nThe default value is @code{100} Hz.\n\n"},{"names":["width_type","t"],"info":"Set method to specify band-width of filter.\n@item h\nHz\n@item q\nQ-Factor\n@item o\noctave\n@item s\nslope\n@item k\nkHz\n\n"},{"names":["width","w"],"info":"Determine how steep the filter's shelf transition is.\n\n"},{"names":["mix","m"],"info":"How much to use filtered signal in output. 
Default is 1.\nRange is between 0 and 1.\n\n"},{"names":["channels","c"],"info":"Specify which channels to filter, by default all available are filtered.\n\n"},{"names":["normalize","n"],"info":"Normalize biquad coefficients, by default is disabled.\nEnabling it will normalize magnitude response at DC to 0dB.\n\n@subsection Commands\n\nThis filter supports the following commands:\n@item frequency, f\nChange bass frequency.\nSyntax for the command is : \"@var{frequency}\"\n\n@item width_type, t\nChange bass width_type.\nSyntax for the command is : \"@var{width_type}\"\n\n@item width, w\nChange bass width.\nSyntax for the command is : \"@var{width}\"\n\n@item gain, g\nChange bass gain.\nSyntax for the command is : \"@var{gain}\"\n\n@item mix, m\nChange bass mix.\nSyntax for the command is : \"@var{mix}\"\n\n"}]},{"filtergroup":["biquad"],"info":"\nApply a biquad IIR filter with the given coefficients,\nwhere @var{b0}, @var{b1}, @var{b2} and @var{a0}, @var{a1}, @var{a2}\nare the numerator and denominator coefficients respectively,\nand @var{channels}, @var{c} specify which channels to filter, by default all\navailable are filtered.\n\n@subsection Commands\n\nThis filter supports the following commands:\n","options":[{"names":["a0"],"info":""},{"names":["a1"],"info":""},{"names":["a2"],"info":""},{"names":["b0"],"info":""},{"names":["b1"],"info":""},{"names":["b2"],"info":"Change biquad parameter.\nSyntax for the command is : \"@var{value}\"\n\n"},{"names":["mix","m"],"info":"How much to use filtered signal in output. 
Default is 1.\nRange is between 0 and 1.\n\n"},{"names":["channels","c"],"info":"Specify which channels to filter, by default all available are filtered.\n\n"},{"names":["normalize","n"],"info":"Normalize biquad coefficients, by default is disabled.\nEnabling it will normalize magnitude response at DC to 0dB.\n\n"}]},{"filtergroup":["bs2b"],"info":"Bauer stereo to binaural transformation, which improves headphone listening of\nstereo audio records.\n\nTo enable compilation of this filter you need to configure FFmpeg with\n@code{--enable-libbs2b}.\n\nIt accepts the following parameters:\n","options":[{"names":["profile"],"info":"Pre-defined crossfeed level.\n\n@item default\nDefault level (fcut=700, feed=50).\n\n@item cmoy\nChu Moy circuit (fcut=700, feed=60).\n\n@item jmeier\nJan Meier circuit (fcut=650, feed=95).\n\n\n"},{"names":["fcut"],"info":"Cut frequency (in Hz).\n\n"},{"names":["feed"],"info":"Feed level (in Hz).\n\n\n"}]},{"filtergroup":["channelmap"],"info":"\nRemap input channels to new locations.\n\nIt accepts the following parameters:\n","options":[{"names":["map"],"info":"Map channels from input to output. The argument is a '|'-separated list of\nmappings, each in the @code{@var{in_channel}-@var{out_channel}} or\n@var{in_channel} form. @var{in_channel} can be either the name of the input\nchannel (e.g. FL for front left) or its index in the input channel layout.\n@var{out_channel} is the name of the output channel or its index in the output\nchannel layout. 
If @var{out_channel} is not given then it is implicitly an\nindex, starting with zero and increasing by one for each mapping.\n\n"},{"names":["channel_layout"],"info":"The channel layout of the output stream.\n\nIf no mapping is present, the filter will implicitly map input channels to\noutput channels, preserving indices.\n\n","examples":"@subsection Examples\n\n@itemize\n@item\nFor example, assuming a 5.1+downmix input MOV file,\n@example\nffmpeg -i in.mov -filter 'channelmap=map=DL-FL|DR-FR' out.wav\n@end example\nwill create an output WAV file tagged as stereo from the downmix channels of\nthe input.\n\n@item\nTo fix a 5.1 WAV improperly encoded in AAC's native channel order\n@example\nffmpeg -i in.wav -filter 'channelmap=1|2|0|5|3|4:5.1' out.wav\n@end example\n@end itemize\n\n"}]},{"filtergroup":["channelsplit"],"info":"\nSplit each channel from an input audio stream into a separate output stream.\n\nIt accepts the following parameters:\n","options":[{"names":["channel_layout"],"info":"The channel layout of the input stream. The default is \"stereo\".\n"},{"names":["channels"],"info":"A channel layout describing the channels to be extracted as separate output streams\nor \"all\" to extract each input channel as a separate stream. 
The default is \"all\".\n\nChoosing channels not present in channel layout in the input will result in an error.\n\n","examples":"@subsection Examples\n\n@itemize\n@item\nFor example, assuming a stereo input MP3 file,\n@example\nffmpeg -i in.mp3 -filter_complex channelsplit out.mkv\n@end example\nwill create an output Matroska file with two audio streams, one containing only\nthe left channel and the other the right channel.\n\n@item\nSplit a 5.1 WAV file into per-channel files:\n@example\nffmpeg -i in.wav -filter_complex\n'channelsplit=channel_layout=5.1[FL][FR][FC][LFE][SL][SR]'\n-map '[FL]' front_left.wav -map '[FR]' front_right.wav -map '[FC]'\nfront_center.wav -map '[LFE]' lfe.wav -map '[SL]' side_left.wav -map '[SR]'\nside_right.wav\n@end example\n\n@item\nExtract only LFE from a 5.1 WAV file:\n@example\nffmpeg -i in.wav -filter_complex 'channelsplit=channel_layout=5.1:channels=LFE[LFE]'\n-map '[LFE]' lfe.wav\n@end example\n@end itemize\n\n"}]},{"filtergroup":["chorus"],"info":"Add a chorus effect to the audio.\n\nCan make a single vocal sound like a chorus, but can also be applied to instrumentation.\n\nChorus resembles an echo effect with a short delay, but whereas with echo the delay is\nconstant, with chorus, it is varied using sinusoidal or triangular modulation.\nThe modulation depth defines the range the modulated delay is played before or after\nthe delay. Hence the delayed sound will sound slower or faster, that is, the delayed\nsound is tuned around the original one, like in a chorus where some vocals are slightly\noff key.\n\nIt accepts the following parameters:\n","options":[{"names":["in_gain"],"info":"Set input gain. Default is 0.4.\n\n"},{"names":["out_gain"],"info":"Set output gain. Default is 0.4.\n\n"},{"names":["delays"],"info":"Set delays. 
A typical delay is around 40ms to 60ms.\n\n"},{"names":["decays"],"info":"Set decays.\n\n"},{"names":["speeds"],"info":"Set speeds.\n\n"},{"names":["depths"],"info":"Set depths.\n\n","examples":"@subsection Examples\n\n@itemize\n@item\nA single delay:\n@example\nchorus=0.7:0.9:55:0.4:0.25:2\n@end example\n\n@item\nTwo delays:\n@example\nchorus=0.6:0.9:50|60:0.4|0.32:0.25|0.4:2|1.3\n@end example\n\n@item\nFuller sounding chorus with three delays:\n@example\nchorus=0.5:0.9:50|60|40:0.4|0.32|0.3:0.25|0.4|0.3:2|2.3|1.3\n@end example\n@end itemize\n\n"}]},{"filtergroup":["compand"],"info":"Compress or expand the audio's dynamic range.\n\nIt accepts the following parameters:\n\n","options":[{"names":["attacks"],"info":""},{"names":["decays"],"info":"A list of times in seconds for each channel over which the instantaneous level\nof the input signal is averaged to determine its volume. @var{attacks} refers to\nincrease of volume and @var{decays} refers to decrease of volume. For most\nsituations, the attack time (response to the audio getting louder) should be\nshorter than the decay time, because the human ear is more sensitive to sudden\nloud audio than sudden soft audio. A typical value for attack is 0.3 seconds and\na typical value for decay is 0.8 seconds.\nIf specified number of attacks & decays is lower than number of channels, the last\nset attack/decay will be used for all remaining channels.\n\n"},{"names":["points"],"info":"A list of points for the transfer function, specified in dB relative to the\nmaximum possible signal amplitude. Each key points list must be defined using\nthe following syntax: @code{x0/y0|x1/y1|x2/y2|....} or\n@code{x0/y0 x1/y1 x2/y2 ....}\n\nThe input values must be in strictly increasing order but the transfer function\ndoes not have to be monotonically rising. The point @code{0/0} is assumed but\nmay be overridden (by @code{0/out-dBn}). 
Typical values for the transfer\nfunction are @code{-70/-70|-60/-20|1/0}.\n\n"},{"names":["soft-knee"],"info":"Set the curve radius in dB for all joints. It defaults to 0.01.\n\n"},{"names":["gain"],"info":"Set the additional gain in dB to be applied at all points on the transfer\nfunction. This allows for easy adjustment of the overall gain.\nIt defaults to 0.\n\n"},{"names":["volume"],"info":"Set an initial volume, in dB, to be assumed for each channel when filtering\nstarts. This permits the user to supply a nominal level initially, so that, for\nexample, a very large gain is not applied to initial signal levels before the\ncompanding has begun to operate. A typical value for audio which is initially\nquiet is -90 dB. It defaults to 0.\n\n"},{"names":["delay"],"info":"Set a delay, in seconds. The input audio is analyzed immediately, but audio is\ndelayed before being fed to the volume adjuster. Specifying a delay\napproximately equal to the attack/decay times allows the filter to effectively\noperate in predictive rather than reactive mode. 
It defaults to 0.\n\n\n","examples":"@subsection Examples\n\n@itemize\n@item\nMake music with both quiet and loud passages suitable for listening to in a\nnoisy environment:\n@example\ncompand=.3|.3:1|1:-90/-60|-60/-40|-40/-30|-20/-20:6:0:-90:0.2\n@end example\n\nAnother example for audio with whisper and explosion parts:\n@example\ncompand=0|0:1|1:-90/-900|-70/-70|-30/-9|0/-3:6:0:0:0\n@end example\n\n@item\nA noise gate for when the noise is at a lower level than the signal:\n@example\ncompand=.1|.1:.2|.2:-900/-900|-50.1/-900|-50/-50:.01:0:-90:.1\n@end example\n\n@item\nHere is another noise gate, this time for when the noise is at a higher level\nthan the signal (making it, in some ways, similar to squelch):\n@example\ncompand=.1|.1:.1|.1:-45.1/-45.1|-45/-900|0/-900:.01:45:-90:.1\n@end example\n\n@item\n2:1 compression starting at -6dB:\n@example\ncompand=points=-80/-80|-6/-6|0/-3.8|20/3.5\n@end example\n\n@item\n2:1 compression starting at -9dB:\n@example\ncompand=points=-80/-80|-9/-9|0/-5.3|20/2.9\n@end example\n\n@item\n2:1 compression starting at -12dB:\n@example\ncompand=points=-80/-80|-12/-12|0/-6.8|20/1.9\n@end example\n\n@item\n2:1 compression starting at -18dB:\n@example\ncompand=points=-80/-80|-18/-18|0/-9.8|20/0.7\n@end example\n\n@item\n3:1 compression starting at -15dB:\n@example\ncompand=points=-80/-80|-15/-15|0/-10.8|20/-5.2\n@end example\n\n@item\nCompressor/Gate:\n@example\ncompand=points=-80/-105|-62/-80|-15.4/-15.4|0/-12|20/-7.6\n@end example\n\n@item\nExpander:\n@example\ncompand=attacks=0:points=-80/-169|-54/-80|-49.5/-64.6|-41.1/-41.1|-25.8/-15|-10.8/-4.5|0/0|20/8.3\n@end example\n\n@item\nHard limiter at -6dB:\n@example\ncompand=attacks=0:points=-80/-80|-6/-6|20/-6\n@end example\n\n@item\nHard limiter at -12dB:\n@example\ncompand=attacks=0:points=-80/-80|-12/-12|20/-12\n@end example\n\n@item\nHard noise gate at -35 dB:\n@example\ncompand=attacks=0:points=-80/-115|-35.1/-80|-35/-35|20/20\n@end example\n\n@item\nSoft 
limiter:\n@example\ncompand=attacks=0:points=-80/-80|-12.4/-12.4|-6/-8|0/-6.8|20/-2.8\n@end example\n@end itemize\n\n"}]},{"filtergroup":["compensationdelay"],"info":"\nCompensation Delay Line is a metric based delay to compensate differing\npositions of microphones or speakers.\n\nFor example, you have recorded guitar with two microphones placed in\ndifferent locations. Because the front of sound wave has fixed speed in\nnormal conditions, the phasing of microphones can vary and depends on\ntheir location and interposition. The best sound mix can be achieved when\nthese microphones are in phase (synchronized). Note that a distance of\n~30 cm between microphones makes one microphone capture the signal in\nantiphase to the other microphone. That makes the final mix sound moody.\nThis filter helps to solve phasing problems by adding different delays\nto each microphone track and make them synchronized.\n\nThe best result can be reached when you take one track as base and\nsynchronize other tracks one by one with it.\nRemember that synchronization/delay tolerance depends on sample rate, too.\nHigher sample rates will give more tolerance.\n\nThe filter accepts the following parameters:\n\n","options":[{"names":["mm"],"info":"Set millimeters distance. This is compensation distance for fine tuning.\nDefault is 0.\n\n"},{"names":["cm"],"info":"Set cm distance. This is compensation distance for tightening distance setup.\nDefault is 0.\n\n"},{"names":["m"],"info":"Set meters distance. This is compensation distance for hard distance setup.\nDefault is 0.\n\n"},{"names":["dry"],"info":"Set dry amount. Amount of unprocessed (dry) signal.\nDefault is 0.\n\n"},{"names":["wet"],"info":"Set wet amount. Amount of processed (wet) signal.\nDefault is 1.\n\n"},{"names":["temp"],"info":"Set temperature in degrees Celsius. 
This is the temperature of the environment.\nDefault is 20.\n\n"}]},{"filtergroup":["crossfeed"],"info":"Apply headphone crossfeed filter.\n\nCrossfeed is the process of blending the left and right channels of stereo\naudio recording.\nIt is mainly used to reduce extreme stereo separation of low frequencies.\n\nThe intent is to produce more speaker like sound to the listener.\n\n","options":[{"names":["strength"],"info":"Set strength of crossfeed. Default is 0.2. Allowed range is from 0 to 1.\nThis sets gain of low shelf filter for side part of stereo image.\nDefault is -6dB. Max allowed is -30db when strength is set to 1.\n\n"},{"names":["range"],"info":"Set soundstage wideness. Default is 0.5. Allowed range is from 0 to 1.\nThis sets cut off frequency of low shelf filter. Default is cut off near\n1550 Hz. With range set to 1 cut off frequency is set to 2100 Hz.\n\n"},{"names":["level_in"],"info":"Set input gain. Default is 0.9.\n\n"},{"names":["level_out"],"info":"Set output gain. Default is 1.\n\n"}]},{"filtergroup":["crystalizer"],"info":"Simple algorithm to expand audio dynamic range.\n\n","options":[{"names":["i"],"info":"Sets the intensity of effect (default: 2.0). Must be in range between 0.0\n(unchanged sound) to 10.0 (maximum effect).\n\n"},{"names":["c"],"info":"Enable clipping. By default is enabled.\n\n"}]},{"filtergroup":["dcshift"],"info":"Apply a DC shift to the audio.\n\nThis can be useful to remove a DC offset (caused perhaps by a hardware problem\nin the recording chain) from the audio. The effect of a DC offset is reduced\nheadroom and hence volume. The @ref{astats} filter can be used to determine if\na signal has a DC offset.\n\n","options":[{"names":["shift"],"info":"Set the DC shift, allowed range is [-1, 1]. It indicates the amount to shift\nthe audio.\n\n"},{"names":["limitergain"],"info":"Optional. It should have a value much less than 1 (e.g. 
0.05 or 0.02) and is\nused to prevent clipping.\n\n"}]},{"filtergroup":["deesser"],"info":"\nApply de-essing to the audio samples.\n\n","options":[{"names":["i"],"info":"Set intensity for triggering de-essing. Allowed range is from 0 to 1.\nDefault is 0.\n\n"},{"names":["m"],"info":"Set amount of ducking on treble part of sound. Allowed range is from 0 to 1.\nDefault is 0.5.\n\n"},{"names":["f"],"info":"How much of original frequency content to keep when de-essing. Allowed range is from 0 to 1.\nDefault is 0.5.\n\n"},{"names":["s"],"info":"Set the output mode.\n\nIt accepts the following values:\n@item i\nPass input unchanged.\n\n@item o\nPass ess filtered out.\n\n@item e\nPass only ess.\n\nDefault value is @var{o}.\n\n\n"}]},{"filtergroup":["drmeter"],"info":"Measure audio dynamic range.\n\nDR values of 14 and higher are found in very dynamic material. DR of 8 to 13\nis found in transition material. Anything less than 8 has very poor dynamics\nand is very compressed.\n\n","options":[{"names":["length"],"info":"Set window length in seconds used to split audio into segments of equal length.\nDefault is 3 seconds.\n\n"}]},{"filtergroup":["dynaudnorm"],"info":"Dynamic Audio Normalizer.\n\nThis filter applies a certain amount of gain to the input audio in order\nto bring its peak magnitude to a target level (e.g. 0 dBFS). However, in\ncontrast to more \"simple\" normalization algorithms, the Dynamic Audio\nNormalizer *dynamically* re-adjusts the gain factor to the input audio.\nThis allows for applying extra gain to the \"quiet\" sections of the audio\nwhile avoiding distortions or clipping the \"loud\" sections. In other words:\nThe Dynamic Audio Normalizer will \"even out\" the volume of quiet and loud\nsections, in the sense that the volume of each section is brought to the\nsame target level. Note, however, that the Dynamic Audio Normalizer achieves\nthis goal *without* applying \"dynamic range compressing\". 
It will retain 100%\nof the dynamic range *within* each section of the audio file.\n\n","options":[{"names":["framelen","f"],"info":"Set the frame length in milliseconds. In range from 10 to 8000 milliseconds.\nDefault is 500 milliseconds.\nThe Dynamic Audio Normalizer processes the input audio in small chunks,\nreferred to as frames. This is required, because a peak magnitude has no\nmeaning for just a single sample value. Instead, we need to determine the\npeak magnitude for a contiguous sequence of sample values. While a \"standard\"\nnormalizer would simply use the peak magnitude of the complete file, the\nDynamic Audio Normalizer determines the peak magnitude individually for each\nframe. The length of a frame is specified in milliseconds. By default, the\nDynamic Audio Normalizer uses a frame length of 500 milliseconds, which has\nbeen found to give good results with most files.\nNote that the exact frame length, in number of samples, will be determined\nautomatically, based on the sampling rate of the individual input audio file.\n\n"},{"names":["gausssize","g"],"info":"Set the Gaussian filter window size. In range from 3 to 301, must be odd\nnumber. Default is 31.\nProbably the most important parameter of the Dynamic Audio Normalizer is the\n@code{window size} of the Gaussian smoothing filter. The filter's window size\nis specified in frames, centered around the current frame. For the sake of\nsimplicity, this must be an odd number. Consequently, the default value of 31\ntakes into account the current frame, as well as the 15 preceding frames and\nthe 15 subsequent frames. Using a larger window results in a stronger\nsmoothing effect and thus in less gain variation, i.e. slower gain\nadaptation. Conversely, using a smaller window results in a weaker smoothing\neffect and thus in more gain variation, i.e. 
faster gain adaptation.\nIn other words, the more you increase this value, the more the Dynamic Audio\nNormalizer will behave like a \"traditional\" normalization filter. On the\ncontrary, the more you decrease this value, the more the Dynamic Audio\nNormalizer will behave like a dynamic range compressor.\n\n"},{"names":["peak","p"],"info":"Set the target peak value. This specifies the highest permissible magnitude\nlevel for the normalized audio input. This filter will try to approach the\ntarget peak magnitude as closely as possible, but at the same time it also\nmakes sure that the normalized signal will never exceed the peak magnitude.\nA frame's maximum local gain factor is imposed directly by the target peak\nmagnitude. The default value is 0.95 and thus leaves a headroom of 5%.\nIt is not recommended to go above this value.\n\n"},{"names":["maxgain","m"],"info":"Set the maximum gain factor. In range from 1.0 to 100.0. Default is 10.0.\nThe Dynamic Audio Normalizer determines the maximum possible (local) gain\nfactor for each input frame, i.e. the maximum gain factor that does not\nresult in clipping or distortion. The maximum gain factor is determined by\nthe frame's highest magnitude sample. However, the Dynamic Audio Normalizer\nadditionally bounds the frame's maximum gain factor by a predetermined\n(global) maximum gain factor. This is done in order to avoid excessive gain\nfactors in \"silent\" or almost silent frames. By default, the maximum gain\nfactor is 10.0. For most inputs the default value should be sufficient and\nit usually is not recommended to increase this value. Though, for input\nwith an extremely low overall volume level, it may be necessary to allow even\nhigher gain factors. Note, however, that the Dynamic Audio Normalizer does\nnot simply apply a \"hard\" threshold (i.e. cut off values above the threshold).\nInstead, a \"sigmoid\" threshold function will be applied. 
This way, the\ngain factors will smoothly approach the threshold value, but never exceed that\nvalue.\n\n"},{"names":["targetrms","r"],"info":"Set the target RMS. In range from 0.0 to 1.0. Default is 0.0 - disabled.\nBy default, the Dynamic Audio Normalizer performs \"peak\" normalization.\nThis means that the maximum local gain factor for each frame is defined\n(only) by the frame's highest magnitude sample. This way, the samples can\nbe amplified as much as possible without exceeding the maximum signal\nlevel, i.e. without clipping. Optionally, however, the Dynamic Audio\nNormalizer can also take into account the frame's root mean square,\nabbreviated RMS. In electrical engineering, the RMS is commonly used to\ndetermine the power of a time-varying signal. It is therefore considered\nthat the RMS is a better approximation of the \"perceived loudness\" than\njust looking at the signal's peak magnitude. Consequently, by adjusting all\nframes to a constant RMS value, a uniform \"perceived loudness\" can be\nestablished. If a target RMS value has been specified, a frame's local gain\nfactor is defined as the factor that would result in exactly that RMS value.\nNote, however, that the maximum local gain factor is still restricted by the\nframe's highest magnitude sample, in order to prevent clipping.\n\n"},{"names":["coupling","n"],"info":"Enable channels coupling. By default is enabled.\nBy default, the Dynamic Audio Normalizer will amplify all channels by the same\namount. This means the same gain factor will be applied to all channels, i.e.\nthe maximum possible gain factor is determined by the \"loudest\" channel.\nHowever, in some recordings, it may happen that the volume of the different\nchannels is uneven, e.g. one channel may be \"quieter\" than the other one(s).\nIn this case, this option can be used to disable the channel coupling. 
This way,\nthe gain factor will be determined independently for each channel, depending\nonly on the individual channel's highest magnitude sample. This allows for\nharmonizing the volume of the different channels.\n\n"},{"names":["correctdc","c"],"info":"Enable DC bias correction. By default is disabled.\nAn audio signal (in the time domain) is a sequence of sample values.\nIn the Dynamic Audio Normalizer these sample values are represented in the\n-1.0 to 1.0 range, regardless of the original input format. Normally, the\naudio signal, or \"waveform\", should be centered around the zero point.\nThat means if we calculate the mean value of all samples in a file, or in a\nsingle frame, then the result should be 0.0 or at least very close to that\nvalue. If, however, there is a significant deviation of the mean value from\n0.0, in either positive or negative direction, this is referred to as a\nDC bias or DC offset. Since a DC bias is clearly undesirable, the Dynamic\nAudio Normalizer provides optional DC bias correction.\nWith DC bias correction enabled, the Dynamic Audio Normalizer will determine\nthe mean value, or \"DC correction\" offset, of each input frame and subtract\nthat value from all of the frame's sample values which ensures those samples\nare centered around 0.0 again. Also, in order to avoid \"gaps\" at the frame\nboundaries, the DC correction offset values will be interpolated smoothly\nbetween neighbouring frames.\n\n"},{"names":["altboundary","b"],"info":"Enable alternative boundary mode. By default is disabled.\nThe Dynamic Audio Normalizer takes into account a certain neighbourhood\naround each frame. This includes the preceding frames as well as the\nsubsequent frames. However, for the \"boundary\" frames, located at the very\nbeginning and at the very end of the audio file, not all neighbouring\nframes are available. In particular, for the first few frames in the audio\nfile, the preceding frames are not known. 
And, similarly, for the last few\nframes in the audio file, the subsequent frames are not known. Thus, the\nquestion arises which gain factors should be assumed for the missing frames\nin the \"boundary\" region. The Dynamic Audio Normalizer implements two modes\nto deal with this situation. The default boundary mode assumes a gain factor\nof exactly 1.0 for the missing frames, resulting in a smooth \"fade in\" and\n\"fade out\" at the beginning and at the end of the input, respectively.\n\n"},{"names":["compress","s"],"info":"Set the compress factor. In range from 0.0 to 30.0. Default is 0.0.\nBy default, the Dynamic Audio Normalizer does not apply \"traditional\"\ncompression. This means that signal peaks will not be pruned and thus the\nfull dynamic range will be retained within each local neighbourhood. However,\nin some cases it may be desirable to combine the Dynamic Audio Normalizer's\nnormalization algorithm with a more \"traditional\" compression.\nFor this purpose, the Dynamic Audio Normalizer provides an optional compression\n(thresholding) function. If (and only if) the compression feature is enabled,\nall input frames will be processed by a soft knee thresholding function prior\nto the actual normalization process. Put simply, the thresholding function is\ngoing to prune all samples whose magnitude exceeds a certain threshold value.\nHowever, the Dynamic Audio Normalizer does not simply apply a fixed threshold\nvalue. Instead, the threshold value will be adjusted for each individual\nframe.\nIn general, smaller parameters result in stronger compression, and vice versa.\nValues below 3.0 are not recommended, because audible distortion may appear.\n\n"}]},{"filtergroup":["earwax"],"info":"\nMake audio easier to listen to on headphones.\n\nThis filter adds `cues' to 44.1kHz stereo (i.e. 
audio CD format) audio\nso that when listened to on headphones the stereo image is moved from\ninside your head (standard for headphones) to outside and in front of\nthe listener (standard for speakers).\n\nPorted from SoX.\n\n","options":[]},{"filtergroup":["equalizer"],"info":"\nApply a two-pole peaking equalisation (EQ) filter. With this\nfilter, the signal-level at and around a selected frequency can\nbe increased or decreased, whilst (unlike bandpass and bandreject\nfilters) that at all other frequencies is unchanged.\n\nIn order to produce complex equalisation curves, this filter can\nbe given several times, each with a different central frequency.\n\n","options":[{"names":["frequency","f"],"info":"Set the filter's central frequency in Hz.\n\n"},{"names":["width_type","t"],"info":"Set method to specify band-width of filter.\n@item h\nHz\n@item q\nQ-Factor\n@item o\noctave\n@item s\nslope\n@item k\nkHz\n\n"},{"names":["width","w"],"info":"Specify the band-width of a filter in width_type units.\n\n"},{"names":["gain","g"],"info":"Set the required gain or attenuation in dB.\nBeware of clipping when using a positive gain.\n\n"},{"names":["mix","m"],"info":"How much to use filtered signal in output. 
Default is 1.\nRange is between 0 and 1.\n\n"},{"names":["channels","c"],"info":"Specify which channels to filter, by default all available are filtered.\n\n"},{"names":["normalize","n"],"info":"Normalize biquad coefficients, by default is disabled.\nEnabling it will normalize magnitude response at DC to 0dB.\n\n","examples":"@subsection Examples\n@itemize\n@item\nAttenuate 10 dB at 1000 Hz, with a bandwidth of 200 Hz:\n@example\nequalizer=f=1000:t=h:width=200:g=-10\n@end example\n\n@item\nApply 2 dB gain at 1000 Hz with Q 1 and attenuate 5 dB at 100 Hz with Q 2:\n@example\nequalizer=f=1000:t=q:w=1:g=2,equalizer=f=100:t=q:w=2:g=-5\n@end example\n@end itemize\n\n@subsection Commands\n\nThis filter supports the following commands:\n@table @option\n@item frequency, f\nChange equalizer frequency.\nSyntax for the command is : \"@var{frequency}\"\n\n@item width_type, t\nChange equalizer width_type.\nSyntax for the command is : \"@var{width_type}\"\n\n@item width, w\nChange equalizer width.\nSyntax for the command is : \"@var{width}\"\n\n@item gain, g\nChange equalizer gain.\nSyntax for the command is : \"@var{gain}\"\n\n@item mix, m\nChange equalizer mix.\nSyntax for the command is : \"@var{mix}\"\n@end table\n\n"}]},{"filtergroup":["extrastereo"],"info":"\nLinearly increases the difference between left and right channels which\nadds some sort of \"live\" effect to playback.\n\n","options":[{"names":["m"],"info":"Sets the difference coefficient (default: 2.5). 0.0 means mono sound\n(average of both channels), with 1.0 sound will be unchanged, with\n-1.0 left and right channels will be swapped.\n\n"},{"names":["c"],"info":"Enable clipping. By default is enabled.\n\n"}]},{"filtergroup":["firequalizer"],"info":"Apply FIR Equalization using arbitrary frequency response.\n\nThe filter accepts the following option:\n\n","options":[{"names":["gain"],"info":"Set gain curve equation (in dB). 
The expression can contain variables:\n@item f\nthe evaluated frequency\n@item sr\nsample rate\n@item ch\nchannel number, set to 0 when multichannels evaluation is disabled\n@item chid\nchannel id, see libavutil/channel_layout.h, set to the first channel id when\nmultichannels evaluation is disabled\n@item chs\nnumber of channels\n@item chlayout\nchannel_layout, see libavutil/channel_layout.h\n\nand functions:\n@item gain_interpolate(f)\ninterpolate gain on frequency f based on gain_entry\n@item cubic_interpolate(f)\nsame as gain_interpolate, but smoother\nThis option is also available as command. Default is @code{gain_interpolate(f)}.\n\n"},{"names":["gain_entry"],"info":"Set gain entry for gain_interpolate function. The expression can\ncontain functions:\n@item entry(f, g)\nstore gain entry at frequency f with value g\nThis option is also available as command.\n\n"},{"names":["delay"],"info":"Set filter delay in seconds. Higher value means more accurate.\nDefault is @code{0.01}.\n\n"},{"names":["accuracy"],"info":"Set filter accuracy in Hz. Lower value means more accurate.\nDefault is @code{5}.\n\n"},{"names":["wfunc"],"info":"Set window function. Acceptable values are:\n@item rectangular\nrectangular window, useful when gain curve is already smooth\n@item hann\nhann window (default)\n@item hamming\nhamming window\n@item blackman\nblackman window\n@item nuttall3\n3-terms continuous 1st derivative nuttall window\n@item mnuttall3\nminimum 3-terms discontinuous nuttall window\n@item nuttall\n4-terms continuous 1st derivative nuttall window\n@item bnuttall\nminimum 4-terms discontinuous nuttall (blackman-nuttall) window\n@item bharris\nblackman-harris window\n@item tukey\ntukey window\n\n"},{"names":["fixed"],"info":"If enabled, use fixed number of audio samples. This improves speed when\nfiltering with large delay. Default is disabled.\n\n"},{"names":["multi"],"info":"Enable multichannels evaluation on gain. 
Default is disabled.\n\n"},{"names":["zero_phase"],"info":"Enable zero phase mode by subtracting timestamp to compensate for delay.\nDefault is disabled.\n\n"},{"names":["scale"],"info":"Set scale used by gain. Acceptable values are:\n@item linlin\nlinear frequency, linear gain\n@item linlog\nlinear frequency, logarithmic (in dB) gain (default)\n@item loglin\nlogarithmic (in octave scale where 20 Hz is 0) frequency, linear gain\n@item loglog\nlogarithmic frequency, logarithmic gain\n\n"},{"names":["dumpfile"],"info":"Set file for dumping, suitable for gnuplot.\n\n"},{"names":["dumpscale"],"info":"Set scale for dumpfile. Acceptable values are the same as for the scale option.\nDefault is linlog.\n\n"},{"names":["fft2"],"info":"Enable 2-channel convolution using complex FFT. This improves speed significantly.\nDefault is disabled.\n\n"},{"names":["min_phase"],"info":"Enable minimum phase impulse response. Default is disabled.\n\n","examples":"@subsection Examples\n@itemize\n@item\nlowpass at 1000 Hz:\n@example\nfirequalizer=gain='if(lt(f,1000), 0, -INF)'\n@end example\n@item\nlowpass at 1000 Hz with gain_entry:\n@example\nfirequalizer=gain_entry='entry(1000,0); entry(1001, -INF)'\n@end example\n@item\ncustom equalization:\n@example\nfirequalizer=gain_entry='entry(100,0); entry(400, -4); entry(1000, -6); entry(2000, 0)'\n@end example\n@item\nhigher delay with zero phase to compensate for delay:\n@example\nfirequalizer=delay=0.1:fixed=on:zero_phase=on\n@end example\n@item\nlowpass on left channel, highpass on right channel:\n@example\nfirequalizer=gain='if(eq(chid,1), gain_interpolate(f), if(eq(chid,2), gain_interpolate(1e6+f), 0))'\n:gain_entry='entry(1000, 0); entry(1001,-INF); entry(1e6+1000,0)':multi=on\n@end example\n@end itemize\n\n"}]},{"filtergroup":["flanger"],"info":"Apply a flanging effect to the audio.\n\n","options":[{"names":["delay"],"info":"Set base delay in milliseconds. Range from 0 to 30. 
Default value is 0.\n\n"},{"names":["depth"],"info":"Set added sweep delay in milliseconds. Range from 0 to 10. Default value is 2.\n\n"},{"names":["regen"],"info":"Set percentage regeneration (delayed signal feedback). Range from -95 to 95.\nDefault value is 0.\n\n"},{"names":["width"],"info":"Set percentage of delayed signal mixed with original. Range from 0 to 100.\nDefault value is 71.\n\n"},{"names":["speed"],"info":"Set sweeps per second (Hz). Range from 0.1 to 10. Default value is 0.5.\n\n"},{"names":["shape"],"info":"Set swept wave shape, can be @var{triangular} or @var{sinusoidal}.\nDefault value is @var{sinusoidal}.\n\n"},{"names":["phase"],"info":"Set swept wave percentage-shift for multi channel. Range from 0 to 100.\nDefault value is 25.\n\n"},{"names":["interp"],"info":"Set delay-line interpolation, @var{linear} or @var{quadratic}.\nDefault is @var{linear}.\n\n"}]},{"filtergroup":["haas"],"info":"Apply Haas effect to audio.\n\nNote that this makes most sense to apply on mono signals.\nWith this filter applied to mono signals it gives some directionality and\nstretches its stereo image.\n\n","options":[{"names":["level_in"],"info":"Set input level. By default is @var{1}, or 0dB.\n\n"},{"names":["level_out"],"info":"Set output level. By default is @var{1}, or 0dB.\n\n"},{"names":["side_gain"],"info":"Set gain applied to side part of signal. By default is @var{1}.\n\n"},{"names":["middle_source"],"info":"Set kind of middle source. Can be one of the following:\n\n@item left\nPick left channel.\n\n@item right\nPick right channel.\n\n@item mid\nPick middle part signal of stereo image.\n\n@item side\nPick side part signal of stereo image.\n\n"},{"names":["middle_phase"],"info":"Change middle phase. By default is disabled.\n\n"},{"names":["left_delay"],"info":"Set left channel delay. By default is @var{2.05} milliseconds.\n\n"},{"names":["left_balance"],"info":"Set left channel balance. 
By default is @var{-1}.\n\n"},{"names":["left_gain"],"info":"Set left channel gain. By default is @var{1}.\n\n"},{"names":["left_phase"],"info":"Change left phase. By default is disabled.\n\n"},{"names":["right_delay"],"info":"Set right channel delay. By default is @var{2.12} milliseconds.\n\n"},{"names":["right_balance"],"info":"Set right channel balance. By default is @var{1}.\n\n"},{"names":["right_gain"],"info":"Set right channel gain. By default is @var{1}.\n\n"},{"names":["right_phase"],"info":"Change right phase. By default is enabled.\n\n"}]},{"filtergroup":["hdcd"],"info":"\nDecodes High Definition Compatible Digital (HDCD) data. A 16-bit PCM stream with\nembedded HDCD codes is expanded into a 20-bit PCM stream.\n\nThe filter supports the Peak Extend and Low-level Gain Adjustment features\nof HDCD, and detects the Transient Filter flag.\n\n@example\nffmpeg -i HDCD16.flac -af hdcd OUT24.flac\n@end example\n\nWhen using the filter with wav, note the default encoding for wav is 16-bit,\nso the resulting 20-bit stream will be truncated back to 16-bit. Use something\nlike @command{-acodec pcm_s24le} after the filter to get 24-bit PCM output.\n@example\nffmpeg -i HDCD16.wav -af hdcd OUT16.wav\nffmpeg -i HDCD16.wav -af hdcd -c:a pcm_s24le OUT24.wav\n@end example\n\n","options":[{"names":["disable_autoconvert"],"info":"Disable any automatic format conversion or resampling in the filter graph.\n\n"},{"names":["process_stereo"],"info":"Process the stereo channels together. If target_gain does not match between\nchannels, consider it invalid and use the last valid target_gain.\n\n"},{"names":["cdt_ms"],"info":"Set the code detect timer period in ms.\n\n"},{"names":["force_pe"],"info":"Always extend peaks above -3dBFS even if PE isn't signaled.\n\n"},{"names":["analyze_mode"],"info":"Replace audio with a solid tone and adjust the amplitude to signal some\nspecific aspect of the decoding process. 
The output file can be loaded in\nan audio editor alongside the original to aid analysis.\n\n@code{analyze_mode=pe:force_pe=true} can be used to see all samples above the PE level.\n\nModes are:\n@item 0, off\nDisabled\n@item 1, lle\nGain adjustment level at each sample\n@item 2, pe\nSamples where peak extend occurs\n@item 3, cdt\nSamples where the code detect timer is active\n@item 4, tgm\nSamples where the target gain does not match between channels\n\n"}]},{"filtergroup":["headphone"],"info":"\nApply head-related transfer functions (HRTFs) to create virtual\nloudspeakers around the user for binaural listening via headphones.\nThe HRIRs are provided via additional streams; for each channel\none stereo input stream is needed.\n\n","options":[{"names":["map"],"info":"Set mapping of input streams for convolution.\nThe argument is a '|'-separated list of channel names in order as they\nare given as additional stream inputs for the filter.\nThis also specifies the number of input streams. The number of input streams\nmust not be less than the number of channels in the first stream plus one.\n\n"},{"names":["gain"],"info":"Set gain applied to audio. Value is in dB. Default is 0.\n\n"},{"names":["type"],"info":"Set processing type. Can be @var{time} or @var{freq}. @var{time} is\nprocessing audio in time domain which is slow.\n@var{freq} is processing audio in frequency domain which is fast.\nDefault is @var{freq}.\n\n"},{"names":["lfe"],"info":"Set custom gain for LFE channels. Value is in dB. Default is 0.\n\n"},{"names":["size"],"info":"Set size of frame in number of samples which will be processed at once.\nDefault value is @var{1024}. Allowed range is from 1024 to 96000.\n\n"},{"names":["hrir"],"info":"Set format of hrir stream.\nDefault value is @var{stereo}. 
Alternative value is @var{multich}.\nIf value is set to @var{stereo}, number of additional streams should\nbe greater than or equal to number of input channels in first input stream.\nAlso each additional stream should have stereo number of channels.\nIf value is set to @var{multich}, number of additional streams should\nbe exactly one. Also number of input channels of additional stream\nshould be equal to or greater than twice number of channels of first input\nstream.\n\n","examples":"@subsection Examples\n\n@itemize\n@item\nFull example using wav files as coefficients with amovie filters for 7.1 downmix,\neach amovie filter uses a stereo file with IR coefficients as input.\nThe files give coefficients for each position of virtual loudspeaker:\n@example\nffmpeg -i input.wav\n-filter_complex \"amovie=azi_270_ele_0_DFC.wav[sr];amovie=azi_90_ele_0_DFC.wav[sl];amovie=azi_225_ele_0_DFC.wav[br];amovie=azi_135_ele_0_DFC.wav[bl];amovie=azi_0_ele_0_DFC.wav,asplit[fc][lfe];amovie=azi_35_ele_0_DFC.wav[fl];amovie=azi_325_ele_0_DFC.wav[fr];[0:a][fl][fr][fc][lfe][bl][br][sl][sr]headphone=FL|FR|FC|LFE|BL|BR|SL|SR\"\noutput.wav\n@end example\n\n@item\nFull example using wav files as coefficients with amovie filters for 7.1 downmix,\nbut now in @var{multich} @var{hrir} format.\n@example\nffmpeg -i input.wav -filter_complex \"amovie=minp.wav[hrirs];[0:a][hrirs]headphone=map=FL|FR|FC|LFE|BL|BR|SL|SR:hrir=multich\"\noutput.wav\n@end example\n@end itemize\n\n"}]},{"filtergroup":["highpass"],"info":"\nApply a high-pass filter with 3dB point frequency.\nThe filter can be either single-pole or double-pole (the default).\nThe filter rolls off at 6dB per pole per octave (20dB per pole per decade).\n\n","options":[{"names":["frequency","f"],"info":"Set frequency in Hz. Default is 3000.\n\n"},{"names":["poles","p"],"info":"Set number of poles. 
Default is 2.\n\n"},{"names":["width_type","t"],"info":"Set method to specify band-width of filter.\n@item h\nHz\n@item q\nQ-Factor\n@item o\noctave\n@item s\nslope\n@item k\nkHz\n\n"},{"names":["width","w"],"info":"Specify the band-width of a filter in width_type units.\nApplies only to double-pole filter.\nThe default is 0.707q and gives a Butterworth response.\n\n"},{"names":["mix","m"],"info":"How much to use filtered signal in output. Default is 1.\nRange is between 0 and 1.\n\n"},{"names":["channels","c"],"info":"Specify which channels to filter, by default all available are filtered.\n\n"},{"names":["normalize","n"],"info":"Normalize biquad coefficients, by default is disabled.\nEnabling it will normalize magnitude response at DC to 0dB.\n\n@subsection Commands\n\nThis filter supports the following commands:\n@item frequency, f\nChange highpass frequency.\nSyntax for the command is : \"@var{frequency}\"\n\n@item width_type, t\nChange highpass width_type.\nSyntax for the command is : \"@var{width_type}\"\n\n@item width, w\nChange highpass width.\nSyntax for the command is : \"@var{width}\"\n\n@item mix, m\nChange highpass mix.\nSyntax for the command is : \"@var{mix}\"\n\n"}]},{"filtergroup":["join"],"info":"\nJoin multiple input streams into one multi-channel stream.\n\nIt accepts the following parameters:\n","options":[{"names":["inputs"],"info":"The number of input streams. It defaults to 2.\n\n"},{"names":["channel_layout"],"info":"The desired output channel layout. It defaults to stereo.\n\n"},{"names":["map"],"info":"Map channels from inputs to output. The argument is a '|'-separated list of\nmappings, each in the @code{@var{input_idx}.@var{in_channel}-@var{out_channel}}\nform. @var{input_idx} is the 0-based index of the input stream. @var{in_channel}\ncan be either the name of the input channel (e.g. FL for front left) or its\nindex in the specified input stream. 
@var{out_channel} is the name of the output\nchannel.\n\nThe filter will attempt to guess the mappings when they are not specified\nexplicitly. It does so by first trying to find an unused matching input channel\nand if that fails it picks the first unused input channel.\n\nJoin 3 inputs (with properly set channel layouts):\n@example\nffmpeg -i INPUT1 -i INPUT2 -i INPUT3 -filter_complex join=inputs=3 OUTPUT\n@end example\n\nBuild a 5.1 output from 6 single-channel streams:\n@example\nffmpeg -i fl -i fr -i fc -i sl -i sr -i lfe -filter_complex\n'join=inputs=6:channel_layout=5.1:map=0.0-FL|1.0-FR|2.0-FC|3.0-SL|4.0-SR|5.0-LFE'\nout\n@end example\n\n"}]},{"filtergroup":["ladspa"],"info":"\nLoad a LADSPA (Linux Audio Developer's Simple Plugin API) plugin.\n\nTo enable compilation of this filter you need to configure FFmpeg with\n@code{--enable-ladspa}.\n\n","options":[{"names":["file","f"],"info":"Specifies the name of LADSPA plugin library to load. If the environment\nvariable @env{LADSPA_PATH} is defined, the LADSPA plugin is searched in\neach one of the directories specified by the colon separated list in\n@env{LADSPA_PATH}, otherwise in the standard LADSPA paths, which are in\nthis order: @file{HOME/.ladspa/lib/}, @file{/usr/local/lib/ladspa/},\n@file{/usr/lib/ladspa/}.\n\n"},{"names":["plugin","p"],"info":"Specifies the plugin within the library. Some libraries contain only\none plugin, but others contain many of them. 
If this is not set, the filter\nwill list all available plugins within the specified library.\n\n"},{"names":["controls","c"],"info":"Set the '|' separated list of controls which are zero or more floating point\nvalues that determine the behavior of the loaded plugin (for example delay,\nthreshold or gain).\nControls need to be defined using the following syntax:\nc0=@var{value0}|c1=@var{value1}|c2=@var{value2}|..., where\n@var{valuei} is the value set on the @var{i}-th control.\nAlternatively they can also be defined using the following syntax:\n@var{value0}|@var{value1}|@var{value2}|..., where\n@var{valuei} is the value set on the @var{i}-th control.\nIf @option{controls} is set to @code{help}, all available controls and\ntheir valid ranges are printed.\n\n"},{"names":["sample_rate","s"],"info":"Specify the sample rate, default is 44100. Only used if the plugin has\nzero inputs.\n\n"},{"names":["nb_samples","n"],"info":"Set the number of samples per channel in each output frame, default\nis 1024. Only used if the plugin has zero inputs.\n\n"},{"names":["duration","d"],"info":"Set the minimum duration of the sourced audio. 
See\n@ref{time duration syntax,,the Time duration section in the ffmpeg-utils(1) manual,ffmpeg-utils}\nfor the accepted syntax.\nNote that the resulting duration may be greater than the specified duration,\nas the generated audio is always cut at the end of a complete frame.\nIf not specified, or the expressed duration is negative, the audio is\nsupposed to be generated forever.\nOnly used if the plugin has zero inputs.\n\n\n","examples":"@subsection Examples\n\n@itemize\n@item\nList all available plugins within amp (LADSPA example plugin) library:\n@example\nladspa=file=amp\n@end example\n\n@item\nList all available controls and their valid ranges for @code{vcf_notch}\nplugin from @code{VCF} library:\n@example\nladspa=f=vcf:p=vcf_notch:c=help\n@end example\n\n@item\nSimulate low quality audio equipment using @code{Computer Music Toolkit} (CMT)\nplugin library:\n@example\nladspa=file=cmt:plugin=lofi:controls=c0=22|c1=12|c2=12\n@end example\n\n@item\nAdd reverberation to the audio using TAP-plugins\n(Tom's Audio Processing plugins):\n@example\nladspa=file=tap_reverb:tap_reverb\n@end example\n\n@item\nGenerate white noise, with 0.2 amplitude:\n@example\nladspa=file=cmt:noise_source_white:c=c0=.2\n@end example\n\n@item\nGenerate 20 bpm clicks using plugin @code{C* Click - Metronome} from the\n@code{C* Audio Plugin Suite} (CAPS) library:\n@example\nladspa=file=caps:Click:c=c1=20\n@end example\n\n@item\nApply @code{C* Eq10X2 - Stereo 10-band equaliser} effect:\n@example\nladspa=caps:Eq10X2:c=c0=-48|c9=-24|c3=12|c4=2\n@end example\n\n@item\nIncrease volume by 20dB using fast lookahead limiter from Steve Harris\n@code{SWH Plugins} collection:\n@example\nladspa=fast_lookahead_limiter_1913:fastLookaheadLimiter:20|0|2\n@end example\n\n@item\nAttenuate low frequencies using Multiband EQ from Steve Harris\n@code{SWH Plugins} collection:\n@example\nladspa=mbeq_1197:mbeq:-24|-24|-24|0|0|0|0|0|0|0|0|0|0|0|0\n@end example\n\n@item\nReduce stereo image using @code{Narrower} from the 
@code{C* Audio Plugin Suite}\n(CAPS) library:\n@example\nladspa=caps:Narrower\n@end example\n\n@item\nAnother white noise, now using @code{C* Audio Plugin Suite} (CAPS) library:\n@example\nladspa=caps:White:.2\n@end example\n\n@item\nSome fractal noise, using @code{C* Audio Plugin Suite} (CAPS) library:\n@example\nladspa=caps:Fractal:c=c1=1\n@end example\n\n@item\nDynamic volume normalization using @code{VLevel} plugin:\n@example\nladspa=vlevel-ladspa:vlevel_mono\n@end example\n@end itemize\n\n@subsection Commands\n\nThis filter supports the following commands:\n@table @option\n@item cN\nModify the @var{N}-th control value.\n\nIf the specified value is not valid, it is ignored and the prior one is kept.\n@end table\n\n"}]},{"filtergroup":["loudnorm"],"info":"\nEBU R128 loudness normalization. Includes both dynamic and linear normalization modes.\nSupport for both single pass (livestreams, files) and double pass (files) modes.\nThis algorithm can target IL, LRA, and maximum true peak. To accurately detect true peaks,\nthe audio stream will be upsampled to 192 kHz unless the normalization mode is linear.\nUse the @code{-ar} option or @code{aresample} filter to explicitly set an output sample rate.\n\n","options":[{"names":["I","i"],"info":"Set integrated loudness target.\nRange is -70.0 - -5.0. Default value is -24.0.\n\n"},{"names":["LRA","lra"],"info":"Set loudness range target.\nRange is 1.0 - 20.0. Default value is 7.0.\n\n"},{"names":["TP","tp"],"info":"Set maximum true peak.\nRange is -9.0 - +0.0. 
Default value is -2.0.\n\n"},{"names":["measured_I","measured_i"],"info":"Measured IL of input file.\nRange is -99.0 - +0.0.\n\n"},{"names":["measured_LRA","measured_lra"],"info":"Measured LRA of input file.\nRange is 0.0 - 99.0.\n\n"},{"names":["measured_TP","measured_tp"],"info":"Measured true peak of input file.\nRange is -99.0 - +99.0.\n\n"},{"names":["measured_thresh"],"info":"Measured threshold of input file.\nRange is -99.0 - +0.0.\n\n"},{"names":["offset"],"info":"Set offset gain. Gain is applied before the true-peak limiter.\nRange is -99.0 - +99.0. Default is +0.0.\n\n"},{"names":["linear"],"info":"Normalize linearly if possible.\nmeasured_I, measured_LRA, measured_TP, and measured_thresh must also\nbe specified in order to use this mode.\nOptions are true or false. Default is true.\n\n"},{"names":["dual_mono"],"info":"Treat mono input files as \"dual-mono\". If a mono file is intended for playback\non a stereo system, its EBU R128 measurement will be perceptually incorrect.\nIf set to @code{true}, this option will compensate for this effect.\nMulti-channel input files are not affected by this option.\nOptions are true or false. Default is false.\n\n"},{"names":["print_format"],"info":"Set print format for stats. Options are summary, json, or none.\nDefault value is none.\n\n"}]},{"filtergroup":["lowpass"],"info":"\nApply a low-pass filter with 3dB point frequency.\nThe filter can be either single-pole or double-pole (the default).\nThe filter rolls off at 6dB per pole per octave (20dB per pole per decade).\n\n","options":[{"names":["frequency","f"],"info":"Set frequency in Hz. Default is 500.\n\n"},{"names":["poles","p"],"info":"Set number of poles. 
Default is 2.\n\n"},{"names":["width_type","t"],"info":"Set method to specify band-width of filter.\n@item h\nHz\n@item q\nQ-Factor\n@item o\noctave\n@item s\nslope\n@item k\nkHz\n\n"},{"names":["width","w"],"info":"Specify the band-width of a filter in width_type units.\nApplies only to double-pole filter.\nThe default is 0.707q and gives a Butterworth response.\n\n"},{"names":["mix","m"],"info":"How much to use filtered signal in output. Default is 1.\nRange is between 0 and 1.\n\n"},{"names":["channels","c"],"info":"Specify which channels to filter, by default all available are filtered.\n\n"},{"names":["normalize","n"],"info":"Normalize biquad coefficients, by default is disabled.\nEnabling it will normalize magnitude response at DC to 0dB.\n\n","examples":"@subsection Examples\n@itemize\n@item\nLowpass only the LFE channel; if LFE is not present it does nothing:\n@example\nlowpass=c=LFE\n@end example\n@end itemize\n\n@subsection Commands\n\nThis filter supports the following commands:\n@table @option\n@item frequency, f\nChange lowpass frequency.\nSyntax for the command is : \"@var{frequency}\"\n\n@item width_type, t\nChange lowpass width_type.\nSyntax for the command is : \"@var{width_type}\"\n\n@item width, w\nChange lowpass width.\nSyntax for the command is : \"@var{width}\"\n\n@item mix, m\nChange lowpass mix.\nSyntax for the command is : \"@var{mix}\"\n@end table\n\n"}]},{"filtergroup":["lv2"],"info":"\nLoad an LV2 (LADSPA Version 2) plugin.\n\nTo enable compilation of this filter you need to configure FFmpeg with\n@code{--enable-lv2}.\n\n","options":[{"names":["plugin","p"],"info":"Specifies the plugin URI. 
You may need to escape ':'.\n\n"},{"names":["controls","c"],"info":"Set the '|' separated list of controls which are zero or more floating point\nvalues that determine the behavior of the loaded plugin (for example delay,\nthreshold or gain).\nIf @option{controls} is set to @code{help}, all available controls and\ntheir valid ranges are printed.\n\n"},{"names":["sample_rate","s"],"info":"Specify the sample rate, default is 44100. Only used if the plugin has\nzero inputs.\n\n"},{"names":["nb_samples","n"],"info":"Set the number of samples per channel in each output frame, default\nis 1024. Only used if the plugin has zero inputs.\n\n"},{"names":["duration","d"],"info":"Set the minimum duration of the sourced audio. See\n@ref{time duration syntax,,the Time duration section in the ffmpeg-utils(1) manual,ffmpeg-utils}\nfor the accepted syntax.\nNote that the resulting duration may be greater than the specified duration,\nas the generated audio is always cut at the end of a complete frame.\nIf not specified, or the expressed duration is negative, the audio is\nsupposed to be generated forever.\nOnly used if the plugin has zero inputs.\n\n","examples":"@subsection Examples\n\n@itemize\n@item\nApply bass enhancer plugin from Calf:\n@example\nlv2=p=http\\\\\\\\://calf.sourceforge.net/plugins/BassEnhancer:c=amount=2\n@end example\n\n@item\nApply vinyl plugin from Calf:\n@example\nlv2=p=http\\\\\\\\://calf.sourceforge.net/plugins/Vinyl:c=drone=0.2|aging=0.5\n@end example\n\n@item\nApply bit crusher plugin from ArtyFX:\n@example\nlv2=p=http\\\\\\\\://www.openavproductions.com/artyfx#bitta:c=crush=0.3\n@end example\n@end itemize\n\n"}]},{"filtergroup":["mcompand"],"info":"Multiband compress or expand the audio's dynamic range.\n\nThe input audio is divided into bands using 4th order Linkwitz-Riley IIRs.\nThis is akin to the crossover of a loudspeaker, and results in flat frequency\nresponse in the absence of compander action.\n\nIt accepts the following 
parameters:\n\n","options":[{"names":["args"],"info":"This option syntax is:\nattack,decay,[attack,decay..] soft-knee points crossover_frequency [delay [initial_volume [gain]]] | attack,decay ...\nFor explanation of each item refer to compand filter documentation.\n\n@anchor{pan}\n"}]},{"filtergroup":["pan"],"info":"\nMix channels with specific gain levels. The filter accepts the output\nchannel layout followed by a set of channels definitions.\n\nThis filter is also designed to efficiently remap the channels of an audio\nstream.\n\nThe filter accepts parameters of the form:\n\"@var{l}|@var{outdef}|@var{outdef}|...\"\n\n","options":[{"names":["l"],"info":"output channel layout or number of channels\n\n"},{"names":["outdef"],"info":"output channel specification, of the form:\n\"@var{out_name}=[@var{gain}*]@var{in_name}[(+-)[@var{gain}*]@var{in_name}...]\"\n\n"},{"names":["out_name"],"info":"output channel to define, either a channel name (FL, FR, etc.) or a channel\nnumber (c0, c1, etc.)\n\n"},{"names":["gain"],"info":"multiplicative coefficient for the channel, 1 leaving the volume unchanged\n\n"},{"names":["in_name"],"info":"input channel to use, see out_name for details; it is not possible to mix\nnamed and numbered input channels\n\nIf the `=' in a channel specification is replaced by `<', then the gains for\nthat specification will be renormalized so that the total is 1, thus\navoiding clipping noise.\n\n@subsection Mixing examples\n\nFor example, if you want to down-mix from stereo to mono, but with a bigger\nfactor for the left channel:\n@example\npan=1c|c0=0.9*c0+0.1*c1\n@end example\n\nA customized down-mix to stereo that works automatically for 3-, 4-, 5- and\n7-channels surround:\n@example\npan=stereo| FL < FL + 0.5*FC + 0.6*BL + 0.6*SL | FR < FR + 0.5*FC + 0.6*BR + 0.6*SR\n@end example\n\nNote that @command{ffmpeg} integrates a default down-mix (and up-mix) system\nthat should be preferred (see \"-ac\" option) unless you have very 
specific\nneeds.\n\n@subsection Remapping examples\n\nThe channel remapping will be effective if, and only if:\n\n@itemize\n@item\ngain coefficients are zeroes or ones\n@item\nonly one input per channel output\n@end itemize\n\nIf all these conditions are satisfied, the filter will notify the user (\"Pure\nchannel mapping detected\"), and use an optimized and lossless method to do the\nremapping.\n\nFor example, if you have a 5.1 source and want a stereo audio stream by\ndropping the extra channels:\n@example\npan=\"stereo| c0=FL | c1=FR\"\n@end example\n\nGiven the same source, you can also switch front left and front right channels\nand keep the input channel layout:\n@example\npan=\"5.1| c0=c1 | c1=c0 | c2=c2 | c3=c3 | c4=c4 | c5=c5\"\n@end example\n\nIf the input is a stereo audio stream, you can mute the front left channel (and\nstill keep the stereo channel layout) with:\n@example\npan=\"stereo|c1=c1\"\n@end example\n\nStill with a stereo audio stream input, you can copy the right channel in both\nfront left and right:\n@example\npan=\"stereo| c0=FR | c1=FR\"\n@end example\n\n"}]},{"filtergroup":["replaygain"],"info":"\nReplayGain scanner filter. This filter takes an audio stream as an input and\noutputs it unchanged.\nAt the end of filtering it displays @code{track_gain} and @code{track_peak}.\n\n","options":[]},{"filtergroup":["resample"],"info":"\nConvert the audio sample format, sample rate and channel layout. 
It is\nnot meant to be used directly.\n\n","options":[]},{"filtergroup":["rubberband"],"info":"Apply time-stretching and pitch-shifting with librubberband.\n\nTo enable compilation of this filter, you need to configure FFmpeg with\n@code{--enable-librubberband}.\n\n","options":[{"names":["tempo"],"info":"Set tempo scale factor.\n\n"},{"names":["pitch"],"info":"Set pitch scale factor.\n\n"},{"names":["transients"],"info":"Set transients detector.\nPossible values are:\n@item crisp\n@item mixed\n@item smooth\n\n"},{"names":["detector"],"info":"Set detector.\nPossible values are:\n@item compound\n@item percussive\n@item soft\n\n"},{"names":["phase"],"info":"Set phase.\nPossible values are:\n@item laminar\n@item independent\n\n"},{"names":["window"],"info":"Set processing window size.\nPossible values are:\n@item standard\n@item short\n@item long\n\n"},{"names":["smoothing"],"info":"Set smoothing.\nPossible values are:\n@item off\n@item on\n\n"},{"names":["formant"],"info":"Enable formant preservation when pitch shifting.\nPossible values are:\n@item shifted\n@item preserved\n\n"},{"names":["pitchq"],"info":"Set pitch quality.\nPossible values are:\n@item quality\n@item speed\n@item consistency\n\n"},{"names":["channels"],"info":"Set channels.\nPossible values are:\n@item apart\n@item together\n\n@subsection Commands\n\nThis filter supports the following commands:\n@item tempo\nChange filter tempo scale factor.\nSyntax for the command is : \"@var{tempo}\"\n\n@item pitch\nChange filter pitch scale factor.\nSyntax for the command is : \"@var{pitch}\"\n\n"}]},{"filtergroup":["sidechaincompress"],"info":"\nThis filter acts like a normal compressor but has the ability to compress the\ndetected signal using a second input signal.\nIt needs two input streams and returns one output stream.\nThe first input stream will be processed depending on the second stream's signal.\nThe filtered signal can then be filtered with other filters in later stages of\nprocessing. 
See @ref{pan} and @ref{amerge} filters.\n\n","options":[{"names":["level_in"],"info":"Set input gain. Default is 1. Range is between 0.015625 and 64.\n\n"},{"names":["mode"],"info":"Set mode of compressor operation. Can be @code{upward} or @code{downward}.\nDefault is @code{downward}.\n\n"},{"names":["threshold"],"info":"If a signal of the second stream rises above this level it will affect the gain\nreduction of the first stream.\nBy default is 0.125. Range is between 0.00097563 and 1.\n\n"},{"names":["ratio"],"info":"Set a ratio about which the signal is reduced. 1:2 means that if the level\nrose 4dB above the threshold, it will be only 2dB above after the reduction.\nDefault is 2. Range is between 1 and 20.\n\n"},{"names":["attack"],"info":"Amount of milliseconds the signal has to rise above the threshold before gain\nreduction starts. Default is 20. Range is between 0.01 and 2000.\n\n"},{"names":["release"],"info":"Amount of milliseconds the signal has to fall below the threshold before\nreduction is decreased again. Default is 250. Range is between 0.01 and 9000.\n\n"},{"names":["makeup"],"info":"Set the amount by which the signal will be amplified after processing.\nDefault is 1. Range is from 1 to 64.\n\n"},{"names":["knee"],"info":"Curve the sharp knee around the threshold to enter gain reduction more softly.\nDefault is 2.82843. Range is between 1 and 8.\n\n"},{"names":["link"],"info":"Choose if the @code{average} level between all channels of the side-chain stream\nor the louder (@code{maximum}) channel of the side-chain stream affects the\nreduction. Default is @code{average}.\n\n"},{"names":["detection"],"info":"Should the exact signal be taken in case of @code{peak} or an RMS one in case\nof @code{rms}. Default is @code{rms} which is generally smoother.\n\n"},{"names":["level_sc"],"info":"Set sidechain gain. Default is 1. Range is between 0.015625 and 64.\n\n"},{"names":["mix"],"info":"How much to use compressed signal in output. 
Default is 1.\nRange is between 0 and 1.\n\n","examples":"@subsection Examples\n\n@itemize\n@item\nFull ffmpeg example taking 2 audio inputs; the 1st input is compressed\ndepending on the signal of the 2nd input, and the compressed signal is later\nmerged with the 2nd input:\n@example\nffmpeg -i main.flac -i sidechain.flac -filter_complex \"[1:a]asplit=2[sc][mix];[0:a][sc]sidechaincompress[compr];[compr][mix]amerge\"\n@end example\n@end itemize\n\n"}]},{"filtergroup":["sidechaingate"],"info":"\nA sidechain gate acts like a normal (wideband) gate but has the ability to\nfilter the detected signal before sending it to the gain reduction stage.\nNormally a gate uses the full range signal to detect a level above the\nthreshold.\nFor example: if you cut all lower frequencies from your sidechain signal\nthe gate will decrease the volume of your track only if not enough highs\nappear. With this technique you are able to reduce the resonance of a\nnatural drum or remove \"rumbling\" of muted strokes from a heavily distorted\nguitar.\nIt needs two input streams and returns one output stream.\nThe first input stream will be processed depending on the second stream's signal.\n\n","options":[{"names":["level_in"],"info":"Set input level before filtering.\nDefault is 1. Allowed range is from 0.015625 to 64.\n\n"},{"names":["mode"],"info":"Set the mode of operation. Can be @code{upward} or @code{downward}.\nDefault is @code{downward}. If set to @code{upward} mode, higher parts of the signal\nwill be amplified, expanding the dynamic range in the upward direction.\nOtherwise, in case of @code{downward}, lower parts of the signal will be reduced.\n\n"},{"names":["range"],"info":"Set the level of gain reduction when the signal is below the threshold.\nDefault is 0.06125. Allowed range is from 0 to 1.\nSetting this to 0 disables reduction, and then the filter behaves like an expander.\n\n"},{"names":["threshold"],"info":"If a signal rises above this level the gain reduction is released.\nDefault is 0.125. 
Allowed range is from 0 to 1.\n\n"},{"names":["ratio"],"info":"Set the ratio by which the signal is reduced.\nDefault is 2. Allowed range is from 1 to 9000.\n\n"},{"names":["attack"],"info":"Amount of milliseconds the signal has to rise above the threshold before gain\nreduction stops.\nDefault is 20 milliseconds. Allowed range is from 0.01 to 9000.\n\n"},{"names":["release"],"info":"Amount of milliseconds the signal has to fall below the threshold before the\nreduction is increased again. Default is 250 milliseconds.\nAllowed range is from 0.01 to 9000.\n\n"},{"names":["makeup"],"info":"Set the amount of amplification of the signal after processing.\nDefault is 1. Allowed range is from 1 to 64.\n\n"},{"names":["knee"],"info":"Curve the sharp knee around the threshold to enter gain reduction more softly.\nDefault is 2.828427125. Allowed range is from 1 to 8.\n\n"},{"names":["detection"],"info":"Choose whether the exact signal or an RMS-like one should be taken for detection.\nDefault is rms. Can be peak or rms.\n\n"},{"names":["link"],"info":"Choose whether the average level between all channels or the louder channel\naffects the reduction.\nDefault is average. Can be average or maximum.\n\n"},{"names":["level_sc"],"info":"Set sidechain gain. Default is 1. Range is from 0.015625 to 64.\n\n"}]},{"filtergroup":["silencedetect"],"info":"\nDetect silence in an audio stream.\n\nThis filter logs a message when it detects that the input audio volume is less\nthan or equal to a noise tolerance value for a duration greater than or equal to\nthe minimum detected noise duration.\n\nThe printed times and duration are expressed in seconds. 
The\n@code{lavfi.silence_start} or @code{lavfi.silence_start.X} metadata key\nis set on the first frame whose timestamp equals or exceeds the detection\nduration and it contains the timestamp of the first frame of the silence.\n\nThe @code{lavfi.silence_duration} or @code{lavfi.silence_duration.X}\nand @code{lavfi.silence_end} or @code{lavfi.silence_end.X} metadata\nkeys are set on the first frame after the silence. If @option{mono} is\nenabled, and each channel is evaluated separately, the @code{.X}\nsuffixed keys are used, and @code{X} corresponds to the channel number.\n\n","options":[{"names":["noise","n"],"info":"Set noise tolerance. Can be specified in dB (in case \"dB\" is appended to the\nspecified value) or amplitude ratio. Default is -60dB, or 0.001.\n\n"},{"names":["duration","d"],"info":"Set silence duration until notification (default is 2 seconds). See\n@ref{time duration syntax,,the Time duration section in the ffmpeg-utils(1) manual,ffmpeg-utils}\nfor the accepted syntax.\n\n"},{"names":["mono","m"],"info":"Process each channel separately, instead of combined. By default is disabled.\n\n","examples":"@subsection Examples\n\n@itemize\n@item\nDetect 5 seconds of silence with -50dB noise tolerance:\n@example\nsilencedetect=n=-50dB:d=5\n@end example\n\n@item\nComplete example with @command{ffmpeg} to detect silence with 0.0001 noise\ntolerance in @file{silence.mp3}:\n@example\nffmpeg -i silence.mp3 -af silencedetect=noise=0.0001 -f null -\n@end example\n@end itemize\n\n"}]},{"filtergroup":["silenceremove"],"info":"\nRemove silence from the beginning, middle or end of the audio.\n\n","options":[{"names":["start_periods"],"info":"This value is used to indicate if audio should be trimmed at beginning of\nthe audio. A value of zero indicates no silence should be trimmed from the\nbeginning. When specifying a non-zero value, it trims audio up until it\nfinds non-silence. 
Normally, when trimming silence from the beginning of audio,\n@var{start_periods} will be @code{1} but it can be increased to higher\nvalues to trim all audio up to a specific count of non-silence periods.\nDefault value is @code{0}.\n\n"},{"names":["start_duration"],"info":"Specify the amount of time that non-silence must be detected before it stops\ntrimming audio. By increasing the duration, bursts of noise can be treated\nas silence and trimmed off. Default value is @code{0}.\n\n"},{"names":["start_threshold"],"info":"This indicates what sample value should be treated as silence. For digital\naudio, a value of @code{0} may be fine, but for audio recorded from analog,\nyou may wish to increase the value to account for background noise.\nCan be specified in dB (in case \"dB\" is appended to the specified value)\nor amplitude ratio. Default value is @code{0}.\n\n"},{"names":["start_silence"],"info":"Specify the max duration of silence at the beginning that will be kept after\ntrimming. Default is 0, which is equal to trimming all samples detected\nas silence.\n\n"},{"names":["start_mode"],"info":"Specify the mode for detecting the end of silence at the start of multi-channel\naudio. Can be @var{any} or @var{all}. Default is @var{any}.\nWith @var{any}, trimming of silence stops as soon as any sample is detected as\nnon-silence.\nWith @var{all}, trimming of silence stops only when all channels are detected as\nnon-silence.\n\n"},{"names":["stop_periods"],"info":"Set the count for trimming silence from the end of audio.\nTo remove silence from the middle of a file, specify a @var{stop_periods}\nthat is negative. 
This value is then treated as a positive value and is\nused to indicate the effect should restart processing as specified by\n@var{start_periods}, making it suitable for removing periods of silence\nin the middle of the audio.\nDefault value is @code{0}.\n\n"},{"names":["stop_duration"],"info":"Specify a duration of silence that must exist before audio is not copied any\nmore. By specifying a higher duration, silence that is wanted can be left in\nthe audio.\nDefault value is @code{0}.\n\n"},{"names":["stop_threshold"],"info":"This is the same as @option{start_threshold} but for trimming silence from\nthe end of audio.\nCan be specified in dB (in case \"dB\" is appended to the specified value)\nor amplitude ratio. Default value is @code{0}.\n\n"},{"names":["stop_silence"],"info":"Specify the max duration of silence at the end that will be kept after\ntrimming. Default is 0, which is equal to trimming all samples detected\nas silence.\n\n"},{"names":["stop_mode"],"info":"Specify the mode for detecting the start of silence at the end of multi-channel\naudio. Can be @var{any} or @var{all}. Default is @var{any}.\nWith @var{any}, trimming of silence stops as soon as any sample is detected as\nnon-silence.\nWith @var{all}, trimming of silence stops only when all channels are detected as\nnon-silence.\n\n"},{"names":["detection"],"info":"Set how silence is detected. Can be @code{rms} or @code{peak}. The latter is faster\nand works better with digital silence, which is exactly 0.\nDefault value is @code{rms}.\n\n"},{"names":["window"],"info":"Set the duration in seconds used to calculate the size of the window, in number\nof samples, for detecting silence.\nDefault value is @code{0.02}. 
Allowed range is from @code{0} to @code{10}.\n\n","examples":"@subsection Examples\n\n@itemize\n@item\nThe following example shows how this filter can be used to start a recording\nthat does not contain the delay at the start which usually occurs between\npressing the record button and the start of the performance:\n@example\nsilenceremove=start_periods=1:start_duration=5:start_threshold=0.02\n@end example\n\n@item\nTrim all silence encountered from beginning to end where there is more than 1\nsecond of silence in audio:\n@example\nsilenceremove=stop_periods=-1:stop_duration=1:stop_threshold=-90dB\n@end example\n\n@item\nTrim all digital silence samples, using peak detection, from beginning to end\nwhere there is more than 0 samples of digital silence in audio and digital\nsilence is detected in all channels at same positions in stream:\n@example\nsilenceremove=window=0:detection=peak:stop_mode=all:start_mode=all:stop_periods=-1:stop_threshold=0\n@end example\n@end itemize\n\n"}]},{"filtergroup":["sofalizer"],"info":"\nSOFAlizer uses head-related transfer functions (HRTFs) to create virtual\nloudspeakers around the user for binaural listening via headphones (audio\nformats up to 9 channels supported).\nThe HRTFs are stored in SOFA files (see @url{http://www.sofacoustics.org/} for a database).\nSOFAlizer is developed at the Acoustics Research Institute (ARI) of the\nAustrian Academy of Sciences.\n\nTo enable compilation of this filter you need to configure FFmpeg with\n@code{--enable-libmysofa}.\n\n","options":[{"names":["sofa"],"info":"Set the SOFA file used for rendering.\n\n"},{"names":["gain"],"info":"Set gain applied to audio. Value is in dB. Default is 0.\n\n"},{"names":["rotation"],"info":"Set rotation of virtual loudspeakers in deg. Default is 0.\n\n"},{"names":["elevation"],"info":"Set elevation of virtual speakers in deg. Default is 0.\n\n"},{"names":["radius"],"info":"Set distance in meters between loudspeakers and the listener with near-field\nHRTFs. 
Default is 1.\n\n"},{"names":["type"],"info":"Set processing type. Can be @var{time} or @var{freq}. @var{time} processes\naudio in the time domain, which is slow.\n@var{freq} processes audio in the frequency domain, which is fast.\nDefault is @var{freq}.\n\n"},{"names":["speakers"],"info":"Set custom positions of virtual loudspeakers. Syntax for this option is:\n<CH> <AZIM> <ELEV>[|<CH> <AZIM> <ELEV>|...].\nEach virtual loudspeaker is described with a short channel name followed by\nazimuth and elevation in degrees.\nEach virtual loudspeaker description is separated by '|'.\nFor example, to override front left and front right channel positions use:\n'speakers=FL 45 15|FR 345 15'.\nDescriptions with unrecognised channel names are ignored.\n\n"},{"names":["lfegain"],"info":"Set custom gain for LFE channels. Value is in dB. Default is 0.\n\n"},{"names":["framesize"],"info":"Set custom frame size in number of samples. Default is 1024.\nAllowed range is from 1024 to 96000. Only used if option @samp{type}\nis set to @var{freq}.\n\n"},{"names":["normalize"],"info":"Whether all IRs should be normalized upon importing the SOFA file.\nEnabled by default.\n\n"},{"names":["interpolate"],"info":"Whether nearest IRs should be interpolated with neighbor IRs if the exact\nposition does not match. Disabled by default.\n\n"},{"names":["minphase"],"info":"Minphase all IRs upon loading the SOFA file. Disabled by default.\n\n"},{"names":["anglestep"],"info":"Set neighbor search angle step. Only used if option @var{interpolate} is enabled.\n\n"},{"names":["radstep"],"info":"Set neighbor search radius step. 
Only used if option @var{interpolate} is enabled.\n\n","examples":"@subsection Examples\n\n@itemize\n@item\nUsing ClubFritz6 sofa file:\n@example\nsofalizer=sofa=/path/to/ClubFritz6.sofa:type=freq:radius=1\n@end example\n\n@item\nUsing ClubFritz12 sofa file and bigger radius with small rotation:\n@example\nsofalizer=sofa=/path/to/ClubFritz12.sofa:type=freq:radius=2:rotation=5\n@end example\n\n@item\nSimilar to the above but with custom speaker positions for front left, front right, back left and back right\nand also with custom gain:\n@example\n\"sofalizer=sofa=/path/to/ClubFritz6.sofa:type=freq:radius=2:speakers=FL 45|FR 315|BL 135|BR 225:gain=28\"\n@end example\n@end itemize\n\n"}]},{"filtergroup":["stereotools"],"info":"\nThis filter has some handy utilities to manage stereo signals, for converting\nM/S stereo recordings to an L/R signal while having control over the parameters,\nor for spreading the stereo image of a master track.\n\n","options":[{"names":["level_in"],"info":"Set input level before filtering for both channels. Default is 1.\nAllowed range is from 0.015625 to 64.\n\n"},{"names":["level_out"],"info":"Set output level after filtering for both channels. Default is 1.\nAllowed range is from 0.015625 to 64.\n\n"},{"names":["balance_in"],"info":"Set input balance between both channels. Default is 0.\nAllowed range is from -1 to 1.\n\n"},{"names":["balance_out"],"info":"Set output balance between both channels. Default is 0.\nAllowed range is from -1 to 1.\n\n"},{"names":["softclip"],"info":"Enable softclipping. Results in analog distortion instead of harsh digital 0dB\nclipping. Disabled by default.\n\n"},{"names":["mutel"],"info":"Mute the left channel. Disabled by default.\n\n"},{"names":["muter"],"info":"Mute the right channel. Disabled by default.\n\n"},{"names":["phasel"],"info":"Change the phase of the left channel. Disabled by default.\n\n"},{"names":["phaser"],"info":"Change the phase of the right channel. 
Disabled by default.\n\n"},{"names":["mode"],"info":"Set stereo mode. Available values are:\n\n@item lr>lr\nLeft/Right to Left/Right; this is the default.\n\n@item lr>ms\nLeft/Right to Mid/Side.\n\n@item ms>lr\nMid/Side to Left/Right.\n\n@item lr>ll\nLeft/Right to Left/Left.\n\n@item lr>rr\nLeft/Right to Right/Right.\n\n@item lr>l+r\nLeft/Right to Left + Right.\n\n@item lr>rl\nLeft/Right to Right/Left.\n\n@item ms>ll\nMid/Side to Left/Left.\n\n@item ms>rr\nMid/Side to Right/Right.\n\n"},{"names":["slev"],"info":"Set level of side signal. Default is 1.\nAllowed range is from 0.015625 to 64.\n\n"},{"names":["sbal"],"info":"Set balance of side signal. Default is 0.\nAllowed range is from -1 to 1.\n\n"},{"names":["mlev"],"info":"Set level of the middle signal. Default is 1.\nAllowed range is from 0.015625 to 64.\n\n"},{"names":["mpan"],"info":"Set middle signal pan. Default is 0. Allowed range is from -1 to 1.\n\n"},{"names":["base"],"info":"Set stereo base between mono and inverted channels. Default is 0.\nAllowed range is from -1 to 1.\n\n"},{"names":["delay"],"info":"Set how much to delay the left channel from the right one and vice versa,\nin milliseconds. Default is 0. Allowed range is from -20 to 20.\n\n"},{"names":["sclevel"],"info":"Set S/C level. Default is 1. Allowed range is from 1 to 100.\n\n"},{"names":["phase"],"info":"Set the stereo phase in degrees. Default is 0. Allowed range is from 0 to 360.\n\n"},{"names":["bmode_in","bmode_out"],"info":"Set balance mode for the balance_in/balance_out options.\n\nCan be one of the following:\n\n@item balance\nClassic balance mode. 
Attenuate one channel at a time.\nGain is raised up to 1.\n\n@item amplitude\nSimilar to the classic mode above, but gain is raised up to 2.\n\n@item power\nEqual power distribution, from -6dB to +6dB range.\n\n","examples":"@subsection Examples\n\n@itemize\n@item\nApply a karaoke-like effect:\n@example\nstereotools=mlev=0.015625\n@end example\n\n@item\nConvert M/S signal to L/R:\n@example\n\"stereotools=mode=ms>lr\"\n@end example\n@end itemize\n\n"}]},{"filtergroup":["stereowiden"],"info":"\nThis filter enhances the stereo effect by suppressing the signal common to both\nchannels and by delaying the signal of the left channel into the right and vice\nversa, thereby widening the stereo effect.\n\n","options":[{"names":["delay"],"info":"Time in milliseconds of the delay of the left signal into the right and vice versa.\nDefault is 20 milliseconds.\n\n"},{"names":["feedback"],"info":"Amount of gain in the delayed signal into the right channel and vice versa. Gives\na delay effect of the left signal in the right output and vice versa, which gives\na widening effect. Default is 0.3.\n\n"},{"names":["crossfeed"],"info":"Cross feed of left into right with inverted phase. This helps in suppressing\nthe mono component. If the value is 1 it will cancel all the signal common to both\nchannels. Default is 0.3.\n\n"},{"names":["drymix"],"info":"Set the level of the input signal of the original channel. 
Default is 0.8.\n\n"}]},{"filtergroup":["superequalizer"],"info":"Apply an 18-band equalizer.\n\n","options":[{"names":["1b"],"info":"Set 65Hz band gain.\n"},{"names":["2b"],"info":"Set 92Hz band gain.\n"},{"names":["3b"],"info":"Set 131Hz band gain.\n"},{"names":["4b"],"info":"Set 185Hz band gain.\n"},{"names":["5b"],"info":"Set 262Hz band gain.\n"},{"names":["6b"],"info":"Set 370Hz band gain.\n"},{"names":["7b"],"info":"Set 523Hz band gain.\n"},{"names":["8b"],"info":"Set 740Hz band gain.\n"},{"names":["9b"],"info":"Set 1047Hz band gain.\n"},{"names":["10b"],"info":"Set 1480Hz band gain.\n"},{"names":["11b"],"info":"Set 2093Hz band gain.\n"},{"names":["12b"],"info":"Set 2960Hz band gain.\n"},{"names":["13b"],"info":"Set 4186Hz band gain.\n"},{"names":["14b"],"info":"Set 5920Hz band gain.\n"},{"names":["15b"],"info":"Set 8372Hz band gain.\n"},{"names":["16b"],"info":"Set 11840Hz band gain.\n"},{"names":["17b"],"info":"Set 16744Hz band gain.\n"},{"names":["18b"],"info":"Set 20000Hz band gain.\n\n"}]},{"filtergroup":["surround"],"info":"Apply audio surround upmix filter.\n\nThis filter allows producing multichannel output from an audio stream.\n\n","options":[{"names":["chl_out"],"info":"Set output channel layout. By default, this is @var{5.1}.\n\nSee @ref{channel layout syntax,,the Channel Layout section in the ffmpeg-utils(1) manual,ffmpeg-utils}\nfor the required syntax.\n\n"},{"names":["chl_in"],"info":"Set input channel layout. By default, this is @var{stereo}.\n\nSee @ref{channel layout syntax,,the Channel Layout section in the ffmpeg-utils(1) manual,ffmpeg-utils}\nfor the required syntax.\n\n"},{"names":["level_in"],"info":"Set input volume level. By default, this is @var{1}.\n\n"},{"names":["level_out"],"info":"Set output volume level. By default, this is @var{1}.\n\n"},{"names":["lfe"],"info":"Enable LFE channel output if output channel layout has it. By default, this is enabled.\n\n"},{"names":["lfe_low"],"info":"Set LFE low cutoff frequency. 
By default, this is @var{128} Hz.\n\n"},{"names":["lfe_high"],"info":"Set LFE high cutoff frequency. By default, this is @var{256} Hz.\n\n"},{"names":["lfe_mode"],"info":"Set LFE mode; can be @var{add} or @var{sub}. Default is @var{add}.\nIn @var{add} mode, the LFE channel is created from the input audio and added to the output.\nIn @var{sub} mode, the LFE channel is created from the input audio and added to the\noutput, but the output LFE channel is also subtracted from all non-LFE output channels.\n\n"},{"names":["angle"],"info":"Set the angle of the stereo surround transform. Allowed range is from @var{0} to @var{360}.\nDefault is @var{90}.\n\n"},{"names":["fc_in"],"info":"Set front center input volume. By default, this is @var{1}.\n\n"},{"names":["fc_out"],"info":"Set front center output volume. By default, this is @var{1}.\n\n"},{"names":["fl_in"],"info":"Set front left input volume. By default, this is @var{1}.\n\n"},{"names":["fl_out"],"info":"Set front left output volume. By default, this is @var{1}.\n\n"},{"names":["fr_in"],"info":"Set front right input volume. By default, this is @var{1}.\n\n"},{"names":["fr_out"],"info":"Set front right output volume. By default, this is @var{1}.\n\n"},{"names":["sl_in"],"info":"Set side left input volume. By default, this is @var{1}.\n\n"},{"names":["sl_out"],"info":"Set side left output volume. By default, this is @var{1}.\n\n"},{"names":["sr_in"],"info":"Set side right input volume. By default, this is @var{1}.\n\n"},{"names":["sr_out"],"info":"Set side right output volume. By default, this is @var{1}.\n\n"},{"names":["bl_in"],"info":"Set back left input volume. By default, this is @var{1}.\n\n"},{"names":["bl_out"],"info":"Set back left output volume. By default, this is @var{1}.\n\n"},{"names":["br_in"],"info":"Set back right input volume. By default, this is @var{1}.\n\n"},{"names":["br_out"],"info":"Set back right output volume. By default, this is @var{1}.\n\n"},{"names":["bc_in"],"info":"Set back center input volume. 
By default, this is @var{1}.\n\n"},{"names":["bc_out"],"info":"Set back center output volume. By default, this is @var{1}.\n\n"},{"names":["lfe_in"],"info":"Set LFE input volume. By default, this is @var{1}.\n\n"},{"names":["lfe_out"],"info":"Set LFE output volume. By default, this is @var{1}.\n\n"},{"names":["allx"],"info":"Set spread usage of stereo image across X axis for all channels.\n\n"},{"names":["ally"],"info":"Set spread usage of stereo image across Y axis for all channels.\n\n"},{"names":["fcx","flx","frx","blx","brx","slx","srx","bcx"],"info":"Set spread usage of stereo image across X axis for each channel.\n\n"},{"names":["fcy","fly","fry","bly","bry","sly","sry","bcy"],"info":"Set spread usage of stereo image across Y axis for each channel.\n\n"},{"names":["win_size"],"info":"Set window size. Allowed range is from @var{1024} to @var{65536}. Default size is @var{4096}.\n\n"},{"names":["win_func"],"info":"Set window function.\n\nIt accepts the following values:\n@item rect\n@item bartlett\n@item hann, hanning\n@item hamming\n@item blackman\n@item welch\n@item flattop\n@item bharris\n@item bnuttall\n@item bhann\n@item sine\n@item nuttall\n@item lanczos\n@item gauss\n@item tukey\n@item dolph\n@item cauchy\n@item parzen\n@item poisson\n@item bohman\nDefault is @code{hann}.\n\n"},{"names":["overlap"],"info":"Set window overlap. If set to 1, the recommended overlap for selected\nwindow function will be picked. Default is @code{0.5}.\n\n"}]},{"filtergroup":["treble","highshelf"],"info":"\nBoost or cut treble (upper) frequencies of the audio using a two-pole\nshelving filter with a response similar to that of a standard\nhi-fi's tone-controls. This is also known as shelving equalisation (EQ).\n\n","options":[{"names":["gain","g"],"info":"Give the gain at whichever is the lower of ~22 kHz and the\nNyquist frequency. Its useful range is about -20 (for a large cut)\nto +20 (for a large boost). 
Beware of clipping when using a positive gain.\n\n"},{"names":["frequency","f"],"info":"Set the filter's central frequency, which can be used\nto extend or reduce the frequency range to be boosted or cut.\nThe default value is @code{3000} Hz.\n\n"},{"names":["width_type","t"],"info":"Set the method to specify the bandwidth of the filter.\n@item h\nHz\n@item q\nQ-Factor\n@item o\noctave\n@item s\nslope\n@item k\nkHz\n\n"},{"names":["width","w"],"info":"Determine how steep the filter's shelf transition is.\n\n"},{"names":["mix","m"],"info":"How much of the filtered signal to use in the output. Default is 1.\nRange is between 0 and 1.\n\n"},{"names":["channels","c"],"info":"Specify which channels to filter; by default all available are filtered.\n\n"},{"names":["normalize","n"],"info":"Normalize biquad coefficients; disabled by default.\nEnabling it will normalize the magnitude response at DC to 0dB.\n\n@subsection Commands\n\nThis filter supports the following commands:\n@item frequency, f\nChange treble frequency.\nSyntax for the command is : \"@var{frequency}\"\n\n@item width_type, t\nChange treble width_type.\nSyntax for the command is : \"@var{width_type}\"\n\n@item width, w\nChange treble width.\nSyntax for the command is : \"@var{width}\"\n\n@item gain, g\nChange treble gain.\nSyntax for the command is : \"@var{gain}\"\n\n@item mix, m\nChange treble mix.\nSyntax for the command is : \"@var{mix}\"\n\n"}]},{"filtergroup":["tremolo"],"info":"\nSinusoidal amplitude modulation.\n\n","options":[{"names":["f"],"info":"Modulation frequency in Hertz. Modulation frequencies in the subharmonic range\n(20 Hz or lower) will result in a tremolo effect.\nThis filter may also be used as a ring modulator by specifying\na modulation frequency higher than 20 Hz.\nRange is 0.1 - 20000.0. Default value is 5.0 Hz.\n\n"},{"names":["d"],"info":"Depth of modulation as a percentage. 
Range is 0.0 - 1.0.\nDefault value is 0.5.\n\n"}]},{"filtergroup":["vibrato"],"info":"\nSinusoidal phase modulation.\n\n","options":[{"names":["f"],"info":"Modulation frequency in Hertz.\nRange is 0.1 - 20000.0. Default value is 5.0 Hz.\n\n"},{"names":["d"],"info":"Depth of modulation as a percentage. Range is 0.0 - 1.0.\nDefault value is 0.5.\n\n"}]},{"filtergroup":["volume"],"info":"\nAdjust the input audio volume.\n\nIt accepts the following parameters:\n","options":[{"names":["volume"],"info":"Set audio volume expression.\n\nOutput values are clipped to the maximum value.\n\nThe output audio volume is given by the relation:\n@example\n@var{output_volume} = @var{volume} * @var{input_volume}\n@end example\n\nThe default value for @var{volume} is \"1.0\".\n\n"},{"names":["precision"],"info":"This parameter represents the mathematical precision.\n\nIt determines which input sample formats will be allowed, which affects the\nprecision of the volume scaling.\n\n@item fixed\n8-bit fixed-point; this limits input sample format to U8, S16, and S32.\n@item float\n32-bit floating-point; this limits input sample format to FLT. 
(default)\n@item double\n64-bit floating-point; this limits input sample format to DBL.\n\n"},{"names":["replaygain"],"info":"Choose the behaviour on encountering ReplayGain side data in input frames.\n\n@item drop\nRemove ReplayGain side data, ignoring its contents (the default).\n\n@item ignore\nIgnore ReplayGain side data, but leave it in the frame.\n\n@item track\nPrefer the track gain, if present.\n\n@item album\nPrefer the album gain, if present.\n\n"},{"names":["replaygain_preamp"],"info":"Pre-amplification gain in dB to apply to the selected replaygain gain.\n\nDefault value for @var{replaygain_preamp} is 0.0.\n\n"},{"names":["eval"],"info":"Set when the volume expression is evaluated.\n\nIt accepts the following values:\n@item once\nonly evaluate expression once during the filter initialization, or\nwhen the @samp{volume} command is sent\n\n@item frame\nevaluate expression for each incoming frame\n\nDefault value is @samp{once}.\n\nThe volume expression can contain the following parameters.\n\n@item n\nframe number (starting at zero)\n@item nb_channels\nnumber of channels\n@item nb_consumed_samples\nnumber of samples consumed by the filter\n@item nb_samples\nnumber of samples in the current frame\n@item pos\noriginal frame position in the file\n@item pts\nframe PTS\n@item sample_rate\nsample rate\n@item startpts\nPTS at start of stream\n@item startt\ntime at start of stream\n@item t\nframe time\n@item tb\ntimestamp timebase\n@item volume\nlast set volume value\n\nNote that when @option{eval} is set to @samp{once} only the\n@var{sample_rate} and @var{tb} variables are available, all other\nvariables will evaluate to NAN.\n\n@subsection Commands\n\nThis filter supports the following commands:\n@item volume\nModify the volume expression.\nThe command accepts the same syntax of the corresponding option.\n\nIf the specified expression is not valid, it is kept at its current\nvalue.\n@item replaygain_noclip\nPrevent clipping by limiting the gain 
applied.\n\nDefault value for @var{replaygain_noclip} is 1.\n\n\n","examples":"@subsection Examples\n\n@itemize\n@item\nHalve the input audio volume:\n@example\nvolume=volume=0.5\nvolume=volume=1/2\nvolume=volume=-6.0206dB\n@end example\n\nIn all the above examples the named key for @option{volume} can be\nomitted, for example:\n@example\nvolume=0.5\n@end example\n\n@item\nIncrease input audio power by 6 decibels using fixed-point precision:\n@example\nvolume=volume=6dB:precision=fixed\n@end example\n\n@item\nFade volume after time 10 with an annihilation period of 5 seconds:\n@example\nvolume='if(lt(t,10),1,max(1-(t-10)/5,0))':eval=frame\n@end example\n@end itemize\n\n"}]},{"filtergroup":["volumedetect"],"info":"\nDetect the volume of the input audio.\n\nThe filter has no parameters. The input is not modified. Statistics about\nthe volume will be printed in the log when the end of the input stream is reached.\n\nIn particular it will show the mean volume (root mean square), maximum\nvolume (on a per-sample basis), and the beginning of a histogram of the\nregistered volume values (from the maximum value to a cumulated 1/1000 of\nthe samples).\n\nAll volumes are in decibels relative to the maximum PCM value.\n\n@subsection Examples\n\nHere is an excerpt of the output:\n@example\n[Parsed_volumedetect_0 @ 0xa23120] mean_volume: -27 dB\n[Parsed_volumedetect_0 @ 0xa23120] max_volume: -4 dB\n[Parsed_volumedetect_0 @ 0xa23120] histogram_4db: 6\n[Parsed_volumedetect_0 @ 0xa23120] histogram_5db: 62\n[Parsed_volumedetect_0 @ 0xa23120] histogram_6db: 286\n[Parsed_volumedetect_0 @ 0xa23120] histogram_7db: 1042\n[Parsed_volumedetect_0 @ 0xa23120] histogram_8db: 2551\n[Parsed_volumedetect_0 @ 0xa23120] histogram_9db: 4609\n[Parsed_volumedetect_0 @ 0xa23120] histogram_10db: 8409\n@end example\n\nIt means that:\n@itemize\n@item\nThe mean square energy is approximately -27 dB, or 10^-2.7.\n@item\nThe largest sample is at -4 dB, or more precisely between -4 dB and -5 
dB.\n@item\nThere are 6 samples at -4 dB, 62 at -5 dB, 286 at -6 dB, etc.\n@end itemize\n\nIn other words, raising the volume by +4 dB does not cause any clipping,\nraising it by +5 dB causes clipping for 6 samples, etc.\n\n@c man end AUDIO FILTERS\n\n@chapter Audio Sources\n@c man begin AUDIO SOURCES\n\nBelow is a description of the currently available audio sources.\n\n","options":[]},{"filtergroup":["abuffer"],"info":"\nBuffer audio frames, and make them available to the filter chain.\n\nThis source is mainly intended for a programmatic use, in particular\nthrough the interface defined in @file{libavfilter/asrc_abuffer.h}.\n\nIt accepts the following parameters:\n","options":[{"names":["time_base"],"info":"The timebase which will be used for timestamps of submitted frames. It must be\neither a floating-point number or in @var{numerator}/@var{denominator} form.\n\n"},{"names":["sample_rate"],"info":"The sample rate of the incoming audio buffers.\n\n"},{"names":["sample_fmt"],"info":"The sample format of the incoming audio buffers.\nEither a sample format name or its corresponding integer representation from\nthe enum AVSampleFormat in @file{libavutil/samplefmt.h}\n\n"},{"names":["channel_layout"],"info":"The channel layout of the incoming audio buffers.\nEither a channel layout name from channel_layout_map in\n@file{libavutil/channel_layout.c} or its corresponding integer representation\nfrom the AV_CH_LAYOUT_* macros in @file{libavutil/channel_layout.h}\n\n"},{"names":["channels"],"info":"The number of channels of the incoming audio buffers.\nIf both @var{channels} and @var{channel_layout} are specified, then they\nmust be consistent.\n\n\n","examples":"@subsection Examples\n\n@example\nabuffer=sample_rate=44100:sample_fmt=s16p:channel_layout=stereo\n@end example\n\nwill instruct the source to accept planar 16bit signed stereo at 44100Hz.\nSince the sample format with name \"s16p\" corresponds to the number\n6 and the \"stereo\" channel layout corresponds to the 
value 0x3, this is\nequivalent to:\n@example\nabuffer=sample_rate=44100:sample_fmt=6:channel_layout=0x3\n@end example\n\n"}]},{"filtergroup":["aevalsrc"],"info":"\nGenerate an audio signal specified by an expression.\n\nThis source accepts in input one or more expressions (one for each\nchannel), which are evaluated and used to generate a corresponding\naudio signal.\n\nThis source accepts the following options:\n\n","options":[{"names":["exprs"],"info":"Set the '|'-separated expressions list for each separate channel. In case the\n@option{channel_layout} option is not specified, the selected channel layout\ndepends on the number of provided expressions. Otherwise the last\nspecified expression is applied to the remaining output channels.\n\n"},{"names":["channel_layout","c"],"info":"Set the channel layout. The number of channels in the specified layout\nmust be equal to the number of specified expressions.\n\n"},{"names":["duration","d"],"info":"Set the minimum duration of the sourced audio. 
See\n@ref{time duration syntax,,the Time duration section in the ffmpeg-utils(1) manual,ffmpeg-utils}\nfor the accepted syntax.\nNote that the resulting duration may be greater than the specified\nduration, as the generated audio is always cut at the end of a\ncomplete frame.\n\nIf not specified, or the expressed duration is negative, the audio is\nsupposed to be generated forever.\n\n"},{"names":["nb_samples","n"],"info":"Set the number of samples per channel in each output frame;\ndefaults to 1024.\n\n"},{"names":["sample_rate","s"],"info":"Specify the sample rate; defaults to 44100.\n\nEach expression in @var{exprs} can contain the following constants:\n\n@item n\nnumber of the evaluated sample, starting from 0\n\n@item t\ntime of the evaluated sample expressed in seconds, starting from 0\n\n@item s\nsample rate\n\n\n","examples":"@subsection Examples\n\n@itemize\n@item\nGenerate silence:\n@example\naevalsrc=0\n@end example\n\n@item\nGenerate a sine signal with a frequency of 440 Hz and a sample rate of\n8000 Hz:\n@example\naevalsrc=\"sin(440*2*PI*t):s=8000\"\n@end example\n\n@item\nGenerate a two-channel signal, specifying the channel layout (Front\nCenter + Back Center) explicitly:\n@example\naevalsrc=\"sin(420*2*PI*t)|cos(430*2*PI*t):c=FC|BC\"\n@end example\n\n@item\nGenerate white noise:\n@example\naevalsrc=\"-2+random(0)\"\n@end example\n\n@item\nGenerate an amplitude-modulated signal:\n@example\naevalsrc=\"sin(10*2*PI*t)*sin(880*2*PI*t)\"\n@end example\n\n@item\nGenerate 2.5 Hz binaural beats on a 360 Hz carrier:\n@example\naevalsrc=\"0.1*sin(2*PI*(360-2.5/2)*t) | 0.1*sin(2*PI*(360+2.5/2)*t)\"\n@end example\n\n@end itemize\n\n"}]},{"filtergroup":["anullsrc"],"info":"\nThe null audio source returns unprocessed audio frames. 
It is mainly useful\nas a template and to be employed in analysis / debugging tools, or as\nthe source for filters which ignore the input data (for example the sox\nsynth filter).\n\nThis source accepts the following options:\n\n","options":[{"names":["channel_layout","cl"],"info":"\nSpecifies the channel layout, and can be either an integer or a string\nrepresenting a channel layout. The default value of @var{channel_layout}\nis \"stereo\".\n\nCheck the channel_layout_map definition in\n@file{libavutil/channel_layout.c} for the mapping between strings and\nchannel layout values.\n\n"},{"names":["sample_rate","r"],"info":"Specifies the sample rate, and defaults to 44100.\n\n"},{"names":["nb_samples","n"],"info":"Set the number of samples per requested frames.\n\n\n","examples":"@subsection Examples\n\n@itemize\n@item\nSet the sample rate to 48000 Hz and the channel layout to AV_CH_LAYOUT_MONO.\n@example\nanullsrc=r=48000:cl=4\n@end example\n\n@item\nDo the same operation with a more obvious syntax:\n@example\nanullsrc=r=48000:cl=mono\n@end example\n@end itemize\n\nAll the parameters need to be explicitly defined.\n\n"}]},{"filtergroup":["flite"],"info":"\nSynthesize a voice utterance using the libflite library.\n\nTo enable compilation of this filter you need to configure FFmpeg with\n@code{--enable-libflite}.\n\nNote that versions of the flite library prior to 2.0 are not thread-safe.\n\n","options":[{"names":["list_voices"],"info":"If set to 1, list the names of the available voices and exit\nimmediately. Default value is 0.\n\n"},{"names":["nb_samples","n"],"info":"Set the maximum number of samples per frame. Default value is 512.\n\n"},{"names":["textfile"],"info":"Set the filename containing the text to speak.\n\n"},{"names":["text"],"info":"Set the text to speak.\n\n"},{"names":["voice","v"],"info":"Set the voice to use for the speech synthesis. Default value is\n@code{kal}. 
See also the @var{list_voices} option.\n\n","examples":"@subsection Examples\n\n@itemize\n@item\nRead from file @file{speech.txt}, and synthesize the text using the\nstandard flite voice:\n@example\nflite=textfile=speech.txt\n@end example\n\n@item\nRead the specified text selecting the @code{slt} voice:\n@example\nflite=text='So fare thee well, poor devil of a Sub-Sub, whose commentator I am':voice=slt\n@end example\n\n@item\nInput text to ffmpeg:\n@example\nffmpeg -f lavfi -i flite=text='So fare thee well, poor devil of a Sub-Sub, whose commentator I am':voice=slt\n@end example\n\n@item\nMake @file{ffplay} speak the specified text, using @code{flite} and\nthe @code{lavfi} device:\n@example\nffplay -f lavfi flite=text='No more be grieved for which that thou hast done.'\n@end example\n@end itemize\n\nFor more information about libflite, check:\n@url{http://www.festvox.org/flite/}\n\n"}]},{"filtergroup":["anoisesrc"],"info":"\nGenerate a noise audio signal.\n\n","options":[{"names":["sample_rate","r"],"info":"Specify the sample rate. Default value is 48000 Hz.\n\n"},{"names":["amplitude","a"],"info":"Specify the amplitude (0.0 - 1.0) of the generated audio stream. Default value\nis 1.0.\n\n"},{"names":["duration","d"],"info":"Specify the duration of the generated audio stream. Not specifying this option\nresults in noise with an infinite length.\n\n"},{"names":["color","colour","c"],"info":"Specify the color of noise. Available noise colors are white, pink, brown,\nblue and violet. 
Default color is white.\n\n"},{"names":["seed","s"],"info":"Specify a value used to seed the PRNG.\n\n"},{"names":["nb_samples","n"],"info":"Set the number of samples per each output frame, default is 1024.\n\n","examples":"@subsection Examples\n\n@itemize\n\n@item\nGenerate 60 seconds of pink noise, with a 44.1 kHz sampling rate and an amplitude of 0.5:\n@example\nanoisesrc=d=60:c=pink:r=44100:a=0.5\n@end example\n@end itemize\n\n"}]},{"filtergroup":["hilbert"],"info":"\nGenerate odd-tap Hilbert transform FIR coefficients.\n\nThe resulting stream can be used with the @ref{afir} filter for phase-shifting\nthe signal by 90 degrees.\n\nThis is used in many matrix coding schemes and for analytic signal generation.\nThe process is often written as a multiplication by i (or j), the imaginary unit.\n\n","options":[{"names":["sample_rate","s"],"info":"Set sample rate, default is 44100.\n\n"},{"names":["taps","t"],"info":"Set length of FIR filter, default is 22051.\n\n"},{"names":["nb_samples","n"],"info":"Set number of samples per each frame.\n\n"},{"names":["win_func","w"],"info":"Set window function to be used when generating FIR coefficients.\n\n"}]},{"filtergroup":["sinc"],"info":"\nGenerate sinc Kaiser-windowed low-pass, high-pass, band-pass, or band-reject FIR coefficients.\n\nThe resulting stream can be used with the @ref{afir} filter for filtering the audio signal.\n\n","options":[{"names":["sample_rate","r"],"info":"Set sample rate, default is 44100.\n\n"},{"names":["nb_samples","n"],"info":"Set number of samples per each frame. Default is 1024.\n\n"},{"names":["hp"],"info":"Set high-pass frequency. Default is 0.\n\n"},{"names":["lp"],"info":"Set low-pass frequency. Default is 0.\nIf the high-pass frequency is lower than the low-pass frequency and the low-pass frequency\nis higher than 0, the filter will create band-pass filter coefficients,\notherwise band-reject filter coefficients.\n\n"},{"names":["phase"],"info":"Set filter phase response. Default is 50. 
Allowed range is from 0 to 100.\n\n"},{"names":["beta"],"info":"Set Kaiser window beta.\n\n"},{"names":["att"],"info":"Set stop-band attenuation. Default is 120dB, allowed range is from 40 to 180 dB.\n\n"},{"names":["round"],"info":"Enable rounding, by default is disabled.\n\n"},{"names":["hptaps"],"info":"Set number of taps for high-pass filter.\n\n"},{"names":["lptaps"],"info":"Set number of taps for low-pass filter.\n\n"}]},{"filtergroup":["sine"],"info":"\nGenerate an audio signal made of a sine wave with amplitude 1/8.\n\nThe audio signal is bit-exact.\n\n","options":[{"names":["frequency","f"],"info":"Set the carrier frequency. Default is 440 Hz.\n\n"},{"names":["beep_factor","b"],"info":"Enable a periodic beep every second with frequency @var{beep_factor} times\nthe carrier frequency. Default is 0, meaning the beep is disabled.\n\n"},{"names":["sample_rate","r"],"info":"Specify the sample rate, default is 44100.\n\n"},{"names":["duration","d"],"info":"Specify the duration of the generated audio stream.\n\n"},{"names":["samples_per_frame"],"info":"Set the number of samples per output frame.\n\nThe expression can contain the following constants:\n\n@item n\nThe (sequential) number of the output audio frame, starting from 0.\n\n@item pts\nThe PTS (Presentation TimeStamp) of the output audio frame,\nexpressed in @var{TB} units.\n\n@item t\nThe PTS of the output audio frame, expressed in seconds.\n\n@item TB\nThe timebase of the output audio frames.\n\nDefault is @code{1024}.\n\n","examples":"@subsection Examples\n\n@itemize\n\n@item\nGenerate a simple 440 Hz sine wave:\n@example\nsine\n@end example\n\n@item\nGenerate a 220 Hz sine wave with a 880 Hz beep each second, for 5 seconds:\n@example\nsine=220:4:d=5\nsine=f=220:b=4:d=5\nsine=frequency=220:beep_factor=4:duration=5\n@end example\n\n@item\nGenerate a 1 kHz sine wave following @code{1602,1601,1602,1601,1602} NTSC\npattern:\n@example\nsine=1000:samples_per_frame='st(0,mod(n,5)); 
1602-not(not(eq(ld(0),1)+eq(ld(0),3)))'\n@end example\n@end itemize\n\n@c man end AUDIO SOURCES\n\n@chapter Audio Sinks\n@c man begin AUDIO SINKS\n\nBelow is a description of the currently available audio sinks.\n\n"}]},{"filtergroup":["abuffersink"],"info":"\nBuffer audio frames, and make them available to the end of filter chain.\n\nThis sink is mainly intended for programmatic use, in particular\nthrough the interface defined in @file{libavfilter/buffersink.h}\nor the options system.\n\nIt accepts a pointer to an AVABufferSinkContext structure, which\ndefines the incoming buffers' formats, to be passed as the opaque\nparameter to @code{avfilter_init_filter} for initialization.\n","options":[]},{"filtergroup":["anullsink"],"info":"\nNull audio sink; do absolutely nothing with the input audio. It is\nmainly useful as a template and for use in analysis / debugging\ntools.\n\n@c man end AUDIO SINKS\n\n@chapter Video Filters\n@c man begin VIDEO FILTERS\n\nWhen you configure your FFmpeg build, you can disable any of the\nexisting filters using @code{--disable-filters}.\nThe configure output will show the video filters included in your\nbuild.\n\nBelow is a description of the currently available video filters.\n\n","options":[]},{"filtergroup":["addroi"],"info":"\nMark a region of interest in a video frame.\n\nThe frame data is passed through unchanged, but metadata is attached\nto the frame indicating regions of interest which can affect the\nbehaviour of later encoding. 
Multiple regions can be marked by\napplying the filter multiple times.\n\n","options":[{"names":["x"],"info":"Region distance in pixels from the left edge of the frame.\n"},{"names":["y"],"info":"Region distance in pixels from the top edge of the frame.\n"},{"names":["w"],"info":"Region width in pixels.\n"},{"names":["h"],"info":"Region height in pixels.\n\nThe parameters @var{x}, @var{y}, @var{w} and @var{h} are expressions,\nand may contain the following variables:\n@item iw\nWidth of the input frame.\n@item ih\nHeight of the input frame.\n\n"},{"names":["qoffset"],"info":"Quantisation offset to apply within the region.\n\nThis must be a real value in the range -1 to +1. A value of zero\nindicates no quality change. A negative value asks for better quality\n(less quantisation), while a positive value asks for worse quality\n(greater quantisation).\n\nThe range is calibrated so that the extreme values indicate the\nlargest possible offset - if the rest of the frame is encoded with the\nworst possible quality, an offset of -1 indicates that this region\nshould be encoded with the best possible quality anyway. Intermediate\nvalues are then interpolated in some codec-dependent way.\n\nFor example, in 10-bit H.264 the quantisation parameter varies between\n-12 and 51. A typical qoffset value of -1/10 therefore indicates that\nthis region should be encoded with a QP around one-tenth of the full\nrange better than the rest of the frame. 
So, if most of the frame\nwere to be encoded with a QP of around 30, this region would get a QP\nof around 24 (an offset of approximately -1/10 * (51 - -12) = -6.3).\nAn extreme value of -1 would indicate that this region should be\nencoded with the best possible quality regardless of the treatment of\nthe rest of the frame - that is, should be encoded at a QP of -12.\n"},{"names":["clear"],"info":"If set to true, remove any existing regions of interest marked on the\nframe before adding the new one.\n\n","examples":"@subsection Examples\n\n@itemize\n@item\nMark the centre quarter of the frame as interesting.\n@example\naddroi=iw/4:ih/4:iw/2:ih/2:-1/10\n@end example\n@item\nMark the 100-pixel-wide region on the left edge of the frame as very\nuninteresting (to be encoded at much lower quality than the rest of\nthe frame).\n@example\naddroi=0:0:100:ih:+1/5\n@end example\n@end itemize\n\n"}]},{"filtergroup":["alphaextract"],"info":"\nExtract the alpha component from the input as a grayscale video. This\nis especially useful with the @var{alphamerge} filter.\n\n","options":[]},{"filtergroup":["alphamerge"],"info":"\nAdd or replace the alpha component of the primary input with the\ngrayscale value of a second input. This is intended for use with\n@var{alphaextract} to allow the transmission or storage of frame\nsequences that have alpha in a format that doesn't support an alpha\nchannel.\n\nFor example, to reconstruct full frames from a normal YUV-encoded video\nand a separate video created with @var{alphaextract}, you might use:\n@example\nmovie=in_alpha.mkv [alpha]; [in][alpha] alphamerge [out]\n@end example\n\nSince this filter is designed for reconstruction, it operates on frame\nsequences without considering timestamps, and terminates when either\ninput reaches end of stream. This will cause problems if your encoding\npipeline drops frames. 
If you're trying to apply an image as an\noverlay to a video stream, consider the @var{overlay} filter instead.\n\n","options":[]},{"filtergroup":["amplify"],"info":"\nAmplify differences between current pixel and pixels of adjacent frames in\nsame pixel location.\n\nThis filter accepts the following options:\n\n","options":[{"names":["radius"],"info":"Set frame radius. Default is 2. Allowed range is from 1 to 63.\nFor example radius of 3 will instruct filter to calculate average of 7 frames.\n\n"},{"names":["factor"],"info":"Set factor to amplify difference. Default is 2. Allowed range is from 0 to 65535.\n\n"},{"names":["threshold"],"info":"Set threshold for difference amplification. Any difference greater or equal to\nthis value will not alter source pixel. Default is 10.\nAllowed range is from 0 to 65535.\n\n"},{"names":["tolerance"],"info":"Set tolerance for difference amplification. Any difference lower to\nthis value will not alter source pixel. Default is 0.\nAllowed range is from 0 to 65535.\n\n"},{"names":["low"],"info":"Set lower limit for changing source pixel. Default is 65535. Allowed range is from 0 to 65535.\nThis option controls maximum possible value that will decrease source pixel value.\n\n"},{"names":["high"],"info":"Set high limit for changing source pixel. Default is 65535. Allowed range is from 0 to 65535.\nThis option controls maximum possible value that will increase source pixel value.\n\n"},{"names":["planes"],"info":"Set which planes to filter. Default is all. Allowed range is from 0 to 15.\n\n@subsection Commands\n\nThis filter supports the following @ref{commands} that corresponds to option of same name:\n@item factor\n@item threshold\n@item tolerance\n@item low\n@item high\n@item planes\n\n"}]},{"filtergroup":["ass"],"info":"\nSame as the @ref{subtitles} filter, except that it doesn't require libavcodec\nand libavformat to work. 
On the other hand, it is limited to ASS (Advanced\nSubstation Alpha) subtitles files.\n\nThis filter accepts the following option in addition to the common options from\nthe @ref{subtitles} filter:\n\n","options":[{"names":["shaping"],"info":"Set the shaping engine\n\nAvailable values are:\n@item auto\nThe default libass shaping engine, which is the best available.\n@item simple\nFast, font-agnostic shaper that can do only substitutions\n@item complex\nSlower shaper using OpenType for substitutions and positioning\n\nThe default is @code{auto}.\n\n"}]},{"filtergroup":["atadenoise"],"info":"Apply an Adaptive Temporal Averaging Denoiser to the video input.\n\n","options":[{"names":["0a"],"info":"Set threshold A for 1st plane. Default is 0.02.\nValid range is 0 to 0.3.\n\n"},{"names":["0b"],"info":"Set threshold B for 1st plane. Default is 0.04.\nValid range is 0 to 5.\n\n"},{"names":["1a"],"info":"Set threshold A for 2nd plane. Default is 0.02.\nValid range is 0 to 0.3.\n\n"},{"names":["1b"],"info":"Set threshold B for 2nd plane. Default is 0.04.\nValid range is 0 to 5.\n\n"},{"names":["2a"],"info":"Set threshold A for 3rd plane. Default is 0.02.\nValid range is 0 to 0.3.\n\n"},{"names":["2b"],"info":"Set threshold B for 3rd plane. Default is 0.04.\nValid range is 0 to 5.\n\nThreshold A is designed to react on abrupt changes in the input signal and\nthreshold B is designed to react on continuous changes in the input signal.\n\n"},{"names":["s"],"info":"Set number of frames filter will use for averaging. Default is 9. Must be odd\nnumber in range [5, 129].\n\n"},{"names":["p"],"info":"Set what planes of frame filter will use for averaging. Default is all.\n\n"},{"names":["a"],"info":"Set what variant of algorithm filter will use for averaging. 
Default is @code{p}, parallel.\nAlternatively it can be set to @code{s}, serial.\n\nParallel can be faster than serial, while the other way around is never true.\nParallel will abort early on the first change greater than the thresholds, while serial\nwill continue processing the other side of the frames if they are equal to or below the thresholds.\n\n@subsection Commands\nThis filter supports the same @ref{commands} as options, except for option @code{s}.\nThe command accepts the same syntax as the corresponding option.\n\n"}]},{"filtergroup":["avgblur"],"info":"\nApply an average blur filter.\n\n","options":[{"names":["sizeX"],"info":"Set horizontal radius size.\n\n"},{"names":["planes"],"info":"Set which planes to filter. By default all planes are filtered.\n\n"},{"names":["sizeY"],"info":"Set vertical radius size; if zero, it will be the same as @code{sizeX}.\nDefault is @code{0}.\n\n@subsection Commands\nThis filter supports the same commands as options.\nThe command accepts the same syntax as the corresponding option.\n\nIf the specified expression is not valid, it is kept at its current\nvalue.\n\n"}]},{"filtergroup":["bbox"],"info":"\nCompute the bounding box for the non-black pixels in the input frame\nluminance plane.\n\nThis filter computes the bounding box containing all the pixels with a\nluminance value greater than the minimum allowed value.\nThe parameters describing the bounding box are printed on the filter\nlog.\n\nThe filter accepts the following option:\n\n","options":[{"names":["min_val"],"info":"Set the minimal luminance value. Default is @code{16}.\n\n"}]},{"filtergroup":["bilateral"],"info":"Apply a bilateral filter: spatial smoothing while preserving edges.\n\n","options":[{"names":["sigmaS"],"info":"Set the sigma of the Gaussian function used to calculate the spatial weight.\nAllowed range is 0 to 10. Default is 0.1.\n\n"},{"names":["sigmaR"],"info":"Set the sigma of the Gaussian function used to calculate the range weight.\nAllowed range is 0 to 1. 
Default is 0.1.\n\n"},{"names":["planes"],"info":"Set planes to filter. Default is first only.\n\n"}]},{"filtergroup":["bitplanenoise"],"info":"\nShow and measure bit plane noise.\n\n","options":[{"names":["bitplane"],"info":"Set which plane to analyze. Default is @code{1}.\n\n"},{"names":["filter"],"info":"Filter out noisy pixels from the @code{bitplane} set above.\nDefault is disabled.\n\n"}]},{"filtergroup":["blackdetect"],"info":"\nDetect video intervals that are (almost) completely black. Can be\nuseful to detect chapter transitions, commercials, or invalid\nrecordings. Output lines contain the start, end, and\nduration of the detected black interval, expressed in seconds.\n\nIn order to display the output lines, you need to set the loglevel at\nleast to the AV_LOG_INFO value.\n\n","options":[{"names":["black_min_duration","d"],"info":"Set the minimum detected black duration expressed in seconds. It must\nbe a non-negative floating point number.\n\nDefault value is 2.0.\n\n"},{"names":["picture_black_ratio_th","pic_th"],"info":"Set the threshold for considering a picture \"black\".\nExpress the minimum value for the ratio:\n@example\n@var{nb_black_pixels} / @var{nb_pixels}\n@end example\n\nfor which a picture is considered black.\nDefault value is 0.98.\n\n"},{"names":["pixel_black_th","pix_th"],"info":"Set the threshold for considering a pixel \"black\".\n\nThe threshold expresses the maximum pixel luminance value for which a\npixel is considered \"black\". 
The provided value is scaled according to\nthe following equation:\n@example\n@var{absolute_threshold} = @var{luminance_minimum_value} + @var{pixel_black_th} * @var{luminance_range_size}\n@end example\n\n@var{luminance_range_size} and @var{luminance_minimum_value} depend on\nthe input video format, the range is [0-255] for YUV full-range\nformats and [16-235] for YUV non full-range formats.\n\nDefault value is 0.10.\n\nThe following example sets the maximum pixel threshold to the minimum\nvalue, and detects only black intervals of 2 or more seconds:\n@example\nblackdetect=d=2:pix_th=0.00\n@end example\n\n"}]},{"filtergroup":["blackframe"],"info":"\nDetect frames that are (almost) completely black. Can be useful to\ndetect chapter transitions or commercials. Output lines consist of\nthe frame number of the detected frame, the percentage of blackness,\nthe position in the file if known or -1 and the timestamp in seconds.\n\nIn order to display the output lines, you need to set the loglevel at\nleast to the AV_LOG_INFO value.\n\nThis filter exports frame metadata @code{lavfi.blackframe.pblack}.\nThe value represents the percentage of pixels in the picture that\nare below the threshold value.\n\nIt accepts the following parameters:\n\n","options":[{"names":["amount"],"info":"The percentage of the pixels that have to be below the threshold; it defaults to\n@code{98}.\n\n"},{"names":["threshold","thresh"],"info":"The threshold below which a pixel value is considered black; it defaults to\n@code{32}.\n\n\n"}]},{"filtergroup":["blend","tblend"],"info":"\nBlend two video frames into each other.\n\nThe @code{blend} filter takes two input streams and outputs one\nstream, the first input is the \"top\" layer and second input is\n\"bottom\" layer. 
By default, the output terminates when the longest input terminates.\n\nThe @code{tblend} (time blend) filter takes two consecutive frames\nfrom one single stream, and outputs the result obtained by blending\nthe new frame on top of the old frame.\n\nA description of the accepted options follows.\n\n","options":[{"names":["c0_mode"],"info":""},{"names":["c1_mode"],"info":""},{"names":["c2_mode"],"info":""},{"names":["c3_mode"],"info":""},{"names":["all_mode"],"info":"Set blend mode for specific pixel component or all pixel components in case\nof @var{all_mode}. Default value is @code{normal}.\n\nAvailable values for component modes are:\n@item addition\n@item grainmerge\n@item and\n@item average\n@item burn\n@item darken\n@item difference\n@item grainextract\n@item divide\n@item dodge\n@item freeze\n@item exclusion\n@item extremity\n@item glow\n@item hardlight\n@item hardmix\n@item heat\n@item lighten\n@item linearlight\n@item multiply\n@item multiply128\n@item negation\n@item normal\n@item or\n@item overlay\n@item phoenix\n@item pinlight\n@item reflect\n@item screen\n@item softlight\n@item subtract\n@item vividlight\n@item xor\n\n"},{"names":["c0_opacity"],"info":""},{"names":["c1_opacity"],"info":""},{"names":["c2_opacity"],"info":""},{"names":["c3_opacity"],"info":""},{"names":["all_opacity"],"info":"Set blend opacity for specific pixel component or all pixel components in case\nof @var{all_opacity}. Only used in combination with pixel component blend modes.\n\n"},{"names":["c0_expr"],"info":""},{"names":["c1_expr"],"info":""},{"names":["c2_expr"],"info":""},{"names":["c3_expr"],"info":""},{"names":["all_expr"],"info":"Set blend expression for specific pixel component or all pixel components in case\nof @var{all_expr}. 
Note that the related mode options are ignored if expressions are set.\n\nThe expressions can use the following variables:\n\n@item N\nThe sequential number of the filtered frame, starting from @code{0}.\n\n@item X\n@item Y\nthe coordinates of the current sample\n\n@item W\n@item H\nthe width and height of the currently filtered plane\n\n@item SW\n@item SH\nWidth and height scale for the plane being filtered. It is the\nratio of the dimensions of the current plane to those of the luma plane,\ne.g. for a @code{yuv420p} frame, the values are @code{1,1} for\nthe luma plane and @code{0.5,0.5} for the chroma planes.\n\n@item T\nTime of the current frame, expressed in seconds.\n\n@item TOP, A\nValue of pixel component at current location for first video frame (top layer).\n\n@item BOTTOM, B\nValue of pixel component at current location for second video frame (bottom layer).\n\nThe @code{blend} filter also supports the @ref{framesync} options.\n\n","examples":"@subsection Examples\n\n@itemize\n@item\nApply a transition from the bottom layer to the top layer over the first 10 seconds:\n@example\nblend=all_expr='A*(if(gte(T,10),1,T/10))+B*(1-(if(gte(T,10),1,T/10)))'\n@end example\n\n@item\nApply a linear horizontal transition from the top layer to the bottom layer:\n@example\nblend=all_expr='A*(X/W)+B*(1-X/W)'\n@end example\n\n@item\nApply a 1x1 checkerboard effect:\n@example\nblend=all_expr='if(eq(mod(X,2),mod(Y,2)),A,B)'\n@end example\n\n@item\nApply an uncover-left effect:\n@example\nblend=all_expr='if(gte(N*SW+X,W),A,B)'\n@end example\n\n@item\nApply an uncover-down effect:\n@example\nblend=all_expr='if(gte(Y-N*SH,0),A,B)'\n@end example\n\n@item\nApply an uncover-up-left effect:\n@example\nblend=all_expr='if(gte(T*SH*40+Y,H)*gte((T*40*SW+X)*W/H,W),A,B)'\n@end example\n\n@item\nSplit the video diagonally, showing the top and bottom layers on each side:\n@example\nblend=all_expr='if(gt(X,Y*(W/H)),A,B)'\n@end example\n\n@item\nDisplay differences between the current and the previous 
frame:\n@example\ntblend=all_mode=grainextract\n@end example\n@end itemize\n\n"}]},{"filtergroup":["bm3d"],"info":"\nDenoise frames using the Block-Matching 3D algorithm.\n\nThe filter accepts the following options.\n\n","options":[{"names":["sigma"],"info":"Set denoising strength. Default value is 1.\nAllowed range is from 0 to 999.9.\nThe denoising algorithm is very sensitive to sigma, so adjust it\naccording to the source.\n\n"},{"names":["block"],"info":"Set local patch size. This sets dimensions in 2D.\n\n"},{"names":["bstep"],"info":"Set sliding step for processing blocks. Default value is 4.\nAllowed range is from 1 to 64.\nSmaller values allow processing more reference blocks, but are slower.\n\n"},{"names":["group"],"info":"Set maximal number of similar blocks for the 3rd dimension. Default value is 1.\nWhen set to 1, no block matching is done. Larger values allow more blocks\nin a single group.\nAllowed range is from 1 to 256.\n\n"},{"names":["range"],"info":"Set radius for search block matching. Default is 9.\nAllowed range is from 1 to INT32_MAX.\n\n"},{"names":["mstep"],"info":"Set step between two search locations for block matching. Default is 1.\nAllowed range is from 1 to 64. Smaller is slower.\n\n"},{"names":["thmse"],"info":"Set threshold of mean square error for block matching. Valid range is 0 to\nINT32_MAX.\n\n"},{"names":["hdthr"],"info":"Set thresholding parameter for hard thresholding in the 3D transformed domain.\nLarger values result in stronger hard-thresholding filtering in the frequency\ndomain.\n\n"},{"names":["estim"],"info":"Set filtering estimation mode. Can be @code{basic} or @code{final}.\nDefault is @code{basic}.\n\n"},{"names":["ref"],"info":"If enabled, the filter will use the 2nd stream for block matching.\nDefault is disabled for the @code{basic} value of the @var{estim} option,\nand always enabled if the value of @var{estim} is @code{final}.\n\n"},{"names":["planes"],"info":"Set planes to filter. 
Default is all available except alpha.\n\n","examples":"@subsection Examples\n\n@itemize\n@item\nBasic filtering with bm3d:\n@example\nbm3d=sigma=3:block=4:bstep=2:group=1:estim=basic\n@end example\n\n@item\nSame as above, but filtering only luma:\n@example\nbm3d=sigma=3:block=4:bstep=2:group=1:estim=basic:planes=1\n@end example\n\n@item\nSame as above, but with both estimation modes:\n@example\nsplit[a][b],[a]bm3d=sigma=3:block=4:bstep=2:group=1:estim=basic[a],[b][a]bm3d=sigma=3:block=4:bstep=2:group=16:estim=final:ref=1\n@end example\n\n@item\nSame as above, but prefilter with @ref{nlmeans} filter instead:\n@example\nsplit[a][b],[a]nlmeans=s=3:r=7:p=3[a],[b][a]bm3d=sigma=3:block=4:bstep=2:group=16:estim=final:ref=1\n@end example\n@end itemize\n\n"}]},{"filtergroup":["boxblur"],"info":"\nApply a boxblur algorithm to the input video.\n\nIt accepts the following parameters:\n\n","options":[{"names":["luma_radius","lr"],"info":""},{"names":["luma_power","lp"],"info":""},{"names":["chroma_radius","cr"],"info":""},{"names":["chroma_power","cp"],"info":""},{"names":["alpha_radius","ar"],"info":""},{"names":["alpha_power","ap"],"info":"\n\nA description of the accepted options follows.\n\n@item luma_radius, lr\n@item chroma_radius, cr\n@item alpha_radius, ar\nSet an expression for the box radius in pixels used for blurring the\ncorresponding input plane.\n\nThe radius value must be a non-negative number, and must not be\ngreater than the value of the expression @code{min(w,h)/2} for the\nluma and alpha planes, and of @code{min(cw,ch)/2} for the chroma\nplanes.\n\nDefault value for @option{luma_radius} is \"2\". 
If not specified,\n@option{chroma_radius} and @option{alpha_radius} default to the\ncorresponding value set for @option{luma_radius}.\n\nThe expressions can contain the following constants:\n@table @option\n@item w\n@item h\nThe input width and height in pixels.\n\n@item cw\n@item ch\nThe input chroma image width and height in pixels.\n\n@item hsub\n@item vsub\nThe horizontal and vertical chroma subsample values. For example, for the\npixel format \"yuv422p\", @var{hsub} is 2 and @var{vsub} is 1.\n\n"},{"names":["luma_power","lp"],"info":""},{"names":["chroma_power","cp"],"info":""},{"names":["alpha_power","ap"],"info":"Specify how many times the boxblur filter is applied to the\ncorresponding plane.\n\nDefault value for @option{luma_power} is 2. If not specified,\n@option{chroma_power} and @option{alpha_power} default to the\ncorresponding value set for @option{luma_power}.\n\nA value of 0 will disable the effect.\n\n","examples":"@subsection Examples\n\n@itemize\n@item\nApply a boxblur filter with the luma, chroma, and alpha radii\nset to 2:\n@example\nboxblur=luma_radius=2:luma_power=1\nboxblur=2:1\n@end example\n\n@item\nSet the luma radius to 2, and alpha and chroma radius to 0:\n@example\nboxblur=2:1:cr=0:ar=0\n@end example\n\n@item\nSet the luma and chroma radii to a fraction of the video dimension:\n@example\nboxblur=luma_radius=min(h\\,w)/10:luma_power=1:chroma_radius=min(cw\\,ch)/10:chroma_power=1\n@end example\n@end itemize\n\n"}]},{"filtergroup":["bwdif"],"info":"\nDeinterlace the input video (\"bwdif\" stands for \"Bob Weaver\nDeinterlacing Filter\").\n\nMotion adaptive deinterlacing based on yadif with the use of w3fdif and cubic\ninterpolation algorithms.\nIt accepts the following parameters:\n\n","options":[{"names":["mode"],"info":"The interlacing mode to adopt. 
It accepts one of the following values:\n\n@item 0, send_frame\nOutput one frame for each frame.\n@item 1, send_field\nOutput one frame for each field.\n\nThe default value is @code{send_field}.\n\n"},{"names":["parity"],"info":"The picture field parity assumed for the input interlaced video. It accepts one\nof the following values:\n\n@item 0, tff\nAssume the top field is first.\n@item 1, bff\nAssume the bottom field is first.\n@item -1, auto\nEnable automatic detection of field parity.\n\nThe default value is @code{auto}.\nIf the interlacing is unknown or the decoder does not export this information,\ntop field first will be assumed.\n\n"},{"names":["deint"],"info":"Specify which frames to deinterlace. Accepts one of the following\nvalues:\n\n@item 0, all\nDeinterlace all frames.\n@item 1, interlaced\nOnly deinterlace frames marked as interlaced.\n\nThe default value is @code{all}.\n\n"}]},{"filtergroup":["chromahold"],"info":"Remove all color information for all colors except for a certain one.\n\n","options":[{"names":["color"],"info":"The color which will not be replaced with neutral chroma.\n\n"},{"names":["similarity"],"info":"Similarity percentage with the above color.\n0.01 matches only the exact key color, while 1.0 matches everything.\n\n"},{"names":["blend"],"info":"Blend percentage.\n0.0 makes pixels either fully gray, or not gray at all.\nHigher values result in more preserved color.\n\n"},{"names":["yuv"],"info":"Signals that the color passed is already in YUV instead of RGB.\n\nLiteral colors like \"green\" or \"red\" don't make sense with this enabled anymore.\nThis can be used to pass exact YUV values as hexadecimal numbers.\n\n@subsection Commands\nThis filter supports the same @ref{commands} as options.\nThe command accepts the same syntax as the corresponding option.\n\nIf the specified expression is not valid, it is kept at its current\nvalue.\n\n"}]},{"filtergroup":["chromakey"],"info":"YUV colorspace color/chroma 
keying.\n\n","options":[{"names":["color"],"info":"The color which will be replaced with transparency.\n\n"},{"names":["similarity"],"info":"Similarity percentage with the key color.\n\n0.01 matches only the exact key color, while 1.0 matches everything.\n\n"},{"names":["blend"],"info":"Blend percentage.\n\n0.0 makes pixels either fully transparent, or not transparent at all.\n\nHigher values result in semi-transparent pixels, with a higher transparency\nthe more similar the pixel's color is to the key color.\n\n"},{"names":["yuv"],"info":"Signals that the color passed is already in YUV instead of RGB.\n\nLiteral colors like \"green\" or \"red\" don't make sense with this enabled anymore.\nThis can be used to pass exact YUV values as hexadecimal numbers.\n\n@subsection Commands\nThis filter supports the same @ref{commands} as options.\nThe command accepts the same syntax as the corresponding option.\n\nIf the specified expression is not valid, it is kept at its current\nvalue.\n\n","examples":"@subsection Examples\n\n@itemize\n@item\nMake every green pixel in the input image transparent:\n@example\nffmpeg -i input.png -vf chromakey=green out.png\n@end example\n\n@item\nOverlay a greenscreen video on top of a static black background.\n@example\nffmpeg -f lavfi -i color=c=black:s=1280x720 -i video.mp4 -shortest -filter_complex \"[1:v]chromakey=0x70de77:0.1:0.2[ckout];[0:v][ckout]overlay[out]\" -map \"[out]\" output.mkv\n@end example\n@end itemize\n\n"}]},{"filtergroup":["chromashift"],"info":"Shift chroma pixels horizontally and/or vertically.\n\n","options":[{"names":["cbh"],"info":"Set amount to shift chroma-blue horizontally.\n"},{"names":["cbv"],"info":"Set amount to shift chroma-blue vertically.\n"},{"names":["crh"],"info":"Set amount to shift chroma-red horizontally.\n"},{"names":["crv"],"info":"Set amount to shift chroma-red vertically.\n"},{"names":["edge"],"info":"Set edge mode, can be @var{smear} (default) or @var{warp}.\n\n@subsection Commands\n\nThis filter 
supports all the above options as @ref{commands}.\n\n"}]},{"filtergroup":["ciescope"],"info":"\nDisplay CIE color diagram with pixels overlaid onto it.\n\n","options":[{"names":["system"],"info":"Set color system.\n\n@item ntsc, 470m\n@item ebu, 470bg\n@item smpte\n@item 240m\n@item apple\n@item widergb\n@item cie1931\n@item rec709, hdtv\n@item uhdtv, rec2020\n@item dcip3\n\n"},{"names":["cie"],"info":"Set CIE system.\n\n@item xyy\n@item ucs\n@item luv\n\n"},{"names":["gamuts"],"info":"Set what gamuts to draw.\n\nSee @code{system} option for available values.\n\n"},{"names":["size","s"],"info":"Set ciescope size, by default set to 512.\n\n"},{"names":["intensity","i"],"info":"Set intensity used to map input pixel values to CIE diagram.\n\n"},{"names":["contrast"],"info":"Set contrast used to draw tongue colors that are out of the active color system gamut.\n\n"},{"names":["corrgamma"],"info":"Correct gamma displayed on scope, by default enabled.\n\n"},{"names":["showwhite"],"info":"Show white point on CIE diagram, by default disabled.\n\n"},{"names":["gamma"],"info":"Set input gamma. Used only with XYZ input color space.\n\n"}]},{"filtergroup":["codecview"],"info":"\nVisualize information exported by some codecs.\n\nSome codecs can export information through frames using side-data or other\nmeans. For example, some MPEG-based codecs export motion vectors through the\n@var{export_mvs} flag in the codec @option{flags2} option.\n\nThe filter accepts the following option:\n\n","options":[{"names":["mv"],"info":"Set motion vectors to visualize.\n\nAvailable flags for @var{mv} are:\n\n@item pf\nforward predicted MVs of P-frames\n@item bf\nforward predicted MVs of B-frames\n@item bb\nbackward predicted MVs of B-frames\n\n"},{"names":["qp"],"info":"Display quantization parameters using the chroma planes.\n\n"},{"names":["mv_type","mvt"],"info":"Set motion vector type to visualize. 
Includes MVs from all frames unless specified by the @var{frame_type} option.\n\nAvailable flags for @var{mv_type} are:\n\n@item fp\nforward predicted MVs\n@item bp\nbackward predicted MVs\n\n"},{"names":["frame_type","ft"],"info":"Set frame type to visualize motion vectors of.\n\nAvailable flags for @var{frame_type} are:\n\n@item if\nintra-coded frames (I-frames)\n@item pf\npredicted frames (P-frames)\n@item bf\nbi-directionally predicted frames (B-frames)\n\n","examples":"@subsection Examples\n\n@itemize\n@item\nVisualize forward predicted MVs of all frames using @command{ffplay}:\n@example\nffplay -flags2 +export_mvs input.mp4 -vf codecview=mv_type=fp\n@end example\n\n@item\nVisualize multi-directional MVs of P- and B-frames using @command{ffplay}:\n@example\nffplay -flags2 +export_mvs input.mp4 -vf codecview=mv=pf+bf+bb\n@end example\n@end itemize\n\n"}]},{"filtergroup":["colorbalance"],"info":"Modify intensity of primary colors (red, green and blue) of input frames.\n\nThe filter allows an input frame to be adjusted in the shadows, midtones or highlights\nregions for the red-cyan, green-magenta or blue-yellow balance.\n\nA positive adjustment value shifts the balance towards the primary color, a negative\nvalue towards the complementary color.\n\n","options":[{"names":["rs"],"info":""},{"names":["gs"],"info":""},{"names":["bs"],"info":"Adjust red, green and blue shadows (darkest pixels).\n\n"},{"names":["rm"],"info":""},{"names":["gm"],"info":""},{"names":["bm"],"info":"Adjust red, green and blue midtones (medium pixels).\n\n"},{"names":["rh"],"info":""},{"names":["gh"],"info":""},{"names":["bh"],"info":"Adjust red, green and blue highlights (brightest pixels).\n\nAllowed ranges for options are @code{[-1.0, 1.0]}. Defaults are @code{0}.\n\n"},{"names":["pl"],"info":"Preserve lightness when changing color balance. 
Default is disabled.\n\n","examples":"@subsection Examples\n\n@itemize\n@item\nAdd red color cast to shadows:\n@example\ncolorbalance=rs=.3\n@end example\n@end itemize\n\n@subsection Commands\n\nThis filter supports all the above options as @ref{commands}.\n\n"}]},{"filtergroup":["colorchannelmixer"],"info":"\nAdjust video input frames by re-mixing color channels.\n\nThis filter modifies a color channel by adding the values associated with\nthe other channels of the same pixels. For example, if the value to\nmodify is red, the output value will be:\n@example\n@var{red}=@var{red}*@var{rr} + @var{blue}*@var{rb} + @var{green}*@var{rg} + @var{alpha}*@var{ra}\n@end example\n\n","options":[{"names":["rr"],"info":""},{"names":["rg"],"info":""},{"names":["rb"],"info":""},{"names":["ra"],"info":"Adjust contribution of input red, green, blue and alpha channels for output red channel.\nDefault is @code{1} for @var{rr}, and @code{0} for @var{rg}, @var{rb} and @var{ra}.\n\n"},{"names":["gr"],"info":""},{"names":["gg"],"info":""},{"names":["gb"],"info":""},{"names":["ga"],"info":"Adjust contribution of input red, green, blue and alpha channels for output green channel.\nDefault is @code{1} for @var{gg}, and @code{0} for @var{gr}, @var{gb} and @var{ga}.\n\n"},{"names":["br"],"info":""},{"names":["bg"],"info":""},{"names":["bb"],"info":""},{"names":["ba"],"info":"Adjust contribution of input red, green, blue and alpha channels for output blue channel.\nDefault is @code{1} for @var{bb}, and @code{0} for @var{br}, @var{bg} and @var{ba}.\n\n"},{"names":["ar"],"info":""},{"names":["ag"],"info":""},{"names":["ab"],"info":""},{"names":["aa"],"info":"Adjust contribution of input red, green, blue and alpha channels for output alpha channel.\nDefault is @code{1} for @var{aa}, and @code{0} for @var{ar}, @var{ag} and @var{ab}.\n\nAllowed ranges for options are @code{[-2.0, 2.0]}.\n\n","examples":"@subsection Examples\n\n@itemize\n@item\nConvert source to 
grayscale:\n@example\ncolorchannelmixer=.3:.4:.3:0:.3:.4:.3:0:.3:.4:.3\n@end example\n@item\nSimulate sepia tones:\n@example\ncolorchannelmixer=.393:.769:.189:0:.349:.686:.168:0:.272:.534:.131\n@end example\n@end itemize\n\n@subsection Commands\n\nThis filter supports all the above options as @ref{commands}.\n\n"}]},{"filtergroup":["colorkey"],"info":"RGB colorspace color keying.\n\n","options":[{"names":["color"],"info":"The color which will be replaced with transparency.\n\n"},{"names":["similarity"],"info":"Similarity percentage with the key color.\n\n0.01 matches only the exact key color, while 1.0 matches everything.\n\n"},{"names":["blend"],"info":"Blend percentage.\n\n0.0 makes pixels either fully transparent, or not transparent at all.\n\nHigher values result in semi-transparent pixels, with a higher transparency\nthe more similar the pixel's color is to the key color.\n\n","examples":"@subsection Examples\n\n@itemize\n@item\nMake every green pixel in the input image transparent:\n@example\nffmpeg -i input.png -vf colorkey=green out.png\n@end example\n\n@item\nOverlay a greenscreen video on top of a static background image.\n@example\nffmpeg -i background.png -i video.mp4 -filter_complex \"[1:v]colorkey=0x3BBD1E:0.3:0.2[ckout];[0:v][ckout]overlay[out]\" -map \"[out]\" output.flv\n@end example\n@end itemize\n\n"}]},{"filtergroup":["colorhold"],"info":"Remove all color information for all RGB colors except for a certain one.\n\n","options":[{"names":["color"],"info":"The color which will not be replaced with neutral gray.\n\n"},{"names":["similarity"],"info":"Similarity percentage with the above color.\n0.01 matches only the exact key color, while 1.0 matches everything.\n\n"},{"names":["blend"],"info":"Blend percentage. 
0.0 makes pixels fully gray.\nHigher values result in more preserved color.\n\n"}]},{"filtergroup":["colorlevels"],"info":"\nAdjust video input frames using levels.\n\n","options":[{"names":["rimin"],"info":""},{"names":["gimin"],"info":""},{"names":["bimin"],"info":""},{"names":["aimin"],"info":"Adjust red, green, blue and alpha input black point.\nAllowed ranges for options are @code{[-1.0, 1.0]}. Defaults are @code{0}.\n\n"},{"names":["rimax"],"info":""},{"names":["gimax"],"info":""},{"names":["bimax"],"info":""},{"names":["aimax"],"info":"Adjust red, green, blue and alpha input white point.\nAllowed ranges for options are @code{[-1.0, 1.0]}. Defaults are @code{1}.\n\nInput levels are used to lighten highlights (bright tones), darken shadows\n(dark tones), and change the balance of bright and dark tones.\n\n"},{"names":["romin"],"info":""},{"names":["gomin"],"info":""},{"names":["bomin"],"info":""},{"names":["aomin"],"info":"Adjust red, green, blue and alpha output black point.\nAllowed ranges for options are @code{[0, 1.0]}. Defaults are @code{0}.\n\n"},{"names":["romax"],"info":""},{"names":["gomax"],"info":""},{"names":["bomax"],"info":""},{"names":["aomax"],"info":"Adjust red, green, blue and alpha output white point.\nAllowed ranges for options are @code{[0, 1.0]}. 
Defaults are @code{1}.\n\nOutput levels allow manual selection of a constrained output level range.\n\n","examples":"@subsection Examples\n\n@itemize\n@item\nMake video output darker:\n@example\ncolorlevels=rimin=0.058:gimin=0.058:bimin=0.058\n@end example\n\n@item\nIncrease contrast:\n@example\ncolorlevels=rimin=0.039:gimin=0.039:bimin=0.039:rimax=0.96:gimax=0.96:bimax=0.96\n@end example\n\n@item\nMake video output lighter:\n@example\ncolorlevels=rimax=0.902:gimax=0.902:bimax=0.902\n@end example\n\n@item\nIncrease brightness:\n@example\ncolorlevels=romin=0.5:gomin=0.5:bomin=0.5\n@end example\n@end itemize\n\n"}]},{"filtergroup":["colormatrix"],"info":"\nConvert color matrix.\n\n","options":[{"names":["src"],"info":""},{"names":["dst"],"info":"Specify the source and destination color matrix. Both values must be\nspecified.\n\nThe accepted values are:\n@item bt709\nBT.709\n\n@item fcc\nFCC\n\n@item bt601\nBT.601\n\n@item bt470\nBT.470\n\n@item bt470bg\nBT.470BG\n\n@item smpte170m\nSMPTE-170M\n\n@item smpte240m\nSMPTE-240M\n\n@item bt2020\nBT.2020\n\nFor example, to convert from BT.601 to SMPTE-240M, use the command:\n@example\ncolormatrix=bt601:smpte240m\n@end example\n\n"}]},{"filtergroup":["colorspace"],"info":"\nConvert colorspace, transfer characteristics or color primaries.\nInput video needs to have an even size.\n\n","options":[{"names":["all"],"info":"Specify all color properties at once.\n\nThe accepted values are:\n@item bt470m\nBT.470M\n\n@item bt470bg\nBT.470BG\n\n@item bt601-6-525\nBT.601-6 525\n\n@item bt601-6-625\nBT.601-6 625\n\n@item bt709\nBT.709\n\n@item smpte170m\nSMPTE-170M\n\n@item smpte240m\nSMPTE-240M\n\n@item bt2020\nBT.2020\n\n\n@anchor{space}\n"},{"names":["space"],"info":"Specify output colorspace.\n\nThe accepted values are:\n@item bt709\nBT.709\n\n@item fcc\nFCC\n\n@item bt470bg\nBT.470BG or BT.601-6 625\n\n@item smpte170m\nSMPTE-170M or BT.601-6 525\n\n@item smpte240m\nSMPTE-240M\n\n@item ycgco\nYCgCo\n\n@item bt2020ncl\nBT.2020 with 
non-constant luminance\n\n\n@anchor{trc}\n"},{"names":["trc"],"info":"Specify output transfer characteristics.\n\nThe accepted values are:\n@item bt709\nBT.709\n\n@item bt470m\nBT.470M\n\n@item bt470bg\nBT.470BG\n\n@item gamma22\nConstant gamma of 2.2\n\n@item gamma28\nConstant gamma of 2.8\n\n@item smpte170m\nSMPTE-170M, BT.601-6 625 or BT.601-6 525\n\n@item smpte240m\nSMPTE-240M\n\n@item srgb\nSRGB\n\n@item iec61966-2-1\niec61966-2-1\n\n@item iec61966-2-4\niec61966-2-4\n\n@item xvycc\nxvycc\n\n@item bt2020-10\nBT.2020 for 10-bits content\n\n@item bt2020-12\nBT.2020 for 12-bits content\n\n\n@anchor{primaries}\n"},{"names":["primaries"],"info":"Specify output color primaries.\n\nThe accepted values are:\n@item bt709\nBT.709\n\n@item bt470m\nBT.470M\n\n@item bt470bg\nBT.470BG or BT.601-6 625\n\n@item smpte170m\nSMPTE-170M or BT.601-6 525\n\n@item smpte240m\nSMPTE-240M\n\n@item film\nfilm\n\n@item smpte431\nSMPTE-431\n\n@item smpte432\nSMPTE-432\n\n@item bt2020\nBT.2020\n\n@item jedec-p22\nJEDEC P22 phosphors\n\n\n@anchor{range}\n"},{"names":["range"],"info":"Specify output color range.\n\nThe accepted values are:\n@item tv\nTV (restricted) range\n\n@item mpeg\nMPEG (restricted) range\n\n@item pc\nPC (full) range\n\n@item jpeg\nJPEG (full) range\n\n\n"},{"names":["format"],"info":"Specify output color format.\n\nThe accepted values are:\n@item yuv420p\nYUV 4:2:0 planar 8-bits\n\n@item yuv420p10\nYUV 4:2:0 planar 10-bits\n\n@item yuv420p12\nYUV 4:2:0 planar 12-bits\n\n@item yuv422p\nYUV 4:2:2 planar 8-bits\n\n@item yuv422p10\nYUV 4:2:2 planar 10-bits\n\n@item yuv422p12\nYUV 4:2:2 planar 12-bits\n\n@item yuv444p\nYUV 4:4:4 planar 8-bits\n\n@item yuv444p10\nYUV 4:4:4 planar 10-bits\n\n@item yuv444p12\nYUV 4:4:4 planar 12-bits\n\n\n"},{"names":["fast"],"info":"Do a fast conversion, which skips gamma/primary correction. This will take\nsignificantly less CPU, but will be mathematically incorrect. 
To get output\ncompatible with that produced by the colormatrix filter, use fast=1.\n\n"},{"names":["dither"],"info":"Specify dithering mode.\n\nThe accepted values are:\n@item none\nNo dithering\n\n@item fsb\nFloyd-Steinberg dithering\n\n"},{"names":["wpadapt"],"info":"Whitepoint adaptation mode.\n\nThe accepted values are:\n@item bradford\nBradford whitepoint adaptation\n\n@item vonkries\nvon Kries whitepoint adaptation\n\n@item identity\nidentity whitepoint adaptation (i.e. no whitepoint adaptation)\n\n"},{"names":["iall"],"info":"Override all input properties at once. Same accepted values as @ref{all}.\n\n"},{"names":["ispace"],"info":"Override input colorspace. Same accepted values as @ref{space}.\n\n"},{"names":["iprimaries"],"info":"Override input color primaries. Same accepted values as @ref{primaries}.\n\n"},{"names":["itrc"],"info":"Override input transfer characteristics. Same accepted values as @ref{trc}.\n\n"},{"names":["irange"],"info":"Override input color range. Same accepted values as @ref{range}.\n\n\nThe filter converts the transfer characteristics, color space and color\nprimaries to the specified user values. The output value, if not specified,\nis set to a default value based on the \"all\" property. If that property is\nalso not specified, the filter will log an error. The output color range and\nformat default to the same value as the input color range and format. The\ninput transfer characteristics, color space, color primaries and color range\nshould be set on the input data. 
If any of these are missing, the filter will\nlog an error and no conversion will take place.\n\nFor example, to convert the input to SMPTE-240M, use the command:\n@example\ncolorspace=smpte240m\n@end example\n\n"}]},{"filtergroup":["convolution"],"info":"\nApply convolution of 3x3, 5x5, 7x7 or horizontal/vertical up to 49 elements.\n\n","options":[{"names":["0m"],"info":""},{"names":["1m"],"info":""},{"names":["2m"],"info":""},{"names":["3m"],"info":"Set matrix for each plane.\nThe matrix is a sequence of 9, 25 or 49 signed integers in @var{square} mode,\nand an odd number of signed integers, from 1 to 49, in @var{row} mode.\n\n"},{"names":["0rdiv"],"info":""},{"names":["1rdiv"],"info":""},{"names":["2rdiv"],"info":""},{"names":["3rdiv"],"info":"Set multiplier for calculated value for each plane.\nIf unset or 0, it will be the sum of all matrix elements.\n\n"},{"names":["0bias"],"info":""},{"names":["1bias"],"info":""},{"names":["2bias"],"info":""},{"names":["3bias"],"info":"Set bias for each plane. This value is added to the result of the multiplication.\nUseful for making the overall image brighter or darker. Default is 0.0.\n\n"},{"names":["0mode"],"info":""},{"names":["1mode"],"info":""},{"names":["2mode"],"info":""},{"names":["3mode"],"info":"Set matrix mode for each plane. 
Can be @var{square}, @var{row} or @var{column}.\nDefault is @var{square}.\n\n","examples":"@subsection Examples\n\n@itemize\n@item\nApply sharpen:\n@example\nconvolution=\"0 -1 0 -1 5 -1 0 -1 0:0 -1 0 -1 5 -1 0 -1 0:0 -1 0 -1 5 -1 0 -1 0:0 -1 0 -1 5 -1 0 -1 0\"\n@end example\n\n@item\nApply blur:\n@example\nconvolution=\"1 1 1 1 1 1 1 1 1:1 1 1 1 1 1 1 1 1:1 1 1 1 1 1 1 1 1:1 1 1 1 1 1 1 1 1:1/9:1/9:1/9:1/9\"\n@end example\n\n@item\nApply edge enhance:\n@example\nconvolution=\"0 0 0 -1 1 0 0 0 0:0 0 0 -1 1 0 0 0 0:0 0 0 -1 1 0 0 0 0:0 0 0 -1 1 0 0 0 0:5:1:1:1:0:128:128:128\"\n@end example\n\n@item\nApply edge detect:\n@example\nconvolution=\"0 1 0 1 -4 1 0 1 0:0 1 0 1 -4 1 0 1 0:0 1 0 1 -4 1 0 1 0:0 1 0 1 -4 1 0 1 0:5:5:5:1:0:128:128:128\"\n@end example\n\n@item\nApply laplacian edge detector which includes diagonals:\n@example\nconvolution=\"1 1 1 1 -8 1 1 1 1:1 1 1 1 -8 1 1 1 1:1 1 1 1 -8 1 1 1 1:1 1 1 1 -8 1 1 1 1:5:5:5:1:0:128:128:0\"\n@end example\n\n@item\nApply emboss:\n@example\nconvolution=\"-2 -1 0 -1 1 1 0 1 2:-2 -1 0 -1 1 1 0 1 2:-2 -1 0 -1 1 1 0 1 2:-2 -1 0 -1 1 1 0 1 2\"\n@end example\n@end itemize\n\n"}]},{"filtergroup":["convolve"],"info":"\nApply 2D convolution of video stream in frequency domain using second stream\nas impulse.\n\n","options":[{"names":["planes"],"info":"Set which planes to process.\n\n"},{"names":["impulse"],"info":"Set which impulse video frames will be processed, can be @var{first}\nor @var{all}. Default is @var{all}.\n\nThe @code{convolve} filter also supports the @ref{framesync} options.\n\n"}]},{"filtergroup":["copy"],"info":"\nCopy the input video source unchanged to the output. This is mainly useful for\ntesting purposes.\n\n@anchor{coreimage}\n","options":[]},{"filtergroup":["coreimage"],"info":"Video filtering on GPU using Apple's CoreImage API on OSX.\n\nHardware acceleration is based on an OpenGL context. Usually, this means it is\nprocessed by video hardware. 
However, software-based OpenGL implementations\nexist which means there is no guarantee for hardware processing. It depends on\nthe respective OSX version.\n\nThere are many filters and image generators provided by Apple that come with a\nlarge variety of options. The filter has to be referenced by its name along\nwith its options.\n\nThe coreimage filter accepts the following options:\n","options":[{"names":["list_filters"],"info":"List all available filters and generators along with all their respective\noptions as well as possible minimum and maximum values along with the default\nvalues.\n@example\nlist_filters=true\n@end example\n\n"},{"names":["filter"],"info":"Specify all filters by their respective name and options.\nUse @var{list_filters} to determine all valid filter names and options.\nNumerical options are specified by a float value and are automatically clamped\nto their respective value range. Vector and color options have to be specified\nby a list of space-separated float values. Character escaping has to be done.\nA special option name @code{default} is available to use default options for a\nfilter.\n\nIt is required to specify either @code{default} or at least one of the filter options.\nAll omitted options are used with their default values.\nThe syntax of the filter string is as follows:\n@example\nfilter=<NAME>@@<OPTION>=<VALUE>[@@<OPTION>=<VALUE>][@@...][#<NAME>@@<OPTION>=<VALUE>[@@<OPTION>=<VALUE>][@@...]][#...]\n@end example\n\n"},{"names":["output_rect"],"info":"Specify a rectangle where the output of the filter chain is copied into the\ninput image. It is given by a list of space-separated float values:\n@example\noutput_rect=x\\ y\\ width\\ height\n@end example\nIf not given, the output rectangle equals the dimensions of the input image.\nThe output rectangle is automatically cropped at the borders of the input\nimage. 
Negative values are valid for each component.\n@example\noutput_rect=25\\ 25\\ 100\\ 100\n@end example\n\nSeveral filters can be chained for successive processing without GPU-HOST\ntransfers allowing for fast processing of complex filter chains.\nCurrently, only filters with zero (generators) or exactly one (filters) input\nimage and one output image are supported. Also, transition filters are not yet\nusable as intended.\n\nSome filters generate output images with additional padding depending on the\nrespective filter kernel. The padding is automatically removed to ensure the\nfilter output has the same size as the input image.\n\nFor image generators, the size of the output image is determined by the\nprevious output image of the filter chain or the input image of the whole\nfilterchain, respectively. The generators do not use the pixel information of\nthis image to generate their output. However, the generated output is\nblended onto this image, resulting in partial or complete coverage of the\noutput image.\n\nThe @ref{coreimagesrc} video source can be used for generating input images\nwhich are directly fed into the filter chain. 
By using it, providing input\nimages by another video source or an input video is not required.\n\n","examples":"@subsection Examples\n\n@itemize\n\n@item\nList all filters available:\n@example\ncoreimage=list_filters=true\n@end example\n\n@item\nUse the CIBoxBlur filter with default options to blur an image:\n@example\ncoreimage=filter=CIBoxBlur@@default\n@end example\n\n@item\nUse a filter chain with CISepiaTone at default values and CIVignetteEffect with\nits center at 100x100 and a radius of 50 pixels:\n@example\ncoreimage=filter=CIBoxBlur@@default#CIVignetteEffect@@inputCenter=100\\ 100@@inputRadius=50\n@end example\n\n@item\nUse nullsrc and CIQRCodeGenerator to create a QR code for the FFmpeg homepage,\ngiven as complete and escaped command-line for Apple's standard bash shell:\n@example\nffmpeg -f lavfi -i nullsrc=s=100x100,coreimage=filter=CIQRCodeGenerator@@inputMessage=https\\\\\\\\\\://FFmpeg.org/@@inputCorrectionLevel=H -frames:v 1 QRCode.png\n@end example\n@end itemize\n\n"}]},{"filtergroup":["cover_rect"],"info":"\nCover a rectangular object\n\nIt accepts the following options:\n\n","options":[{"names":["cover"],"info":"Filepath of the optional cover image, needs to be in yuv420.\n\n"},{"names":["mode"],"info":"Set covering mode.\n\nIt accepts the following values:\n@item cover\ncover it by the supplied image\n@item blur\ncover it by interpolating the surrounding pixels\n\nDefault value is @var{blur}.\n\n","examples":"@subsection Examples\n\n@itemize\n@item\nCover a rectangular object by the supplied image of a given video using @command{ffmpeg}:\n@example\nffmpeg -i file.ts -vf find_rect=newref.pgm,cover_rect=cover.jpg:mode=cover new.mkv\n@end example\n@end itemize\n\n"}]},{"filtergroup":["crop"],"info":"\nCrop the input video to given dimensions.\n\nIt accepts the following parameters:\n\n","options":[{"names":["w","out_w"],"info":"The width of the output video. 
It defaults to @code{iw}.\nThis expression is evaluated only once during the filter\nconfiguration, or when the @samp{w} or @samp{out_w} command is sent.\n\n"},{"names":["h","out_h"],"info":"The height of the output video. It defaults to @code{ih}.\nThis expression is evaluated only once during the filter\nconfiguration, or when the @samp{h} or @samp{out_h} command is sent.\n\n"},{"names":["x"],"info":"The horizontal position, in the input video, of the left edge of the output\nvideo. It defaults to @code{(in_w-out_w)/2}.\nThis expression is evaluated per-frame.\n\n"},{"names":["y"],"info":"The vertical position, in the input video, of the top edge of the output video.\nIt defaults to @code{(in_h-out_h)/2}.\nThis expression is evaluated per-frame.\n\n"},{"names":["keep_aspect"],"info":"If set to 1, it will force the output display aspect ratio\nto be the same as the input's, by changing the output sample aspect\nratio. It defaults to 0.\n\n"},{"names":["exact"],"info":"Enable exact cropping. If enabled, subsampled videos will be cropped at exact\nwidth/height/x/y as specified and will not be rounded to nearest smaller value.\nIt defaults to 0.\n\nThe @var{out_w}, @var{out_h}, @var{x}, @var{y} parameters are\nexpressions containing the following constants:\n\n@item x\n@item y\nThe computed values for @var{x} and @var{y}. They are evaluated for\neach new frame.\n\n@item in_w\n@item in_h\nThe input width and height.\n\n@item iw\n@item ih\nThese are the same as @var{in_w} and @var{in_h}.\n\n@item out_w\n@item out_h\nThe output (cropped) width and height.\n\n@item ow\n@item oh\nThese are the same as @var{out_w} and @var{out_h}.\n\n@item a\nsame as @var{iw} / @var{ih}\n\n@item sar\ninput sample aspect ratio\n\n@item dar\ninput display aspect ratio, it is the same as (@var{iw} / @var{ih}) * @var{sar}\n\n@item hsub\n@item vsub\nhorizontal and vertical chroma subsample values. 
For example for the\npixel format \"yuv422p\" @var{hsub} is 2 and @var{vsub} is 1.\n\n@item n\nThe number of the input frame, starting from 0.\n\n@item pos\nthe position in the file of the input frame, NAN if unknown\n\n@item t\nThe timestamp expressed in seconds. It's NAN if the input timestamp is unknown.\n\n\nThe expression for @var{out_w} may depend on the value of @var{out_h},\nand the expression for @var{out_h} may depend on @var{out_w}, but they\ncannot depend on @var{x} and @var{y}, as @var{x} and @var{y} are\nevaluated after @var{out_w} and @var{out_h}.\n\nThe @var{x} and @var{y} parameters specify the expressions for the\nposition of the top-left corner of the output (non-cropped) area. They\nare evaluated for each frame. If the evaluated value is not valid, it\nis approximated to the nearest valid value.\n\nThe expression for @var{x} may depend on @var{y}, and the expression\nfor @var{y} may depend on @var{x}.\n\n","examples":"@subsection Examples\n\n@itemize\n@item\nCrop area with size 100x100 at position (12,34).\n@example\ncrop=100:100:12:34\n@end example\n\nUsing named options, the example above becomes:\n@example\ncrop=w=100:h=100:x=12:y=34\n@end example\n\n@item\nCrop the central input area with size 100x100:\n@example\ncrop=100:100\n@end example\n\n@item\nCrop the central input area with size 2/3 of the input video:\n@example\ncrop=2/3*in_w:2/3*in_h\n@end example\n\n@item\nCrop the input video central square:\n@example\ncrop=out_w=in_h\ncrop=in_h\n@end example\n\n@item\nDelimit the rectangle with the top-left corner placed at position\n100:100 and the right-bottom corner corresponding to the right-bottom\ncorner of the input image.\n@example\ncrop=in_w-100:in_h-100:100:100\n@end example\n\n@item\nCrop 10 pixels from the left and right borders, and 20 pixels from\nthe top and bottom borders\n@example\ncrop=in_w-2*10:in_h-2*20\n@end example\n\n@item\nKeep only the bottom right quarter of the input 
image:\n@example\ncrop=in_w/2:in_h/2:in_w/2:in_h/2\n@end example\n\n@item\nCrop height for getting Greek harmony:\n@example\ncrop=in_w:1/PHI*in_w\n@end example\n\n@item\nApply trembling effect:\n@example\ncrop=in_w/2:in_h/2:(in_w-out_w)/2+((in_w-out_w)/2)*sin(n/10):(in_h-out_h)/2 +((in_h-out_h)/2)*sin(n/7)\n@end example\n\n@item\nApply erratic camera effect depending on timestamp:\n@example\ncrop=in_w/2:in_h/2:(in_w-out_w)/2+((in_w-out_w)/2)*sin(t*10):(in_h-out_h)/2 +((in_h-out_h)/2)*sin(t*13)\n@end example\n\n@item\nSet x depending on the value of y:\n@example\ncrop=in_w/2:in_h/2:y:10+10*sin(n/10)\n@end example\n@end itemize\n\n@subsection Commands\n\nThis filter supports the following commands:\n@table @option\n@item w, out_w\n@item h, out_h\n@item x\n@item y\nSet width/height of the output video and the horizontal/vertical position\nin the input video.\nThe command accepts the same syntax as the corresponding option.\n\nIf the specified expression is not valid, it is kept at its current\nvalue.\n@end table\n\n"}]},{"filtergroup":["cropdetect"],"info":"\nAuto-detect the crop size.\n\nIt calculates the necessary cropping parameters and prints the\nrecommended parameters via the logging system. The detected dimensions\ncorrespond to the non-black area of the input video.\n\nIt accepts the following parameters:\n\n","options":[{"names":["limit"],"info":"Set higher black value threshold, which can be optionally specified\nfrom nothing (0) to everything (255 for 8-bit based formats). An intensity\nvalue greater than the set value is considered non-black. It defaults to 24.\nYou can also specify a value between 0.0 and 1.0 which will be scaled depending\non the bitdepth of the pixel format.\n\n"},{"names":["round"],"info":"The value which the width/height should be divisible by. It defaults to\n16. The offset is automatically adjusted to center the video. Use 2 to\nget only even dimensions (needed for 4:2:2 video). 
16 is best when\nencoding to most video codecs.\n\n"},{"names":["reset_count","reset"],"info":"Set the counter that determines after how many frames cropdetect will\nreset the previously detected largest video area and start over to\ndetect the current optimal crop area. Default value is 0.\n\nThis can be useful when channel logos distort the video area. 0\nindicates 'never reset', and returns the largest area encountered during\nplayback.\n\n@anchor{cue}\n"}]},{"filtergroup":["cue"],"info":"\nDelay video filtering until a given wallclock timestamp. The filter first\npasses on @option{preroll} amount of frames, then it buffers at most\n@option{buffer} amount of frames and waits for the cue. After reaching the cue\nit forwards the buffered frames and also any subsequent frames coming in its\ninput.\n\nThe filter can be used to synchronize the output of multiple ffmpeg processes for\nrealtime output devices like decklink. By putting the delay in the filtering\nchain and pre-buffering frames, the process can pass on data to output almost\nimmediately after the target wallclock timestamp is reached.\n\nPerfect frame accuracy cannot be guaranteed, but the result is good enough for\nsome use cases.\n\n","options":[{"names":["cue"],"info":"The cue timestamp expressed as a UNIX timestamp in microseconds. Default is 0.\n\n"},{"names":["preroll"],"info":"The duration of content to pass on as preroll expressed in seconds. Default is 0.\n\n"},{"names":["buffer"],"info":"The maximum duration of content to buffer before waiting for the cue expressed\nin seconds. Default is 0.\n\n\n@anchor{curves}\n"}]},{"filtergroup":["curves"],"info":"\nApply color adjustments using curves.\n\nThis filter is similar to the Adobe Photoshop and GIMP curves tools. Each\ncomponent (red, green and blue) has its values defined by @var{N} key points\nconnected to each other using a smooth curve. 
The x-axis represents the pixel\nvalues from the input frame, and the y-axis the new pixel values to be set for\nthe output frame.\n\nBy default, a component curve is defined by the two points @var{(0;0)} and\n@var{(1;1)}. This creates a straight line where each original pixel value is\n\"adjusted\" to its own value, which means no change to the image.\n\nThe filter allows you to redefine these two points and add some more. A new\ncurve (using a natural cubic spline interpolation) will be defined to pass\nsmoothly through all these new coordinates. The newly defined points need to be\nstrictly increasing over the x-axis, and their @var{x} and @var{y} values must\nbe in the @var{[0;1]} interval. If the computed curves happen to go outside\nthis interval, the values will be clipped accordingly.\n\n","options":[{"names":["preset"],"info":"Select one of the available color presets. This option can be used in addition\nto the @option{r}, @option{g}, @option{b} parameters; in this case, the latter\noptions take priority over the preset values.\nAvailable presets are:\n@item none\n@item color_negative\n@item cross_process\n@item darker\n@item increase_contrast\n@item lighter\n@item linear_contrast\n@item medium_contrast\n@item negative\n@item strong_contrast\n@item vintage\nDefault is @code{none}.\n"},{"names":["master","m"],"info":"Set the master key points. These points will define a second pass mapping. It\nis sometimes called a \"luminance\" or \"value\" mapping. It can be used with\n@option{r}, @option{g}, @option{b} or @option{all} since it acts like a\npost-processing LUT.\n"},{"names":["red","r"],"info":"Set the key points for the red component.\n"},{"names":["green","g"],"info":"Set the key points for the green component.\n"},{"names":["blue","b"],"info":"Set the key points for the blue component.\n"},{"names":["all"],"info":"Set the key points for all components (not including master).\nCan be used in addition to the other key points component\noptions. 
In this case, the unset component(s) will fall back on this\n@option{all} setting.\n"},{"names":["psfile"],"info":"Specify a Photoshop curves file (@code{.acv}) to import the settings from.\n"},{"names":["plot"],"info":"Save a Gnuplot script of the curves in the specified file.\n\nTo avoid some filtergraph syntax conflicts, each key point list needs to be\ndefined using the following syntax: @code{x0/y0 x1/y1 x2/y2 ...}.\n\n","examples":"@subsection Examples\n\n@itemize\n@item\nSlightly increase the middle level of blue:\n@example\ncurves=blue='0/0 0.5/0.58 1/1'\n@end example\n\n@item\nVintage effect:\n@example\ncurves=r='0/0.11 .42/.51 1/0.95':g='0/0 0.50/0.48 1/1':b='0/0.22 .49/.44 1/0.8'\n@end example\nHere we obtain the following coordinates for each component:\n@table @var\n@item red\n@code{(0;0.11) (0.42;0.51) (1;0.95)}\n@item green\n@code{(0;0) (0.50;0.48) (1;1)}\n@item blue\n@code{(0;0.22) (0.49;0.44) (1;0.80)}\n@end table\n\n@item\nThe previous example can also be achieved with the associated built-in preset:\n@example\ncurves=preset=vintage\n@end example\n\n@item\nOr simply:\n@example\ncurves=vintage\n@end example\n\n@item\nUse a Photoshop preset and redefine the points of the green component:\n@example\ncurves=psfile='MyCurvesPresets/purple.acv':green='0/0 0.45/0.53 1/1'\n@end example\n\n@item\nCheck out the curves of the @code{cross_process} profile using @command{ffmpeg}\nand @command{gnuplot}:\n@example\nffmpeg -f lavfi -i color -vf curves=cross_process:plot=/tmp/curves.plt -frames:v 1 -f null -\ngnuplot -p /tmp/curves.plt\n@end example\n@end itemize\n\n"}]},{"filtergroup":["datascope"],"info":"\nVideo data analysis filter.\n\nThis filter shows hexadecimal pixel values of part of the video.\n\n","options":[{"names":["size","s"],"info":"Set output video size.\n\n"},{"names":["x"],"info":"Set x offset from where to pick pixels.\n\n"},{"names":["y"],"info":"Set y offset from where to pick pixels.\n\n"},{"names":["mode"],"info":"Set scope mode, can be one of the 
following:\n@item mono\nDraw hexadecimal pixel values with white color on black background.\n\n@item color\nDraw hexadecimal pixel values with input video pixel color on black\nbackground.\n\n@item color2\nDraw hexadecimal pixel values on color background picked from input video,\nthe text color is picked in such a way that it is always visible.\n\n"},{"names":["axis"],"info":"Draw row and column numbers on the left and top of the video.\n\n"},{"names":["opacity"],"info":"Set background opacity.\n\n"}]},{"filtergroup":["dctdnoiz"],"info":"\nDenoise frames using 2D DCT (frequency domain filtering).\n\nThis filter is not designed for real time.\n\n","options":[{"names":["sigma","s"],"info":"Set the noise sigma constant.\n\nThis @var{sigma} defines a hard threshold of @code{3 * sigma}; every DCT\ncoefficient (absolute value) below this threshold will be dropped.\n\nIf you need more advanced filtering, see @option{expr}.\n\nDefault is @code{0}.\n\n"},{"names":["overlap"],"info":"Set the number of overlapping pixels for each block. Since the filter can be slow, you\nmay want to reduce this value, at the cost of a less effective filter and the\nrisk of various artefacts.\n\nIf the overlapping value doesn't permit processing the whole input width or\nheight, a warning will be displayed and the corresponding borders won't be denoised.\n\nDefault value is @var{blocksize}-1, which is the best possible setting.\n\n"},{"names":["expr","e"],"info":"Set the coefficient factor expression.\n\nFor each coefficient of a DCT block, this expression will be evaluated as a\nmultiplier value for the coefficient.\n\nIf this option is set, the @option{sigma} option will be ignored.\n\nThe absolute value of the coefficient can be accessed through the @var{c}\nvariable.\n\n"},{"names":["n"],"info":"Set the @var{blocksize} using the number of bits. 
@code{1<<@var{n}} defines the\n@var{blocksize}, which is the width and height of the processed blocks.\n\nThe default value is @var{3} (8x8) and can be raised to @var{4} for a\n@var{blocksize} of 16x16. Note that changing this setting has huge consequences\non the processing speed. Also, a larger block size does not necessarily mean\nbetter de-noising.\n\n","examples":"@subsection Examples\n\nApply a denoise with a @option{sigma} of @code{4.5}:\n@example\ndctdnoiz=4.5\n@end example\n\nThe same operation can be achieved using the expression system:\n@example\ndctdnoiz=e='gte(c, 4.5*3)'\n@end example\n\nViolent denoise using a block size of @code{16x16}:\n@example\ndctdnoiz=15:n=4\n@end example\n\n"}]},{"filtergroup":["deband"],"info":"\nRemove banding artifacts from input video.\nIt works by replacing banded pixels with the average value of referenced pixels.\n\n","options":[{"names":["1thr"],"info":""},{"names":["2thr"],"info":""},{"names":["3thr"],"info":""},{"names":["4thr"],"info":"Set banding detection threshold for each plane. Default is 0.02.\nValid range is 0.00003 to 0.5.\nIf the difference between the current pixel and a reference pixel is less than the threshold,\nit will be considered as banded.\n\n"},{"names":["range","r"],"info":"Banding detection range in pixels. Default is 16. If positive, a random number\nin the range 0 to the set value will be used. If negative, the exact absolute value\nwill be used.\nThe range defines a square of four pixels around the current pixel.\n\n"},{"names":["direction","d"],"info":"Set direction in radians from which four pixels will be compared. If positive,\na random direction from 0 to the set direction will be picked. If negative, the exact\nabsolute value will be picked. For example direction 0, -PI or -2*PI radians\nwill pick only pixels on the same row and -PI/2 will pick only pixels on the same\ncolumn.\n\n"},{"names":["blur","b"],"info":"If enabled, the current pixel is compared with the average value of all four\nsurrounding pixels. The default is enabled. 
If disabled, the current pixel is\ncompared with all four surrounding pixels. The pixel is considered banded\nonly if all four differences with the surrounding pixels are less than the threshold.\n\n"},{"names":["coupling","c"],"info":"If enabled, the current pixel is changed if and only if all pixel components are banded,\ni.e. the banding detection threshold is triggered for all color components.\nThe default is disabled.\n\n"}]},{"filtergroup":["deblock"],"info":"\nRemove blocking artifacts from input video.\n\n","options":[{"names":["filter"],"info":"Set filter type, can be @var{weak} or @var{strong}. Default is @var{strong}.\nThis controls what kind of deblocking is applied.\n\n"},{"names":["block"],"info":"Set size of block, allowed range is from 4 to 512. Default is @var{8}.\n\n"},{"names":["alpha"],"info":""},{"names":["beta"],"info":""},{"names":["gamma"],"info":""},{"names":["delta"],"info":"Set blocking detection thresholds. Allowed range is 0 to 1.\nDefaults are: @var{0.098} for @var{alpha} and @var{0.05} for the rest.\nUsing a higher threshold gives more deblocking strength.\nSetting @var{alpha} controls threshold detection at the exact edge of the block.\nThe remaining options control threshold detection near the edge, one each for\nbelow/above and left/right. Setting any of those to @var{0} disables\ndeblocking.\n\n"},{"names":["planes"],"info":"Set planes to filter. 
Default is to filter all available planes.\n\n","examples":"@subsection Examples\n\n@itemize\n@item\nDeblock using weak filter and block size of 4 pixels.\n@example\ndeblock=filter=weak:block=4\n@end example\n\n@item\nDeblock using strong filter, block size of 4 pixels and custom thresholds for\ndeblocking more edges.\n@example\ndeblock=filter=strong:block=4:alpha=0.12:beta=0.07:gamma=0.06:delta=0.05\n@end example\n\n@item\nSimilar as above, but filter only first plane.\n@example\ndeblock=filter=strong:block=4:alpha=0.12:beta=0.07:gamma=0.06:delta=0.05:planes=1\n@end example\n\n@item\nSimilar as above, but filter only second and third plane.\n@example\ndeblock=filter=strong:block=4:alpha=0.12:beta=0.07:gamma=0.06:delta=0.05:planes=6\n@end example\n@end itemize\n\n@anchor{decimate}\n"}]},{"filtergroup":["decimate"],"info":"\nDrop duplicated frames at regular intervals.\n\n","options":[{"names":["cycle"],"info":"Set the number of frames from which one will be dropped. Setting this to\n@var{N} means one frame in every batch of @var{N} frames will be dropped.\nDefault is @code{5}.\n\n"},{"names":["dupthresh"],"info":"Set the threshold for duplicate detection. If the difference metric for a frame\nis less than or equal to this value, then it is declared as duplicate. Default\nis @code{1.1}\n\n"},{"names":["scthresh"],"info":"Set scene change threshold. Default is @code{15}.\n\n"},{"names":["blockx"],"info":""},{"names":["blocky"],"info":"Set the size of the x and y-axis blocks used during metric calculations.\nLarger blocks give better noise suppression, but also give worse detection of\nsmall movements. Must be a power of two. Default is @code{32}.\n\n"},{"names":["ppsrc"],"info":"Mark main input as a pre-processed input and activate clean source input\nstream. This allows the input to be pre-processed with various filters to help\nthe metrics calculation while keeping the frame selection lossless. 
When set to\n@code{1}, the first stream is for the pre-processed input, and the second\nstream is the clean source from where the kept frames are chosen. Default is\n@code{0}.\n\n"},{"names":["chroma"],"info":"Set whether or not chroma is considered in the metric calculations. Default is\n@code{1}.\n\n"}]},{"filtergroup":["deconvolve"],"info":"\nApply 2D deconvolution of video stream in frequency domain using second stream\nas impulse.\n\n","options":[{"names":["planes"],"info":"Set which planes to process.\n\n"},{"names":["impulse"],"info":"Set which impulse video frames will be processed, can be @var{first}\nor @var{all}. Default is @var{all}.\n\n"},{"names":["noise"],"info":"Set noise when doing divisions. Default is @var{0.0000001}. Useful when width\nand height are not the same and not a power of 2, or if the stream prior to convolving\nhad noise.\n\nThe @code{deconvolve} filter also supports the @ref{framesync} options.\n\n"}]},{"filtergroup":["dedot"],"info":"\nReduce cross-luminance (dot-crawl) and cross-color (rainbows) from video.\n\nIt accepts the following options:\n\n","options":[{"names":["m"],"info":"Set mode of operation. Can be a combination of @var{dotcrawl} for cross-luminance reduction and/or\n@var{rainbows} for cross-color reduction.\n\n"},{"names":["lt"],"info":"Set spatial luma threshold. Lower values increase reduction of cross-luminance.\n\n"},{"names":["tl"],"info":"Set tolerance for temporal luma. Higher values increase reduction of cross-luminance.\n\n"},{"names":["tc"],"info":"Set tolerance for chroma temporal variation. Higher values increase reduction of cross-color.\n\n"},{"names":["ct"],"info":"Set temporal chroma threshold. 
Lower values increase reduction of cross-color.\n\n"}]},{"filtergroup":["deflate"],"info":"\nApply deflate effect to the video.\n\nThis filter replaces the pixel by the local (3x3) average by taking into account\nonly values lower than the pixel.\n\nIt accepts the following options:\n\n","options":[{"names":["threshold0"],"info":""},{"names":["threshold1"],"info":""},{"names":["threshold2"],"info":""},{"names":["threshold3"],"info":"Limit the maximum change for each plane, default is 65535.\nIf 0, the plane will remain unchanged.\n\n"}]},{"filtergroup":["deflicker"],"info":"\nRemove temporal frame luminance variations.\n\nIt accepts the following options:\n\n","options":[{"names":["size","s"],"info":"Set moving-average filter size in frames. Default is 5. Allowed range is 2 - 129.\n\n"},{"names":["mode","m"],"info":"Set averaging mode to smooth temporal luminance variations.\n\nAvailable values are:\n@item am\nArithmetic mean\n\n@item gm\nGeometric mean\n\n@item hm\nHarmonic mean\n\n@item qm\nQuadratic mean\n\n@item cm\nCubic mean\n\n@item pm\nPower mean\n\n@item median\nMedian\n\n"},{"names":["bypass"],"info":"Do not actually modify the frame. Useful when one only wants metadata.\n\n"}]},{"filtergroup":["dejudder"],"info":"\nRemove judder produced by partially interlaced telecined content.\n\nJudder can be introduced, for instance, by the @ref{pullup} filter. If the original\nsource was partially telecined content then the output of @code{pullup,dejudder}\nwill have a variable frame rate. This may change the recorded frame rate of the\ncontainer. Aside from that change, this filter will not affect constant frame\nrate video.\n\nThe option available in this filter is:\n","options":[{"names":["cycle"],"info":"Specify the length of the window over which the judder repeats.\n\nAccepts any integer greater than 1. 
Useful values are:\n\n@item 4\nIf the original was telecined from 24 to 30 fps (Film to NTSC).\n\n@item 5\nIf the original was telecined from 25 to 30 fps (PAL to NTSC).\n\n@item 20\nIf a mixture of the two.\n\nThe default is @samp{4}.\n\n"}]},{"filtergroup":["delogo"],"info":"\nSuppress a TV station logo by a simple interpolation of the surrounding\npixels. Just set a rectangle covering the logo and watch it disappear\n(and sometimes something even uglier appear - your mileage may vary).\n\nIt accepts the following parameters:\n","options":[{"names":["x"],"info":""},{"names":["y"],"info":"Specify the top left corner coordinates of the logo. They must be\nspecified.\n\n"},{"names":["w"],"info":""},{"names":["h"],"info":"Specify the width and height of the logo to clear. They must be\nspecified.\n\n"},{"names":["band","t"],"info":"Specify the thickness of the fuzzy edge of the rectangle (added to\n@var{w} and @var{h}). The default value is 1. This option is\ndeprecated, setting higher values should no longer be necessary and\nis not recommended.\n\n"},{"names":["show"],"info":"When set to 1, a green rectangle is drawn on the screen to simplify\nfinding the right @var{x}, @var{y}, @var{w}, and @var{h} parameters.\nThe default value is 0.\n\nThe rectangle is drawn on the outermost pixels which will be (partly)\nreplaced with interpolated values. The values of the next pixels\nimmediately outside this rectangle in each direction will be used to\ncompute the interpolated pixel values inside the rectangle.\n\n\n","examples":"@subsection Examples\n\n@itemize\n@item\nSet a rectangle covering the area with top left corner coordinates 0,0\nand size 100x77, and a band of size 10:\n@example\ndelogo=x=0:y=0:w=100:h=77:band=10\n@end example\n\n@end itemize\n\n"}]},{"filtergroup":["derain"],"info":"\nRemove the rain in the input image/video by applying the derain methods based on\nconvolutional neural networks. 
Supported models:\n\n@itemize\n@item\nRecurrent Squeeze-and-Excitation Context Aggregation Net (RESCAN).\nSee @url{http://openaccess.thecvf.com/content_ECCV_2018/papers/Xia_Li_Recurrent_Squeeze-and-Excitation_Context_ECCV_2018_paper.pdf}.\n@end itemize\n\nTraining as well as model generation scripts are provided in\nthe repository at @url{https://github.com/XueweiMeng/derain_filter.git}.\n\nNative model files (.model) can be generated from TensorFlow model\nfiles (.pb) by using tools/python/convert.py.\n\n","options":[{"names":["filter_type"],"info":"Specify which filter to use. This option accepts the following values:\n\n@item derain\nDerain filter. To apply the derain filter, you need to use a derain model.\n\n@item dehaze\nDehaze filter. To apply the dehaze filter, you need to use a dehaze model.\nDefault value is @samp{derain}.\n\n"},{"names":["dnn_backend"],"info":"Specify which DNN backend to use for model loading and execution. This option accepts\nthe following values:\n\n@item native\nNative implementation of DNN loading and execution.\n\n@item tensorflow\nTensorFlow backend. To enable this backend you\nneed to install the TensorFlow for C library (see\n@url{https://www.tensorflow.org/install/install_c}) and configure FFmpeg with\n@code{--enable-libtensorflow}.\nDefault value is @samp{native}.\n\n"},{"names":["model"],"info":"Set path to model file specifying network architecture and its parameters.\nNote that different backends use different file formats; each backend can only\nload files in its own format.\n\n"}]},{"filtergroup":["deshake"],"info":"\nAttempt to fix small changes in horizontal and/or vertical shift. 
This\nfilter helps remove camera shake from hand-holding a camera, bumping a\ntripod, moving on a vehicle, etc.\n\n","options":[{"names":["x"],"info":""},{"names":["y"],"info":""},{"names":["w"],"info":""},{"names":["h"],"info":"Specify a rectangular area where to limit the search for motion\nvectors.\nIf desired the search for motion vectors can be limited to a\nrectangular area of the frame defined by its top left corner, width\nand height. These parameters have the same meaning as the drawbox\nfilter which can be used to visualise the position of the bounding\nbox.\n\nThis is useful when simultaneous movement of subjects within the frame\nmight be confused for camera motion by the motion vector search.\n\nIf any or all of @var{x}, @var{y}, @var{w} and @var{h} are set to -1\nthen the full frame is used. This allows later options to be set\nwithout specifying the bounding box for the motion vector search.\n\nDefault - search the whole frame.\n\n"},{"names":["rx"],"info":""},{"names":["ry"],"info":"Specify the maximum extent of movement in x and y directions in the\nrange 0-64 pixels. Default 16.\n\n"},{"names":["edge"],"info":"Specify how to generate pixels to fill blanks at the edge of the\nframe. Available values are:\n@item blank, 0\nFill zeroes at blank locations\n@item original, 1\nOriginal image at blank locations\n@item clamp, 2\nExtruded edge value at blank locations\n@item mirror, 3\nMirrored edge at blank locations\nDefault value is @samp{mirror}.\n\n"},{"names":["blocksize"],"info":"Specify the blocksize to use for motion search. Range 4-128 pixels,\ndefault 8.\n\n"},{"names":["contrast"],"info":"Specify the contrast threshold for blocks. Only blocks with more than\nthe specified contrast (difference between darkest and lightest\npixels) will be considered. Range 1-255, default 125.\n\n"},{"names":["search"],"info":"Specify the search strategy. 
Available values are:\n@item exhaustive, 0\nSet exhaustive search\n@item less, 1\nSet less exhaustive search.\nDefault value is @samp{exhaustive}.\n\n"},{"names":["filename"],"info":"If set then a detailed log of the motion search is written to the\nspecified file.\n\n\n"}]},{"filtergroup":["despill"],"info":"\nRemove unwanted contamination of foreground colors, caused by reflected color of\ngreenscreen or bluescreen.\n\nThis filter accepts the following options:\n\n","options":[{"names":["type"],"info":"Set what type of despill to use.\n\n"},{"names":["mix"],"info":"Set how spillmap will be generated.\n\n"},{"names":["expand"],"info":"Set how much to get rid of still remaining spill.\n\n"},{"names":["red"],"info":"Controls amount of red in spill area.\n\n"},{"names":["green"],"info":"Controls amount of green in spill area.\nShould be -1 for greenscreen.\n\n"},{"names":["blue"],"info":"Controls amount of blue in spill area.\nShould be -1 for bluescreen.\n\n"},{"names":["brightness"],"info":"Controls brightness of spill area, preserving colors.\n\n"},{"names":["alpha"],"info":"Modify alpha from generated spillmap.\n\n"}]},{"filtergroup":["detelecine"],"info":"\nApply an exact inverse of the telecine operation. It requires a predefined\npattern specified using the pattern option which must be the same as that passed\nto the telecine filter.\n\nThis filter accepts the following options:\n\n","options":[{"names":["first_field"],"info":"@item top, t\ntop field first\n@item bottom, b\nbottom field first\nThe default value is @code{top}.\n\n"},{"names":["pattern"],"info":"A string of numbers representing the pulldown pattern you wish to apply.\nThe default value is @code{23}.\n\n"},{"names":["start_frame"],"info":"A number representing position of the first frame with respect to the telecine\npattern. This is to be used if the stream is cut. 
The default value is @code{0}.\n\n"}]},{"filtergroup":["dilation"],"info":"\nApply dilation effect to the video.\n\nThis filter replaces the pixel by the local(3x3) maximum.\n\nIt accepts the following options:\n\n","options":[{"names":["threshold0"],"info":""},{"names":["threshold1"],"info":""},{"names":["threshold2"],"info":""},{"names":["threshold3"],"info":"Limit the maximum change for each plane, default is 65535.\nIf 0, plane will remain unchanged.\n\n"},{"names":["coordinates"],"info":"Flag which specifies the pixel to refer to. Default is 255 i.e. all eight\npixels are used.\n\nFlags to local 3x3 coordinates maps like this:\n\n 1 2 3\n 4 5\n 6 7 8\n\n"}]},{"filtergroup":["displace"],"info":"\nDisplace pixels as indicated by second and third input stream.\n\nIt takes three input streams and outputs one stream, the first input is the\nsource, and second and third input are displacement maps.\n\nThe second input specifies how much to displace pixels along the\nx-axis, while the third input specifies how much to displace pixels\nalong the y-axis.\nIf one of displacement map streams terminates, last frame from that\ndisplacement map will be used.\n\nNote that once generated, displacements maps can be reused over and over again.\n\nA description of the accepted options follows.\n\n","options":[{"names":["edge"],"info":"Set displace behavior for pixels that are out of range.\n\nAvailable values are:\n@item blank\nMissing pixels are replaced by black pixels.\n\n@item smear\nAdjacent pixels will spread out to replace missing pixels.\n\n@item wrap\nOut of range pixels are wrapped so they point to pixels of other side.\n\n@item mirror\nOut of range pixels will be replaced with mirrored pixels.\nDefault is @samp{smear}.\n\n\n","examples":"@subsection Examples\n\n@itemize\n@item\nAdd ripple effect to rgb input of video size hd720:\n@example\nffmpeg -i INPUT -f lavfi -i nullsrc=s=hd720,lutrgb=128:128:128 -f lavfi -i 
nullsrc=s=hd720,geq='r=128+30*sin(2*PI*X/400+T):g=128+30*sin(2*PI*X/400+T):b=128+30*sin(2*PI*X/400+T)' -lavfi '[0][1][2]displace' OUTPUT\n@end example\n\n@item\nAdd wave effect to rgb input of video size hd720:\n@example\nffmpeg -i INPUT -f lavfi -i nullsrc=hd720,geq='r=128+80*(sin(sqrt((X-W/2)*(X-W/2)+(Y-H/2)*(Y-H/2))/220*2*PI+T)):g=128+80*(sin(sqrt((X-W/2)*(X-W/2)+(Y-H/2)*(Y-H/2))/220*2*PI+T)):b=128+80*(sin(sqrt((X-W/2)*(X-W/2)+(Y-H/2)*(Y-H/2))/220*2*PI+T))' -lavfi '[1]split[x][y],[0][x][y]displace' OUTPUT\n@end example\n@end itemize\n\n"}]},{"filtergroup":["dnn_processing"],"info":"\nDo image processing with deep neural networks. Currently only AVFrame with RGB24\nand BGR24 are supported, more formats will be added later.\n\n","options":[{"names":["dnn_backend"],"info":"Specify which DNN backend to use for model loading and execution. This option accepts\nthe following values:\n\n@item native\nNative implementation of DNN loading and execution.\n\n@item tensorflow\nTensorFlow backend. To enable this backend you\nneed to install the TensorFlow for C library (see\n@url{https://www.tensorflow.org/install/install_c}) and configure FFmpeg with\n@code{--enable-libtensorflow}\n\nDefault value is @samp{native}.\n\n"},{"names":["model"],"info":"Set path to model file specifying network architecture and its parameters.\nNote that different backends use different file formats. TensorFlow and native\nbackend can load files for only its format.\n\nNative model file (.model) can be generated from TensorFlow model file (.pb) by using tools/python/convert.py\n\n"},{"names":["input"],"info":"Set the input name of the dnn network.\n\n"},{"names":["output"],"info":"Set the output name of the dnn network.\n\n"},{"names":["fmt"],"info":"Set the pixel format for the Frame. 
Allowed values are @code{AV_PIX_FMT_RGB24}, and @code{AV_PIX_FMT_BGR24}.\nDefault value is @code{AV_PIX_FMT_RGB24}.\n\n\n"}]},{"filtergroup":["drawbox"],"info":"\nDraw a colored box on the input image.\n\nIt accepts the following parameters:\n\n","options":[{"names":["x"],"info":""},{"names":["y"],"info":"The expressions which specify the top left corner coordinates of the box. It defaults to 0.\n\n"},{"names":["width","w"],"info":""},{"names":["height","h"],"info":"The expressions which specify the width and height of the box; if 0 they are interpreted as\nthe input width and height. It defaults to 0.\n\n"},{"names":["color","c"],"info":"Specify the color of the box to write. For the general syntax of this option,\ncheck the @ref{color syntax,,\"Color\" section in the ffmpeg-utils manual,ffmpeg-utils}. If the special\nvalue @code{invert} is used, the box edge color is the same as the\nvideo with inverted luma.\n\n"},{"names":["thickness","t"],"info":"The expression which sets the thickness of the box edge.\nA value of @code{fill} will create a filled box. Default value is @code{3}.\n\nSee below for the list of accepted constants.\n\n"},{"names":["replace"],"info":"Applicable if the input has alpha. With value @code{1}, the pixels of the painted box\nwill overwrite the video's color and alpha pixels.\nDefault is @code{0}, which composites the box onto the input, leaving the video's alpha intact.\n\nThe parameters for @var{x}, @var{y}, @var{w} and @var{h} and @var{t} are expressions containing the\nfollowing constants:\n\n@item dar\nThe input display aspect ratio, it is the same as (@var{w} / @var{h}) * @var{sar}.\n\n@item hsub\n@item vsub\nhorizontal and vertical chroma subsample values. 
For example for the\npixel format \"yuv422p\" @var{hsub} is 2 and @var{vsub} is 1.\n\n@item in_h, ih\n@item in_w, iw\nThe input width and height.\n\n@item sar\nThe input sample aspect ratio.\n\n@item x\n@item y\nThe x and y offset coordinates where the box is drawn.\n\n@item w\n@item h\nThe width and height of the drawn box.\n\n@item t\nThe thickness of the drawn box.\n\nThese constants allow the @var{x}, @var{y}, @var{w}, @var{h} and @var{t} expressions to refer to\neach other, so you may for example specify @code{y=x/dar} or @code{h=w/dar}.\n\n\n","examples":"@subsection Examples\n\n@itemize\n@item\nDraw a black box around the edge of the input image:\n@example\ndrawbox\n@end example\n\n@item\nDraw a box with color red and an opacity of 50%:\n@example\ndrawbox=10:20:200:60:red@@0.5\n@end example\n\nThe previous example can be specified as:\n@example\ndrawbox=x=10:y=20:w=200:h=60:color=red@@0.5\n@end example\n\n@item\nFill the box with pink color:\n@example\ndrawbox=x=10:y=10:w=100:h=100:color=pink@@0.5:t=fill\n@end example\n\n@item\nDraw a 2-pixel red 2.40:1 mask:\n@example\ndrawbox=x=-t:y=0.5*(ih-iw/2.4)-t:w=iw+t*2:h=iw/2.4+t*2:t=2:c=red\n@end example\n@end itemize\n\n@subsection Commands\nThis filter supports same commands as options.\nThe command accepts the same syntax of the corresponding option.\n\nIf the specified expression is not valid, it is kept at its current\nvalue.\n\n@anchor{drawgraph}\n"}]},{"filtergroup":["drawgraph"],"info":"Draw a graph using input video metadata.\n\nIt accepts the following parameters:\n\n","options":[{"names":["m1"],"info":"Set 1st frame metadata key from which metadata values will be used to draw a graph.\n\n"},{"names":["fg1"],"info":"Set 1st foreground color expression.\n\n"},{"names":["m2"],"info":"Set 2nd frame metadata key from which metadata values will be used to draw a graph.\n\n"},{"names":["fg2"],"info":"Set 2nd foreground color expression.\n\n"},{"names":["m3"],"info":"Set 3rd frame metadata key from which 
metadata values will be used to draw a graph.\n\n"},{"names":["fg3"],"info":"Set 3rd foreground color expression.\n\n"},{"names":["m4"],"info":"Set 4th frame metadata key from which metadata values will be used to draw a graph.\n\n"},{"names":["fg4"],"info":"Set 4th foreground color expression.\n\n"},{"names":["min"],"info":"Set minimal value of metadata value.\n\n"},{"names":["max"],"info":"Set maximal value of metadata value.\n\n"},{"names":["bg"],"info":"Set graph background color. Default is white.\n\n"},{"names":["mode"],"info":"Set graph mode.\n\nAvailable values for mode is:\n@item bar\n@item dot\n@item line\n\nDefault is @code{line}.\n\n"},{"names":["slide"],"info":"Set slide mode.\n\nAvailable values for slide is:\n@item frame\nDraw new frame when right border is reached.\n\n@item replace\nReplace old columns with new ones.\n\n@item scroll\nScroll from right to left.\n\n@item rscroll\nScroll from left to right.\n\n@item picture\nDraw single picture.\n\nDefault is @code{frame}.\n\n"},{"names":["size"],"info":"Set size of graph video. 
For the syntax of this option, check the\n@ref{video size syntax,,\"Video size\" section in the ffmpeg-utils manual,ffmpeg-utils}.\nThe default value is @code{900x256}.\n\nThe foreground color expressions can use the following variables:\n@item MIN\nMinimal value of metadata value.\n\n@item MAX\nMaximal value of metadata value.\n\n@item VAL\nCurrent metadata key value.\n\nThe color is defined as 0xAABBGGRR.\n\nExample using metadata from @ref{signalstats} filter:\n@example\nsignalstats,drawgraph=lavfi.signalstats.YAVG:min=0:max=255\n@end example\n\nExample using metadata from @ref{ebur128} filter:\n@example\nebur128=metadata=1,adrawgraph=lavfi.r128.M:min=-120:max=5\n@end example\n\n"}]},{"filtergroup":["drawgrid"],"info":"\nDraw a grid on the input image.\n\nIt accepts the following parameters:\n\n","options":[{"names":["x"],"info":""},{"names":["y"],"info":"The expressions which specify the coordinates of some point of grid intersection (meant to configure offset). Both default to 0.\n\n"},{"names":["width","w"],"info":""},{"names":["height","h"],"info":"The expressions which specify the width and height of the grid cell, if 0 they are interpreted as the\ninput width and height, respectively, minus @code{thickness}, so image gets\nframed. Default to 0.\n\n"},{"names":["color","c"],"info":"Specify the color of the grid. For the general syntax of this option,\ncheck the @ref{color syntax,,\"Color\" section in the ffmpeg-utils manual,ffmpeg-utils}. If the special\nvalue @code{invert} is used, the grid color is the same as the\nvideo with inverted luma.\n\n"},{"names":["thickness","t"],"info":"The expression which sets the thickness of the grid line. Default value is @code{1}.\n\nSee below for the list of accepted constants.\n\n"},{"names":["replace"],"info":"Applicable if the input has alpha. 
With @code{1} the pixels of the painted grid\nwill overwrite the video's color and alpha pixels.\nDefault is @code{0}, which composites the grid onto the input, leaving the video's alpha intact.\n\nThe parameters for @var{x}, @var{y}, @var{w} and @var{h} and @var{t} are expressions containing the\nfollowing constants:\n\n@item dar\nThe input display aspect ratio, it is the same as (@var{w} / @var{h}) * @var{sar}.\n\n@item hsub\n@item vsub\nhorizontal and vertical chroma subsample values. For example for the\npixel format \"yuv422p\" @var{hsub} is 2 and @var{vsub} is 1.\n\n@item in_h, ih\n@item in_w, iw\nThe input grid cell width and height.\n\n@item sar\nThe input sample aspect ratio.\n\n@item x\n@item y\nThe x and y coordinates of some point of grid intersection (meant to configure offset).\n\n@item w\n@item h\nThe width and height of the drawn cell.\n\n@item t\nThe thickness of the drawn cell.\n\nThese constants allow the @var{x}, @var{y}, @var{w}, @var{h} and @var{t} expressions to refer to\neach other, so you may for example specify @code{y=x/dar} or @code{h=w/dar}.\n\n\n","examples":"@subsection Examples\n\n@itemize\n@item\nDraw a grid with cell 100x100 pixels, thickness 2 pixels, with color red and an opacity of 50%:\n@example\ndrawgrid=width=100:height=100:thickness=2:color=red@@0.5\n@end example\n\n@item\nDraw a white 3x3 grid with an opacity of 50%:\n@example\ndrawgrid=w=iw/3:h=ih/3:t=2:c=white@@0.5\n@end example\n@end itemize\n\n@subsection Commands\nThis filter supports same commands as options.\nThe command accepts the same syntax of the corresponding option.\n\nIf the specified expression is not valid, it is kept at its current\nvalue.\n\n@anchor{drawtext}\n"}]},{"filtergroup":["drawtext"],"info":"\nDraw a text string or text from a specified file on top of a video, using the\nlibfreetype library.\n\nTo enable compilation of this filter, you need to configure FFmpeg with\n@code{--enable-libfreetype}.\nTo enable default font fallback and the @var{font} 
option you need to\nconfigure FFmpeg with @code{--enable-libfontconfig}.\nTo enable the @var{text_shaping} option, you need to configure FFmpeg with\n@code{--enable-libfribidi}.\n\n@subsection Syntax\n\nIt accepts the following parameters:\n\n","options":[{"names":["box"],"info":"Used to draw a box around text using the background color.\nThe value must be either 1 (enable) or 0 (disable).\nThe default value of @var{box} is 0.\n\n"},{"names":["boxborderw"],"info":"Set the width of the border to be drawn around the box using @var{boxcolor}.\nThe default value of @var{boxborderw} is 0.\n\n"},{"names":["boxcolor"],"info":"The color to be used for drawing box around text. For the syntax of this\noption, check the @ref{color syntax,,\"Color\" section in the ffmpeg-utils manual,ffmpeg-utils}.\n\nThe default value of @var{boxcolor} is \"white\".\n\n"},{"names":["line_spacing"],"info":"Set the line spacing in pixels of the border to be drawn around the box using @var{box}.\nThe default value of @var{line_spacing} is 0.\n\n"},{"names":["borderw"],"info":"Set the width of the border to be drawn around the text using @var{bordercolor}.\nThe default value of @var{borderw} is 0.\n\n"},{"names":["bordercolor"],"info":"Set the color to be used for drawing border around text. For the syntax of this\noption, check the @ref{color syntax,,\"Color\" section in the ffmpeg-utils manual,ffmpeg-utils}.\n\nThe default value of @var{bordercolor} is \"black\".\n\n"},{"names":["expansion"],"info":"Select how the @var{text} is expanded. Can be either @code{none},\n@code{strftime} (deprecated) or\n@code{normal} (default). See the @ref{drawtext_expansion, Text expansion} section\nbelow for details.\n\n"},{"names":["basetime"],"info":"Set a start time for the count. Value is in microseconds. Only applied\nin the deprecated strftime expansion mode. 
To emulate in normal expansion\nmode use the @code{pts} function, supplying the start time (in seconds)\nas the second argument.\n\n"},{"names":["fix_bounds"],"info":"If true, check and fix text coords to avoid clipping.\n\n"},{"names":["fontcolor"],"info":"The color to be used for drawing fonts. For the syntax of this option, check\nthe @ref{color syntax,,\"Color\" section in the ffmpeg-utils manual,ffmpeg-utils}.\n\nThe default value of @var{fontcolor} is \"black\".\n\n"},{"names":["fontcolor_expr"],"info":"String which is expanded the same way as @var{text} to obtain dynamic\n@var{fontcolor} value. By default this option has empty value and is not\nprocessed. When this option is set, it overrides @var{fontcolor} option.\n\n"},{"names":["font"],"info":"The font family to be used for drawing text. By default Sans.\n\n"},{"names":["fontfile"],"info":"The font file to be used for drawing text. The path must be included.\nThis parameter is mandatory if the fontconfig support is disabled.\n\n"},{"names":["alpha"],"info":"Draw the text applying alpha blending. 
The value can\nbe a number between 0.0 and 1.0.\nThe expression accepts the same variables @var{x, y} as well.\nThe default value is 1.\nPlease see @var{fontcolor_expr}.\n\n"},{"names":["fontsize"],"info":"The font size to be used for drawing text.\nThe default value of @var{fontsize} is 16.\n\n"},{"names":["text_shaping"],"info":"If set to 1, attempt to shape the text (for example, reverse the order of\nright-to-left text and join Arabic characters) before drawing it.\nOtherwise, just draw the text exactly as given.\nBy default 1 (if supported).\n\n"},{"names":["ft_load_flags"],"info":"The flags to be used for loading the fonts.\n\nThe flags map the corresponding flags supported by libfreetype, and are\na combination of the following values:\n@item default\n@item no_scale\n@item no_hinting\n@item render\n@item no_bitmap\n@item vertical_layout\n@item force_autohint\n@item crop_bitmap\n@item pedantic\n@item ignore_global_advance_width\n@item no_recurse\n@item ignore_transform\n@item monochrome\n@item linear_design\n@item no_autohint\n\nDefault value is \"default\".\n\nFor more information consult the documentation for the FT_LOAD_*\nlibfreetype flags.\n\n"},{"names":["shadowcolor"],"info":"The color to be used for drawing a shadow behind the drawn text. For the\nsyntax of this option, check the @ref{color syntax,,\"Color\" section in the\nffmpeg-utils manual,ffmpeg-utils}.\n\nThe default value of @var{shadowcolor} is \"black\".\n\n"},{"names":["shadowx"],"info":""},{"names":["shadowy"],"info":"The x and y offsets for the text shadow position with respect to the\nposition of the text. They can be either positive or negative\nvalues. The default value for both is \"0\".\n\n"},{"names":["start_number"],"info":"The starting frame number for the n/frame_num variable. 
The default value\nis \"0\".\n\n"},{"names":["tabsize"],"info":"The size in number of spaces to use for rendering the tab.\nDefault value is 4.\n\n"},{"names":["timecode"],"info":"Set the initial timecode representation in \"hh:mm:ss[:;.]ff\"\nformat. It can be used with or without text parameter. @var{timecode_rate}\noption must be specified.\n\n"},{"names":["timecode_rate","rate","r"],"info":"Set the timecode frame rate (timecode only). Value will be rounded to nearest\ninteger. Minimum value is \"1\".\nDrop-frame timecode is supported for frame rates 30 & 60.\n\n"},{"names":["tc24hmax"],"info":"If set to 1, the output of the timecode option will wrap around at 24 hours.\nDefault is 0 (disabled).\n\n"},{"names":["text"],"info":"The text string to be drawn. The text must be a sequence of UTF-8\nencoded characters.\nThis parameter is mandatory if no file is specified with the parameter\n@var{textfile}.\n\n"},{"names":["textfile"],"info":"A text file containing text to be drawn. The text must be a sequence\nof UTF-8 encoded characters.\n\nThis parameter is mandatory if no text string is specified with the\nparameter @var{text}.\n\nIf both @var{text} and @var{textfile} are specified, an error is thrown.\n\n"},{"names":["reload"],"info":"If set to 1, the @var{textfile} will be reloaded before each frame.\nBe sure to update it atomically, or it may be read partially, or even fail.\n\n"},{"names":["x"],"info":""},{"names":["y"],"info":"The expressions which specify the offsets where text will be drawn\nwithin the video frame. 
They are relative to the top/left border of the\noutput image.\n\nThe default value of @var{x} and @var{y} is \"0\".\n\nSee below for the list of accepted constants and functions.\n\nThe parameters for @var{x} and @var{y} are expressions containing the\nfollowing constants and functions:\n\n@item dar\ninput display aspect ratio, it is the same as (@var{w} / @var{h}) * @var{sar}\n\n@item hsub\n@item vsub\nhorizontal and vertical chroma subsample values. For example for the\npixel format \"yuv422p\" @var{hsub} is 2 and @var{vsub} is 1.\n\n@item line_h, lh\nthe height of each text line\n\n@item main_h, h, H\nthe input height\n\n@item main_w, w, W\nthe input width\n\n@item max_glyph_a, ascent\nthe maximum distance from the baseline to the highest/upper grid\ncoordinate used to place a glyph outline point, for all the rendered\nglyphs.\nIt is a positive value, due to the grid's orientation with the Y axis\nupwards.\n\n@item max_glyph_d, descent\nthe maximum distance from the baseline to the lowest grid coordinate\nused to place a glyph outline point, for all the rendered glyphs.\nThis is a negative value, due to the grid's orientation, with the Y axis\nupwards.\n\n@item max_glyph_h\nmaximum glyph height, that is the maximum height for all the glyphs\ncontained in the rendered text, it is equivalent to @var{ascent} -\n@var{descent}.\n\n@item max_glyph_w\nmaximum glyph width, that is the maximum width for all the glyphs\ncontained in the rendered text\n\n@item n\nthe number of input frame, starting from 0\n\n@item rand(min, max)\nreturn a random number included between @var{min} and @var{max}\n\n@item sar\nThe input sample aspect ratio.\n\n@item t\ntimestamp expressed in seconds, NAN if the input timestamp is unknown\n\n@item text_h, th\nthe height of the rendered text\n\n@item text_w, tw\nthe width of the rendered text\n\n@item x\n@item y\nthe x and y offset coordinates where the text is drawn.\n\nThese parameters allow the @var{x} and @var{y} expressions to refer\nto 
each other, so you can for example specify @code{y=x/dar}.\n\n@item pict_type\nA one character description of the current frame's picture type.\n\n@item pkt_pos\nThe current packet's position in the input file or stream\n(in bytes, from the start of the input). A value of -1 indicates\nthis info is not available.\n\n@item pkt_duration\nThe current packet's duration, in seconds.\n\n@item pkt_size\nThe current packet's size (in bytes).\n\n@anchor{drawtext_expansion}\n@subsection Text expansion\n\nIf @option{expansion} is set to @code{strftime},\nthe filter recognizes strftime() sequences in the provided text and\nexpands them accordingly. Check the documentation of strftime(). This\nfeature is deprecated.\n\nIf @option{expansion} is set to @code{none}, the text is printed verbatim.\n\nIf @option{expansion} is set to @code{normal} (which is the default),\nthe following expansion mechanism is used.\n\nThe backslash character @samp{\\}, followed by any character, always expands to\nthe second character.\n\nSequences of the form @code{%@{...@}} are expanded. The text between the\nbraces is a function name, possibly followed by arguments separated by ':'.\nIf the arguments contain special characters or delimiters (':' or '@}'),\nthey should be escaped.\n\nNote that they probably must also be escaped as the value for the\n@option{text} option in the filter argument string and as the filter\nargument in the filtergraph description, and possibly also for the shell,\nthat makes up to four levels of escaping; using a text file avoids these\nproblems.\n\nThe following functions are available:\n\n\n@item expr, e\nThe expression evaluation result.\n\nIt must take one argument specifying the expression to be evaluated,\nwhich accepts the same constants and functions as the @var{x} and\n@var{y} values. 
Note that not all constants should be used, for\nexample the text size is not known when evaluating the expression, so\nthe constants @var{text_w} and @var{text_h} will have an undefined\nvalue.\n\n@item expr_int_format, eif\nEvaluate the expression's value and output as formatted integer.\n\nThe first argument is the expression to be evaluated, just as for the @var{expr} function.\nThe second argument specifies the output format. Allowed values are @samp{x},\n@samp{X}, @samp{d} and @samp{u}. They are treated exactly as in the\n@code{printf} function.\nThe third parameter is optional and sets the number of positions taken by the output.\nIt can be used to add padding with zeros from the left.\n\n@item gmtime\nThe time at which the filter is running, expressed in UTC.\nIt can accept an argument: a strftime() format string.\n\n@item localtime\nThe time at which the filter is running, expressed in the local time zone.\nIt can accept an argument: a strftime() format string.\n\n@item metadata\nFrame metadata. 
Takes one or two arguments.\n\nThe first argument is mandatory and specifies the metadata key.\n\nThe second argument is optional and specifies a default value, used when the\nmetadata key is not found or empty.\n\nAvailable metadata can be identified by inspecting entries\nstarting with TAG included within each frame section\nprinted by running @code{ffprobe -show_frames}.\n\nString metadata generated in filters leading to\nthe drawtext filter are also available.\n\n@item n, frame_num\nThe frame number, starting from 0.\n\n@item pict_type\nA one character description of the current picture type.\n\n@item pts\nThe timestamp of the current frame.\nIt can take up to three arguments.\n\nThe first argument is the format of the timestamp; it defaults to @code{flt}\nfor seconds as a decimal number with microsecond accuracy; @code{hms} stands\nfor a formatted @var{[-]HH:MM:SS.mmm} timestamp with millisecond accuracy.\n@code{gmtime} stands for the timestamp of the frame formatted as UTC time;\n@code{localtime} stands for the timestamp of the frame formatted as\nlocal time zone time.\n\nThe second argument is an offset added to the timestamp.\n\nIf the format is set to @code{hms}, a third argument @code{24HH} may be\nsupplied to present the hour part of the formatted timestamp in 24h format\n(00-23).\n\nIf the format is set to @code{localtime} or @code{gmtime},\na third argument may be supplied: a strftime() format string.\nBy default, @var{YYYY-MM-DD HH:MM:SS} format will be used.\n\n@subsection Commands\n\nThis filter supports altering parameters via commands:\n@item reinit\nAlter existing filter parameters.\n\nSyntax for the argument is the same as for filter invocation, e.g.\n\n@example\nfontsize=56:fontcolor=green:text='Hello World'\n@end example\n\nFull filter invocation with sendcmd would look like this:\n\n@example\nsendcmd=c='56.0 drawtext reinit fontsize=56\\:fontcolor=green\\:text=Hello\\\\ World'\n@end example\n\nIf the entire argument can't be parsed or applied 
as valid values then the filter will\ncontinue with its existing parameters.\n\n","examples":"@subsection Examples\n\n@itemize\n@item\nDraw \"Test Text\" with font FreeSerif, using the default values for the\noptional parameters.\n\n@example\ndrawtext=\"fontfile=/usr/share/fonts/truetype/freefont/FreeSerif.ttf: text='Test Text'\"\n@end example\n\n@item\nDraw 'Test Text' with font FreeSerif of size 24 at position x=100\nand y=50 (counting from the top-left corner of the screen), text is\nyellow with a red box around it. Both the text and the box have an\nopacity of 20%.\n\n@example\ndrawtext=\"fontfile=/usr/share/fonts/truetype/freefont/FreeSerif.ttf: text='Test Text':\\\n x=100: y=50: fontsize=24: fontcolor=yellow@@0.2: box=1: boxcolor=red@@0.2\"\n@end example\n\nNote that the double quotes are not necessary if spaces are not used\nwithin the parameter list.\n\n@item\nShow the text at the center of the video frame:\n@example\ndrawtext=\"fontsize=30:fontfile=FreeSerif.ttf:text='hello world':x=(w-text_w)/2:y=(h-text_h)/2\"\n@end example\n\n@item\nShow the text at a random position, switching to a new position every 30 seconds:\n@example\ndrawtext=\"fontsize=30:fontfile=FreeSerif.ttf:text='hello world':x=if(eq(mod(t\\,30)\\,0)\\,rand(0\\,(w-text_w))\\,x):y=if(eq(mod(t\\,30)\\,0)\\,rand(0\\,(h-text_h))\\,y)\"\n@end example\n\n@item\nShow a text line sliding from right to left in the last row of the video\nframe. 
The file @file{LONG_LINE} is assumed to contain a single line\nwith no newlines.\n@example\ndrawtext=\"fontsize=15:fontfile=FreeSerif.ttf:text=LONG_LINE:y=h-line_h:x=-50*t\"\n@end example\n\n@item\nShow the content of file @file{CREDITS} off the bottom of the frame and scroll up.\n@example\ndrawtext=\"fontsize=20:fontfile=FreeSerif.ttf:textfile=CREDITS:y=h-20*t\"\n@end example\n\n@item\nDraw a single green letter \"g\", at the center of the input video.\nThe glyph baseline is placed at half screen height.\n@example\ndrawtext=\"fontsize=60:fontfile=FreeSerif.ttf:fontcolor=green:text=g:x=(w-max_glyph_w)/2:y=h/2-ascent\"\n@end example\n\n@item\nShow text for 1 second every 3 seconds:\n@example\ndrawtext=\"fontfile=FreeSerif.ttf:fontcolor=white:x=100:y=x/dar:enable=lt(mod(t\\,3)\\,1):text='blink'\"\n@end example\n\n@item\nUse fontconfig to set the font. Note that the colons need to be escaped.\n@example\ndrawtext='fontfile=Linux Libertine O-40\\:style=Semibold:text=FFmpeg'\n@end example\n\n@item\nPrint the date of a real-time encoding (see strftime(3)):\n@example\ndrawtext='fontfile=FreeSans.ttf:text=%@{localtime\\:%a %b %d %Y@}'\n@end example\n\n@item\nShow text fading in and out (appearing/disappearing):\n@example\n#!/bin/sh\nDS=1.0 # display start\nDE=10.0 # display end\nFID=1.5 # fade in duration\nFOD=5 # fade out duration\nffplay -f lavfi \"color,drawtext=text=TEST:fontsize=50:fontfile=FreeSerif.ttf:fontcolor_expr=ff0000%@{eif\\\\\\\\: clip(255*(1*between(t\\\\, $DS + $FID\\\\, $DE - $FOD) + ((t - $DS)/$FID)*between(t\\\\, $DS\\\\, $DS + $FID) + (-(t - $DE)/$FOD)*between(t\\\\, $DE - $FOD\\\\, $DE) )\\\\, 0\\\\, 255) \\\\\\\\: x\\\\\\\\: 2 @}\"\n@end example\n\n@item\nHorizontally align multiple separate texts. 
Note that @option{max_glyph_a}\nand the @option{fontsize} value are included in the @option{y} offset.\n@example\ndrawtext=fontfile=FreeSans.ttf:text=DOG:fontsize=24:x=10:y=20+24-max_glyph_a,\ndrawtext=fontfile=FreeSans.ttf:text=cow:fontsize=24:x=80:y=20+24-max_glyph_a\n@end example\n\n@end itemize\n\nFor more information about libfreetype, check:\n@url{http://www.freetype.org/}.\n\nFor more information about fontconfig, check:\n@url{http://freedesktop.org/software/fontconfig/fontconfig-user.html}.\n\nFor more information about libfribidi, check:\n@url{http://fribidi.org/}.\n\n"}]},{"filtergroup":["edgedetect"],"info":"\nDetect and draw edges. The filter uses the Canny Edge Detection algorithm.\n\n","options":[{"names":["low"],"info":""},{"names":["high"],"info":"Set low and high threshold values used by the Canny thresholding\nalgorithm.\n\nThe high threshold selects the \"strong\" edge pixels, which are then\nconnected through 8-connectivity with the \"weak\" edge pixels selected\nby the low threshold.\n\n@var{low} and @var{high} threshold values must be chosen in the range\n[0,1], and @var{low} should be lesser or equal to @var{high}.\n\nDefault value for @var{low} is @code{20/255}, and default value for @var{high}\nis @code{50/255}.\n\n"},{"names":["mode"],"info":"Define the drawing mode.\n\n@item wires\nDraw white/gray wires on black background.\n\n@item colormix\nMix the colors to create a paint/cartoon effect.\n\n@item canny\nApply Canny edge detector on all selected planes.\nDefault value is @var{wires}.\n\n"},{"names":["planes"],"info":"Select planes for filtering. 
By default all available planes are filtered.\n\n","examples":"@subsection Examples\n\n@itemize\n@item\nStandard edge detection with custom values for the hysteresis thresholding:\n@example\nedgedetect=low=0.1:high=0.4\n@end example\n\n@item\nPainting effect without thresholding:\n@example\nedgedetect=mode=colormix:high=0\n@end example\n@end itemize\n\n"}]},{"filtergroup":["elbg"],"info":"\nApply a posterize effect using the ELBG (Enhanced LBG) algorithm.\n\nFor each input image, the filter will compute the optimal mapping from\nthe input to the output given the codebook length, that is the number\nof distinct output colors.\n\nThis filter accepts the following options.\n\n","options":[{"names":["codebook_length","l"],"info":"Set codebook length. The value must be a positive integer, and\nrepresents the number of distinct output colors. Default value is 256.\n\n"},{"names":["nb_steps","n"],"info":"Set the maximum number of iterations to apply for computing the optimal\nmapping. The higher the value the better the result and the higher the\ncomputation time. Default value is 1.\n\n"},{"names":["seed","s"],"info":"Set a random seed, must be an integer included between 0 and\nUINT32_MAX. If not specified, or if explicitly set to -1, the filter\nwill try to use a good random seed on a best effort basis.\n\n"},{"names":["pal8"],"info":"Set pal8 output pixel format. This option does not work with codebook\nlength greater than 256.\n\n"}]},{"filtergroup":["entropy"],"info":"\nMeasure graylevel entropy in histogram of color channels of video frames.\n\nIt accepts the following parameters:\n\n","options":[{"names":["mode"],"info":"Can be either @var{normal} or @var{diff}. 
Default is @var{normal}.\n\n@var{diff} mode measures entropy of histogram delta values, absolute differences\nbetween neighbour histogram values.\n\n"}]},{"filtergroup":["eq"],"info":"Set brightness, contrast, saturation and approximate gamma adjustment.\n\n","options":[{"names":["contrast"],"info":"Set the contrast expression. The value must be a float value in range\n@code{-1000.0} to @code{1000.0}. The default value is \"1\".\n\n"},{"names":["brightness"],"info":"Set the brightness expression. The value must be a float value in\nrange @code{-1.0} to @code{1.0}. The default value is \"0\".\n\n"},{"names":["saturation"],"info":"Set the saturation expression. The value must be a float in\nrange @code{0.0} to @code{3.0}. The default value is \"1\".\n\n"},{"names":["gamma"],"info":"Set the gamma expression. The value must be a float in range\n@code{0.1} to @code{10.0}. The default value is \"1\".\n\n"},{"names":["gamma_r"],"info":"Set the gamma expression for red. The value must be a float in\nrange @code{0.1} to @code{10.0}. The default value is \"1\".\n\n"},{"names":["gamma_g"],"info":"Set the gamma expression for green. The value must be a float in range\n@code{0.1} to @code{10.0}. The default value is \"1\".\n\n"},{"names":["gamma_b"],"info":"Set the gamma expression for blue. The value must be a float in range\n@code{0.1} to @code{10.0}. The default value is \"1\".\n\n"},{"names":["gamma_weight"],"info":"Set the gamma weight expression. It can be used to reduce the effect\nof a high gamma value on bright image areas, e.g. keep them from\ngetting overamplified and just plain white. The value must be a float\nin range @code{0.0} to @code{1.0}. A value of @code{0.0} turns the\ngamma correction all the way down while @code{1.0} leaves it at its\nfull strength. 
Default is \"1\".\n\n"},{"names":["eval"],"info":"Set when the expressions for brightness, contrast, saturation and\ngamma expressions are evaluated.\n\nIt accepts the following values:\n@item init\nonly evaluate expressions once during the filter initialization or\nwhen a command is processed\n\n@item frame\nevaluate expressions for each incoming frame\n\nDefault value is @samp{init}.\n\nThe expressions accept the following parameters:\n@item n\nframe count of the input frame starting from 0\n\n@item pos\nbyte position of the corresponding packet in the input file, NAN if\nunspecified\n\n@item r\nframe rate of the input video, NAN if the input frame rate is unknown\n\n@item t\ntimestamp expressed in seconds, NAN if the input timestamp is unknown\n\n@subsection Commands\nThe filter supports the following commands:\n\n@item contrast\nSet the contrast expression.\n\n@item brightness\nSet the brightness expression.\n\n@item saturation\nSet the saturation expression.\n\n@item gamma\nSet the gamma expression.\n\n@item gamma_r\nSet the gamma_r expression.\n\n@item gamma_g\nSet gamma_g expression.\n\n@item gamma_b\nSet gamma_b expression.\n\n@item gamma_weight\nSet gamma_weight expression.\n\nThe command accepts the same syntax of the corresponding option.\n\nIf the specified expression is not valid, it is kept at its current\nvalue.\n\n\n"}]},{"filtergroup":["erosion"],"info":"\nApply erosion effect to the video.\n\nThis filter replaces the pixel by the local(3x3) minimum.\n\nIt accepts the following options:\n\n","options":[{"names":["threshold0"],"info":""},{"names":["threshold1"],"info":""},{"names":["threshold2"],"info":""},{"names":["threshold3"],"info":"Limit the maximum change for each plane, default is 65535.\nIf 0, plane will remain unchanged.\n\n"},{"names":["coordinates"],"info":"Flag which specifies the pixel to refer to. Default is 255 i.e. 
all eight\npixels are used.\n\nFlags map to the local 3x3 coordinates like this:\n\n 1 2 3\n 4   5\n 6 7 8\n\n"}]},{"filtergroup":["extractplanes"],"info":"\nExtract color channel components from input video stream into\nseparate grayscale video streams.\n\nThe filter accepts the following option:\n\n","options":[{"names":["planes"],"info":"Set plane(s) to extract.\n\nAvailable values for planes are:\n@item y\n@item u\n@item v\n@item a\n@item r\n@item g\n@item b\n\nChoosing planes not available in the input will result in an error.\nThat means you cannot select the @code{r}, @code{g}, @code{b} planes\ntogether with the @code{y}, @code{u}, @code{v} planes.\n\n","examples":"@subsection Examples\n\n@itemize\n@item\nExtract the luma, u and v color channel components from the input video frame\ninto 3 grayscale outputs:\n@example\nffmpeg -i video.avi -filter_complex 'extractplanes=y+u+v[y][u][v]' -map '[y]' y.avi -map '[u]' u.avi -map '[v]' v.avi\n@end example\n@end itemize\n\n"}]},{"filtergroup":["fade"],"info":"\nApply a fade-in/out effect to the input video.\n\nIt accepts the following parameters:\n\n","options":[{"names":["type","t"],"info":"The effect type can be either \"in\" for a fade-in, or \"out\" for a fade-out\neffect.\nDefault is @code{in}.\n\n"},{"names":["start_frame","s"],"info":"Specify the number of the frame to start applying the fade\neffect at. Default is 0.\n\n"},{"names":["nb_frames","n"],"info":"The number of frames that the fade effect lasts. At the end of the\nfade-in effect, the output video will have the same intensity as the input video.\nAt the end of the fade-out transition, the output video will be filled with the\nselected @option{color}.\nDefault is 25.\n\n"},{"names":["alpha"],"info":"If set to 1, fade only the alpha channel, if one exists on the input.\nDefault value is 0.\n\n"},{"names":["start_time","st"],"info":"Specify the timestamp (in seconds) of the frame to start applying the fade\neffect. 
If both start_frame and start_time are specified, the fade will start at\nwhichever comes last. Default is 0.\n\n"},{"names":["duration","d"],"info":"The number of seconds for which the fade effect has to last. At the end of the\nfade-in effect the output video will have the same intensity as the input video,\nat the end of the fade-out transition the output video will be filled with the\nselected @option{color}.\nIf both duration and nb_frames are specified, duration is used. Default is 0\n(nb_frames is used by default).\n\n"},{"names":["color","c"],"info":"Specify the color of the fade. Default is \"black\".\n\n","examples":"@subsection Examples\n\n@itemize\n@item\nFade in the first 30 frames of video:\n@example\nfade=in:0:30\n@end example\n\nThe command above is equivalent to:\n@example\nfade=t=in:s=0:n=30\n@end example\n\n@item\nFade out the last 45 frames of a 200-frame video:\n@example\nfade=out:155:45\nfade=type=out:start_frame=155:nb_frames=45\n@end example\n\n@item\nFade in the first 25 frames and fade out the last 25 frames of a 1000-frame video:\n@example\nfade=in:0:25, fade=out:975:25\n@end example\n\n@item\nMake the first 5 frames yellow, then fade in from frame 5-24:\n@example\nfade=in:5:20:color=yellow\n@end example\n\n@item\nFade in alpha over first 25 frames of video:\n@example\nfade=in:0:25:alpha=1\n@end example\n\n@item\nMake the first 5.5 seconds black, then fade in for 0.5 seconds:\n@example\nfade=t=in:st=5.5:d=0.5\n@end example\n\n@end itemize\n\n"}]},{"filtergroup":["fftdnoiz"],"info":"Denoise frames using 3D FFT (frequency domain filtering).\n\n","options":[{"names":["sigma"],"info":"Set the noise sigma constant. This sets denoising strength.\nDefault value is 1. Allowed range is from 0 to 30.\nUsing very high sigma with low overlap may give blocking artifacts.\n\n"},{"names":["amount"],"info":"Set amount of denoising. By default all detected noise is reduced.\nDefault value is 1. 
Allowed range is from 0 to 1.\n\n"},{"names":["block"],"info":"Set size of block. Default is 4; it can be 3, 4, 5 or 6.\nActual size of block in pixels is 2 to the power of @var{block}, so by default\nblock size in pixels is 2^4 which is 16.\n\n"},{"names":["overlap"],"info":"Set block overlap. Default is 0.5. Allowed range is from 0.2 to 0.8.\n\n"},{"names":["prev"],"info":"Set number of previous frames to use for denoising. Default is 0.\n\n"},{"names":["next"],"info":"Set number of next frames to use for denoising. Default is 0.\n\n"},{"names":["planes"],"info":"Set planes which will be filtered. By default all available planes except\nalpha are filtered.\n\n"}]},{"filtergroup":["fftfilt"],"info":"Apply arbitrary expressions to samples in the frequency domain.\n\n","options":[{"names":["dc_Y"],"info":"Adjust the dc value (gain) of the luma plane of the image. The filter\naccepts an integer value in range @code{0} to @code{1000}. The default\nvalue is set to @code{0}.\n\n"},{"names":["dc_U"],"info":"Adjust the dc value (gain) of the 1st chroma plane of the image. The\nfilter accepts an integer value in range @code{0} to @code{1000}. The\ndefault value is set to @code{0}.\n\n"},{"names":["dc_V"],"info":"Adjust the dc value (gain) of the 2nd chroma plane of the image. 
The\ndefault value is set to @code{0}.\n\n"},{"names":["weight_Y"],"info":"Set the frequency domain weight expression for the luma plane.\n\n"},{"names":["weight_U"],"info":"Set the frequency domain weight expression for the 1st chroma plane.\n\n"},{"names":["weight_V"],"info":"Set the frequency domain weight expression for the 2nd chroma plane.\n\n"},{"names":["eval"],"info":"Set when the expressions are evaluated.\n\nIt accepts the following values:\n@item init\nOnly evaluate expressions once during the filter initialization.\n\n@item frame\nEvaluate expressions for each incoming frame.\n\nDefault value is @samp{init}.\n\nThe filter accepts the following variables:\n"},{"names":["X"],"info":""},{"names":["Y"],"info":"The coordinates of the current sample.\n\n"},{"names":["W"],"info":""},{"names":["H"],"info":"The width and height of the image.\n\n"},{"names":["N"],"info":"The number of input frame, starting from 0.\n\n","examples":"@subsection Examples\n\n@itemize\n@item\nHigh-pass:\n@example\nfftfilt=dc_Y=128:weight_Y='squish(1-(Y+X)/100)'\n@end example\n\n@item\nLow-pass:\n@example\nfftfilt=dc_Y=0:weight_Y='squish((Y+X)/100-1)'\n@end example\n\n@item\nSharpen:\n@example\nfftfilt=dc_Y=0:weight_Y='1+squish(1-(Y+X)/100)'\n@end example\n\n@item\nBlur:\n@example\nfftfilt=dc_Y=0:weight_Y='exp(-4 * ((Y+X)/(W+H)))'\n@end example\n\n@end itemize\n\n"}]},{"filtergroup":["field"],"info":"\nExtract a single field from an interlaced image using stride\narithmetic to avoid wasting CPU time. 
The output frames are marked as\nnon-interlaced.\n\n","options":[{"names":["type"],"info":"Specify whether to extract the top (if the value is @code{0} or\n@code{top}) or the bottom field (if the value is @code{1} or\n@code{bottom}).\n\n"}]},{"filtergroup":["fieldhint"],"info":"\nCreate new frames by copying the top and bottom fields from surrounding frames\nsupplied as numbers by the hint file.\n\n","options":[{"names":["hint"],"info":"Set file containing hints: absolute/relative frame numbers.\n\nThere must be one line for each frame in a clip. Each line must contain two\nnumbers separated by a comma, optionally followed by @code{-} or @code{+}.\nNumbers supplied on each line of the file cannot be out of [N-1,N+1] where N\nis the current frame number for @code{absolute} mode, or out of the [-1, 1] range\nfor @code{relative} mode. The first number tells from which frame to pick up the top\nfield and the second number tells from which frame to pick up the bottom field.\n\nIf optionally followed by @code{+} the output frame will be marked as interlaced,\nelse if followed by @code{-} the output frame will be marked as progressive, else\nit will be marked the same as the input frame.\nIf optionally followed by @code{t} the output frame will use only the top field, or in\ncase of @code{b} it will use only the bottom field.\nIf a line starts with @code{#} or @code{;} that line is skipped.\n\n"},{"names":["mode"],"info":"Can be @code{absolute} or @code{relative}. Default is @code{absolute}.\n\nExample of the first several lines of a @code{hint} file for @code{relative} mode:\n@example\n0,0 - # first frame\n1,0 - # second frame, use third frame's top field and second frame's bottom field\n1,0 - # third frame, use fourth frame's top field and third frame's bottom field\n1,0 -\n0,0 -\n0,0 -\n1,0 -\n1,0 -\n1,0 -\n0,0 -\n0,0 -\n1,0 -\n1,0 -\n1,0 -\n0,0 -\n@end example\n\n"}]},{"filtergroup":["fieldmatch"],"info":"\nField matching filter for inverse telecine. 
It is meant to reconstruct the\nprogressive frames from a telecined stream. The filter does not drop duplicated\nframes, so to achieve a complete inverse telecine @code{fieldmatch} needs to be\nfollowed by a decimation filter such as @ref{decimate} in the filtergraph.\n\nThe separation of the field matching and the decimation is notably motivated by\nthe possibility of inserting a de-interlacing filter fallback between the two.\nIf the source has mixed telecined and real interlaced content,\n@code{fieldmatch} will not be able to match fields for the interlaced parts.\nBut these remaining combed frames will be marked as interlaced, and thus can be\nde-interlaced by a later filter such as @ref{yadif} before decimation.\n\nIn addition to the various configuration options, @code{fieldmatch} can take an\noptional second stream, activated through the @option{ppsrc} option. If\nenabled, the frame reconstruction will be based on the fields and frames from\nthis second stream. This allows the first input to be pre-processed in order to\nhelp the various algorithms of the filter, while keeping the output lossless\n(assuming the fields are matched properly). Typically, a field-aware denoiser,\nor brightness/contrast adjustments can help.\n\nNote that this filter uses the same algorithms as TIVTC/TFM (AviSynth project)\nand VIVTC/VFM (VapourSynth project). The latter is a light clone of TFM, on\nwhich @code{fieldmatch} is based. While the semantics and usage are very\nclose, some behaviour and option names can differ.\n\nThe @ref{decimate} filter currently only works for constant frame rate input.\nIf your input has mixed telecined (30fps) and progressive content with a lower\nframerate like 24fps, use the following filterchain to produce the necessary cfr\nstream: @code{dejudder,fps=30000/1001,fieldmatch,decimate}.\n\n","options":[{"names":["order"],"info":"Specify the assumed field order of the input stream. 
Available values are:\n\n@item auto\nAuto detect parity (use FFmpeg's internal parity value).\n@item bff\nAssume bottom field first.\n@item tff\nAssume top field first.\n\nNote that it is sometimes recommended not to trust the parity announced by the\nstream.\n\nDefault value is @var{auto}.\n\n"},{"names":["mode"],"info":"Set the matching mode or strategy to use. @option{pc} mode is the safest in the\nsense that it won't risk creating jerkiness due to duplicate frames when\npossible, but if there are bad edits or blended fields it will end up\noutputting combed frames when a good match might actually exist. On the other\nhand, @option{pcn_ub} mode is the most risky in terms of creating jerkiness,\nbut will almost always find a good frame if there is one. The other values are\nall somewhere in between @option{pc} and @option{pcn_ub} in terms of risking\njerkiness and creating duplicate frames versus finding good matches in sections\nwith bad edits, orphaned fields, blended fields, etc.\n\nMore details about p/c/n/u/b are available in the @ref{p/c/n/u/b meaning} section.\n\nAvailable values are:\n\n@item pc\n2-way matching (p/c)\n@item pc_n\n2-way matching, and trying 3rd match if still combed (p/c + n)\n@item pc_u\n2-way matching, and trying 3rd match (same order) if still combed (p/c + u)\n@item pc_n_ub\n2-way matching, trying 3rd match if still combed, and trying 4th/5th matches if\nstill combed (p/c + n + u/b)\n@item pcn\n3-way matching (p/c/n)\n@item pcn_ub\n3-way matching, and trying 4th/5th matches if all 3 of the original matches are\ndetected as combed (p/c/n + u/b)\n\nThe parentheses at the end indicate the matches that would be used for that\nmode assuming @option{order}=@var{tff} (and @option{field} on @var{auto} or\n@var{top}).\n\nIn terms of speed, @option{pc} mode is by far the fastest and @option{pcn_ub} is\nthe slowest.\n\nDefault value is @var{pc_n}.\n\n"},{"names":["ppsrc"],"info":"Mark the main input stream as a pre-processed input, and enable the 
secondary\ninput stream as the clean source to pick the fields from. See the filter\nintroduction for more details. It is similar to the @option{clip2} feature from\nVFM/TFM.\n\nDefault value is @code{0} (disabled).\n\n"},{"names":["field"],"info":"Set the field to match from. It is recommended to set this to the same value as\n@option{order} unless you experience matching failures with that setting. In\ncertain circumstances changing the field that is used to match from can have a\nlarge impact on matching performance. Available values are:\n\n@item auto\nAutomatic (same value as @option{order}).\n@item bottom\nMatch from the bottom field.\n@item top\nMatch from the top field.\n\nDefault value is @var{auto}.\n\n"},{"names":["mchroma"],"info":"Set whether or not chroma is included during the match comparisons. In most\ncases it is recommended to leave this enabled. You should set this to @code{0}\nonly if your clip has bad chroma problems such as heavy rainbowing or other\nartifacts. Setting this to @code{0} could also be used to speed things up at\nthe cost of some accuracy.\n\nDefault value is @code{1}.\n\n"},{"names":["y0"],"info":""},{"names":["y1"],"info":"These define an exclusion band which excludes the lines between @option{y0} and\n@option{y1} from being included in the field matching decision. An exclusion\nband can be used to ignore subtitles, a logo, or other things that may\ninterfere with the matching. @option{y0} sets the starting scan line and\n@option{y1} sets the ending line; all lines in between @option{y0} and\n@option{y1} (including @option{y0} and @option{y1}) will be ignored. Setting\n@option{y0} and @option{y1} to the same value will disable the feature.\n@option{y0} and @option{y1} defaults to @code{0}.\n\n"},{"names":["scthresh"],"info":"Set the scene change detection threshold as a percentage of maximum change on\nthe luma plane. Good values are in the @code{[8.0, 14.0]} range. 
Scene change\ndetection is only relevant in case @option{combmatch}=@var{sc}. The range for\n@option{scthresh} is @code{[0.0, 100.0]}.\n\nDefault value is @code{12.0}.\n\n"},{"names":["combmatch"],"info":"When @option{combmatch} is not @var{none}, @code{fieldmatch} will take into\naccount the combed scores of matches when deciding what match to use as the\nfinal match. Available values are:\n\n@item none\nNo final matching based on combed scores.\n@item sc\nCombed scores are only used when a scene change is detected.\n@item full\nUse combed scores all the time.\n\nDefault is @var{sc}.\n\n"},{"names":["combdbg"],"info":"Force @code{fieldmatch} to calculate the combed metrics for certain matches and\nprint them. This setting is known as @option{micout} in TFM/VFM vocabulary.\nAvailable values are:\n\n@item none\nNo forced calculation.\n@item pcn\nForce p/c/n calculations.\n@item pcnub\nForce p/c/n/u/b calculations.\n\nDefault value is @var{none}.\n\n"},{"names":["cthresh"],"info":"This is the area combing threshold used for combed frame detection. This\nessentially controls how \"strong\" or \"visible\" combing must be to be detected.\nLarger values mean combing must be more visible and smaller values mean combing\ncan be less visible or strong and still be detected. Valid settings are from\n@code{-1} (every pixel will be detected as combed) to @code{255} (no pixel will\nbe detected as combed). This is basically a pixel difference value. A good\nrange is @code{[8, 12]}.\n\nDefault value is @code{9}.\n\n"},{"names":["chroma"],"info":"Sets whether or not chroma is considered in the combed frame decision. Only\ndisable this if your source has chroma problems (rainbowing, etc.) that are\ncausing problems for the combed frame detection with chroma enabled. 
Actually,\nusing @option{chroma}=@var{0} is usually more reliable, except for the case\nwhere there is chroma only combing in the source.\n\nDefault value is @code{0}.\n\n"},{"names":["blockx"],"info":""},{"names":["blocky"],"info":"Respectively set the x-axis and y-axis size of the window used during combed\nframe detection. This has to do with the size of the area in which\n@option{combpel} pixels are required to be detected as combed for a frame to be\ndeclared combed. See the @option{combpel} parameter description for more info.\nPossible values are any number that is a power of 2 starting at 4 and going up\nto 512.\n\nDefault value is @code{16}.\n\n"},{"names":["combpel"],"info":"The number of combed pixels inside any of the @option{blocky} by\n@option{blockx} size blocks on the frame for the frame to be detected as\ncombed. While @option{cthresh} controls how \"visible\" the combing must be, this\nsetting controls \"how much\" combing there must be in any localized area (a\nwindow defined by the @option{blockx} and @option{blocky} settings) on the\nframe. Minimum value is @code{0} and maximum is @code{blocky x blockx} (at\nwhich point no frames will ever be detected as combed). This setting is known\nas @option{MI} in TFM/VFM vocabulary.\n\nDefault value is @code{80}.\n\n@anchor{p/c/n/u/b meaning}\n@subsection p/c/n/u/b meaning\n\n@subsubsection p/c/n\n\nWe assume the following telecined stream:\n\n@example\nTop fields: 1 2 2 3 4\nBottom fields: 1 2 3 4 4\n@end example\n\nThe numbers correspond to the progressive frame the fields relate to. 
Here, the\nfirst two frames are progressive, the 3rd and 4th are combed, and so on.\n\nWhen @code{fieldmatch} is configured to run a matching from bottom\n(@option{field}=@var{bottom}), this is how this input stream gets transformed:\n\n@example\nInput stream:\n T 1 2 2 3 4\n B 1 2 3 4 4 <-- matching reference\n\nMatches: c c n n c\n\nOutput stream:\n T 1 2 3 4 4\n B 1 2 3 4 4\n@end example\n\nAs a result of the field matching, we can see that some frames get duplicated.\nTo perform a complete inverse telecine, you need to rely on a decimation filter\nafter this operation. See for instance the @ref{decimate} filter.\n\nThe same operation now matching from top fields (@option{field}=@var{top})\nlooks like this:\n\n@example\nInput stream:\n T 1 2 2 3 4 <-- matching reference\n B 1 2 3 4 4\n\nMatches: c c p p c\n\nOutput stream:\n T 1 2 2 3 4\n B 1 2 2 3 4\n@end example\n\nIn these examples, we can see what @var{p}, @var{c} and @var{n} mean;\nbasically, they refer to the frame and field of the opposite parity:\n\n@itemize\n"},{"names":["@var{p}","matches","the","field","of","the","opposite","parity","in","the","previous","frame"],"info":""},{"names":["@var{c}","matches","the","field","of","the","opposite","parity","in","the","current","frame"],"info":""},{"names":["@var{n}","matches","the","field","of","the","opposite","parity","in","the","next","frame"],"info":"@end itemize\n\n@subsubsection u/b\n\nThe @var{u} and @var{b} matching are a bit special in the sense that they match\nfrom the opposite parity flag. In the following examples, we assume that we are\ncurrently matching the 2nd frame (Top:2, bottom:2). 
According to the match, an\n'x' is placed above and below each matched field.\n\nWith bottom matching (@option{field}=@var{bottom}):\n@example\nMatch: c p n b u\n\n x x x x x\n Top 1 2 2 1 2 2 1 2 2 1 2 2 1 2 2\n Bottom 1 2 3 1 2 3 1 2 3 1 2 3 1 2 3\n x x x x x\n\nOutput frames:\n 2 1 2 2 2\n 2 2 2 1 3\n@end example\n\nWith top matching (@option{field}=@var{top}):\n@example\nMatch: c p n b u\n\n x x x x x\n Top 1 2 2 1 2 2 1 2 2 1 2 2 1 2 2\n Bottom 1 2 3 1 2 3 1 2 3 1 2 3 1 2 3\n x x x x x\n\nOutput frames:\n 2 2 2 1 2\n 2 1 3 2 2\n@end example\n\n","examples":"@subsection Examples\n\nSimple IVTC of a top field first telecined stream:\n@example\nfieldmatch=order=tff:combmatch=none, decimate\n@end example\n\nAdvanced IVTC, with fallback on @ref{yadif} for still combed frames:\n@example\nfieldmatch=order=tff:combmatch=full, yadif=deint=interlaced, decimate\n@end example\n\n"}]},{"filtergroup":["fieldorder"],"info":"\nTransform the field order of the input video.\n\nIt accepts the following parameters:\n\n","options":[{"names":["order"],"info":"The output field order. 
Valid values are @var{tff} for top field first or @var{bff}\nfor bottom field first.\n\nThe default value is @samp{tff}.\n\nThe transformation is done by shifting the picture content up or down\nby one line, and filling the remaining line with appropriate picture content.\nThis method is consistent with most broadcast field order converters.\n\nIf the input video is not flagged as being interlaced, or it is already\nflagged as being of the required output field order, then this filter does\nnot alter the incoming video.\n\nIt is very useful when converting to or from PAL DV material,\nwhich is bottom field first.\n\nFor example:\n@example\nffmpeg -i in.vob -vf \"fieldorder=bff\" out.dv\n@end example\n\n"}]},{"filtergroup":["fifo","afifo"],"info":"\nBuffer input images and send them when they are requested.\n\nIt is mainly useful when auto-inserted by the libavfilter\nframework.\n\nIt does not take parameters.\n\n","options":[]},{"filtergroup":["fillborders"],"info":"\nFill borders of the input video, without changing video stream dimensions.\nSometimes video can have garbage at the four edges and you may not want to\ncrop video input to keep size multiple of some number.\n\nThis filter accepts the following options:\n\n","options":[{"names":["left"],"info":"Number of pixels to fill from left border.\n\n"},{"names":["right"],"info":"Number of pixels to fill from right border.\n\n"},{"names":["top"],"info":"Number of pixels to fill from top border.\n\n"},{"names":["bottom"],"info":"Number of pixels to fill from bottom border.\n\n"},{"names":["mode"],"info":"Set fill mode.\n\nIt accepts the following values:\n@item smear\nfill pixels using outermost pixels\n\n@item mirror\nfill pixels using mirroring\n\n@item fixed\nfill pixels with constant value\n\nDefault is @var{smear}.\n\n"},{"names":["color"],"info":"Set color for pixels in fixed mode. 
Default is @var{black}.\n\n@subsection Commands\nThis filter supports same @ref{commands} as options.\nThe command accepts the same syntax of the corresponding option.\n\nIf the specified expression is not valid, it is kept at its current\nvalue.\n\n"}]},{"filtergroup":["find_rect"],"info":"\nFind a rectangular object\n\nIt accepts the following options:\n\n","options":[{"names":["object"],"info":"Filepath of the object image, needs to be in gray8.\n\n"},{"names":["threshold"],"info":"Detection threshold, default is 0.5.\n\n"},{"names":["mipmaps"],"info":"Number of mipmaps, default is 3.\n\n"},{"names":["xmin","ymin","xmax","ymax"],"info":"Specifies the rectangle in which to search.\n\n","examples":"@subsection Examples\n\n@itemize\n@item\nCover a rectangular object by the supplied image of a given video using @command{ffmpeg}:\n@example\nffmpeg -i file.ts -vf find_rect=newref.pgm,cover_rect=cover.jpg:mode=cover new.mkv\n@end example\n@end itemize\n\n"}]},{"filtergroup":["floodfill"],"info":"\nFlood area with values of same pixel components with another values.\n\nIt accepts the following options:\n","options":[{"names":["x"],"info":"Set pixel x coordinate.\n\n"},{"names":["y"],"info":"Set pixel y coordinate.\n\n"},{"names":["s0"],"info":"Set source #0 component value.\n\n"},{"names":["s1"],"info":"Set source #1 component value.\n\n"},{"names":["s2"],"info":"Set source #2 component value.\n\n"},{"names":["s3"],"info":"Set source #3 component value.\n\n"},{"names":["d0"],"info":"Set destination #0 component value.\n\n"},{"names":["d1"],"info":"Set destination #1 component value.\n\n"},{"names":["d2"],"info":"Set destination #2 component value.\n\n"},{"names":["d3"],"info":"Set destination #3 component value.\n\n@anchor{format}\n"}]},{"filtergroup":["format"],"info":"\nConvert the input video to one of the specified pixel formats.\nLibavfilter will try to pick one that is suitable as input to\nthe next filter.\n\nIt accepts the following 
parameters:\n","options":[{"names":["pix_fmts"],"info":"A '|'-separated list of pixel format names, such as\n\"pix_fmts=yuv420p|monow|rgb24\".\n\n\n","examples":"@subsection Examples\n\n@itemize\n@item\nConvert the input video to the @var{yuv420p} format\n@example\nformat=pix_fmts=yuv420p\n@end example\n\nConvert the input video to any of the formats in the list\n@example\nformat=pix_fmts=yuv420p|yuv444p|yuv410p\n@end example\n@end itemize\n\n@anchor{fps}\n"}]},{"filtergroup":["fps"],"info":"\nConvert the video to specified constant frame rate by duplicating or dropping\nframes as necessary.\n\nIt accepts the following parameters:\n","options":[{"names":["fps"],"info":"The desired output frame rate. The default is @code{25}.\n\n"},{"names":["start_time"],"info":"Assume the first PTS should be the given value, in seconds. This allows for\npadding/trimming at the start of stream. By default, no assumption is made\nabout the first frame's expected PTS, so no padding or trimming is done.\nFor example, this could be set to 0 to pad the beginning with duplicates of\nthe first frame if a video stream starts after the audio stream or to trim any\nframes with a negative PTS.\n\n"},{"names":["round"],"info":"Timestamp (PTS) rounding method.\n\nPossible values are:\n@item zero\nround towards 0\n@item inf\nround away from 0\n@item down\nround towards -infinity\n@item up\nround towards +infinity\n@item near\nround to nearest\nThe default is @code{near}.\n\n"},{"names":["eof_action"],"info":"Action performed when reading the last frame.\n\nPossible values are:\n@item round\nUse same timestamp rounding method as used for other frames.\n@item pass\nPass through last frame if input duration has not been reached yet.\nThe default is @code{round}.\n\n\nAlternatively, the options can be specified as a flat string:\n@var{fps}[:@var{start_time}[:@var{round}]].\n\nSee also the @ref{setpts} filter.\n\n","examples":"@subsection Examples\n\n@itemize\n@item\nA typical usage in order to set 
the fps to 25:\n@example\nfps=fps=25\n@end example\n\n@item\nSets the fps to 24, using abbreviation and rounding method to round to nearest:\n@example\nfps=fps=film:round=near\n@end example\n@end itemize\n\n"}]},{"filtergroup":["framepack"],"info":"\nPack two different video streams into a stereoscopic video, setting proper\nmetadata on supported codecs. The two views should have the same size and\nframerate and processing will stop when the shorter video ends. Please note\nthat you may conveniently adjust view properties with the @ref{scale} and\n@ref{fps} filters.\n\nIt accepts the following parameters:\n","options":[{"names":["format"],"info":"The desired packing format. Supported values are:\n\n\n@item sbs\nThe views are next to each other (default).\n\n@item tab\nThe views are on top of each other.\n\n@item lines\nThe views are packed by line.\n\n@item columns\nThe views are packed by column.\n\n@item frameseq\nThe views are temporally interleaved.\n\n\n\nSome examples:\n\n@example\n# Convert left and right views into a frame-sequential video\nffmpeg -i LEFT -i RIGHT -filter_complex framepack=frameseq OUTPUT\n\n# Convert views into a side-by-side video with the same output resolution as the input\nffmpeg -i LEFT -i RIGHT -filter_complex [0:v]scale=w=iw/2[left],[1:v]scale=w=iw/2[right],[left][right]framepack=sbs OUTPUT\n@end example\n\n"}]},{"filtergroup":["framerate"],"info":"\nChange the frame rate by interpolating new video output frames from the source\nframes.\n\nThis filter is not designed to function correctly with interlaced media. If\nyou wish to change the frame rate of interlaced media then you are required\nto deinterlace before this filter and re-interlace after this filter.\n\nA description of the accepted options follows.\n\n","options":[{"names":["fps"],"info":"Specify the output frames per second. This option can also be specified\nas a value alone. 
The default is @code{50}.\n\n"},{"names":["interp_start"],"info":"Specify the start of a range where the output frame will be created as a\nlinear interpolation of two frames. The range is [@code{0}-@code{255}],\nthe default is @code{15}.\n\n"},{"names":["interp_end"],"info":"Specify the end of a range where the output frame will be created as a\nlinear interpolation of two frames. The range is [@code{0}-@code{255}],\nthe default is @code{240}.\n\n"},{"names":["scene"],"info":"Specify the level at which a scene change is detected as a value between\n0 and 100 to indicate a new scene; a low value reflects a low\nprobability for the current frame to introduce a new scene, while a higher\nvalue means the current frame is more likely to be one.\nThe default is @code{8.2}.\n\n"},{"names":["flags"],"info":"Specify flags influencing the filter process.\n\nAvailable value for @var{flags} is:\n\n@item scene_change_detect, scd\nEnable scene change detection using the value of the option @var{scene}.\nThis flag is enabled by default.\n\n"}]},{"filtergroup":["framestep"],"info":"\nSelect one frame every N-th frame.\n\nThis filter accepts the following option:\n","options":[{"names":["step"],"info":"Select frame after every @code{step} frames.\nAllowed values are positive integers higher than 0. Default value is @code{1}.\n\n"}]},{"filtergroup":["freezedetect"],"info":"\nDetect frozen video.\n\nThis filter logs a message and sets frame metadata when it detects that the\ninput video has no significant change in content during a specified duration.\nVideo freeze detection calculates the mean average absolute difference of all\nthe components of video frames and compares it to a noise floor.\n\nThe printed times and duration are expressed in seconds. The\n@code{lavfi.freezedetect.freeze_start} metadata key is set on the first frame\nwhose timestamp equals or exceeds the detection duration and it contains the\ntimestamp of the first frame of the freeze. 
The\n@code{lavfi.freezedetect.freeze_duration} and\n@code{lavfi.freezedetect.freeze_end} metadata keys are set on the first frame\nafter the freeze.\n\n","options":[{"names":["noise","n"],"info":"Set noise tolerance. Can be specified in dB (in case \"dB\" is appended to the\nspecified value) or as a difference ratio between 0 and 1. Default is -60dB, or\n0.001.\n\n"},{"names":["duration","d"],"info":"Set freeze duration until notification (default is 2 seconds).\n\n@anchor{frei0r}\n"}]},{"filtergroup":["frei0r"],"info":"\nApply a frei0r effect to the input video.\n\nTo enable the compilation of this filter, you need to install the frei0r\nheader and configure FFmpeg with @code{--enable-frei0r}.\n\nIt accepts the following parameters:\n\n","options":[{"names":["filter_name"],"info":"The name of the frei0r effect to load. If the environment variable\n@env{FREI0R_PATH} is defined, the frei0r effect is searched for in each of the\ndirectories specified by the colon-separated list in @env{FREI0R_PATH}.\nOtherwise, the standard frei0r paths are searched, in this order:\n@file{HOME/.frei0r-1/lib/}, @file{/usr/local/lib/frei0r-1/},\n@file{/usr/lib/frei0r-1/}.\n\n"},{"names":["filter_params"],"info":"A '|'-separated list of parameters to pass to the frei0r effect.\n\n\nA frei0r effect parameter can be a boolean (its value is either\n\"y\" or \"n\"), a double, a color (specified as\n@var{R}/@var{G}/@var{B}, where @var{R}, @var{G}, and @var{B} are floating point\nnumbers between 0.0 and 1.0, inclusive) or a color description as specified in the\n@ref{color syntax,,\"Color\" section in the ffmpeg-utils manual,ffmpeg-utils},\na position (specified as @var{X}/@var{Y}, where\n@var{X} and @var{Y} are floating point numbers) and/or a string.\n\nThe number and types of parameters depend on the loaded effect. 
If an\neffect parameter is not specified, the default value is set.\n\n","examples":"@subsection Examples\n\n@itemize\n@item\nApply the distort0r effect, setting the first two double parameters:\n@example\nfrei0r=filter_name=distort0r:filter_params=0.5|0.01\n@end example\n\n@item\nApply the colordistance effect, taking a color as the first parameter:\n@example\nfrei0r=colordistance:0.2/0.3/0.4\nfrei0r=colordistance:violet\nfrei0r=colordistance:0x112233\n@end example\n\n@item\nApply the perspective effect, specifying the top left and top right image\npositions:\n@example\nfrei0r=perspective:0.2/0.2|0.8/0.2\n@end example\n@end itemize\n\nFor more information, see\n@url{http://frei0r.dyne.org}\n\n"}]},{"filtergroup":["fspp"],"info":"\nApply fast and simple postprocessing. It is a faster version of @ref{spp}.\n\nIt splits (I)DCT into horizontal/vertical passes. Unlike the simple post-\nprocessing filter, one of them is performed once per block, not per pixel.\nThis allows for much higher speed.\n\n","options":[{"names":["quality"],"info":"Set quality. This option defines the number of levels for averaging. It accepts\nan integer in the range 4-5. Default value is @code{4}.\n\n"},{"names":["qp"],"info":"Force a constant quantization parameter. It accepts an integer in range 0-63.\nIf not set, the filter will use the QP from the video stream (if available).\n\n"},{"names":["strength"],"info":"Set filter strength. It accepts an integer in range -15 to 32. Lower values mean\nmore details but also more artifacts, while higher values make the image smoother\nbut also blurrier. Default value is @code{0} − PSNR optimal.\n\n"},{"names":["use_bframe_qp"],"info":"Enable the use of the QP from the B-Frames if set to @code{1}. Using this\noption may cause flicker since the B-Frames have often larger QP. 
Default is\n@code{0} (not enabled).\n\n\n"}]},{"filtergroup":["gblur"],"info":"\nApply Gaussian blur filter.\n\n","options":[{"names":["sigma"],"info":"Set horizontal sigma, standard deviation of Gaussian blur. Default is @code{0.5}.\n\n"},{"names":["steps"],"info":"Set number of steps for Gaussian approximation. Default is @code{1}.\n\n"},{"names":["planes"],"info":"Set which planes to filter. By default all planes are filtered.\n\n"},{"names":["sigmaV"],"info":"Set vertical sigma; if negative it will be the same as @code{sigma}.\nDefault is @code{-1}.\n\n@subsection Commands\nThis filter supports the same commands as options.\nThe command accepts the same syntax as the corresponding option.\n\nIf the specified expression is not valid, it is kept at its current\nvalue.\n\n"}]},{"filtergroup":["geq"],"info":"\nApply generic equation to each pixel.\n\n","options":[{"names":["lum_expr","lum"],"info":"Set the luminance expression.\n"},{"names":["cb_expr","cb"],"info":"Set the chrominance blue expression.\n"},{"names":["cr_expr","cr"],"info":"Set the chrominance red expression.\n"},{"names":["alpha_expr","a"],"info":"Set the alpha expression.\n"},{"names":["red_expr","r"],"info":"Set the red expression.\n"},{"names":["green_expr","g"],"info":"Set the green expression.\n"},{"names":["blue_expr","b"],"info":"Set the blue expression.\n\nThe colorspace is selected according to the specified options. If one\nof the @option{lum_expr}, @option{cb_expr}, or @option{cr_expr}\noptions is specified, the filter will automatically select a YCbCr\ncolorspace. If one of the @option{red_expr}, @option{green_expr}, or\n@option{blue_expr} options is specified, it will select an RGB\ncolorspace.\n\nIf one of the chrominance expressions is not defined, it falls back on the other\none. 
If no alpha expression is specified it will evaluate to opaque value.\nIf none of chrominance expressions are specified, they will evaluate\nto the luminance expression.\n\nThe expressions can use the following variables and functions:\n\n@item N\nThe sequential number of the filtered frame, starting from @code{0}.\n\n@item X\n@item Y\nThe coordinates of the current sample.\n\n@item W\n@item H\nThe width and height of the image.\n\n@item SW\n@item SH\nWidth and height scale depending on the currently filtered plane. It is the\nratio between the corresponding luma plane number of pixels and the current\nplane ones. E.g. for YUV4:2:0 the values are @code{1,1} for the luma plane, and\n@code{0.5,0.5} for chroma planes.\n\n@item T\nTime of the current frame, expressed in seconds.\n\n@item p(x, y)\nReturn the value of the pixel at location (@var{x},@var{y}) of the current\nplane.\n\n@item lum(x, y)\nReturn the value of the pixel at location (@var{x},@var{y}) of the luminance\nplane.\n\n@item cb(x, y)\nReturn the value of the pixel at location (@var{x},@var{y}) of the\nblue-difference chroma plane. Return 0 if there is no such plane.\n\n@item cr(x, y)\nReturn the value of the pixel at location (@var{x},@var{y}) of the\nred-difference chroma plane. Return 0 if there is no such plane.\n\n@item r(x, y)\n@item g(x, y)\n@item b(x, y)\nReturn the value of the pixel at location (@var{x},@var{y}) of the\nred/green/blue component. Return 0 if there is no such component.\n\n@item alpha(x, y)\nReturn the value of the pixel at location (@var{x},@var{y}) of the alpha\nplane. 
Return 0 if there is no such plane.\n\n@item interpolation\nSet one of interpolation methods:\n@table @option\n@item nearest, n\n@item bilinear, b\nDefault is bilinear.\n\nFor functions, if @var{x} and @var{y} are outside the area, the value will be\nautomatically clipped to the closer edge.\n\n","examples":"@subsection Examples\n\n@itemize\n@item\nFlip the image horizontally:\n@example\ngeq=p(W-X\\,Y)\n@end example\n\n@item\nGenerate a bidimensional sine wave, with angle @code{PI/3} and a\nwavelength of 100 pixels:\n@example\ngeq=128 + 100*sin(2*(PI/100)*(cos(PI/3)*(X-50*T) + sin(PI/3)*Y)):128:128\n@end example\n\n@item\nGenerate a fancy enigmatic moving light:\n@example\nnullsrc=s=256x256,geq=random(1)/hypot(X-cos(N*0.07)*W/2-W/2\\,Y-sin(N*0.09)*H/2-H/2)^2*1000000*sin(N*0.02):128:128\n@end example\n\n@item\nGenerate a quick emboss effect:\n@example\nformat=gray,geq=lum_expr='(p(X,Y)+(256-p(X-4,Y-4)))/2'\n@end example\n\n@item\nModify RGB components depending on pixel position:\n@example\ngeq=r='X/W*r(X,Y)':g='(1-X/W)*g(X,Y)':b='(H-Y)/H*b(X,Y)'\n@end example\n\n@item\nCreate a radial gradient that is the same size as the input (also see\nthe @ref{vignette} filter):\n@example\ngeq=lum=255*gauss((X/W-0.5)*3)*gauss((Y/H-0.5)*3)/gauss(0)/gauss(0),format=gray\n@end example\n@end itemize\n\n"}]},{"filtergroup":["gradfun"],"info":"\nFix the banding artifacts that are sometimes introduced into nearly flat\nregions by truncation to 8-bit color depth.\nInterpolate the gradients that should go where the bands are, and\ndither them.\n\nIt is designed for playback only. Do not use it prior to\nlossy compression, because compression tends to lose the dither and\nbring back the bands.\n\nIt accepts the following parameters:\n\n","options":[{"names":["strength"],"info":"The maximum amount by which the filter will change any one pixel. This is also\nthe threshold for detecting nearly flat regions. Acceptable values range from\n.51 to 64; the default value is 1.2. 
Out-of-range values will be clipped to the\nvalid range.\n\n"},{"names":["radius"],"info":"The neighborhood to fit the gradient to. A larger radius makes for smoother\ngradients, but also prevents the filter from modifying the pixels near detailed\nregions. Acceptable values are 8-32; the default value is 16. Out-of-range\nvalues will be clipped to the valid range.\n\n\nAlternatively, the options can be specified as a flat string:\n@var{strength}[:@var{radius}]\n\n","examples":"@subsection Examples\n\n@itemize\n@item\nApply the filter with a @code{3.5} strength and radius of @code{8}:\n@example\ngradfun=3.5:8\n@end example\n\n@item\nSpecify radius, omitting the strength (which will fall back to the default\nvalue):\n@example\ngradfun=radius=8\n@end example\n\n@end itemize\n\n@anchor{graphmonitor}\n"}]},{"filtergroup":["graphmonitor"],"info":"Show various filtergraph stats.\n\nWith this filter one can debug a complete filtergraph,\nespecially issues with links filling with queued frames.\n\n","options":[{"names":["size","s"],"info":"Set video output size. Default is @var{hd720}.\n\n"},{"names":["opacity","o"],"info":"Set video opacity. Default is @var{0.9}. 
Allowed range is from @var{0} to @var{1}.\n\n"},{"names":["mode","m"],"info":"Set output mode, can be @var{full} or @var{compact}.\nIn @var{compact} mode only filters with some queued frames have their stats displayed.\n\n"},{"names":["flags","f"],"info":"Set flags which enable which stats are shown in video.\n\nAvailable values for flags are:\n@item queue\nDisplay number of queued frames in each link.\n\n@item frame_count_in\nDisplay number of frames taken from filter.\n\n@item frame_count_out\nDisplay number of frames given out from filter.\n\n@item pts\nDisplay current filtered frame pts.\n\n@item time\nDisplay current filtered frame time.\n\n@item timebase\nDisplay time base for filter link.\n\n@item format\nDisplay used format for filter link.\n\n@item size\nDisplay video size or number of audio channels in case of audio used by filter link.\n\n@item rate\nDisplay video frame rate or sample rate in case of audio used by filter link.\n\n"},{"names":["rate","r"],"info":"Set upper limit for video rate of output stream. Default value is @var{25}.\nThis guarantees that the output video frame rate will not be higher than this value.\n\n"}]},{"filtergroup":["greyedge"],"info":"A color constancy variation filter which estimates scene illumination via the grey edge algorithm\nand corrects the scene colors accordingly.\n\nSee: @url{https://staff.science.uva.nl/th.gevers/pub/GeversTIP07.pdf}\n\n","options":[{"names":["difford"],"info":"The order of differentiation to be applied on the scene. Must be chosen in the range\n[0,2] and default value is 1.\n\n"},{"names":["minknorm"],"info":"The Minkowski parameter to be used for calculating the Minkowski distance. Must\nbe chosen in the range [0,20] and default value is 1. Set to 0 for getting\nmax value instead of calculating Minkowski distance.\n\n"},{"names":["sigma"],"info":"The standard deviation of Gaussian blur to be applied on the scene. Must be\nchosen in the range [0,1024.0] and default value is 1. 
floor( @var{sigma} * break_off_sigma(3) )\ncan't be equal to 0 if @var{difford} is greater than 0.\n\n","examples":"@subsection Examples\n@itemize\n\n@item\nGrey Edge:\n@example\ngreyedge=difford=1:minknorm=5:sigma=2\n@end example\n\n@item\nMax Edge:\n@example\ngreyedge=difford=1:minknorm=0:sigma=2\n@end example\n\n@end itemize\n\n@anchor{haldclut}\n"}]},{"filtergroup":["haldclut"],"info":"\nApply a Hald CLUT to a video stream.\n\nFirst input is the video stream to process, and second one is the Hald CLUT.\nThe Hald CLUT input can be a simple picture or a complete video stream.\n\n","options":[{"names":["shortest"],"info":"Force termination when the shortest input terminates. Default is @code{0}.\n"},{"names":["repeatlast"],"info":"Continue applying the last CLUT after the end of the stream. A value of\n@code{0} disable the filter after the last frame of the CLUT is reached.\nDefault is @code{1}.\n\n@code{haldclut} also has the same interpolation options as @ref{lut3d} (both\nfilters share the same internals).\n\nThis filter also supports the @ref{framesync} options.\n\nMore information about the Hald CLUT can be found on Eskil Steenberg's website\n(Hald CLUT author) at @url{http://www.quelsolaar.com/technology/clut.html}.\n\n@subsection Workflow examples\n\n@subsubsection Hald CLUT video stream\n\nGenerate an identity Hald CLUT stream altered with various effects:\n@example\nffmpeg -f lavfi -i @ref{haldclutsrc}=8 -vf \"hue=H=2*PI*t:s=sin(2*PI*t)+1, curves=cross_process\" -t 10 -c:v ffv1 clut.nut\n@end example\n\nNote: make sure you use a lossless codec.\n\nThen use it with @code{haldclut} to apply it on some random stream:\n@example\nffmpeg -f lavfi -i mandelbrot -i clut.nut -filter_complex '[0][1] haldclut' -t 20 mandelclut.mkv\n@end example\n\nThe Hald CLUT will be applied to the 10 first seconds (duration of\n@file{clut.nut}), then the latest picture of that CLUT stream will be applied\nto the remaining frames of the @code{mandelbrot} stream.\n\n@subsubsection 
Hald CLUT with preview\n\nA Hald CLUT is supposed to be a squared image of @code{Level*Level*Level} by\n@code{Level*Level*Level} pixels. For a given Hald CLUT, FFmpeg will select the\nbiggest possible square starting at the top left of the picture. The remaining\npadding pixels (bottom or right) will be ignored. This area can be used to add\na preview of the Hald CLUT.\n\nTypically, the following generated Hald CLUT will be supported by the\n@code{haldclut} filter:\n\n@example\nffmpeg -f lavfi -i @ref{haldclutsrc}=8 -vf \"\n pad=iw+320 [padded_clut];\n smptebars=s=320x256, split [a][b];\n [padded_clut][a] overlay=W-320:h, curves=color_negative [main];\n [main][b] overlay=W-320\" -frames:v 1 clut.png\n@end example\n\nIt contains the original and a preview of the effect of the CLUT: SMPTE color\nbars are displayed on the right-top, and below the same color bars processed by\nthe color changes.\n\nThen, the effect of this Hald CLUT can be visualized with:\n@example\nffplay input.mkv -vf \"movie=clut.png, [in] haldclut\"\n@end example\n\n"}]},{"filtergroup":["hflip"],"info":"\nFlip the input video horizontally.\n\nFor example, to horizontally flip the input video with @command{ffmpeg}:\n@example\nffmpeg -i in.avi -vf \"hflip\" out.avi\n@end example\n\n","options":[]},{"filtergroup":["histeq"],"info":"This filter applies a global color histogram equalization on a\nper-frame basis.\n\nIt can be used to correct video that has a compressed range of pixel\nintensities. The filter redistributes the pixel intensities to\nequalize their distribution across the intensity range. It may be\nviewed as an \"automatically adjusting contrast filter\". This filter is\nuseful only for correcting degraded or poorly captured source\nvideo.\n\n","options":[{"names":["strength"],"info":"Determine the amount of equalization to be applied. As the strength\nis reduced, the distribution of pixel intensities more-and-more\napproaches that of the input frame. 
The value must be a float number\nin the range [0,1] and defaults to 0.200.\n\n"},{"names":["intensity"],"info":"Set the maximum intensity that can be generated and scale the output\nvalues appropriately. The strength should be set as desired and then\nthe intensity can be limited if needed to avoid washing-out. The value\nmust be a float number in the range [0,1] and defaults to 0.210.\n\n"},{"names":["antibanding"],"info":"Set the antibanding level. If enabled the filter will randomly vary\nthe luminance of output pixels by a small amount to avoid banding of\nthe histogram. Possible values are @code{none}, @code{weak} or\n@code{strong}. It defaults to @code{none}.\n\n"}]},{"filtergroup":["histogram"],"info":"\nCompute and draw a color distribution histogram for the input video.\n\nThe computed histogram is a representation of the color component\ndistribution in an image.\n\nStandard histogram displays the color components distribution in an image.\nDisplays color graph for each color component. Shows distribution of\nthe Y, U, V, A or R, G, B components, depending on input format, in the\ncurrent frame. Below each graph a color component scale meter is shown.\n\n","options":[{"names":["level_height"],"info":"Set height of level. Default value is @code{200}.\nAllowed range is [50, 2048].\n\n"},{"names":["scale_height"],"info":"Set height of color scale. Default value is @code{12}.\nAllowed range is [0, 40].\n\n"},{"names":["display_mode"],"info":"Set display mode.\nIt accepts the following values:\n@item stack\nPer color component graphs are placed below each other.\n\n@item parade\nPer color component graphs are placed side by side.\n\n@item overlay\nPresents information identical to that in the @code{parade}, except\nthat the graphs representing color components are superimposed directly\nover one another.\nDefault is @code{stack}.\n\n"},{"names":["levels_mode"],"info":"Set mode. 
Can be either @code{linear}, or @code{logarithmic}.\nDefault is @code{linear}.\n\n"},{"names":["components"],"info":"Set what color components to display.\nDefault is @code{7}.\n\n"},{"names":["fgopacity"],"info":"Set foreground opacity. Default is @code{0.7}.\n\n"},{"names":["bgopacity"],"info":"Set background opacity. Default is @code{0.5}.\n\n","examples":"@subsection Examples\n\n@itemize\n\n@item\nCalculate and draw histogram:\n@example\nffplay -i input -vf histogram\n@end example\n\n@end itemize\n\n@anchor{hqdn3d}\n"}]},{"filtergroup":["hqdn3d"],"info":"\nThis is a high precision/quality 3d denoise filter. It aims to reduce\nimage noise, producing smooth images and making still images really\nstill. It should enhance compressibility.\n\nIt accepts the following optional parameters:\n\n","options":[{"names":["luma_spatial"],"info":"A non-negative floating point number which specifies spatial luma strength.\nIt defaults to 4.0.\n\n"},{"names":["chroma_spatial"],"info":"A non-negative floating point number which specifies spatial chroma strength.\nIt defaults to 3.0*@var{luma_spatial}/4.0.\n\n"},{"names":["luma_tmp"],"info":"A floating point number which specifies luma temporal strength. It defaults to\n6.0*@var{luma_spatial}/4.0.\n\n"},{"names":["chroma_tmp"],"info":"A floating point number which specifies chroma temporal strength. 
It defaults to\n@var{luma_tmp}*@var{chroma_spatial}/@var{luma_spatial}.\n\n@subsection Commands\nThis filter supports same @ref{commands} as options.\nThe command accepts the same syntax of the corresponding option.\n\nIf the specified expression is not valid, it is kept at its current\nvalue.\n\n@anchor{hwdownload}\n"}]},{"filtergroup":["hwdownload"],"info":"\nDownload hardware frames to system memory.\n\nThe input must be in hardware frames, and the output a non-hardware format.\nNot all formats will be supported on the output - it may be necessary to insert\nan additional @option{format} filter immediately following in the graph to get\nthe output in a supported format.\n\n","options":[]},{"filtergroup":["hwmap"],"info":"\nMap hardware frames to system memory or to another device.\n\nThis filter has several different modes of operation; which one is used depends\non the input and output formats:\n@itemize\n@item\nHardware frame input, normal frame output\n\nMap the input frames to system memory and pass them to the output. If the\noriginal hardware frame is later required (for example, after overlaying\nsomething else on part of it), the @option{hwmap} filter can be used again\nin the next mode to retrieve it.\n@item\nNormal frame input, hardware frame output\n\nIf the input is actually a software-mapped hardware frame, then unmap it -\nthat is, return the original hardware frame.\n\nOtherwise, a device must be provided. Create new hardware surfaces on that\ndevice for the output, then map them back to the software format at the input\nand give those frames to the preceding filter. This will then act like the\n@option{hwupload} filter, but may be able to avoid an additional copy when\nthe input is already in a compatible format.\n@item\nHardware frame input and output\n\nA device must be supplied for the output, either directly or with the\n@option{derive_device} option. 
The input and output devices must be of\ndifferent types and compatible - the exact meaning of this is\nsystem-dependent, but typically it means that they must refer to the same\nunderlying hardware context (for example, refer to the same graphics card).\n\nIf the input frames were originally created on the output device, then unmap\nto retrieve the original frames.\n\nOtherwise, map the frames to the output device - create new hardware frames\non the output corresponding to the frames on the input.\n@end itemize\n\nThe following additional parameters are accepted:\n\n","options":[{"names":["mode"],"info":"Set the frame mapping mode. Some combination of:\n@item read\nThe mapped frame should be readable.\n@item write\nThe mapped frame should be writeable.\n@item overwrite\nThe mapping will always overwrite the entire frame.\n\nThis may improve performance in some cases, as the original contents of the\nframe need not be loaded.\n@item direct\nThe mapping must not involve any copying.\n\nIndirect mappings to copies of frames are created in some cases where either\ndirect mapping is not possible or it would have unexpected properties.\nSetting this flag ensures that the mapping is direct and will fail if that is\nnot possible.\nDefaults to @var{read+write} if not specified.\n\n"},{"names":["derive_device","@var{type}"],"info":"Rather than using the device supplied at initialisation, instead derive a new\ndevice of type @var{type} from the device the input frames exist on.\n\n"},{"names":["reverse"],"info":"In a hardware to hardware mapping, map in reverse - create frames in the sink\nand map them back to the source. 
This may be necessary in some cases where\na mapping in one direction is required but only the opposite direction is\nsupported by the devices being used.\n\nThis option is dangerous - it may break the preceding filter in undefined\nways if there are any additional constraints on that filter's output.\nDo not use it without fully understanding the implications of its use.\n\n@anchor{hwupload}\n"}]},{"filtergroup":["hwupload"],"info":"\nUpload system memory frames to hardware surfaces.\n\nThe device to upload to must be supplied when the filter is initialised. If\nusing ffmpeg, select the appropriate device with the @option{-filter_hw_device}\noption.\n\n@anchor{hwupload_cuda}\n","options":[]},{"filtergroup":["hwupload_cuda"],"info":"\nUpload system memory frames to a CUDA device.\n\nIt accepts the following optional parameters:\n\n","options":[{"names":["device"],"info":"The number of the CUDA device to use\n\n"}]},{"filtergroup":["hqx"],"info":"\nApply a high-quality magnification filter designed for pixel art. This filter\nwas originally created by Maxim Stepin.\n\nIt accepts the following option:\n\n","options":[{"names":["n"],"info":"Set the scaling dimension: @code{2} for @code{hq2x}, @code{3} for\n@code{hq3x} and @code{4} for @code{hq4x}.\nDefault is @code{3}.\n\n"}]},{"filtergroup":["hstack"],"info":"Stack input videos horizontally.\n\nAll streams must be of same pixel format and of same height.\n\nNote that this filter is faster than using @ref{overlay} and @ref{pad} filter\nto create same output.\n\nThe filter accepts the following option:\n\n","options":[{"names":["inputs"],"info":"Set number of input streams. Default is 2.\n\n"},{"names":["shortest"],"info":"If set to 1, force the output to terminate when the shortest input\nterminates. 
Default value is 0.\n\n"}]},{"filtergroup":["hue"],"info":"\nModify the hue and/or the saturation of the input.\n\nIt accepts the following parameters:\n\n","options":[{"names":["h"],"info":"Specify the hue angle as a number of degrees. It accepts an expression,\nand defaults to \"0\".\n\n"},{"names":["s"],"info":"Specify the saturation in the [-10,10] range. It accepts an expression and\ndefaults to \"1\".\n\n"},{"names":["H"],"info":"Specify the hue angle as a number of radians. It accepts an\nexpression, and defaults to \"0\".\n\n"},{"names":["b"],"info":"Specify the brightness in the [-10,10] range. It accepts an expression and\ndefaults to \"0\".\n\n@option{h} and @option{H} are mutually exclusive, and can't be\nspecified at the same time.\n\nThe @option{b}, @option{h}, @option{H} and @option{s} option values are\nexpressions containing the following constants:\n\n@item n\nframe count of the input frame starting from 0\n\n@item pts\npresentation timestamp of the input frame expressed in time base units\n\n@item r\nframe rate of the input video, NAN if the input frame rate is unknown\n\n@item t\ntimestamp expressed in seconds, NAN if the input timestamp is unknown\n\n@item tb\ntime base of the input video\n\n","examples":"@subsection Examples\n\n@itemize\n@item\nSet the hue to 90 degrees and the saturation to 1.0:\n@example\nhue=h=90:s=1\n@end example\n\n@item\nSame command but expressing the hue in radians:\n@example\nhue=H=PI/2:s=1\n@end example\n\n@item\nRotate hue and make the saturation swing between 0\nand 2 over a period of 1 second:\n@example\nhue=\"H=2*PI*t: s=sin(2*PI*t)+1\"\n@end example\n\n@item\nApply a 3 seconds saturation fade-in effect starting at 0:\n@example\nhue=\"s=min(t/3\\,1)\"\n@end example\n\nThe general fade-in expression can be written as:\n@example\nhue=\"s=min(0\\, max((t-START)/DURATION\\, 1))\"\n@end example\n\n@item\nApply a 3 seconds saturation fade-out effect starting at 5 seconds:\n@example\nhue=\"s=max(0\\, min(1\\, 
(8-t)/3))\"\n@end example\n\nThe general fade-out expression can be written as:\n@example\nhue=\"s=max(0\\, min(1\\, (START+DURATION-t)/DURATION))\"\n@end example\n\n@end itemize\n\n@subsection Commands\n\nThis filter supports the following commands:\n@table @option\n@item b\n@item s\n@item h\n@item H\nModify the hue and/or the saturation and/or brightness of the input video.\nThe command accepts the same syntax of the corresponding option.\n\nIf the specified expression is not valid, it is kept at its current\nvalue.\n@end table\n\n"}]},{"filtergroup":["hysteresis"],"info":"\nGrow first stream into second stream by connecting components.\nThis makes it possible to build more robust edge masks.\n\nThis filter accepts the following options:\n\n","options":[{"names":["planes"],"info":"Set which planes will be processed as bitmap, unprocessed planes will be\ncopied from first stream.\nBy default value 0xf, all planes will be processed.\n\n"},{"names":["threshold"],"info":"Set threshold which is used in filtering. If pixel component value is higher than\nthis value filter algorithm for connecting components is activated.\nBy default value is 0.\n\n"}]},{"filtergroup":["idet"],"info":"\nDetect video interlacing type.\n\nThis filter tries to detect if the input frames are interlaced, progressive,\ntop or bottom field first. It will also try to detect fields that are\nrepeated between adjacent frames (a sign of telecine).\n\nSingle frame detection considers only immediately adjacent frames when classifying each frame.\nMultiple frame detection incorporates the classification history of previous frames.\n\nThe filter will log these metadata values:\n\n","options":[{"names":["single.current_frame"],"info":"Detected type of current frame using single-frame detection. 
One of:\n``tff'' (top field first), ``bff'' (bottom field first),\n``progressive'', or ``undetermined''\n\n"},{"names":["single.tff"],"info":"Cumulative number of frames detected as top field first using single-frame detection.\n\n"},{"names":["multiple.tff"],"info":"Cumulative number of frames detected as top field first using multiple-frame detection.\n\n"},{"names":["single.bff"],"info":"Cumulative number of frames detected as bottom field first using single-frame detection.\n\n"},{"names":["multiple.current_frame"],"info":"Detected type of current frame using multiple-frame detection. One of:\n``tff'' (top field first), ``bff'' (bottom field first),\n``progressive'', or ``undetermined''\n\n"},{"names":["multiple.bff"],"info":"Cumulative number of frames detected as bottom field first using multiple-frame detection.\n\n"},{"names":["single.progressive"],"info":"Cumulative number of frames detected as progressive using single-frame detection.\n\n"},{"names":["multiple.progressive"],"info":"Cumulative number of frames detected as progressive using multiple-frame detection.\n\n"},{"names":["single.undetermined"],"info":"Cumulative number of frames that could not be classified using single-frame detection.\n\n"},{"names":["multiple.undetermined"],"info":"Cumulative number of frames that could not be classified using multiple-frame detection.\n\n"},{"names":["repeated.current_frame"],"info":"Which field in the current frame is repeated from the last. 
One of ``neither'', ``top'', or ``bottom''.\n\n"},{"names":["repeated.neither"],"info":"Cumulative number of frames with no repeated field.\n\n"},{"names":["repeated.top"],"info":"Cumulative number of frames with the top field repeated from the previous frame's top field.\n\n"},{"names":["repeated.bottom"],"info":"Cumulative number of frames with the bottom field repeated from the previous frame's bottom field.\n\nThe filter accepts the following options:\n\n@item intl_thres\nSet interlacing threshold.\n@item prog_thres\nSet progressive threshold.\n@item rep_thres\nThreshold for repeated field detection.\n@item half_life\nNumber of frames after which a given frame's contribution to the\nstatistics is halved (i.e., it contributes only 0.5 to its\nclassification). The default of 0 means that all frames seen are given\nfull weight of 1.0 forever.\n@item analyze_interlaced_flag\nWhen this is not 0 then idet will use the specified number of frames to determine\nif the interlaced flag is accurate; it will not count undetermined frames.\nIf the flag is found to be accurate it will be used without any further\ncomputations; if it is found to be inaccurate it will be cleared without any\nfurther computations. This allows inserting the idet filter as a low computational\nmethod to clean up the interlaced flag.\n\n"}]},{"filtergroup":["il"],"info":"\nDeinterleave or interleave fields.\n\nThis filter allows one to process interlaced image fields without\ndeinterlacing them. Deinterleaving splits the input frame into 2\nfields (so-called half pictures). 
Odd lines are moved to the top\nhalf of the output image, even lines to the bottom half.\nYou can process (filter) them independently and then re-interleave them.\n\n","options":[{"names":["luma_mode","l"],"info":""},{"names":["chroma_mode","c"],"info":""},{"names":["alpha_mode","a"],"info":"Available values for @var{luma_mode}, @var{chroma_mode} and\n@var{alpha_mode} are:\n\n@item none\nDo nothing.\n\n@item deinterleave, d\nDeinterleave fields, placing one above the other.\n\n@item interleave, i\nInterleave fields. Reverse the effect of deinterleaving.\nDefault value is @code{none}.\n\n"},{"names":["luma_swap","ls"],"info":""},{"names":["chroma_swap","cs"],"info":""},{"names":["alpha_swap","as"],"info":"Swap luma/chroma/alpha fields. Exchange even & odd lines. Default value is @code{0}.\n\n"}]},{"filtergroup":["inflate"],"info":"\nApply inflate effect to the video.\n\nThis filter replaces the pixel by the local(3x3) average by taking into account\nonly values higher than the pixel.\n\nIt accepts the following options:\n\n","options":[{"names":["threshold0"],"info":""},{"names":["threshold1"],"info":""},{"names":["threshold2"],"info":""},{"names":["threshold3"],"info":"Limit the maximum change for each plane, default is 65535.\nIf 0, plane will remain unchanged.\n\n"}]},{"filtergroup":["interlace"],"info":"\nSimple interlacing filter from progressive contents. This interleaves upper (or\nlower) lines from odd frames with lower (or upper) lines from even frames,\nhalving the frame rate and preserving image height.\n\n@example\n Original Original New Frame\n Frame 'j' Frame 'j+1' (tff)\n ========== =========== ==================\n Line 0 --------------------> Frame 'j' Line 0\n Line 1 Line 1 ----> Frame 'j+1' Line 1\n Line 2 ---------------------> Frame 'j' Line 2\n Line 3 Line 3 ----> Frame 'j+1' Line 3\n ... ... 
...\nNew Frame + 1 will be generated by Frame 'j+2' and Frame 'j+3' and so on\n@end example\n\nIt accepts the following optional parameters:\n\n","options":[{"names":["scan"],"info":"This determines whether the interlaced frame is taken from the even\n(tff - default) or odd (bff) lines of the progressive frame.\n\n"},{"names":["lowpass"],"info":"Vertical lowpass filter to avoid twitter interlacing and\nreduce moire patterns.\n\n@item 0, off\nDisable vertical lowpass filter\n\n@item 1, linear\nEnable linear filter (default)\n\n@item 2, complex\nEnable complex filter. This will reduce twitter and moire slightly less,\nbut better retain detail and the subjective sharpness impression.\n\n\n"}]},{"filtergroup":["kerndeint"],"info":"\nDeinterlace input video by applying Donald Graft's adaptive kernel\ndeinterlacing. It works on interlaced parts of a video to produce\nprogressive frames.\n\nThe description of the accepted parameters follows.\n\n","options":[{"names":["thresh"],"info":"Set the threshold which affects the filter's tolerance when\ndetermining if a pixel line must be processed. It must be an integer\nin the range [0,255] and defaults to 10. A value of 0 will result in\napplying the process to every pixel.\n\n"},{"names":["map"],"info":"Paint pixels exceeding the threshold value to white if set to 1.\nDefault is 0.\n\n"},{"names":["order"],"info":"Set the field order. Swap fields if set to 1, leave fields alone if\n0. Default is 0.\n\n"},{"names":["sharp"],"info":"Enable additional sharpening if set to 1. Default is 0.\n\n"},{"names":["twoway"],"info":"Enable twoway sharpening if set to 1. 
Default is 0.\n\n","examples":"@subsection Examples\n\n@itemize\n@item\nApply default values:\n@example\nkerndeint=thresh=10:map=0:order=0:sharp=0:twoway=0\n@end example\n\n@item\nEnable additional sharpening:\n@example\nkerndeint=sharp=1\n@end example\n\n@item\nPaint processed pixels in white:\n@example\nkerndeint=map=1\n@end example\n@end itemize\n\n"}]},{"filtergroup":["lagfun"],"info":"\nSlowly update darker pixels.\n\nThis filter makes short flashes of light appear longer.\nThis filter accepts the following options:\n\n","options":[{"names":["decay"],"info":"Set factor for decaying. Default is .95. Allowed range is from 0 to 1.\n\n"},{"names":["planes"],"info":"Set which planes to filter. Default is all. Allowed range is from 0 to 15.\n\n"}]},{"filtergroup":["lenscorrection"],"info":"\nCorrect radial lens distortion\n\nThis filter can be used to correct for radial distortion as can result from the use\nof wide angle lenses, and thereby re-rectify the image. To find the right parameters\none can use tools available for example as part of opencv or simply trial-and-error.\nTo use opencv use the calibration sample (under samples/cpp) from the opencv sources\nand extract the k1 and k2 coefficients from the resulting matrix.\n\nNote that effectively the same filter is available in the open-source tools Krita and\nDigikam from the KDE project.\n\nIn contrast to the @ref{vignette} filter, which can also be used to compensate lens errors,\nthis filter corrects the distortion of the image, whereas @ref{vignette} corrects the\nbrightness distribution, so you may want to use both filters together in certain\ncases, though you will have to take care of ordering, i.e. whether vignetting should\nbe applied before or after lens correction.\n\n@subsection Options\n\n","options":[{"names":["cx"],"info":"Relative x-coordinate of the focal point of the image, and thereby the center of the\ndistortion. 
This value has a range [0,1] and is expressed as fractions of the image\nwidth. Default is 0.5.\n"},{"names":["cy"],"info":"Relative y-coordinate of the focal point of the image, and thereby the center of the\ndistortion. This value has a range [0,1] and is expressed as fractions of the image\nheight. Default is 0.5.\n"},{"names":["k1"],"info":"Coefficient of the quadratic correction term. This value has a range [-1,1]. 0 means\nno correction. Default is 0.\n"},{"names":["k2"],"info":"Coefficient of the double quadratic correction term. This value has a range [-1,1].\n0 means no correction. Default is 0.\n\nThe formula that generates the correction is:\n\n@var{r_src} = @var{r_tgt} * (1 + @var{k1} * (@var{r_tgt} / @var{r_0})^2 + @var{k2} * (@var{r_tgt} / @var{r_0})^4)\n\nwhere @var{r_0} is half of the image diagonal and @var{r_src} and @var{r_tgt} are the\ndistances from the focal point in the source and target images, respectively.\n\n"}]},{"filtergroup":["lensfun"],"info":"\nApply lens correction via the lensfun library (@url{http://lensfun.sourceforge.net/}).\n\nThe @code{lensfun} filter requires the camera make, camera model, and lens model\nto apply the lens correction. The filter will load the lensfun database and\nquery it to find the corresponding camera and lens entries in the database. As\nlong as these entries can be found with the given options, the filter can\nperform corrections on frames. Note that incomplete strings will result in the\nfilter choosing the best match with the given options, and the filter will\noutput the chosen camera and lens models (logged with level \"info\"). You must\nprovide the make, camera model, and lens model as they are required.\n\n","options":[{"names":["make"],"info":"The make of the camera (for example, \"Canon\"). This option is required.\n\n"},{"names":["model"],"info":"The model of the camera (for example, \"Canon EOS 100D\"). 
This option is\nrequired.\n\n"},{"names":["lens_model"],"info":"The model of the lens (for example, \"Canon EF-S 18-55mm f/3.5-5.6 IS STM\"). This\noption is required.\n\n"},{"names":["mode"],"info":"The type of correction to apply. The following values are valid options:\n\n@item vignetting\nEnables fixing lens vignetting.\n\n@item geometry\nEnables fixing lens geometry. This is the default.\n\n@item subpixel\nEnables fixing chromatic aberrations.\n\n@item vig_geo\nEnables fixing lens vignetting and lens geometry.\n\n@item vig_subpixel\nEnables fixing lens vignetting and chromatic aberrations.\n\n@item distortion\nEnables fixing both lens geometry and chromatic aberrations.\n\n@item all\nEnables all possible corrections.\n\n"},{"names":["focal_length"],"info":"The focal length of the image/video (zoom; expected constant for video). For\nexample, a 18--55mm lens has focal length range of [18--55], so a value in that\nrange should be chosen when using that lens. Default 18.\n\n"},{"names":["aperture"],"info":"The aperture of the image/video (expected constant for video). Note that\naperture is only used for vignetting correction. Default 3.5.\n\n"},{"names":["focus_distance"],"info":"The focus distance of the image/video (expected constant for video). Note that\nfocus distance is only used for vignetting and only slightly affects the\nvignetting correction process. If unknown, leave it at the default value (which\nis 1000).\n\n"},{"names":["scale"],"info":"The scale factor which is applied after transformation. After correction the\nvideo is no longer necessarily rectangular. This parameter controls how much of\nthe resulting image is visible. The value 0 means that a value will be chosen\nautomatically such that there is little or no unmapped area in the output\nimage. 1.0 means that no additional scaling is done. 
Lower values may result\nin more of the corrected image being visible, while higher values may avoid\nunmapped areas in the output.\n\n"},{"names":["target_geometry"],"info":"The target geometry of the output image/video. The following values are valid\noptions:\n\n@item rectilinear (default)\n@item fisheye\n@item panoramic\n@item equirectangular\n@item fisheye_orthographic\n@item fisheye_stereographic\n@item fisheye_equisolid\n@item fisheye_thoby\n"},{"names":["reverse"],"info":"Apply the reverse of image correction (instead of correcting distortion, apply\nit).\n\n"},{"names":["interpolation"],"info":"The type of interpolation used when correcting distortion. The following values\nare valid options:\n\n@item nearest\n@item linear (default)\n@item lanczos\n\n","examples":"@subsection Examples\n\n@itemize\n@item\nApply lens correction with make \"Canon\", camera model \"Canon EOS 100D\", and lens\nmodel \"Canon EF-S 18-55mm f/3.5-5.6 IS STM\" with focal length of \"18\" and\naperture of \"8.0\".\n\n@example\nffmpeg -i input.mov -vf lensfun=make=Canon:model=\"Canon EOS 100D\":lens_model=\"Canon EF-S 18-55mm f/3.5-5.6 IS STM\":focal_length=18:aperture=8 -c:v h264 -b:v 8000k output.mov\n@end example\n\n@item\nApply the same as before, but only for the first 5 seconds of video.\n\n@example\nffmpeg -i input.mov -vf lensfun=make=Canon:model=\"Canon EOS 100D\":lens_model=\"Canon EF-S 18-55mm f/3.5-5.6 IS STM\":focal_length=18:aperture=8:enable='lte(t\\,5)' -c:v h264 -b:v 8000k output.mov\n@end example\n\n@end itemize\n\n"}]},{"filtergroup":["libvmaf"],"info":"\nObtain the VMAF (Video Multi-Method Assessment Fusion)\nscore between two input videos.\n\nThe obtained VMAF score is printed through the logging system.\n\nIt requires Netflix's vmaf library (libvmaf) as a pre-requisite.\nAfter installing the library it can be enabled using:\n@code{./configure --enable-libvmaf --enable-version3}.\nIf no model path is specified it uses the default model: 
@code{vmaf_v0.6.1.pkl}.\n\nThe filter has the following options:\n\n","options":[{"names":["model_path"],"info":"Set the model path which is to be used for SVM.\nDefault value: @code{\"/usr/local/share/model/vmaf_v0.6.1.pkl\"}\n\n"},{"names":["log_path"],"info":"Set the file path to be used to store logs.\n\n"},{"names":["log_fmt"],"info":"Set the format of the log file (xml or json).\n\n"},{"names":["enable_transform"],"info":"This option can enable/disable the @code{score_transform} applied to the final predicted VMAF score,\nif you have specified score_transform option in the input parameter file passed to @code{run_vmaf_training.py}\nDefault value: @code{false}\n\n"},{"names":["phone_model"],"info":"Invokes the phone model which will generate VMAF scores higher than in the\nregular model, which is more suitable for laptop, TV, etc. viewing conditions.\nDefault value: @code{false}\n\n"},{"names":["psnr"],"info":"Enables computing psnr along with vmaf.\nDefault value: @code{false}\n\n"},{"names":["ssim"],"info":"Enables computing ssim along with vmaf.\nDefault value: @code{false}\n\n"},{"names":["ms_ssim"],"info":"Enables computing ms_ssim along with vmaf.\nDefault value: @code{false}\n\n"},{"names":["pool"],"info":"Set the pool method to be used for computing vmaf.\nOptions are @code{min}, @code{harmonic_mean} or @code{mean} (default).\n\n"},{"names":["n_threads"],"info":"Set number of threads to be used when computing vmaf.\nDefault value: @code{0}, which makes use of all available logical processors.\n\n"},{"names":["n_subsample"],"info":"Set interval for frame subsampling used when computing vmaf.\nDefault value: @code{1}\n\n"},{"names":["enable_conf_interval"],"info":"Enables confidence interval.\nDefault value: @code{false}\n\nThis filter also supports the @ref{framesync} options.\n\n","examples":"@subsection Examples\n@itemize\n@item\nIn the examples below, the input file @file{main.mpg} being processed is\ncompared with the reference file 
@file{ref.mpg}.\n\n@example\nffmpeg -i main.mpg -i ref.mpg -lavfi libvmaf -f null -\n@end example\n\n@item\nExample with options:\n@example\nffmpeg -i main.mpg -i ref.mpg -lavfi libvmaf=\"psnr=1:log_fmt=json\" -f null -\n@end example\n\n@item\nExample with options and different containers:\n@example\nffmpeg -i main.mpg -i ref.mkv -lavfi \"[0:v]settb=AVTB,setpts=PTS-STARTPTS[main];[1:v]settb=AVTB,setpts=PTS-STARTPTS[ref];[main][ref]libvmaf=psnr=1:log_fmt=json\" -f null -\n@end example\n@end itemize\n\n"}]},{"filtergroup":["limiter"],"info":"\nLimits the pixel component values to the specified range [min, max].\n\n","options":[{"names":["min"],"info":"Lower bound. Defaults to the lowest allowed value for the input.\n\n"},{"names":["max"],"info":"Upper bound. Defaults to the highest allowed value for the input.\n\n"},{"names":["planes"],"info":"Specify which planes will be processed. Defaults to all available.\n\n"}]},{"filtergroup":["loop"],"info":"\nLoop video frames.\n\n","options":[{"names":["loop"],"info":"Set the number of loops. Setting this value to -1 will result in infinite loops.\nDefault is 0.\n\n"},{"names":["size"],"info":"Set maximal size in number of frames. Default is 0.\n\n"},{"names":["start"],"info":"Set first frame of loop. 
Default is 0.\n\n","examples":"@subsection Examples\n\n@itemize\n@item\nLoop single first frame infinitely:\n@example\nloop=loop=-1:size=1:start=0\n@end example\n\n@item\nLoop single first frame 10 times:\n@example\nloop=loop=10:size=1:start=0\n@end example\n\n@item\nLoop 10 first frames 5 times:\n@example\nloop=loop=5:size=10:start=0\n@end example\n@end itemize\n\n"}]},{"filtergroup":["lut1d"],"info":"\nApply a 1D LUT to an input video.\n\n","options":[{"names":["file"],"info":"Set the 1D LUT file name.\n\nCurrently supported formats:\n@item cube\nIridas\n@item csp\ncineSpace\n\n"},{"names":["interp"],"info":"Select interpolation mode.\n\nAvailable values are:\n\n@item nearest\nUse values from the nearest defined point.\n@item linear\nInterpolate values using the linear interpolation.\n@item cosine\nInterpolate values using the cosine interpolation.\n@item cubic\nInterpolate values using the cubic interpolation.\n@item spline\nInterpolate values using the spline interpolation.\n\n@anchor{lut3d}\n"}]},{"filtergroup":["lut3d"],"info":"\nApply a 3D LUT to an input video.\n\n","options":[{"names":["file"],"info":"Set the 3D LUT file name.\n\nCurrently supported formats:\n@item 3dl\nAfterEffects\n@item cube\nIridas\n@item dat\nDaVinci\n@item m3d\nPandora\n@item csp\ncineSpace\n"},{"names":["interp"],"info":"Select interpolation mode.\n\nAvailable values are:\n\n@item nearest\nUse values from the nearest defined point.\n@item trilinear\nInterpolate values using the 8 points defining a cube.\n@item tetrahedral\nInterpolate values using a tetrahedron.\n\n"}]},{"filtergroup":["lumakey"],"info":"\nTurn certain luma values into transparency.\n\n","options":[{"names":["threshold"],"info":"Set the luma which will be used as base for transparency.\nDefault value is @code{0}.\n\n"},{"names":["tolerance"],"info":"Set the range of luma values to be keyed out.\nDefault value is @code{0.01}.\n\n"},{"names":["softness"],"info":"Set the range of softness. 
Default value is @code{0}.\nUse this to control gradual transition from zero to full transparency.\n\n@subsection Commands\nThis filter supports the same @ref{commands} as options.\nThe command accepts the same syntax as the corresponding option.\n\nIf the specified expression is not valid, it is kept at its current\nvalue.\n\n"}]},{"filtergroup":["lut","lutrgb","lutyuv"],"info":"\nCompute a look-up table for binding each pixel component input value\nto an output value, and apply it to the input video.\n\n@var{lutyuv} applies a lookup table to a YUV input video, @var{lutrgb}\nto an RGB input video.\n\nThese filters accept the following parameters:\n","options":[{"names":["c0"],"info":"set first pixel component expression\n"},{"names":["c1"],"info":"set second pixel component expression\n"},{"names":["c2"],"info":"set third pixel component expression\n"},{"names":["c3"],"info":"set fourth pixel component expression, corresponds to the alpha component\n\n"},{"names":["r"],"info":"set red component expression\n"},{"names":["g"],"info":"set green component expression\n"},{"names":["b"],"info":"set blue component expression\n"},{"names":["a"],"info":"set alpha component expression\n\n"},{"names":["y"],"info":"set Y/luminance component expression\n"},{"names":["u"],"info":"set U/Cb component expression\n"},{"names":["v"],"info":"set V/Cr component expression\n\nEach of them specifies the expression to use for computing the lookup table for\nthe corresponding pixel component values.\n\nThe exact component associated to each of the @var{c*} options depends on the\nformat in input.\n\nThe @var{lut} filter requires either YUV or RGB pixel formats in input,\n@var{lutrgb} requires RGB pixel formats in input, and @var{lutyuv} requires YUV.\n\nThe expressions can contain the following constants and functions:\n\n@item w\n@item h\nThe input width and height.\n\n@item val\nThe input value for the pixel component.\n\n@item clipval\nThe input value, clipped to the @var{minval}-@var{maxval} 
range.\n\n@item maxval\nThe maximum value for the pixel component.\n\n@item minval\nThe minimum value for the pixel component.\n\n@item negval\nThe negated value for the pixel component value, clipped to the\n@var{minval}-@var{maxval} range; it corresponds to the expression\n\"maxval-clipval+minval\".\n\n@item clip(val)\nThe computed value in @var{val}, clipped to the\n@var{minval}-@var{maxval} range.\n\n@item gammaval(gamma)\nThe computed gamma correction value of the pixel component value,\nclipped to the @var{minval}-@var{maxval} range. It corresponds to the\nexpression\n\"pow((clipval-minval)/(maxval-minval)\\,@var{gamma})*(maxval-minval)+minval\"\n\n\nAll expressions default to \"val\".\n\n","examples":"@subsection Examples\n\n@itemize\n@item\nNegate input video:\n@example\nlutrgb=\"r=maxval+minval-val:g=maxval+minval-val:b=maxval+minval-val\"\nlutyuv=\"y=maxval+minval-val:u=maxval+minval-val:v=maxval+minval-val\"\n@end example\n\nThe above is the same as:\n@example\nlutrgb=\"r=negval:g=negval:b=negval\"\nlutyuv=\"y=negval:u=negval:v=negval\"\n@end example\n\n@item\nNegate luminance:\n@example\nlutyuv=y=negval\n@end example\n\n@item\nRemove chroma components, turning the video into a graytone image:\n@example\nlutyuv=\"u=128:v=128\"\n@end example\n\n@item\nApply a luma burning effect:\n@example\nlutyuv=\"y=2*val\"\n@end example\n\n@item\nRemove green and blue components:\n@example\nlutrgb=\"g=0:b=0\"\n@end example\n\n@item\nSet a constant alpha channel value on input:\n@example\nformat=rgba,lutrgb=a=\"maxval-minval/2\"\n@end example\n\n@item\nCorrect luminance gamma by a factor of 0.5:\n@example\nlutyuv=y=gammaval(0.5)\n@end example\n\n@item\nDiscard least significant bits of luma:\n@example\nlutyuv=y='bitand(val, 128+64+32)'\n@end example\n\n@item\nTechnicolor like effect:\n@example\nlutyuv=u='(val-maxval/2)*2+maxval/2':v='(val-maxval/2)*2+maxval/2'\n@end example\n@end itemize\n\n"}]},{"filtergroup":["lut2","tlut2"],"info":"\nThe @code{lut2} filter takes two 
input streams and outputs one\nstream.\n\nThe @code{tlut2} (time lut2) filter takes two consecutive frames\nfrom one single stream.\n\nThis filter accepts the following parameters:\n","options":[{"names":["c0"],"info":"set first pixel component expression\n"},{"names":["c1"],"info":"set second pixel component expression\n"},{"names":["c2"],"info":"set third pixel component expression\n"},{"names":["c3"],"info":"set fourth pixel component expression, corresponds to the alpha component\n\n"},{"names":["d"],"info":"set output bit depth, only available for @code{lut2} filter. By default is 0,\nwhich means bit depth is automatically picked from first input format.\n\nEach of them specifies the expression to use for computing the lookup table for\nthe corresponding pixel component values.\n\nThe exact component associated to each of the @var{c*} options depends on the\nformat in inputs.\n\nThe expressions can contain the following constants:\n\n@item w\n@item h\nThe input width and height.\n\n@item x\nThe first input value for the pixel component.\n\n@item y\nThe second input value for the pixel component.\n\n@item bdx\nThe first input video bit depth.\n\n@item bdy\nThe second input video bit depth.\n\nAll expressions default to \"x\".\n\n","examples":"@subsection Examples\n\n@itemize\n@item\nHighlight differences between two RGB video streams:\n@example\nlut2='ifnot(x-y,0,pow(2,bdx)-1):ifnot(x-y,0,pow(2,bdx)-1):ifnot(x-y,0,pow(2,bdx)-1)'\n@end example\n\n@item\nHighlight differences between two YUV video streams:\n@example\nlut2='ifnot(x-y,0,pow(2,bdx)-1):ifnot(x-y,pow(2,bdx-1),pow(2,bdx)-1):ifnot(x-y,pow(2,bdx-1),pow(2,bdx)-1)'\n@end example\n\n@item\nShow max difference between two video streams:\n@example\nlut2='if(lt(x,y),0,if(gt(x,y),pow(2,bdx)-1,pow(2,bdx-1))):if(lt(x,y),0,if(gt(x,y),pow(2,bdx)-1,pow(2,bdx-1))):if(lt(x,y),0,if(gt(x,y),pow(2,bdx)-1,pow(2,bdx-1)))'\n@end example\n@end itemize\n\n"}]},{"filtergroup":["maskedclamp"],"info":"\nClamp the first input 
stream with the second input and third input stream.\n\nReturns the value of the first stream, clamped between second input\nstream - @code{undershoot} and third input stream + @code{overshoot}.\n\nThis filter accepts the following options:\n","options":[{"names":["undershoot"],"info":"Default value is @code{0}.\n\n"},{"names":["overshoot"],"info":"Default value is @code{0}.\n\n"},{"names":["planes"],"info":"Set which planes will be processed as bitmap, unprocessed planes will be\ncopied from first stream.\nBy default value 0xf, all planes will be processed.\n\n"}]},{"filtergroup":["maskedmax"],"info":"\nMerge the second and third input stream into output stream using absolute differences\nbetween second input stream and first input stream and absolute difference between\nthird input stream and first input stream. The picked value will be from second input\nstream if second absolute difference is greater than first one or from third input stream\notherwise.\n\nThis filter accepts the following options:\n","options":[{"names":["planes"],"info":"Set which planes will be processed as bitmap, unprocessed planes will be\ncopied from first stream.\nBy default value 0xf, all planes will be processed.\n\n"}]},{"filtergroup":["maskedmerge"],"info":"\nMerge the first input stream with the second input stream using per pixel\nweights in the third input stream.\n\nA value of 0 in the third stream pixel component means that pixel component\nfrom first stream is returned unchanged, while maximum value (e.g. 255 for\n8-bit videos) means that pixel component from second stream is returned\nunchanged. 
Intermediate values define the amount of merging between both\ninput streams' pixel components.\n\nThis filter accepts the following options:\n","options":[{"names":["planes"],"info":"Set which planes will be processed as bitmap, unprocessed planes will be\ncopied from first stream.\nBy default value 0xf, all planes will be processed.\n\n"}]},{"filtergroup":["maskedmin"],"info":"\nMerge the second and third input stream into output stream using absolute differences\nbetween second input stream and first input stream and absolute difference between\nthird input stream and first input stream. The picked value will be from second input\nstream if second absolute difference is less than first one or from third input stream\notherwise.\n\nThis filter accepts the following options:\n","options":[{"names":["planes"],"info":"Set which planes will be processed as bitmap, unprocessed planes will be\ncopied from first stream.\nBy default value 0xf, all planes will be processed.\n\n"}]},{"filtergroup":["maskfun"],"info":"Create mask from input video.\n\nFor example it is useful to create motion masks after @code{tblend} filter.\n\nThis filter accepts the following options:\n\n","options":[{"names":["low"],"info":"Set low threshold. Any pixel component lower than or equal to this value will be set to 0.\n\n"},{"names":["high"],"info":"Set high threshold. Any pixel component higher than this value will be set to max value\nallowed for current pixel format.\n\n"},{"names":["planes"],"info":"Set planes to filter, by default all available planes are filtered.\n\n"},{"names":["fill"],"info":"Fill all frame pixels with this value.\n\n"},{"names":["sum"],"info":"Set max average pixel value for frame. 
If sum of all pixel components is higher than this\naverage, output frame will be completely filled with value set by @var{fill} option.\nTypically useful for scene changes when used in combination with @code{tblend} filter.\n\n"}]},{"filtergroup":["mcdeint"],"info":"\nApply motion-compensation deinterlacing.\n\nIt needs one field per frame as input and must thus be used together\nwith yadif=1/3 or equivalent.\n\nThis filter accepts the following options:\n","options":[{"names":["mode"],"info":"Set the deinterlacing mode.\n\nIt accepts one of the following values:\n@item fast\n@item medium\n@item slow\nuse iterative motion estimation\n@item extra_slow\nlike @samp{slow}, but use multiple reference frames.\nDefault value is @samp{fast}.\n\n"},{"names":["parity"],"info":"Set the picture field parity assumed for the input video. It must be\none of the following values:\n\n@item 0, tff\nassume top field first\n@item 1, bff\nassume bottom field first\n\nDefault value is @samp{bff}.\n\n"},{"names":["qp"],"info":"Set per-block quantization parameter (QP) used by the internal\nencoder.\n\nHigher values should result in a smoother motion vector field but less\noptimal individual vectors. Default value is 1.\n\n"}]},{"filtergroup":["median"],"info":"\nPick the median pixel from a certain rectangle defined by radius.\n\nThis filter accepts the following options:\n\n","options":[{"names":["radius"],"info":"Set horizontal radius size. Default value is @code{1}.\nAllowed range is an integer from 1 to 127.\n\n"},{"names":["planes"],"info":"Set which planes to process. Default is @code{15}, which is all available planes.\n\n"},{"names":["radiusV"],"info":"Set vertical radius size. 
Default value is @code{0}.\nAllowed range is an integer from 0 to 127.\nIf it is 0, the value will be picked from the horizontal @code{radius} option.\n\n@subsection Commands\nThis filter supports the same @ref{commands} as options.\nThe command accepts the same syntax as the corresponding option.\n\nIf the specified expression is not valid, it is kept at its current\nvalue.\n\n"}]},{"filtergroup":["mergeplanes"],"info":"\nMerge color channel components from several video streams.\n\nThe filter accepts up to 4 input streams, and merges the selected input\nplanes to the output video.\n\nThis filter accepts the following options:\n","options":[{"names":["mapping"],"info":"Set input to output plane mapping. Default is @code{0}.\n\nThe mapping is specified as a bitmap. It should be specified as a\nhexadecimal number in the form 0xAa[Bb[Cc[Dd]]]. 'Aa' describes the\nmapping for the first plane of the output stream. 'A' sets the number of\nthe input stream to use (from 0 to 3), and 'a' the plane number of the\ncorresponding input to use (from 0 to 3). The rest of the mappings are\nsimilar: 'Bb' describes the mapping for the output stream second\nplane, 'Cc' describes the mapping for the output stream third plane and\n'Dd' describes the mapping for the output stream fourth plane.\n\n"},{"names":["format"],"info":"Set output pixel format. 
Default is @code{yuva444p}.\n\n","examples":"@subsection Examples\n\n@itemize\n@item\nMerge three gray video streams of same width and height into single video stream:\n@example\n[a0][a1][a2]mergeplanes=0x001020:yuv444p\n@end example\n\n@item\nMerge 1st yuv444p stream and 2nd gray video stream into yuva444p video stream:\n@example\n[a0][a1]mergeplanes=0x00010210:yuva444p\n@end example\n\n@item\nSwap Y and A plane in yuva444p stream:\n@example\nformat=yuva444p,mergeplanes=0x03010200:yuva444p\n@end example\n\n@item\nSwap U and V plane in yuv420p stream:\n@example\nformat=yuv420p,mergeplanes=0x000201:yuv420p\n@end example\n\n@item\nCast a rgb24 clip to yuv444p:\n@example\nformat=rgb24,mergeplanes=0x000102:yuv444p\n@end example\n@end itemize\n\n"}]},{"filtergroup":["mestimate"],"info":"\nEstimate and export motion vectors using block matching algorithms.\nMotion vectors are stored in frame side data to be used by other filters.\n\nThis filter accepts the following options:\n","options":[{"names":["method"],"info":"Specify the motion estimation method. Accepts one of the following values:\n\n@item esa\nExhaustive search algorithm.\n@item tss\nThree step search algorithm.\n@item tdls\nTwo dimensional logarithmic search algorithm.\n@item ntss\nNew three step search algorithm.\n@item fss\nFour step search algorithm.\n@item ds\nDiamond search algorithm.\n@item hexbs\nHexagon-based search algorithm.\n@item epzs\nEnhanced predictive zonal search algorithm.\n@item umh\nUneven multi-hexagon search algorithm.\nDefault value is @samp{esa}.\n\n"},{"names":["mb_size"],"info":"Macroblock size. Default @code{16}.\n\n"},{"names":["search_param"],"info":"Search parameter. Default @code{7}.\n\n"}]},{"filtergroup":["midequalizer"],"info":"\nApply Midway Image Equalization effect using two video streams.\n\nMidway Image Equalization adjusts a pair of images to have the same\nhistogram, while maintaining their dynamics as much as possible. It's\nuseful for e.g. 
matching exposures from a pair of stereo cameras.\n\nThis filter has two inputs and one output, which must be of same pixel format, but\nmay be of different sizes. The output of filter is first input adjusted with\nmidway histogram of both inputs.\n\nThis filter accepts the following option:\n\n","options":[{"names":["planes"],"info":"Set which planes to process. Default is @code{15}, which is all available planes.\n\n"}]},{"filtergroup":["minterpolate"],"info":"\nConvert the video to specified frame rate using motion interpolation.\n\nThis filter accepts the following options:\n","options":[{"names":["fps"],"info":"Specify the output frame rate. This can be rational e.g. @code{60000/1001}. Frames are dropped if @var{fps} is lower than source fps. Default @code{60}.\n\n"},{"names":["mi_mode"],"info":"Motion interpolation mode. Following values are accepted:\n@item dup\nDuplicate previous or next frame for interpolating new ones.\n@item blend\nBlend source frames. Interpolated frame is mean of previous and next frames.\n@item mci\nMotion compensated interpolation. Following options are effective when this mode is selected:\n\n@table @samp\n@item mc_mode\nMotion compensation mode. Following values are accepted:\n@table @samp\n@item obmc\nOverlapped block motion compensation.\n@item aobmc\nAdaptive overlapped block motion compensation. Window weighting coefficients are controlled adaptively according to the reliabilities of the neighboring motion vectors to reduce oversmoothing.\nDefault mode is @samp{obmc}.\n\n"},{"names":["me_mode"],"info":"Motion estimation mode. Following values are accepted:\n@item bidir\nBidirectional motion estimation. Motion vectors are estimated for each source frame in both forward and backward directions.\n@item bilat\nBilateral motion estimation. Motion vectors are estimated directly for interpolated frame.\nDefault mode is @samp{bilat}.\n\n"},{"names":["me"],"info":"The algorithm to be used for motion estimation. 
Following values are accepted:\n@item esa\nExhaustive search algorithm.\n@item tss\nThree step search algorithm.\n@item tdls\nTwo dimensional logarithmic search algorithm.\n@item ntss\nNew three step search algorithm.\n@item fss\nFour step search algorithm.\n@item ds\nDiamond search algorithm.\n@item hexbs\nHexagon-based search algorithm.\n@item epzs\nEnhanced predictive zonal search algorithm.\n@item umh\nUneven multi-hexagon search algorithm.\nDefault algorithm is @samp{epzs}.\n\n"},{"names":["mb_size"],"info":"Macroblock size. Default @code{16}.\n\n"},{"names":["search_param"],"info":"Motion estimation search parameter. Default @code{32}.\n\n"},{"names":["vsbmc"],"info":"Enable variable-size block motion compensation. Motion estimation is applied with smaller block sizes at object boundaries in order to make them less blurred. Default is @code{0} (disabled).\n\n"},{"names":["scd"],"info":"Scene change detection method. A scene change causes motion vectors to point in random directions, so scene change detection replaces interpolated frames with duplicate ones. May not be needed for other modes. Following values are accepted:\n@item none\nDisable scene change detection.\n@item fdiff\nFrame difference. Corresponding pixel values are compared and if the difference satisfies @var{scd_threshold}, a scene change is detected.\nDefault method is @samp{fdiff}.\n\n"},{"names":["scd_threshold"],"info":"Scene change detection threshold. Default is @code{5.0}.\n\n"}]},{"filtergroup":["mix"],"info":"\nMix several video input streams into one video stream.\n\nA description of the accepted options follows.\n\n","options":[{"names":["nb_inputs"],"info":"The number of inputs. If unspecified, it defaults to 2.\n\n"},{"names":["weights"],"info":"Specify the weight of each input video stream as a sequence.\nWeights are separated by spaces. 
If the number of weights\nis smaller than the number of @var{frames}, the last specified\nweight will be used for all remaining unset weights.\n\n"},{"names":["scale"],"info":"Specify the scale. If it is set, it will be multiplied with the sum\nof each weight multiplied with pixel values to give the final destination\npixel value. By default @var{scale} is auto scaled to sum of weights.\n\n"},{"names":["duration"],"info":"Specify how the end of stream is determined.\n@item longest\nThe duration of the longest input. (default)\n\n@item shortest\nThe duration of the shortest input.\n\n@item first\nThe duration of the first input.\n\n"}]},{"filtergroup":["mpdecimate"],"info":"\nDrop frames that do not differ greatly from the previous frame in\norder to reduce frame rate.\n\nThe main use of this filter is for very-low-bitrate encoding\n(e.g. streaming over dialup modem), but it could in theory be used for\nfixing movies that were inverse-telecined incorrectly.\n\nA description of the accepted options follows.\n\n","options":[{"names":["max"],"info":"Set the maximum number of consecutive frames which can be dropped (if\npositive), or the minimum interval between dropped frames (if\nnegative). 
If the value is 0, the frame is dropped disregarding the\nnumber of previous sequentially dropped frames.\n\nDefault value is 0.\n\n"},{"names":["hi"],"info":""},{"names":["lo"],"info":""},{"names":["frac"],"info":"Set the dropping threshold values.\n\nValues for @option{hi} and @option{lo} are for 8x8 pixel blocks and\nrepresent actual pixel value differences, so a threshold of 64\ncorresponds to 1 unit of difference for each pixel, or the same spread\nout differently over the block.\n\nA frame is a candidate for dropping if no 8x8 blocks differ by more\nthan a threshold of @option{hi}, and if no more than @option{frac} blocks (1\nmeaning the whole image) differ by more than a threshold of @option{lo}.\n\nDefault value for @option{hi} is 64*12, default value for @option{lo} is\n64*5, and default value for @option{frac} is 0.33.\n\n\n"}]},{"filtergroup":["negate"],"info":"\nNegate (invert) the input video.\n\nIt accepts the following option:\n\n","options":[{"names":["negate_alpha"],"info":"With value 1, it negates the alpha component, if present. Default value is 0.\n\n@anchor{nlmeans}\n"}]},{"filtergroup":["nlmeans"],"info":"\nDenoise frames using Non-Local Means algorithm.\n\nEach pixel is adjusted by looking for other pixels with similar contexts. This\ncontext similarity is defined by comparing their surrounding patches of size\n@option{p}x@option{p}. Patches are searched in an area of @option{r}x@option{r}\naround the pixel.\n\nNote that the research area defines centers for patches, which means some\npatches will be made of pixels outside that research area.\n\nThe filter accepts the following options.\n\n","options":[{"names":["s"],"info":"Set denoising strength. Default is 1.0. Must be in range [1.0, 30.0].\n\n"},{"names":["p"],"info":"Set patch size. Default is 7. 
Must be an odd number in range [0, 99].\n\n"},{"names":["pc"],"info":"Same as @option{p} but for chroma planes.\n\nThe default value is @var{0} and means automatic.\n\n"},{"names":["r"],"info":"Set research size. Default is 15. Must be an odd number in range [0, 99].\n\n"},{"names":["rc"],"info":"Same as @option{r} but for chroma planes.\n\nThe default value is @var{0} and means automatic.\n\n"}]},{"filtergroup":["nnedi"],"info":"\nDeinterlace video using neural network edge directed interpolation.\n\nThis filter accepts the following options:\n\n","options":[{"names":["weights"],"info":"Mandatory option; without the binary weights file the filter cannot work.\nCurrently the file can be found here:\nhttps://github.com/dubhater/vapoursynth-nnedi3/blob/master/src/nnedi3_weights.bin\n\n"},{"names":["deint"],"info":"Set which frames to deinterlace, by default it is @code{all}.\nCan be @code{all} or @code{interlaced}.\n\n"},{"names":["field"],"info":"Set mode of operation.\n\nCan be one of the following:\n\n@item af\nUse frame flags, both fields.\n@item a\nUse frame flags, single field.\n@item t\nUse top field only.\n@item b\nUse bottom field only.\n@item tf\nUse both fields, top first.\n@item bf\nUse both fields, bottom first.\n\n"},{"names":["planes"],"info":"Set which planes to process; by default all planes are processed.\n\n"},{"names":["nsize"],"info":"Set size of local neighborhood around each pixel, used by the predictor neural\nnetwork.\n\nCan be one of the following:\n\n@item s8x6\n@item s16x6\n@item s32x6\n@item s48x6\n@item s8x4\n@item s16x4\n@item s32x4\n\n"},{"names":["nns"],"info":"Set the number of neurons in predictor neural network.\nCan be one of the following:\n\n@item n16\n@item n32\n@item n64\n@item n128\n@item n256\n\n"},{"names":["qual"],"info":"Controls the number of different neural network predictions that are blended\ntogether to compute the final output value. 
Can be @code{fast}, default or\n@code{slow}.\n\n"},{"names":["etype"],"info":"Set which set of weights to use in the predictor.\nCan be one of the following:\n\n@item a\nweights trained to minimize absolute error\n@item s\nweights trained to minimize squared error\n\n"},{"names":["pscrn"],"info":"Controls whether or not the prescreener neural network is used to decide\nwhich pixels should be processed by the predictor neural network and which\ncan be handled by simple cubic interpolation.\nThe prescreener is trained to know whether cubic interpolation will be\nsufficient for a pixel or whether it should be predicted by the predictor nn.\nThe computational complexity of the prescreener nn is much less than that of\nthe predictor nn. Since most pixels can be handled by cubic interpolation,\nusing the prescreener generally results in much faster processing.\nThe prescreener is pretty accurate, so the difference between using it and not\nusing it is almost always unnoticeable.\n\nCan be one of the following:\n\n@item none\n@item original\n@item new\n\nDefault is @code{new}.\n\n"},{"names":["fapprox"],"info":"Set various debugging flags.\n\n"}]},{"filtergroup":["noformat"],"info":"\nForce libavfilter not to use any of the specified pixel formats for the\ninput to the next filter.\n\nIt accepts the following parameters:\n","options":[{"names":["pix_fmts"],"info":"A '|'-separated list of pixel format names, such as\n\"pix_fmts=yuv420p|monow|rgb24\".\n\n\n","examples":"@subsection Examples\n\n@itemize\n@item\nForce libavfilter to use a format different from @var{yuv420p} for the\ninput to the vflip filter:\n@example\nnoformat=pix_fmts=yuv420p,vflip\n@end example\n\n@item\nConvert the input video to any of the formats not contained in the list:\n@example\nnoformat=yuv420p|yuv444p|yuv410p\n@end example\n@end itemize\n\n"}]},{"filtergroup":["noise"],"info":"\nAdd noise on video input 
frame.\n\n","options":[{"names":["all_seed"],"info":""},{"names":["c0_seed"],"info":""},{"names":["c1_seed"],"info":""},{"names":["c2_seed"],"info":""},{"names":["c3_seed"],"info":"Set noise seed for specific pixel component or all pixel components in case\nof @var{all_seed}. Default value is @code{123457}.\n\n"},{"names":["all_strength","alls"],"info":""},{"names":["c0_strength","c0s"],"info":""},{"names":["c1_strength","c1s"],"info":""},{"names":["c2_strength","c2s"],"info":""},{"names":["c3_strength","c3s"],"info":"Set noise strength for specific pixel component or all pixel components in case\n@var{all_strength}. Default value is @code{0}. Allowed range is [0, 100].\n\n"},{"names":["all_flags","allf"],"info":""},{"names":["c0_flags","c0f"],"info":""},{"names":["c1_flags","c1f"],"info":""},{"names":["c2_flags","c2f"],"info":""},{"names":["c3_flags","c3f"],"info":"Set pixel component flags or set flags for all components if @var{all_flags}.\nAvailable values for component flags are:\n@item a\naveraged temporal noise (smoother)\n@item p\nmix random noise with a (semi)regular pattern\n@item t\ntemporal noise (noise pattern changes between frames)\n@item u\nuniform noise (gaussian otherwise)\n\n","examples":"@subsection Examples\n\nAdd temporal and uniform noise to input video:\n@example\nnoise=alls=20:allf=t+u\n@end example\n\n"}]},{"filtergroup":["normalize"],"info":"\nNormalize RGB video (aka histogram stretching, contrast stretching).\nSee: https://en.wikipedia.org/wiki/Normalization_(image_processing)\n\nFor each channel of each frame, the filter computes the input range and maps\nit linearly to the user-specified output range. The output range defaults\nto the full dynamic range from pure black to pure white.\n\nTemporal smoothing can be used on the input range to reduce flickering (rapid\nchanges in brightness) caused when small dark or bright objects enter or leave\nthe scene. 
This is similar to the auto-exposure (automatic gain control) on a\nvideo camera, and, like a video camera, it may cause a period of over- or\nunder-exposure of the video.\n\nThe R,G,B channels can be normalized independently, which may cause some\ncolor shifting, or linked together as a single channel, which prevents\ncolor shifting. Linked normalization preserves hue. Independent normalization\ndoes not, so it can be used to remove some color casts. Independent and linked\nnormalization can be combined in any ratio.\n\nThe normalize filter accepts the following options:\n\n","options":[{"names":["blackpt"],"info":""},{"names":["whitept"],"info":"Colors which define the output range. The minimum input value is mapped to\nthe @var{blackpt}. The maximum input value is mapped to the @var{whitept}.\nThe defaults are black and white respectively. Specifying white for\n@var{blackpt} and black for @var{whitept} will give color-inverted,\nnormalized video. Shades of grey can be used to reduce the dynamic range\n(contrast). Specifying saturated colors here can create some interesting\neffects.\n\n"},{"names":["smoothing"],"info":"The number of previous frames to use for temporal smoothing. The input range\nof each channel is smoothed using a rolling average over the current frame\nand the @var{smoothing} previous frames. The default is 0 (no temporal\nsmoothing).\n\n"},{"names":["independence"],"info":"Controls the ratio of independent (color shifting) channel normalization to\nlinked (color preserving) normalization. 0.0 is fully linked, 1.0 is fully\nindependent. Defaults to 1.0 (fully independent).\n\n"},{"names":["strength"],"info":"Overall strength of the filter. 1.0 is full strength. 0.0 is a rather\nexpensive no-op. 
Defaults to 1.0 (full strength).\n\n\n@subsection Commands\nThis filter supports same @ref{commands} as options, excluding @var{smoothing} option.\nThe command accepts the same syntax of the corresponding option.\n\nIf the specified expression is not valid, it is kept at its current\nvalue.\n\n","examples":"@subsection Examples\n\nStretch video contrast to use the full dynamic range, with no temporal\nsmoothing; may flicker depending on the source content:\n@example\nnormalize=blackpt=black:whitept=white:smoothing=0\n@end example\n\nAs above, but with 50 frames of temporal smoothing; flicker should be\nreduced, depending on the source content:\n@example\nnormalize=blackpt=black:whitept=white:smoothing=50\n@end example\n\nAs above, but with hue-preserving linked channel normalization:\n@example\nnormalize=blackpt=black:whitept=white:smoothing=50:independence=0\n@end example\n\nAs above, but with half strength:\n@example\nnormalize=blackpt=black:whitept=white:smoothing=50:independence=0:strength=0.5\n@end example\n\nMap the darkest input color to red, the brightest input color to cyan:\n@example\nnormalize=blackpt=red:whitept=cyan\n@end example\n\n"}]},{"filtergroup":["null"],"info":"\nPass the video source unchanged to the output.\n\n","options":[]},{"filtergroup":["ocr"],"info":"Optical Character Recognition\n\nThis filter uses Tesseract for optical character recognition. To enable\ncompilation of this filter, you need to configure FFmpeg with\n@code{--enable-libtesseract}.\n\nIt accepts the following options:\n\n","options":[{"names":["datapath"],"info":"Set datapath to tesseract data. 
Default is to use whatever was\nset at installation.\n\n"},{"names":["language"],"info":"Set language, default is \"eng\".\n\n"},{"names":["whitelist"],"info":"Set character whitelist.\n\n"},{"names":["blacklist"],"info":"Set character blacklist.\n\nThe filter exports recognized text as the frame metadata @code{lavfi.ocr.text}.\nThe filter exports confidence of recognized words as the frame metadata @code{lavfi.ocr.confidence}.\n\n"}]},{"filtergroup":["ocv"],"info":"\nApply a video transform using libopencv.\n\nTo enable this filter, install the libopencv library and headers and\nconfigure FFmpeg with @code{--enable-libopencv}.\n\nIt accepts the following parameters:\n\n","options":[{"names":["filter_name"],"info":"The name of the libopencv filter to apply.\n\n"},{"names":["filter_params"],"info":"The parameters to pass to the libopencv filter. If not specified, the default\nvalues are assumed.\n\n\nRefer to the official libopencv documentation for more precise\ninformation:\n@url{http://docs.opencv.org/master/modules/imgproc/doc/filtering.html}\n\nSeveral libopencv filters are supported; see the following subsections.\n\n@anchor{dilate}\n@subsection dilate\n\nDilate an image by using a specific structuring element.\nIt corresponds to the libopencv function @code{cvDilate}.\n\nIt accepts the parameters: @var{struct_el}|@var{nb_iterations}.\n\n@var{struct_el} represents a structuring element, and has the syntax:\n@var{cols}x@var{rows}+@var{anchor_x}x@var{anchor_y}/@var{shape}\n\n@var{cols} and @var{rows} represent the number of columns and rows of\nthe structuring element, @var{anchor_x} and @var{anchor_y} the anchor\npoint, and @var{shape} the shape for the structuring element. @var{shape}\nmust be \"rect\", \"cross\", \"ellipse\", or \"custom\".\n\nIf the value for @var{shape} is \"custom\", it must be followed by a\nstring of the form \"=@var{filename}\". 
The file with name\n@var{filename} is assumed to represent a binary image, with each\nprintable character corresponding to a bright pixel. When a custom\n@var{shape} is used, @var{cols} and @var{rows} are ignored, and the number\nof columns and rows of the read file are assumed instead.\n\nThe default value for @var{struct_el} is \"3x3+0x0/rect\".\n\n@var{nb_iterations} specifies the number of times the transform is\napplied to the image, and defaults to 1.\n\nSome examples:\n@example\n# Use the default values\nocv=dilate\n\n# Dilate using a structuring element with a 5x5 cross, iterating two times\nocv=filter_name=dilate:filter_params=5x5+2x2/cross|2\n\n# Read the shape from the file diamond.shape, iterating two times.\n# The file diamond.shape may contain a pattern of characters like this\n# *\n# ***\n# *****\n# ***\n# *\n# The specified columns and rows are ignored\n# but the anchor point coordinates are not\nocv=dilate:0x0+2x2/custom=diamond.shape|2\n@end example\n\n@subsection erode\n\nErode an image by using a specific structuring element.\nIt corresponds to the libopencv function @code{cvErode}.\n\nIt accepts the parameters: @var{struct_el}:@var{nb_iterations},\nwith the same syntax and semantics as the @ref{dilate} filter.\n\n@subsection smooth\n\nSmooth the input video.\n\nThe filter takes the following parameters:\n@var{type}|@var{param1}|@var{param2}|@var{param3}|@var{param4}.\n\n@var{type} is the type of smooth filter to apply, and must be one of\nthe following values: \"blur\", \"blur_no_scale\", \"median\", \"gaussian\",\nor \"bilateral\". The default value is \"gaussian\".\n\nThe meaning of @var{param1}, @var{param2}, @var{param3}, and @var{param4}\ndepends on the smooth type. @var{param1} and\n@var{param2} accept integer positive values or 0. @var{param3} and\n@var{param4} accept floating point values.\n\nThe default value for @var{param1} is 3. 
The default value for the\nother parameters is 0.\n\nThese parameters correspond to the parameters assigned to the\nlibopencv function @code{cvSmooth}.\n\n"}]},{"filtergroup":["oscilloscope"],"info":"\n2D Video Oscilloscope.\n\nUseful to measure spatial impulse, step responses, chroma delays, etc.\n\nIt accepts the following parameters:\n\n","options":[{"names":["x"],"info":"Set scope center x position.\n\n"},{"names":["y"],"info":"Set scope center y position.\n\n"},{"names":["s"],"info":"Set scope size, relative to frame diagonal.\n\n"},{"names":["t"],"info":"Set scope tilt/rotation.\n\n"},{"names":["o"],"info":"Set trace opacity.\n\n"},{"names":["tx"],"info":"Set trace center x position.\n\n"},{"names":["ty"],"info":"Set trace center y position.\n\n"},{"names":["tw"],"info":"Set trace width, relative to width of frame.\n\n"},{"names":["th"],"info":"Set trace height, relative to height of frame.\n\n"},{"names":["c"],"info":"Set which components to trace. By default it traces first three components.\n\n"},{"names":["g"],"info":"Draw trace grid. By default is enabled.\n\n"},{"names":["st"],"info":"Draw some statistics. By default is enabled.\n\n"},{"names":["sc"],"info":"Draw scope. 
By default is enabled.\n\n@subsection Commands\nThis filter supports same @ref{commands} as options.\nThe command accepts the same syntax of the corresponding option.\n\nIf the specified expression is not valid, it is kept at its current\nvalue.\n\n","examples":"@subsection Examples\n\n@itemize\n@item\nInspect full first row of video frame.\n@example\noscilloscope=x=0.5:y=0:s=1\n@end example\n\n@item\nInspect full last row of video frame.\n@example\noscilloscope=x=0.5:y=1:s=1\n@end example\n\n@item\nInspect full 5th line of video frame of height 1080.\n@example\noscilloscope=x=0.5:y=5/1080:s=1\n@end example\n\n@item\nInspect full last column of video frame.\n@example\noscilloscope=x=1:y=0.5:s=1:t=1\n@end example\n\n@end itemize\n\n@anchor{overlay}\n"}]},{"filtergroup":["overlay"],"info":"\nOverlay one video on top of another.\n\nIt takes two inputs and has one output. The first input is the \"main\"\nvideo on which the second input is overlaid.\n\nIt accepts the following parameters:\n\nA description of the accepted options follows.\n\n","options":[{"names":["x"],"info":""},{"names":["y"],"info":"Set the expression for the x and y coordinates of the overlaid video\non the main video. Default value is \"0\" for both expressions. 
In case\nthe expression is invalid, it is set to a huge value (meaning that the\noverlay will not be displayed within the output visible area).\n\n"},{"names":["eof_action"],"info":"See @ref{framesync}.\n\n"},{"names":["eval"],"info":"Set when the expressions for @option{x}, and @option{y} are evaluated.\n\nIt accepts the following values:\n@item init\nonly evaluate expressions once during the filter initialization or\nwhen a command is processed\n\n@item frame\nevaluate expressions for each incoming frame\n\nDefault value is @samp{frame}.\n\n"},{"names":["shortest"],"info":"See @ref{framesync}.\n\n"},{"names":["format"],"info":"Set the format for the output video.\n\nIt accepts the following values:\n@item yuv420\nforce YUV420 output\n\n@item yuv422\nforce YUV422 output\n\n@item yuv444\nforce YUV444 output\n\n@item rgb\nforce packed RGB output\n\n@item gbrp\nforce planar RGB output\n\n@item auto\nautomatically pick format\n\nDefault value is @samp{yuv420}.\n\n"},{"names":["repeatlast"],"info":"See @ref{framesync}.\n\n"},{"names":["alpha"],"info":"Set format of alpha of the overlaid video, it can be @var{straight} or\n@var{premultiplied}. Default is @var{straight}.\n\nThe @option{x}, and @option{y} expressions can contain the following\nparameters.\n\n@item main_w, W\n@item main_h, H\nThe main input width and height.\n\n@item overlay_w, w\n@item overlay_h, h\nThe overlay input width and height.\n\n@item x\n@item y\nThe computed values for @var{x} and @var{y}. They are evaluated for\neach new frame.\n\n@item hsub\n@item vsub\nhorizontal and vertical chroma subsample values of the output\nformat. For example for the pixel format \"yuv422p\" @var{hsub} is 2 and\n@var{vsub} is 1.\n\n@item n\nthe number of input frame, starting from 0\n\n@item pos\nthe position in the file of the input frame, NAN if unknown\n\n@item t\nThe timestamp, expressed in seconds. 
It's NAN if the input timestamp is unknown.\n\n\nThis filter also supports the @ref{framesync} options.\n\nNote that the @var{n}, @var{pos}, @var{t} variables are available only\nwhen evaluation is done @emph{per frame}, and will evaluate to NAN\nwhen @option{eval} is set to @samp{init}.\n\nBe aware that frames are taken from each input video in timestamp\norder, hence, if their initial timestamps differ, it is a good idea\nto pass the two inputs through a @var{setpts=PTS-STARTPTS} filter to\nhave them begin in the same zero timestamp, as the example for\nthe @var{movie} filter does.\n\nYou can chain together more overlays but you should test the\nefficiency of such approach.\n\n@subsection Commands\n\nThis filter supports the following commands:\n@item x\n@item y\nModify the x and y of the overlay input.\nThe command accepts the same syntax of the corresponding option.\n\nIf the specified expression is not valid, it is kept at its current\nvalue.\n\n","examples":"@subsection Examples\n\n@itemize\n@item\nDraw the overlay at 10 pixels from the bottom right corner of the main\nvideo:\n@example\noverlay=main_w-overlay_w-10:main_h-overlay_h-10\n@end example\n\nUsing named options the example above becomes:\n@example\noverlay=x=main_w-overlay_w-10:y=main_h-overlay_h-10\n@end example\n\n@item\nInsert a transparent PNG logo in the bottom left corner of the input,\nusing the @command{ffmpeg} tool with the @code{-filter_complex} option:\n@example\nffmpeg -i input -i logo -filter_complex 'overlay=10:main_h-overlay_h-10' output\n@end example\n\n@item\nInsert 2 different transparent PNG logos (second logo on bottom\nright corner) using the @command{ffmpeg} tool:\n@example\nffmpeg -i input -i logo1 -i logo2 -filter_complex 'overlay=x=10:y=H-h-10,overlay=x=W-w-10:y=H-h-10' output\n@end example\n\n@item\nAdd a transparent color layer on top of the main video; @code{WxH}\nmust specify the size of the main input to the overlay filter:\n@example\ncolor=color=red@@.3:size=WxH [over]; 
[in][over] overlay [out]\n@end example\n\n@item\nPlay an original video and a filtered version (here with the deshake\nfilter) side by side using the @command{ffplay} tool:\n@example\nffplay input.avi -vf 'split[a][b]; [a]pad=iw*2:ih[src]; [b]deshake[filt]; [src][filt]overlay=w'\n@end example\n\nThe above command is the same as:\n@example\nffplay input.avi -vf 'split[b], pad=iw*2[src], [b]deshake, [src]overlay=w'\n@end example\n\n@item\nMake a sliding overlay that appears from the left and moves to the right in\nthe top part of the screen, starting at time 2:\n@example\noverlay=x='if(gte(t,2), -w+(t-2)*20, NAN)':y=0\n@end example\n\n@item\nCompose output by putting two input videos side by side:\n@example\nffmpeg -i left.avi -i right.avi -filter_complex \"\nnullsrc=size=200x100 [background];\n[0:v] setpts=PTS-STARTPTS, scale=100x100 [left];\n[1:v] setpts=PTS-STARTPTS, scale=100x100 [right];\n[background][left] overlay=shortest=1 [background+left];\n[background+left][right] overlay=shortest=1:x=100 [left+right]\n\"\n@end example\n\n@item\nMask 10-20 seconds of a video by applying the delogo filter to a section:\n@example\nffmpeg -i test.avi -codec:v:0 wmv2 -ar 11025 -b:v 9000k\n-vf '[in]split[split_main][split_delogo];[split_delogo]trim=start=360:end=371,delogo=0:0:640:480[delogoed];[split_main][delogoed]overlay=eof_action=pass[out]'\nmasked.avi\n@end example\n\n@item\nChain several overlays in cascade:\n@example\nnullsrc=s=200x200 [bg];\ntestsrc=s=100x100, split=4 [in0][in1][in2][in3];\n[in0] lutrgb=r=0, [bg] overlay=0:0 [mid0];\n[in1] lutrgb=g=0, [mid0] overlay=100:0 [mid1];\n[in2] lutrgb=b=0, [mid1] overlay=0:100 [mid2];\n[in3] null, [mid2] overlay=100:100 [out0]\n@end example\n\n@end itemize\n\n"}]},{"filtergroup":["owdenoise"],"info":"\nApply Overcomplete Wavelet denoiser.\n\n","options":[{"names":["depth"],"info":"Set depth.\n\nLarger depth values will denoise lower frequency components more, but\nslow down filtering.\n\nMust be an int in the range 8-16, default is 
@code{8}.\n\n"},{"names":["luma_strength","ls"],"info":"Set luma strength.\n\nMust be a double value in the range 0-1000, default is @code{1.0}.\n\n"},{"names":["chroma_strength","cs"],"info":"Set chroma strength.\n\nMust be a double value in the range 0-1000, default is @code{1.0}.\n\n@anchor{pad}\n"}]},{"filtergroup":["pad"],"info":"\nAdd paddings to the input image, and place the original input at the\nprovided @var{x}, @var{y} coordinates.\n\nIt accepts the following parameters:\n\n","options":[{"names":["width","w"],"info":""},{"names":["height","h"],"info":"Specify an expression for the size of the output image with the\npaddings added. If the value for @var{width} or @var{height} is 0, the\ncorresponding input size is used for the output.\n\nThe @var{width} expression can reference the value set by the\n@var{height} expression, and vice versa.\n\nThe default value of @var{width} and @var{height} is 0.\n\n"},{"names":["x"],"info":""},{"names":["y"],"info":"Specify the offsets to place the input image at within the padded area,\nwith respect to the top/left border of the output image.\n\nThe @var{x} expression can reference the value set by the @var{y}\nexpression, and vice versa.\n\nThe default value of @var{x} and @var{y} is 0.\n\nIf @var{x} or @var{y} evaluate to a negative number, they'll be changed\nso the input image is centered on the padded area.\n\n"},{"names":["color"],"info":"Specify the color of the padded area. 
For the syntax of this option,\ncheck the @ref{color syntax,,\"Color\" section in the ffmpeg-utils\nmanual,ffmpeg-utils}.\n\nThe default value of @var{color} is \"black\".\n\n"},{"names":["eval"],"info":"Specify when to evaluate the @var{width}, @var{height}, @var{x} and @var{y} expressions.\n\nIt accepts the following values:\n\n@item init\nOnly evaluate expressions once during the filter initialization or when\na command is processed.\n\n@item frame\nEvaluate expressions for each incoming frame.\n\n\nDefault value is @samp{init}.\n\n"},{"names":["aspect"],"info":"Pad to an aspect ratio instead of a resolution.\n\n\nThe values for the @var{width}, @var{height}, @var{x}, and @var{y}\noptions are expressions containing the following constants:\n\n@item in_w\n@item in_h\nThe input video width and height.\n\n@item iw\n@item ih\nThese are the same as @var{in_w} and @var{in_h}.\n\n@item out_w\n@item out_h\nThe output width and height (the size of the padded area), as\nspecified by the @var{width} and @var{height} expressions.\n\n@item ow\n@item oh\nThese are the same as @var{out_w} and @var{out_h}.\n\n@item x\n@item y\nThe x and y offsets as specified by the @var{x} and @var{y}\nexpressions, or NAN if not yet specified.\n\n@item a\nsame as @var{iw} / @var{ih}\n\n@item sar\ninput sample aspect ratio\n\n@item dar\ninput display aspect ratio, it is the same as (@var{iw} / @var{ih}) * @var{sar}\n\n@item hsub\n@item vsub\nThe horizontal and vertical chroma subsample values. For example for the\npixel format \"yuv422p\" @var{hsub} is 2 and @var{vsub} is 1.\n\n","examples":"@subsection Examples\n\n@itemize\n@item\nAdd paddings with the color \"violet\" to the input video. 
The output video\nsize is 640x480, and the top-left corner of the input video is placed at\ncolumn 0, row 40\n@example\npad=640:480:0:40:violet\n@end example\n\nThe example above is equivalent to the following command:\n@example\npad=width=640:height=480:x=0:y=40:color=violet\n@end example\n\n@item\nPad the input to get an output with dimensions increased by 3/2,\nand put the input video at the center of the padded area:\n@example\npad=\"3/2*iw:3/2*ih:(ow-iw)/2:(oh-ih)/2\"\n@end example\n\n@item\nPad the input to get a squared output with size equal to the maximum\nvalue between the input width and height, and put the input video at\nthe center of the padded area:\n@example\npad=\"max(iw\\,ih):ow:(ow-iw)/2:(oh-ih)/2\"\n@end example\n\n@item\nPad the input to get a final w/h ratio of 16:9:\n@example\npad=\"ih*16/9:ih:(ow-iw)/2:(oh-ih)/2\"\n@end example\n\n@item\nIn case of anamorphic video, in order to set the output display aspect\ncorrectly, it is necessary to use @var{sar} in the expression,\naccording to the relation:\n@example\n(ih * X / ih) * sar = output_dar\nX = output_dar / sar\n@end example\n\nThus the previous example needs to be modified to:\n@example\npad=\"ih*16/9/sar:ih:(ow-iw)/2:(oh-ih)/2\"\n@end example\n\n@item\nDouble the output size and put the input video in the bottom-right\ncorner of the output padded area:\n@example\npad=\"2*iw:2*ih:ow-iw:oh-ih\"\n@end example\n@end itemize\n\n@anchor{palettegen}\n"}]},{"filtergroup":["palettegen"],"info":"\nGenerate one palette for a whole video stream.\n\nIt accepts the following options:\n\n","options":[{"names":["max_colors"],"info":"Set the maximum number of colors to quantize in the palette.\nNote: the palette will still contain 256 colors; the unused palette entries\nwill be black.\n\n"},{"names":["reserve_transparent"],"info":"Create a palette of 255 colors maximum and reserve the last one for\ntransparency. 
Reserving the transparency color is useful for GIF optimization.\nIf not set, the maximum number of colors in the palette will be 256. You probably want\nto disable this option for a standalone image.\nSet by default.\n\n"},{"names":["transparency_color"],"info":"Set the color that will be used as background for transparency.\n\n"},{"names":["stats_mode"],"info":"Set statistics mode.\n\nIt accepts the following values:\n@item full\nCompute full frame histograms.\n@item diff\nCompute histograms only for the part that differs from the previous frame. This\nmight be relevant to give more importance to the moving part of your input if\nthe background is static.\n@item single\nCompute new histogram for each frame.\n\nDefault value is @var{full}.\n\nThe filter also exports the frame metadata @code{lavfi.color_quant_ratio}\n(@code{nb_color_in / nb_color_out}) which you can use to evaluate the degree of\ncolor quantization of the palette. This information is also visible at\n@var{info} logging level.\n\n","examples":"@subsection Examples\n\n@itemize\n@item\nGenerate a representative palette of a given video using @command{ffmpeg}:\n@example\nffmpeg -i input.mkv -vf palettegen palette.png\n@end example\n@end itemize\n\n"}]},{"filtergroup":["paletteuse"],"info":"\nUse a palette to downsample an input video stream.\n\nThe filter takes two inputs: one video stream and a palette. The palette must\nbe a 256-pixel image.\n\nIt accepts the following options:\n\n","options":[{"names":["dither"],"info":"Select dithering mode. 
Available algorithms are:\n@item bayer\nOrdered 8x8 Bayer dithering (deterministic)\n@item heckbert\nDithering as defined by Paul Heckbert in 1982 (simple error diffusion).\nNote: this dithering is sometimes considered \"wrong\" and is included as a\nreference.\n@item floyd_steinberg\nFloyd and Steinberg dithering (error diffusion)\n@item sierra2\nFrankie Sierra dithering v2 (error diffusion)\n@item sierra2_4a\nFrankie Sierra dithering v2 \"Lite\" (error diffusion)\n\nDefault is @var{sierra2_4a}.\n\n"},{"names":["bayer_scale"],"info":"When @var{bayer} dithering is selected, this option defines the scale of the\npattern (how much the crosshatch pattern is visible). A low value means more\nvisible pattern for less banding, and a higher value means less visible pattern\nat the cost of more banding.\n\nThe option must be an integer value in the range [0,5]. Default is @var{2}.\n\n"},{"names":["diff_mode"],"info":"If set, defines the zone to process.\n\n@item rectangle\nOnly the changing rectangle will be reprocessed. This is similar to the GIF\ncropping/offsetting compression mechanism. This option can be useful for speed\nif only a part of the image is changing, and has use cases such as limiting the\nscope of the error diffusion @option{dither} to the rectangle that bounds the\nmoving scene (it leads to more deterministic output if the scene doesn't change\nmuch, and as a result less moving noise and better GIF compression).\n\nDefault is @var{none}.\n\n"},{"names":["new"],"info":"Take a new palette for each output frame.\n\n"},{"names":["alpha_threshold"],"info":"Sets the alpha threshold for transparency. Alpha values above this threshold\nwill be treated as completely opaque, and values below this threshold will be\ntreated as completely transparent.\n\nThe option must be an integer value in the range [0,255]. 
Default is @var{128}.\n\n","examples":"@subsection Examples\n\n@itemize\n@item\nUse a palette (generated for example with @ref{palettegen}) to encode a GIF\nusing @command{ffmpeg}:\n@example\nffmpeg -i input.mkv -i palette.png -lavfi paletteuse output.gif\n@end example\n@end itemize\n\n"}]},{"filtergroup":["perspective"],"info":"\nCorrect perspective of video not recorded perpendicular to the screen.\n\nA description of the accepted parameters follows.\n\n","options":[{"names":["x0"],"info":""},{"names":["y0"],"info":""},{"names":["x1"],"info":""},{"names":["y1"],"info":""},{"names":["x2"],"info":""},{"names":["y2"],"info":""},{"names":["x3"],"info":""},{"names":["y3"],"info":"Set coordinates expression for top left, top right, bottom left and bottom right corners.\nDefault values are @code{0:0:W:0:0:H:W:H} with which perspective will remain unchanged.\nIf the @code{sense} option is set to @code{source}, then the specified points will be sent\nto the corners of the destination. If the @code{sense} option is set to @code{destination},\nthen the corners of the source will be sent to the specified coordinates.\n\nThe expressions can use the following variables:\n\n@item W\n@item H\nthe width and height of video frame.\n@item in\nInput frame count.\n@item on\nOutput frame count.\n\n"},{"names":["interpolation"],"info":"Set interpolation for perspective correction.\n\nIt accepts the following values:\n@item linear\n@item cubic\n\nDefault value is @samp{linear}.\n\n"},{"names":["sense"],"info":"Set interpretation of coordinate options.\n\nIt accepts the following values:\n@item 0, source\n\nSend point in the source specified by the given coordinates to\nthe corners of the destination.\n\n@item 1, destination\n\nSend the corners of the source to the point in the destination specified\nby the given coordinates.\n\nDefault value is @samp{source}.\n\n"},{"names":["eval"],"info":"Set when the expressions for coordinates @option{x0,y0,...x3,y3} are evaluated.\n\nIt accepts the 
following values:\n@item init\nonly evaluate expressions once during the filter initialization or\nwhen a command is processed\n\n@item frame\nevaluate expressions for each incoming frame\n\nDefault value is @samp{init}.\n\n"}]},{"filtergroup":["phase"],"info":"\nDelay interlaced video by one field time so that the field order changes.\n\nThe intended use is to fix PAL movies that have been captured with the\nopposite field order to the film-to-video transfer.\n\nA description of the accepted parameters follows.\n\n","options":[{"names":["mode"],"info":"Set phase mode.\n\nIt accepts the following values:\n@item t\nCapture field order top-first, transfer bottom-first.\nFilter will delay the bottom field.\n\n@item b\nCapture field order bottom-first, transfer top-first.\nFilter will delay the top field.\n\n@item p\nCapture and transfer with the same field order. This mode only exists\nfor the documentation of the other options to refer to, but if you\nactually select it, the filter will faithfully do nothing.\n\n@item a\nCapture field order determined automatically by field flags, transfer\nopposite.\nFilter selects among @samp{t} and @samp{b} modes on a frame by frame\nbasis using field flags. If no field information is available,\nthen this works just like @samp{u}.\n\n@item u\nCapture unknown or varying, transfer opposite.\nFilter selects among @samp{t} and @samp{b} on a frame by frame basis by\nanalyzing the images and selecting the alternative that produces best\nmatch between the fields.\n\n@item T\nCapture top-first, transfer unknown or varying.\nFilter selects among @samp{t} and @samp{p} using image analysis.\n\n@item B\nCapture bottom-first, transfer unknown or varying.\nFilter selects among @samp{b} and @samp{p} using image analysis.\n\n@item A\nCapture determined by field flags, transfer unknown or varying.\nFilter selects among @samp{t}, @samp{b} and @samp{p} using field flags and\nimage analysis. 
If no field information is available, then this works just\nlike @samp{U}. This is the default mode.\n\n@item U\nBoth capture and transfer unknown or varying.\nFilter selects among @samp{t}, @samp{b} and @samp{p} using image analysis only.\n\n"}]},{"filtergroup":["photosensitivity"],"info":"Reduce various flashes in video, to help users with epilepsy.\n\nIt accepts the following options:\n","options":[{"names":["frames","f"],"info":"Set how many frames to use when filtering. Default is 30.\n\n"},{"names":["threshold","t"],"info":"Set detection threshold factor. Default is 1.\nLower is stricter.\n\n"},{"names":["skip"],"info":"Set how many pixels to skip when sampling frames. Default is 1.\nAllowed range is from 1 to 1024.\n\n"},{"names":["bypass"],"info":"Leave frames unchanged. Default is disabled.\n\n"}]},{"filtergroup":["pixdesctest"],"info":"\nPixel format descriptor test filter, mainly useful for internal\ntesting. The output video should be equal to the input video.\n\nFor example:\n@example\nformat=monow, pixdesctest\n@end example\n\ncan be used to test the monowhite pixel format descriptor definition.\n\n","options":[]},{"filtergroup":["pixscope"],"info":"\nDisplay sample values of color channels. Mainly useful for checking color\nand levels. Minimum supported resolution is 640x480.\n\nThe filter accepts the following options:\n\n","options":[{"names":["x"],"info":"Set scope X position, relative offset on X axis.\n\n"},{"names":["y"],"info":"Set scope Y position, relative offset on Y axis.\n\n"},{"names":["w"],"info":"Set scope width.\n\n"},{"names":["h"],"info":"Set scope height.\n\n"},{"names":["o"],"info":"Set window opacity. This window also holds statistics about pixel area.\n\n"},{"names":["wx"],"info":"Set window X position, relative offset on X axis.\n\n"},{"names":["wy"],"info":"Set window Y position, relative offset on Y axis.\n\n"}]},{"filtergroup":["pp"],"info":"\nEnable the specified chain of postprocessing subfilters using libpostproc. 
This\nlibrary should be automatically selected with a GPL build (@code{--enable-gpl}).\nSubfilters must be separated by '/' and can be disabled by prepending a '-'.\nEach subfilter and some options have a short and a long name that can be used\ninterchangeably, i.e. dr/dering are the same.\n\nThe filters accept the following options:\n\n","options":[{"names":["subfilters"],"info":"Set postprocessing subfilters string.\n\nAll subfilters share common options to determine their scope:\n\n@item a/autoq\nHonor the quality commands for this subfilter.\n\n@item c/chrom\nDo chrominance filtering, too (default).\n\n@item y/nochrom\nDo luminance filtering only (no chrominance).\n\n@item n/noluma\nDo chrominance filtering only (no luminance).\n\nThese options can be appended after the subfilter name, separated by a '|'.\n\nAvailable subfilters are:\n\n@item hb/hdeblock[|difference[|flatness]]\nHorizontal deblocking filter\n@table @option\n@item difference\nDifference factor where higher values mean more deblocking (default: @code{32}).\n@item flatness\nFlatness threshold where lower values mean more deblocking (default: @code{39}).\n\n"},{"names":["vb/vdeblock[|difference[|flatness]]"],"info":"Vertical deblocking filter\n@item difference\nDifference factor where higher values mean more deblocking (default: @code{32}).\n@item flatness\nFlatness threshold where lower values mean more deblocking (default: @code{39}).\n\n"},{"names":["ha/hadeblock[|difference[|flatness]]"],"info":"Accurate horizontal deblocking filter\n@item difference\nDifference factor where higher values mean more deblocking (default: @code{32}).\n@item flatness\nFlatness threshold where lower values mean more deblocking (default: @code{39}).\n\n"},{"names":["va/vadeblock[|difference[|flatness]]"],"info":"Accurate vertical deblocking filter\n@item difference\nDifference factor where higher values mean more deblocking (default: @code{32}).\n@item flatness\nFlatness threshold where lower values mean more 
deblocking (default: @code{39}).\n\nThe horizontal and vertical deblocking filters share the difference and\nflatness values so you cannot set different horizontal and vertical\nthresholds.\n\n@item h1/x1hdeblock\nExperimental horizontal deblocking filter\n\n@item v1/x1vdeblock\nExperimental vertical deblocking filter\n\n@item dr/dering\nDeringing filter\n\n@item tn/tmpnoise[|threshold1[|threshold2[|threshold3]]], temporal noise reducer\n@table @option\n@item threshold1\nlarger -> stronger filtering\n@item threshold2\nlarger -> stronger filtering\n@item threshold3\nlarger -> stronger filtering\n\n"},{"names":["al/autolevels[:f/fullyrange]","automatic","brightness","/","contrast","correction"],"info":"@item f/fullyrange\nStretch luminance to @code{0-255}.\n\n"},{"names":["lb/linblenddeint"],"info":"Linear blend deinterlacing filter that deinterlaces the given block by\nfiltering all lines with a @code{(1 2 1)} filter.\n\n"},{"names":["li/linipoldeint"],"info":"Linear interpolating deinterlacing filter that deinterlaces the given block by\nlinearly interpolating every second line.\n\n"},{"names":["ci/cubicipoldeint"],"info":"Cubic interpolating deinterlacing filter deinterlaces the given block by\ncubically interpolating every second line.\n\n"},{"names":["md/mediandeint"],"info":"Median deinterlacing filter that deinterlaces the given block by applying a\nmedian filter to every second line.\n\n"},{"names":["fd/ffmpegdeint"],"info":"FFmpeg deinterlacing filter that deinterlaces the given block by filtering every\nsecond line with a @code{(-1 4 2 4 -1)} filter.\n\n"},{"names":["l5/lowpass5"],"info":"Vertically applied FIR lowpass deinterlacing filter that deinterlaces the given\nblock by filtering all lines with a @code{(-1 2 6 2 -1)} filter.\n\n"},{"names":["fq/forceQuant[|quantizer]"],"info":"Overrides the quantizer table from the input with the constant quantizer you\nspecify.\n@item quantizer\nQuantizer to use\n\n"},{"names":["de/default"],"info":"Default pp 
filter combination (@code{hb|a,vb|a,dr|a})\n\n"},{"names":["fa/fast"],"info":"Fast pp filter combination (@code{h1|a,v1|a,dr|a})\n\n"},{"names":["ac"],"info":"High quality pp filter combination (@code{ha|a|128|7,va|a,dr|a})\n\n","examples":"@subsection Examples\n\n@itemize\n@item\nApply horizontal and vertical deblocking, deringing and automatic\nbrightness/contrast:\n@example\npp=hb/vb/dr/al\n@end example\n\n@item\nApply default filters without brightness/contrast correction:\n@example\npp=de/-al\n@end example\n\n@item\nApply default filters and temporal denoiser:\n@example\npp=default/tmpnoise|1|2|3\n@end example\n\n@item\nApply deblocking on luminance only, and switch vertical deblocking on or off\nautomatically depending on available CPU time:\n@example\npp=hb|y/vb|a\n@end example\n@end itemize\n\n"}]},{"filtergroup":["pp7"],"info":"Apply Postprocessing filter 7. It is a variant of the @ref{spp} filter,\nsimilar to spp = 6 with 7 point DCT, where only the center sample is\nused after IDCT.\n\n","options":[{"names":["qp"],"info":"Force a constant quantization parameter. It accepts an integer in range\n0 to 63. If not set, the filter will use the QP from the video stream\n(if available).\n\n"},{"names":["mode"],"info":"Set thresholding mode. 
Available modes are:\n\n@item hard\nSet hard thresholding.\n@item soft\nSet soft thresholding (better de-ringing effect, but likely blurrier).\n@item medium\nSet medium thresholding (good results, default).\n\n"}]},{"filtergroup":["premultiply"],"info":"Apply alpha premultiply effect to input video stream using first plane\nof second stream as alpha.\n\nBoth streams must have same dimensions and same pixel format.\n\nThe filter accepts the following option:\n\n","options":[{"names":["planes"],"info":"Set which planes will be processed, unprocessed planes will be copied.\nBy default value 0xf, all planes will be processed.\n\n"},{"names":["inplace"],"info":"Do not require 2nd input for processing, instead use alpha plane from input stream.\n\n"}]},{"filtergroup":["prewitt"],"info":"Apply prewitt operator to input video stream.\n\nThe filter accepts the following option:\n\n","options":[{"names":["planes"],"info":"Set which planes will be processed, unprocessed planes will be copied.\nBy default value 0xf, all planes will be processed.\n\n"},{"names":["scale"],"info":"Set value which will be multiplied with filtered result.\n\n"},{"names":["delta"],"info":"Set value which will be added to filtered result.\n\n@anchor{program_opencl}\n"}]},{"filtergroup":["program_opencl"],"info":"\nFilter video using an OpenCL program.\n\n","options":[{"names":["source"],"info":"OpenCL program source file.\n\n"},{"names":["kernel"],"info":"Kernel name in program.\n\n"},{"names":["inputs"],"info":"Number of inputs to the filter. Defaults to 1.\n\n"},{"names":["size","s"],"info":"Size of output frames. Defaults to the same as the first input.\n\n\nThe program source file must contain a kernel function with the given name,\nwhich will be run once for each plane of the output. Each run on a plane\ngets enqueued as a separate 2D global NDRange with one work-item for each\npixel to be generated. 
The global ID offset for each work-item is therefore\nthe coordinates of a pixel in the destination image.\n\nThe kernel function needs to take the following arguments:\n@itemize\n"},{"names":[],"info":"Destination image, @var{__write_only image2d_t}.\n\nThis image will become the output; the kernel should write all of it.\n"},{"names":[],"info":"Frame index, @var{unsigned int}.\n\nThis is a counter starting from zero and increasing by one for each frame.\n"},{"names":[],"info":"Source images, @var{__read_only image2d_t}.\n\nThese are the most recent images on each input. The kernel may read from\nthem to generate the output, but they can't be written to.\n@end itemize\n\nExample programs:\n\n@itemize\n"},{"names":[],"info":"Copy the input to the output (output must be the same size as the input).\n@verbatim\n__kernel void copy(__write_only image2d_t destination,\n unsigned int index,\n __read_only image2d_t source)\n{\n const sampler_t sampler = CLK_NORMALIZED_COORDS_FALSE;\n\n int2 location = (int2)(get_global_id(0), get_global_id(1));\n\n float4 value = read_imagef(source, sampler, location);\n\n write_imagef(destination, location, value);\n}\n@end verbatim\n\n"},{"names":[],"info":"Apply a simple transformation, rotating the input by an amount increasing\nwith the index counter. 
Pixel values are linearly interpolated by the\nsampler, and the output need not have the same dimensions as the input.\n@verbatim\n__kernel void rotate_image(__write_only image2d_t dst,\n unsigned int index,\n __read_only image2d_t src)\n{\n const sampler_t sampler = (CLK_NORMALIZED_COORDS_FALSE |\n CLK_FILTER_LINEAR);\n\n float angle = (float)index / 100.0f;\n\n float2 dst_dim = convert_float2(get_image_dim(dst));\n float2 src_dim = convert_float2(get_image_dim(src));\n\n float2 dst_cen = dst_dim / 2.0f;\n float2 src_cen = src_dim / 2.0f;\n\n int2 dst_loc = (int2)(get_global_id(0), get_global_id(1));\n\n float2 dst_pos = convert_float2(dst_loc) - dst_cen;\n float2 src_pos = {\n cos(angle) * dst_pos.x - sin(angle) * dst_pos.y,\n sin(angle) * dst_pos.x + cos(angle) * dst_pos.y\n };\n src_pos = src_pos * src_dim / dst_dim;\n\n float2 src_loc = src_pos + src_cen;\n\n if (src_loc.x < 0.0f || src_loc.y < 0.0f ||\n src_loc.x > src_dim.x || src_loc.y > src_dim.y)\n write_imagef(dst, dst_loc, 0.5f);\n else\n write_imagef(dst, dst_loc, read_imagef(src, sampler, src_loc));\n}\n@end verbatim\n\n"},{"names":[],"info":"Blend two inputs together, with the amount of each input used varying\nwith the index counter.\n@verbatim\n__kernel void blend_images(__write_only image2d_t dst,\n unsigned int index,\n __read_only image2d_t src1,\n __read_only image2d_t src2)\n{\n const sampler_t sampler = (CLK_NORMALIZED_COORDS_FALSE |\n CLK_FILTER_LINEAR);\n\n float blend = (cos((float)index / 50.0f) + 1.0f) / 2.0f;\n\n int2 dst_loc = (int2)(get_global_id(0), get_global_id(1));\n int2 src1_loc = dst_loc * get_image_dim(src1) / get_image_dim(dst);\n int2 src2_loc = dst_loc * get_image_dim(src2) / get_image_dim(dst);\n\n float4 val1 = read_imagef(src1, sampler, src1_loc);\n float4 val2 = read_imagef(src2, sampler, src2_loc);\n\n write_imagef(dst, dst_loc, val1 * blend + val2 * (1.0f - blend));\n}\n@end verbatim\n\n@end itemize\n\n"}]},{"filtergroup":["pseudocolor"],"info":"\nAlter frame colors 
in video with pseudocolors.\n\nThis filter accepts the following options:\n\n","options":[{"names":["c0"],"info":"set pixel first component expression\n\n"},{"names":["c1"],"info":"set pixel second component expression\n\n"},{"names":["c2"],"info":"set pixel third component expression\n\n"},{"names":["c3"],"info":"set pixel fourth component expression, corresponds to the alpha component\n\n"},{"names":["i"],"info":"set component to use as base for altering colors\n\nEach of them specifies the expression to use for computing the lookup table for\nthe corresponding pixel component values.\n\nThe expressions can contain the following constants and functions:\n\n@item w\n@item h\nThe input width and height.\n\n@item val\nThe input value for the pixel component.\n\n@item ymin, umin, vmin, amin\nThe minimum allowed component value.\n\n@item ymax, umax, vmax, amax\nThe maximum allowed component value.\n\nAll expressions default to \"val\".\n\n","examples":"@subsection Examples\n\n@itemize\n@item\nChange too-high luma values to a gradient:\n@example\npseudocolor=\"'if(between(val,ymax,amax),lerp(ymin,ymax,(val-ymax)/(amax-ymax)),-1):if(between(val,ymax,amax),lerp(umax,umin,(val-ymax)/(amax-ymax)),-1):if(between(val,ymax,amax),lerp(vmin,vmax,(val-ymax)/(amax-ymax)),-1):-1'\"\n@end example\n@end itemize\n\n"}]},{"filtergroup":["psnr"],"info":"\nObtain the average, maximum and minimum PSNR (Peak Signal to Noise\nRatio) between two input videos.\n\nThis filter takes two input videos; the first input is\nconsidered the \"main\" source and is passed unchanged to the\noutput. The second input is used as a \"reference\" video for computing\nthe PSNR.\n\nBoth video inputs must have the same resolution and pixel format for\nthis filter to work correctly. 
Also it assumes that both inputs\nhave the same number of frames, which are compared one by one.\n\nThe obtained average PSNR is printed through the logging system.\n\nThe filter stores the accumulated MSE (mean squared error) of each\nframe, and at the end of the processing it is averaged across all frames\nequally, and the following formula is applied to obtain the PSNR:\n\n@example\nPSNR = 10*log10(MAX^2/MSE)\n@end example\n\nWhere MAX is the average of the maximum values of each component of the\nimage.\n\nThe description of the accepted parameters follows.\n\n","options":[{"names":["stats_file","f"],"info":"If specified the filter will use the named file to save the PSNR of\neach individual frame. When filename equals \"-\" the data is sent to\nstandard output.\n\n"},{"names":["stats_version"],"info":"Specifies which version of the stats file format to use. Details of\neach format are written below.\nDefault value is 1.\n\n"},{"names":["stats_add_max"],"info":"Determines whether the max value is output to the stats log.\nDefault value is 0.\nRequires stats_version >= 2. If this is set and stats_version < 2,\nthe filter will return an error.\n\nThis filter also supports the @ref{framesync} options.\n\nThe file printed if @var{stats_file} is selected, contains a sequence of\nkey/value pairs of the form @var{key}:@var{value} for each compared\ncouple of frames.\n\nIf a @var{stats_version} greater than 1 is specified, a header line precedes\nthe list of per-frame-pair stats, with key value pairs following the frame\nformat with the following parameters:\n\n@item psnr_log_version\nThe version of the log file format. 
Will match @var{stats_version}.\n\n@item fields\nA comma-separated list of the per-frame-pair parameters included in\nthe log.\n\nA description of each shown per-frame-pair parameter follows:\n\n@item n\nsequential number of the input frame, starting from 1\n\n@item mse_avg\nMean Square Error pixel-by-pixel average difference of the compared\nframes, averaged over all the image components.\n\n@item mse_y, mse_u, mse_v, mse_r, mse_g, mse_b, mse_a\nMean Square Error pixel-by-pixel average difference of the compared\nframes for the component specified by the suffix.\n\n@item psnr_y, psnr_u, psnr_v, psnr_r, psnr_g, psnr_b, psnr_a\nPeak Signal to Noise ratio of the compared frames for the component\nspecified by the suffix.\n\n@item max_avg, max_y, max_u, max_v\nMaximum allowed value for each channel, and average over all\nchannels.\n\n","examples":"@subsection Examples\n@itemize\n@item\nFor example:\n@example\nmovie=ref_movie.mpg, setpts=PTS-STARTPTS [main];\n[main][ref] psnr=\"stats_file=stats.log\" [out]\n@end example\n\nIn this example the input file being processed is compared with the\nreference file @file{ref_movie.mpg}. The PSNR of each individual frame\nis stored in @file{stats.log}.\n\n@item\nAnother example with different containers:\n@example\nffmpeg -i main.mpg -i ref.mkv -lavfi \"[0:v]settb=AVTB,setpts=PTS-STARTPTS[main];[1:v]settb=AVTB,setpts=PTS-STARTPTS[ref];[main][ref]psnr\" -f null -\n@end example\n@end itemize\n\n@anchor{pullup}\n"}]},{"filtergroup":["pullup"],"info":"\nPulldown reversal (inverse telecine) filter, capable of handling mixed\nhard-telecine, 24000/1001 fps progressive, and 30000/1001 fps progressive\ncontent.\n\nThe pullup filter is designed to take advantage of future context in making\nits decisions. 
This filter is stateless in the sense that it does not lock\nonto a pattern to follow, but it instead looks forward to the following\nfields in order to identify matches and rebuild progressive frames.\n\nTo produce content with an even framerate, insert the fps filter after\npullup, use @code{fps=24000/1001} if the input frame rate is 29.97fps,\n@code{fps=24} for 30fps and the (rare) telecined 25fps input.\n\n","options":[{"names":["jl"],"info":""},{"names":["jr"],"info":""},{"names":["jt"],"info":""},{"names":["jb"],"info":"These options set the amount of \"junk\" to ignore at the left, right, top, and\nbottom of the image, respectively. Left and right are in units of 8 pixels,\nwhile top and bottom are in units of 2 lines.\nThe default is 8 pixels on each side.\n\n"},{"names":["sb"],"info":"Set the strict breaks. Setting this option to 1 will reduce the chances of\nfilter generating an occasional mismatched frame, but it may also cause an\nexcessive number of frames to be dropped during high motion sequences.\nConversely, setting it to -1 will make filter match fields more easily.\nThis may help processing of video where there is slight blurring between\nthe fields, but may also cause there to be interlaced frames in the output.\nDefault value is @code{0}.\n\n"},{"names":["mp"],"info":"Set the metric plane to use. It accepts the following values:\n@item l\nUse luma plane.\n\n@item u\nUse chroma blue plane.\n\n@item v\nUse chroma red plane.\n\nThis option may be set to use chroma plane instead of the default luma plane\nfor doing filter's computations. 
This may improve accuracy on very clean\nsource material, but more likely will decrease accuracy, especially if there\nis chroma noise (rainbow effect) or any grayscale video.\nThe main purpose of setting @option{mp} to a chroma plane is to reduce CPU\nload and make pullup usable in realtime on slow machines.\n\nFor best results (without duplicated frames in the output file) it is\nnecessary to change the output frame rate. For example, to inverse\ntelecine NTSC input:\n@example\nffmpeg -i input -vf pullup -r 24000/1001 ...\n@end example\n\n"}]},{"filtergroup":["qp"],"info":"\nChange video quantization parameters (QP).\n\nThe filter accepts the following option:\n\n","options":[{"names":["qp"],"info":"Set expression for quantization parameter.\n\nThe expression is evaluated through the eval API and can contain, among others,\nthe following constants:\n\n@item known\n1 if index is not 129, 0 otherwise.\n\n@item qp\nSequential index starting from -129 to 128.\n\n","examples":"@subsection Examples\n\n@itemize\n@item\nSome equation like:\n@example\nqp=2+2*sin(PI*qp)\n@end example\n@end itemize\n\n"}]},{"filtergroup":["random"],"info":"\nFlush video frames from internal cache of frames into a random order.\nNo frame is discarded.\nInspired by @ref{frei0r} nervous filter.\n\n","options":[{"names":["frames"],"info":"Set size in number of frames of internal cache, in range from @code{2} to\n@code{512}. Default is @code{30}.\n\n"},{"names":["seed"],"info":"Set seed for random number generator, must be an integer included between\n@code{0} and @code{UINT32_MAX}. 
If not specified, or if explicitly set to\nless than @code{0}, the filter will try to use a good random seed on a\nbest effort basis.\n\n"}]},{"filtergroup":["readeia608"],"info":"\nRead closed captioning (EIA-608) information from the top lines of a video frame.\n\nThis filter adds frame metadata for @code{lavfi.readeia608.X.cc} and\n@code{lavfi.readeia608.X.line}, where @code{X} is the number of the identified line\nwith EIA-608 data (starting from 0). A description of each metadata value follows:\n\n","options":[{"names":["lavfi.readeia608.X.cc"],"info":"The two bytes stored as EIA-608 data (printed in hexadecimal).\n\n"},{"names":["lavfi.readeia608.X.line"],"info":"The number of the line on which the EIA-608 data was identified and read.\n\nThis filter accepts the following options:\n\n@item scan_min\nSet the line to start scanning for EIA-608 data. Default is @code{0}.\n\n@item scan_max\nSet the line to end scanning for EIA-608 data. Default is @code{29}.\n\n@item mac\nSet minimal acceptable amplitude change for sync codes detection.\nDefault is @code{0.2}. Allowed range is @code{[0.001 - 1]}.\n\n@item spw\nSet the ratio of width reserved for sync code detection.\nDefault is @code{0.27}. Allowed range is @code{[0.01 - 0.7]}.\n\n@item mhd\nSet the max peaks height difference for sync code detection.\nDefault is @code{0.1}. Allowed range is @code{[0.0 - 0.5]}.\n\n@item mpd\nSet max peaks period difference for sync code detection.\nDefault is @code{0.1}. Allowed range is @code{[0.0 - 0.5]}.\n\n@item msd\nSet the first two max start code bits differences.\nDefault is @code{0.02}. Allowed range is @code{[0.0 - 0.5]}.\n\n@item bhd\nSet the minimum ratio of bits height compared to 3rd start code bit.\nDefault is @code{0.75}. Allowed range is @code{[0.01 - 1]}.\n\n@item th_w\nSet the white color threshold. Default is @code{0.35}. Allowed range is @code{[0.1 - 1]}.\n\n@item th_b\nSet the black color threshold. Default is @code{0.15}. 
Allowed range is @code{[0.0 - 0.5]}.\n\n@item chp\nEnable checking the parity bit. In the event of a parity error, the filter will output\n@code{0x00} for that character. Default is false.\n\n@item lp\nLowpass lines prior to further processing. Default is disabled.\n\n","examples":"@subsection Examples\n\n@itemize\n@item\nOutput a csv with presentation time and the first two lines of identified EIA-608 captioning data.\n@example\nffprobe -f lavfi -i movie=captioned_video.mov,readeia608 -show_entries frame=pkt_pts_time:frame_tags=lavfi.readeia608.0.cc,lavfi.readeia608.1.cc -of csv\n@end example\n@end itemize\n\n"}]},{"filtergroup":["readvitc"],"info":"\nRead vertical interval timecode (VITC) information from the top lines of a\nvideo frame.\n\nThe filter adds frame metadata key @code{lavfi.readvitc.tc_str} with the\ntimecode value, if a valid timecode has been detected. Further metadata key\n@code{lavfi.readvitc.found} is set to 0/1 depending on whether\ntimecode data has been found or not.\n\nThis filter accepts the following options:\n\n","options":[{"names":["scan_max"],"info":"Set the maximum number of lines to scan for VITC data. If the value is set to\n@code{-1} the full video frame is scanned. Default is @code{45}.\n\n"},{"names":["thr_b"],"info":"Set the luma threshold for black. Accepts float numbers in the range [0.0,1.0],\ndefault value is @code{0.2}. The value must be equal or less than @code{thr_w}.\n\n"},{"names":["thr_w"],"info":"Set the luma threshold for white. Accepts float numbers in the range [0.0,1.0],\ndefault value is @code{0.6}. 
The value must be equal or greater than @code{thr_b}.\n\n","examples":"@subsection Examples\n\n@itemize\n@item\nDetect and draw VITC data onto the video frame; if no valid VITC is detected,\ndraw @code{--:--:--:--} as a placeholder:\n@example\nffmpeg -i input.avi -filter:v 'readvitc,drawtext=fontfile=FreeMono.ttf:text=%@{metadata\\\\:lavfi.readvitc.tc_str\\\\:--\\\\\\\\\\\\:--\\\\\\\\\\\\:--\\\\\\\\\\\\:--@}:x=(w-tw)/2:y=400-ascent'\n@end example\n@end itemize\n\n"}]},{"filtergroup":["remap"],"info":"\nRemap pixels using 2nd: Xmap and 3rd: Ymap input video stream.\n\nDestination pixel at position (X, Y) will be picked from source (x, y) position\nwhere x = Xmap(X, Y) and y = Ymap(X, Y). If mapping values are out of range, zero\nvalue for pixel will be used for destination pixel.\n\nXmap and Ymap input video streams must be of same dimensions. Output video stream\nwill have Xmap/Ymap video stream dimensions.\nXmap and Ymap input video streams are 16bit depth, single channel.\n\n","options":[{"names":["format"],"info":"Specify pixel format of output from this filter. Can be @code{color} or @code{gray}.\nDefault is @code{color}.\n\n"}]},{"filtergroup":["removegrain"],"info":"\nThe removegrain filter is a spatial denoiser for progressive video.\n\n","options":[{"names":["m0"],"info":"Set mode for the first plane.\n\n"},{"names":["m1"],"info":"Set mode for the second plane.\n\n"},{"names":["m2"],"info":"Set mode for the third plane.\n\n"},{"names":["m3"],"info":"Set mode for the fourth plane.\n\nRange of mode is from 0 to 24. Description of each mode follows:\n\n@item 0\nLeave input plane unchanged. 
Default.\n\n@item 1\nClips the pixel with the minimum and maximum of the 8 neighbour pixels.\n\n@item 2\nClips the pixel with the second minimum and maximum of the 8 neighbour pixels.\n\n@item 3\nClips the pixel with the third minimum and maximum of the 8 neighbour pixels.\n\n@item 4\nClips the pixel with the fourth minimum and maximum of the 8 neighbour pixels.\nThis is equivalent to a median filter.\n\n@item 5\nLine-sensitive clipping giving the minimal change.\n\n@item 6\nLine-sensitive clipping, intermediate.\n\n@item 7\nLine-sensitive clipping, intermediate.\n\n@item 8\nLine-sensitive clipping, intermediate.\n\n@item 9\nLine-sensitive clipping on a line where the neighbours pixels are the closest.\n\n@item 10\nReplaces the target pixel with the closest neighbour.\n\n@item 11\n[1 2 1] horizontal and vertical kernel blur.\n\n@item 12\nSame as mode 11.\n\n@item 13\nBob mode, interpolates top field from the line where the neighbours\npixels are the closest.\n\n@item 14\nBob mode, interpolates bottom field from the line where the neighbours\npixels are the closest.\n\n@item 15\nBob mode, interpolates top field. Same as 13 but with a more complicated\ninterpolation formula.\n\n@item 16\nBob mode, interpolates bottom field. 
Same as 14 but with a more complicated\ninterpolation formula.\n\n@item 17\nClips the pixel with the minimum and maximum of respectively the maximum and\nminimum of each pair of opposite neighbour pixels.\n\n@item 18\nLine-sensitive clipping using opposite neighbours whose greatest distance from\nthe current pixel is minimal.\n\n@item 19\nReplaces the pixel with the average of its 8 neighbours.\n\n@item 20\nAverages the 9 pixels ([1 1 1] horizontal and vertical blur).\n\n@item 21\nClips pixels using the averages of opposite neighbour.\n\n@item 22\nSame as mode 21 but simpler and faster.\n\n@item 23\nSmall edge and halo removal, but reputed useless.\n\n@item 24\nSimilar as 23.\n\n"}]},{"filtergroup":["removelogo"],"info":"\nSuppress a TV station logo, using an image file to determine which\npixels comprise the logo. It works by filling in the pixels that\ncomprise the logo with neighboring pixels.\n\n","options":[{"names":["filename","f"],"info":"Set the filter bitmap file, which can be any image format supported by\nlibavformat. The width and height of the image file must match those of the\nvideo stream being processed.\n\nPixels in the provided bitmap image with a value of zero are not\nconsidered part of the logo, non-zero pixels are considered part of\nthe logo. If you use white (255) for the logo and black (0) for the\nrest, you will be safe. For making the filter bitmap, it is\nrecommended to take a screen capture of a black frame with the logo\nvisible, and then using a threshold filter followed by the erode\nfilter once or twice.\n\nIf needed, little splotches can be fixed manually. Remember that if\nlogo pixels are not covered, the filter quality will be much\nreduced. 
Marking too many pixels as part of the logo does not hurt as\nmuch, but it will increase the amount of blurring needed to cover over\nthe image and will destroy more information than necessary, and extra\npixels will slow things down on a large logo.\n\n"}]},{"filtergroup":["repeatfields"],"info":"\nThis filter uses the repeat_field flag from the Video ES headers and hard repeats\nfields based on its value.\n\n","options":[]},{"filtergroup":["reverse"],"info":"\nReverse a video clip.\n\nWarning: This filter requires memory to buffer the entire clip, so trimming\nis suggested.\n\n@subsection Examples\n\n@itemize\n@item\nTake the first 5 seconds of a clip, and reverse it.\n@example\ntrim=end=5,reverse\n@end example\n@end itemize\n\n","options":[]},{"filtergroup":["rgbashift"],"info":"Shift R/G/B/A pixels horizontally and/or vertically.\n\n","options":[{"names":["rh"],"info":"Set amount to shift red horizontally.\n"},{"names":["rv"],"info":"Set amount to shift red vertically.\n"},{"names":["gh"],"info":"Set amount to shift green horizontally.\n"},{"names":["gv"],"info":"Set amount to shift green vertically.\n"},{"names":["bh"],"info":"Set amount to shift blue horizontally.\n"},{"names":["bv"],"info":"Set amount to shift blue vertically.\n"},{"names":["ah"],"info":"Set amount to shift alpha horizontally.\n"},{"names":["av"],"info":"Set amount to shift alpha vertically.\n"},{"names":["edge"],"info":"Set edge mode, can be @var{smear}, default, or @var{warp}.\n\n@subsection Commands\n\nThis filter supports the all above options as @ref{commands}.\n\n"}]},{"filtergroup":["roberts"],"info":"Apply roberts cross operator to input video stream.\n\nThe filter accepts the following option:\n\n","options":[{"names":["planes"],"info":"Set which planes will be processed, unprocessed planes will be copied.\nBy default value 0xf, all planes will be processed.\n\n"},{"names":["scale"],"info":"Set value which will be multiplied with filtered result.\n\n"},{"names":["delta"],"info":"Set 
value which will be added to filtered result.\n\n"}]},{"filtergroup":["rotate"],"info":"\nRotate video by an arbitrary angle expressed in radians.\n\n","options":[{"names":["angle","a"],"info":"Set an expression for the angle by which to rotate the input video\nclockwise, expressed as a number of radians. A negative value will\nresult in a counter-clockwise rotation. By default it is set to \"0\".\n\nThis expression is evaluated for each frame.\n\n"},{"names":["out_w","ow"],"info":"Set the output width expression, default value is \"iw\".\nThis expression is evaluated just once during configuration.\n\n"},{"names":["out_h","oh"],"info":"Set the output height expression, default value is \"ih\".\nThis expression is evaluated just once during configuration.\n\n"},{"names":["bilinear"],"info":"Enable bilinear interpolation if set to 1, a value of 0 disables\nit. Default value is 1.\n\n"},{"names":["fillcolor","c"],"info":"Set the color used to fill the output area not covered by the rotated\nimage. For the general syntax of this option, check the\n@ref{color syntax,,\"Color\" section in the ffmpeg-utils manual,ffmpeg-utils}.\nIf the special value \"none\" is selected then no\nbackground is printed (useful for example if the background is never shown).\n\nDefault value is \"black\".\n\nThe expressions for the angle and the output size can contain the\nfollowing constants and functions:\n\n@item n\nsequential number of the input frame, starting from 0. It is always NAN\nbefore the first frame is filtered.\n\n@item t\ntime in seconds of the input frame, it is set to 0 when the filter is\nconfigured. It is always NAN before the first frame is filtered.\n\n@item hsub\n@item vsub\nhorizontal and vertical chroma subsample values. 
For example for the\npixel format \"yuv422p\" @var{hsub} is 2 and @var{vsub} is 1.\n\n@item in_w, iw\n@item in_h, ih\nthe input video width and height\n\n@item out_w, ow\n@item out_h, oh\nthe output width and height, that is the size of the padded area as\nspecified by the @var{width} and @var{height} expressions\n\n@item rotw(a)\n@item roth(a)\nthe minimal width/height required for completely containing the input\nvideo rotated by @var{a} radians.\n\nThese are only available when computing the @option{out_w} and\n@option{out_h} expressions.\n\n","examples":"@subsection Examples\n\n@itemize\n@item\nRotate the input by PI/6 radians clockwise:\n@example\nrotate=PI/6\n@end example\n\n@item\nRotate the input by PI/6 radians counter-clockwise:\n@example\nrotate=-PI/6\n@end example\n\n@item\nRotate the input by 45 degrees clockwise:\n@example\nrotate=45*PI/180\n@end example\n\n@item\nApply a constant rotation with period T, starting from an angle of PI/3:\n@example\nrotate=PI/3+2*PI*t/T\n@end example\n\n@item\nMake the input video rotation oscillating with a period of T\nseconds and an amplitude of A radians:\n@example\nrotate=A*sin(2*PI/T*t)\n@end example\n\n@item\nRotate the video, output size is chosen so that the whole rotating\ninput video is always completely contained in the output:\n@example\nrotate='2*PI*t:ow=hypot(iw,ih):oh=ow'\n@end example\n\n@item\nRotate the video, reduce the output size so that no background is ever\nshown:\n@example\nrotate=2*PI*t:ow='min(iw,ih)/sqrt(2)':oh=ow:c=none\n@end example\n@end itemize\n\n@subsection Commands\n\nThe filter supports the following commands:\n\n@table @option\n@item a, angle\nSet the angle expression.\nThe command accepts the same syntax of the corresponding option.\n\nIf the specified expression is not valid, it is kept at its current\nvalue.\n@end table\n\n"}]},{"filtergroup":["sab"],"info":"\nApply Shape Adaptive Blur.\n\n","options":[{"names":["luma_radius","lr"],"info":"Set luma blur filter strength, must be a 
value in range 0.1-4.0, default\nvalue is 1.0. A greater value will result in a more blurred image, and\nin slower processing.\n\n"},{"names":["luma_pre_filter_radius","lpfr"],"info":"Set luma pre-filter radius, must be a value in the 0.1-2.0 range, default\nvalue is 1.0.\n\n"},{"names":["luma_strength","ls"],"info":"Set luma maximum difference between pixels to still be considered, must\nbe a value in the 0.1-100.0 range, default value is 1.0.\n\n"},{"names":["chroma_radius","cr"],"info":"Set chroma blur filter strength, must be a value in range -0.9-4.0. A\ngreater value will result in a more blurred image, and in slower\nprocessing.\n\n"},{"names":["chroma_pre_filter_radius","cpfr"],"info":"Set chroma pre-filter radius, must be a value in the -0.9-2.0 range.\n\n"},{"names":["chroma_strength","cs"],"info":"Set chroma maximum difference between pixels to still be considered,\nmust be a value in the -0.9-100.0 range.\n\nEach chroma option value, if not explicitly specified, is set to the\ncorresponding luma option value.\n\n@anchor{scale}\n"}]},{"filtergroup":["scale"],"info":"\nScale (resize) the input video, using the libswscale library.\n\nThe scale filter forces the output display aspect ratio to be the same\nof the input, by changing the output sample aspect ratio.\n\nIf the input image format is different from the format requested by\nthe next filter, the scale filter will convert the input to the\nrequested format.\n\n@subsection Options\nThe filter accepts the following options, or any of the options\nsupported by the libswscale scaler.\n\nSee @ref{scaler_options,,the ffmpeg-scaler manual,ffmpeg-scaler} for\nthe complete list of scaler options.\n\n","options":[{"names":["width","w"],"info":""},{"names":["height","h"],"info":"Set the output video dimension expression. Default value is the input\ndimension.\n\nIf the @var{width} or @var{w} value is 0, the input width is used for\nthe output. 
If the @var{height} or @var{h} value is 0, the input height\nis used for the output.\n\nIf one and only one of the values is -n with n >= 1, the scale filter\nwill use a value that maintains the aspect ratio of the input image,\ncalculated from the other specified dimension. After that it will,\nhowever, make sure that the calculated dimension is divisible by n and\nadjust the value if necessary.\n\nIf both values are -n with n >= 1, the behavior will be identical to\nboth values being set to 0 as previously detailed.\n\nSee below for the list of accepted constants for use in the dimension\nexpression.\n\n"},{"names":["eval"],"info":"Specify when to evaluate @var{width} and @var{height} expression. It accepts the following values:\n\n@item init\nOnly evaluate expressions once during the filter initialization or when a command is processed.\n\n@item frame\nEvaluate expressions for each incoming frame.\n\n\nDefault value is @samp{init}.\n\n\n"},{"names":["interl"],"info":"Set the interlacing mode. It accepts the following values:\n\n@item 1\nForce interlaced aware scaling.\n\n@item 0\nDo not apply interlaced scaling.\n\n@item -1\nSelect interlaced aware scaling depending on whether the source frames\nare flagged as interlaced or not.\n\nDefault value is @samp{0}.\n\n"},{"names":["flags"],"info":"Set libswscale scaling flags. See\n@ref{sws_flags,,the ffmpeg-scaler manual,ffmpeg-scaler} for the\ncomplete list of values. If not explicitly specified the filter applies\nthe default flags.\n\n\n"},{"names":["param0","param1"],"info":"Set libswscale input parameters for scaling algorithms that need them. See\n@ref{sws_params,,the ffmpeg-scaler manual,ffmpeg-scaler} for the\ncomplete documentation. If not explicitly specified the filter applies\nempty parameters.\n\n\n\n"},{"names":["size","s"],"info":"Set the video size. 
For the syntax of this option, check the\n@ref{video size syntax,,\"Video size\" section in the ffmpeg-utils manual,ffmpeg-utils}.\n\n"},{"names":["in_color_matrix"],"info":""},{"names":["out_color_matrix"],"info":"Set in/output YCbCr color space type.\n\nThis allows the autodetected value to be overridden as well as allows forcing\na specific value used for the output and encoder.\n\nIf not specified, the color space type depends on the pixel format.\n\nPossible values:\n\n@item auto\nChoose automatically.\n\n@item bt709\nFormat conforming to International Telecommunication Union (ITU)\nRecommendation BT.709.\n\n@item fcc\nSet color space conforming to the United States Federal Communications\nCommission (FCC) Code of Federal Regulations (CFR) Title 47 (2003) 73.682 (a).\n\n@item bt601\n@item bt470\n@item smpte170m\nSet color space conforming to:\n\n@itemize\n@item\nITU Radiocommunication Sector (ITU-R) Recommendation BT.601\n\n@item\nITU-R Rec. BT.470-6 (1998) Systems B, B1, and G\n\n@item\nSociety of Motion Picture and Television Engineers (SMPTE) ST 170:2004\n\n@end itemize\n\n@item smpte240m\nSet color space conforming to SMPTE ST 240:1999.\n\n@item bt2020\nSet color space conforming to ITU-R BT.2020 non-constant luminance system.\n\n"},{"names":["in_range"],"info":""},{"names":["out_range"],"info":"Set in/output YCbCr sample range.\n\nThis allows the autodetected value to be overridden as well as allows forcing\na specific value used for the output and encoder. If not specified, the\nrange depends on the pixel format. Possible values:\n\n@item auto/unknown\nChoose automatically.\n\n@item jpeg/full/pc\nSet full range (0-255 in case of 8-bit luma).\n\n@item mpeg/limited/tv\nSet \"MPEG\" range (16-235 in case of 8-bit luma).\n\n"},{"names":["force_original_aspect_ratio"],"info":"Enable decreasing or increasing output video width or height if necessary to\nkeep the original aspect ratio. 
Possible values:\n\n@item disable\nScale the video as specified and disable this feature.\n\n@item decrease\nThe output video dimensions will automatically be decreased if needed.\n\n@item increase\nThe output video dimensions will automatically be increased if needed.\n\n\nOne useful instance of this option is that when you know a specific device's\nmaximum allowed resolution, you can use this to limit the output video to\nthat, while retaining the aspect ratio. For example, device A allows\n1280x720 playback, and your video is 1920x800. Using this option (set it to\ndecrease) and specifying 1280x720 to the command line makes the output\n1280x533.\n\nPlease note that this is a different thing than specifying -1 for @option{w}\nor @option{h}, you still need to specify the output resolution for this option\nto work.\n\n"},{"names":["force_divisible_by"],"info":"Ensures that both the output dimensions, width and height, are divisible by the\ngiven integer when used together with @option{force_original_aspect_ratio}. This\nworks similar to using @code{-n} in the @option{w} and @option{h} options.\n\nThis option respects the value set for @option{force_original_aspect_ratio},\nincreasing or decreasing the resolution accordingly. 
The video's aspect ratio\nmay be slightly modified.\n\nThis option can be handy if you need to have a video fit within or exceed\na defined resolution using @option{force_original_aspect_ratio} but also have\nencoder restrictions on width or height divisibility.\n\n\nThe values of the @option{w} and @option{h} options are expressions\ncontaining the following constants:\n\n@item in_w\n@item in_h\nThe input width and height\n\n@item iw\n@item ih\nThese are the same as @var{in_w} and @var{in_h}.\n\n@item out_w\n@item out_h\nThe output (scaled) width and height\n\n@item ow\n@item oh\nThese are the same as @var{out_w} and @var{out_h}\n\n@item a\nThe same as @var{iw} / @var{ih}\n\n@item sar\ninput sample aspect ratio\n\n@item dar\nThe input display aspect ratio. Calculated from @code{(iw / ih) * sar}.\n\n@item hsub\n@item vsub\nhorizontal and vertical input chroma subsample values. For example for the\npixel format \"yuv422p\" @var{hsub} is 2 and @var{vsub} is 1.\n\n@item ohsub\n@item ovsub\nhorizontal and vertical output chroma subsample values. 
For example for the\npixel format \"yuv422p\" @var{hsub} is 2 and @var{vsub} is 1.\n\n","examples":"@subsection Examples\n\n@itemize\n@item\nScale the input video to a size of 200x100\n@example\nscale=w=200:h=100\n@end example\n\nThis is equivalent to:\n@example\nscale=200:100\n@end example\n\nor:\n@example\nscale=200x100\n@end example\n\n@item\nSpecify a size abbreviation for the output size:\n@example\nscale=qcif\n@end example\n\nwhich can also be written as:\n@example\nscale=size=qcif\n@end example\n\n@item\nScale the input to 2x:\n@example\nscale=w=2*iw:h=2*ih\n@end example\n\n@item\nThe above is the same as:\n@example\nscale=2*in_w:2*in_h\n@end example\n\n@item\nScale the input to 2x with forced interlaced scaling:\n@example\nscale=2*iw:2*ih:interl=1\n@end example\n\n@item\nScale the input to half size:\n@example\nscale=w=iw/2:h=ih/2\n@end example\n\n@item\nIncrease the width, and set the height to the same size:\n@example\nscale=3/2*iw:ow\n@end example\n\n@item\nSeek Greek harmony:\n@example\nscale=iw:1/PHI*iw\nscale=ih*PHI:ih\n@end example\n\n@item\nIncrease the height, and set the width to 3/2 of the height:\n@example\nscale=w=3/2*oh:h=3/5*ih\n@end example\n\n@item\nIncrease the size, making the size a multiple of the chroma\nsubsample values:\n@example\nscale=\"trunc(3/2*iw/hsub)*hsub:trunc(3/2*ih/vsub)*vsub\"\n@end example\n\n@item\nIncrease the width to a maximum of 500 pixels,\nkeeping the same aspect ratio as the input:\n@example\nscale=w='min(500\\, iw*3/2):h=-1'\n@end example\n\n@item\nMake pixels square by combining scale and setsar:\n@example\nscale='trunc(ih*dar):ih',setsar=1/1\n@end example\n\n@item\nMake pixels square by combining scale and setsar,\nmaking sure the resulting resolution is even (required by some codecs):\n@example\nscale='trunc(ih*dar/2)*2:trunc(ih/2)*2',setsar=1/1\n@end example\n@end itemize\n\n@subsection Commands\n\nThis filter supports the following commands:\n@table @option\n@item width, w\n@item height, h\nSet the output 
video dimension expression.\nThe command accepts the same syntax of the corresponding option.\n\nIf the specified expression is not valid, it is kept at its current\nvalue.\n@end table\n\n"}]},{"filtergroup":["scale_npp"],"info":"\nUse the NVIDIA Performance Primitives (libnpp) to perform scaling and/or pixel\nformat conversion on CUDA video frames. Setting the output width and height\nworks in the same way as for the @var{scale} filter.\n\nThe following additional options are accepted:\n","options":[{"names":["format"],"info":"The pixel format of the output CUDA frames. If set to the string \"same\" (the\ndefault), the input format will be kept. Note that automatic format negotiation\nand conversion is not yet supported for hardware frames\n\n"},{"names":["interp_algo"],"info":"The interpolation algorithm used for resizing. One of the following:\n@item nn\nNearest neighbour.\n\n@item linear\n@item cubic\n@item cubic2p_bspline\n2-parameter cubic (B=1, C=0)\n\n@item cubic2p_catmullrom\n2-parameter cubic (B=0, C=1/2)\n\n@item cubic2p_b05c03\n2-parameter cubic (B=1/2, C=3/10)\n\n@item super\nSupersampling\n\n@item lanczos\n\n"},{"names":["force_original_aspect_ratio"],"info":"Enable decreasing or increasing output video width or height if necessary to\nkeep the original aspect ratio. Possible values:\n\n@item disable\nScale the video as specified and disable this feature.\n\n@item decrease\nThe output video dimensions will automatically be decreased if needed.\n\n@item increase\nThe output video dimensions will automatically be increased if needed.\n\n\nOne useful instance of this option is that when you know a specific device's\nmaximum allowed resolution, you can use this to limit the output video to\nthat, while retaining the aspect ratio. For example, device A allows\n1280x720 playback, and your video is 1920x800. 
Using this option (set it to\ndecrease) and specifying 1280x720 to the command line makes the output\n1280x533.\n\nPlease note that this is a different thing than specifying -1 for @option{w}\nor @option{h}, you still need to specify the output resolution for this option\nto work.\n\n"},{"names":["force_divisible_by"],"info":"Ensures that both the output dimensions, width and height, are divisible by the\ngiven integer when used together with @option{force_original_aspect_ratio}. This\nworks similar to using @code{-n} in the @option{w} and @option{h} options.\n\nThis option respects the value set for @option{force_original_aspect_ratio},\nincreasing or decreasing the resolution accordingly. The video's aspect ratio\nmay be slightly modified.\n\nThis option can be handy if you need to have a video fit within or exceed\na defined resolution using @option{force_original_aspect_ratio} but also have\nencoder restrictions on width or height divisibility.\n\n\n"}]},{"filtergroup":["scale2ref"],"info":"\nScale (resize) the input video, based on a reference video.\n\nSee the scale filter for available options, scale2ref supports the same but\nuses the reference video instead of the main input as basis. scale2ref also\nsupports the following additional constants for the @option{w} and\n@option{h} options:\n\n","options":[{"names":["main_w"],"info":""},{"names":["main_h"],"info":"The main input video's width and height\n\n"},{"names":["main_a"],"info":"The same as @var{main_w} / @var{main_h}\n\n"},{"names":["main_sar"],"info":"The main input video's sample aspect ratio\n\n"},{"names":["main_dar","mdar"],"info":"The main input video's display aspect ratio. 
Calculated from\n@code{(main_w / main_h) * main_sar}.\n\n"},{"names":["main_hsub"],"info":""},{"names":["main_vsub"],"info":"The main input video's horizontal and vertical chroma subsample values.\nFor example for the pixel format \"yuv422p\" @var{hsub} is 2 and @var{vsub}\nis 1.\n\n","examples":"@subsection Examples\n\n@itemize\n@item\nScale a subtitle stream (b) to match the main video (a) in size before overlaying\n@example\n'scale2ref[b][a];[a][b]overlay'\n@end example\n\n@item\nScale a logo to 1/10th the height of a video, while preserving its display aspect ratio.\n@example\n[logo-in][video-in]scale2ref=w=oh*mdar:h=ih/10[logo-out][video-out]\n@end example\n@end itemize\n\n"}]},{"filtergroup":["scroll"],"info":"Scroll input video horizontally and/or vertically by constant speed.\n\n","options":[{"names":["horizontal","h"],"info":"Set the horizontal scrolling speed. Default is 0. Allowed range is from -1 to 1.\nNegative values change the scrolling direction.\n\n"},{"names":["vertical","v"],"info":"Set the vertical scrolling speed. Default is 0. Allowed range is from -1 to 1.\nNegative values change the scrolling direction.\n\n"},{"names":["hpos"],"info":"Set the initial horizontal scrolling position. Default is 0. Allowed range is from 0 to 1.\n\n"},{"names":["vpos"],"info":"Set the initial vertical scrolling position. Default is 0. Allowed range is from 0 to 1.\n\n@subsection Commands\n\nThis filter supports the following @ref{commands}:\n@item horizontal, h\nSet the horizontal scrolling speed.\n@item vertical, v\nSet the vertical scrolling speed.\n\n@anchor{selectivecolor}\n"}]},{"filtergroup":["selectivecolor"],"info":"\nAdjust cyan, magenta, yellow and black (CMYK) to certain ranges of colors (such\nas \"reds\", \"yellows\", \"greens\", \"cyans\", ...). 
The adjustment range is defined\nby the \"purity\" of the color (that is, how saturated it already is).\n\nThis filter is similar to the Adobe Photoshop Selective Color tool.\n\n","options":[{"names":["correction_method"],"info":"Select color correction method.\n\nAvailable values are:\n@item absolute\nSpecified adjustments are applied \"as-is\" (added/subtracted to original pixel\ncomponent value).\n@item relative\nSpecified adjustments are relative to the original component value.\nDefault is @code{absolute}.\n"},{"names":["reds"],"info":"Adjustments for red pixels (pixels where the red component is the maximum)\n"},{"names":["yellows"],"info":"Adjustments for yellow pixels (pixels where the blue component is the minimum)\n"},{"names":["greens"],"info":"Adjustments for green pixels (pixels where the green component is the maximum)\n"},{"names":["cyans"],"info":"Adjustments for cyan pixels (pixels where the red component is the minimum)\n"},{"names":["blues"],"info":"Adjustments for blue pixels (pixels where the blue component is the maximum)\n"},{"names":["magentas"],"info":"Adjustments for magenta pixels (pixels where the green component is the minimum)\n"},{"names":["whites"],"info":"Adjustments for white pixels (pixels where all components are greater than 128)\n"},{"names":["neutrals"],"info":"Adjustments for all pixels except pure black and pure white\n"},{"names":["blacks"],"info":"Adjustments for black pixels (pixels where all components are lesser than 128)\n"},{"names":["psfile"],"info":"Specify a Photoshop selective color file (@code{.asv}) to import the settings from.\n\nAll the adjustment settings (@option{reds}, @option{yellows}, ...) 
accept up to\n4 space-separated floating point adjustment values in the [-1,1] range,\nrespectively to adjust the amount of cyan, magenta, yellow and black for the\npixels of its range.\n\n","examples":"@subsection Examples\n\n@itemize\n@item\nIncrease cyan by 50% and reduce yellow by 33% in every green area, and\nincrease magenta by 27% in blue areas:\n@example\nselectivecolor=greens=.5 0 -.33 0:blues=0 .27\n@end example\n\n@item\nUse a Photoshop selective color preset:\n@example\nselectivecolor=psfile=MySelectiveColorPresets/Misty.asv\n@end example\n@end itemize\n\n@anchor{separatefields}\n"}]},{"filtergroup":["separatefields"],"info":"\nThe @code{separatefields} filter takes a frame-based video input and splits\neach frame into its component fields, producing a new half-height clip\nwith twice the frame rate and twice the frame count.\n\nThis filter uses the field-dominance information in the frame to decide which\nof each pair of fields to place first in the output.\nIf it gets it wrong, use the @ref{setfield} filter before the @code{separatefields} filter.\n\n","options":[]},{"filtergroup":["setdar","setsar"],"info":"\nThe @code{setdar} filter sets the Display Aspect Ratio for the filter\noutput video.\n\nThis is done by changing the specified Sample (aka Pixel) Aspect\nRatio, according to the following equation:\n@example\n@var{DAR} = @var{HORIZONTAL_RESOLUTION} / @var{VERTICAL_RESOLUTION} * @var{SAR}\n@end example\n\nKeep in mind that the @code{setdar} filter does not modify the pixel\ndimensions of the video frame. Also, the display aspect ratio set by\nthis filter may be changed by later filters in the filterchain,\ne.g. 
in case of scaling or if another \"setdar\" or a \"setsar\" filter is\napplied.\n\nThe @code{setsar} filter sets the Sample (aka Pixel) Aspect Ratio for\nthe filter output video.\n\nNote that as a consequence of the application of this filter, the\noutput display aspect ratio will change according to the equation\nabove.\n\nKeep in mind that the sample aspect ratio set by the @code{setsar}\nfilter may be changed by later filters in the filterchain, e.g. if\nanother \"setsar\" or a \"setdar\" filter is applied.\n\nIt accepts the following parameters:\n\n","options":[{"names":["r","ratio","dar","(@code{setdar}","only)","sar","(@code{setsar}","only)"],"info":"Set the aspect ratio used by the filter.\n\nThe parameter can be a floating point number string, an expression, or\na string of the form @var{num}:@var{den}, where @var{num} and\n@var{den} are the numerator and denominator of the aspect ratio. If\nthe parameter is not specified, the value \"0\" is assumed.\nIn case the form \"@var{num}:@var{den}\" is used, the @code{:} character\nshould be escaped.\n\n"},{"names":["max"],"info":"Set the maximum integer value to use for expressing numerator and\ndenominator when reducing the expressed aspect ratio to a rational.\nDefault value is @code{100}.\n\n\nThe parameter @var{sar} is an expression containing\nthe following constants:\n\n@item E, PI, PHI\nThese are approximated values for the mathematical constants e\n(Euler's number), pi (Greek pi), and phi (the golden ratio).\n\n@item w, h\nThe input width and height.\n\n@item a\nThese are the same as @var{w} / @var{h}.\n\n@item sar\nThe input sample aspect ratio.\n\n@item dar\nThe input display aspect ratio. It is the same as\n(@var{w} / @var{h}) * @var{sar}.\n\n@item hsub, vsub\nHorizontal and vertical chroma subsample values. 
For example, for the\npixel format \"yuv422p\" @var{hsub} is 2 and @var{vsub} is 1.\n\n","examples":"@subsection Examples\n\n@itemize\n\n@item\nTo change the display aspect ratio to 16:9, specify one of the following:\n@example\nsetdar=dar=1.77777\nsetdar=dar=16/9\n@end example\n\n@item\nTo change the sample aspect ratio to 10:11, specify:\n@example\nsetsar=sar=10/11\n@end example\n\n@item\nTo set a display aspect ratio of 16:9, and specify a maximum integer value of\n1000 in the aspect ratio reduction, use the command:\n@example\nsetdar=ratio=16/9:max=1000\n@end example\n\n@end itemize\n\n@anchor{setfield}\n"}]},{"filtergroup":["setfield"],"info":"\nForce field for the output video frame.\n\nThe @code{setfield} filter marks the interlace type field for the\noutput frames. It does not change the input frame, but only sets the\ncorresponding property, which affects how the frame is treated by\nfollowing filters (e.g. @code{fieldorder} or @code{yadif}).\n\n","options":[{"names":["mode"],"info":"Available values are:\n\n@item auto\nKeep the same field property.\n\n@item bff\nMark the frame as bottom-field-first.\n\n@item tff\nMark the frame as top-field-first.\n\n@item prog\nMark the frame as progressive.\n\n@anchor{setparams}\n"}]},{"filtergroup":["setparams"],"info":"\nForce frame parameter for the output video frame.\n\nThe @code{setparams} filter marks interlace and color range for the\noutput frames. 
It does not change the input frame, but only sets the\ncorresponding property, which affects how the frame is treated by\nfilters/encoders.\n\n","options":[{"names":["field_mode"],"info":"Available values are:\n\n@item auto\nKeep the same field property (default).\n\n@item bff\nMark the frame as bottom-field-first.\n\n@item tff\nMark the frame as top-field-first.\n\n@item prog\nMark the frame as progressive.\n\n"},{"names":["range"],"info":"Available values are:\n\n@item auto\nKeep the same color range property (default).\n\n@item unspecified, unknown\nMark the frame as unspecified color range.\n\n@item limited, tv, mpeg\nMark the frame as limited range.\n\n@item full, pc, jpeg\nMark the frame as full range.\n\n"},{"names":["color_primaries"],"info":"Set the color primaries.\nAvailable values are:\n\n@item auto\nKeep the same color primaries property (default).\n\n@item bt709\n@item unknown\n@item bt470m\n@item bt470bg\n@item smpte170m\n@item smpte240m\n@item film\n@item bt2020\n@item smpte428\n@item smpte431\n@item smpte432\n@item jedec-p22\n\n"},{"names":["color_trc"],"info":"Set the color transfer.\nAvailable values are:\n\n@item auto\nKeep the same color trc property (default).\n\n@item bt709\n@item unknown\n@item bt470m\n@item bt470bg\n@item smpte170m\n@item smpte240m\n@item linear\n@item log100\n@item log316\n@item iec61966-2-4\n@item bt1361e\n@item iec61966-2-1\n@item bt2020-10\n@item bt2020-12\n@item smpte2084\n@item smpte428\n@item arib-std-b67\n\n"},{"names":["colorspace"],"info":"Set the colorspace.\nAvailable values are:\n\n@item auto\nKeep the same colorspace property (default).\n\n@item gbr\n@item bt709\n@item unknown\n@item fcc\n@item bt470bg\n@item smpte170m\n@item smpte240m\n@item ycgco\n@item bt2020nc\n@item bt2020c\n@item smpte2085\n@item chroma-derived-nc\n@item chroma-derived-c\n@item ictcp\n\n"}]},{"filtergroup":["showinfo"],"info":"\nShow a line containing various information for each input video frame.\nThe input video is not 
modified.\n\nThis filter supports the following options:\n\n","options":[{"names":["checksum"],"info":"Calculate checksums of each plane. By default enabled.\n\nThe shown line contains a sequence of key/value pairs of the form\n@var{key}:@var{value}.\n\nThe following values are shown in the output:\n\n@item n\nThe (sequential) number of the input frame, starting from 0.\n\n@item pts\nThe Presentation TimeStamp of the input frame, expressed as a number of\ntime base units. The time base unit depends on the filter input pad.\n\n@item pts_time\nThe Presentation TimeStamp of the input frame, expressed as a number of\nseconds.\n\n@item pos\nThe position of the frame in the input stream, or -1 if this information is\nunavailable and/or meaningless (for example in case of synthetic video).\n\n@item fmt\nThe pixel format name.\n\n@item sar\nThe sample aspect ratio of the input frame, expressed in the form\n@var{num}/@var{den}.\n\n@item s\nThe size of the input frame. For the syntax of this option, check the\n@ref{video size syntax,,\"Video size\" section in the ffmpeg-utils manual,ffmpeg-utils}.\n\n@item i\nThe type of interlaced mode (\"P\" for \"progressive\", \"T\" for top field first, \"B\"\nfor bottom field first).\n\n@item iskey\nThis is 1 if the frame is a key frame, 0 otherwise.\n\n@item type\nThe picture type of the input frame (\"I\" for an I-frame, \"P\" for a\nP-frame, \"B\" for a B-frame, or \"?\" for an unknown type).\nAlso refer to the documentation of the @code{AVPictureType} enum and of\nthe @code{av_get_picture_type_char} function defined in\n@file{libavutil/avutil.h}.\n\n@item checksum\nThe Adler-32 checksum (printed in hexadecimal) of all the planes of the input frame.\n\n@item plane_checksum\nThe Adler-32 checksum (printed in hexadecimal) of each plane of the input frame,\nexpressed in the form \"[@var{c0} @var{c1} @var{c2} @var{c3}]\".\n\n"}]},{"filtergroup":["showpalette"],"info":"\nDisplays the 256 colors palette of each frame. 
This filter is only relevant for\n@var{pal8} pixel format frames.\n\nIt accepts the following option:\n\n","options":[{"names":["s"],"info":"Set the size of the box used to represent one palette color entry. Default is\n@code{30} (for a @code{30x30} pixel box).\n\n"}]},{"filtergroup":["shuffleframes"],"info":"\nReorder and/or duplicate and/or drop video frames.\n\nIt accepts the following parameters:\n\n","options":[{"names":["mapping"],"info":"Set the destination indexes of input frames.\nThis is a space- or '|'-separated list of indexes that maps input frames to output\nframes. The number of indexes also sets the maximal value that each index may have.\nThe index '-1' has a special meaning: it drops the frame.\n\nThe first frame has the index 0. The default is to keep the input unchanged.\n\n","examples":"@subsection Examples\n\n@itemize\n@item\nSwap second and third frame of every three frames of the input:\n@example\nffmpeg -i INPUT -vf \"shuffleframes=0 2 1\" OUTPUT\n@end example\n\n@item\nSwap 10th and 1st frame of every ten frames of the input:\n@example\nffmpeg -i INPUT -vf \"shuffleframes=9 1 2 3 4 5 6 7 8 0\" OUTPUT\n@end example\n@end itemize\n\n"}]},{"filtergroup":["shuffleplanes"],"info":"\nReorder and/or duplicate video planes.\n\nIt accepts the following parameters:\n\n","options":[{"names":["map0"],"info":"The index of the input plane to be used as the first output plane.\n\n"},{"names":["map1"],"info":"The index of the input plane to be used as the second output plane.\n\n"},{"names":["map2"],"info":"The index of the input plane to be used as the third output plane.\n\n"},{"names":["map3"],"info":"The index of the input plane to be used as the fourth output plane.\n\n\nThe first plane has the index 0. 
The default is to keep the input unchanged.\n\n","examples":"@subsection Examples\n\n@itemize\n@item\nSwap the second and third planes of the input:\n@example\nffmpeg -i INPUT -vf shuffleplanes=0:2:1:3 OUTPUT\n@end example\n@end itemize\n\n@anchor{signalstats}\n"}]},{"filtergroup":["signalstats"],"info":"Evaluate various visual metrics that assist in determining issues associated\nwith the digitization of analog video media.\n\nBy default the filter will log these metadata values:\n\n","options":[{"names":["YMIN"],"info":"Display the minimal Y value contained within the input frame. Expressed in\nrange of [0-255].\n\n"},{"names":["YLOW"],"info":"Display the Y value at the 10% percentile within the input frame. Expressed in\nrange of [0-255].\n\n"},{"names":["YAVG"],"info":"Display the average Y value within the input frame. Expressed in range of\n[0-255].\n\n"},{"names":["YHIGH"],"info":"Display the Y value at the 90% percentile within the input frame. Expressed in\nrange of [0-255].\n\n"},{"names":["YMAX"],"info":"Display the maximum Y value contained within the input frame. Expressed in\nrange of [0-255].\n\n"},{"names":["UMIN"],"info":"Display the minimal U value contained within the input frame. Expressed in\nrange of [0-255].\n\n"},{"names":["ULOW"],"info":"Display the U value at the 10% percentile within the input frame. Expressed in\nrange of [0-255].\n\n"},{"names":["UAVG"],"info":"Display the average U value within the input frame. Expressed in range of\n[0-255].\n\n"},{"names":["UHIGH"],"info":"Display the U value at the 90% percentile within the input frame. Expressed in\nrange of [0-255].\n\n"},{"names":["UMAX"],"info":"Display the maximum U value contained within the input frame. Expressed in\nrange of [0-255].\n\n"},{"names":["VMIN"],"info":"Display the minimal V value contained within the input frame. Expressed in\nrange of [0-255].\n\n"},{"names":["VLOW"],"info":"Display the V value at the 10% percentile within the input frame. 
Expressed in\nrange of [0-255].\n\n"},{"names":["VAVG"],"info":"Display the average V value within the input frame. Expressed in range of\n[0-255].\n\n"},{"names":["VHIGH"],"info":"Display the V value at the 90% percentile within the input frame. Expressed in\nrange of [0-255].\n\n"},{"names":["VMAX"],"info":"Display the maximum V value contained within the input frame. Expressed in\nrange of [0-255].\n\n"},{"names":["SATMIN"],"info":"Display the minimal saturation value contained within the input frame.\nExpressed in range of [0-~181.02].\n\n"},{"names":["SATLOW"],"info":"Display the saturation value at the 10% percentile within the input frame.\nExpressed in range of [0-~181.02].\n\n"},{"names":["SATAVG"],"info":"Display the average saturation value within the input frame. Expressed in range\nof [0-~181.02].\n\n"},{"names":["SATHIGH"],"info":"Display the saturation value at the 90% percentile within the input frame.\nExpressed in range of [0-~181.02].\n\n"},{"names":["SATMAX"],"info":"Display the maximum saturation value contained within the input frame.\nExpressed in range of [0-~181.02].\n\n"},{"names":["HUEMED"],"info":"Display the median value for hue within the input frame. Expressed in range of\n[0-360].\n\n"},{"names":["HUEAVG"],"info":"Display the average value for hue within the input frame. 
Expressed in range of\n[0-360].\n\n"},{"names":["YDIF"],"info":"Display the average of sample value difference between all values of the Y\nplane in the current frame and corresponding values of the previous input frame.\nExpressed in range of [0-255].\n\n"},{"names":["UDIF"],"info":"Display the average of sample value difference between all values of the U\nplane in the current frame and corresponding values of the previous input frame.\nExpressed in range of [0-255].\n\n"},{"names":["VDIF"],"info":"Display the average of sample value difference between all values of the V\nplane in the current frame and corresponding values of the previous input frame.\nExpressed in range of [0-255].\n\n"},{"names":["YBITDEPTH"],"info":"Display bit depth of Y plane in current frame.\nExpressed in range of [0-16].\n\n"},{"names":["UBITDEPTH"],"info":"Display bit depth of U plane in current frame.\nExpressed in range of [0-16].\n\n"},{"names":["VBITDEPTH"],"info":"Display bit depth of V plane in current frame.\nExpressed in range of [0-16].\n\nThe filter accepts the following options:\n\n@item stat\n@item out\n\n@option{stat} specify an additional form of image analysis.\n@option{out} output video with the specified type of pixel highlighted.\n\nBoth options accept the following values:\n\n@table @samp\n@item tout\nIdentify @var{temporal outliers} pixels. A @var{temporal outlier} is a pixel\nunlike the neighboring pixels of the same field. Examples of temporal outliers\ninclude the results of video dropouts, head clogs, or tape tracking issues.\n\n@item vrep\nIdentify @var{vertical line repetition}. Vertical line repetition includes\nsimilar rows of pixels within a frame. In born-digital video vertical line\nrepetition is common, but this pattern is uncommon in video digitized from an\nanalog source. 
When it occurs in video that results from the digitization of an\nanalog source it can indicate concealment from a dropout compensator.\n\n@item brng\nIdentify pixels that fall outside of legal broadcast range.\n\n"},{"names":["color","c"],"info":"Set the highlight color for the @option{out} option. The default color is\nyellow.\n\n","examples":"@subsection Examples\n\n@itemize\n@item\nOutput data of various video metrics:\n@example\nffprobe -f lavfi movie=example.mov,signalstats=\"stat=tout+vrep+brng\" -show_frames\n@end example\n\n@item\nOutput specific data about the minimum and maximum values of the Y plane per frame:\n@example\nffprobe -f lavfi movie=example.mov,signalstats -show_entries frame_tags=lavfi.signalstats.YMAX,lavfi.signalstats.YMIN\n@end example\n\n@item\nPlayback video while highlighting pixels that are outside of broadcast range in red.\n@example\nffplay example.mov -vf signalstats=\"out=brng:color=red\"\n@end example\n\n@item\nPlayback video with signalstats metadata drawn over the frame.\n@example\nffplay example.mov -vf signalstats=stat=brng+vrep+tout,drawtext=fontfile=FreeSerif.ttf:textfile=signalstat_drawtext.txt\n@end example\n\nThe contents of signalstat_drawtext.txt used in the command are:\n@example\ntime %@{pts:hms@}\nY (%@{metadata:lavfi.signalstats.YMIN@}-%@{metadata:lavfi.signalstats.YMAX@})\nU (%@{metadata:lavfi.signalstats.UMIN@}-%@{metadata:lavfi.signalstats.UMAX@})\nV (%@{metadata:lavfi.signalstats.VMIN@}-%@{metadata:lavfi.signalstats.VMAX@})\nsaturation maximum: %@{metadata:lavfi.signalstats.SATMAX@}\n\n@end example\n@end itemize\n\n@anchor{signature}\n"}]},{"filtergroup":["signature"],"info":"\nCalculates the MPEG-7 Video Signature. The filter can handle more than one\ninput. In this case the matching between the inputs can be calculated additionally.\nThe filter always passes through the first input. 
The signature of each stream can\nbe written into a file.\n\nIt accepts the following options:\n\n","options":[{"names":["detectmode"],"info":"Enable or disable the matching process.\n\nAvailable values are:\n\n@item off\nDisable the calculation of a matching (default).\n@item full\nCalculate the matching for the whole video and output whether the whole video\nmatches or only parts.\n@item fast\nCalculate only until a matching is found or the video ends. Should be faster in\nsome cases.\n\n"},{"names":["nb_inputs"],"info":"Set the number of inputs. The option value must be a non negative integer.\nDefault value is 1.\n\n"},{"names":["filename"],"info":"Set the path to which the output is written. If there is more than one input,\nthe path must be a prototype, i.e. must contain %d or %0nd (where n is a positive\ninteger), that will be replaced with the input number. If no filename is\nspecified, no output will be written. This is the default.\n\n"},{"names":["format"],"info":"Choose the output format.\n\nAvailable values are:\n\n@item binary\nUse the specified binary representation (default).\n@item xml\nUse the specified xml representation.\n\n"},{"names":["th_d"],"info":"Set threshold to detect one word as similar. The option value must be an integer\ngreater than zero. The default value is 9000.\n\n"},{"names":["th_dc"],"info":"Set threshold to detect all words as similar. The option value must be an integer\ngreater than zero. The default value is 60000.\n\n"},{"names":["th_xh"],"info":"Set threshold to detect frames as similar. The option value must be an integer\ngreater than zero. The default value is 116.\n\n"},{"names":["th_di"],"info":"Set the minimum length of a sequence in frames to recognize it as matching\nsequence. 
The option value must be a non negative integer value.\nThe default value is 0.\n\n"},{"names":["th_it"],"info":"Set the minimum relation, that matching frames to all frames must have.\nThe option value must be a double value between 0 and 1. The default value is 0.5.\n\n","examples":"@subsection Examples\n\n@itemize\n@item\nTo calculate the signature of an input video and store it in signature.bin:\n@example\nffmpeg -i input.mkv -vf signature=filename=signature.bin -map 0:v -f null -\n@end example\n\n@item\nTo detect whether two videos match and store the signatures in XML format in\nsignature0.xml and signature1.xml:\n@example\nffmpeg -i input1.mkv -i input2.mkv -filter_complex \"[0:v][1:v] signature=nb_inputs=2:detectmode=full:format=xml:filename=signature%d.xml\" -map :v -f null -\n@end example\n\n@end itemize\n\n@anchor{smartblur}\n"}]},{"filtergroup":["smartblur"],"info":"\nBlur the input video without impacting the outlines.\n\nIt accepts the following options:\n\n","options":[{"names":["luma_radius","lr"],"info":"Set the luma radius. The option value must be a float number in\nthe range [0.1,5.0] that specifies the variance of the gaussian filter\nused to blur the image (slower if larger). Default value is 1.0.\n\n"},{"names":["luma_strength","ls"],"info":"Set the luma strength. The option value must be a float number\nin the range [-1.0,1.0] that configures the blurring. A value included\nin [0.0,1.0] will blur the image whereas a value included in\n[-1.0,0.0] will sharpen the image. Default value is 1.0.\n\n"},{"names":["luma_threshold","lt"],"info":"Set the luma threshold used as a coefficient to determine\nwhether a pixel should be blurred or not. The option value must be an\ninteger in the range [-30,30]. A value of 0 will filter all the image,\na value included in [0,30] will filter flat areas and a value included\nin [-30,0] will filter edges. Default value is 0.\n\n"},{"names":["chroma_radius","cr"],"info":"Set the chroma radius. 
The option value must be a float number in\nthe range [0.1,5.0] that specifies the variance of the gaussian filter\nused to blur the image (slower if larger). Default value is @option{luma_radius}.\n\n"},{"names":["chroma_strength","cs"],"info":"Set the chroma strength. The option value must be a float number\nin the range [-1.0,1.0] that configures the blurring. A value included\nin [0.0,1.0] will blur the image whereas a value included in\n[-1.0,0.0] will sharpen the image. Default value is @option{luma_strength}.\n\n"},{"names":["chroma_threshold","ct"],"info":"Set the chroma threshold used as a coefficient to determine\nwhether a pixel should be blurred or not. The option value must be an\ninteger in the range [-30,30]. A value of 0 will filter all the image,\na value included in [0,30] will filter flat areas and a value included\nin [-30,0] will filter edges. Default value is @option{luma_threshold}.\n\nIf a chroma option is not explicitly set, the corresponding luma value\nis set.\n\n"}]},{"filtergroup":["sobel"],"info":"Apply the Sobel operator to the input video stream.\n\nThe filter accepts the following option:\n\n","options":[{"names":["planes"],"info":"Set which planes will be processed, unprocessed planes will be copied.\nBy default value 0xf, all planes will be processed.\n\n"},{"names":["scale"],"info":"Set value which will be multiplied with filtered result.\n\n"},{"names":["delta"],"info":"Set value which will be added to filtered result.\n\n@anchor{spp}\n"}]},{"filtergroup":["spp"],"info":"\nApply a simple postprocessing filter that compresses and decompresses the image\nat several (or - in the case of @option{quality} level @code{6} - all) shifts\nand averages the results.\n\n","options":[{"names":["quality"],"info":"Set quality. This option defines the number of levels for averaging. It accepts\nan integer in the range 0-6. If set to @code{0}, the filter will have no\neffect. A value of @code{6} means the highest quality. 
For each increment of\nthat value the speed drops by a factor of approximately 2. Default value is\n@code{3}.\n\n"},{"names":["qp"],"info":"Force a constant quantization parameter. If not set, the filter will use the QP\nfrom the video stream (if available).\n\n"},{"names":["mode"],"info":"Set thresholding mode. Available modes are:\n\n@item hard\nSet hard thresholding (default).\n@item soft\nSet soft thresholding (better de-ringing effect, but likely blurrier).\n\n"},{"names":["use_bframe_qp"],"info":"Enable the use of the QP from the B-Frames if set to @code{1}. Using this\noption may cause flicker since the B-Frames have often larger QP. Default is\n@code{0} (not enabled).\n\n"}]},{"filtergroup":["sr"],"info":"\nScale the input by applying one of the super-resolution methods based on\nconvolutional neural networks. Supported models:\n\n@itemize\n@item\nSuper-Resolution Convolutional Neural Network model (SRCNN).\nSee @url{https://arxiv.org/abs/1501.00092}.\n\n@item\nEfficient Sub-Pixel Convolutional Neural Network model (ESPCN).\nSee @url{https://arxiv.org/abs/1609.05158}.\n@end itemize\n\nTraining scripts as well as scripts for model file (.pb) saving can be found at\n@url{https://github.com/XueweiMeng/sr/tree/sr_dnn_native}. Original repository\nis at @url{https://github.com/HighVoltageRocknRoll/sr.git}.\n\nNative model files (.model) can be generated from TensorFlow model\nfiles (.pb) by using tools/python/convert.py\n\n","options":[{"names":["dnn_backend"],"info":"Specify which DNN backend to use for model loading and execution. This option accepts\nthe following values:\n\n@item native\nNative implementation of DNN loading and execution.\n\n@item tensorflow\nTensorFlow backend. 
To enable this backend you\nneed to install the TensorFlow for C library (see\n@url{https://www.tensorflow.org/install/install_c}) and configure FFmpeg with\n@code{--enable-libtensorflow}\n\nDefault value is @samp{native}.\n\n"},{"names":["model"],"info":"Set path to model file specifying network architecture and its parameters.\nNote that different backends use different file formats. TensorFlow backend\ncan load files for both formats, while native backend can load files for only\nits format.\n\n"},{"names":["scale_factor"],"info":"Set scale factor for SRCNN model. Allowed values are @code{2}, @code{3} and @code{4}.\nDefault value is @code{2}. Scale factor is necessary for SRCNN model, because it accepts\ninput upscaled using bicubic upscaling with proper scale factor.\n\n"}]},{"filtergroup":["ssim"],"info":"\nObtain the SSIM (Structural SImilarity Metric) between two input videos.\n\nThis filter takes in input two input videos, the first input is\nconsidered the \"main\" source and is passed unchanged to the\noutput. The second input is used as a \"reference\" video for computing\nthe SSIM.\n\nBoth video inputs must have the same resolution and pixel format for\nthis filter to work correctly. Also it assumes that both inputs\nhave the same number of frames, which are compared one by one.\n\nThe filter stores the calculated SSIM of each frame.\n\nThe description of the accepted parameters follows.\n\n","options":[{"names":["stats_file","f"],"info":"If specified the filter will use the named file to save the SSIM of\neach individual frame. 
When filename equals \"-\" the data is sent to\nstandard output.\n\nThe file printed if @var{stats_file} is selected contains a sequence of\nkey/value pairs of the form @var{key}:@var{value} for each compared\ncouple of frames.\n\nA description of each shown parameter follows:\n\n@item n\nsequential number of the input frame, starting from 1\n\n@item Y, U, V, R, G, B\nSSIM of the compared frames for the component specified by the suffix.\n\n@item All\nSSIM of the compared frames for the whole frame.\n\n@item dB\nSame as above but in dB representation.\n\nThis filter also supports the @ref{framesync} options.\n\n","examples":"@subsection Examples\n@itemize\n@item\nFor example:\n@example\nmovie=ref_movie.mpg, setpts=PTS-STARTPTS [main];\n[main][ref] ssim=\"stats_file=stats.log\" [out]\n@end example\n\nIn this example the input file being processed is compared with the\nreference file @file{ref_movie.mpg}. The SSIM of each individual frame\nis stored in @file{stats.log}.\n\n@item\nAnother example with both psnr and ssim at the same time:\n@example\nffmpeg -i main.mpg -i ref.mpg -lavfi \"ssim;[0:v][1:v]psnr\" -f null -\n@end example\n\n@item\nAnother example with different containers:\n@example\nffmpeg -i main.mpg -i ref.mkv -lavfi \"[0:v]settb=AVTB,setpts=PTS-STARTPTS[main];[1:v]settb=AVTB,setpts=PTS-STARTPTS[ref];[main][ref]ssim\" -f null -\n@end example\n@end itemize\n\n"}]},{"filtergroup":["stereo3d"],"info":"\nConvert between different stereoscopic image formats.\n\nThe filters accept the following options:\n\n","options":[{"names":["in"],"info":"Set stereoscopic image format of input.\n\nAvailable values for input image formats are:\n@item sbsl\nside by side parallel (left eye left, right eye right)\n\n@item sbsr\nside by side crosseye (right eye left, left eye right)\n\n@item sbs2l\nside by side parallel with half width resolution\n(left eye left, right eye right)\n\n@item sbs2r\nside by side crosseye with half width resolution\n(right eye left, left eye 
right)\n\n@item abl\n@item tbl\nabove-below (left eye above, right eye below)\n\n@item abr\n@item tbr\nabove-below (right eye above, left eye below)\n\n@item ab2l\n@item tb2l\nabove-below with half height resolution\n(left eye above, right eye below)\n\n@item ab2r\n@item tb2r\nabove-below with half height resolution\n(right eye above, left eye below)\n\n@item al\nalternating frames (left eye first, right eye second)\n\n@item ar\nalternating frames (right eye first, left eye second)\n\n@item irl\ninterleaved rows (left eye has top row, right eye starts on next row)\n\n@item irr\ninterleaved rows (right eye has top row, left eye starts on next row)\n\n@item icl\ninterleaved columns, left eye first\n\n@item icr\ninterleaved columns, right eye first\n\nDefault value is @samp{sbsl}.\n\n"},{"names":["out"],"info":"Set stereoscopic image format of output.\n\n@item sbsl\nside by side parallel (left eye left, right eye right)\n\n@item sbsr\nside by side crosseye (right eye left, left eye right)\n\n@item sbs2l\nside by side parallel with half width resolution\n(left eye left, right eye right)\n\n@item sbs2r\nside by side crosseye with half width resolution\n(right eye left, left eye right)\n\n@item abl\n@item tbl\nabove-below (left eye above, right eye below)\n\n@item abr\n@item tbr\nabove-below (right eye above, left eye below)\n\n@item ab2l\n@item tb2l\nabove-below with half height resolution\n(left eye above, right eye below)\n\n@item ab2r\n@item tb2r\nabove-below with half height resolution\n(right eye above, left eye below)\n\n@item al\nalternating frames (left eye first, right eye second)\n\n@item ar\nalternating frames (right eye first, left eye second)\n\n@item irl\ninterleaved rows (left eye has top row, right eye starts on next row)\n\n@item irr\ninterleaved rows (right eye has top row, left eye starts on next row)\n\n@item arbg\nanaglyph red/blue gray\n(red filter on left eye, blue filter on right eye)\n\n@item argg\nanaglyph red/green gray\n(red filter on left 
eye, green filter on right eye)\n\n@item arcg\nanaglyph red/cyan gray\n(red filter on left eye, cyan filter on right eye)\n\n@item arch\nanaglyph red/cyan half colored\n(red filter on left eye, cyan filter on right eye)\n\n@item arcc\nanaglyph red/cyan color\n(red filter on left eye, cyan filter on right eye)\n\n@item arcd\nanaglyph red/cyan color optimized with the least squares projection of dubois\n(red filter on left eye, cyan filter on right eye)\n\n@item agmg\nanaglyph green/magenta gray\n(green filter on left eye, magenta filter on right eye)\n\n@item agmh\nanaglyph green/magenta half colored\n(green filter on left eye, magenta filter on right eye)\n\n@item agmc\nanaglyph green/magenta colored\n(green filter on left eye, magenta filter on right eye)\n\n@item agmd\nanaglyph green/magenta color optimized with the least squares projection of dubois\n(green filter on left eye, magenta filter on right eye)\n\n@item aybg\nanaglyph yellow/blue gray\n(yellow filter on left eye, blue filter on right eye)\n\n@item aybh\nanaglyph yellow/blue half colored\n(yellow filter on left eye, blue filter on right eye)\n\n@item aybc\nanaglyph yellow/blue colored\n(yellow filter on left eye, blue filter on right eye)\n\n@item aybd\nanaglyph yellow/blue color optimized with the least squares projection of dubois\n(yellow filter on left eye, blue filter on right eye)\n\n@item ml\nmono output (left eye only)\n\n@item mr\nmono output (right eye only)\n\n@item chl\ncheckerboard, left eye first\n\n@item chr\ncheckerboard, right eye first\n\n@item icl\ninterleaved columns, left eye first\n\n@item icr\ninterleaved columns, right eye first\n\n@item hdmi\nHDMI frame pack\n\nDefault value is @samp{arcd}.\n\n","examples":"@subsection Examples\n\n@itemize\n@item\nConvert input video from side by side parallel to anaglyph yellow/blue dubois:\n@example\nstereo3d=sbsl:aybd\n@end example\n\n@item\nConvert input video from above below (left eye above, right eye below) to side by side 
crosseye.\n@example\nstereo3d=abl:sbsr\n@end example\n@end itemize\n\n"}]},{"filtergroup":["streamselect","astreamselect"],"info":"Select video or audio streams.\n\n","options":[{"names":["inputs"],"info":"Set number of inputs. Default is 2.\n\n"},{"names":["map"],"info":"Set input indexes to remap to outputs.\n\n@subsection Commands\n\nThe @code{streamselect} and @code{astreamselect} filter supports the following\ncommands:\n\n@item map\nSet input indexes to remap to outputs.\n\n","examples":"@subsection Examples\n\n@itemize\n@item\nSelect first 5 seconds 1st stream and rest of time 2nd stream:\n@example\nsendcmd='5.0 streamselect map 1',streamselect=inputs=2:map=0\n@end example\n\n@item\nSame as above, but for audio:\n@example\nasendcmd='5.0 astreamselect map 1',astreamselect=inputs=2:map=0\n@end example\n@end itemize\n\n@anchor{subtitles}\n"}]},{"filtergroup":["subtitles"],"info":"\nDraw subtitles on top of input video using the libass library.\n\nTo enable compilation of this filter you need to configure FFmpeg with\n@code{--enable-libass}. This filter also requires a build with libavcodec and\nlibavformat to convert the passed subtitles file to ASS (Advanced Substation\nAlpha) subtitles format.\n\n","options":[{"names":["filename","f"],"info":"Set the filename of the subtitle file to read. It must be specified.\n\n"},{"names":["original_size"],"info":"Specify the size of the original video, the video for which the ASS file\nwas composed. 
For the syntax of this option, check the\n@ref{video size syntax,,\"Video size\" section in the ffmpeg-utils manual,ffmpeg-utils}.\nDue to a misdesign in ASS aspect ratio arithmetic, this is necessary to\ncorrectly scale the fonts if the aspect ratio has been changed.\n\n"},{"names":["fontsdir"],"info":"Set a directory path containing fonts that can be used by the filter.\nThese fonts will be used in addition to whatever the font provider uses.\n\n"},{"names":["alpha"],"info":"Process the alpha channel; by default the alpha channel is untouched.\n\n"},{"names":["charenc"],"info":"Set subtitles input character encoding. @code{subtitles} filter only. Only\nuseful if not UTF-8.\n\n"},{"names":["stream_index","si"],"info":"Set subtitles stream index. @code{subtitles} filter only.\n\n"},{"names":["force_style"],"info":"Override default style or script info parameters of the subtitles. It accepts a\nstring containing ASS style format @code{KEY=VALUE} couples separated by \",\".\n\nIf the first key is not specified, it is assumed that the first value\nspecifies the @option{filename}.\n\nFor example, to render the file @file{sub.srt} on top of the input\nvideo, use the command:\n@example\nsubtitles=sub.srt\n@end example\n\nwhich is equivalent to:\n@example\nsubtitles=filename=sub.srt\n@end example\n\nTo render the default subtitles stream from file @file{video.mkv}, use:\n@example\nsubtitles=video.mkv\n@end example\n\nTo render the second subtitles stream from that file, use:\n@example\nsubtitles=video.mkv:si=1\n@end example\n\nTo make the subtitles stream from @file{sub.srt} appear in 80% transparent blue\n@code{DejaVu Serif}, use:\n@example\nsubtitles=sub.srt:force_style='FontName=DejaVu Serif,PrimaryColour=&HCCFF0000'\n@end example\n\n"}]},{"filtergroup":["super2xsai"],"info":"\nScale the input by 2x and smooth using the Super2xSaI (Scale and\nInterpolate) pixel art scaling algorithm.\n\nUseful for enlarging pixel art images without reducing 
sharpness.\n\n","options":[]},{"filtergroup":["swaprect"],"info":"\nSwap two rectangular objects in video.\n\nThis filter accepts the following options:\n\n","options":[{"names":["w"],"info":"Set object width.\n\n"},{"names":["h"],"info":"Set object height.\n\n"},{"names":["x1"],"info":"Set 1st rect x coordinate.\n\n"},{"names":["y1"],"info":"Set 1st rect y coordinate.\n\n"},{"names":["x2"],"info":"Set 2nd rect x coordinate.\n\n"},{"names":["y2"],"info":"Set 2nd rect y coordinate.\n\nAll expressions are evaluated once for each frame.\n\nThe all options are expressions containing the following constants:\n\n@item w\n@item h\nThe input width and height.\n\n@item a\nsame as @var{w} / @var{h}\n\n@item sar\ninput sample aspect ratio\n\n@item dar\ninput display aspect ratio, it is the same as (@var{w} / @var{h}) * @var{sar}\n\n@item n\nThe number of the input frame, starting from 0.\n\n@item t\nThe timestamp expressed in seconds. It's NAN if the input timestamp is unknown.\n\n@item pos\nthe position in the file of the input frame, NAN if unknown\n\n"}]},{"filtergroup":["swapuv"],"info":"Swap U & V plane.\n\n","options":[]},{"filtergroup":["telecine"],"info":"\nApply telecine process to the video.\n\nThis filter accepts the following options:\n\n","options":[{"names":["first_field"],"info":"@item top, t\ntop field first\n@item bottom, b\nbottom field first\nThe default value is @code{top}.\n\n"},{"names":["pattern"],"info":"A string of numbers representing the pulldown pattern you wish to apply.\nThe default value is @code{23}.\n\n@example\nSome typical patterns:\n\nNTSC output (30i):\n27.5p: 32222\n24p: 23 (classic)\n24p: 2332 (preferred)\n20p: 33\n18p: 334\n16p: 3444\n\nPAL output (25i):\n27.5p: 12222\n24p: 222222222223 (\"Euro pulldown\")\n16.67p: 33\n16p: 33333334\n@end example\n\n"}]},{"filtergroup":["threshold"],"info":"\nApply threshold effect to video stream.\n\nThis filter needs four video streams to perform thresholding.\nFirst stream is stream we are 
filtering.\nThe second stream holds the threshold values, the third stream holds the min values,\nand the last, fourth stream holds the max values.\n\nThe filter accepts the following option:\n\n","options":[{"names":["planes"],"info":"Set which planes will be processed, unprocessed planes will be copied.\nBy default value 0xf, all planes will be processed.\n\nFor example, if the first stream pixel's component value is less than the threshold value\nof the pixel component from the 2nd (threshold) stream, the third stream value will be picked,\notherwise the fourth stream pixel component value will be picked.\n\nUsing the color source filter one can perform various types of thresholding:\n\n","examples":"@subsection Examples\n\n@itemize\n@item\nBinary threshold, using gray color as threshold:\n@example\nffmpeg -i 320x240.avi -f lavfi -i color=gray -f lavfi -i color=black -f lavfi -i color=white -lavfi threshold output.avi\n@end example\n\n@item\nInverted binary threshold, using gray color as threshold:\n@example\nffmpeg -i 320x240.avi -f lavfi -i color=gray -f lavfi -i color=white -f lavfi -i color=black -lavfi threshold output.avi\n@end example\n\n@item\nTruncate binary threshold, using gray color as threshold:\n@example\nffmpeg -i 320x240.avi -f lavfi -i color=gray -i 320x240.avi -f lavfi -i color=gray -lavfi threshold output.avi\n@end example\n\n@item\nThreshold to zero, using gray color as threshold:\n@example\nffmpeg -i 320x240.avi -f lavfi -i color=gray -f lavfi -i color=white -i 320x240.avi -lavfi threshold output.avi\n@end example\n\n@item\nInverted threshold to zero, using gray color as threshold:\n@example\nffmpeg -i 320x240.avi -f lavfi -i color=gray -i 320x240.avi -f lavfi -i color=white -lavfi threshold output.avi\n@end example\n@end itemize\n\n"}]},{"filtergroup":["thumbnail"],"info":"Select the most representative frame in a given sequence of consecutive frames.\n\n","options":[{"names":["n"],"info":"Set the frames batch size to analyze; in a set of @var{n} frames, the filter\nwill pick one of 
them, and then handle the next batch of @var{n} frames until\nthe end. Default is @code{100}.\n\nSince the filter keeps track of the whole frames sequence, a bigger @var{n}\nvalue will result in a higher memory usage, so a high value is not recommended.\n\n","examples":"@subsection Examples\n\n@itemize\n@item\nExtract one picture each 50 frames:\n@example\nthumbnail=50\n@end example\n\n@item\nComplete example of a thumbnail creation with @command{ffmpeg}:\n@example\nffmpeg -i in.avi -vf thumbnail,scale=300:200 -frames:v 1 out.png\n@end example\n@end itemize\n\n"}]},{"filtergroup":["tile"],"info":"\nTile several successive frames together.\n\n","options":[{"names":["layout"],"info":"Set the grid size (i.e. the number of lines and columns). For the syntax of\nthis option, check the\n@ref{video size syntax,,\"Video size\" section in the ffmpeg-utils manual,ffmpeg-utils}.\n\n"},{"names":["nb_frames"],"info":"Set the maximum number of frames to render in the given area. It must be less\nthan or equal to @var{w}x@var{h}. The default value is @code{0}, meaning all\nthe area will be used.\n\n"},{"names":["margin"],"info":"Set the outer border margin in pixels.\n\n"},{"names":["padding"],"info":"Set the inner border thickness (i.e. the number of pixels between frames). For\nmore advanced padding options (such as having different values for the edges),\nrefer to the pad video filter.\n\n"},{"names":["color"],"info":"Specify the color of the unused area. 
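For instance (a hypothetical sketch; the grid size and color are chosen only for illustration), a 4x3 grid over a gray background could be requested with:\n@example\ntile=4x3:color=gray\n@end example\n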
For the syntax of this option, check the\n@ref{color syntax,,\"Color\" section in the ffmpeg-utils manual,ffmpeg-utils}.\nThe default value of @var{color} is \"black\".\n\n"},{"names":["overlap"],"info":"Set the number of frames to overlap when tiling several successive frames together.\nThe value must be between @code{0} and @var{nb_frames - 1}.\n\n"},{"names":["init_padding"],"info":"Set the number of frames to initially be empty before displaying first output frame.\nThis controls how soon will one get first output frame.\nThe value must be between @code{0} and @var{nb_frames - 1}.\n\n","examples":"@subsection Examples\n\n@itemize\n@item\nProduce 8x8 PNG tiles of all keyframes (@option{-skip_frame nokey}) in a movie:\n@example\nffmpeg -skip_frame nokey -i file.avi -vf 'scale=128:72,tile=8x8' -an -vsync 0 keyframes%03d.png\n@end example\nThe @option{-vsync 0} is necessary to prevent @command{ffmpeg} from\nduplicating each output frame to accommodate the originally detected frame\nrate.\n\n@item\nDisplay @code{5} pictures in an area of @code{3x2} frames,\nwith @code{7} pixels between them, and @code{2} pixels of initial margin, using\nmixed flat and named options:\n@example\ntile=3x2:nb_frames=5:padding=7:margin=2\n@end example\n@end itemize\n\n"}]},{"filtergroup":["tinterlace"],"info":"\nPerform various types of temporal field interlacing.\n\nFrames are counted starting from 1, so the first input frame is\nconsidered odd.\n\n","options":[{"names":["mode"],"info":"Specify the mode of the interlacing. This option can also be specified\nas a value alone. 
See below for a list of values for this option.\n\nAvailable values are:\n\n@item merge, 0\nMove odd frames into the upper field, even into the lower field,\ngenerating a double height frame at half frame rate.\n@example\n ------> time\nInput:\nFrame 1 Frame 2 Frame 3 Frame 4\n\n11111 22222 33333 44444\n11111 22222 33333 44444\n11111 22222 33333 44444\n11111 22222 33333 44444\n\nOutput:\n11111 33333\n22222 44444\n11111 33333\n22222 44444\n11111 33333\n22222 44444\n11111 33333\n22222 44444\n@end example\n\n@item drop_even, 1\nOnly output odd frames, even frames are dropped, generating a frame with\nunchanged height at half frame rate.\n\n@example\n ------> time\nInput:\nFrame 1 Frame 2 Frame 3 Frame 4\n\n11111 22222 33333 44444\n11111 22222 33333 44444\n11111 22222 33333 44444\n11111 22222 33333 44444\n\nOutput:\n11111 33333\n11111 33333\n11111 33333\n11111 33333\n@end example\n\n@item drop_odd, 2\nOnly output even frames, odd frames are dropped, generating a frame with\nunchanged height at half frame rate.\n\n@example\n ------> time\nInput:\nFrame 1 Frame 2 Frame 3 Frame 4\n\n11111 22222 33333 44444\n11111 22222 33333 44444\n11111 22222 33333 44444\n11111 22222 33333 44444\n\nOutput:\n 22222 44444\n 22222 44444\n 22222 44444\n 22222 44444\n@end example\n\n@item pad, 3\nExpand each frame to full height, but pad alternate lines with black,\ngenerating a frame with double height at the same input frame rate.\n\n@example\n ------> time\nInput:\nFrame 1 Frame 2 Frame 3 Frame 4\n\n11111 22222 33333 44444\n11111 22222 33333 44444\n11111 22222 33333 44444\n11111 22222 33333 44444\n\nOutput:\n11111 ..... 33333 .....\n..... 22222 ..... 44444\n11111 ..... 33333 .....\n..... 22222 ..... 44444\n11111 ..... 33333 .....\n..... 22222 ..... 44444\n11111 ..... 33333 .....\n..... 22222 ..... 
44444\n@end example\n\n\n@item interleave_top, 4\nInterleave the upper field from odd frames with the lower field from\neven frames, generating a frame with unchanged height at half frame rate.\n\n@example\n ------> time\nInput:\nFrame 1 Frame 2 Frame 3 Frame 4\n\n11111<- 22222 33333<- 44444\n11111 22222<- 33333 44444<-\n11111<- 22222 33333<- 44444\n11111 22222<- 33333 44444<-\n\nOutput:\n11111 33333\n22222 44444\n11111 33333\n22222 44444\n@end example\n\n\n@item interleave_bottom, 5\nInterleave the lower field from odd frames with the upper field from\neven frames, generating a frame with unchanged height at half frame rate.\n\n@example\n ------> time\nInput:\nFrame 1 Frame 2 Frame 3 Frame 4\n\n11111 22222<- 33333 44444<-\n11111<- 22222 33333<- 44444\n11111 22222<- 33333 44444<-\n11111<- 22222 33333<- 44444\n\nOutput:\n22222 44444\n11111 33333\n22222 44444\n11111 33333\n@end example\n\n\n@item interlacex2, 6\nDouble frame rate with unchanged height. Frames are inserted each\ncontaining the second temporal field from the previous input frame and\nthe first temporal field from the next input frame. This mode relies on\nthe top_field_first flag. 
Useful for interlaced video displays with no\nfield synchronisation.\n\n@example\n ------> time\nInput:\nFrame 1 Frame 2 Frame 3 Frame 4\n\n11111 22222 33333 44444\n 11111 22222 33333 44444\n11111 22222 33333 44444\n 11111 22222 33333 44444\n\nOutput:\n11111 22222 22222 33333 33333 44444 44444\n 11111 11111 22222 22222 33333 33333 44444\n11111 22222 22222 33333 33333 44444 44444\n 11111 11111 22222 22222 33333 33333 44444\n@end example\n\n\n@item mergex2, 7\nMove odd frames into the upper field, even into the lower field,\ngenerating a double height frame at same frame rate.\n\n@example\n ------> time\nInput:\nFrame 1 Frame 2 Frame 3 Frame 4\n\n11111 22222 33333 44444\n11111 22222 33333 44444\n11111 22222 33333 44444\n11111 22222 33333 44444\n\nOutput:\n11111 33333 33333 55555\n22222 22222 44444 44444\n11111 33333 33333 55555\n22222 22222 44444 44444\n11111 33333 33333 55555\n22222 22222 44444 44444\n11111 33333 33333 55555\n22222 22222 44444 44444\n@end example\n\n\nNumeric values are deprecated but are accepted for backward\ncompatibility reasons.\n\nDefault mode is @code{merge}.\n\n"},{"names":["flags"],"info":"Specify flags influencing the filter process.\n\nAvailable value for @var{flags} is:\n\n@item low_pass_filter, vlpf\nEnable linear vertical low-pass filtering in the filter.\nVertical low-pass filtering is required when creating an interlaced\ndestination from a progressive source which contains high-frequency\nvertical detail. 
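A hedged sketch (the mode is chosen only for illustration):\n@example\ntinterlace=mode=interleave_top:flags=vlpf\n@end example\n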
Filtering will reduce interlace 'twitter' and Moire\npatterning.\n\n@item complex_filter, cvlpf\nEnable complex vertical low-pass filtering.\nThis will reduce interlace 'twitter' and Moire\npatterning slightly less, but will better retain detail and the subjective\nsharpness impression.\n\n\nVertical low-pass filtering can only be enabled for @option{mode}\n@var{interleave_top} and @var{interleave_bottom}.\n\n\n"}]},{"filtergroup":["tmix"],"info":"\nMix successive video frames.\n\nA description of the accepted options follows.\n\n","options":[{"names":["frames"],"info":"The number of successive frames to mix. If unspecified, it defaults to 3.\n\n"},{"names":["weights"],"info":"Specify the weight of each input video frame.\nWeights are separated by spaces. If the number of weights is smaller than the\nnumber of @var{frames}, the last specified weight will be used for all remaining\nunset weights.\n\n"},{"names":["scale"],"info":"Specify the scale; if set, it will be multiplied with the sum\nof each weight multiplied with pixel values to give the final destination\npixel value. By default @var{scale} is auto scaled to the sum of weights.\n\n","examples":"@subsection Examples\n\n@itemize\n@item\nAverage 7 successive frames:\n@example\ntmix=frames=7:weights=\"1 1 1 1 1 1 1\"\n@end example\n\n@item\nApply a simple temporal convolution:\n@example\ntmix=frames=3:weights=\"-1 3 -1\"\n@end example\n\n@item\nSimilar to the above, but only showing temporal differences:\n@example\ntmix=frames=3:weights=\"-1 2 -1\":scale=1\n@end example\n@end itemize\n\n@anchor{tonemap}\n"}]},{"filtergroup":["tonemap"],"info":"Tone map colors from different dynamic ranges.\n\nThis filter expects data in single precision floating point, as it needs to\noperate on (and can output) out-of-range values. 
Another filter, such as\n@ref{zscale}, is needed to convert the resulting frame to a usable format.\n\nThe tonemapping algorithms implemented only work on linear light, so input\ndata should be linearized beforehand (and possibly correctly tagged).\n\n@example\nffmpeg -i INPUT -vf zscale=transfer=linear,tonemap=clip,zscale=transfer=bt709,format=yuv420p OUTPUT\n@end example\n\n@subsection Options\nThe filter accepts the following options.\n\n","options":[{"names":["tonemap"],"info":"Set the tone map algorithm to use.\n\nPossible values are:\n@item none\nDo not apply any tone map, only desaturate overbright pixels.\n\n@item clip\nHard-clip any out-of-range values. Use it for perfect color accuracy for\nin-range values, while distorting out-of-range values.\n\n@item linear\nStretch the entire reference gamut to a linear multiple of the display.\n\n@item gamma\nFit a logarithmic transfer between the tone curves.\n\n@item reinhard\nPreserve overall image brightness with a simple curve, using nonlinear\ncontrast, which results in flattening details and degrading color accuracy.\n\n@item hable\nPreserve both dark and bright details better than @var{reinhard}, at the cost\nof slightly darkening everything. Use it when detail preservation is more\nimportant than color and brightness accuracy.\n\n@item mobius\nSmoothly map out-of-range values, while retaining contrast and colors for\nin-range material as much as possible. 
Use it when color accuracy is more\nimportant than detail preservation.\n\nDefault is none.\n\n"},{"names":["param"],"info":"Tune the tone mapping algorithm.\n\nThis affects the following algorithms:\n@item none\nIgnored.\n\n@item linear\nSpecifies the scale factor to use while stretching.\nDefault to 1.0.\n\n@item gamma\nSpecifies the exponent of the function.\nDefault to 1.8.\n\n@item clip\nSpecify an extra linear coefficient to multiply into the signal before clipping.\nDefault to 1.0.\n\n@item reinhard\nSpecify the local contrast coefficient at the display peak.\nDefault to 0.5, which means that in-gamut values will be about half as bright\nas when clipping.\n\n@item hable\nIgnored.\n\n@item mobius\nSpecify the transition point from linear to mobius transform. Every value\nbelow this point is guaranteed to be mapped 1:1. The higher the value, the\nmore accurate the result will be, at the cost of losing bright details.\nDefault to 0.3, which due to the steep initial slope still preserves in-range\ncolors fairly accurately.\n\n"},{"names":["desat"],"info":"Apply desaturation for highlights that exceed this level of brightness. The\nhigher the parameter, the more color information will be preserved. This\nsetting helps prevent unnaturally blown-out colors for super-highlights, by\n(smoothly) turning into white instead. This makes images feel more natural,\nat the cost of reducing information about out-of-range colors.\n\nThe default of 2.0 is somewhat conservative and will mostly just apply to\nskies or directly sunlit surfaces. A setting of 0.0 disables this option.\n\nThis option works only if the input frame has a supported color tag.\n\n"},{"names":["peak"],"info":"Override signal/nominal/reference peak with this value. 
Useful when the\nembedded peak information in display metadata is not reliable or when tone\nmapping from a lower range to a higher range.\n\n"}]},{"filtergroup":["tpad"],"info":"\nTemporally pad video frames.\n\n","options":[{"names":["start"],"info":"Specify the number of delay frames to add before the input video stream.\n\n"},{"names":["stop"],"info":"Specify the number of padding frames to add after the input video stream.\nSet to -1 to pad indefinitely.\n\n"},{"names":["start_mode"],"info":"Set the kind of frames added to the beginning of the stream.\nCan be either @var{add} or @var{clone}.\nWith @var{add}, solid-color frames are added.\nWith @var{clone}, frames are clones of the first frame.\n\n"},{"names":["stop_mode"],"info":"Set the kind of frames added to the end of the stream.\nCan be either @var{add} or @var{clone}.\nWith @var{add}, solid-color frames are added.\nWith @var{clone}, frames are clones of the last frame.\n\n"},{"names":["start_duration","stop_duration"],"info":"Specify the duration of the start/stop delay. See\n@ref{time duration syntax,,the Time duration section in the ffmpeg-utils(1) manual,ffmpeg-utils}\nfor the accepted syntax.\nThese options override @var{start} and @var{stop}.\n\n"},{"names":["color"],"info":"Specify the color of the padded area. For the syntax of this option,\ncheck the @ref{color syntax,,\"Color\" section in the ffmpeg-utils\nmanual,ffmpeg-utils}.\n\nThe default value of @var{color} is \"black\".\n\n@anchor{transpose}\n"}]},{"filtergroup":["transpose"],"info":"\nTranspose rows with columns in the input video and optionally flip it.\n\nIt accepts the following parameters:\n\n","options":[{"names":["dir"],"info":"Specify the transposition direction.\n\nCan assume the following values:\n@item 0, 4, cclock_flip\nRotate by 90 degrees counterclockwise and vertically flip (default), that is:\n@example\nL.R L.l\n. . -> . .\nl.r R.r\n@end example\n\n@item 1, 5, clock\nRotate by 90 degrees clockwise, that is:\n@example\nL.R l.L\n. . -> . 
.\nl.r r.R\n@end example\n\n@item 2, 6, cclock\nRotate by 90 degrees counterclockwise, that is:\n@example\nL.R R.r\n. . -> . .\nl.r L.l\n@end example\n\n@item 3, 7, clock_flip\nRotate by 90 degrees clockwise and vertically flip, that is:\n@example\nL.R r.R\n. . -> . .\nl.r l.L\n@end example\n\nFor values between 4-7, the transposition is only done if the input\nvideo geometry is portrait and not landscape. These values are\ndeprecated, the @code{passthrough} option should be used instead.\n\nNumerical values are deprecated, and should be dropped in favor of\nsymbolic constants.\n\n"},{"names":["passthrough"],"info":"Do not apply the transposition if the input geometry matches the one\nspecified by the specified value. It accepts the following values:\n@item none\nAlways apply transposition.\n@item portrait\nPreserve portrait geometry (when @var{height} >= @var{width}).\n@item landscape\nPreserve landscape geometry (when @var{width} >= @var{height}).\n\nDefault value is @code{none}.\n\nFor example to rotate by 90 degrees clockwise and preserve portrait\nlayout:\n@example\ntranspose=dir=1:passthrough=portrait\n@end example\n\nThe command above can also be specified as:\n@example\ntranspose=1:portrait\n@end example\n\n"}]},{"filtergroup":["transpose_npp"],"info":"\nTranspose rows with columns in the input video and optionally flip it.\nFor more in depth examples see the @ref{transpose} video filter, which shares mostly the same options.\n\nIt accepts the following parameters:\n\n","options":[{"names":["dir"],"info":"Specify the transposition direction.\n\nCan assume the following values:\n@item cclock_flip\nRotate by 90 degrees counterclockwise and vertically flip. 
(default)\n\n@item clock\nRotate by 90 degrees clockwise.\n\n@item cclock\nRotate by 90 degrees counterclockwise.\n\n@item clock_flip\nRotate by 90 degrees clockwise and vertically flip.\n\n"},{"names":["passthrough"],"info":"Do not apply the transposition if the input geometry matches the one\nspecified by the specified value. It accepts the following values:\n@item none\nAlways apply transposition. (default)\n@item portrait\nPreserve portrait geometry (when @var{height} >= @var{width}).\n@item landscape\nPreserve landscape geometry (when @var{width} >= @var{height}).\n\n\n"}]},{"filtergroup":["trim"],"info":"Trim the input so that the output contains one continuous subpart of the input.\n\nIt accepts the following parameters:\n","options":[{"names":["start"],"info":"Specify the time of the start of the kept section, i.e. the frame with the\ntimestamp @var{start} will be the first frame in the output.\n\n"},{"names":["end"],"info":"Specify the time of the first frame that will be dropped, i.e. 
the frame\nimmediately preceding the one with the timestamp @var{end} will be the last\nframe in the output.\n\n"},{"names":["start_pts"],"info":"This is the same as @var{start}, except this option sets the start timestamp\nin timebase units instead of seconds.\n\n"},{"names":["end_pts"],"info":"This is the same as @var{end}, except this option sets the end timestamp\nin timebase units instead of seconds.\n\n"},{"names":["duration"],"info":"The maximum duration of the output in seconds.\n\n"},{"names":["start_frame"],"info":"The number of the first frame that should be passed to the output.\n\n"},{"names":["end_frame"],"info":"The number of the first frame that should be dropped.\n\n@option{start}, @option{end}, and @option{duration} are expressed as time\nduration specifications; see\n@ref{time duration syntax,,the Time duration section in the ffmpeg-utils(1) manual,ffmpeg-utils}\nfor the accepted syntax.\n\nNote that the first two sets of the start/end options and the @option{duration}\noption look at the frame timestamp, while the _frame variants simply count the\nframes that pass through the filter. Also note that this filter does not modify\nthe timestamps. If you wish for the output timestamps to start at zero, insert a\nsetpts filter after the trim filter.\n\nIf multiple start or end options are set, this filter tries to be greedy and\nkeep all the frames that match at least one of the specified constraints. To keep\nonly the part that matches all the constraints at once, chain multiple trim\nfilters.\n\nThe defaults are such that all the input is kept. 
So it is possible to set e.g.\njust the end values to keep everything before the specified time.\n\nExamples:\n@itemize\n"},{"names":[],"info":"Drop everything except the second minute of input:\n@example\nffmpeg -i INPUT -vf trim=60:120\n@end example\n\n"},{"names":[],"info":"Keep only the first second:\n@example\nffmpeg -i INPUT -vf trim=duration=1\n@end example\n\n@end itemize\n\n"}]},{"filtergroup":["unpremultiply"],"info":"Apply alpha unpremultiply effect to input video stream using first plane\nof second stream as alpha.\n\nBoth streams must have same dimensions and same pixel format.\n\nThe filter accepts the following option:\n\n","options":[{"names":["planes"],"info":"Set which planes will be processed, unprocessed planes will be copied.\nBy default value 0xf, all planes will be processed.\n\nIf the format has 1 or 2 components, then luma is bit 0.\nIf the format has 3 or 4 components:\nfor RGB formats bit 0 is green, bit 1 is blue and bit 2 is red;\nfor YUV formats bit 0 is luma, bit 1 is chroma-U and bit 2 is chroma-V.\nIf present, the alpha channel is always the last bit.\n\n"},{"names":["inplace"],"info":"Do not require 2nd input for processing, instead use alpha plane from input stream.\n\n@anchor{unsharp}\n"}]},{"filtergroup":["unsharp"],"info":"\nSharpen or blur the input video.\n\nIt accepts the following parameters:\n\n","options":[{"names":["luma_msize_x","lx"],"info":"Set the luma matrix horizontal size. It must be an odd integer between\n3 and 23. The default value is 5.\n\n"},{"names":["luma_msize_y","ly"],"info":"Set the luma matrix vertical size. It must be an odd integer between 3\nand 23. The default value is 5.\n\n"},{"names":["luma_amount","la"],"info":"Set the luma effect strength. 
It must be a floating point number; reasonable\nvalues lie between -1.5 and 1.5.\n\nNegative values will blur the input video, while positive values will\nsharpen it; a value of zero will disable the effect.\n\nDefault value is 1.0.\n\n"},{"names":["chroma_msize_x","cx"],"info":"Set the chroma matrix horizontal size. It must be an odd integer\nbetween 3 and 23. The default value is 5.\n\n"},{"names":["chroma_msize_y","cy"],"info":"Set the chroma matrix vertical size. It must be an odd integer\nbetween 3 and 23. The default value is 5.\n\n"},{"names":["chroma_amount","ca"],"info":"Set the chroma effect strength. It must be a floating point number; reasonable\nvalues lie between -1.5 and 1.5.\n\nNegative values will blur the input video, while positive values will\nsharpen it; a value of zero will disable the effect.\n\nDefault value is 0.0.\n\n\nAll parameters are optional and default to the equivalent of the\nstring '5:5:1.0:5:5:0.0'.\n\n","examples":"@subsection Examples\n\n@itemize\n@item\nApply a strong luma sharpen effect:\n@example\nunsharp=luma_msize_x=7:luma_msize_y=7:luma_amount=2.5\n@end example\n\n@item\nApply a strong blur to both luma and chroma parameters:\n@example\nunsharp=7:7:-2:7:7:-2\n@end example\n@end itemize\n\n"}]},{"filtergroup":["uspp"],"info":"\nApply an ultra slow/simple postprocessing filter that compresses and\ndecompresses the image at several (or - in the case of @option{quality} level\n@code{8} - all) shifts and averages the results.\n\nThis differs from the behavior of spp in that uspp actually encodes and\ndecodes each case with libavcodec Snow, whereas spp uses a simplified\nintra-only 8x8 DCT similar to MJPEG.\n\n","options":[{"names":["quality"],"info":"Set quality. This option defines the number of levels for averaging. It accepts\nan integer in the range 0-8. If set to @code{0}, the filter will have no\neffect. A value of @code{8} means the highest quality. 
For each increment of\nthat value the speed drops by a factor of approximately 2. Default value is\n@code{3}.\n\n"},{"names":["qp"],"info":"Force a constant quantization parameter. If not set, the filter will use the QP\nfrom the video stream (if available).\n\n"}]},{"filtergroup":["v360"],"info":"\nConvert 360 videos between various formats.\n\n","options":[{"names":["input"],"info":""},{"names":["output"],"info":"Set format of the input/output video.\n\nAvailable formats:\n\n\n@item e\n@item equirect\nEquirectangular projection.\n\n@item c3x2\n@item c6x1\n@item c1x6\nCubemap with 3x2/6x1/1x6 layout.\n\nFormat specific options:\n\n@table @option\n@item in_pad\n@item out_pad\nSet padding proportion for the input/output cubemap. Values in decimals.\n\nExample values:\n@table @samp\n@item 0\nNo padding.\n@item 0.01\n1% of face is padding. For example, with 1920x1280 resolution face size would be 640x640 and padding would be 3 pixels from each side. (640 * 0.01 = 6 pixels)\n\nDefault value is @b{@samp{0}}.\n\n"},{"names":["fin_pad"],"info":""},{"names":["fout_pad"],"info":"Set fixed padding for the input/output cubemap. Values in pixels.\n\nDefault value is @b{@samp{0}}. If greater than zero it overrides other padding options.\n\n"},{"names":["in_forder"],"info":""},{"names":["out_forder"],"info":"Set order of faces for the input/output cubemap. Choose one direction for each position.\n\nDesignation of directions:\n@item r\nright\n@item l\nleft\n@item u\nup\n@item d\ndown\n@item f\nforward\n@item b\nback\n\nDefault value is @b{@samp{rludfb}}.\n\n"},{"names":["in_frot"],"info":""},{"names":["out_frot"],"info":"Set rotation of faces for the input/output cubemap. 
Choose one angle for each position.\n\nDesignation of angles:\n@item 0\n0 degrees clockwise\n@item 1\n90 degrees clockwise\n@item 2\n180 degrees clockwise\n@item 3\n270 degrees clockwise\n\nDefault value is @b{@samp{000000}}.\n\n"},{"names":["eac"],"info":"Equi-Angular Cubemap.\n\n"},{"names":["flat"],"info":""},{"names":["gnomonic"],"info":""},{"names":["rectilinear"],"info":"Regular video. @i{(output only)}\n\nFormat specific options:\n@item h_fov\n@item v_fov\n@item d_fov\nSet horizontal/vertical/diagonal field of view. Values in degrees.\n\nIf diagonal field of view is set it overrides horizontal and vertical field of view.\n\n"},{"names":["dfisheye"],"info":"Dual fisheye.\n\nFormat specific options:\n@item in_pad\n@item out_pad\nSet padding proportion. Values in decimals.\n\nExample values:\n@table @samp\n@item 0\nNo padding.\n@item 0.01\n1% padding.\n\nDefault value is @b{@samp{0}}.\n\n"},{"names":["barrel"],"info":""},{"names":["fb"],"info":"Facebook's 360 format.\n\n"},{"names":["sg"],"info":"Stereographic format.\n\nFormat specific options:\n@item h_fov\n@item v_fov\n@item d_fov\nSet horizontal/vertical/diagonal field of view. 
Values in degrees.\n\nIf diagonal field of view is set it overrides horizontal and vertical field of view.\n\n"},{"names":["mercator"],"info":"Mercator format.\n\n"},{"names":["ball"],"info":"Ball format, gives significant distortion toward the back.\n\n"},{"names":["hammer"],"info":"Hammer-Aitoff map projection format.\n\n"},{"names":["sinusoidal"],"info":"Sinusoidal map projection format.\n\n\n"},{"names":["interp"],"info":"Set interpolation method.@*\n@i{Note: more complex interpolation methods require much more memory to run.}\n\nAvailable methods:\n\n@item near\n@item nearest\nNearest neighbour.\n@item line\n@item linear\nBilinear interpolation.\n@item cube\n@item cubic\nBicubic interpolation.\n@item lanc\n@item lanczos\nLanczos interpolation.\n\nDefault value is @b{@samp{line}}.\n\n"},{"names":["w"],"info":""},{"names":["h"],"info":"Set the output video resolution.\n\nDefault resolution depends on formats.\n\n"},{"names":["in_stereo"],"info":""},{"names":["out_stereo"],"info":"Set the input/output stereo format.\n\n@item 2d\n2D mono\n@item sbs\nSide by side\n@item tb\nTop bottom\n\nDefault value is @b{@samp{2d}} for input and output format.\n\n"},{"names":["yaw"],"info":""},{"names":["pitch"],"info":""},{"names":["roll"],"info":"Set rotation for the output video. Values in degrees.\n\n"},{"names":["rorder"],"info":"Set rotation order for the output video. Choose one item for each position.\n\n@item y, Y\nyaw\n@item p, P\npitch\n@item r, R\nroll\n\nDefault value is @b{@samp{ypr}}.\n\n"},{"names":["h_flip"],"info":""},{"names":["v_flip"],"info":""},{"names":["d_flip"],"info":"Flip the output video horizontally(swaps left-right)/vertically(swaps up-down)/in-depth(swaps back-forward). Boolean values.\n\n"},{"names":["ih_flip"],"info":""},{"names":["iv_flip"],"info":"Set if input video is flipped horizontally/vertically. Boolean values.\n\n"},{"names":["in_trans"],"info":"Set if input video is transposed. 
Boolean value, by default disabled.\n\n"},{"names":["out_trans"],"info":"Set if output video needs to be transposed. Boolean value, by default disabled.\n\n\n","examples":"@subsection Examples\n\n@itemize\n@item\nConvert equirectangular video to cubemap with 3x2 layout and 1% padding using bicubic interpolation:\n@example\nffmpeg -i input.mkv -vf v360=e:c3x2:cubic:out_pad=0.01 output.mkv\n@end example\n@item\nExtract back view of Equi-Angular Cubemap:\n@example\nffmpeg -i input.mkv -vf v360=eac:flat:yaw=180 output.mkv\n@end example\n@item\nConvert transposed and horizontally flipped Equi-Angular Cubemap in side-by-side stereo format to equirectangular top-bottom stereo format:\n@example\nv360=eac:equirect:in_stereo=sbs:in_trans=1:ih_flip=1:out_stereo=tb\n@end example\n@end itemize\n\n"}]},{"filtergroup":["vaguedenoiser"],"info":"\nApply a wavelet based denoiser.\n\nIt transforms each frame from the video input into the wavelet domain,\nusing Cohen-Daubechies-Feauveau 9/7. Then it applies some filtering to\nthe obtained coefficients. It does an inverse wavelet transform after.\nDue to wavelet properties, it should give a nice smoothed result, and\nreduced noise, without blurring picture features.\n\nThis filter accepts the following options:\n\n","options":[{"names":["threshold"],"info":"The filtering strength. The higher, the more filtered the video will be.\nHard thresholding can use a higher threshold than soft thresholding\nbefore the video looks overfiltered. Default value is 2.\n\n"},{"names":["method"],"info":"The filtering method the filter will use.\n\nIt accepts the following values:\n@item hard\nAll values under the threshold will be zeroed.\n\n@item soft\nAll values under the threshold will be zeroed. 
All values above will be\nreduced by the threshold.\n\n@item garrote\nScales or nullifies coefficients - intermediary between (more) soft and\n(less) hard thresholding.\n\nDefault is garrote.\n\n"},{"names":["nsteps"],"info":"Number of times the wavelet will decompose the picture. The picture can't\nbe decomposed beyond a particular point (typically, 8 for a 640x480\nframe - as 2^9 = 512 > 480). Valid values are integers between 1 and 32. Default value is 6.\n\n"},{"names":["percent"],"info":"Partial or full denoising (limited coefficients shrinking), from 0 to 100. Default value is 85.\n\n"},{"names":["planes"],"info":"A list of the planes to process. By default all planes are processed.\n\n"}]},{"filtergroup":["vectorscope"],"info":"\nDisplay 2 color component values in a two dimensional graph (which is called\na vectorscope).\n\nThis filter accepts the following options:\n\n","options":[{"names":["mode","m"],"info":"Set vectorscope mode.\n\nIt accepts the following values:\n@item gray\nGray values are displayed on the graph; higher brightness means more pixels\nhave the same component color value at that location in the graph. This is the\ndefault mode.\n\n@item color\nGray values are displayed on the graph. Surrounding pixel values which are not\npresent in the video frame are drawn in a gradient of the 2 color components\nset by the options @code{x} and @code{y}. The 3rd color component is static.\n\n@item color2\nActual color component values present in the video frame are displayed on the\ngraph.\n\n@item color3\nSimilar to color2, but a higher frequency of the same @code{x} and @code{y}\nvalues on the graph increases the value of another color component, which is\nluminance by default values of @code{x} and @code{y}.\n\n@item color4\nActual colors present in the video frame are displayed on the graph. If two\ndifferent colors map to the same position on the graph then the color with the\nhigher value of the component not present in the graph is picked.\n\n@item color5\nGray values are displayed on the graph. 
Similar to @code{color} but with 3rd color\ncomponent picked from radial gradient.\n\n"},{"names":["x"],"info":"Set which color component will be represented on X-axis. Default is @code{1}.\n\n"},{"names":["y"],"info":"Set which color component will be represented on Y-axis. Default is @code{2}.\n\n"},{"names":["intensity","i"],"info":"Set intensity, used by modes: gray, color, color3 and color5 for increasing brightness\nof color component which represents frequency of (X, Y) location in graph.\n\n"},{"names":["envelope","e"],"info":"@item none\nNo envelope, this is default.\n\n@item instant\nInstant envelope, even darkest single pixel will be clearly highlighted.\n\n@item peak\nHold maximum and minimum values presented in graph over time. This way you\ncan still spot out of range values without constantly looking at vectorscope.\n\n@item peak+instant\nPeak and instant envelope combined together.\n\n"},{"names":["graticule","g"],"info":"Set what kind of graticule to draw.\n@item none\n@item green\n@item color\n\n"},{"names":["opacity","o"],"info":"Set graticule opacity.\n\n"},{"names":["flags","f"],"info":"Set graticule flags.\n\n@item white\nDraw graticule for white point.\n\n@item black\nDraw graticule for black point.\n\n@item name\nDraw color points short names.\n\n"},{"names":["bgopacity","b"],"info":"Set background opacity.\n\n"},{"names":["lthreshold","l"],"info":"Set low threshold for color component not represented on X or Y axis.\nValues lower than this value will be ignored. Default is 0.\nNote this value is multiplied with actual max possible value one pixel component\ncan have. So for 8-bit input and low threshold value of 0.1 actual threshold\nis 0.1 * 255 = 25.\n\n"},{"names":["hthreshold","h"],"info":"Set high threshold for color component not represented on X or Y axis.\nValues higher than this value will be ignored. Default is 1.\nNote this value is multiplied with actual max possible value one pixel component\ncan have. 
So for 8-bit input and high threshold value of 0.9 actual threshold\nis 0.9 * 255 = 230.\n\n"},{"names":["colorspace","c"],"info":"Set what kind of colorspace to use when drawing graticule.\n@item auto\n@item 601\n@item 709\nDefault is auto.\n\n@anchor{vidstabdetect}\n"}]},{"filtergroup":["vidstabdetect"],"info":"\nAnalyze video stabilization/deshaking. Perform pass 1 of 2, see\n@ref{vidstabtransform} for pass 2.\n\nThis filter generates a file with relative translation and rotation\ntransform information about subsequent frames, which is then used by\nthe @ref{vidstabtransform} filter.\n\nTo enable compilation of this filter you need to configure FFmpeg with\n@code{--enable-libvidstab}.\n\nThis filter accepts the following options:\n\n","options":[{"names":["result"],"info":"Set the path to the file used to write the transforms information.\nDefault value is @file{transforms.trf}.\n\n"},{"names":["shakiness"],"info":"Set how shaky the video is and how quick the camera is. It accepts an\ninteger in the range 1-10, a value of 1 means little shakiness, a\nvalue of 10 means strong shakiness. Default value is 5.\n\n"},{"names":["accuracy"],"info":"Set the accuracy of the detection process. It must be a value in the\nrange 1-15. A value of 1 means low accuracy, a value of 15 means high\naccuracy. Default value is 15.\n\n"},{"names":["stepsize"],"info":"Set stepsize of the search process. The region around minimum is\nscanned with 1 pixel resolution. Default value is 6.\n\n"},{"names":["mincontrast"],"info":"Set minimum contrast. Below this value a local measurement field is\ndiscarded. Must be a floating point value in the range 0-1. Default\nvalue is 0.3.\n\n"},{"names":["tripod"],"info":"Set reference frame number for tripod mode.\n\nIf enabled, the motion of the frames is compared to a reference frame\nin the filtered stream, identified by the specified number. 
The idea\nis to compensate all movements in a more-or-less static scene and keep\nthe camera view absolutely still.\n\nIf set to 0, it is disabled. The frames are counted starting from 1.\n\n"},{"names":["show"],"info":"Show fields and transforms in the resulting frames. It accepts an\ninteger in the range 0-2. Default value is 0, which disables any\nvisualization.\n\n","examples":"@subsection Examples\n\n@itemize\n@item\nUse default values:\n@example\nvidstabdetect\n@end example\n\n@item\nAnalyze strongly shaky movie and put the results in file\n@file{mytransforms.trf}:\n@example\nvidstabdetect=shakiness=10:accuracy=15:result=\"mytransforms.trf\"\n@end example\n\n@item\nVisualize the result of internal transformations in the resulting\nvideo:\n@example\nvidstabdetect=show=1\n@end example\n\n@item\nAnalyze a video with medium shakiness using @command{ffmpeg}:\n@example\nffmpeg -i input -vf vidstabdetect=shakiness=5:show=1 dummy.avi\n@end example\n@end itemize\n\n@anchor{vidstabtransform}\n"}]},{"filtergroup":["vidstabtransform"],"info":"\nVideo stabilization/deshaking: pass 2 of 2,\nsee @ref{vidstabdetect} for pass 1.\n\nRead a file with transform information for each frame and\napply/compensate them. Together with the @ref{vidstabdetect}\nfilter this can be used to deshake videos. See also\n@url{http://public.hronopik.de/vid.stab}. It is important to also use\nthe @ref{unsharp} filter, see below.\n\nTo enable compilation of this filter you need to configure FFmpeg with\n@code{--enable-libvidstab}.\n\n@subsection Options\n\n","options":[{"names":["input"],"info":"Set path to the file used to read the transforms. Default value is\n@file{transforms.trf}.\n\n"},{"names":["smoothing"],"info":"Set the number of frames (value*2 + 1) used for lowpass filtering the\ncamera movements. Default value is 10.\n\nFor example a number of 10 means that 21 frames are used (10 in the\npast and 10 in the future) to smoothen the motion in the video. 
A\nlarger value leads to a smoother video, but limits the acceleration of\nthe camera (pan/tilt movements). 0 is a special case where a static\ncamera is simulated.\n\n"},{"names":["optalgo"],"info":"Set the camera path optimization algorithm.\n\nAccepted values are:\n@item gauss\ngaussian kernel low-pass filter on camera motion (default)\n@item avg\naveraging on transformations\n\n"},{"names":["maxshift"],"info":"Set maximal number of pixels to translate frames. Default value is -1,\nmeaning no limit.\n\n"},{"names":["maxangle"],"info":"Set maximal angle in radians (degree*PI/180) to rotate frames. Default\nvalue is -1, meaning no limit.\n\n"},{"names":["crop"],"info":"Specify how to deal with borders that may be visible due to movement\ncompensation.\n\nAvailable values are:\n@item keep\nkeep image information from previous frame (default)\n@item black\nfill the border black\n\n"},{"names":["invert"],"info":"Invert transforms if set to 1. Default value is 0.\n\n"},{"names":["relative"],"info":"Consider transforms as relative to previous frame if set to 1,\nabsolute if set to 0. Default value is 0.\n\n"},{"names":["zoom"],"info":"Set percentage to zoom. A positive value will result in a zoom-in\neffect, a negative value in a zoom-out effect. Default value is 0 (no\nzoom).\n\n"},{"names":["optzoom"],"info":"Set optimal zooming to avoid borders.\n\nAccepted values are:\n@item 0\ndisabled\n@item 1\noptimal static zoom value is determined (only very strong movements\nwill lead to visible borders) (default)\n@item 2\noptimal adaptive zoom value is determined (no borders will be\nvisible), see @option{zoomspeed}\n\nNote that the value given at zoom is added to the one calculated here.\n\n"},{"names":["zoomspeed"],"info":"Set percent to zoom maximally each frame (enabled when\n@option{optzoom} is set to 2). 
Range is from 0 to 5, default value is\n0.25.\n\n"},{"names":["interpol"],"info":"Specify the type of interpolation.\n\nAvailable values are:\n@item no\nno interpolation\n@item linear\nlinear only horizontal\n@item bilinear\nlinear in both directions (default)\n@item bicubic\ncubic in both directions (slow)\n\n"},{"names":["tripod"],"info":"Enable virtual tripod mode if set to 1, which is equivalent to\n@code{relative=0:smoothing=0}. Default value is 0.\n\nAlso use the @code{tripod} option of @ref{vidstabdetect}.\n\n"},{"names":["debug"],"info":"Increase log verbosity if set to 1. Also the detected global motions\nare written to the temporary file @file{global_motions.trf}. Default\nvalue is 0.\n\n","examples":"@subsection Examples\n\n@itemize\n@item\nUse @command{ffmpeg} for a typical stabilization with default values:\n@example\nffmpeg -i inp.mpeg -vf vidstabtransform,unsharp=5:5:0.8:3:3:0.4 inp_stabilized.mpeg\n@end example\n\nNote the use of the @ref{unsharp} filter, which is always recommended.\n\n@item\nZoom in a bit more and load transform data from a given file:\n@example\nvidstabtransform=zoom=5:input=\"mytransforms.trf\"\n@end example\n\n@item\nSmoothen the video even more:\n@example\nvidstabtransform=smoothing=30\n@end example\n@end itemize\n\n"}]},{"filtergroup":["vflip"],"info":"\nFlip the input video vertically.\n\nFor example, to vertically flip a video with @command{ffmpeg}:\n@example\nffmpeg -i in.avi -vf \"vflip\" out.avi\n@end example\n\n","options":[]},{"filtergroup":["vfrdet"],"info":"\nDetect variable frame rate video.\n\nThis filter tries to detect if the input is variable or constant frame rate.\n\nAt the end it will output the number of frames detected as having variable\ndelta pts, and the number with constant delta pts.\nIf there were frames with variable delta, then it will also show the min, max\nand average delta encountered.\n\n","options":[]},{"filtergroup":["vibrance"],"info":"\nBoost or alter saturation.\n\n","options":[{"names":["intensity"],"info":"Set 
strength of boost if positive value or strength of alter if negative value.\nDefault is 0. Allowed range is from -2 to 2.\n\n"},{"names":["rbal"],"info":"Set the red balance. Default is 1. Allowed range is from -10 to 10.\n\n"},{"names":["gbal"],"info":"Set the green balance. Default is 1. Allowed range is from -10 to 10.\n\n"},{"names":["bbal"],"info":"Set the blue balance. Default is 1. Allowed range is from -10 to 10.\n\n"},{"names":["rlum"],"info":"Set the red luma coefficient.\n\n"},{"names":["glum"],"info":"Set the green luma coefficient.\n\n"},{"names":["blum"],"info":"Set the blue luma coefficient.\n\n"},{"names":["alternate"],"info":"If @code{intensity} is negative and this is set to 1, colors will change,\notherwise colors will be less saturated, more towards gray.\n\n@anchor{vignette}\n"}]},{"filtergroup":["vignette"],"info":"\nMake or reverse a natural vignetting effect.\n\n","options":[{"names":["angle","a"],"info":"Set lens angle expression as a number of radians.\n\nThe value is clipped in the @code{[0,PI/2]} range.\n\nDefault value: @code{\"PI/5\"}\n\n"},{"names":["x0"],"info":""},{"names":["y0"],"info":"Set center coordinates expressions. Respectively @code{\"w/2\"} and @code{\"h/2\"}\nby default.\n\n"},{"names":["mode"],"info":"Set forward/backward mode.\n\nAvailable modes are:\n@item forward\nThe larger the distance from the central point, the darker the image becomes.\n\n@item backward\nThe larger the distance from the central point, the brighter the image becomes.\nThis can be used to reverse a vignette effect, though there is no automatic\ndetection to extract the lens @option{angle} and other settings (yet). 
It can\nalso be used to create a burning effect.\n\nDefault value is @samp{forward}.\n\n"},{"names":["eval"],"info":"Set evaluation mode for the expressions (@option{angle}, @option{x0}, @option{y0}).\n\nIt accepts the following values:\n@item init\nEvaluate expressions only once during the filter initialization.\n\n@item frame\nEvaluate expressions for each incoming frame. This is way slower than the\n@samp{init} mode since it requires all the scalers to be re-computed, but it\nallows advanced dynamic expressions.\n\nDefault value is @samp{init}.\n\n"},{"names":["dither"],"info":"Set dithering to reduce the circular banding effects. Default is @code{1}\n(enabled).\n\n"},{"names":["aspect"],"info":"Set vignette aspect. This setting allows one to adjust the shape of the vignette.\nSetting this value to the SAR of the input will make a rectangular vignetting\nfollowing the dimensions of the video.\n\nDefault is @code{1/1}.\n\n@subsection Expressions\n\nThe @option{alpha}, @option{x0} and @option{y0} expressions can contain the\nfollowing parameters.\n\n@item w\n@item h\ninput width and height\n\n@item n\nthe number of input frame, starting from 0\n\n@item pts\nthe PTS (Presentation TimeStamp) time of the filtered video frame, expressed in\n@var{TB} units, NAN if undefined\n\n@item r\nframe rate of the input video, NAN if the input frame rate is unknown\n\n@item t\nthe PTS (Presentation TimeStamp) of the filtered video frame,\nexpressed in seconds, NAN if undefined\n\n@item tb\ntime base of the input video\n\n\n","examples":"@subsection Examples\n\n@itemize\n@item\nApply simple strong vignetting effect:\n@example\nvignette=PI/4\n@end example\n\n@item\nMake a flickering vignetting:\n@example\nvignette='PI/4+random(1)*PI/50':eval=frame\n@end example\n\n@end itemize\n\n"}]},{"filtergroup":["vmafmotion"],"info":"\nObtain the average VMAF motion score of a video.\nIt is one of the component metrics of VMAF.\n\nThe obtained average motion score is printed through the 
logging system.\n\n","options":[{"names":["stats_file"],"info":"If specified, the filter will use the named file to save the motion score of\neach frame with respect to the previous frame.\nWhen filename equals \"-\" the data is sent to standard output.\n\nExample:\n@example\nffmpeg -i ref.mpg -vf vmafmotion -f null -\n@end example\n\n"}]},{"filtergroup":["vstack"],"info":"Stack input videos vertically.\n\nAll streams must be of same pixel format and of same width.\n\nNote that this filter is faster than using @ref{overlay} and @ref{pad} filter\nto create same output.\n\n","options":[{"names":["inputs"],"info":"Set number of input streams. Default is 2.\n\n"},{"names":["shortest"],"info":"If set to 1, force the output to terminate when the shortest input\nterminates. Default value is 0.\n\n"}]},{"filtergroup":["w3fdif"],"info":"\nDeinterlace the input video (\"w3fdif\" stands for \"Weston 3 Field\nDeinterlacing Filter\").\n\nBased on the process described by Martin Weston for BBC R&D, and\nimplemented based on the de-interlace algorithm written by Jim\nEasterbrook for BBC R&D, the Weston 3 field deinterlacing filter\nuses filter coefficients calculated by BBC R&D.\n\nThis filter uses field-dominance information in frame to decide which\nof each pair of fields to place first in the output.\nIf it gets it wrong use @ref{setfield} filter before @code{w3fdif} filter.\n\nThere are two sets of filter coefficients, so called \"simple\"\nand \"complex\". Which set of filter coefficients is used can\nbe set by passing an optional parameter:\n\n","options":[{"names":["filter"],"info":"Set the interlacing filter coefficients. Accepts one of the following values:\n\n@item simple\nSimple filter coefficient set.\n@item complex\nMore-complex filter coefficient set.\nDefault value is @samp{complex}.\n\n"},{"names":["deint"],"info":"Specify which frames to deinterlace. 
Accepts one of the following values:\n\n@item all\nDeinterlace all frames,\n@item interlaced\nOnly deinterlace frames marked as interlaced.\n\nDefault value is @samp{all}.\n\n"}]},{"filtergroup":["waveform"],"info":"Video waveform monitor.\n\nThe waveform monitor plots color component intensity. By default luminance\nonly. Each column of the waveform corresponds to a column of pixels in the\nsource video.\n\nIt accepts the following options:\n\n","options":[{"names":["mode","m"],"info":"Can be either @code{row}, or @code{column}. Default is @code{column}.\nIn row mode, the graph on the left side represents color component value 0 and\nthe right side represents value = 255. In column mode, the top side represents\ncolor component value = 0 and bottom side represents value = 255.\n\n"},{"names":["intensity","i"],"info":"Set intensity. Smaller values are useful to find out how many values of the same\nluminance are distributed across input rows/columns.\nDefault value is @code{0.04}. Allowed range is [0, 1].\n\n"},{"names":["mirror","r"],"info":"Set mirroring mode. @code{0} means unmirrored, @code{1} means mirrored.\nIn mirrored mode, higher values will be represented on the left\nside for @code{row} mode and at the top for @code{column} mode. 
Default is\n@code{1} (mirrored).\n\n"},{"names":["display","d"],"info":"Set display mode.\nIt accepts the following values:\n@item overlay\nPresents information identical to that in the @code{parade}, except\nthat the graphs representing color components are superimposed directly\nover one another.\n\nThis display mode makes it easier to spot relative differences or similarities\nin overlapping areas of the color components that are supposed to be identical,\nsuch as neutral whites, grays, or blacks.\n\n@item stack\nDisplay separate graph for the color components side by side in\n@code{row} mode or one below the other in @code{column} mode.\n\n@item parade\nDisplay separate graph for the color components side by side in\n@code{column} mode or one below the other in @code{row} mode.\n\nUsing this display mode makes it easy to spot color casts in the highlights\nand shadows of an image, by comparing the contours of the top and the bottom\ngraphs of each waveform. Since whites, grays, and blacks are characterized\nby exactly equal amounts of red, green, and blue, neutral areas of the picture\nshould display three waveforms of roughly equal width/height. If not, the\ncorrection is easy to perform by making level adjustments to the three waveforms.\nDefault is @code{stack}.\n\n"},{"names":["components","c"],"info":"Set which color components to display. Default is 1, which means only luminance\nor the red color component if the input is in RGB colorspace. If it is set, for example, to\n7 it will display all 3 available color components (if present).\n\n"},{"names":["envelope","e"],"info":"@item none\nNo envelope, this is default.\n\n@item instant\nInstant envelope, minimum and maximum values presented in graph will be easily\nvisible even with small @code{step} value.\n\n@item peak\nHold minimum and maximum values presented in graph across time. 
This way you\ncan still spot out of range values without constantly looking at waveforms.\n\n@item peak+instant\nPeak and instant envelope combined together.\n\n"},{"names":["filter","f"],"info":"@item lowpass\nNo filtering, this is default.\n\n@item flat\nLuma and chroma combined together.\n\n@item aflat\nSimilar to the above, but shows the difference between blue and red chroma.\n\n@item xflat\nSimilar to the above, but uses different colors.\n\n@item yflat\nSimilar to the above, but again with different colors.\n\n@item chroma\nDisplays only chroma.\n\n@item color\nDisplays actual color value on waveform.\n\n@item acolor\nSimilar to the above, but with luma showing frequency of chroma values.\n\n"},{"names":["graticule","g"],"info":"Set which graticule to display.\n\n@item none\nDo not display graticule.\n\n@item green\nDisplay green graticule showing legal broadcast ranges.\n\n@item orange\nDisplay orange graticule showing legal broadcast ranges.\n\n@item invert\nDisplay invert graticule showing legal broadcast ranges.\n\n"},{"names":["opacity","o"],"info":"Set graticule opacity.\n\n"},{"names":["flags","fl"],"info":"Set graticule flags.\n\n@item numbers\nDraw numbers above lines. By default enabled.\n\n@item dots\nDraw dots instead of lines.\n\n"},{"names":["scale","s"],"info":"Set scale used for displaying graticule.\n\n@item digital\n@item millivolts\n@item ire\nDefault is digital.\n\n"},{"names":["bgopacity","b"],"info":"Set background opacity.\n\n"}]},{"filtergroup":["weave","doubleweave"],"info":"\nThe @code{weave} filter takes a field-based video input and joins\neach two sequential fields into a single frame, producing a new\ndouble-height clip with half the frame rate and half the frame count.\n\nThe @code{doubleweave} filter works the same as @code{weave} but without\nhalving the frame rate and frame count.\n\nIt accepts the following option:\n\n","options":[{"names":["first_field"],"info":"Set first field. 
Available values are:\n\n@item top, t\nSet the frame as top-field-first.\n\n@item bottom, b\nSet the frame as bottom-field-first.\n\n","examples":"@subsection Examples\n\n@itemize\n@item\nInterlace video using @ref{select} and @ref{separatefields} filter:\n@example\nseparatefields,select=eq(mod(n,4),0)+eq(mod(n,4),3),weave\n@end example\n@end itemize\n\n"}]},{"filtergroup":["xbr"],"info":"Apply the xBR high-quality magnification filter which is designed for pixel\nart. It follows a set of edge-detection rules, see\n@url{https://forums.libretro.com/t/xbr-algorithm-tutorial/123}.\n\nIt accepts the following option:\n\n","options":[{"names":["n"],"info":"Set the scaling dimension: @code{2} for @code{2xBR}, @code{3} for\n@code{3xBR} and @code{4} for @code{4xBR}.\nDefault is @code{3}.\n\n"}]},{"filtergroup":["xmedian"],"info":"Pick median pixels from several input videos.\n\n","options":[{"names":["inputs"],"info":"Set number of inputs.\nDefault is 3. Allowed range is from 3 to 255.\nIf the number of inputs is an even number, then the result will be the mean value between the two median values.\n\n"},{"names":["planes"],"info":"Set which planes to filter. Default value is @code{15}, by which all planes are processed.\n\n"}]},{"filtergroup":["xstack"],"info":"Stack video inputs into custom layout.\n\nAll streams must be of same pixel format.\n\n","options":[{"names":["inputs"],"info":"Set number of input streams. Default is 2.\n\n"},{"names":["layout"],"info":"Specify layout of inputs.\nThis option requires the desired layout configuration to be explicitly set by the user.\nThis sets position of each video input in output. Each input\nis separated by '|'.\nThe first number represents the column, and the second number represents the row.\nNumbers start at 0 and are separated by '_'. Optionally one can use wX and hX,\nwhere X is video input from which to take width or height.\nMultiple values can be used when separated by '+'. 
In such\ncase values are summed together.\n\nNote that if inputs are of different sizes gaps may appear, as not all of\nthe output video frame will be filled. Similarly, videos can overlap each\nother if their position doesn't leave enough space for the full frame of\nadjoining videos.\n\nFor 2 inputs, a default layout of @code{0_0|w0_0} is set. In all other cases,\na layout must be set by the user.\n\n"},{"names":["shortest"],"info":"If set to 1, force the output to terminate when the shortest input\nterminates. Default value is 0.\n\n","examples":"@subsection Examples\n\n@itemize\n@item\nDisplay 4 inputs into 2x2 grid.\n\nLayout:\n@example\ninput1(0, 0) | input3(w0, 0)\ninput2(0, h0) | input4(w0, h0)\n@end example\n\n@example\nxstack=inputs=4:layout=0_0|0_h0|w0_0|w0_h0\n@end example\n\nNote that if inputs are of different sizes, gaps or overlaps may occur.\n\n@item\nDisplay 4 inputs into 1x4 grid.\n\nLayout:\n@example\ninput1(0, 0)\ninput2(0, h0)\ninput3(0, h0+h1)\ninput4(0, h0+h1+h2)\n@end example\n\n@example\nxstack=inputs=4:layout=0_0|0_h0|0_h0+h1|0_h0+h1+h2\n@end example\n\nNote that if inputs are of different widths, unused space will appear.\n\n@item\nDisplay 9 inputs into 3x3 grid.\n\nLayout:\n@example\ninput1(0, 0) | input4(w0, 0) | input7(w0+w3, 0)\ninput2(0, h0) | input5(w0, h0) | input8(w0+w3, h0)\ninput3(0, h0+h1) | input6(w0, h0+h1) | input9(w0+w3, h0+h1)\n@end example\n\n@example\nxstack=inputs=9:layout=0_0|0_h0|0_h0+h1|w0_0|w0_h0|w0_h0+h1|w0+w3_0|w0+w3_h0|w0+w3_h0+h1\n@end example\n\nNote that if inputs are of different sizes, gaps or overlaps may occur.\n\n@item\nDisplay 16 inputs into 4x4 grid.\n\nLayout:\n@example\ninput1(0, 0) | input5(w0, 0) | input9 (w0+w4, 0) | input13(w0+w4+w8, 0)\ninput2(0, h0) | input6(w0, h0) | input10(w0+w4, h0) | input14(w0+w4+w8, h0)\ninput3(0, h0+h1) | input7(w0, h0+h1) | input11(w0+w4, h0+h1) | input15(w0+w4+w8, h0+h1)\ninput4(0, h0+h1+h2)| input8(w0, h0+h1+h2)| input12(w0+w4, h0+h1+h2)| input16(w0+w4+w8, 
h0+h1+h2)\n@end example\n\n@example\nxstack=inputs=16:layout=0_0|0_h0|0_h0+h1|0_h0+h1+h2|w0_0|w0_h0|w0_h0+h1|w0_h0+h1+h2|w0+w4_0|\nw0+w4_h0|w0+w4_h0+h1|w0+w4_h0+h1+h2|w0+w4+w8_0|w0+w4+w8_h0|w0+w4+w8_h0+h1|w0+w4+w8_h0+h1+h2\n@end example\n\nNote that if inputs are of different sizes, gaps or overlaps may occur.\n\n@end itemize\n\n@anchor{yadif}\n"}]},{"filtergroup":["yadif"],"info":"\nDeinterlace the input video (\"yadif\" means \"yet another deinterlacing\nfilter\").\n\nIt accepts the following parameters:\n\n\n","options":[{"names":["mode"],"info":"The interlacing mode to adopt. It accepts one of the following values:\n\n@item 0, send_frame\nOutput one frame for each frame.\n@item 1, send_field\nOutput one frame for each field.\n@item 2, send_frame_nospatial\nLike @code{send_frame}, but it skips the spatial interlacing check.\n@item 3, send_field_nospatial\nLike @code{send_field}, but it skips the spatial interlacing check.\n\nThe default value is @code{send_frame}.\n\n"},{"names":["parity"],"info":"The picture field parity assumed for the input interlaced video. It accepts one\nof the following values:\n\n@item 0, tff\nAssume the top field is first.\n@item 1, bff\nAssume the bottom field is first.\n@item -1, auto\nEnable automatic detection of field parity.\n\nThe default value is @code{auto}.\nIf the interlacing is unknown or the decoder does not export this information,\ntop field first will be assumed.\n\n"},{"names":["deint"],"info":"Specify which frames to deinterlace. 
Accepts one of the following\nvalues:\n\n@item 0, all\nDeinterlace all frames.\n@item 1, interlaced\nOnly deinterlace frames marked as interlaced.\n\nThe default value is @code{all}.\n\n"}]},{"filtergroup":["yadif_cuda"],"info":"\nDeinterlace the input video using the @ref{yadif} algorithm, but implemented\nin CUDA so that it can work as part of a GPU accelerated pipeline with nvdec\nand/or nvenc.\n\nIt accepts the following parameters:\n\n\n","options":[{"names":["mode"],"info":"The interlacing mode to adopt. It accepts one of the following values:\n\n@item 0, send_frame\nOutput one frame for each frame.\n@item 1, send_field\nOutput one frame for each field.\n@item 2, send_frame_nospatial\nLike @code{send_frame}, but it skips the spatial interlacing check.\n@item 3, send_field_nospatial\nLike @code{send_field}, but it skips the spatial interlacing check.\n\nThe default value is @code{send_frame}.\n\n"},{"names":["parity"],"info":"The picture field parity assumed for the input interlaced video. It accepts one\nof the following values:\n\n@item 0, tff\nAssume the top field is first.\n@item 1, bff\nAssume the bottom field is first.\n@item -1, auto\nEnable automatic detection of field parity.\n\nThe default value is @code{auto}.\nIf the interlacing is unknown or the decoder does not export this information,\ntop field first will be assumed.\n\n"},{"names":["deint"],"info":"Specify which frames to deinterlace. Accepts one of the following\nvalues:\n\n@item 0, all\nDeinterlace all frames.\n@item 1, interlaced\nOnly deinterlace frames marked as interlaced.\n\nThe default value is @code{all}.\n\n"}]},{"filtergroup":["yaepblur"],"info":"\nApply blur filter while preserving edges (\"yaepblur\" means \"yet another edge preserving blur filter\").\nThe algorithm is described in\n\"J. S. Lee, Digital image enhancement and noise filtering by use of local statistics, IEEE Trans. Pattern Anal. Mach. Intell. 
PAMI-2, 1980.\"\n\nIt accepts the following parameters:\n\n","options":[{"names":["radius","r"],"info":"Set the window radius. Default value is 3.\n\n"},{"names":["planes","p"],"info":"Set which planes to filter. Default is only the first plane.\n\n"},{"names":["sigma","s"],"info":"Set blur strength. Default value is 128.\n\n@subsection Commands\nThis filter supports the same @ref{commands} as options.\n\n"}]},{"filtergroup":["zoompan"],"info":"\nApply Zoom & Pan effect.\n\nThis filter accepts the following options:\n\n","options":[{"names":["zoom","z"],"info":"Set the zoom expression. Range is 1-10. Default is 1.\n\n"},{"names":["x"],"info":""},{"names":["y"],"info":"Set the x and y expression. Default is 0.\n\n"},{"names":["d"],"info":"Set the duration expression in number of frames.\nThis sets for how many frames the effect will last for a\nsingle input image.\n\n"},{"names":["s"],"info":"Set the output image size, default is 'hd720'.\n\n"},{"names":["fps"],"info":"Set the output frame rate, default is '25'.\n\nEach expression can contain the following constants:\n\n@item in_w, iw\nInput width.\n\n@item in_h, ih\nInput height.\n\n@item out_w, ow\nOutput width.\n\n@item out_h, oh\nOutput height.\n\n@item in\nInput frame count.\n\n@item on\nOutput frame count.\n\n@item x\n@item y\nLast calculated 'x' and 'y' position from 'x' and 'y' expression\nfor current input frame.\n\n@item px\n@item py\n'x' and 'y' of last output frame of previous input frame or 0 when there was\nnot yet such frame (first input frame).\n\n@item zoom\nLast calculated zoom from 'z' expression for current input frame.\n\n@item pzoom\nLast calculated zoom of last output frame of previous input frame.\n\n@item duration\nNumber of output frames for current input frame. 
Calculated from 'd' expression\nfor each input frame.\n\n@item pduration\nnumber of output frames created for previous input frame\n\n@item a\nRational number: input width / input height\n\n@item sar\nsample aspect ratio\n\n@item dar\ndisplay aspect ratio\n\n\n","examples":"@subsection Examples\n\n@itemize\n@item\nZoom-in up to 1.5 and pan at same time to some spot near center of picture:\n@example\nzoompan=z='min(zoom+0.0015,1.5)':d=700:x='if(gte(zoom,1.5),x,x+1/a)':y='if(gte(zoom,1.5),y,y+1)':s=640x360\n@end example\n\n@item\nZoom-in up to 1.5 and pan always at center of picture:\n@example\nzoompan=z='min(zoom+0.0015,1.5)':d=700:x='iw/2-(iw/zoom/2)':y='ih/2-(ih/zoom/2)'\n@end example\n\n@item\nSame as above but without pausing:\n@example\nzoompan=z='min(max(zoom,pzoom)+0.0015,1.5)':d=1:x='iw/2-(iw/zoom/2)':y='ih/2-(ih/zoom/2)'\n@end example\n@end itemize\n\n@anchor{zscale}\n"}]},{"filtergroup":["zscale"],"info":"Scale (resize) the input video, using the z.lib library:\n@url{https://github.com/sekrit-twc/zimg}. To enable compilation of this\nfilter, you need to configure FFmpeg with @code{--enable-libzimg}.\n\nThe zscale filter forces the output display aspect ratio to be the same\nas the input, by changing the output sample aspect ratio.\n\nIf the input image format is different from the format requested by\nthe next filter, the zscale filter will convert the input to the\nrequested format.\n\n@subsection Options\nThe filter accepts the following options.\n\n","options":[{"names":["width","w"],"info":""},{"names":["height","h"],"info":"Set the output video dimension expression. Default value is the input\ndimension.\n\nIf the @var{width} or @var{w} value is 0, the input width is used for\nthe output. 
If the @var{height} or @var{h} value is 0, the input height\nis used for the output.\n\nIf one and only one of the values is -n with n >= 1, the zscale filter\nwill use a value that maintains the aspect ratio of the input image,\ncalculated from the other specified dimension. After that it will,\nhowever, make sure that the calculated dimension is divisible by n and\nadjust the value if necessary.\n\nIf both values are -n with n >= 1, the behavior will be identical to\nboth values being set to 0 as previously detailed.\n\nSee below for the list of accepted constants for use in the dimension\nexpression.\n\n"},{"names":["size","s"],"info":"Set the video size. For the syntax of this option, check the\n@ref{video size syntax,,\"Video size\" section in the ffmpeg-utils manual,ffmpeg-utils}.\n\n"},{"names":["dither","d"],"info":"Set the dither type.\n\nPossible values are:\n@item none\n@item ordered\n@item random\n@item error_diffusion\n\nDefault is none.\n\n"},{"names":["filter","f"],"info":"Set the resize filter type.\n\nPossible values are:\n@item point\n@item bilinear\n@item bicubic\n@item spline16\n@item spline36\n@item lanczos\n\nDefault is bilinear.\n\n"},{"names":["range","r"],"info":"Set the color range.\n\nPossible values are:\n@item input\n@item limited\n@item full\n\nDefault is same as input.\n\n"},{"names":["primaries","p"],"info":"Set the color primaries.\n\nPossible values are:\n@item input\n@item 709\n@item unspecified\n@item 170m\n@item 240m\n@item 2020\n\nDefault is same as input.\n\n"},{"names":["transfer","t"],"info":"Set the transfer characteristics.\n\nPossible values are:\n@item input\n@item 709\n@item unspecified\n@item 601\n@item linear\n@item 2020_10\n@item 2020_12\n@item smpte2084\n@item iec61966-2-1\n@item arib-std-b67\n\nDefault is same as input.\n\n"},{"names":["matrix","m"],"info":"Set the colorspace matrix.\n\nPossible value are:\n@item input\n@item 709\n@item unspecified\n@item 470bg\n@item 170m\n@item 2020_ncl\n@item 2020_cl\n\nDefault 
is same as input.\n\n"},{"names":["rangein","rin"],"info":"Set the input color range.\n\nPossible values are:\n@item input\n@item limited\n@item full\n\nDefault is same as input.\n\n"},{"names":["primariesin","pin"],"info":"Set the input color primaries.\n\nPossible values are:\n@item input\n@item 709\n@item unspecified\n@item 170m\n@item 240m\n@item 2020\n\nDefault is same as input.\n\n"},{"names":["transferin","tin"],"info":"Set the input transfer characteristics.\n\nPossible values are:\n@item input\n@item 709\n@item unspecified\n@item 601\n@item linear\n@item 2020_10\n@item 2020_12\n\nDefault is same as input.\n\n"},{"names":["matrixin","min"],"info":"Set the input colorspace matrix.\n\nPossible value are:\n@item input\n@item 709\n@item unspecified\n@item 470bg\n@item 170m\n@item 2020_ncl\n@item 2020_cl\n\n"},{"names":["chromal","c"],"info":"Set the output chroma location.\n\nPossible values are:\n@item input\n@item left\n@item center\n@item topleft\n@item top\n@item bottomleft\n@item bottom\n\n"},{"names":["chromalin","cin"],"info":"Set the input chroma location.\n\nPossible values are:\n@item input\n@item left\n@item center\n@item topleft\n@item top\n@item bottomleft\n@item bottom\n\n"},{"names":["npl"],"info":"Set the nominal peak luminance.\n\nThe values of the @option{w} and @option{h} options are expressions\ncontaining the following constants:\n\n@item in_w\n@item in_h\nThe input width and height\n\n@item iw\n@item ih\nThese are the same as @var{in_w} and @var{in_h}.\n\n@item out_w\n@item out_h\nThe output (scaled) width and height\n\n@item ow\n@item oh\nThese are the same as @var{out_w} and @var{out_h}\n\n@item a\nThe same as @var{iw} / @var{ih}\n\n@item sar\ninput sample aspect ratio\n\n@item dar\nThe input display aspect ratio. Calculated from @code{(iw / ih) * sar}.\n\n@item hsub\n@item vsub\nhorizontal and vertical input chroma subsample values. 
For example for the\npixel format \"yuv422p\" @var{hsub} is 2 and @var{vsub} is 1.\n\n@item ohsub\n@item ovsub\nhorizontal and vertical output chroma subsample values. For example for the\npixel format \"yuv422p\" @var{hsub} is 2 and @var{vsub} is 1.\n\n\n@c man end VIDEO FILTERS\n\n@chapter OpenCL Video Filters\n@c man begin OPENCL VIDEO FILTERS\n\nBelow is a description of the currently available OpenCL video filters.\n\nTo enable compilation of these filters you need to configure FFmpeg with\n@code{--enable-opencl}.\n\nRunning OpenCL filters requires you to initialize a hardware device and to pass that device to all filters in any filter graph.\n\n@item -init_hw_device opencl[=@var{name}][:@var{device}[,@var{key=value}...]]\nInitialise a new hardware device of type @var{opencl} called @var{name}, using the\ngiven device parameters.\n\n@item -filter_hw_device @var{name}\nPass the hardware device called @var{name} to all filters in any filter graph.\n\n\nFor more detailed information see @url{https://www.ffmpeg.org/ffmpeg.html#Advanced-Video-options}\n\n@itemize\n"},{"names":[],"info":"Example of choosing the first device on the second platform and running avgblur_opencl filter with default parameters on it.\n@example\n-init_hw_device opencl=gpu:1.0 -filter_hw_device gpu -i INPUT -vf \"hwupload, avgblur_opencl, hwdownload\" OUTPUT\n@end example\n@end itemize\n\nSince OpenCL filters are not able to access frame data in normal memory, all frame data needs to be uploaded(@ref{hwupload}) to hardware surfaces connected to the appropriate device before being used and then downloaded(@ref{hwdownload}) back to normal memory. 
Note that @ref{hwupload} will upload to a surface with the same layout as the software frame, so it may be necessary to add a @ref{format} filter immediately before to get the input into the right format and @ref{hwdownload} does not support all formats on the output - it may be necessary to insert an additional @ref{format} filter immediately following in the graph to get the output in a supported format.\n\n"}]},{"filtergroup":["avgblur_opencl"],"info":"\nApply average blur filter.\n\n","options":[{"names":["sizeX"],"info":"Set horizontal radius size.\nRange is @code{[1, 1024]} and default value is @code{1}.\n\n"},{"names":["planes"],"info":"Set which planes to filter. Default value is @code{0xf}, by which all planes are processed.\n\n"},{"names":["sizeY"],"info":"Set vertical radius size. Range is @code{[1, 1024]} and default value is @code{0}. If zero, @code{sizeX} value will be used.\n\n@subsection Example\n\n@itemize\n"},{"names":[],"info":"Apply average blur filter with horizontal and vertical size of 3, setting each pixel of the output to the average value of the 7x7 region centered on it in the input. 
For pixels on the edges of the image, the region does not extend beyond the image boundaries, and so out-of-range coordinates are not used in the calculations.\n@example\n-i INPUT -vf \"hwupload, avgblur_opencl=3, hwdownload\" OUTPUT\n@end example\n@end itemize\n\n"}]},{"filtergroup":["boxblur_opencl"],"info":"\nApply a boxblur algorithm to the input video.\n\nIt accepts the following parameters:\n\n","options":[{"names":["luma_radius","lr"],"info":""},{"names":["luma_power","lp"],"info":""},{"names":["chroma_radius","cr"],"info":""},{"names":["chroma_power","cp"],"info":""},{"names":["alpha_radius","ar"],"info":""},{"names":["alpha_power","ap"],"info":"\n\nA description of the accepted options follows.\n\n@item luma_radius, lr\n@item chroma_radius, cr\n@item alpha_radius, ar\nSet an expression for the box radius in pixels used for blurring the\ncorresponding input plane.\n\nThe radius value must be a non-negative number, and must not be\ngreater than the value of the expression @code{min(w,h)/2} for the\nluma and alpha planes, and of @code{min(cw,ch)/2} for the chroma\nplanes.\n\nDefault value for @option{luma_radius} is \"2\". If not specified,\n@option{chroma_radius} and @option{alpha_radius} default to the\ncorresponding value set for @option{luma_radius}.\n\nThe expressions can contain the following constants:\n@table @option\n@item w\n@item h\nThe input width and height in pixels.\n\n@item cw\n@item ch\nThe input chroma image width and height in pixels.\n\n@item hsub\n@item vsub\nThe horizontal and vertical chroma subsample values. For example, for the\npixel format \"yuv422p\", @var{hsub} is 2 and @var{vsub} is 1.\n\n"},{"names":["luma_power","lp"],"info":""},{"names":["chroma_power","cp"],"info":""},{"names":["alpha_power","ap"],"info":"Specify how many times the boxblur filter is applied to the\ncorresponding plane.\n\nDefault value for @option{luma_power} is 2. 
If not specified,\n@option{chroma_power} and @option{alpha_power} default to the\ncorresponding value set for @option{luma_power}.\n\nA value of 0 will disable the effect.\n\n","examples":"@subsection Examples\n\nApply boxblur filter, setting each pixel of the output to the average value of box-radiuses @var{luma_radius}, @var{chroma_radius}, @var{alpha_radius} for each plane respectively. The filter will apply @var{luma_power}, @var{chroma_power}, @var{alpha_power} times onto the corresponding plane. For pixels on the edges of the image, the radius does not extend beyond the image boundaries, and so out-of-range coordinates are not used in the calculations.\n\n@itemize\n@item\nApply a boxblur filter with the luma, chroma, and alpha radius\nset to 2 and luma, chroma, and alpha power set to 3. The filter will run 3 times with box-radius set to 2 for every plane of the image.\n@example\n-i INPUT -vf \"hwupload, boxblur_opencl=luma_radius=2:luma_power=3, hwdownload\" OUTPUT\n-i INPUT -vf \"hwupload, boxblur_opencl=2:3, hwdownload\" OUTPUT\n@end example\n\n@item\nApply a boxblur filter with luma radius set to 2, luma_power to 1, chroma_radius to 4, chroma_power to 5, alpha_radius to 3 and alpha_power to 7.\n\nFor the luma plane, a 2x2 box radius will be run once.\n\nFor the chroma plane, a 4x4 box radius will be run 5 times.\n\nFor the alpha plane, a 3x3 box radius will be run 7 times.\n@example\n-i INPUT -vf \"hwupload, boxblur_opencl=2:1:4:5:3:7, hwdownload\" OUTPUT\n@end example\n@end itemize\n\n"}]},{"filtergroup":["convolution_opencl"],"info":"\nApply convolution of 3x3, 5x5, 7x7 matrix.\n\n","options":[{"names":["0m"],"info":""},{"names":["1m"],"info":""},{"names":["2m"],"info":""},{"names":["3m"],"info":"Set matrix for each plane.\nMatrix is sequence of 9, 25 or 49 signed numbers.\nDefault value for each plane is @code{0 0 0 0 1 0 0 0 
0}.\n\n"},{"names":["0rdiv"],"info":""},{"names":["1rdiv"],"info":""},{"names":["2rdiv"],"info":""},{"names":["3rdiv"],"info":"Set multiplier for calculated value for each plane.\nIf unset or 0, it will be sum of all matrix elements.\nThe option value must be a float number greater or equal to @code{0.0}. Default value is @code{1.0}.\n\n"},{"names":["0bias"],"info":""},{"names":["1bias"],"info":""},{"names":["2bias"],"info":""},{"names":["3bias"],"info":"Set bias for each plane. This value is added to the result of the multiplication.\nUseful for making the overall image brighter or darker.\nThe option value must be a float number greater or equal to @code{0.0}. Default value is @code{0.0}.\n\n\n","examples":"@subsection Examples\n\n@itemize\n@item\nApply sharpen:\n@example\n-i INPUT -vf \"hwupload, convolution_opencl=0 -1 0 -1 5 -1 0 -1 0:0 -1 0 -1 5 -1 0 -1 0:0 -1 0 -1 5 -1 0 -1 0:0 -1 0 -1 5 -1 0 -1 0, hwdownload\" OUTPUT\n@end example\n\n@item\nApply blur:\n@example\n-i INPUT -vf \"hwupload, convolution_opencl=1 1 1 1 1 1 1 1 1:1 1 1 1 1 1 1 1 1:1 1 1 1 1 1 1 1 1:1 1 1 1 1 1 1 1 1:1/9:1/9:1/9:1/9, hwdownload\" OUTPUT\n@end example\n\n@item\nApply edge enhance:\n@example\n-i INPUT -vf \"hwupload, convolution_opencl=0 0 0 -1 1 0 0 0 0:0 0 0 -1 1 0 0 0 0:0 0 0 -1 1 0 0 0 0:0 0 0 -1 1 0 0 0 0:5:1:1:1:0:128:128:128, hwdownload\" OUTPUT\n@end example\n\n@item\nApply edge detect:\n@example\n-i INPUT -vf \"hwupload, convolution_opencl=0 1 0 1 -4 1 0 1 0:0 1 0 1 -4 1 0 1 0:0 1 0 1 -4 1 0 1 0:0 1 0 1 -4 1 0 1 0:5:5:5:1:0:128:128:128, hwdownload\" OUTPUT\n@end example\n\n@item\nApply laplacian edge detector which includes diagonals:\n@example\n-i INPUT -vf \"hwupload, convolution_opencl=1 1 1 1 -8 1 1 1 1:1 1 1 1 -8 1 1 1 1:1 1 1 1 -8 1 1 1 1:1 1 1 1 -8 1 1 1 1:5:5:5:1:0:128:128:0, hwdownload\" OUTPUT\n@end example\n\n@item\nApply emboss:\n@example\n-i INPUT -vf \"hwupload, convolution_opencl=-2 -1 0 -1 1 1 0 1 2:-2 -1 0 -1 1 1 0 1 2:-2 -1 0 -1 1 1 0 1 2:-2 -1 0 -1 1 1 0 
1 2, hwdownload\" OUTPUT\n@end example\n@end itemize\n\n"}]},{"filtergroup":["dilation_opencl"],"info":"\nApply dilation effect to the video.\n\nThis filter replaces the pixel by the local (3x3) maximum.\n\nIt accepts the following options:\n\n","options":[{"names":["threshold0"],"info":""},{"names":["threshold1"],"info":""},{"names":["threshold2"],"info":""},{"names":["threshold3"],"info":"Limit the maximum change for each plane. Range is @code{[0, 65535]} and default value is @code{65535}.\nIf @code{0}, plane will remain unchanged.\n\n"},{"names":["coordinates"],"info":"Flag which specifies the pixel to refer to.\nRange is @code{[0, 255]} and default value is @code{255}, i.e. all eight pixels are used.\n\nFlags to local 3x3 coordinates region centered on @code{x}:\n\n 1 2 3\n\n 4 x 5\n\n 6 7 8\n\n@subsection Example\n\n@itemize\n"},{"names":[],"info":"Apply dilation filter with threshold0 set to 30, threshold1 set to 40, threshold2 set to 50 and coordinates set to 231, setting each pixel of the output to the local maximum between pixels: 1, 2, 3, 6, 7, 8 of the 3x3 region centered on it in the input. If the difference between the input pixel and the local maximum is more than the threshold of the corresponding plane, the output pixel will be set to the input pixel + the threshold of the corresponding plane.\n@example\n-i INPUT -vf \"hwupload, dilation_opencl=30:40:50:coordinates=231, hwdownload\" OUTPUT\n@end example\n@end itemize\n\n"}]},{"filtergroup":["erosion_opencl"],"info":"\nApply erosion effect to the video.\n\nThis filter replaces the pixel by the local (3x3) minimum.\n\nIt accepts the following options:\n\n","options":[{"names":["threshold0"],"info":""},{"names":["threshold1"],"info":""},{"names":["threshold2"],"info":""},{"names":["threshold3"],"info":"Limit the maximum change for each plane. 
Range is @code{[0, 65535]} and default value is @code{65535}.\nIf @code{0}, plane will remain unchanged.\n\n"},{"names":["coordinates"],"info":"Flag which specifies the pixel to refer to.\nRange is @code{[0, 255]} and default value is @code{255}, i.e. all eight pixels are used.\n\nFlags to local 3x3 coordinates region centered on @code{x}:\n\n 1 2 3\n\n 4 x 5\n\n 6 7 8\n\n@subsection Example\n\n@itemize\n"},{"names":[],"info":"Apply erosion filter with threshold0 set to 30, threshold1 set to 40, threshold2 set to 50 and coordinates set to 231, setting each pixel of the output to the local minimum between pixels: 1, 2, 3, 6, 7, 8 of the 3x3 region centered on it in the input. If the difference between input pixel and local minimum is more than the threshold of the corresponding plane, output pixel will be set to input pixel - threshold of the corresponding plane.\n@example\n-i INPUT -vf \"hwupload, erosion_opencl=30:40:50:coordinates=231, hwdownload\" OUTPUT\n@end example\n@end itemize\n\n"}]},{"filtergroup":["colorkey_opencl"],"info":"RGB colorspace color keying.\n\n","options":[{"names":["color"],"info":"The color which will be replaced with transparency.\n\n"},{"names":["similarity"],"info":"Similarity percentage with the key color.\n\n0.01 matches only the exact key color, while 1.0 matches everything.\n\n"},{"names":["blend"],"info":"Blend percentage.\n\n0.0 makes pixels either fully transparent, or not transparent at all.\n\nHigher values result in semi-transparent pixels, with a higher transparency\nthe more similar the pixel's color is to the key color.\n\n","examples":"@subsection Examples\n\n@itemize\n@item\nMake every semi-green pixel in the input transparent with some slight blending:\n@example\n-i INPUT -vf \"hwupload, colorkey_opencl=green:0.3:0.1, hwdownload\" OUTPUT\n@end example\n@end itemize\n\n"}]},{"filtergroup":["deshake_opencl"],"info":"Feature-point based video stabilization filter.\n\n","options":[{"names":["tripod"],"info":"Simulates a tripod by 
preventing any camera movement whatsoever from the original frame. Defaults to @code{0}.\n\n"},{"names":["debug"],"info":"Whether or not additional debug info should be displayed, both in the processed output and in the console.\n\nNote that in order to see console debug output you will also need to pass @code{-v verbose} to ffmpeg.\n\nViewing point matches in the output video is only supported for RGB input.\n\nDefaults to @code{0}.\n\n"},{"names":["adaptive_crop"],"info":"Whether or not to do a tiny bit of cropping at the borders to cut down on the amount of mirrored pixels.\n\nDefaults to @code{1}.\n\n"},{"names":["refine_features"],"info":"Whether or not feature points should be refined at a sub-pixel level.\n\nThis can be turned off for a slight performance gain at the cost of precision.\n\nDefaults to @code{1}.\n\n"},{"names":["smooth_strength"],"info":"The strength of the smoothing applied to the camera path from @code{0.0} to @code{1.0}.\n\n@code{1.0} is the maximum smoothing strength while values less than that result in less smoothing.\n\n@code{0.0} causes the filter to adaptively choose a smoothing strength on a per-frame basis.\n\nDefaults to @code{0.0}.\n\n"},{"names":["smooth_window_multiplier"],"info":"Controls the size of the smoothing window (the number of frames buffered to determine motion information from).\n\nThe size of the smoothing window is determined by multiplying the framerate of the video by this number.\n\nAcceptable values range from @code{0.1} to @code{10.0}.\n\nLarger values increase the amount of motion data available for determining how to smooth the camera path,\npotentially improving smoothness, but also increase latency and memory usage.\n\nDefaults to @code{2.0}.\n\n\n","examples":"@subsection Examples\n\n@itemize\n@item\nStabilize a video with a fixed, medium smoothing strength:\n@example\n-i INPUT -vf \"hwupload, deshake_opencl=smooth_strength=0.5, hwdownload\" OUTPUT\n@end example\n\n@item\nStabilize a video with debugging 
(both in console and in rendered video):\n@example\n-i INPUT -filter_complex \"[0:v]format=rgba, hwupload, deshake_opencl=debug=1, hwdownload, format=rgba, format=yuv420p\" -v verbose OUTPUT\n@end example\n@end itemize\n\n"}]},{"filtergroup":["nlmeans_opencl"],"info":"\nNon-local Means denoise filter through OpenCL; this filter accepts the same options as @ref{nlmeans}.\n\n","options":[]},{"filtergroup":["overlay_opencl"],"info":"\nOverlay one video on top of another.\n\nIt takes two inputs and has one output. The first input is the \"main\" video on which the second input is overlaid.\nThis filter requires the same memory layout for all the inputs. So, format conversion may be needed.\n\n","options":[{"names":["x"],"info":"Set the x coordinate of the overlaid video on the main video.\nDefault value is @code{0}.\n\n"},{"names":["y"],"info":"Set the y coordinate of the overlaid video on the main video.\nDefault value is @code{0}.\n\n\n","examples":"@subsection Examples\n\n@itemize\n@item\nOverlay an image LOGO at the top-left corner of the INPUT video. Both inputs are yuv420p format.\n@example\n-i INPUT -i LOGO -filter_complex \"[0:v]hwupload[a], [1:v]format=yuv420p, hwupload[b], [a][b]overlay_opencl, hwdownload\" OUTPUT\n@end example\n@item\nThe inputs have the same memory layout for color channels, but the overlay has an additional alpha plane, e.g. INPUT is yuv420p and the LOGO is yuva420p.\n@example\n-i INPUT -i LOGO -filter_complex \"[0:v]hwupload[a], [1:v]format=yuva420p, hwupload[b], [a][b]overlay_opencl, hwdownload\" OUTPUT\n@end example\n\n@end itemize\n\n"}]},{"filtergroup":["prewitt_opencl"],"info":"\nApply the Prewitt operator (@url{https://en.wikipedia.org/wiki/Prewitt_operator}) to input video stream.\n\nThe filter accepts the following option:\n\n","options":[{"names":["planes"],"info":"Set which planes to filter. 
Default value is @code{0xf}, by which all planes are processed.\n\n"},{"names":["scale"],"info":"Set value which will be multiplied with filtered result.\nRange is @code{[0.0, 65535]} and default value is @code{1.0}.\n\n"},{"names":["delta"],"info":"Set value which will be added to filtered result.\nRange is @code{[-65535, 65535]} and default value is @code{0.0}.\n\n@subsection Example\n\n@itemize\n"},{"names":[],"info":"Apply the Prewitt operator with scale set to 2 and delta set to 10.\n@example\n-i INPUT -vf \"hwupload, prewitt_opencl=scale=2:delta=10, hwdownload\" OUTPUT\n@end example\n@end itemize\n\n"}]},{"filtergroup":["roberts_opencl"],"info":"Apply the Roberts cross operator (@url{https://en.wikipedia.org/wiki/Roberts_cross}) to input video stream.\n\nThe filter accepts the following option:\n\n","options":[{"names":["planes"],"info":"Set which planes to filter. Default value is @code{0xf}, by which all planes are processed.\n\n"},{"names":["scale"],"info":"Set value which will be multiplied with filtered result.\nRange is @code{[0.0, 65535]} and default value is @code{1.0}.\n\n"},{"names":["delta"],"info":"Set value which will be added to filtered result.\nRange is @code{[-65535, 65535]} and default value is @code{0.0}.\n\n@subsection Example\n\n@itemize\n"},{"names":[],"info":"Apply the Roberts cross operator with scale set to 2 and delta set to 10\n@example\n-i INPUT -vf \"hwupload, roberts_opencl=scale=2:delta=10, hwdownload\" OUTPUT\n@end example\n@end itemize\n\n"}]},{"filtergroup":["sobel_opencl"],"info":"\nApply the Sobel operator (@url{https://en.wikipedia.org/wiki/Sobel_operator}) to input video stream.\n\nThe filter accepts the following option:\n\n","options":[{"names":["planes"],"info":"Set which planes to filter. 
Default value is @code{0xf}, by which all planes are processed.\n\n"},{"names":["scale"],"info":"Set value which will be multiplied with filtered result.\nRange is @code{[0.0, 65535]} and default value is @code{1.0}.\n\n"},{"names":["delta"],"info":"Set value which will be added to filtered result.\nRange is @code{[-65535, 65535]} and default value is @code{0.0}.\n\n@subsection Example\n\n@itemize\n"},{"names":[],"info":"Apply the Sobel operator with scale set to 2 and delta set to 10.\n@example\n-i INPUT -vf \"hwupload, sobel_opencl=scale=2:delta=10, hwdownload\" OUTPUT\n@end example\n@end itemize\n\n"}]},{"filtergroup":["tonemap_opencl"],"info":"\nPerform HDR(PQ/HLG) to SDR conversion with tone-mapping.\n\nIt accepts the following parameters:\n\n","options":[{"names":["tonemap"],"info":"Specify the tone-mapping operator to be used. Same as the tonemap option in @ref{tonemap}.\n\n"},{"names":["param"],"info":"Tune the tone mapping algorithm. Same as the param option in @ref{tonemap}.\n\n"},{"names":["desat"],"info":"Apply desaturation for highlights that exceed this level of brightness. The\nhigher the parameter, the more color information will be preserved. This\nsetting helps prevent unnaturally blown-out colors for super-highlights, by\n(smoothly) turning into white instead. This makes images feel more natural,\nat the cost of reducing information about out-of-range colors.\n\nThe default value is 0.5, and the algorithm here currently differs a little from\nthe CPU version of the tonemap filter. A setting of 0.0 disables this option.\n\n"},{"names":["threshold"],"info":"The tonemapping algorithm parameters are fine-tuned for each scene, and a threshold\nis used to detect whether the scene has changed or not. 
If the distance between\nthe current frame average brightness and the current running average exceeds\na threshold value, the scene average and peak brightness are re-calculated.\nThe default value is 0.2.\n\n"},{"names":["format"],"info":"Specify the output pixel format.\n\nCurrently supported formats are:\n@item p010\n@item nv12\n\n"},{"names":["range","r"],"info":"Set the output color range.\n\nPossible values are:\n@item tv/mpeg\n@item pc/jpeg\n\nDefault is same as input.\n\n"},{"names":["primaries","p"],"info":"Set the output color primaries.\n\nPossible values are:\n@item bt709\n@item bt2020\n\nDefault is same as input.\n\n"},{"names":["transfer","t"],"info":"Set the output transfer characteristics.\n\nPossible values are:\n@item bt709\n@item bt2020\n\nDefault is bt709.\n\n"},{"names":["matrix","m"],"info":"Set the output colorspace matrix.\n\nPossible values are:\n@item bt709\n@item bt2020\n\nDefault is same as input.\n\n\n@subsection Example\n\n@itemize\n"},{"names":[],"info":"Convert HDR(PQ/HLG) video to bt2020-transfer-characteristic p010 format using the linear operator.\n@example\n-i INPUT -vf \"format=p010,hwupload,tonemap_opencl=t=bt2020:tonemap=linear:format=p010,hwdownload,format=p010\" OUTPUT\n@end example\n@end itemize\n\n"}]},{"filtergroup":["unsharp_opencl"],"info":"\nSharpen or blur the input video.\n\nIt accepts the following parameters:\n\n","options":[{"names":["luma_msize_x","lx"],"info":"Set the luma matrix horizontal size.\nRange is @code{[1, 23]} and default value is @code{5}.\n\n"},{"names":["luma_msize_y","ly"],"info":"Set the luma matrix vertical size.\nRange is @code{[1, 23]} and default value is @code{5}.\n\n"},{"names":["luma_amount","la"],"info":"Set the luma effect strength.\nRange is @code{[-10, 10]} and default value is @code{1.0}.\n\nNegative values will blur the input video, while positive values will\nsharpen it; a value of zero will disable the effect.\n\n"},{"names":["chroma_msize_x","cx"],"info":"Set the chroma matrix 
horizontal size.\nRange is @code{[1, 23]} and default value is @code{5}.\n\n"},{"names":["chroma_msize_y","cy"],"info":"Set the chroma matrix vertical size.\nRange is @code{[1, 23]} and default value is @code{5}.\n\n"},{"names":["chroma_amount","ca"],"info":"Set the chroma effect strength.\nRange is @code{[-10, 10]} and default value is @code{0.0}.\n\nNegative values will blur the input video, while positive values will\nsharpen it, a value of zero will disable the effect.\n\n\nAll parameters are optional and default to the equivalent of the\nstring '5:5:1.0:5:5:0.0'.\n\n","examples":"@subsection Examples\n\n@itemize\n@item\nApply strong luma sharpen effect:\n@example\n-i INPUT -vf \"hwupload, unsharp_opencl=luma_msize_x=7:luma_msize_y=7:luma_amount=2.5, hwdownload\" OUTPUT\n@end example\n\n@item\nApply a strong blur of both luma and chroma parameters:\n@example\n-i INPUT -vf \"hwupload, unsharp_opencl=7:7:-2:7:7:-2, hwdownload\" OUTPUT\n@end example\n@end itemize\n\n@c man end OPENCL VIDEO FILTERS\n\n@chapter Video Sources\n@c man begin VIDEO SOURCES\n\nBelow is a description of the currently available video sources.\n\n"}]},{"filtergroup":["buffer"],"info":"\nBuffer video frames, and make them available to the filter chain.\n\nThis source is mainly intended for a programmatic use, in particular\nthrough the interface defined in @file{libavfilter/vsrc_buffer.h}.\n\nIt accepts the following parameters:\n\n","options":[{"names":["video_size"],"info":"Specify the size (width and height) of the buffered video frames. 
For the\nsyntax of this option, check the\n@ref{video size syntax,,\"Video size\" section in the ffmpeg-utils manual,ffmpeg-utils}.\n\n"},{"names":["width"],"info":"The input video width.\n\n"},{"names":["height"],"info":"The input video height.\n\n"},{"names":["pix_fmt"],"info":"A string representing the pixel format of the buffered video frames.\nIt may be a number corresponding to a pixel format, or a pixel format\nname.\n\n"},{"names":["time_base"],"info":"Specify the timebase assumed by the timestamps of the buffered frames.\n\n"},{"names":["frame_rate"],"info":"Specify the frame rate expected for the video stream.\n\n"},{"names":["pixel_aspect","sar"],"info":"The sample (pixel) aspect ratio of the input video.\n\n"},{"names":["sws_param"],"info":"Specify the optional parameters to be used for the scale filter which\nis automatically inserted when an input change is detected in the\ninput size or format.\n\n"},{"names":["hw_frames_ctx"],"info":"When using a hardware pixel format, this should be a reference to an\nAVHWFramesContext describing input frames.\n\nFor example:\n@example\nbuffer=width=320:height=240:pix_fmt=yuv410p:time_base=1/24:sar=1\n@end example\n\nwill instruct the source to accept video frames with size 320x240 and\nwith format \"yuv410p\", assuming 1/24 as the timestamps timebase and\nsquare pixels (1:1 sample aspect ratio).\nSince the pixel format with name \"yuv410p\" corresponds to the number 6\n(check the enum AVPixelFormat definition in @file{libavutil/pixfmt.h}),\nthis example corresponds to:\n@example\nbuffer=size=320x240:pixfmt=6:time_base=1/24:pixel_aspect=1/1\n@end example\n\nAlternatively, the options can be specified as a flat string, but this\nsyntax is deprecated:\n\n@var{width}:@var{height}:@var{pix_fmt}:@var{time_base.num}:@var{time_base.den}:@var{pixel_aspect.num}:@var{pixel_aspect.den}[:@var{sws_param}]\n\n"}]},{"filtergroup":["cellauto"],"info":"\nCreate a pattern generated by an elementary cellular automaton.\n\nThe initial 
state of the cellular automaton can be defined through the\n@option{filename} and @option{pattern} options. If such options are\nnot specified, an initial state is created randomly.\n\nAt each new frame a new row in the video is filled with the result of\nthe cellular automaton next generation. The behavior when the whole\nframe is filled is defined by the @option{scroll} option.\n\nThis source accepts the following options:\n\n","options":[{"names":["filename","f"],"info":"Read the initial cellular automaton state, i.e. the starting row, from\nthe specified file.\nIn the file, each non-whitespace character is considered an alive\ncell, a newline will terminate the row, and further characters in the\nfile will be ignored.\n\n"},{"names":["pattern","p"],"info":"Read the initial cellular automaton state, i.e. the starting row, from\nthe specified string.\n\nEach non-whitespace character in the string is considered an alive\ncell, a newline will terminate the row, and further characters in the\nstring will be ignored.\n\n"},{"names":["rate","r"],"info":"Set the video rate, that is the number of frames generated per second.\nDefault is 25.\n\n"},{"names":["random_fill_ratio","ratio"],"info":"Set the random fill ratio for the initial cellular automaton row. It\nis a floating point value ranging from 0 to 1, defaults to\n1/PHI.\n\nThis option is ignored when a file or a pattern is specified.\n\n"},{"names":["random_seed","seed"],"info":"Set the seed for randomly filling the initial row, must be an integer\nincluded between 0 and UINT32_MAX. If not specified, or if explicitly\nset to -1, the filter will try to use a good random seed on a best\neffort basis.\n\n"},{"names":["rule"],"info":"Set the cellular automaton rule; it is a number ranging from 0 to 255.\nDefault value is 110.\n\n"},{"names":["size","s"],"info":"Set the size of the output video. 
For the syntax of this option, check the\n@ref{video size syntax,,\"Video size\" section in the ffmpeg-utils manual,ffmpeg-utils}.\n\nIf @option{filename} or @option{pattern} is specified, the size is set\nby default to the width of the specified initial state row, and the\nheight is set to @var{width} * PHI.\n\nIf @option{size} is set, it must contain the width of the specified\npattern string, and the specified pattern will be centered in the\nlarger row.\n\nIf a filename or a pattern string is not specified, the size value\ndefaults to \"320x518\" (used for a randomly generated initial state).\n\n"},{"names":["scroll"],"info":"If set to 1, scroll the output upward when all the rows in the output\nhave already been filled. If set to 0, the newly generated row will be\nwritten over the top row just after the bottom row is filled.\nDefaults to 1.\n\n"},{"names":["start_full","full"],"info":"If set to 1, completely fill the output with generated rows before\noutputting the first frame.\nThis is the default behavior; to disable it, set the value to 0.\n\n"},{"names":["stitch"],"info":"If set to 1, stitch the left and right row edges together.\nThis is the default behavior; to disable it, set the value to 0.\n\n","examples":"@subsection Examples\n\n@itemize\n@item\nRead the initial state from @file{pattern}, and specify an output of\nsize 200x400.\n@example\ncellauto=f=pattern:s=200x400\n@end example\n\n@item\nGenerate a random initial row with a width of 200 cells, with a fill\nratio of 2/3:\n@example\ncellauto=ratio=2/3:s=200x200\n@end example\n\n@item\nCreate a pattern generated by rule 18 starting from a single alive cell\ncentered on an initial row with width 100:\n@example\ncellauto=p=@@:s=100x400:full=0:rule=18\n@end example\n\n@item\nSpecify a more elaborate initial pattern:\n@example\ncellauto=p='@@@@ @@ @@@@':s=100x400:full=0:rule=18\n@end example\n\n@end itemize\n\n@anchor{coreimagesrc}\n"}]},{"filtergroup":["coreimagesrc"],"info":"Video source generated on GPU 
using Apple's CoreImage API on OSX.\n\nThis video source is a specialized version of the @ref{coreimage} video filter.\nUse a core image generator at the beginning of the applied filterchain to\ngenerate the content.\n\nThe coreimagesrc video source accepts the following options:\n","options":[{"names":["list_generators"],"info":"List all available generators along with all their respective options as well as\npossible minimum and maximum values along with the default values.\n@example\nlist_generators=true\n@end example\n\n"},{"names":["size","s"],"info":"Specify the size of the sourced video. For the syntax of this option, check the\n@ref{video size syntax,,\"Video size\" section in the ffmpeg-utils manual,ffmpeg-utils}.\nThe default value is @code{320x240}.\n\n"},{"names":["rate","r"],"info":"Specify the frame rate of the sourced video, as the number of frames\ngenerated per second. It has to be a string in the format\n@var{frame_rate_num}/@var{frame_rate_den}, an integer number, a floating point\nnumber or a valid video frame rate abbreviation. The default value is\n\"25\".\n\n"},{"names":["sar"],"info":"Set the sample aspect ratio of the sourced video.\n\n"},{"names":["duration","d"],"info":"Set the duration of the sourced video. See\n@ref{time duration syntax,,the Time duration section in the ffmpeg-utils(1) manual,ffmpeg-utils}\nfor the accepted syntax.\n\nIf not specified, or the expressed duration is negative, the video is\nsupposed to be generated forever.\n\nAdditionally, all options of the @ref{coreimage} video filter are accepted.\nA complete filterchain can be used for further processing of the\ngenerated input without CPU-HOST transfer. 
See @ref{coreimage} documentation\nand examples for details.\n\n","examples":"@subsection Examples\n\n@itemize\n\n@item\nUse CIQRCodeGenerator to create a QR code for the FFmpeg homepage,\ngiven as complete and escaped command-line for Apple's standard bash shell:\n@example\nffmpeg -f lavfi -i coreimagesrc=s=100x100:filter=CIQRCodeGenerator@@inputMessage=https\\\\\\\\\\://FFmpeg.org/@@inputCorrectionLevel=H -frames:v 1 QRCode.png\n@end example\nThis example is equivalent to the QRCode example of @ref{coreimage} without the\nneed for a nullsrc video source.\n@end itemize\n\n\n"}]},{"filtergroup":["mandelbrot"],"info":"\nGenerate a Mandelbrot set fractal, and progressively zoom towards the\npoint specified with @var{start_x} and @var{start_y}.\n\nThis source accepts the following options:\n\n","options":[{"names":["end_pts"],"info":"Set the terminal pts value. Default value is 400.\n\n"},{"names":["end_scale"],"info":"Set the terminal scale value.\nMust be a floating point value. Default value is 0.3.\n\n"},{"names":["inner"],"info":"Set the inner coloring mode, that is the algorithm used to draw the\nMandelbrot fractal internal region.\n\nIt shall assume one of the following values:\n@item black\nSet black mode.\n@item convergence\nShow time until convergence.\n@item mincol\nSet color based on point closest to the origin of the iterations.\n@item period\nSet period mode.\n\nDefault value is @var{mincol}.\n\n"},{"names":["bailout"],"info":"Set the bailout value. Default value is 10.0.\n\n"},{"names":["maxiter"],"info":"Set the maximum of iterations performed by the rendering\nalgorithm. 
Default value is 7189.\n\n"},{"names":["outer"],"info":"Set outer coloring mode.\nIt shall assume one of following values:\n@item iteration_count\nSet iteration count mode.\n@item normalized_iteration_count\nset normalized iteration count mode.\nDefault value is @var{normalized_iteration_count}.\n\n"},{"names":["rate","r"],"info":"Set frame rate, expressed as number of frames per second. Default\nvalue is \"25\".\n\n"},{"names":["size","s"],"info":"Set frame size. For the syntax of this option, check the @ref{video size syntax,,\"Video\nsize\" section in the ffmpeg-utils manual,ffmpeg-utils}. Default value is \"640x480\".\n\n"},{"names":["start_scale"],"info":"Set the initial scale value. Default value is 3.0.\n\n"},{"names":["start_x"],"info":"Set the initial x position. Must be a floating point value between\n-100 and 100. Default value is -0.743643887037158704752191506114774.\n\n"},{"names":["start_y"],"info":"Set the initial y position. Must be a floating point value between\n-100 and 100. Default value is -0.131825904205311970493132056385139.\n\n"}]},{"filtergroup":["mptestsrc"],"info":"\nGenerate various test patterns, as generated by the MPlayer test filter.\n\nThe size of the generated video is fixed, and is 256x256.\nThis source is useful in particular for testing encoding features.\n\nThis source accepts the following options:\n\n","options":[{"names":["rate","r"],"info":"Specify the frame rate of the sourced video, as the number of frames\ngenerated per second. It has to be a string in the format\n@var{frame_rate_num}/@var{frame_rate_den}, an integer number, a floating point\nnumber or a valid video frame rate abbreviation. The default value is\n\"25\".\n\n"},{"names":["duration","d"],"info":"Set the duration of the sourced video. 
See\n@ref{time duration syntax,,the Time duration section in the ffmpeg-utils(1) manual,ffmpeg-utils}\nfor the accepted syntax.\n\nIf not specified, or the expressed duration is negative, the video is\nsupposed to be generated forever.\n\n"},{"names":["test","t"],"info":"\nSet the number or the name of the test to perform. Supported tests are:\n@item dc_luma\n@item dc_chroma\n@item freq_luma\n@item freq_chroma\n@item amp_luma\n@item amp_chroma\n@item cbp\n@item mv\n@item ring1\n@item ring2\n@item all\n\n@item max_frames, m\nSet the maximum number of frames generated for each test, default value is 30.\n\n\nDefault value is \"all\", which will cycle through the list of all tests.\n\nSome examples:\n@example\nmptestsrc=t=dc_luma\n@end example\n\nwill generate a \"dc_luma\" test pattern.\n\n"}]},{"filtergroup":["frei0r_src"],"info":"\nProvide a frei0r source.\n\nTo enable compilation of this filter you need to install the frei0r\nheader and configure FFmpeg with @code{--enable-frei0r}.\n\nThis source accepts the following parameters:\n\n","options":[{"names":["size"],"info":"The size of the video to generate. For the syntax of this option, check the\n@ref{video size syntax,,\"Video size\" section in the ffmpeg-utils manual,ffmpeg-utils}.\n\n"},{"names":["framerate"],"info":"The framerate of the generated video. It may be a string of the form\n@var{num}/@var{den} or a frame rate abbreviation.\n\n"},{"names":["filter_name"],"info":"The name of the frei0r source to load. 
For more information regarding frei0r and\nhow to set the parameters, read the @ref{frei0r} section in the video filters\ndocumentation.\n\n"},{"names":["filter_params"],"info":"A '|'-separated list of parameters to pass to the frei0r source.\n\n\nFor example, to generate a frei0r partik0l source with size 200x200\nand frame rate 10 which is overlaid on the overlay filter main input:\n@example\nfrei0r_src=size=200x200:framerate=10:filter_name=partik0l:filter_params=1234 [overlay]; [in][overlay] overlay\n@end example\n\n"}]},{"filtergroup":["life"],"info":"\nGenerate a life pattern.\n\nThis source is based on a generalization of John Conway's life game.\n\nThe sourced input represents a life grid, each pixel represents a cell\nwhich can be in one of two possible states, alive or dead. Every cell\ninteracts with its eight neighbours, which are the cells that are\nhorizontally, vertically, or diagonally adjacent.\n\nAt each interaction the grid evolves according to the adopted rule,\nwhich specifies the number of neighbor alive cells which will make a\ncell stay alive or born. The @option{rule} option allows one to specify\nthe rule to adopt.\n\nThis source accepts the following options:\n\n","options":[{"names":["filename","f"],"info":"Set the file from which to read the initial grid state. In the file,\neach non-whitespace character is considered an alive cell, and newline\nis used to delimit the end of each row.\n\nIf this option is not specified, the initial grid is generated\nrandomly.\n\n"},{"names":["rate","r"],"info":"Set the video rate, that is the number of frames generated per second.\nDefault is 25.\n\n"},{"names":["random_fill_ratio","ratio"],"info":"Set the random fill ratio for the initial random grid. 
It is a\nfloating point value ranging from 0 to 1, defaults to 1/PHI.\nIt is ignored when a file is specified.\n\n"},{"names":["random_seed","seed"],"info":"Set the seed for filling the initial random grid, must be an integer\nincluded between 0 and UINT32_MAX. If not specified, or if explicitly\nset to -1, the filter will try to use a good random seed on a best\neffort basis.\n\n"},{"names":["rule"],"info":"Set the life rule.\n\nA rule can be specified with a code of the kind \"S@var{NS}/B@var{NB}\",\nwhere @var{NS} and @var{NB} are sequences of numbers in the range 0-8,\n@var{NS} specifies the number of alive neighbor cells which make a\nlive cell stay alive, and @var{NB} the number of alive neighbor cells\nwhich make a dead cell become alive (i.e. be \"born\").\n\"s\" and \"b\" can be used in place of \"S\" and \"B\", respectively.\n\nAlternatively a rule can be specified by an 18-bit integer. The 9\nhigh order bits are used to encode the next cell state if it is alive\nfor each number of neighbor alive cells, the low order bits specify\nthe rule for \"borning\" new cells. Higher order bits encode for a\nhigher number of neighbor cells.\nFor example the number 6153 = @code{(12<<9)+9} specifies a stay alive\nrule of 12 and a born rule of 9, which corresponds to \"S23/B03\".\n\nDefault value is \"S23/B3\", which is the original Conway's game of life\nrule, and will keep a cell alive if it has 2 or 3 neighbor alive\ncells, and a new cell will be born if there are three alive cells around\na dead cell.\n\n"},{"names":["size","s"],"info":"Set the size of the output video. For the syntax of this option, check the\n@ref{video size syntax,,\"Video size\" section in the ffmpeg-utils manual,ffmpeg-utils}.\n\nIf @option{filename} is specified, the size is set by default to the\nsame size as the input file. 
If @option{size} is set, it must contain\nthe size specified in the input file, and the initial grid defined in\nthat file is centered in the larger resulting area.\n\nIf a filename is not specified, the size value defaults to \"320x240\"\n(used for a randomly generated initial grid).\n\n"},{"names":["stitch"],"info":"If set to 1, stitch the left and right grid edges together, and the\ntop and bottom edges also. Defaults to 1.\n\n"},{"names":["mold"],"info":"Set cell mold speed. If set, a dead cell will go from @option{death_color} to\n@option{mold_color} with a step of @option{mold}. @option{mold} can have a\nvalue from 0 to 255.\n\n"},{"names":["life_color"],"info":"Set the color of living (or new born) cells.\n\n"},{"names":["death_color"],"info":"Set the color of dead cells. If @option{mold} is set, this is the first color\nused to represent a dead cell.\n\n"},{"names":["mold_color"],"info":"Set mold color, for definitely dead and moldy cells.\n\nFor the syntax of these 3 color options, check the @ref{color syntax,,\"Color\" section in the\nffmpeg-utils manual,ffmpeg-utils}.\n\n","examples":"@subsection Examples\n\n@itemize\n@item\nRead a grid from @file{pattern}, and center it on a grid of size\n300x300 pixels:\n@example\nlife=f=pattern:s=300x300\n@end example\n\n@item\nGenerate a random grid of size 200x200, with a fill ratio of 2/3:\n@example\nlife=ratio=2/3:s=200x200\n@end example\n\n@item\nSpecify a custom rule for evolving a randomly generated grid:\n@example\nlife=rule=S14/B34\n@end example\n\n@item\nFull example with slow death effect (mold) using @command{ffplay}:\n@example\nffplay -f lavfi life=s=300x200:mold=10:r=60:ratio=0.1:death_color=#C83232:life_color=#00ff00,scale=1200:800:flags=16\n@end example\n@end 
itemize\n\n@anchor{allrgb}\n@anchor{allyuv}\n@anchor{color}\n@anchor{haldclutsrc}\n@anchor{nullsrc}\n@anchor{pal75bars}\n@anchor{pal100bars}\n@anchor{rgbtestsrc}\n@anchor{smptebars}\n@anchor{smptehdbars}\n@anchor{testsrc}\n@anchor{testsrc2}\n@anchor{yuvtestsrc}\n"}]},{"filtergroup":["allrgb","allyuv","color","haldclutsrc","nullsrc","pal75bars","pal100bars","rgbtestsrc","smptebars","smptehdbars","testsrc","testsrc2","yuvtestsrc"],"info":"\nThe @code{allrgb} source returns frames of size 4096x4096 of all rgb colors.\n\nThe @code{allyuv} source returns frames of size 4096x4096 of all yuv colors.\n\nThe @code{color} source provides an uniformly colored input.\n\nThe @code{haldclutsrc} source provides an identity Hald CLUT. See also\n@ref{haldclut} filter.\n\nThe @code{nullsrc} source returns unprocessed video frames. It is\nmainly useful to be employed in analysis / debugging tools, or as the\nsource for filters which ignore the input data.\n\nThe @code{pal75bars} source generates a color bars pattern, based on\nEBU PAL recommendations with 75% color levels.\n\nThe @code{pal100bars} source generates a color bars pattern, based on\nEBU PAL recommendations with 100% color levels.\n\nThe @code{rgbtestsrc} source generates an RGB test pattern useful for\ndetecting RGB vs BGR issues. You should see a red, green and blue\nstripe from top to bottom.\n\nThe @code{smptebars} source generates a color bars pattern, based on\nthe SMPTE Engineering Guideline EG 1-1990.\n\nThe @code{smptehdbars} source generates a color bars pattern, based on\nthe SMPTE RP 219-2002.\n\nThe @code{testsrc} source generates a test video pattern, showing a\ncolor pattern, a scrolling gradient and a timestamp. This is mainly\nintended for testing purposes.\n\nThe @code{testsrc2} source is similar to testsrc, but supports more\npixel formats instead of just @code{rgb24}. 
This allows using it as an\ninput for other tests without requiring a format conversion.\n\nThe @code{yuvtestsrc} source generates a YUV test pattern. You should\nsee a y, cb and cr stripe from top to bottom.\n\nThe sources accept the following parameters:\n\n","options":[{"names":["level"],"info":"Specify the level of the Hald CLUT, only available in the @code{haldclutsrc}\nsource. A level of @code{N} generates a picture of @code{N*N*N} by @code{N*N*N}\npixels to be used as an identity matrix for 3D lookup tables. Each component is\ncoded on a @code{1/(N*N)} scale.\n\n"},{"names":["color","c"],"info":"Specify the color of the source, only available in the @code{color}\nsource. For the syntax of this option, check the\n@ref{color syntax,,\"Color\" section in the ffmpeg-utils manual,ffmpeg-utils}.\n\n"},{"names":["size","s"],"info":"Specify the size of the sourced video. For the syntax of this option, check the\n@ref{video size syntax,,\"Video size\" section in the ffmpeg-utils manual,ffmpeg-utils}.\nThe default value is @code{320x240}.\n\nThis option is not available with the @code{allrgb}, @code{allyuv}, and\n@code{haldclutsrc} filters.\n\n"},{"names":["rate","r"],"info":"Specify the frame rate of the sourced video, as the number of frames\ngenerated per second. It has to be a string in the format\n@var{frame_rate_num}/@var{frame_rate_den}, an integer number, a floating point\nnumber or a valid video frame rate abbreviation. The default value is\n\"25\".\n\n"},{"names":["duration","d"],"info":"Set the duration of the sourced video. 
See\n@ref{time duration syntax,,the Time duration section in the ffmpeg-utils(1) manual,ffmpeg-utils}\nfor the accepted syntax.\n\nIf not specified, or the expressed duration is negative, the video is\nsupposed to be generated forever.\n\n"},{"names":["sar"],"info":"Set the sample aspect ratio of the sourced video.\n\n"},{"names":["alpha"],"info":"Specify the alpha (opacity) of the background, only available in the\n@code{testsrc2} source. The value must be between 0 (fully transparent) and\n255 (fully opaque, the default).\n\n"},{"names":["decimals","n"],"info":"Set the number of decimals to show in the timestamp, only available in the\n@code{testsrc} source.\n\nThe displayed timestamp value will correspond to the original\ntimestamp value multiplied by the power of 10 of the specified\nvalue. Default value is 0.\n\n","examples":"@subsection Examples\n\n@itemize\n@item\nGenerate a video with a duration of 5.3 seconds, with size\n176x144 and a frame rate of 10 frames per second:\n@example\ntestsrc=duration=5.3:size=qcif:rate=10\n@end example\n\n@item\nThe following graph description will generate a red source\nwith an opacity of 0.2, with size \"qcif\" and a frame rate of 10\nframes per second:\n@example\ncolor=c=red@@0.2:s=qcif:r=10\n@end example\n\n@item\nIf the input content is to be ignored, @code{nullsrc} can be used. The\nfollowing command generates noise in the luminance plane by employing\nthe @code{geq} filter:\n@example\nnullsrc=s=256x256, geq=random(1)*255:128:128\n@end example\n@end itemize\n\n@subsection Commands\n\nThe @code{color} source supports the following commands:\n\n@table @option\n@item c, color\nSet the color of the created image. 
Accepts the same syntax of the\ncorresponding @option{color} option.\n@end table\n\n"}]},{"filtergroup":["openclsrc"],"info":"\nGenerate video using an OpenCL program.\n\n","options":[{"names":["source"],"info":"OpenCL program source file.\n\n"},{"names":["kernel"],"info":"Kernel name in program.\n\n"},{"names":["size","s"],"info":"Size of frames to generate. This must be set.\n\n"},{"names":["format"],"info":"Pixel format to use for the generated frames. This must be set.\n\n"},{"names":["rate","r"],"info":"Number of frames generated every second. Default value is '25'.\n\n\nFor details of how the program loading works, see the @ref{program_opencl}\nfilter.\n\nExample programs:\n\n@itemize\n"},{"names":[],"info":"Generate a colour ramp by setting pixel values from the position of the pixel\nin the output image. (Note that this will work with all pixel formats, but\nthe generated output will not be the same.)\n@verbatim\n__kernel void ramp(__write_only image2d_t dst,\n unsigned int index)\n{\n int2 loc = (int2)(get_global_id(0), get_global_id(1));\n\n float4 val;\n val.xy = val.zw = convert_float2(loc) / convert_float2(get_image_dim(dst));\n\n write_imagef(dst, loc, val);\n}\n@end verbatim\n\n"},{"names":[],"info":"Generate a Sierpinski carpet pattern, panning by a single pixel each frame.\n@verbatim\n__kernel void sierpinski_carpet(__write_only image2d_t dst,\n unsigned int index)\n{\n int2 loc = (int2)(get_global_id(0), get_global_id(1));\n\n float4 value = 0.0f;\n int x = loc.x + index;\n int y = loc.y + index;\n while (x > 0 || y > 0) {\n if (x % 3 == 1 && y % 3 == 1) {\n value = 1.0f;\n break;\n }\n x /= 3;\n y /= 3;\n }\n\n write_imagef(dst, loc, value);\n}\n@end verbatim\n\n@end itemize\n\n"}]},{"filtergroup":["sierpinski"],"info":"\nGenerate a Sierpinski carpet/triangle fractal, and randomly pan around.\n\nThis source accepts the following options:\n\n","options":[{"names":["size","s"],"info":"Set frame size. 
For the syntax of this option, check the @ref{video size syntax,,\"Video\nsize\" section in the ffmpeg-utils manual,ffmpeg-utils}. Default value is \"640x480\".\n\n"},{"names":["rate","r"],"info":"Set frame rate, expressed as number of frames per second. Default\nvalue is \"25\".\n\n"},{"names":["seed"],"info":"Set seed which is used for random panning.\n\n"},{"names":["jump"],"info":"Set max jump for single pan destination. Allowed range is from 1 to 10000.\n\n"},{"names":["type"],"info":"Set fractal type, can be default @code{carpet} or @code{triangle}.\n\n@c man end VIDEO SOURCES\n\n@chapter Video Sinks\n@c man begin VIDEO SINKS\n\nBelow is a description of the currently available video sinks.\n\n"}]},{"filtergroup":["buffersink"],"info":"\nBuffer video frames, and make them available to the end of the filter\ngraph.\n\nThis sink is mainly intended for programmatic use, in particular\nthrough the interface defined in @file{libavfilter/buffersink.h}\nor the options system.\n\nIt accepts a pointer to an AVBufferSinkContext structure, which\ndefines the incoming buffers' formats, to be passed as the opaque\nparameter to @code{avfilter_init_filter} for initialization.\n\n","options":[]},{"filtergroup":["nullsink"],"info":"\nNull video sink: do absolutely nothing with the input video. It is\nmainly useful as a template and for use in analysis / debugging\ntools.\n\n@c man end VIDEO SINKS\n\n@chapter Multimedia Filters\n@c man begin MULTIMEDIA FILTERS\n\nBelow is a description of the currently available multimedia filters.\n\n","options":[]},{"filtergroup":["abitscope"],"info":"\nConvert input audio to a video output, displaying the audio bit scope.\n\n","options":[{"names":["rate","r"],"info":"Set frame rate, expressed as number of frames per second. Default\nvalue is \"25\".\n\n"},{"names":["size","s"],"info":"Specify the video size for the output. 
For the syntax of this option, check the\n@ref{video size syntax,,\"Video size\" section in the ffmpeg-utils manual,ffmpeg-utils}.\nDefault value is @code{1024x256}.\n\n"},{"names":["colors"],"info":"Specify a list of colors separated by space or by '|' which will be used to\ndraw channels. Unrecognized or missing colors will be replaced\nby white.\n\n"}]},{"filtergroup":["adrawgraph"],"info":"Draw a graph using input audio metadata.\n\nSee @ref{drawgraph}\n\n","options":[]},{"filtergroup":["agraphmonitor"],"info":"\nSee @ref{graphmonitor}.\n\n","options":[]},{"filtergroup":["ahistogram"],"info":"\nConvert input audio to a video output, displaying the volume histogram.\n\n","options":[{"names":["dmode"],"info":"Specify how the histogram is calculated.\n\nIt accepts the following values:\n@item single\nUse a single histogram for all channels.\n@item separate\nUse a separate histogram for each channel.\nDefault is @code{single}.\n\n"},{"names":["rate","r"],"info":"Set frame rate, expressed as number of frames per second. Default\nvalue is \"25\".\n\n"},{"names":["size","s"],"info":"Specify the video size for the output. For the syntax of this option, check the\n@ref{video size syntax,,\"Video size\" section in the ffmpeg-utils manual,ffmpeg-utils}.\nDefault value is @code{hd720}.\n\n"},{"names":["scale"],"info":"Set display scale.\n\nIt accepts the following values:\n@item log\nlogarithmic\n@item sqrt\nsquare root\n@item cbrt\ncubic root\n@item lin\nlinear\n@item rlog\nreverse logarithmic\nDefault is @code{log}.\n\n"},{"names":["ascale"],"info":"Set amplitude scale.\n\nIt accepts the following values:\n@item log\nlogarithmic\n@item lin\nlinear\nDefault is @code{log}.\n\n"},{"names":["acount"],"info":"Set how many frames to accumulate in the histogram.\nDefault is 1. 
Setting this to -1 accumulates all frames.\n\n"},{"names":["rheight"],"info":"Set histogram ratio of window height.\n\n"},{"names":["slide"],"info":"Set sonogram sliding.\n\nIt accepts the following values:\n@item replace\nreplace old rows with new ones.\n@item scroll\nscroll from top to bottom.\nDefault is @code{replace}.\n\n"}]},{"filtergroup":["aphasemeter"],"info":"\nMeasures phase of input audio, which is exported as metadata @code{lavfi.aphasemeter.phase},\nrepresenting mean phase of current audio frame. A video output can also be produced and is\nenabled by default. The audio is passed through as first output.\n\nAudio will be rematrixed to stereo if it has a different channel layout. Phase value is in\nrange @code{[-1, 1]} where @code{-1} means left and right channels are completely out of phase\nand @code{1} means channels are in phase.\n\nThe filter accepts the following options, all related to its video output:\n\n","options":[{"names":["rate","r"],"info":"Set the output frame rate. Default value is @code{25}.\n\n"},{"names":["size","s"],"info":"Set the video size for the output. For the syntax of this option, check the\n@ref{video size syntax,,\"Video size\" section in the ffmpeg-utils manual,ffmpeg-utils}.\nDefault value is @code{800x400}.\n\n"},{"names":["rc"],"info":""},{"names":["gc"],"info":""},{"names":["bc"],"info":"Specify the red, green, blue contrast. Default values are @code{2},\n@code{7} and @code{1}.\nAllowed range is @code{[0, 255]}.\n\n"},{"names":["mpc"],"info":"Set color which will be used for drawing median phase. If color is\n@code{none} which is default, no median phase value will be drawn.\n\n"},{"names":["video"],"info":"Enable video output. Default is enabled.\n\n"}]},{"filtergroup":["avectorscope"],"info":"\nConvert input audio to a video output, representing the audio vector\nscope.\n\nThe filter is used to measure the difference between channels of stereo\naudio stream. 
A monaural signal, consisting of identical left and right\nsignals, results in a straight vertical line. Any stereo separation is visible\nas a deviation from this line, creating a Lissajous figure.\nIf a straight (or nearly straight) horizontal line appears, this\nindicates that the left and right channels are out of phase.\n\n","options":[{"names":["mode","m"],"info":"Set the vectorscope mode.\n\nAvailable values are:\n@item lissajous\nLissajous rotated by 45 degrees.\n\n@item lissajous_xy\nSame as above but not rotated.\n\n@item polar\nShape resembling half of a circle.\n\nDefault value is @samp{lissajous}.\n\n"},{"names":["size","s"],"info":"Set the video size for the output. For the syntax of this option, check the\n@ref{video size syntax,,\"Video size\" section in the ffmpeg-utils manual,ffmpeg-utils}.\nDefault value is @code{400x400}.\n\n"},{"names":["rate","r"],"info":"Set the output frame rate. Default value is @code{25}.\n\n"},{"names":["rc"],"info":""},{"names":["gc"],"info":""},{"names":["bc"],"info":""},{"names":["ac"],"info":"Specify the red, green, blue and alpha contrast. Default values are @code{40},\n@code{160}, @code{80} and @code{255}.\nAllowed range is @code{[0, 255]}.\n\n"},{"names":["rf"],"info":""},{"names":["gf"],"info":""},{"names":["bf"],"info":""},{"names":["af"],"info":"Specify the red, green, blue and alpha fade. Default values are @code{15},\n@code{10}, @code{5} and @code{5}.\nAllowed range is @code{[0, 255]}.\n\n"},{"names":["zoom"],"info":"Set the zoom factor. Default value is @code{1}. 
Allowed range is @code{[0, 10]}.\nValues lower than @var{1} will auto-adjust the zoom factor to the maximal possible value.\n\n"},{"names":["draw"],"info":"Set the vectorscope drawing mode.\n\nAvailable values are:\n@item dot\nDraw dot for each sample.\n\n@item line\nDraw line between previous and current sample.\n\nDefault value is @samp{dot}.\n\n"},{"names":["scale"],"info":"Specify amplitude scale of audio samples.\n\nAvailable values are:\n@item lin\nLinear.\n\n@item sqrt\nSquare root.\n\n@item cbrt\nCubic root.\n\n@item log\nLogarithmic.\n\n"},{"names":["swap"],"info":"Swap left channel axis with right channel axis.\n\n"},{"names":["mirror"],"info":"Mirror axis.\n\n@item none\nNo mirror.\n\n@item x\nMirror only x axis.\n\n@item y\nMirror only y axis.\n\n@item xy\nMirror both axes.\n\n\n","examples":"@subsection Examples\n\n@itemize\n@item\nComplete example using @command{ffplay}:\n@example\nffplay -f lavfi 'amovie=input.mp3, asplit [a][out1];\n [a] avectorscope=zoom=1.3:rc=2:gc=200:bc=10:rf=1:gf=8:bf=7 [out0]'\n@end example\n@end itemize\n\n"}]},{"filtergroup":["bench","abench"],"info":"\nBenchmark part of a filtergraph.\n\n","options":[{"names":["action"],"info":"Start or stop a timer.\n\nAvailable values are:\n@item start\nGet the current time, set it as frame metadata (using the key\n@code{lavfi.bench.start_time}), and forward the frame to the next filter.\n\n@item stop\nGet the current time and fetch the @code{lavfi.bench.start_time} metadata from\nthe input frame metadata to get the time difference. Time difference, average,\nmaximum and minimum time (respectively @code{t}, @code{avg}, @code{max} and\n@code{min}) are then printed. 
The timestamps are expressed in seconds.\n\n","examples":"@subsection Examples\n\n@itemize\n@item\nBenchmark @ref{selectivecolor} filter:\n@example\nbench=start,selectivecolor=reds=-.2 .12 -.49,bench=stop\n@end example\n@end itemize\n\n"}]},{"filtergroup":["concat"],"info":"\nConcatenate audio and video streams, joining them together one after the\nother.\n\nThe filter works on segments of synchronized video and audio streams. All\nsegments must have the same number of streams of each type, and that will\nalso be the number of streams at output.\n\n","options":[{"names":["n"],"info":"Set the number of segments. Default is 2.\n\n"},{"names":["v"],"info":"Set the number of output video streams, that is also the number of video\nstreams in each segment. Default is 1.\n\n"},{"names":["a"],"info":"Set the number of output audio streams, that is also the number of audio\nstreams in each segment. Default is 0.\n\n"},{"names":["unsafe"],"info":"Activate unsafe mode: do not fail if segments have a different format.\n\n\nThe filter has @var{v}+@var{a} outputs: first @var{v} video outputs, then\n@var{a} audio outputs.\n\nThere are @var{n}x(@var{v}+@var{a}) inputs: first the inputs for the first\nsegment, in the same order as the outputs, then the inputs for the second\nsegment, etc.\n\nRelated streams do not always have exactly the same duration, for various\nreasons including codec frame size or sloppy authoring. For that reason,\nrelated synchronized streams (e.g. a video and its audio track) should be\nconcatenated at once. 
The concat filter will use the duration of the longest\nstream in each segment (except the last one), and if necessary pad shorter\naudio streams with silence.\n\nFor this filter to work correctly, all segments must start at timestamp 0.\n\nAll corresponding streams must have the same parameters in all segments; the\nfiltering system will automatically select a common pixel format for video\nstreams, and a common sample format, sample rate and channel layout for\naudio streams, but other settings, such as resolution, must be converted\nexplicitly by the user.\n\nDifferent frame rates are acceptable but will result in variable frame rate\nat output; be sure to configure the output file to handle it.\n\n","examples":"@subsection Examples\n\n@itemize\n@item\nConcatenate an opening, an episode and an ending, all in bilingual version\n(video in stream 0, audio in streams 1 and 2):\n@example\nffmpeg -i opening.mkv -i episode.mkv -i ending.mkv -filter_complex \\\n '[0:0] [0:1] [0:2] [1:0] [1:1] [1:2] [2:0] [2:1] [2:2]\n concat=n=3:v=1:a=2 [v] [a1] [a2]' \\\n -map '[v]' -map '[a1]' -map '[a2]' output.mkv\n@end example\n\n@item\nConcatenate two parts, handling audio and video separately, using the\n(a)movie sources, and adjusting the resolution:\n@example\nmovie=part1.mp4, scale=512:288 [v1] ; amovie=part1.mp4 [a1] ;\nmovie=part2.mp4, scale=512:288 [v2] ; amovie=part2.mp4 [a2] ;\n[v1] [v2] concat [outv] ; [a1] [a2] concat=v=0:a=1 [outa]\n@end example\nNote that a desync will happen at the stitch if the audio and video streams\ndo not have exactly the same duration in the first file.\n\n@end itemize\n\n@subsection Commands\n\nThis filter supports the following commands:\n@table @option\n@item next\nClose the current segment and step to the next one\n@end table\n\n@anchor{ebur128}\n"}]},{"filtergroup":["ebur128"],"info":"\nEBU R128 scanner filter. This filter takes an audio stream and analyzes its loudness\nlevel. 
By default, it logs a message at a frequency of 10Hz with the\nMomentary loudness (identified by @code{M}), Short-term loudness (@code{S}),\nIntegrated loudness (@code{I}) and Loudness Range (@code{LRA}).\n\nThe filter can only analyze streams which have a sampling rate of 48000 Hz and whose\nsample format is double-precision floating point. The input stream will be converted to\nthis specification, if needed. Users may need to insert aformat and/or aresample filters\nafter this filter to obtain the original parameters.\n\nThe filter also has a video output (see the @var{video} option) with a real\ntime graph to observe the loudness evolution. The graphic contains the logged\nmessage mentioned above, so it is not printed anymore when this option is set,\nunless the verbose logging is set. The main graphing area contains the\nshort-term loudness (3 seconds of analysis), and the gauge on the right is for\nthe momentary loudness (400 milliseconds), but can optionally be configured\nto instead display short-term loudness (see @var{gauge}).\n\nThe green area marks a +/- 1LU target range around the target loudness\n(-23LUFS by default, unless modified through @var{target}).\n\nMore information about the Loudness Recommendation EBU R128 is available at\n@url{http://tech.ebu.ch/loudness}.\n\n","options":[{"names":["video"],"info":"Activate the video output. The audio stream is passed unchanged whether this\noption is set or not. The video stream will be the first output stream if\nactivated. Default is @code{0}.\n\n"},{"names":["size"],"info":"Set the video size. This option is for video only. For the syntax of this\noption, check the\n@ref{video size syntax,,\"Video size\" section in the ffmpeg-utils manual,ffmpeg-utils}.\nDefault and minimum resolution is @code{640x480}.\n\n"},{"names":["meter"],"info":"Set the EBU scale meter. Default is @code{9}. Common values are @code{9} and\n@code{18}, respectively for EBU scale meter +9 and EBU scale meter +18. 
Any\nother integer value within this range is allowed.\n\n"},{"names":["metadata"],"info":"Set metadata injection. If set to @code{1}, the audio input will be segmented\ninto 100ms output frames, each of them containing various loudness information\nin metadata. All the metadata keys are prefixed with @code{lavfi.r128.}.\n\nDefault is @code{0}.\n\n"},{"names":["framelog"],"info":"Force the frame logging level.\n\nAvailable values are:\n@item info\ninformation logging level\n@item verbose\nverbose logging level\n\nBy default, the logging level is set to @var{info}. If the @option{video} or\nthe @option{metadata} options are set, it switches to @var{verbose}.\n\n"},{"names":["peak"],"info":"Set peak mode(s).\n\nAvailable modes can be cumulated (the option is a @code{flag} type). Possible\nvalues are:\n@item none\nDisable any peak mode (default).\n@item sample\nEnable sample-peak mode.\n\nSimple peak mode looking for the highest sample value. It logs a message\nfor sample-peak (identified by @code{SPK}).\n@item true\nEnable true-peak mode.\n\nIf enabled, the peak lookup is done on an over-sampled version of the input\nstream for better peak accuracy. It logs a message for true-peak\n(identified by @code{TPK}) and true-peak per frame (identified by @code{FTPK}).\nThis mode requires a build with @code{libswresample}.\n\n"},{"names":["dualmono"],"info":"Treat mono input files as \"dual mono\". 
If a mono file is intended for playback\non a stereo system, its EBU R128 measurement will be perceptually incorrect.\nIf set to @code{true}, this option will compensate for this effect.\nMulti-channel input files are not affected by this option.\n\n"},{"names":["panlaw"],"info":"Set a specific pan law to be used for the measurement of dual mono files.\nThis parameter is optional, and has a default value of -3.01dB.\n\n"},{"names":["target"],"info":"Set a specific target level (in LUFS) used as relative zero in the visualization.\nThis parameter is optional and has a default value of -23LUFS as specified\nby EBU R128. However, material published online may prefer a level of -16LUFS\n(e.g. for use with podcasts or video platforms).\n\n"},{"names":["gauge"],"info":"Set the value displayed by the gauge. Valid values are @code{momentary} and\n@code{shortterm}. By default the momentary value will be used, but in certain\nscenarios it may be more useful to observe the short term value instead (e.g.\nlive mixing).\n\n"},{"names":["scale"],"info":"Set the display scale for the loudness. Valid parameters are @code{absolute}\n(in LUFS) or @code{relative} (LU) relative to the target. 
This only affects the\nvideo output, not the summary or continuous log output.\n\n","examples":"@subsection Examples\n\n@itemize\n@item\nReal-time graph using @command{ffplay}, with an EBU scale meter +18:\n@example\nffplay -f lavfi -i \"amovie=input.mp3,ebur128=video=1:meter=18 [out0][out1]\"\n@end example\n\n@item\nRun an analysis with @command{ffmpeg}:\n@example\nffmpeg -nostats -i input.mp3 -filter_complex ebur128 -f null -\n@end example\n@end itemize\n\n"}]},{"filtergroup":["interleave","ainterleave"],"info":"\nTemporally interleave frames from several inputs.\n\n@code{interleave} works with video inputs, @code{ainterleave} with audio.\n\nThese filters read frames from several inputs and send the oldest\nqueued frame to the output.\n\nInput streams must have well defined, monotonically increasing frame\ntimestamp values.\n\nIn order to submit one frame to output, these filters need to enqueue\nat least one frame for each input, so they cannot work in case one\ninput is not yet terminated and will not receive incoming frames.\n\nFor example, consider the case when one input is a @code{select} filter\nwhich always drops input frames. 
The @code{interleave} filter will keep\nreading from that input, but it will never be able to send new frames\nto output until the input sends an end-of-stream signal.\n\nAlso, depending on input synchronization, the filters will drop\nframes in case one input receives more frames than the other ones, and\nthe queue is already filled.\n\nThese filters accept the following options:\n\n","options":[{"names":["nb_inputs","n"],"info":"Set the number of different inputs; it is 2 by default.\n\n","examples":"@subsection Examples\n\n@itemize\n@item\nInterleave frames belonging to different streams using @command{ffmpeg}:\n@example\nffmpeg -i bambi.avi -i pr0n.mkv -filter_complex \"[0:v][1:v] interleave\" out.avi\n@end example\n\n@item\nAdd flickering blur effect:\n@example\nselect='if(gt(random(0), 0.2), 1, 2)':n=2 [tmp], boxblur=2:2, [tmp] interleave\n@end example\n@end itemize\n\n"}]},{"filtergroup":["metadata","ametadata"],"info":"\nManipulate frame metadata.\n\nThis filter accepts the following options:\n\n","options":[{"names":["mode"],"info":"Set mode of operation of the filter.\n\nCan be one of the following:\n\n@item select\nIf both @code{value} and @code{key} are set, select frames\nwhich have such metadata. If only @code{key} is set, select\nevery frame that has such key in metadata.\n\n@item add\nAdd new metadata @code{key} and @code{value}. If the key is already available,\ndo nothing.\n\n@item modify\nModify value of already present key.\n\n@item delete\nIf @code{value} is set, delete only keys that have such value.\nOtherwise, delete key. If @code{key} is not set, delete all metadata values in\nthe frame.\n\n@item print\nPrint key and its value if metadata was found. If @code{key} is not set, print all\nmetadata values available in frame.\n\n"},{"names":["key"],"info":"Set key used with all modes. Must be set for all modes except @code{print} and @code{delete}.\n\n"},{"names":["value"],"info":"Set metadata value which will be used. 
This option is mandatory for\n@code{modify} and @code{add} mode.\n\n"},{"names":["function"],"info":"Which function to use when comparing metadata value and @code{value}.\n\nCan be one of the following:\n\n@item same_str\nValues are interpreted as strings, returns true if metadata value is the same as @code{value}.\n\n@item starts_with\nValues are interpreted as strings, returns true if metadata value starts with\nthe @code{value} option string.\n\n@item less\nValues are interpreted as floats, returns true if metadata value is less than @code{value}.\n\n@item equal\nValues are interpreted as floats, returns true if @code{value} is equal to the metadata value.\n\n@item greater\nValues are interpreted as floats, returns true if metadata value is greater than @code{value}.\n\n@item expr\nValues are interpreted as floats, returns true if expression from option @code{expr}\nevaluates to true.\n\n@item ends_with\nValues are interpreted as strings, returns true if metadata value ends with\nthe @code{value} option string.\n\n"},{"names":["expr"],"info":"Set expression which is used when @code{function} is set to @code{expr}.\nThe expression is evaluated through the eval API and can contain the following\nconstants:\n\n@item VALUE1\nFloat representation of @code{value} from metadata key.\n\n@item VALUE2\nFloat representation of @code{value} as supplied by user in @code{value} option.\n\n"},{"names":["file"],"info":"If specified in @code{print} mode, output is written to the named file. Instead of a\nplain filename, any writable URL can be specified. Filename ``-'' is a shorthand\nfor standard output. 
If the @code{file} option is not set, output is written to the log\nwith AV_LOG_INFO loglevel.\n\n\n","examples":"@subsection Examples\n\n@itemize\n@item\nPrint all metadata values for frames with key @code{lavfi.signalstats.YDIF} with values\nbetween 0 and 1.\n@example\nsignalstats,metadata=print:key=lavfi.signalstats.YDIF:value=0:function=expr:expr='between(VALUE1,0,1)'\n@end example\n@item\nPrint silencedetect output to file @file{metadata.txt}.\n@example\nsilencedetect,ametadata=mode=print:file=metadata.txt\n@end example\n@item\nDirect all metadata to a pipe with file descriptor 4.\n@example\nmetadata=mode=print:file='pipe\\:4'\n@end example\n@end itemize\n\n"}]},{"filtergroup":["perms","aperms"],"info":"\nSet read/write permissions for the output frames.\n\nThese filters are mainly aimed at developers to test the direct path in the\nfollowing filter in the filtergraph.\n\nThe filters accept the following options:\n\n","options":[{"names":["mode"],"info":"Select the permissions mode.\n\nIt accepts the following values:\n@item none\nDo nothing. This is the default.\n@item ro\nSet all the output frames read-only.\n@item rw\nSet all the output frames directly writable.\n@item toggle\nMake the frame read-only if writable, and writable if read-only.\n@item random\nSet each output frame read-only or writable randomly.\n\n"},{"names":["seed"],"info":"Set the seed for the @var{random} mode, must be an integer between\n@code{0} and @code{UINT32_MAX}. If not specified, or if explicitly set to\n@code{-1}, the filter will try to use a good random seed on a best effort\nbasis.\n\nNote: in case of an auto-inserted filter between the permission filter and the\nfollowing one, the permission might not be received as expected in that\nfollowing filter. 
Inserting a @ref{format} or @ref{aformat} filter before the\nperms/aperms filter can avoid this problem.\n\n"}]},{"filtergroup":["realtime","arealtime"],"info":"\nSlow down filtering to match real time approximately.\n\nThese filters will pause the filtering for a variable amount of time to\nmatch the output rate with the input timestamps.\nThey are similar to the @option{re} option to @code{ffmpeg}.\n\nThey accept the following options:\n\n","options":[{"names":["limit"],"info":"Time limit for the pauses. Any pause longer than that will be considered\na timestamp discontinuity and reset the timer. Default is 2 seconds.\n"},{"names":["speed"],"info":"Speed factor for processing. The value must be a float larger than zero.\nValues larger than 1.0 will result in faster than realtime processing,\nsmaller will slow processing down. The @var{limit} is automatically adapted\naccordingly. Default is 1.0.\n\nA processing speed faster than what is possible without these filters cannot\nbe achieved.\n\n@anchor{select}\n"}]},{"filtergroup":["select","aselect"],"info":"\nSelect frames to pass in output.\n\nThis filter accepts the following options:\n\n","options":[{"names":["expr","e"],"info":"Set expression, which is evaluated for each input frame.\n\nIf the expression is evaluated to zero, the frame is discarded.\n\nIf the evaluation result is negative or NaN, the frame is sent to the\nfirst output; otherwise it is sent to the output with index\n@code{ceil(val)-1}, assuming that the input index starts from 0.\n\nFor example a value of @code{1.2} corresponds to the output with index\n@code{ceil(1.2)-1 = 2-1 = 1}, that is the second output.\n\n"},{"names":["outputs","n"],"info":"Set the number of outputs. The output to which to send the selected\nframe is based on the result of the evaluation. 
Default value is 1.\n\nThe expression can contain the following constants:\n\n@item n\nThe (sequential) number of the filtered frame, starting from 0.\n\n@item selected_n\nThe (sequential) number of the selected frame, starting from 0.\n\n@item prev_selected_n\nThe sequential number of the last selected frame. It's NAN if undefined.\n\n@item TB\nThe timebase of the input timestamps.\n\n@item pts\nThe PTS (Presentation TimeStamp) of the filtered video frame,\nexpressed in @var{TB} units. It's NAN if undefined.\n\n@item t\nThe PTS of the filtered video frame,\nexpressed in seconds. It's NAN if undefined.\n\n@item prev_pts\nThe PTS of the previously filtered video frame. It's NAN if undefined.\n\n@item prev_selected_pts\nThe PTS of the last previously filtered video frame. It's NAN if undefined.\n\n@item prev_selected_t\nThe PTS of the last previously selected video frame, expressed in seconds. It's NAN if undefined.\n\n@item start_pts\nThe PTS of the first video frame in the video. It's NAN if undefined.\n\n@item start_t\nThe time of the first video frame in the video. It's NAN if undefined.\n\n@item pict_type @emph{(video only)}\nThe type of the filtered frame. It can assume one of the following\nvalues:\n@table @option\n@item I\n@item P\n@item B\n@item S\n@item SI\n@item SP\n@item BI\n\n"},{"names":["interlace_type","@emph{(video","only)}"],"info":"The frame interlace type. 
It can assume one of the following values:\n@item PROGRESSIVE\nThe frame is progressive (not interlaced).\n@item TOPFIRST\nThe frame is top-field-first.\n@item BOTTOMFIRST\nThe frame is bottom-field-first.\n\n"},{"names":["consumed_sample_n","@emph{(audio","only)}"],"info":"the number of selected samples before the current frame\n\n"},{"names":["samples_n","@emph{(audio","only)}"],"info":"the number of samples in the current frame\n\n"},{"names":["sample_rate","@emph{(audio","only)}"],"info":"the input sample rate\n\n"},{"names":["key"],"info":"This is 1 if the filtered frame is a key-frame, 0 otherwise.\n\n"},{"names":["pos"],"info":"the position in the file of the filtered frame, -1 if the information\nis not available (e.g. for synthetic video)\n\n"},{"names":["scene","@emph{(video","only)}"],"info":"value between 0 and 1 to indicate a new scene; a low value reflects a low\nprobability for the current frame to introduce a new scene, while a higher\nvalue means the current frame is more likely to be one (see the example below)\n\n"},{"names":["concatdec_select"],"info":"The concat demuxer can select only part of a concat input file by setting an\ninpoint and an outpoint, but the output packets may not be entirely contained\nin the selected interval. 
By using this variable, it is possible to skip frames\ngenerated by the concat demuxer which are not exactly contained in the selected\ninterval.\n\nThis works by comparing the frame pts against the @var{lavf.concat.start_time}\nand the @var{lavf.concat.duration} packet metadata values which are also\npresent in the decoded frames.\n\nThe @var{concatdec_select} variable is -1 if the frame pts is at least\nstart_time and either the duration metadata is missing or the frame pts is less\nthan start_time + duration, 0 otherwise, and NaN if the start_time metadata is\nmissing.\n\nThat basically means that an input frame is selected if its pts is within the\ninterval set by the concat demuxer.\n\n\nThe default value of the select expression is \"1\".\n\n","examples":"@subsection Examples\n\n@itemize\n@item\nSelect all frames in input:\n@example\nselect\n@end example\n\nThe example above is the same as:\n@example\nselect=1\n@end example\n\n@item\nSkip all frames:\n@example\nselect=0\n@end example\n\n@item\nSelect only I-frames:\n@example\nselect='eq(pict_type\\,I)'\n@end example\n\n@item\nSelect one frame every 100:\n@example\nselect='not(mod(n\\,100))'\n@end example\n\n@item\nSelect only frames contained in the 10-20 time interval:\n@example\nselect=between(t\\,10\\,20)\n@end example\n\n@item\nSelect only I-frames contained in the 10-20 time interval:\n@example\nselect=between(t\\,10\\,20)*eq(pict_type\\,I)\n@end example\n\n@item\nSelect frames with a minimum distance of 10 seconds:\n@example\nselect='isnan(prev_selected_t)+gte(t-prev_selected_t\\,10)'\n@end example\n\n@item\nUse aselect to select only audio frames with samples number > 100:\n@example\naselect='gt(samples_n\\,100)'\n@end example\n\n@item\nCreate a mosaic of the first scenes:\n@example\nffmpeg -i video.avi -vf select='gt(scene\\,0.4)',scale=160:120,tile -frames:v 1 preview.png\n@end example\n\nComparing @var{scene} against a value between 0.3 and 0.5 is generally a sane\nchoice.\n\n@item\nSend even and 
odd frames to separate outputs, and compose them:\n@example\nselect=n=2:e='mod(n, 2)+1' [odd][even]; [odd] pad=h=2*ih [tmp]; [tmp][even] overlay=y=h\n@end example\n\n@item\nSelect useful frames from an ffconcat file that uses inpoints and\noutpoints but whose source files are not intra-frame only.\n@example\nffmpeg -copyts -vsync 0 -segment_time_metadata 1 -i input.ffconcat -vf select=concatdec_select -af aselect=concatdec_select output.avi\n@end example\n@end itemize\n\n"}]},{"filtergroup":["sendcmd","asendcmd"],"info":"\nSend commands to filters in the filtergraph.\n\nThese filters read commands to be sent to other filters in the\nfiltergraph.\n\n@code{sendcmd} must be inserted between two video filters,\n@code{asendcmd} must be inserted between two audio filters, but apart\nfrom that they act the same way.\n\nThe specification of commands can be provided in the filter arguments\nwith the @var{commands} option, or in a file specified by the\n@var{filename} option.\n\nThese filters accept the following options:\n","options":[{"names":["commands","c"],"info":"Set the commands to be read and sent to the other filters.\n"},{"names":["filename","f"],"info":"Set the filename of the commands to be read and sent to the other\nfilters.\n\n@subsection Commands syntax\n\nA commands description consists of a sequence of interval\nspecifications, comprising a list of commands to be executed when a\nparticular event related to that interval occurs. 
The occurring event\nis typically the current frame time entering or leaving a given time\ninterval.\n\nAn interval is specified by the following syntax:\n@example\n@var{START}[-@var{END}] @var{COMMANDS};\n@end example\n\nThe time interval is specified by the @var{START} and @var{END} times.\n@var{END} is optional and defaults to the maximum time.\n\nThe current frame time is considered within the specified interval if\nit is included in the interval [@var{START}, @var{END}), that is when\nthe time is greater or equal to @var{START} and is lesser than\n@var{END}.\n\n@var{COMMANDS} consists of a sequence of one or more command\nspecifications, separated by \",\", relating to that interval. The\nsyntax of a command specification is given by:\n@example\n[@var{FLAGS}] @var{TARGET} @var{COMMAND} @var{ARG}\n@end example\n\n@var{FLAGS} is optional and specifies the type of events relating to\nthe time interval which enable sending the specified command, and must\nbe a non-null sequence of identifier flags separated by \"+\" or \"|\" and\nenclosed between \"[\" and \"]\".\n\nThe following flags are recognized:\n@item enter\nThe command is sent when the current frame timestamp enters the\nspecified interval. In other words, the command is sent when the\nprevious frame timestamp was not in the given interval, and the\ncurrent is.\n\n@item leave\nThe command is sent when the current frame timestamp leaves the\nspecified interval. 
In other words, the command is sent when the\nprevious frame timestamp was in the given interval, and the\ncurrent is not.\n\nIf @var{FLAGS} is not specified, a default value of @code{[enter]} is\nassumed.\n\n@var{TARGET} specifies the target of the command, usually the name of\nthe filter class or a specific filter instance name.\n\n@var{COMMAND} specifies the name of the command for the target filter.\n\n@var{ARG} is optional and specifies the optional list of argument for\nthe given @var{COMMAND}.\n\nBetween one interval specification and another, whitespaces, or\nsequences of characters starting with @code{#} until the end of line,\nare ignored and can be used to annotate comments.\n\nA simplified BNF description of the commands specification syntax\nfollows:\n@example\n@var{COMMAND_FLAG} ::= \"enter\" | \"leave\"\n@var{COMMAND_FLAGS} ::= @var{COMMAND_FLAG} [(+|\"|\")@var{COMMAND_FLAG}]\n@var{COMMAND} ::= [\"[\" @var{COMMAND_FLAGS} \"]\"] @var{TARGET} @var{COMMAND} [@var{ARG}]\n@var{COMMANDS} ::= @var{COMMAND} [,@var{COMMANDS}]\n@var{INTERVAL} ::= @var{START}[-@var{END}] @var{COMMANDS}\n@var{INTERVALS} ::= @var{INTERVAL}[;@var{INTERVALS}]\n@end example\n\n","examples":"@subsection Examples\n\n@itemize\n@item\nSpecify audio tempo change at second 4:\n@example\nasendcmd=c='4.0 atempo tempo 1.5',atempo\n@end example\n\n@item\nTarget a specific filter instance:\n@example\nasendcmd=c='4.0 atempo@@my tempo 1.5',atempo@@my\n@end example\n\n@item\nSpecify a list of drawtext and hue commands in a file.\n@example\n# show text in the interval 5-10\n5.0-10.0 [enter] drawtext reinit 'fontfile=FreeSerif.ttf:text=hello world',\n [leave] drawtext reinit 'fontfile=FreeSerif.ttf:text=';\n\n# desaturate the image in the interval 15-20\n15.0-20.0 [enter] hue s 0,\n [enter] drawtext reinit 'fontfile=FreeSerif.ttf:text=nocolor',\n [leave] hue s 1,\n [leave] drawtext reinit 'fontfile=FreeSerif.ttf:text=color';\n\n# apply an exponential saturation fade-out effect, starting from time 
25\n25 [enter] hue s exp(25-t)\n@end example\n\nA filtergraph that reads and processes the above command list\nstored in the file @file{test.cmd} can be specified with:\n@example\nsendcmd=f=test.cmd,drawtext=fontfile=FreeSerif.ttf:text='',hue\n@end example\n@end itemize\n\n@anchor{setpts}\n"}]},{"filtergroup":["setpts","asetpts"],"info":"\nChange the PTS (presentation timestamp) of the input frames.\n\n@code{setpts} works on video frames, @code{asetpts} on audio frames.\n\nThis filter accepts the following options:\n\n","options":[{"names":["expr"],"info":"The expression which is evaluated for each frame to construct its timestamp.\n\n\nThe expression is evaluated through the eval API and can contain the following\nconstants:\n\n@item FRAME_RATE, FR\nframe rate, only defined for constant frame-rate video\n\n@item PTS\nThe presentation timestamp in input\n\n@item N\nThe count of the input frame for video or the number of consumed samples,\nnot including the current frame for audio, starting from 0.\n\n@item NB_CONSUMED_SAMPLES\nThe number of consumed samples, not including the current frame (only\naudio)\n\n@item NB_SAMPLES, S\nThe number of samples in the current frame (only audio)\n\n@item SAMPLE_RATE, SR\nThe audio sample rate.\n\n@item STARTPTS\nThe PTS of the first frame.\n\n@item STARTT\nthe time in seconds of the first frame\n\n@item INTERLACED\nState whether the current frame is interlaced.\n\n@item T\nthe time in seconds of the current frame\n\n@item POS\noriginal position in the file of the frame, or undefined if undefined\nfor the current frame\n\n@item PREV_INPTS\nThe previous input PTS.\n\n@item PREV_INT\nprevious input time in seconds\n\n@item PREV_OUTPTS\nThe previous output PTS.\n\n@item PREV_OUTT\nprevious output time in seconds\n\n@item RTCTIME\nThe wallclock (RTC) time in microseconds. 
This is deprecated, use time(0)\ninstead.\n\n@item RTCSTART\nThe wallclock (RTC) time at the start of the movie in microseconds.\n\n@item TB\nThe timebase of the input timestamps.\n\n\n","examples":"@subsection Examples\n\n@itemize\n@item\nStart counting PTS from zero\n@example\nsetpts=PTS-STARTPTS\n@end example\n\n@item\nApply fast motion effect:\n@example\nsetpts=0.5*PTS\n@end example\n\n@item\nApply slow motion effect:\n@example\nsetpts=2.0*PTS\n@end example\n\n@item\nSet fixed rate of 25 frames per second:\n@example\nsetpts=N/(25*TB)\n@end example\n\n@item\nSet fixed rate 25 fps with some jitter:\n@example\nsetpts='1/(25*TB) * (N + 0.05 * sin(N*2*PI/25))'\n@end example\n\n@item\nApply an offset of 10 seconds to the input PTS:\n@example\nsetpts=PTS+10/TB\n@end example\n\n@item\nGenerate timestamps from a \"live source\" and rebase onto the current timebase:\n@example\nsetpts='(RTCTIME - RTCSTART) / (TB * 1000000)'\n@end example\n\n@item\nGenerate timestamps by counting samples:\n@example\nasetpts=N/SR/TB\n@end example\n\n@end itemize\n\n"}]},{"filtergroup":["setrange"],"info":"\nForce color range for the output video frame.\n\nThe @code{setrange} filter marks the color range property for the\noutput frames. 
It does not change the input frame, but only sets the\ncorresponding property, which affects how the frame is treated by\nfollowing filters.\n\n","options":[{"names":["range"],"info":"Available values are:\n\n@item auto\nKeep the same color range property.\n\n@item unspecified, unknown\nSet the color range as unspecified.\n\n@item limited, tv, mpeg\nSet the color range as limited.\n\n@item full, pc, jpeg\nSet the color range as full.\n\n"}]},{"filtergroup":["settb","asettb"],"info":"\nSet the timebase to use for the output frames timestamps.\nIt is mainly useful for testing timebase configuration.\n\nIt accepts the following parameters:\n\n","options":[{"names":["expr","tb"],"info":"The expression which is evaluated into the output timebase.\n\n\nThe value for @option{tb} is an arithmetic expression representing a\nrational. The expression can contain the constants \"AVTB\" (the default\ntimebase), \"intb\" (the input timebase) and \"sr\" (the sample rate,\naudio only). Default value is \"intb\".\n\n","examples":"@subsection Examples\n\n@itemize\n@item\nSet the timebase to 1/25:\n@example\nsettb=expr=1/25\n@end example\n\n@item\nSet the timebase to 1/10:\n@example\nsettb=expr=0.1\n@end example\n\n@item\nSet the timebase to 1001/1000:\n@example\nsettb=1+0.001\n@end example\n\n@item\nSet the timebase to 2*intb:\n@example\nsettb=2*intb\n@end example\n\n@item\nSet the default timebase value:\n@example\nsettb=AVTB\n@end example\n@end itemize\n\n"}]},{"filtergroup":["showcqt"],"info":"Convert input audio to a video output representing frequency spectrum\nlogarithmically using Brown-Puckette constant Q transform algorithm with\ndirect frequency domain coefficient calculation (but the transform itself\nis not really constant Q, instead the Q factor is actually variable/clamped),\nwith musical tone scale, from E0 to D#10.\n\n","options":[{"names":["size","s"],"info":"Specify the video size for the output. It must be even. 
For the syntax of this option,\ncheck the @ref{video size syntax,,\"Video size\" section in the ffmpeg-utils manual,ffmpeg-utils}.\nDefault value is @code{1920x1080}.\n\n"},{"names":["fps","rate","r"],"info":"Set the output frame rate. Default value is @code{25}.\n\n"},{"names":["bar_h"],"info":"Set the bargraph height. It must be even. Default value is @code{-1} which\ncomputes the bargraph height automatically.\n\n"},{"names":["axis_h"],"info":"Set the axis height. It must be even. Default value is @code{-1} which computes\nthe axis height automatically.\n\n"},{"names":["sono_h"],"info":"Set the sonogram height. It must be even. Default value is @code{-1} which\ncomputes the sonogram height automatically.\n\n"},{"names":["fullhd"],"info":"Set the fullhd resolution. This option is deprecated, use @var{size}, @var{s}\ninstead. Default value is @code{1}.\n\n"},{"names":["sono_v","volume"],"info":"Specify the sonogram volume expression. It can contain variables:\n@item bar_v\nthe @var{bar_v} evaluated expression\n@item frequency, freq, f\nthe frequency where it is evaluated\n@item timeclamp, tc\nthe value of @var{timeclamp} option\nand functions:\n@item a_weighting(f)\nA-weighting of equal loudness\n@item b_weighting(f)\nB-weighting of equal loudness\n@item c_weighting(f)\nC-weighting of equal loudness.\nDefault value is @code{16}.\n\n"},{"names":["bar_v","volume2"],"info":"Specify the bargraph volume expression. It can contain variables:\n@item sono_v\nthe @var{sono_v} evaluated expression\n@item frequency, freq, f\nthe frequency where it is evaluated\n@item timeclamp, tc\nthe value of @var{timeclamp} option\nand functions:\n@item a_weighting(f)\nA-weighting of equal loudness\n@item b_weighting(f)\nB-weighting of equal loudness\n@item c_weighting(f)\nC-weighting of equal loudness.\nDefault value is @code{sono_v}.\n\n"},{"names":["sono_g","gamma"],"info":"Specify the sonogram gamma. 
A lower gamma makes the spectrum more contrasty,\nwhile a higher gamma gives it a wider range. Default value is @code{3}.\nAcceptable range is @code{[1, 7]}.\n\n"},{"names":["bar_g","gamma2"],"info":"Specify the bargraph gamma. Default value is @code{1}. Acceptable range is\n@code{[1, 7]}.\n\n"},{"names":["bar_t"],"info":"Specify the bargraph transparency level. A lower value makes the bargraph sharper.\nDefault value is @code{1}. Acceptable range is @code{[0, 1]}.\n\n"},{"names":["timeclamp","tc"],"info":"Specify the transform timeclamp. At low frequencies, there is a trade-off between\naccuracy in the time domain and in the frequency domain. If timeclamp is lower,\nevents in the time domain are represented more accurately (such as a fast bass drum),\notherwise events in the frequency domain are represented more accurately\n(such as a bass guitar). Acceptable range is @code{[0.002, 1]}. Default value is @code{0.17}.\n\n"},{"names":["attack"],"info":"Set attack time in seconds. The default is @code{0} (disabled). Otherwise, it\nlimits future samples by applying asymmetric windowing in the time domain, useful\nwhen low latency is required. Accepted range is @code{[0, 1]}.\n\n"},{"names":["basefreq"],"info":"Specify the transform base frequency. Default value is @code{20.01523126408007475},\nwhich is the frequency 50 cents below E0. Acceptable range is @code{[10, 100000]}.\n\n"},{"names":["endfreq"],"info":"Specify the transform end frequency. Default value is @code{20495.59681441799654},\nwhich is the frequency 50 cents above D#10. Acceptable range is @code{[10, 100000]}.\n\n"},{"names":["coeffclamp"],"info":"This option is deprecated and ignored.\n\n"},{"names":["tlength"],"info":"Specify the transform length in time domain. 
Use this option to control the accuracy\ntrade-off between the time domain and the frequency domain at every frequency sample.\nIt can contain variables:\n@item frequency, freq, f\nthe frequency where it is evaluated\n@item timeclamp, tc\nthe value of @var{timeclamp} option.\nDefault value is @code{384*tc/(384+tc*f)}.\n\n"},{"names":["count"],"info":"Specify the transform count for every video frame. Default value is @code{6}.\nAcceptable range is @code{[1, 30]}.\n\n"},{"names":["fcount"],"info":"Specify the transform count for every single pixel. Default value is @code{0},\nwhich makes it computed automatically. Acceptable range is @code{[0, 10]}.\n\n"},{"names":["fontfile"],"info":"Specify font file for use with freetype to draw the axis. If not specified,\nthe embedded font is used. Note that drawing with a font file or the embedded font is not\nimplemented with custom @var{basefreq} and @var{endfreq}; use the @var{axisfile}\noption instead.\n\n"},{"names":["font"],"info":"Specify fontconfig pattern. This has lower priority than @var{fontfile}. The\n@code{:} in the pattern may be replaced by @code{|} to avoid unnecessary\nescaping.\n\n"},{"names":["fontcolor"],"info":"Specify font color expression. This is an arithmetic expression that should return\nan integer value 0xRRGGBB. It can contain variables:\n@item frequency, freq, f\nthe frequency where it is evaluated\n@item timeclamp, tc\nthe value of @var{timeclamp} option\nand functions:\n@item midi(f)\nthe MIDI number of frequency f; some MIDI numbers: E0(16), C1(24), C2(36), A4(69)\n@item r(x), g(x), b(x)\nred, green, and blue value of intensity x.\nDefault value is @code{st(0, (midi(f)-59.5)/12);\nst(1, if(between(ld(0),0,1), 0.5-0.5*cos(2*PI*ld(0)), 0));\nr(1-ld(1)) + b(ld(1))}.\n\n"},{"names":["axisfile"],"info":"Specify image file to draw the axis. This option overrides the @var{fontfile} and\n@var{fontcolor} options.\n\n"},{"names":["axis","text"],"info":"Enable/disable drawing text to the axis. 
If it is set to @code{0}, drawing to\nthe axis is disabled, ignoring @var{fontfile} and @var{axisfile} option.\nDefault value is @code{1}.\n\n"},{"names":["csp"],"info":"Set colorspace. The accepted values are:\n@item unspecified\nUnspecified (default)\n\n@item bt709\nBT.709\n\n@item fcc\nFCC\n\n@item bt470bg\nBT.470BG or BT.601-6 625\n\n@item smpte170m\nSMPTE-170M or BT.601-6 525\n\n@item smpte240m\nSMPTE-240M\n\n@item bt2020ncl\nBT.2020 with non-constant luminance\n\n\n"},{"names":["cscheme"],"info":"Set spectrogram color scheme. This is list of floating point values with format\n@code{left_r|left_g|left_b|right_r|right_g|right_b}.\nThe default is @code{1|0.5|0|0|0.5|1}.\n\n\n","examples":"@subsection Examples\n\n@itemize\n@item\nPlaying audio while showing the spectrum:\n@example\nffplay -f lavfi 'amovie=a.mp3, asplit [a][out1]; [a] showcqt [out0]'\n@end example\n\n@item\nSame as above, but with frame rate 30 fps:\n@example\nffplay -f lavfi 'amovie=a.mp3, asplit [a][out1]; [a] showcqt=fps=30:count=5 [out0]'\n@end example\n\n@item\nPlaying at 1280x720:\n@example\nffplay -f lavfi 'amovie=a.mp3, asplit [a][out1]; [a] showcqt=s=1280x720:count=4 [out0]'\n@end example\n\n@item\nDisable sonogram display:\n@example\nsono_h=0\n@end example\n\n@item\nA1 and its harmonics: A1, A2, (near)E3, A3:\n@example\nffplay -f lavfi 'aevalsrc=0.1*sin(2*PI*55*t)+0.1*sin(4*PI*55*t)+0.1*sin(6*PI*55*t)+0.1*sin(8*PI*55*t),\n asplit[a][out1]; [a] showcqt [out0]'\n@end example\n\n@item\nSame as above, but with more accuracy in frequency domain:\n@example\nffplay -f lavfi 'aevalsrc=0.1*sin(2*PI*55*t)+0.1*sin(4*PI*55*t)+0.1*sin(6*PI*55*t)+0.1*sin(8*PI*55*t),\n asplit[a][out1]; [a] showcqt=timeclamp=0.5 [out0]'\n@end example\n\n@item\nCustom volume:\n@example\nbar_v=10:sono_v=bar_v*a_weighting(f)\n@end example\n\n@item\nCustom gamma, now spectrum is linear to the amplitude.\n@example\nbar_g=2:sono_g=2\n@end example\n\n@item\nCustom tlength equation:\n@example\ntc=0.33:tlength='st(0,0.17); 
384*tc / (384 / ld(0) + tc*f /(1-ld(0))) + 384*tc / (tc*f / ld(0) + 384 /(1-ld(0)))'\n@end example\n\n@item\nCustom fontcolor and fontfile, C-note is colored green, others are colored blue:\n@example\nfontcolor='if(mod(floor(midi(f)+0.5),12), 0x0000FF, g(1))':fontfile=myfont.ttf\n@end example\n\n@item\nCustom font using fontconfig:\n@example\nfont='Courier New,Monospace,mono|bold'\n@end example\n\n@item\nCustom frequency range with custom axis using image file:\n@example\naxisfile=myaxis.png:basefreq=40:endfreq=10000\n@end example\n@end itemize\n\n"}]},{"filtergroup":["showfreqs"],"info":"\nConvert input audio to video output representing the audio power spectrum.\nAudio amplitude is on Y-axis while frequency is on X-axis.\n\n","options":[{"names":["size","s"],"info":"Specify size of video. For the syntax of this option, check the\n@ref{video size syntax,,\"Video size\" section in the ffmpeg-utils manual,ffmpeg-utils}.\nDefault is @code{1024x512}.\n\n"},{"names":["mode"],"info":"Set display mode.\nThis set how each frequency bin will be represented.\n\nIt accepts the following values:\n@item line\n@item bar\n@item dot\nDefault is @code{bar}.\n\n"},{"names":["ascale"],"info":"Set amplitude scale.\n\nIt accepts the following values:\n@item lin\nLinear scale.\n\n@item sqrt\nSquare root scale.\n\n@item cbrt\nCubic root scale.\n\n@item log\nLogarithmic scale.\nDefault is @code{log}.\n\n"},{"names":["fscale"],"info":"Set frequency scale.\n\nIt accepts the following values:\n@item lin\nLinear scale.\n\n@item log\nLogarithmic scale.\n\n@item rlog\nReverse logarithmic scale.\nDefault is @code{lin}.\n\n"},{"names":["win_size"],"info":"Set window size. 
Allowed range is from 16 to 65536.\n\nDefault is @code{2048}\n\n"},{"names":["win_func"],"info":"Set windowing function.\n\nIt accepts the following values:\n@item rect\n@item bartlett\n@item hanning\n@item hamming\n@item blackman\n@item welch\n@item flattop\n@item bharris\n@item bnuttall\n@item bhann\n@item sine\n@item nuttall\n@item lanczos\n@item gauss\n@item tukey\n@item dolph\n@item cauchy\n@item parzen\n@item poisson\n@item bohman\nDefault is @code{hanning}.\n\n"},{"names":["overlap"],"info":"Set window overlap. In range @code{[0, 1]}. Default is @code{1},\nwhich means optimal overlap for selected window function will be picked.\n\n"},{"names":["averaging"],"info":"Set time averaging. Setting this to 0 will display current maximal peaks.\nDefault is @code{1}, which means time averaging is disabled.\n\n"},{"names":["colors"],"info":"Specify list of colors separated by space or by '|' which will be used to\ndraw channel frequencies. Unrecognized or missing colors will be replaced\nby white color.\n\n"},{"names":["cmode"],"info":"Set channel display mode.\n\nIt accepts the following values:\n@item combined\n@item separate\nDefault is @code{combined}.\n\n"},{"names":["minamp"],"info":"Set minimum amplitude used in @code{log} amplitude scaler.\n\n\n"}]},{"filtergroup":["showspatial"],"info":"\nConvert stereo input audio to a video output, representing the spatial relationship\nbetween two channels.\n\n","options":[{"names":["size","s"],"info":"Specify the video size for the output. For the syntax of this option, check the\n@ref{video size syntax,,\"Video size\" section in the ffmpeg-utils manual,ffmpeg-utils}.\nDefault value is @code{512x512}.\n\n"},{"names":["win_size"],"info":"Set window size. Allowed range is from @var{1024} to @var{65536}. 
Default size is @var{4096}.\n\n"},{"names":["win_func"],"info":"Set window function.\n\nIt accepts the following values:\n@item rect\n@item bartlett\n@item hann\n@item hanning\n@item hamming\n@item blackman\n@item welch\n@item flattop\n@item bharris\n@item bnuttall\n@item bhann\n@item sine\n@item nuttall\n@item lanczos\n@item gauss\n@item tukey\n@item dolph\n@item cauchy\n@item parzen\n@item poisson\n@item bohman\n\nDefault value is @code{hann}.\n\n"},{"names":["overlap"],"info":"Set ratio of overlap window. Default value is @code{0.5}.\nWhen value is @code{1} overlap is set to recommended size for specific\nwindow function currently used.\n\n@anchor{showspectrum}\n"}]},{"filtergroup":["showspectrum"],"info":"\nConvert input audio to a video output, representing the audio frequency\nspectrum.\n\n","options":[{"names":["size","s"],"info":"Specify the video size for the output. For the syntax of this option, check the\n@ref{video size syntax,,\"Video size\" section in the ffmpeg-utils manual,ffmpeg-utils}.\nDefault value is @code{640x512}.\n\n"},{"names":["slide"],"info":"Specify how the spectrum should slide along the window.\n\nIt accepts the following values:\n@item replace\nthe samples start again on the left when they reach the right\n@item scroll\nthe samples scroll from right to left\n@item fullframe\nframes are only produced when the samples reach the right\n@item rscroll\nthe samples scroll from left to right\n\nDefault value is @code{replace}.\n\n"},{"names":["mode"],"info":"Specify display mode.\n\nIt accepts the following values:\n@item combined\nall channels are displayed in the same row\n@item separate\nall channels are displayed in separate rows\n\nDefault value is @samp{combined}.\n\n"},{"names":["color"],"info":"Specify display color mode.\n\nIt accepts the following values:\n@item channel\neach channel is displayed in a separate color\n@item intensity\neach channel is displayed using the same color scheme\n@item rainbow\neach channel is displayed 
using the rainbow color scheme\n@item moreland\neach channel is displayed using the moreland color scheme\n@item nebulae\neach channel is displayed using the nebulae color scheme\n@item fire\neach channel is displayed using the fire color scheme\n@item fiery\neach channel is displayed using the fiery color scheme\n@item fruit\neach channel is displayed using the fruit color scheme\n@item cool\neach channel is displayed using the cool color scheme\n@item magma\neach channel is displayed using the magma color scheme\n@item green\neach channel is displayed using the green color scheme\n@item viridis\neach channel is displayed using the viridis color scheme\n@item plasma\neach channel is displayed using the plasma color scheme\n@item cividis\neach channel is displayed using the cividis color scheme\n@item terrain\neach channel is displayed using the terrain color scheme\n\nDefault value is @samp{channel}.\n\n"},{"names":["scale"],"info":"Specify scale used for calculating intensity color values.\n\nIt accepts the following values:\n@item lin\nlinear\n@item sqrt\nsquare root, default\n@item cbrt\ncubic root\n@item log\nlogarithmic\n@item 4thrt\n4th root\n@item 5thrt\n5th root\n\nDefault value is @samp{sqrt}.\n\n"},{"names":["fscale"],"info":"Specify frequency scale.\n\nIt accepts the following values:\n@item lin\nlinear\n@item log\nlogarithmic\n\nDefault value is @samp{lin}.\n\n"},{"names":["saturation"],"info":"Set saturation modifier for displayed colors. Negative values provide\nalternative color scheme. 
@code{0} is no saturation at all.\nSaturation must be in [-10.0, 10.0] range.\nDefault value is @code{1}.\n\n"},{"names":["win_func"],"info":"Set window function.\n\nIt accepts the following values:\n@item rect\n@item bartlett\n@item hann\n@item hanning\n@item hamming\n@item blackman\n@item welch\n@item flattop\n@item bharris\n@item bnuttall\n@item bhann\n@item sine\n@item nuttall\n@item lanczos\n@item gauss\n@item tukey\n@item dolph\n@item cauchy\n@item parzen\n@item poisson\n@item bohman\n\nDefault value is @code{hann}.\n\n"},{"names":["orientation"],"info":"Set orientation of time vs frequency axis. Can be @code{vertical} or\n@code{horizontal}. Default is @code{vertical}.\n\n"},{"names":["overlap"],"info":"Set ratio of overlap window. Default value is @code{0}.\nWhen value is @code{1} overlap is set to recommended size for specific\nwindow function currently used.\n\n"},{"names":["gain"],"info":"Set scale gain for calculating intensity color values.\nDefault value is @code{1}.\n\n"},{"names":["data"],"info":"Set which data to display. Can be @code{magnitude}, default or @code{phase}.\n\n"},{"names":["rotation"],"info":"Set color rotation, must be in [-1.0, 1.0] range.\nDefault value is @code{0}.\n\n"},{"names":["start"],"info":"Set start frequency from which to display spectrogram. Default is @code{0}.\n\n"},{"names":["stop"],"info":"Set stop frequency to which to display spectrogram. Default is @code{0}.\n\n"},{"names":["fps"],"info":"Set upper frame rate limit. Default is @code{auto}, unlimited.\n\n"},{"names":["legend"],"info":"Draw time and frequency axes and legends. 
Default is disabled.\n\nThe usage is very similar to the showwaves filter; see the examples in that\nsection.\n\n","examples":"@subsection Examples\n\n@itemize\n@item\nLarge window with logarithmic color scaling:\n@example\nshowspectrum=s=1280x480:scale=log\n@end example\n\n@item\nComplete example for a colored and sliding spectrum per channel using @command{ffplay}:\n@example\nffplay -f lavfi 'amovie=input.mp3, asplit [a][out1];\n [a] showspectrum=mode=separate:color=intensity:slide=1:scale=cbrt [out0]'\n@end example\n@end itemize\n\n"}]},{"filtergroup":["showspectrumpic"],"info":"\nConvert input audio to a single video frame, representing the audio frequency\nspectrum.\n\n","options":[{"names":["size","s"],"info":"Specify the video size for the output. For the syntax of this option, check the\n@ref{video size syntax,,\"Video size\" section in the ffmpeg-utils manual,ffmpeg-utils}.\nDefault value is @code{4096x2048}.\n\n"},{"names":["mode"],"info":"Specify display mode.\n\nIt accepts the following values:\n@item combined\nall channels are displayed in the same row\n@item separate\nall channels are displayed in separate rows\nDefault value is @samp{combined}.\n\n"},{"names":["color"],"info":"Specify display color mode.\n\nIt accepts the following values:\n@item channel\neach channel is displayed in a separate color\n@item intensity\neach channel is displayed using the same color scheme\n@item rainbow\neach channel is displayed using the rainbow color scheme\n@item moreland\neach channel is displayed using the moreland color scheme\n@item nebulae\neach channel is displayed using the nebulae color scheme\n@item fire\neach channel is displayed using the fire color scheme\n@item fiery\neach channel is displayed using the fiery color scheme\n@item fruit\neach channel is displayed using the fruit color scheme\n@item cool\neach channel is displayed using the cool color scheme\n@item magma\neach channel is displayed using the magma color scheme\n@item green\neach channel 
is displayed using the green color scheme\n@item viridis\neach channel is displayed using the viridis color scheme\n@item plasma\neach channel is displayed using the plasma color scheme\n@item cividis\neach channel is displayed using the cividis color scheme\n@item terrain\neach channel is displayed using the terrain color scheme\nDefault value is @samp{intensity}.\n\n"},{"names":["scale"],"info":"Specify scale used for calculating intensity color values.\n\nIt accepts the following values:\n@item lin\nlinear\n@item sqrt\nsquare root, default\n@item cbrt\ncubic root\n@item log\nlogarithmic\n@item 4thrt\n4th root\n@item 5thrt\n5th root\nDefault value is @samp{log}.\n\n"},{"names":["fscale"],"info":"Specify frequency scale.\n\nIt accepts the following values:\n@item lin\nlinear\n@item log\nlogarithmic\n\nDefault value is @samp{lin}.\n\n"},{"names":["saturation"],"info":"Set saturation modifier for displayed colors. Negative values provide\nalternative color scheme. @code{0} is no saturation at all.\nSaturation must be in [-10.0, 10.0] range.\nDefault value is @code{1}.\n\n"},{"names":["win_func"],"info":"Set window function.\n\nIt accepts the following values:\n@item rect\n@item bartlett\n@item hann\n@item hanning\n@item hamming\n@item blackman\n@item welch\n@item flattop\n@item bharris\n@item bnuttall\n@item bhann\n@item sine\n@item nuttall\n@item lanczos\n@item gauss\n@item tukey\n@item dolph\n@item cauchy\n@item parzen\n@item poisson\n@item bohman\nDefault value is @code{hann}.\n\n"},{"names":["orientation"],"info":"Set orientation of time vs frequency axis. Can be @code{vertical} or\n@code{horizontal}. Default is @code{vertical}.\n\n"},{"names":["gain"],"info":"Set scale gain for calculating intensity color values.\nDefault value is @code{1}.\n\n"},{"names":["legend"],"info":"Draw time and frequency axes and legends. 
Default is enabled.\n\n"},{"names":["rotation"],"info":"Set color rotation, must be in [-1.0, 1.0] range.\nDefault value is @code{0}.\n\n"},{"names":["start"],"info":"Set start frequency from which to display spectrogram. Default is @code{0}.\n\n"},{"names":["stop"],"info":"Set stop frequency to which to display spectrogram. Default is @code{0}.\n\n","examples":"@subsection Examples\n\n@itemize\n@item\nExtract an audio spectrogram of a whole audio track\nin a 1024x1024 picture using @command{ffmpeg}:\n@example\nffmpeg -i audio.flac -lavfi showspectrumpic=s=1024x1024 spectrogram.png\n@end example\n@end itemize\n\n"}]},{"filtergroup":["showvolume"],"info":"\nConvert input audio volume to a video output.\n\n","options":[{"names":["rate","r"],"info":"Set video rate.\n\n"},{"names":["b"],"info":"Set border width, allowed range is [0, 5]. Default is 1.\n\n"},{"names":["w"],"info":"Set channel width, allowed range is [80, 8192]. Default is 400.\n\n"},{"names":["h"],"info":"Set channel height, allowed range is [1, 900]. Default is 20.\n\n"},{"names":["f"],"info":"Set fade, allowed range is [0, 1]. Default is 0.95.\n\n"},{"names":["c"],"info":"Set volume color expression.\n\nThe expression can use the following variables:\n\n@item VOLUME\nCurrent max volume of channel in dB.\n\n@item PEAK\nCurrent peak.\n\n@item CHANNEL\nCurrent channel number, starting from 0.\n\n"},{"names":["t"],"info":"If set, displays channel names. Default is enabled.\n\n"},{"names":["v"],"info":"If set, displays volume values. Default is enabled.\n\n"},{"names":["o"],"info":"Set orientation, can be horizontal: @code{h} or vertical: @code{v},\ndefault is @code{h}.\n\n"},{"names":["s"],"info":"Set step size, allowed range is [0, 5]. Default is 0, which means\nstep is disabled.\n\n"},{"names":["p"],"info":"Set background opacity, allowed range is [0, 1]. 
Default is 0.\n\n"},{"names":["m"],"info":"Set metering mode, can be peak: @code{p} or rms: @code{r},\ndefault is @code{p}.\n\n"},{"names":["ds"],"info":"Set display scale, can be linear: @code{lin} or log: @code{log},\ndefault is @code{lin}.\n\n"},{"names":["dm"],"info":"In seconds.\nIf set to a value greater than 0, display a line for the max level\nreached in the previous seconds.\nDefault is disabled: @code{0.}\n\n"},{"names":["dmc"],"info":"The color of the max line. Use when the @code{dm} option is set to a value greater than 0.\nDefault is @code{orange}.\n\n"}]},{"filtergroup":["showwaves"],"info":"\nConvert input audio to a video output, representing the sample waves.\n\n","options":[{"names":["size","s"],"info":"Specify the video size for the output. For the syntax of this option, check the\n@ref{video size syntax,,\"Video size\" section in the ffmpeg-utils manual,ffmpeg-utils}.\nDefault value is @code{600x240}.\n\n"},{"names":["mode"],"info":"Set display mode.\n\nAvailable values are:\n@item point\nDraw a point for each sample.\n\n@item line\nDraw a vertical line for each sample.\n\n@item p2p\nDraw a point for each sample and a line between them.\n\n@item cline\nDraw a centered vertical line for each sample.\n\nDefault value is @code{point}.\n\n"},{"names":["n"],"info":"Set the number of samples which are printed on the same column. A\nlarger value will decrease the frame rate. Must be a positive\ninteger. This option can be set only if the value for @var{rate}\nis not explicitly specified.\n\n"},{"names":["rate","r"],"info":"Set the (approximate) output frame rate. This is done by setting the\noption @var{n}. Default value is \"25\".\n\n"},{"names":["split_channels"],"info":"Set if channels should be drawn separately or overlap. 
Default value is 0.\n\n"},{"names":["colors"],"info":"Set colors separated by '|' which are going to be used for drawing each channel.\n\n"},{"names":["scale"],"info":"Set amplitude scale.\n\nAvailable values are:\n@item lin\nLinear.\n\n@item log\nLogarithmic.\n\n@item sqrt\nSquare root.\n\n@item cbrt\nCubic root.\n\nDefault is linear.\n\n"},{"names":["draw"],"info":"Set the draw mode. This is mostly useful to set for high @var{n}.\n\nAvailable values are:\n@item scale\nScale pixel values for each drawn sample.\n\n@item full\nDraw every sample directly.\n\nDefault value is @code{scale}.\n\n","examples":"@subsection Examples\n\n@itemize\n@item\nOutput the input file audio and the corresponding video representation\nat the same time:\n@example\namovie=a.mp3,asplit[out0],showwaves[out1]\n@end example\n\n@item\nCreate a synthetic signal and show it with showwaves, forcing a\nframe rate of 30 frames per second:\n@example\naevalsrc=sin(1*2*PI*t)*sin(880*2*PI*t):cos(2*PI*200*t),asplit[out0],showwaves=r=30[out1]\n@end example\n@end itemize\n\n"}]},{"filtergroup":["showwavespic"],"info":"\nConvert input audio to a single video frame, representing the sample waves.\n\n","options":[{"names":["size","s"],"info":"Specify the video size for the output. For the syntax of this option, check the\n@ref{video size syntax,,\"Video size\" section in the ffmpeg-utils manual,ffmpeg-utils}.\nDefault value is @code{600x240}.\n\n"},{"names":["split_channels"],"info":"Set if channels should be drawn separately or overlap. 
Default value is 0.\n\n"},{"names":["colors"],"info":"Set colors separated by '|' which are going to be used for drawing of each channel.\n\n"},{"names":["scale"],"info":"Set amplitude scale.\n\nAvailable values are:\n@item lin\nLinear.\n\n@item log\nLogarithmic.\n\n@item sqrt\nSquare root.\n\n@item cbrt\nCubic root.\n\nDefault is linear.\n\n"},{"names":["draw"],"info":"Set the draw mode.\n\nAvailable values are:\n@item scale\nScale pixel values for each drawn sample.\n\n@item full\nDraw every sample directly.\n\nDefault value is @code{scale}.\n\n","examples":"@subsection Examples\n\n@itemize\n@item\nExtract a channel split representation of the wave form of a whole audio track\nin a 1024x800 picture using @command{ffmpeg}:\n@example\nffmpeg -i audio.flac -lavfi showwavespic=split_channels=1:s=1024x800 waveform.png\n@end example\n@end itemize\n\n"}]},{"filtergroup":["sidedata","asidedata"],"info":"\nDelete frame side data, or select frames based on it.\n\nThis filter accepts the following options:\n\n","options":[{"names":["mode"],"info":"Set mode of operation of the filter.\n\nCan be one of the following:\n\n@item select\nSelect every frame with side data of @code{type}.\n\n@item delete\nDelete side data of @code{type}. If @code{type} is not set, delete all side\ndata in the frame.\n\n\n"},{"names":["type"],"info":"Set side data type used with all modes. Must be set for @code{select} mode. For\nthe list of frame side data types, refer to the @code{AVFrameSideDataType} enum\nin @file{libavutil/frame.h}. 
For example, to choose\n@code{AV_FRAME_DATA_PANSCAN} side data, you must specify @code{PANSCAN}.\n\n\n"}]},{"filtergroup":["spectrumsynth"],"info":"\nSynthesize audio from two input video spectrums; the first input stream represents\nmagnitude across time and the second represents phase across time.\nThe filter transforms from the frequency domain as displayed in the videos back\nto the time domain as presented in the audio output.\n\nThis filter is primarily intended for reversing processed @ref{showspectrum}\nfilter outputs, but it can synthesize sound from other spectrograms too.\nIn that case, however, results will be poor if phase data is not\navailable, because the phase then has to be recreated, usually\nfrom random noise.\nFor best results use gray-only output (@code{channel} color mode in the\n@ref{showspectrum} filter), @code{log} scale for the magnitude video and\n@code{lin} scale for the phase video. To produce the phase for the second video, use the\n@code{data} option. Input videos should generally use @code{fullframe}\nslide mode, as that saves the resources needed for decoding video.\n\n","options":[{"names":["sample_rate"],"info":"Specify the sample rate of the output audio; the sample rate of the audio from which the\nspectrum was generated may differ.\n\n"},{"names":["channels"],"info":"Set the number of channels represented in the input video spectrums.\n\n"},{"names":["scale"],"info":"Set the scale which was used when generating the magnitude input spectrum.\nCan be @code{lin} or @code{log}. Default is @code{log}.\n\n"},{"names":["slide"],"info":"Set the slide mode which was used when generating the input spectrums.\nCan be @code{replace}, @code{scroll}, @code{fullframe} or @code{rscroll}.\nDefault is @code{fullframe}.\n\n"},{"names":["win_func"],"info":"Set the window function used for resynthesis.\n\n"},{"names":["overlap"],"info":"Set window overlap. In range @code{[0, 1]}. 
Default is @code{1},\nwhich means optimal overlap for selected window function will be picked.\n\n"},{"names":["orientation"],"info":"Set orientation of input videos. Can be @code{vertical} or @code{horizontal}.\nDefault is @code{vertical}.\n\n","examples":"@subsection Examples\n\n@itemize\n@item\nFirst create magnitude and phase videos from audio, assuming audio is stereo with 44100 sample rate,\nthen resynthesize videos back to audio with spectrumsynth:\n@example\nffmpeg -i input.flac -lavfi showspectrum=mode=separate:scale=log:overlap=0.875:color=channel:slide=fullframe:data=magnitude -an -c:v rawvideo magnitude.nut\nffmpeg -i input.flac -lavfi showspectrum=mode=separate:scale=lin:overlap=0.875:color=channel:slide=fullframe:data=phase -an -c:v rawvideo phase.nut\nffmpeg -i magnitude.nut -i phase.nut -lavfi spectrumsynth=channels=2:sample_rate=44100:win_func=hann:overlap=0.875:slide=fullframe output.flac\n@end example\n@end itemize\n\n"}]},{"filtergroup":["split","asplit"],"info":"\nSplit input into several identical outputs.\n\n@code{asplit} works with audio input, @code{split} with video.\n\nThe filter accepts a single parameter which specifies the number of outputs. 
If\nunspecified, it defaults to 2.\n\n@subsection Examples\n\n@itemize\n@item\nCreate two separate outputs from the same input:\n@example\n[in] split [out0][out1]\n@end example\n\n@item\nTo create 3 or more outputs, you need to specify the number of\noutputs, like in:\n@example\n[in] asplit=3 [out0][out1][out2]\n@end example\n\n@item\nCreate two separate outputs from the same input, one cropped and\none padded:\n@example\n[in] split [splitout1][splitout2];\n[splitout1] crop=100:100:0:0 [cropout];\n[splitout2] pad=200:200:100:100 [padout];\n@end example\n\n@item\nCreate 5 copies of the input audio with @command{ffmpeg}:\n@example\nffmpeg -i INPUT -filter_complex asplit=5 OUTPUT\n@end example\n@end itemize\n\n","options":[]},{"filtergroup":["zmq","azmq"],"info":"\nReceive commands sent through a libzmq client, and forward them to\nfilters in the filtergraph.\n\n@code{zmq} and @code{azmq} work as pass-through filters. @code{zmq}\nmust be inserted between two video filters, @code{azmq} between two\naudio filters. Both are capable of sending messages to any filter type.\n\nTo enable these filters you need to install the libzmq library and\nheaders and configure FFmpeg with @code{--enable-libzmq}.\n\nFor more information about libzmq see:\n@url{http://www.zeromq.org/}\n\nThe @code{zmq} and @code{azmq} filters work as a libzmq server, which\nreceives messages sent through a network interface defined by the\n@option{bind_address} (or the abbreviation \"@option{b}\") option.\nThe default value of this option is @file{tcp://localhost:5555}. You may\nwant to alter this value to suit your needs, but do not forget to escape any\n':' signs (see @ref{filtergraph escaping}).\n\nThe received message must be in the form:\n@example\n@var{TARGET} @var{COMMAND} [@var{ARG}]\n@end example\n\n@var{TARGET} specifies the target of the command, usually the name of\nthe filter class or a specific filter instance name. 
The default\nfilter instance name uses the pattern @samp{Parsed_<filter_name>_<index>},\nbut you can override this by using the @samp{filter_name@@id} syntax\n(see @ref{Filtergraph syntax}).\n\n@var{COMMAND} specifies the name of the command for the target filter.\n\n@var{ARG} is optional and specifies the optional argument list for the\ngiven @var{COMMAND}.\n\nUpon reception, the message is processed and the corresponding command\nis injected into the filtergraph. Depending on the result, the filter\nwill send a reply to the client, adopting the format:\n@example\n@var{ERROR_CODE} @var{ERROR_REASON}\n@var{MESSAGE}\n@end example\n\n@var{MESSAGE} is optional.\n\n@subsection Examples\n\nLook at @file{tools/zmqsend} for an example of a zmq client which can\nbe used to send commands processed by these filters.\n\nConsider the following filtergraph generated by @command{ffplay}.\nIn this example the last overlay filter has an instance name. All other\nfilters will have default instance names.\n\n@example\nffplay -dumpgraph 1 -f lavfi \"\ncolor=s=100x100:c=red [l];\ncolor=s=100x100:c=blue [r];\nnullsrc=s=200x100, zmq [bg];\n[bg][l] overlay [bg+l];\n[bg+l][r] overlay@@my=x=100 \"\n@end example\n\nTo change the color of the left side of the video, the following\ncommand can be used:\n@example\necho Parsed_color_0 c yellow | tools/zmqsend\n@end example\n\nTo change the right side:\n@example\necho Parsed_color_1 c pink | tools/zmqsend\n@end example\n\nTo change the position of the right side:\n@example\necho overlay@@my x 150 | tools/zmqsend\n@end example\n\n\n@c man end MULTIMEDIA FILTERS\n\n@chapter Multimedia Sources\n@c man begin MULTIMEDIA SOURCES\n\nBelow is a description of the currently available multimedia sources.\n\n","options":[]},{"filtergroup":["amovie"],"info":"\nThis is the same as @ref{movie} source, except it selects an audio\nstream by default.\n\n@anchor{movie}\n","options":[]}]