diff --git a/docs/PsychAlpha.md b/docs/PsychAlpha.md
index d7c6501e..c224ae8b 100644
--- a/docs/PsychAlpha.md
+++ b/docs/PsychAlpha.md
@@ -12,7 +12,6 @@ so use them at your own risk.
 [MeasureLuminancePrecision](MeasureLuminancePrecision) - Automatically determine visual output luminance precision.
 [PrintStruct](PrintStruct) - Print structure & cell array to any depth
 [PropixxImageUndistortionThrowaway](PropixxImageUndistortionThrowaway) - Experimental demo for [VPixx](VPixx) [ProPixx](ProPixx) projector fast mode.
-[ShowHDRDemo](ShowHDRDemo) - Incomplete demo of how to show HDR images.
diff --git a/docs/PsychKinect.md b/docs/PsychKinect.md
index 6e5a331d..dceea9d2 100644
--- a/docs/PsychKinect.md
+++ b/docs/PsychKinect.md
@@ -61,7 +61,7 @@ specified by the given 'kinect' handle.
 Recycle 'oldkobject' so save memory and resources if 'oldkobject' is
 provided. Otherwise create a new object.
 
-Do not use within creen('BeginOpenGL', window); and [Screen](Screen)('EndOpenGL',
+Do not use within [Screen](Screen)('BeginOpenGL', window); and [Screen](Screen)('EndOpenGL',
 window); calls, as 2D mode is needed.
 
@@ -77,7 +77,7 @@ During a work-loop you could also pass 'kobject' to the next
 [PsychKinect](PsychKinect)('CreateObject', ...); call as 'oldkobject' to recycle it
 for reasons of computational efficiency.
 
-Do not use within creen('BeginOpenGL', window); and [Screen](Screen)('EndOpenGL',
+Do not use within [Screen](Screen)('BeginOpenGL', window); and [Screen](Screen)('EndOpenGL',
 window); calls, as 2D mode is needed.
 
diff --git a/docs/Screen-OpenMovie.md b/docs/Screen-OpenMovie.md
index 33645c49..fece984d 100644
--- a/docs/Screen-OpenMovie.md
+++ b/docs/Screen-OpenMovie.md
@@ -175,10 +175,9 @@ chooses audio output and parameters, based on your system and user settings.
 Most often this is what you want. Sometimes you may want to have more control
 over outputs, e.g., if your system has multiple sound cards installed and you
 want to route audio output to a specific card and output connector. Example use
-of the parameter: 'AudioSink=pulseaudiosink device=MyCardsOutput1' would use the
-Linux pulseaudiosink plugin to send sound data to the output named
-'MyCardsOutput1' via the [PulseAudio](PulseAudio) sound server commonly used on Linux desktop
-systems.
+of the parameter: 'AudioSink=pulsesink device=MyCardsOutput1' would use the
+Linux pulsesink plugin to send sound data to the output named 'MyCardsOutput1'
+via the [PulseAudio](PulseAudio) sound server commonly used on Linux desktop systems.
 If you set a [Screen](Screen)() verbosity level of 4 or higher, [Screen](Screen)() will print out
 the actually used audio output at the end of movie playback on operating systems
 which support this. This can help debugging issues with audio routing if you
diff --git a/docs/Screen-OpenVideoCapture.md b/docs/Screen-OpenVideoCapture.md
index 99ce1470..6143084b 100644
--- a/docs/Screen-OpenVideoCapture.md
+++ b/docs/Screen-OpenVideoCapture.md
@@ -76,20 +76,20 @@ elapsed time since start of capture, or recording time in movie), instead of
 the default time base, which is regular [GetSecs](GetSecs)() time.
 A setting of 128 will force use of a videorate converter in pure live capture
 mode. By default the videorate converter is only used if video recording is
-active. The converter makes sure that video is recorded (or delivered) at
-exactly the requested capture framerate, even if the system isn't really capable
-of maintaining that framerate: If the video source (camera) delivers frames at a
-too low framerate, the converter will insert duplicated frames to boost up
-effective framerate. If the source delivers more frames than the engine can
-handle (e.g., system overload or video encoding too slow) the converter will
-drop frames to reduce effective framerate. Slight fluctuations are compensated
-by adjusting the capture timestamps. This mechanism guarantees a constant
-framerate in recorded video as well as the best possible audio-video sync and
-smoothness of video, given system constraints. The downside may be that the
-recorded content and returned timestamps don't reflect the true timing of
-capture, but a beautified version. In pure live capture, rate conversion is off
-by default to avoid such potential confounds in the timestamps. Choose this
-options carefully.
+active, unless flag 8192 is specified. The converter makes sure that video is
+recorded (or delivered) at exactly the requested capture framerate, even if the
+system isn't really capable of maintaining that framerate: If the video source
+(camera) delivers frames at a too low framerate, the converter will insert
+duplicated frames to boost up effective framerate. If the source delivers more
+frames than the engine can handle (e.g., system overload or video encoding too
+slow) the converter will drop frames to reduce effective framerate. Slight
+fluctuations are compensated by adjusting the capture timestamps. This mechanism
+guarantees a constant framerate in recorded video as well as the best possible
+audio-video sync and smoothness of video, given system constraints. The downside
+may be that the recorded content and returned timestamps don't reflect the true
+timing of capture, but a beautified version. In pure live capture, rate
+conversion is off by default to avoid such potential confounds in the
+timestamps. Choose this option carefully.
 A setting of 256 in combined video live capture and video recording mode will
 restrict video framerate conversion to the recorded videostream, but provide
 mostly untampered true timing to the live capture. By default, framerate
@@ -108,7 +108,8 @@ A setting of 4096 requests to apply some performance optimizations (the setting
 of filter-caps). This can hurt if a videocapture device refuses to work, with
 some error message about ''check your filtered caps, if any.''. By default, if
 the flag is omitted, some performance loss will be present, but capture will be
-more robust with problematic cameras.
+more robust with problematic cameras. A setting of 8192 requests to avoid any
+use of videorate converters, not even for recording (see above).
 
 'captureEngineType' This optional parameter allows selection of the video
 capture engine to use for this video source. Allowable values are currently 1
diff --git a/docs/TextInOffscreenWindowTest.md b/docs/TextInOffscreenWindowTest.md
index f2fe5aa5..ec2d3554 100644
--- a/docs/TextInOffscreenWindowTest.md
+++ b/docs/TextInOffscreenWindowTest.md
@@ -6,23 +6,6 @@ Compare text drawn into an offscreen window to text drawn into an
 onscreen window.
 
-[Screen](Screen) currently fails this test; [Screen](Screen)('DrawText') draws slightly
-differently into onscreen and offscreen windows. Visual examination of
-the difference between onscreen and offscreen text shows that the
-onscreen texts is larger by about two pixels.
-
-Until this is resolved, we recommend not mixing onscreen and offscreen
-text rendering if you require exact matching between text.
-
-We don't yet know why this fails. The problem seems to have to do with
-differences between how textures are rendered into onscreen and offscreen
-windows. That could be either because of a bug in [Screen](Screen), or because the
-[OpenGL](OpenGL) software renderer used for offscreen windows rasterizes textures
-slightly differently than the hardware renderer used for onscreen
-windows. We don't yet have a way to compare textures in onscreen and
-offscreen windows indepently of drawing text because 'DrawTexture' still
-uses [OpenGL](OpenGL) extensions not provided in offscreen contexts.
-
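As a usage illustration of the parameters touched by this docs change, here is a minimal sketch that combines Screen('OpenVideoCapture') with the recordingflags value 8192 documented above and Screen('OpenMovie') with an 'AudioSink=pulsesink device=...' options string. The window setup, device selection, file names, frame rate, durations and the 'MyCardsOutput1' output name are illustrative assumptions only, not values prescribed by the documentation.

```matlab
% Minimal sketch, not a tested demo. Assumes a default video capture device,
% a GStreamer-based Psychtoolbox setup, and made-up file/output names.
AssertOpenGL;
win = Screen('OpenWindow', max(Screen('Screens')), 0);

% Live capture plus recording to 'mymovie.avi' (hypothetical filename), with
% recordingflags = 8192 so that no videorate converter is used at all, not
% even for the recorded stream (see the documentation change above):
recordingflags = 8192;
grabber = Screen('OpenVideoCapture', win, [], [], [], [], [], ...
                 'mymovie.avi', recordingflags);
Screen('StartVideoCapture', grabber, 30, 1);  % request 30 fps, drop stale frames

tEnd = GetSecs + 5;                           % run capture for roughly 5 seconds
while GetSecs < tEnd
    tex = Screen('GetCapturedImage', win, grabber, 1);
    if tex > 0
        Screen('DrawTexture', win, tex);      % live preview of the captured frame
        Screen('Flip', win);
        Screen('Close', tex);
    end
end
Screen('StopVideoCapture', grabber);
Screen('CloseVideoCapture', grabber);

% Movie playback with audio routed through an explicitly chosen GStreamer sink,
% mirroring the 'AudioSink=pulsesink device=MyCardsOutput1' example above.
% 'MyCardsOutput1' and the movie path are placeholders. A real player would
% also fetch and draw video frames via Screen('GetMovieImage').
movieOptions = 'AudioSink=pulsesink device=MyCardsOutput1';
movie = Screen('OpenMovie', win, '/path/to/mymovie.mp4', [], [], [], [], [], movieOptions);
Screen('PlayMovie', movie, 1);
WaitSecs(5);
Screen('PlayMovie', movie, 0);
Screen('CloseMovie', movie);
sca;
```

The recordingflags values listed in the documentation are bit flags, so in practice several of them are typically combined by summing, e.g., 4096 + 8192.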