diff --git a/docs/PsychCV-April3DSettings.md b/docs/PsychCV-April3DSettings.md new file mode 100644 index 00000000..e584b210 --- /dev/null +++ b/docs/PsychCV-April3DSettings.md @@ -0,0 +1,31 @@ +# [PsychCV('April3DSettings')](PsychCV-April3DSettings) +##### [Psychtoolbox](Psychtoolbox)>[PsychCV](PsychCV).{mex*} subfunction + +[glProjectionMatrix, camCalib, tagSize, minD, maxD] = PsychCV('April3DSettings' [, camCalib][, tagSize][, minD][, maxD]); + +Return current 3D marker pose reconstruction and 3D rendering parameters, +optionally change them. +For 6-[DoF](DoF) 3D marker tag pose computation, and for return of [OpenGL](OpenGL) compliant +rendering matrices, information about the physical size of april tag markers, +[OpenGL](OpenGL) near and far clipping planes, and the intrinsic optical parameters of the +camera that captures the marker images is needed. +These parameters can be changed anytime with this subfunction. The following +settings are available: +'camCalib' Intrinsic camera parameters vector: [cx, cy, fx, fy]. (cx,cy) is +sensor center in pixels, (fx,fy) is focal length in pixels. +'tagSize' Size of an April tag in meters, ie. length of one side of the square +tag. +'minD' Near clipping distance for [OpenGL](OpenGL) frustum computation - Affects +'glProjectionMatrix' matrix only. +'maxD' Far clipping distance for [OpenGL](OpenGL) frustum computation - Affects +'glProjectionMatrix' matrix only. + +The optionally returned 'glProjectionMatrix', based on these parameters, can be +used as [OpenGL](OpenGL) GL\_PROJECTION\_MATRIX to render 3D objects superimposed on and +aligned with the markers in the video input image from the camera, for +debugging, diagnostics, or AR / mixed reality applications. 
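As an illustration, applying these settings could look as follows. This is a sketch only: the intrinsics below are made-up placeholder values, so substitute the calibration of your own camera:

```matlab
% Hypothetical camera intrinsics [cx, cy, fx, fy], all in units of pixels,
% for some 640 x 480 pixels camera - replace with your camera's calibration:
camCalib = [320, 240, 600, 600];
tagSize  = 0.05; % Tag side length of 5 cm, specified in meters.
minD     = 0.01; % OpenGL near clipping plane distance in meters.
maxD     = 10.0; % OpenGL far clipping plane distance in meters.

% Apply settings, retrieve a projection matrix for 3D rendering superimposed
% onto the camera video image:
glProjectionMatrix = PsychCV('April3DSettings', camCalib, tagSize, minD, maxD);
```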
+ + + +###See also: +[AprilDetectMarkers](PsychCV-AprilDetectMarkers) diff --git a/docs/PsychCV-AprilDetectMarkers.md b/docs/PsychCV-AprilDetectMarkers.md new file mode 100644 index 00000000..c8fb373f --- /dev/null +++ b/docs/PsychCV-AprilDetectMarkers.md @@ -0,0 +1,78 @@ +# [PsychCV('AprilDetectMarkers')](PsychCV-AprilDetectMarkers) +##### [Psychtoolbox](Psychtoolbox)>[PsychCV](PsychCV).{mex*} subfunction + +[detectedMarkers] = PsychCV('AprilDetectMarkers'[, markerSubset=all][, infoType=all]); + +Detect apriltags in the current video image, return information about them. + +Analyzes the current video image, stored in the internal input image buffer, and +tries to detect the apriltag markers in it. For all detected tags, their 2D +center, their 2D corners, and their 3D position and orientation are computed, +using the provided camera calibration. The return argument 'detectedMarkers' is +an array of structs, with one struct for each successfully detected tag. The +struct contains info about the identity of the tag, confidence values for the +reliability of detection, and the estimated 6-[DoF](DoF) 3D position and 3D pose of the +marker tag in 3D space, relative to the camera's reference frame and origin. +Please note that, in general, 3D pose estimates are not as reliable and accurate +as the 2D detection of markers. Furthermore, the 3D orientation of the marker is +far less well-defined and accurate than the 3D position of the marker's center. +Often the estimated 3D orientation may be outright rubbish! +You can use [PsychCV](PsychCV)('AprilSettings'); to tune various parameters related to the +2D marker detection, including the use of multiple processing threads for higher +performance. +For 6-[DoF](DoF) 3D estimation, you need to provide camera intrinsic parameters and the +size of the tags via [PsychCV](PsychCV)('April3DSettings'). 
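For illustration, a minimal detection sketch, assuming apriltag was already set up via PsychCV('AprilInitialize') and a captured video image resides in PsychCV's input buffer (field names as documented below):

```matlab
% Detect all tags; infoType +1 additionally requests 3D pose estimation:
detectedMarkers = PsychCV('AprilDetectMarkers', [], 1);

% Print identity, 2D center and estimated 3D position of each detected tag:
for i = 1:length(detectedMarkers)
    m = detectedMarkers(i);
    fprintf('Tag %i: center (%.1f, %.1f) px, position [%.3f %.3f %.3f] m.\n', ...
            m.Id, m.Center2D(1), m.Center2D(2), m.T(1), m.T(2), m.T(3));
end
```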
+If you don't want to detect all tags, but only a subset, then pass a list of +candidate tag ids via the list 'markerSubset', to reduce your code's complexity +and computation time. +To further reduce computation time, you can ask for only a subset of information +by providing 'infoType'. By default all information is returned at highest +quality and robustness with longest computation time: +- 2D marker detection data is always returned. +- A value of +1 will return 3D pose. +Omitting the value +1 from 'infoType' will avoid 3D pose estimation. + +The returned structs contain the following fields: +'Id' The decoded apriltag id. Hamming code error correction is used for +decoding. +'MatchQuality' A measure of the quality of the binary decoding process. This is +what the apriltag library calls decision\_margin. Higher numbers roughly indicate +better decodes. It is a reasonable measure of detection quality only for small +tags, mostly meaningless for bigger tags. +'HammingErrorBits' Number of error bits corrected. Smaller numbers are better. +'Corners2D' A 2-by-4 matrix with the 2D pixel coordinates of the detected +corners of the tag in the input image, each column representing one corner [x ; +y]. These always wrap counter-clockwise around the tag. +'Center2D' A vector with the 2D pixel coordinates of the estimated center of the +tag in the input image. +'PoseError' If 3D pose estimation was used, the object space error of the +returned pose. +'T' A 3-component [x ; y ; z] translation vector which encodes the estimated 3D +location of the center of the tag in space, in units of meters, relative to the +origin of the camera. +'R' A 3x3 rotation matrix, encoding the estimated pose of the tag, relative to +the camera's reference frame. The convention is that the tag itself lies in the x-y +plane of its local reference frame, and the positive z-axis sticks out of the +tag's surface like a surface normal vector. 
+'TransformMatrix' A 4x4 transformation matrix representing position and +orientation all in one, for convenience. Simply the product [TransformMatrix](TransformMatrix) = T +\* R, extended to a 4x4 format. This represents pose relative to the camera's +origin, x-axis to the right, y-axis down, z-axis along the looking direction aka +optical axis. +'ModelViewMatrix' A 4x4 RHS transformation matrix, directly usable for 3D [OpenGL](OpenGL) +rendering of objects in the tag's local reference frame. It can be used directly +as GL\_MODELVIEW\_MATRIX for rendering 3D content on top of the tag in the video +image, or right-multiplied to the active GL\_MODELVIEW\_MATRIX to represent the +tag's 6 [DoF](DoF) pose relative to the 3D [OpenGL](OpenGL) camera's origin. You need to use the +GL\_PROJECTION\_MATRIX returned by matrix = [PsychCV](PsychCV)('April3DSettings'); for +rendering superimposed onto images from the camera that captured the april tags. +This matrix is a rotated version of 'TransformMatrix', rotated 180 degrees +around the x-axis for [OpenGL](OpenGL) compatibility, as apriltag has the x-axis to the right, +y-axis down, and z-axis along the optical axis aka looking direction, whereas [OpenGL](OpenGL) has its +x-axis to the right, y-axis up, and the negative z-axis along the optical axis / viewing +direction, ie. 180 degrees rotated. + + + +###See also: +[AprilInitialize](PsychCV-AprilInitialize) [AprilSettings](PsychCV-AprilSettings) [April3DSettings](PsychCV-April3DSettings) diff --git a/docs/PsychCV-AprilInitialize.md b/docs/PsychCV-AprilInitialize.md new file mode 100644 index 00000000..e529bdd1 --- /dev/null +++ b/docs/PsychCV-AprilInitialize.md @@ -0,0 +1,52 @@ +# [PsychCV('AprilInitialize')](PsychCV-AprilInitialize) +##### [Psychtoolbox](Psychtoolbox)>[PsychCV](PsychCV).{mex*} subfunction + +[inputImageMemBuffer] = PsychCV('AprilInitialize', tagFamilyName, imgWidth, imgHeight, imgChannels [, imgFormat][, maxNrTags]); + +Initialize apriltag prior to first use. 
+ +Apriltag markers are loaded for the given apriltag family 'tagFamilyName'. +The following tag families are currently supported: + tag36h11 + tag25h9 + tag16h5 + tagCircle21h7 + tagCircle49h12 - Use maxNrTags to restrict size! + tagCustom48h12 - Use maxNrTags to restrict size! + tagStandard41h12 + tagStandard52h13 - Use maxNrTags to restrict size! + +Internal video image memory buffers are set up for input images of size +'imgWidth' x 'imgHeight' pixels, with 'imgChannels' mono or color channels (1 +MONO, 3 RGB, or 4 RGBA are valid settings, but 1 for MONO is the most efficient +choice for minimal computation time). For 3 or 4 channels, the input color +format 'imgFormat' (or a default setting if 'imgFormat' is omitted) defines +color channel ordering for the input pixel bytes. 'imgFormat' can be one of: +RGB = 1, BGR = 2, BGRA = 4 (default for 4 channels), ARGB = 7, MONO = 6. Other +pixel formats are not supported for input images, but these are the ones +provided by Psychtoolbox's various video capture engines for different video +sources and settings. For 1 channel mono/grayscale or 3 channel RGB color +content, you don't need to specify 'imgFormat' as the chosen default will always +work, but for 4 channel content with alpha channel, you may have to specify +'imgFormat' if your machine does not return BGRA ordered pixels, but ARGB +ordered pixels, or marker detection on 4 channel content may fail or perform +poorly. +When using a tag family with many potential tags, you can limit the number of +tags to use to the first 'maxNrTags' tags if you specify 'maxNrTags'. This is +important for certain large tag families, as using the tag family at its full +capacity may consume a lot of memory and take a very long time to initialize! +Apriltag is then initialized, and a memory buffer handle 'inputImageMemBuffer' +to the internal video memory input buffer is returned. 
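As a sketch of a plausible call (image size and 'maxNrTags' are arbitrary example values):

```matlab
% Set up for 640 x 480 pixels grayscale input images, 1 channel MONO for
% fastest processing. Restrict the large tagStandard52h13 family to its
% first 100 tags, to keep memory use and initialization time low:
inputImageMemBuffer = PsychCV('AprilInitialize', 'tagStandard52h13', ...
                              640, 480, 1, [], 100);
```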
+ +You should pass this handle to Psychtoolbox functions for video capture to +acquire video images from the scene containing the tags, and to store that input +image inside PsychCV's video buffer. + +After this step, you can commence the actual tracking operations by calls to +[Screen](Screen)()'s video capture engine and the [PsychCV](PsychCV)('AprilDetectMarkers', ...); +subfunction. + + + +###See also: +[AprilShutdown](PsychCV-AprilShutdown) [AprilDetectMarkers](PsychCV-AprilDetectMarkers) [AprilSettings](PsychCV-AprilSettings) [April3DSettings](PsychCV-April3DSettings) diff --git a/docs/PsychCV-AprilSettings.md b/docs/PsychCV-AprilSettings.md new file mode 100644 index 00000000..6245da24 --- /dev/null +++ b/docs/PsychCV-AprilSettings.md @@ -0,0 +1,37 @@ +# [PsychCV('AprilSettings')](PsychCV-AprilSettings) +##### [Psychtoolbox](Psychtoolbox)>[PsychCV](PsychCV).{mex*} subfunction + +[nrThreads, imageDecimation, quadSigma, refineEdges, decodeSharpening, criticalRadAngle, deglitch, maxLineFitMse, minWhiteBlackDiff, minClusterPixels, maxNMaxima] = PsychCV('AprilSettings' [, nrThreads][, imageDecimation][, quadSigma][, refineEdges][, decodeSharpening][, criticalRadAngle][, deglitch][, maxLineFitMse][, minWhiteBlackDiff][, minClusterPixels][, maxNMaxima]); + +Return current tracker parameters, optionally change tracker parameters. +These settings are set to reasonable defaults at startup, but can be changed +anytime with this subfunction. Most of these settings define tradeoffs between +computation time aka tracking speed and quality/robustness of tracking. +The following settings are available: +'nrThreads' Number of processing threads to use for speeding up operation. +Default is 1 for single-threaded operation. +'imageDecimation' 1 = Process full image. \> 1 = Only work on a resolution +decimated image for higher speed at lower precision. Default is 2. +'quadSigma' How much blurring (values \> 0) or sharpening (values < 0) to apply +to input images to reduce noise. 
Default is 0 for none. +'refineEdges' 1 = Perform edge refinement on detected edges (cheap, and the +default), 0 = Use simpler strategy. +'decodeSharpening' How much sharpening should be done to decoded images? Can +help small tags. Default is 0.25. +'criticalRadAngle' How close pairs of edges can be to straight before rejection. +0 = Don't reject. Default is 10 degrees. +'deglitch' Should the thresholded image be deglitched (1) or not (0)? Only +useful for very noisy images. Default is 0 for false. +'maxLineFitMse' When fitting lines to contours, what is the maximum mean squared +error allowed? For rejecting contours far from quad shape. Default is 10.0. +'minWhiteBlackDiff' How much brighter (in pixel values 0 - 255) must white +pixels be than black pixels? Default is 5. +'minClusterPixels' Reject quads containing less than this number of pixels. +Default is 5. +'maxNMaxima' How many corner candidates to consider when segmenting a group of +pixels into a quad. Default is 10. + + + +###See also: +[AprilDetectMarkers](PsychCV-AprilDetectMarkers) diff --git a/docs/PsychCV-AprilShutdown.md b/docs/PsychCV-AprilShutdown.md new file mode 100644 index 00000000..4762bd9d --- /dev/null +++ b/docs/PsychCV-AprilShutdown.md @@ -0,0 +1,14 @@ +# [PsychCV('AprilShutdown')](PsychCV-AprilShutdown) +##### [Psychtoolbox](Psychtoolbox)>[PsychCV](PsychCV).{mex*} subfunction + +PsychCV('AprilShutdown'); + +Shut down apriltag after use, release all resources. +The memory buffer handle 'inputImageMemBuffer' returned by a prior call to +inputImageMemBuffer = [PsychCV](PsychCV)('AprilInitialize', ...); will be invalid after +this shutdown call and must not be used anymore, or Psychtoolbox will crash! 
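A typical cleanup at the end of a tracking session might look like this (sketch only):

```matlab
% Release all apriltag resources. The previously returned
% inputImageMemBuffer handle becomes invalid at this point:
PsychCV('AprilShutdown');
inputImageMemBuffer = []; % Clear the stale handle so it can't be misused.
```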
+ + + +###See also: +[AprilInitialize](PsychCV-AprilInitialize) diff --git a/docs/PsychCV-CopyMatrixToMemBuffer.md b/docs/PsychCV-CopyMatrixToMemBuffer.md new file mode 100644 index 00000000..4bc662aa --- /dev/null +++ b/docs/PsychCV-CopyMatrixToMemBuffer.md @@ -0,0 +1,14 @@ +# [PsychCV('CopyMatrixToMemBuffer')](PsychCV-CopyMatrixToMemBuffer) +##### [Psychtoolbox](Psychtoolbox)>[PsychCV](PsychCV).{mex*} subfunction + +PsychCV('CopyMatrixToMemBuffer', matrix, memBufferPtr); + +Copies a Matlab/Octave uint8 or double matrix 'matrix' into a C memory buffer, +whose memory (void\*) pointer is encoded as a double scalar value 'memBufferPtr'. +The target buffer must be of sufficient size, no checking is performed! This is +essentially a C memcpy() operation, so use with caution, or Matlab/Octave will +crash! + + +###See also: + diff --git a/docs/PsychCV-DescribeModuleFunctionsHelper.md b/docs/PsychCV-DescribeModuleFunctionsHelper.md new file mode 100644 index 00000000..094341a9 --- /dev/null +++ b/docs/PsychCV-DescribeModuleFunctionsHelper.md @@ -0,0 +1,15 @@ +# [PsychCV('DescribeModuleFunctionsHelper')](PsychCV-DescribeModuleFunctionsHelper) +##### [Psychtoolbox](Psychtoolbox)>[PsychCV](PsychCV).{mex*} subfunction + +subfunctionNames = Modulename('DescribeModuleFunctionsHelper' [, mode] [, subfunctionName]); + +Return a cell array of strings naming all subfunctions supported by this module +if the optional 'subfunctionName' argument is omitted. 
If 'subfunctionName' is a +string with a valid subfunction name for the module, and 'mode' is 1, return a 3 +element cell array of strings which describe the detailed syntax, help and see +also strings for that function - the text you'd get for +Modulename('subfunctionName?'); + + +###See also: + diff --git a/docs/PsychCV-Verbosity.md b/docs/PsychCV-Verbosity.md new file mode 100644 index 00000000..39bcca47 --- /dev/null +++ b/docs/PsychCV-Verbosity.md @@ -0,0 +1,14 @@ +# [PsychCV('Verbosity')](PsychCV-Verbosity) +##### [Psychtoolbox](Psychtoolbox)>[PsychCV](PsychCV).{mex*} subfunction + +oldlevel = PsychCV('Verbosity' [,level]); + +Set level of verbosity for error/warning/status messages. 'level' optional, new +level of verbosity. 'oldlevel' is the old level of verbosity. The following +levels are supported: 0 = Shut up. 1 = Print errors, 2 = Print also warnings, 3 += Print also some info, 4 = Print more useful info (default), \>5 = Be very +verbose (mostly for debugging the driver itself). + + +###See also: + diff --git a/docs/PsychCV-Version.md b/docs/PsychCV-Version.md new file mode 100644 index 00000000..d368863b --- /dev/null +++ b/docs/PsychCV-Version.md @@ -0,0 +1,10 @@ +# [PsychCV('Version')](PsychCV-Version) +##### [Psychtoolbox](Psychtoolbox)>[PsychCV](PsychCV).{mex*} subfunction + +struct=PsychCV('Version') + +return the version of [PsychCV](PsychCV) in a struct + + +###See also: + diff --git a/docs/PsychCV.md b/docs/PsychCV.md index f69e2f1e..2c8f2dd6 100644 --- a/docs/PsychCV.md +++ b/docs/PsychCV.md @@ -1,25 +1,43 @@ # [PsychCV](PsychCV) -##### >[Psychtoolbox](Psychtoolbox)>[PsychBasic](PsychBasic) +##### [Psychtoolbox](Psychtoolbox)>[PsychCV](PsychCV) -[PsychCV](PsychCV) is a MEX file for computer-vision applications. 
[PsychCV](PsychCV) has -many functions; type "[PsychCV](PsychCV)" for a list: - [PsychCV](PsychCV) +PsychCV - Helper module for miscellaneous functionality related to image processing and/or computer vision: -Please note that [PsychCV](PsychCV) is only supported on recent platforms, e.g., -Matlab versions 7.4 (R2007a) and later or Octave-3 on Windows, and only -on Intel based Macintosh computers under OS/X but not on [PowerPC](PowerPC) -machines. +Copyright 2008 - 2024 Mario Kleiner. Licensed under MIT license. For potential statically included libraries, +see licenses below. This module employs various 3rd party software, so here are the credits for those parts: + +The 'Aprilxxx' subfunctions for April tag tracking are implemented by use of the apriltag library from: +https://april.eecs.umich.edu/software/apriltag and https://github.com/AprilRobotics/apriltag . +The apriltag library is licensed under the BSD 2-clause license. See Psychtoolbox main License.txt file for details. + + +General information and settings: + +version = PsychCV('[Version](PsychCV-Version)'); +oldlevel = PsychCV('[Verbosity](PsychCV-Verbosity)' [,level]); + +Helper functions for memory buffer copies: + +PsychCV('[CopyMatrixToMemBuffer](PsychCV-CopyMatrixToMemBuffer)', matrix, memBufferPtr); + +Support for the apriltag 2D/3D april tag marker tracking library: + +[inputImageMemBuffer] = PsychCV('[AprilInitialize](PsychCV-AprilInitialize)', tagFamilyName, imgWidth, imgHeight, imgChannels [, imgFormat][, maxNrTags]); +[detectedMarkers] = PsychCV('[AprilDetectMarkers](PsychCV-AprilDetectMarkers)'[, markerSubset=all][, infoType=all]); +PsychCV('[AprilShutdown](PsychCV-AprilShutdown)'); +[nrThreads, imageDecimation, quadSigma, refineEdges, decodeSharpening, criticalRadAngle, deglitch, maxLineFitMse, minWhiteBlackDiff, minClusterPixels, maxNMaxima] = PsychCV('[AprilSettings](PsychCV-AprilSettings)' [, nrThreads][, imageDecimation][, quadSigma][, refineEdges][, decodeSharpening][, 
criticalRadAngle][, deglitch][, maxLineFitMse][, minWhiteBlackDiff][, minClusterPixels][, maxNMaxima]); +[glProjectionMatrix, camCalib, tagSize, minD, maxD] = PsychCV('[April3DSettings](PsychCV-April3DSettings)' [, camCalib][, tagSize][, minD][, maxD]); + PsychCV is a MEX file for computer-vision applications. PsychCV has + many functions; type "PsychCV" for a list: + PsychCV + + Please note that PsychCV is only supported on recent platforms, e.g., + Matlab versions 7.4 (R2007a) and later or Octave-3 on Windows, and only + on Intel based Macintosh computers under OS/X but not on PowerPC + machines. + + - -
- Path   Retrieve current version from GitHub | View changelog -
-
- Psychtoolbox/PsychBasic/PsychCV.m -
- diff --git a/docs/PsychImaging.md b/docs/PsychImaging.md index 9a6113d3..b5ae1e82 100644 --- a/docs/PsychImaging.md +++ b/docs/PsychImaging.md @@ -854,28 +854,28 @@ actions: \* 'EnableNative16BitFramebuffer' Enable 16 bpc, 64 bpp framebuffer on some supported setups. This asks to enable a framebuffer with a color depth of 16 bpc for up to 65535 levels of intensity - per red, green and blue channel, ie. 48 bits = 2^48 different colors. Currently, as of November 2021, + per red, green and blue channel, ie. 48 bits = 2^48 different colors. Currently, as of November 2024, this mode of operation is only supported on Linux/X11 when using the open-source amdgpu-kms driver on modern AMD GCN 1.1+ graphics cards [3]. On suitable setups, this will establish a 16 bpc framebuffer which packs 3 \* 16 bpc = 48 bit color info into 64 bpp pixels and the gpu's display engine will scan out that framebuffer at 16 bpc. However, effective output precision is further limited to < 16 bpc by - your display, video cable and specific model of graphics card. As of November 2021, the maximum effective + your display, video cable and specific model of graphics card. As of November 2024, the maximum effective output precision is limited to at most 12 bpc (= 4096 levels of red, green and blue) by the graphics card, and this precision is only attainable on AMD graphics cards of the so called "Sea Islands" (cik) family - (aka [GraphicsCore](GraphicsCore) Next GCN 1.1 or later) or any later models when used with the amdgpu-kms display driver. + (aka [GraphicsCore](GraphicsCore) Next GCN 1.1 or later), or any later models when used with the amdgpu-kms display driver. Older AMD cards of type GCN 1.0 "Southern Islands" or earlier won't work, as they only support at most 10 bpc overall output precision. Please note that actual 12 bpc output precision can only be attained on certain display devices and - software configurations. 
As of November 2021, the following hardware + software combos have been + software configurations. As of November 2024, the following hardware + software combos have been verified with a CRS [ColorCal2](ColorCal2) colorimeter to provide 12 bpc per color channel precision: - The Apple [MacBookPro](MacBookPro) 2017 15 inch with its built-in 10 bit Retina display, running under Ubuntu Linux - 20.04 with Linux 5.14, as well as with a HDR-10 monitor via [DisplayPort](DisplayPort) and also via HDMI. As those - displays are 10 bit only, the 12 bit precision was attained via spatial dithering by the gpu. + 20.04 with Linux 5.14 or later kernels, as well as with a HDR-10 monitor via [DisplayPort](DisplayPort) and also via + HDMI. As those displays are 10 bit only, the 12 bit precision was attained via spatial dithering by the gpu. - - AMD Ryzen 2400G with AMD [RavenRidge](RavenRidge) integrated graphics chip with a HDR-10 monitor via [DisplayPort](DisplayPort) and + - AMD Ryzen 5 2400G with AMD [RavenRidge](RavenRidge) integrated graphics chip with a HDR-10 monitor via [DisplayPort](DisplayPort) and also via HDMI. As that display is 10 bit only, the 12 bit precision was attained via spatial dithering by the gpu. @@ -896,27 +896,29 @@ actions: gpu's like Polaris and Vega. Later versions only support AMD Navi and later with RDNA graphics architecture. - 2. You will need to install Linux kernel 5.14, which is currently not shipping in any Ubuntu release, - as of November 2021. A way to manually install it on Ubuntu 20.04-LTS is described on the following - web page via the "mainline" helper software: + 2. You will need at least Linux kernel 5.14 or later versions. A way to manually install it on + Ubuntu 20.04-LTS is described on the following web page via the "mainline" helper software: https://ubuntuhandbook.org/index.php/2020/08/mainline-install-latest-kernel-ubuntu-linux-mint - Ubuntu 22.04-LTS should ship with a suitable kernel by default in April 2022. 
+ Ubuntu 22.04-LTS and later versions already have a suitable Linux 5.15 or later kernel by default + since April 2022. - 3. If you are using an AMD Polaris gpu or later then you are done. + 3. If you are using an AMD Polaris gpu (aka Radeon 400 series) or later, then you are done. + Polaris was introduced in summer 2016. If you are using an old "Sea Islands" / "[GraphicsCore](GraphicsCore) Next 1.1" / "GCN 1.1" gpu, you must reboot your machine with Linux kernel boot settings that select the new amdgpu kms driver and AMD [DisplayCore](DisplayCore), instead of the old radeon kms driver that would be used by default. This requires adding the following parameters to the kernel boot command line: "radeon.cik\_support=0 amdgpu.cik\_support=1 amdgpu.dc=1" - Additionally you would need a custom amdvlk driver, as AMD's current official AMDVLK driver does + Additionally you would need a custom AMDVLK driver, as AMD's current official AMDVLK driver does no longer support pre-Polaris gpu's. We won't provide this driver for free at the moment, so please enquire for potential paid support options on the Psychtoolbox user forum. On AMD you can verify actual output bit depth for an output XXX by typing this command into a terminal - window, assuming your AMD graphics card is the first or only gpu in the system, ie. has index 0: + window, assuming your AMD graphics card is the first or only gpu in the system, ie. has index 0. If your + card has a different index, then replace the 0 below with that index: sudo cat /sys/kernel/debug/dri/0/XXX/output\_bpc @@ -924,6 +926,17 @@ actions: sudo cat /sys/kernel/debug/dri/0/eDP-1/output\_bpc + Note also that currently Psychtoolbox by itself will usually limit video output precision to 10 bpc, + even if a display is connected that claims to support 12 bpc or more, so 12 bpc content is actually + displayed by the graphics card using spatial dithering down to 10 bpc. Why? 
Because at least all 10 + bpc capable HDMI displays are required by the HDMI standard to also "support" 12 bpc output precision. + However, most displays do not actually support 12 bpc, but just fake 12 bpc support. By forcing actual + output precision down to 10 bpc on such "pretend 12 bpc" displays, we can still achieve a good approximation + of 12 bpc output via gpu hardware dithering. If by chance you are the lucky owner of a true 12 bpc capable + display device, you can override Psychtoolbox 10 bpc choice by executing the xrandr command in a terminal, + or via a system('xrandr ...'); call from Octave/Matlab. E.g., the following xrandr command would enable + true 12 bpc output without dithering on HDMI output 0: xrandr --output HDMI-0 --set 'max bpc' 12 + Once the above one-time setup is done, adding the following command to your script will enable the 16 bpc framebuffer with up to 12 bpc effective output precision: diff --git a/docs/Screen-FrameOval.md b/docs/Screen-FrameOval.md index 874af38b..206a8742 100644 --- a/docs/Screen-FrameOval.md +++ b/docs/Screen-FrameOval.md @@ -1,7 +1,7 @@ # [Screen('FrameOval')](Screen-FrameOval) ##### [Psychtoolbox](Psychtoolbox)>[Screen](Screen).{mex*} subfunction -Screen('FrameOval', windowPtr [,color] [,rect] [,penWidth] [,penHeight] [,penMode]); +Screen('FrameOval', windowPtr [, color] [, rect] [, penWidth] [, penHeight] [, penMode]); Draw the outline of an oval inscribed in "rect". 
"color" is the clut index (scalar or [r g b] triplet) that you want to poke into each pixel; default diff --git a/docs/VideoRecordingDemo.md b/docs/VideoRecordingDemo.md index 5ae68829..381bbae1 100644 --- a/docs/VideoRecordingDemo.md +++ b/docs/VideoRecordingDemo.md @@ -1,7 +1,7 @@ # [VideoRecordingDemo](VideoRecordingDemo) ##### >[Psychtoolbox](Psychtoolbox)>[PsychDemos](PsychDemos) -[VideoRecordingDemo](VideoRecordingDemo)(moviename [, codec=0] [, withsound=1] [, showit=1] [, windowed=1]) +[VideoRecordingDemo](VideoRecordingDemo)(moviename [, codec=0][, withsound=1][, showit=1][, windowed=1][, deviceId=0]) Demonstrates simple video capture and recording to a movie file. @@ -60,6 +60,8 @@ recording on lower end machines. the top-left corner of the screen, instead of fullscreen. Windowed display is the default. +'deviceId' Optional deviceIndex of the video capture device. Defaults to +0 for the default video capture device. Tip on Linux: If you have an exotic camera which only delivers video in non-standard video formats, and Psychtoolbox does not handle this automatically, but aborts with