v21.0.0 Release Todos #1722

Open
15 tasks
zmerp opened this issue Jul 3, 2023 · 4 comments
Labels: enhancement (New feature or request), release (Relative to release branches or future releases)

Comments
zmerp (Member) commented Jul 3, 2023

To be done no earlier than just before releasing v21.0.0 stable, or after a devXX bump.

  • Remove ClientControlPacket::VideoErrorReport
  • Merge ControlSocket with StreamSocket
  • Refactor packet protocol (see the framing sketch after this list)
    • Make prefix data little-endian
    • Make the packet length include itself (add 4 bytes to the count)
    • Remove the extra 4 bytes from the max shard length
  • Send combined eye gaze with separate field
  • Send eyes already in local head space.
  • Use a json string in place of the VideoStreamingCapabilities packet.
  • Make multimodal input protocol default
  • Send velocity for skeletal hand tracking
  • Use mesh foveated rendering (deferred). Needs protocol support to negotiate disabling FFE.
  • Add static controller offsets on the client and make the parameter exposed by the dashboard default to [0,0,0]
  • Make all stream header packets extensible with a Vec<u8> (don't use json). Values can only be appended, never removed. Alternatively, investigate Cap'n Proto or FlatBuffers, which support protocol extensions natively.
  • Include the DecoderConfig packet in the video frame header, to avoid having to request another IDR and resend the DecoderConfig twice.
  • Use mutually exclusive tracking sources (e.g. 2 eyes or combined gaze, fb or htc face tracking, fb or pico body tracking), and wrap it with Option.
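
For reference, a minimal sketch of what the refactored little-endian framing could look like, assuming a plain byte stream; `write_packet`/`read_packet` are hypothetical helper names, not existing ALVR functions:

```rust
use std::io::{Read, Result, Write};

// The 4-byte length prefix is little-endian and counts itself,
// so an empty payload is encoded as just 4u32.
fn write_packet<W: Write>(writer: &mut W, payload: &[u8]) -> Result<()> {
    let total_len = payload.len() as u32 + 4; // prefix includes its own 4 bytes
    writer.write_all(&total_len.to_le_bytes())?;
    writer.write_all(payload)
}

fn read_packet<R: Read>(reader: &mut R) -> Result<Vec<u8>> {
    let mut prefix = [0u8; 4];
    reader.read_exact(&mut prefix)?;
    let total_len = u32::from_le_bytes(prefix) as usize;
    let mut payload = vec![0u8; total_len - 4]; // subtract the prefix itself
    reader.read_exact(&mut payload)?;
    Ok(payload)
}
```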
@zmerp zmerp added the release Relative to release branches or future releases label Jul 3, 2023
@stale stale bot added the stale label Aug 7, 2023
@alvr-org alvr-org deleted a comment from stale bot Aug 8, 2023
@stale stale bot removed the stale label Aug 8, 2023
@zmerp zmerp added the enhancement New feature or request label Aug 8, 2023
@shinyquagsire23 shinyquagsire23 pinned this issue Nov 2, 2024
shinyquagsire23 (Contributor) commented

Adding some of my own additions:

  • Headset and controller types default to Automatic instead of Quest 2, driven by the client
    • I think it'd be neat to allow a headset to send controller data and display a single tracked gamepad instead, maybe. Or at least differentiate between Joy-Con and whatever SLAM controllers might exist on AVP. Also, maybe differentiate controllers vs the Logitech stylus on Quest headsets.
  • Write-only SteamVR settings JSON, separate from session.json
  • (Tangential to mesh FFR) Allow padding at the edges of the video stream so that Weird Resolutions can be used?
  • Event signaling from OpenVR -> client, to allow passthrough to be driven by SteamVR (see the sketch after this list)
    • i.e. SteamVR sees a settings change, checks if passthrough was turned on, tells the client to activate passthrough shaders
  • Extra video streams for alpha channel and/or depth or motion vectors? I'm less certain about this now, I think SteamVR composites all apps into a single layer.
  • Change headset FPS at runtime based on app performance?
    • Would need to handle the possibility of the client not being able to switch, due to passthrough constraints or whatever else.
  • Per-frame view transforms that don't cause SteamVR to hiccup, maybe?
    • Ocular parallax for instance can be simulated as a per-frame IPD change
    • Vision Pro also has 'comfort options' to introduce (I believe, not certain) a software-based vertical IPD (vertical offsets might also exist in other headsets as a display calibration thing, worth verifying)
  • Chaperone APIs for the client maybe
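
As a rough illustration of the OpenVR -> client event signaling item: a sketch of a control packet for passthrough, where the `SetPassthrough` variant and `handle_control_packet` function are hypothetical, not part of the current protocol (in practice the packet would be (de)serialized like the other control packets):

```rust
// Hypothetical control packet variant pushed from the streamer when SteamVR
// detects a passthrough-related settings change.
enum ServerControlPacket {
    // ...existing variants would stay here...
    SetPassthrough { enabled: bool },
}

// Client side: react by toggling the passthrough shader/compositing path.
fn handle_control_packet(packet: ServerControlPacket) {
    match packet {
        ServerControlPacket::SetPassthrough { enabled } => {
            println!("passthrough enabled: {enabled}");
        }
    }
}
```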

zmerp (Member, Author) commented Nov 2, 2024

@shinyquagsire23 Your points either don't need to be protocol breaks (they can be protocol extensions) or may take too long to be included in v21, assuming you haven't started working on them. For FFR I'm thinking we should leave it out, because we need Android + Windows + Linux implementations, which will take a long time. The idea is to first switch both Windows and Linux to wgpu like the client, so it will be far easier to do it all at once.

shinyquagsire23 (Contributor) commented

Yeah, I'm mostly just spitballing on things that could be marginally improved if a protocol break is already happening. E.g. headset info is only sent as a single string, and it gets kinda iffy to turn that back into usable information on the streamer. IMO it could split the info out a bit more to help futureproofing (e.g. a Quest 3 could be split as manufacturer=Meta, type=Quest, subtype=Quest 3, so if a Quest 4 comes out the streamer could match on Meta and Quest, and the subtype could have aesthetic fallbacks targeted at Quests).
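
Purely as an illustration of that split (all names hypothetical), something like:

```rust
// Hypothetical structured identity instead of a single display-name string.
struct HeadsetIdentity {
    manufacturer: String, // e.g. "Meta"
    device_type: String,  // e.g. "Quest"
    subtype: String,      // e.g. "Quest 3"
}

// Streamer side: match on the most specific fields first, then fall back to
// broader ones so an unknown "Quest 4" still gets Quest-targeted defaults.
fn pick_profile(id: &HeadsetIdentity) -> &'static str {
    match (id.manufacturer.as_str(), id.device_type.as_str(), id.subtype.as_str()) {
        ("Meta", "Quest", "Quest 3") => "quest_3_profile",
        ("Meta", "Quest", _) => "generic_quest_profile",
        _ => "generic_profile",
    }
}
```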

zmerp (Member, Author) commented Nov 2, 2024

@shinyquagsire23 I understand what you're saying, but we can still have a universal generic fallback. It's not a big deal; to avoid it the user should have both client and server on the same version. So let's also keep the display name as a string, since passing it as a Platform value could fail deserialization. On the server side we should try to match the strings one by one.

Indeed we could make things clearer and rename display_name to platform.
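
A minimal sketch of that server-side matching, with made-up profile names, could be:

```rust
// Keep display_name (or `platform`) as a plain string and match known names
// one by one, falling back to a universal generic profile.
fn match_headset_profile(display_name: &str) -> &'static str {
    if display_name.contains("Quest 3") {
        "quest_3_profile"
    } else if display_name.contains("Quest") {
        "generic_quest_profile"
    } else {
        "generic_profile"
    }
}
```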
