Oculus Rift

Would you be interested in Oculus Rift support in Panda3D?

  • Yes
  • No
  • Don’t care.


I’ll be receiving one of the Oculus Rift developer kits sometime in the next month. I’m pretty excited to try to integrate it into my ‘game’, code.google.com/p/stableorbit, which uses Panda3D.

Would anyone else be interested in me writing a plugin for Panda3D to support the rift?

I prefer Python over C++; hopefully that’s OK. Does anyone have any suggestions on how I should go about it?

oculusvr.com

Awesome!

Unfortunately, it will have to be in C++. It is not possible to write such plugins for Panda3D in Python.
I’m willing to help you out if you’re stuck, though.

I appreciate it. It’s been about 10 years since I last touched any C++. Any suggestions on where I should start?

I don’t know, this depends on the kind of APIs that Oculus Rift provides. I’d have to know more about the APIs (perhaps see some developer resources) before I can say where you should start.

So I received my dev kit, and I’ve been thoroughly enjoying it. I’d like to get the official SDK working with Panda3D, but they have yet to release the Linux version.

In the meantime, I’ve started to try and get stereo rendering working. So far, I’ve:

set the resolution and “side-by-side-stereo 1” in config.prc
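
In Python, that amounts to a couple of PRC settings applied before the window is created; a minimal sketch (1280x800 being the devkit resolution):

    from panda3d.core import loadPrcFileData

    # Apply before ShowBase opens the main window.
    loadPrcFileData("", "win-size 1280 800")
    loadPrcFileData("", "side-by-side-stereo 1")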

Now I need to apply a barrel shader to each DisplayRegion to get the distortion right.

Here’s an example of a GLSL texture shader for barrel distortion:

github.prideout.net/barrel-distortion/

I’m trying to achieve something like this: img15.imageshack.us/img15/299/sc … 646629.png

Would I then apply it to the camera node or to each DisplayRegion?

I’m sorry if I’m way off mark here. Thanks for the help!

At the moment I’m trying to get it working for the ‘Roaming Ralph’ tutorial. Any suggestions would be helpful.

panda3d.org/manual/index.php … ge_Filters

That appears to be what I’m looking for. Now to get it working.
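
For anyone following along, the basic pattern from that manual page looks roughly like this; the shader file names are placeholders for whichever barrel-distortion shader you end up with:

    from direct.filter.FilterManager import FilterManager
    from panda3d.core import Texture, Shader

    # Render the main scene into a texture; FilterManager hands back a
    # fullscreen quad displaying it, and the distortion shader goes on that quad.
    manager = FilterManager(base.win, base.cam)
    scene_tex = Texture()
    quad = manager.renderSceneInto(colortex=scene_tex)

    # "barrel.vert" / "barrel.frag" are hypothetical GLSL files implementing the warp.
    quad.setShader(Shader.load(Shader.SL_GLSL, vertex="barrel.vert", fragment="barrel.frag"))
    quad.setShaderInput("tex", scene_tex)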

I plan on getting one of these. Does it come with anything that you can try out of the box, like a demo? I could only imagine how much fun it would be to work with this in Panda3d, so I’d be interested to see how your plugin attempt goes. I could help out as well, but that would still be a couple months away at least (they’re estimating a June delivery date right now.)

One of the things I always wondered was how you could simulate large environments in a small room; I didn’t even really consider this idea:

http://www.newscientist.com/article/dn23321-virtual-reality-creates-infinite-maze-in-a-single-room.html

@cslos77: I believe that the downloadable SDK comes with a few demos.

I’ve been able to use the NonlinearImager with a distortion lens to get an output similar to the ones you see in the Oculus demos. This approach does not require shaders, and is therefore probably the best way to implement it. We might want to find a better way to integrate this into Panda3D so that all this setup is not necessary; perhaps there could be a special DisplayRegion implementation that transparently uses render-to-texture in order to apply the distortion effect in postprocessing.

I would love to add full support for the Oculus Rift to Panda3D, but without a developer kit I’m unable to test my efforts. Failing that, I’ll do everything I can to help others with their effort to get it working. :slight_smile:

Hi all,

I’ve managed to get the Rift working nicely with Panda now. I’ll polish it up a bit and pop the code on GitHub. It’s C++ only at the moment, but I’m planning to add the Python interfaces.

To ensure the distortion was correct, I used the calculations in the Rift SDK documentation and put them into two matrix lenses (after having transposed from Y-up to Z-up). I have both the normal distortion and the distortion-plus-chromatic-aberration shaders working, and I intend to test performance with the distortion offsets rendered as a displacement lookup in another texture.

Here’s a screeny of the output so far (without chromatic aberration offsets).


Hey, welcome to the forums! :slight_smile:

Excellent work! I’d love to see your code. How did you implement the distortion filter? Did you use a shader, a lookup texture, or a deformed card?

To make the Rift integration simpler and more seamless, I’ve added support for stereo FBOs in the past week. With my modifications, you can now create a FilterManager object and it will automatically work on both left and right regions simultaneously. (I haven’t committed my work yet, though.) Are you currently using two modified FilterManager objects, or have you set up your own postprocessing buffers?

Personally, I’ve already got the main C++ interfaces for the integration done (device enumeration etc.), but I haven’t worked on the distortion yet, so your efforts would perfectly complement my own. The tracker code is also done and integrated with the existing TrackerNode system. However, I can’t test it for another week and a half, when I’ll have access to my devkit. (It has been shipped to a neighbour while I am on vacation.)

My intent is to implement it in C++ all around so that it’s not just available to Python users, and to create a seamless interface so that people can easily open a window on the Rift with the distortion filters set up automatically. I also intend to provide a shaderless solution using a card with deformed UVs (a la NonlinearImager) or a lookup texture.
It is important for the integration to work seamlessly with the existing FilterManager. The problem with applying it as a FilterManager shader is that other effects will happen after the distortion, when they are supposed to happen before; and god forbid the user creates a FilterManager object of his own. This applies to the distortion shader, anyway; I haven’t looked into the chromatic aberration compensation shader, so it may well work fine in FilterManager.

The way I see it, there are two ways I could seamlessly implement the window set-up into the Panda codebase:

  • Provide a neat make_window() on the HMD object that creates a window of the right size on the appropriate display with all the filters set up. Relatively easy to implement.
  • Provide a special BufferedDisplayRegion that people can use to apply the filters and distortion. Trickier to implement, yet more powerful and generally useful.

I’m personally leaning toward the former, since it is more integrated and automatic; but there is something to be said for the latter, which gives the user more control, though it requires a more intimate understanding of the system. I’d love to get the input of others on this; or perhaps there’s a better option I’m missing.

Glad to join and happy to contribute! Great to hear you’ve got most of the hardware integration in place. Is your work in progress in the development snapshots? My plan was to post my C++ implementation on GitHub first and then add the Python interfaces, but I can share it some other way if that’s easier/preferred.

Given that the distortion is non-linear, and deformed geometry seemed too loose an approximation, I’m using a shader similar to the Rift SDK’s to warp the input texture coordinates into the source texture buffer. The chromatic aberration compensation is the same shader with a few extra parameters to generate a texture lookup per channel. I’ve also implemented a lookup texture, but I need to learn more about shader profiling: while that shader is much simpler, I’m interested to know how much latency is introduced by the extra texture lookups (2 vs 1 for normal and 5 vs 3 for chromatic aberration). It would be interesting to test over a range of graphics cards.

Being new to Panda, I couldn’t get the stereo channel regions/cameras to work quite right, and I hadn’t even come across the NonlinearImager or TrackerNode. I’m currently rendering the scene to square display regions in a 2048x1024 texture buffer, each with a separate camera and matrix lens. The projection matrix is calculated for the Rift FOV and the lens/pupil offset per eye, and corrects for the destination aspect ratio. The cameras are translated to the eye offset with no convergence. The same shader is used per eye, but with different offset parameters. The shaders are attached to two cards side by side in front of an orthographic camera (so, now having looked, it seems I rolled my own filter manager there). I’m just using the sensor fusion data to rotate the root camera node.
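
In rough Panda terms, that layout is something like the following sketch; the lens and eye-offset values here are placeholders rather than the real Rift numbers:

    from panda3d.core import Camera, MatrixLens

    # Off-screen scene buffer; each eye renders into a square half of it.
    buf = base.win.makeTextureBuffer("rift-scene", 2048, 1024)

    for i, eye in enumerate(("left", "right")):
        cam_node = Camera("cam_" + eye)
        cam_node.setLens(MatrixLens())  # the per-eye projection from the Rift SDK goes here
        cam_np = base.camera.attachNewNode(cam_node)
        cam_np.setX(-0.032 if eye == "left" else 0.032)  # placeholder eye offset, no convergence

        # Each eye gets its own square display region in the buffer.
        dr = buf.makeDisplayRegion(0.5 * i, 0.5 * (i + 1), 0, 1)
        dr.setCamera(cam_np)

    scene_tex = buf.getTexture()  # fed to the two distortion cards/shaders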

On a side note, I’m surprised the Rift SDK shader doesn’t have separate terms for the lens and pupil offsets: given that the lens centre is fixed, I would have thought that a very different inter-pupillary distance (e.g. for children) would have a dramatic effect on the distortion required.

It would be good to integrate better with the existing Panda stereo channel classes and to help where I can with your Rift support. Have a good vacation; I look forward to collaborating when you get back!

My work in progress (little as it is) isn’t committed anywhere yet; I have no ability to test and as such I don’t want to commit something when the entire design might turn out to be wrong. It’s mostly cosmetic high-level interfaces for now. It’s wrapped using Interrogate, like the rest of Panda.
It’d be great if you could share your work on github or bitbucket or e-mail or whatever you prefer.

I’m not entirely convinced that using a deformed card will be too loose an approximation; it all depends on the tessellation. One could have a grid of 1280 by 800 vertices, at which point it’s guaranteed to be an exact match (but you’ll have an insane number of vertices). I’ll have to run some tests and compare it pixel-by-pixel with the shader results to see how much resolution I would need for it to be a near-exact copy. In the worst case, I might keep it as a fallback approach for devices that don’t have shaders. (Not that anyone with a Rift would have business with such a graphics card…)
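
As a rough Python sketch of how such a pre-warped card could be built (the distortion coefficients and grid size are placeholders, not the real Rift values):

    from panda3d.core import (Geom, GeomNode, GeomTriangles, GeomVertexData,
                              GeomVertexFormat, GeomVertexWriter, NodePath)

    # Placeholder barrel-distortion coefficients; the real ones come from the Rift SDK.
    K0, K1, K2, K3 = 1.0, 0.22, 0.24, 0.0

    def warp(u, v):
        # Scale the texture coordinate radially around the lens centre (0.5, 0.5).
        x, y = u - 0.5, v - 0.5
        r2 = x * x + y * y
        s = K0 + K1 * r2 + K2 * r2 * r2 + K3 * r2 * r2 * r2
        return 0.5 + x * s, 0.5 + y * s

    def makeDistortedCard(cols=53, rows=66):
        # Tessellated card with pre-warped UVs, so no shader is needed.
        vdata = GeomVertexData("card", GeomVertexFormat.getV3t2(), Geom.UHStatic)
        vertex = GeomVertexWriter(vdata, "vertex")
        texcoord = GeomVertexWriter(vdata, "texcoord")
        for j in range(rows + 1):
            for i in range(cols + 1):
                u, v = i / float(cols), j / float(rows)
                vertex.addData3f(u * 2.0 - 1.0, 0.0, v * 2.0 - 1.0)  # card spans -1..1 in X/Z
                texcoord.addData2f(*warp(u, v))
        tris = GeomTriangles(Geom.UHStatic)
        for j in range(rows):
            for i in range(cols):
                a = j * (cols + 1) + i
                b, c, d = a + 1, a + cols + 1, a + cols + 2
                tris.addVertices(a, b, d)
                tris.addVertices(a, d, c)
        geom = Geom(vdata)
        geom.addPrimitive(tris)
        node = GeomNode("distorted-card")
        node.addGeom(geom)
        return NodePath(node)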

I’m also curious why you are using a matrix lens - is it because the calculations were readily available in the Rift documentation and you didn’t want to bother with Panda’s normal stereo lens, or does the projection require parameters that are not available in the regular Panda lens?

If you have any questions, don’t hesitate to ask!

OK, I tried my method of using a card with distorted texture coordinates to avoid having to use a shader. It works quite well. I set up a shader that shows the difference in texture coordinates between both approaches, so that I can see how large the error of my approach is. With a card of only 53x66 vertices (6760 triangles in total), I get the result attached to this post. The effect is greatly exaggerated because you otherwise can’t see anything.
I configured the shader to scale the result by twice the screen resolution, so that the colours clip when the difference exceeds half a pixel. As you can see, they don’t, which means that the result will be identical to the shader-based approach at this resolution. (I also tried the parameters for the consumer version, with the same result.)

However, as you can see, the error is larger near the edges, which means that this approach could be improved by distorting the card vertices instead of the texture coordinates; that should theoretically give an even error across the board with fewer vertices. I will need to find the inverse of the warp function for that, though.

By the way, you mentioned using a lookup texture to speed up the calculation. I experimented with something like that, but the problem is that the texture values are in the range 0-255, which is not enough precision to represent the texture coordinates properly, and yields scaling artifacts as a result. It could work if you used the other channels of the texture to store the low bits of the coordinates, but then the question is whether that calculation would be faster than the HmdWarp function. I doubt it.


Hi, I’ve been away myself, so I’m just getting back to coding. Slowly getting my head around the Panda way of doing things!

The card with distorted texture coordinates looks good, particularly given the triangle count.

Yes, you need the precision for the lookup. I too doubt it’s an improvement over the basic shader; however, with a more accurate warp calculation it might win out. I found an interesting discussion on the Oculus forums: https://developer.oculusvr.com/forums/viewtopic.php?f=20&t=353

The matrix lenses were purely for my own convenience and understanding, not because of any deficiency in Panda (that I know of). The Rift SDK StereoConfig class can generate the appropriate matrices, and having now found y_to_z_up_mat(), you could use them directly.
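
For reference, feeding such a matrix into Panda might look roughly like the sketch below; the matrix itself is a stand-in, and which side the coordinate conversion is folded in on depends on how the SDK exports it, so treat this as an assumption:

    from panda3d.core import MatrixLens, Mat4

    # Placeholder for a per-eye projection matrix generated by the Rift SDK's
    # StereoConfig (Y-up); just the identity here for illustration.
    proj_rift = Mat4.identMat()

    lens = MatrixLens()
    # Fold the Y-up to Z-up conversion into the user matrix.
    lens.setUserMat(Mat4.yToZUpMat() * proj_rift)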

Looking at the TrackerNode, is the VrpnClient the best example of how to implement a client device?

Yes, my implementation uses classes that derive from ClientBase (corresponding to OVR::DeviceManager) and ClientTrackerDevice (corresponding to OVR::SensorDevice). If you want, I can clean up my tracker code over the weekend and share it so that you don’t have to do double work.

I did end up implementing the reverse distorted card approach, but I failed to derive the inverse barrel distortion function and instead went for an approximation. The approximation is a bit slower to compute (though this is only a setup cost, and still negligible), but even with it I reduced the number of required vertices to 1280 (2418 triangles) without loss of detail. This is about three times fewer than the other approach with the distorted UVs.
So, I think this is an effective shader-free approach for applying the distortion in a manner that is still pixel-perfect. I haven’t run performance tests against the shader approach, however.

Please do share your device code. I have a very basic hardware interface at the moment and it would be good to put effort into helping build a ‘proper’ device implementation.

I’ve posted my initial code at github.com/wamonite/Pandrift. It works, but there is plenty of room for improvement, which I’m aiming to fix up quickly.

For reference, there is an implementation of the distortion inverse in Util/Util_Render_Stereo.cpp in the Rift SDK.

Hi guys,

I’m completely new to Panda, but I’ve been given the interesting task of bringing Rift support to a Panda-based environment for psychological experiments, called SNAP.
Thanks for your work so far; starting an integration hack on my own would end up in a mess…

However, is the code you mentioned above available in the latest devel build?

Not yet, sorry. I’ve been so overwhelmingly and unexpectedly busy that I haven’t been able to dedicate much time to it. I’ll try to find some time soon.

Hi,

Don’t hurry, take your time! I will try to find a more naive approach; for the first steps, we won’t need an elegant, high-performance solution.

What do you think about rendering the scene twice into two GraphicsBuffers and transforming the output through the barrel distortion pixel shader, projected into two DisplayRegions? All done manually in the application script, written in Python.

Are there any particular issues that might cause trouble?

Yes, that makes sense. There’s an alternative approach, though, if you’re feeling adventurous. As part of my work on a seamless Rift integration, I’ve recently checked in code to the GraphicsBuffer system that allows you to use a single buffer with one attachment per eye, though it’s still a bit unused and experimental. It uses Panda’s multiview textures, so it needs only one buffer and one texture object, and the stereo setup is virtually seamless. To take advantage of this feature, use setStereo(True) in the FrameBufferProperties.

Then, all you’d need to do is apply that texture to a fullscreen quad (with the pixel shader applied) and flip the switch to enable side-by-side stereo in Panda (which automatically divides your window into two regions); Panda will then choose the right texture view while rendering each display region.
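
For anyone who wants to try it, a rough sketch of that setup might look like the following; the buffer flags, sort values and set-up order here are my assumptions rather than a tested recipe:

    from panda3d.core import (loadPrcFileData, FrameBufferProperties, WindowProperties,
                              GraphicsPipe, GraphicsOutput, Texture, CardMaker)

    # Must be set before the main window is opened.
    loadPrcFileData("", "side-by-side-stereo 1")

    from direct.showbase.ShowBase import ShowBase
    base = ShowBase()

    # A single offscreen buffer with one colour attachment per eye.
    fbprops = FrameBufferProperties()
    fbprops.setRgbColor(True)
    fbprops.setDepthBits(1)
    fbprops.setStereo(True)

    winprops = WindowProperties.size(1280, 800)
    buf = base.graphicsEngine.makeOutput(
        base.pipe, "stereo-buffer", -10, fbprops, winprops,
        GraphicsPipe.BFRefuseWindow, base.win.getGsg(), base.win)

    scene_tex = Texture()
    buf.addRenderTexture(scene_tex, GraphicsOutput.RTMBindOrCopy, GraphicsOutput.RTPColor)

    # Stereo camera rendering the scene into the buffer (left and right views).
    base.makeCamera(buf, stereo=True, lens=base.camLens)

    # Fullscreen quad showing the multiview texture; with side-by-side stereo
    # enabled, each half of the window picks the matching texture view.
    cm = CardMaker("fullscreen-quad")
    cm.setFrameFullscreenQuad()
    quad = base.render2d.attachNewNode(cm.generate())
    quad.setTexture(scene_tex)
    # quad.setShader(...)  # the barrel-distortion shader would be applied here

    base.run()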