Hi there, I’m just starting to convert a program previously written in plain Python/OpenGL over to Panda3D, and I’m trying to figure out a good way to use webcam images as textures.
In my previous app I used the Python VideoCapture module to get Python Imaging Library (PIL) images and then converted them directly to textures. I’ve tried to do a similar thing in Panda, first using PNMImage’s setXelVal to convert a PIL image to a PNMImage and then using that for the texture. This method was far too slow (2-3 seconds/frame!), so I started poking around and playing with Texture.setRamImage, but again, copying from the PIL image into the PTAUchar required by setRamImage was far too slow (still 1-2 seconds)…
I used to be able to get multiple cameras giving 15+ frames/sec using the same PIL calls. So is there a more direct way to get pixel data into a texture? And is there a direct way to get pixel data back out of a texture as well?
Also, I’ve noticed there is an OpenCVTexture in Panda which seems like it should let you use a webcam, but I haven’t for the life of me been able to figure it out. Any tips on using this?
You should be able to call OpenCVTexture’s fromCamera() method to set up a texture from the default webcam, as seen by the OpenCV library.
Of course, you should also be able to copy the texture in one pixel at a time via Texture::set_ram_image(), if you do it in C/C++ rather than Python; this is basically what the OpenCVTexture code is doing. Having Python touch each pixel is going to be far too expensive, though.
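In Python terms, the bulk copy being described amounts to reordering the PIL buffer into the layout Panda expects for its ram image: rows stored bottom-up, channels in BGR order. Here is a pure-Python sketch of that reordering; the helper name `to_panda_layout` is made up for illustration, and in practice PIL’s raw encoder (e.g. `im.tostring("raw", "BGR", 0, -1)`) performs the same conversion in a single C call:

```python
def to_panda_layout(rgb, width, height):
    """Convert a top-down, RGB-ordered image buffer into the layout
    Panda3D expects for a ram image: bottom-up rows, BGR channels.
    Uses only bulk slice operations, no per-pixel Python loop."""
    stride = width * 3
    rows = [rgb[i * stride:(i + 1) * stride] for i in range(height)]
    out = bytearray()
    for row in reversed(rows):          # flip to bottom-up row order
        swapped = bytearray(row)
        # swap the R and B channel planes in bulk via extended slices
        swapped[0::3], swapped[2::3] = row[2::3], row[0::3]
        out += swapped
    return bytes(out)
```

The point is that every operation above moves a whole plane or row at a time; nothing iterates pixel by pixel at the Python level.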
In the past I haven’t had problems using tostring() and fromstring() in PIL and pygame to pass webcam images back and forth at a decent rate… is there a way to write pixel data in bulk to a PTAUchar? I’ve just been using the pushBack method, and I think that might be where a lot of my slowdown is coming from.
Cheers!
Hmm, yes, if we had a setString() or some similar method on PTAUchar, you could move the pixels through Python easily. That would be a useful addition. Sorry to say, it doesn’t exist yet in the current release of Panda.
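The cost gap being discussed can be illustrated with a bytearray standing in for PTAUchar (a stand-alone sketch, not Panda code): appending per byte means hundreds of thousands of Python-level calls for every frame, while a bulk constructor is a single C-level copy, which is what a setString()-style method would give you.

```python
# A 320x240 RGB frame is 230,400 bytes; use dummy data as a stand-in.
frame = bytes(range(256)) * 900

# pushBack-style path: one Python call per byte, ~230,000 calls/frame.
slow = bytearray()
for byte in frame:
    slow.append(byte)

# Bulk path: one C-level copy of the entire buffer.
fast = bytearray(frame)

assert slow == fast  # same pixels, wildly different per-frame cost
```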
I’m also not sure why the OpenCVTexture is failing. I’ve never actually used the library to display a webcam image before, so I don’t know if there’s a problem in the code; but we’re following the advertised interface. I don’t know whether the guy who originally implemented OpenCVTexture ever saw it work with a webcam or not. But I do know it works just fine to play an .avi file, and it’s almost exactly the same interface. Do you get any error messages to the console? I suppose you can try different parameters to fromCamera(), e.g. fromCamera(0), fromCamera(1), to see if naming a specific webcam by index helps.
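One way to automate that probing is a small helper that walks the camera indices and reports the first one that opens. This is illustrative code, not a Panda3D API; `open_camera` stands in for a bound method like `tex.fromCamera`, which per the interface returns a true value on success and a false value on failure:

```python
def find_camera(open_camera, max_index=4):
    """Try camera indices 0..max_index-1 and return the first index
    for which open_camera(index) reports success, or None if every
    index fails."""
    for index in range(max_index):
        if open_camera(index):
            return index
    return None
```

With a real texture you would call something like `find_camera(tex.fromCamera)` and fall back to an error message when it returns None.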
Yeah, it seems to be playing .avi files just fine (quite well, in fact). I have an interactive interpreter hooked up to Panda, and when I try fromCamera(0) it returns 1 and turns the sphere black. If I do fromCamera(1), nothing happens and it returns 0.
It’s strange, because it seems like fromCamera(0) almost works, but not quite.
That’s a shame… I guess I may have to hold off on the port for a bit…
Just uploaded some code that can use WebcamVideo or OpenCVTexture to create a card… hopefully this is useful to youze all. I know I have struggled with this problem over the years… looks pretty simple now. ;]
I integrated OpenCV and Panda3D together and was able to set webcam video as a Panda3D texture with a normal frame rate. You can see my code in this Panda3D thread: Reading from Webcam to display as a background
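For later readers, the OpenCV-to-Panda3D direction has one wrinkle worth noting: OpenCV frames are stored top-down while Panda’s ram images are bottom-up (their BGR channel orders already match). Below is a pure-Python sketch of the row flip; `flip_rows` is a hypothetical name, and in a real per-frame loop you would let OpenCV do the flip in C (`cv2.flip(frame, 0)`) before handing the bytes to the texture:

```python
def flip_rows(data, width, height, channels=3):
    """Reverse the row order of a packed image buffer using bulk
    slices: OpenCV frames are top-down, Panda3D ram images bottom-up."""
    stride = width * channels
    return b"".join(data[i * stride:(i + 1) * stride]
                    for i in reversed(range(height)))

# Per-frame usage sketch (assumes cv2 and a Panda3D Texture `tex` set
# up via tex.setup2dTexture(w, h, Texture.TUnsignedByte, Texture.FRgb);
# Panda's F_rgb ram layout is BGR, matching OpenCV's native order):
#
#   ok, frame = cap.read()            # cap = cv2.VideoCapture(0)
#   if ok:
#       tex.setRamImage(flip_rows(frame.tobytes(), w, h))
```

Newer Panda3D builds accept a plain bytes object in setRamImage(), so no PTAUchar juggling is needed at all.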