Performance with a lot of particles.

That code you link to simply creates a new copy of a model (“smiley” in this case, which itself has about 1000 vertices) in a different node for each point position. That's certainly not a good way to render a cloud of points: it creates one vertex data and one Geom for every single “point”.

If you want one-pixel points, the way to do it is to pack as many points as you can into as few vertex datas as you can. With 30 million points, that will still be more than one vertex data, probably more than a handful, depending on your graphics card.
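Just for a rough sense of scale (assuming the simplest V3 vertex format, three 32-bit floats per point):

30,000,000 points * 12 bytes/point = ~360 MB of raw vertex data

so you'll want to break the cloud into chunks of, say, a million points each in any case, both to stay within what the card is comfortable with and so Panda's culling can throw away the chunks that are offscreen.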

One easy way to improve the referenced code is to call:

render.clearModelNodes()  # remove the ModelNodes that would block flattening
render.flattenStrong()    # then collapse everything into a few big Geoms

at the end. This will flatten the models into a handful of vertex datas. You've still got 1000 times as many vertices as you're actually seeing, though. It would be better to use a model that is just a single point. Better still, create the points yourself using the low-level Geom calls (a sketch follows below), or write them all to an egg file and load that file, instead of doing all of this load-and-copy for each point.
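Here's a minimal sketch of the low-level approach. It's untested; the function name, the assumption that your points arrive as a list of (x, y, z) tuples, and the chunk size of one million are all mine, not anything from the code you linked:

from panda3d.core import (Geom, GeomNode, GeomPoints, GeomVertexData,
                          GeomVertexFormat, GeomVertexWriter)

def makePointCloud(points, chunkSize = 1000000):
    # Split the cloud into chunks so no single vertex data gets huge.
    node = GeomNode('cloud')
    for start in range(0, len(points), chunkSize):
        chunk = points[start : start + chunkSize]

        # One vertex data holds every point in this chunk.
        vdata = GeomVertexData('chunk', GeomVertexFormat.getV3(), Geom.UHStatic)
        vdata.setNumRows(len(chunk))
        vertex = GeomVertexWriter(vdata, 'vertex')
        for x, y, z in chunk:
            vertex.addData3f(x, y, z)

        # One GeomPoints primitive draws all of those vertices.
        prim = GeomPoints(Geom.UHStatic)
        prim.addNextVertices(len(chunk))
        prim.closePrimitive()

        geom = Geom(vdata)
        geom.addPrimitive(prim)
        node.addGeom(geom)

    return node

render.attachNewNode(makePointCloud(myPoints))

The Geom.UHStatic usage hint tells Panda the vertices will never change, so it can safely cache them on the graphics card.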

The bottom line is, if your graphics card can handle it, Panda can handle sending the rendering commands to it. But you do need to use a bit of foresight when structuring your scene.

David