purpose of panda3d.py

I was wondering… what problem does the panda3d.py module loader actually solve? It just loads binary modules and serves them. Essentially the same could be achieved by having a panda3d folder on the PYTHONPATH with appropriately named binary modules with .pyd extensions inside it. Of course, it also merges a few modules together, but that's cosmetics.
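
Roughly a layout like this, which Python can already import the official way (a sketch; the module names are illustrative):

panda3d/
    __init__.py     (empty; just marks the package)
    core.pyd        (importable as panda3d.core)
    bullet.pyd      (importable as panda3d.bullet)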

Thing is, imo panda3d.py creates more problems than it solves. One of Python's core values is rapid development, and panda3d.py takes half of that away from the user by disrupting proper support for code completion. In Eclipse this can be mitigated a little with predef files, but PyCharm does not bend to my will yet.

So I started experimenting. I made symbolic links for each dll that panda3d.py imports, pointing to the same filename but with a .pyd extension. So far so good: now I can import those modules directly and get full code completion. With some Python magic I came up with my own p3d module that replaces panda3d and does the same job, except it now shows code completion in PyCharm with ease. It still needs some guidance to support code completion for new builtins, but that's not Panda's fault.
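
The gist of it, heavily simplified (the attached module does a bit more):

# p3d/__init__.py -- simplified sketch of the approach
import pandac.PandaModules      # preloads the dlls, like panda3d.py does

# the symlinked .pyds (libpanda.pyd -> libpanda.dll, etc.) can now be
# imported with plain import statements that an IDE can follow statically:
from libpandaexpress import *
from libpanda import *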

So, maybe it is a good idea to do away with the panda3d.py magic and just import things the official way? We all know Python gives us a shotgun and it is our responsibility not to shoot ourselves in the foot with it. In this instance I believe that is exactly what happened: blown-off feet.

I am attaching my proof-of-concept module loader. Can't wait to hear everyone's opinions.
p3d.zip (4.63 KB)

I am sorry, I fail to see how on earth that could be related to what I am saying.

I am also posting a better version of my drop-in replacement for panda3d.py. It seems to work the same, except the IDE is no longer confused about code completion. It would be nice if rdb could voice his opinion :slight_smile:
panda3d.zip (5.84 KB)

The rationale is here:
panda3d.org/blog/?p=22

In my mind it’s actually more of a preparation for the next step: instead of putting all the Python bindings into the C++ dlls, put them in separate .pyd files named “core.pyd” and “physx.pyd” and “bullet.pyd” and whatnot, and then putting those into one “panda3d” directory. In fact, I was planning on doing that soon.

I’m surprised to hear that simply symlinking .pyd to .dll works, though, and I’m especially surprised that this makes code completion work correctly. I always assumed that code completion tools simply couldn’t handle extension modules at all.

I’m curious: does code completion still work if you, instead of a bullet/__init__.py, create a bullet.pyd symlink right in the panda3d directory? If so, then we wouldn’t need to generate those individual subdirectories at all.

Wait a second. You have a preloader module that imports pandac.PandaModules. Like panda3d.py, this module has additional magic to preload the .dlls before importing the Python modules. If your approach works, then why would you still need to import pandac.PandaModules?

The problem with symlinking existing dlls into panda3d is changing the module name. I tested with libp3direct.dll and renamed it to direct.pyd; the problem is that Python expects an exported initdirect(), but the dll provides initlibp3direct(). Code completion for the imported module libp3direct works just fine, though. The whole purpose of direct/__init__.py and the others was to seamlessly rename the native module to the name that panda3d exposes by default.
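
So each subdirectory is just a tiny rename shim, something like:

# direct/__init__.py (sketch): the symlink must keep the libp3direct name
# so that Python finds initlibp3direct(); the package just re-exports it
# under the expected name:
from libp3direct import *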

The thing with the preloader: I was just emulating what the original panda3d.py did. Without the stuff in the preloader (by the way, the purpose of the preloader is to hide whatever is imported in there), the app throws errors like “Attempt to register type TypedWritable more than once!”. The way I see it, pandac.PandaModules is still doing the heavy lifting of loading everything, while the rest of what's inside my panda3d module just conveniently exposes it.

Regarding code completion: I do not know this for a fact, but I'm making an educated guess here. The IDE probably imports the native extension and enumerates its exported symbols somehow. The problem with panda3d.py is that native modules are imported using strings, so the actual module name is only known at runtime and the IDE cannot tell which native module is really returned under the different name.
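
In other words (illustrative):

# panda3d.py style: the module name is a runtime string, opaque to the IDE
mod = __import__('libp3direct')

# plain import: the IDE can locate the .pyd and enumerate its symbols
import libp3direct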

What you planned sounds good, way better than this band-aid of mine :slight_smile: I refrained from moving the dlls around, as I think I noticed some stuff using those dlls that are at the same time Python extensions.

OK, then perhaps when interrogate_module generates the Python module bindings it should generate an initcore() method, and compile that into a core.pyd that links in with the actual libpanda and libpandaexpress libraries. This is a bit nicer than using a symlink (which won’t work on some filesystems and can give trouble when archiving). I’ll see what I can do about implementing this into makepanda.

Thanks for looking into all of this!

Update: I’ve just checked in a number of fixes that change the way the Python bindings are built. In particular, it obsoletes panda3d.py, and instead builds a majority of the Python bindings into neat little panda3d/core.pyd, panda3d/bullet.pyd, etc.

I’d be happy to know if code completion is now working in the latest amd64 buildbot build.

Hey, that worked great right out of the box. PyCharm was able to generate rather detailed skeletons!

Maybe I could stress this matter a little more… this is just an idea.
In the generated skeletons I now see this:

def compressFile(*args, **kwargs): # real signature unknown
    """
    C++ Interface:
    compress_file(const Filename source, const Filename dest, int compression_level)
    ....

That is probably info from the docstrings in the native modules, right? Both of the best Python IDEs (PyCharm and PyDev) support type hinting, so the docstring could start with type hints like this:

def compressFile(*args, **kwargs): # real signature unknown
    """
    :type source: str
    :type dest: str
    :type compression_level: int
    C++ Interface:
    compress_file(const Filename source, const Filename dest, int compression_level)
    ....

The method signature is most likely known at docstring-generation time, so maybe it would not be a big deal to add this.

However, on their own the type hints are useless, because at least PyCharm does not pick up the method signature; the real parameter names, instead of *args, **kwargs, would have to be present. I'll snoop around and see if anything can be done about this.

Excellent.

I didn’t know about the type-hinting. I would be fine with implementing that, but many methods have multiple overloads. For instance, NodePath.setPos can be called with a variety of different sets of parameters. How do you think that could be handled?

Hmm, but how are these overloads exposed to Python? Since Python does not support C++-ish overloading, are they exposed to the Python API as one function call with a bunch of kwargs? If so, IDEs have fair support for epydoc, so something like this could be used:

@keyword thing: int

But it would be of little use, really; it would probably be just a little better than what we have now, which is C++ function declarations. Due to the absence of parameter names the IDE won't provide autocompletion. Are you aware of any way to expose the function signature from native modules? I know we can do it in Python 3 (PEP 3107). Maybe it would even be wiser to leave this until after py3 happens for Panda.
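
For reference, PEP 3107 allows something like this (hypothetical signature; the return type is made up):

# Python 3 only: annotations become inspectable metadata
def compress_file(source: str, dest: str, compression_level: int) -> bool:
    ...

# compress_file.__annotations__
# {'source': str, 'dest': str, 'compression_level': int, 'return': bool}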

Yes, Panda considers which overload to call based on the types of the args and kwargs.
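
Conceptually, the generated wrapper behaves like this (an illustration, not the actual generated code):

def setPos(self, *args, **kwargs):
    # dispatch on argument count and types, roughly what the binding does
    if len(args) == 3:        # setPos(x, y, z)
        x, y, z = args
    elif len(args) == 1:      # setPos(pos) with a point object
        pos = args[0]
    elif len(args) == 4:      # setPos(other, x, y, z), relative to another node
        other, x, y, z = args
    else:
        raise TypeError("no overload matches these arguments")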

Could we perhaps convince these tools that they are multiple functions that happen to have the same name?

I think probably not… IDEs peek into native modules by enumerating what they “export”. Since Panda3D exports only a single method that handles all the overloads, I suppose we can't even avoid *args, **kwargs for overloaded functions. In a perfect world we could somehow define all overloads with their parameter types in the docstring and hope for IDEs to support it, but afaik that is not possible; I could not find a way to do it. It could be a useful feature request for IDE developers, though. Maybe a big project having docstrings with such info would make them consider adding support for that kind of thing.

That’s a shame. We could of course do it only for the methods that don’t have overloads, but I’d hate to half-ass it.

On top of that, we have parameter coercion, which makes the whole matter more complicated. This means that you are allowed to pass a string instead of a Filename object, a tuple instead of a Vec3, etc.
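
For instance, both of these calls end up in the same C++ overload (illustrative):

np.setPos(Vec3(1, 2, 3))
np.setPos((1, 2, 3))      # the tuple is coerced to a Vec3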

A bit of background information on extension methods, from someone who once wrote Python extension modules by hand (many years ago).

Extension methods have only a very limited set of possible signatures in C.
The signature is picked by choosing from very few flags, e.g. METH_VARARGS or METH_NOARGS. METH_VARARGS, for example, means that the extension method has the following signature:

static PyObject *foo(PyObject *self, PyObject *args)

How an extension method handles the parameters (PyObject *args; a Python list or tuple is a PyObject too, so there is only one parameter on the C level!) is up to the C implementation of the extension method. Usually it involves calling the Python C API function PyArg_ParseTuple or similar.

So without knowing the C source code, and without conventions on how the docstring is annotated, there is no way to tell what parameters an extension method wants.
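
This is easy to demonstrate from the interpreter (a sketch; the exact error message may differ):

import inspect
import zlib

# zlib.compress is a C extension method; getargspec cannot recover its
# parameters and raises TypeError instead:
try:
    inspect.getargspec(zlib.compress)
except TypeError as e:
    print(e)   # e.g. "<built-in function compress> is not a Python function"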

About overloaded methods: currently the docstring for overloaded methods is created by (1) listing the C++ signatures for all C++ methods with the same name, and (2) listing the documentation blocks for all C++ methods. Interrogate does a good job here in my opinion, by merging information about all overloads in one docstring.

This means there are only two ways to go:

1.) Pick a convention on how to annotate parameters etc. within docstrings, and then re-write the documentation blocks in all extension methods. We would not need to tinker with interrogate, at least not much. I think this is not possible: we would need a hundred volunteers each willing to spend a few months on this.

2.) Tweak interrogate so it generates annotated text for all extension methods based on the C++ signatures (and the same mappings used by the Panda3D doxygen documentation). Might be possible, but I guess this is still a lot of work, and it requires someone who is familiar with how interrogate stores the information extracted from the C++ files.

Panda3D uses METH_VARARGS | METH_KEYWORDS for all exposed methods, which means there’s an additional “kwargs” dictionary passed to the method.

Option 2) is in fact not all that difficult to implement, since interrogate already contains code for this, and I’d be up for the job. The only problem is that there does not seem to be a formal syntax for annotating method overloads, which means that only one variant of, say, NodePath.setPos will show up. Since Panda3D uses method overloading for many methods, this would cause a lot of confusion.

There is no standard, but there are at least a few wannabe ones, most notably Sphinx and epydoc. The problem is that they are aimed at fully defined functions, not the f(*args, **kwargs) obtained from compiled modules.

Since there is no 100% solution atm, we can still do the best thing available. Maybe something along these lines?

"""
(some_arg: Vec3, other_arg: int) -> float
    @param some_arg: Some vector somewhere in vastness of space.
    @param other_arg: And an integer beceause we love integers.
(some_arg: int, other_arg: int) -> float
    @param some_arg: Some int because this is overload.
    @param other_arg: And an integer beceause we love integers.

This function returns magic float that is produced by using provided aruments in nuclear fusion reactor. This is odd example of function description because im bad at thinking of fake descriptions for imaginary functions.
"""

This bit is based on epydoc syntax and provides info on two overloads. The function signatures are in Python 3 function-annotation format, which right away gives a better chance of support from IDEs later on.

Additionally, the PyCharm people propose their own way of defining more complex types in docstrings (see the Type syntax section). It is part of a bigger effort to provide a standard for Python skeletons that would aid autocompletion. Maybe those could be used; things like list[Foo] or dict[Foo, Bar] seem especially useful and self-explanatory.
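
For example, a skeleton entry could carry something like this (hypothetical method; the types are made up):

def getTexturesByName(*args, **kwargs): # real signature unknown
    """
    :type name: str
    :rtype: list[Texture]
    """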

But is this syntax recognised by any tools?

Also, keep in mind that each overload usually has a different docstring, which is not reflected in your example.

No… nothing recognizes this. The main culprit is the fact that there is no way for the skeleton generator to know the parameter names. And those multiple overload definitions are just made up by me. For each overload we could then put its description under its parameters.

Or… I have another idea. Both PyDev and PyCharm can “pull” autocompletion information from a sort of stub file. In PyDev these are *.predef files that are added in the settings; we even worked out some basic things for it here. PyCharm generates those stubs itself and stores them in its settings directory. I am thinking it might be worth investing time in writing a predef file generator for IDEs to use. Remember when I asked for doxygen XML output? That's what I wanted to do back then. But there is still the problem with overloads. I tested PyCharm: I added a new function with the same name to the skeleton file that aids autocompletion, and the IDE picks up only the later function; the first one is nowhere to be seen. To make them all appear, the overload names would have to be different, but it's probably not the best idea to litter the API with things like function(), function1(), function2(), etc… I'm out of ideas.
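
For completeness, this is the kind of stub I mean, overload problem included (hypothetical content; only the names and docstrings matter, the bodies are never executed):

class NodePath:
    def setPos(self, x, y, z):
        """First overload, taking three floats."""

    def setPos(self, pos):
        """Second definition shadows the first one, so the IDE
        only ever sees this overload."""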