Psychological Experiments and Game Engines

I’m new to this site, and I’m trying to find out what the best game engine/programming language would be to create a relatively simple 2D game that can be used to test various hypotheses about how we use language. I hope to be doing this as part of a PhD, and at the moment I’m trying to find out how feasible the whole thing might be. Experience-wise, I’ve used C, C++, Java and Python, and I’d be most comfortable with Java and Python, with Python being my preference for this project. This will be my first time programming a game.

The mechanics of this game will be relatively simple (at least at first), and will essentially reduce to point and click. What’s going on underneath is more involved, but topographically point and click, along with drag and drop, is more or less all the player will be doing. It would also be necessary to record these clicks over time, so that there would be a spreadsheet somewhere listing the separate objects and the times at which they were selected (I’ve put a rough sketch of what I mean at the end of this post).

One other thing that is absolutely necessary is that it looks good, because we’re trying to make these experiments as appealing as possible to people. So graphics-wise I’d like to use a faux 3D effect in a 2D environment, like they had in New Super Mario Bros. I’d be aiming for, at a minimum, the quality you’d get in the DS version of that game: http://www.youtube.com/watch?v=V7IybLIBaBQ

In terms of presentation, this will initially be a single screen displayed to players, where they can do their pointing and clicking, but in time I hope to expand it into a side scroller or a top-down format. Something else I hope to add is the ability to access different parts of the wider game world by clicking on an object during gameplay. The game will initially be designed for use on Windows, but in future we hope to port it to mobile devices. Any advice would be appreciated!
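To give a more concrete idea of the logging requirement, here’s a rough sketch in Python (my preferred language) of the sort of thing I mean. The names are just placeholders; whichever engine we end up using would call log() from its click handler:

```python
import csv
import time

class ClickLogger:
    """Writes one row per selection: participant, object clicked, seconds since start."""

    def __init__(self, path, participant_id):
        self.start = time.monotonic()
        self.participant_id = participant_id
        self.file = open(path, "w", newline="")
        self.writer = csv.writer(self.file)
        self.writer.writerow(["participant", "object", "seconds_since_start"])

    def log(self, object_name):
        elapsed = time.monotonic() - self.start
        self.writer.writerow([self.participant_id, object_name, round(elapsed, 3)])
        self.file.flush()  # keep the data safe even if a session ends abruptly

    def close(self):
        self.file.close()

# Whatever engine we settle on, its click/drag handlers would just call log():
# logger = ClickLogger("session_01.csv", "P01")
# logger.log("red_square")
```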

Hi, and welcome to the forums!

It seems like an interesting project. One thing that is not really clear from your description is whether you intend to use 2D images or 3D models to represent your game world. Those two choices lead to radically different approaches to game design.

I’m not really sure what kind of specific advice you are looking for, but I’m convinced that the people here are more than willing to answer any questions you might have.

Good luck!

Sorry, I was a bit vague there. I was trying to list the functionality that would be required, in the hope of getting some feedback about which engines would provide it, while avoiding discussing what the game would actually be doing, because it takes a while to describe what we’re trying to do and I didn’t want to turn people off with a long post. I figure I might as well bite the bullet on that one though, so apologies in advance.

The basic idea behind this game is to turn the procedures used in a particular branch of psychology into a game, with a view to speeding up the experiments and making them more appealing to people. The general outline of these procedures is to train a series of relations between stimuli. The stimuli are usually nonsense syllables or abstract shapes, so your stimuli could be YIG, FDT and a red square. The relations that are trained between these stimuli are called relational frames. It gets pretty complex pretty fast, so I’ll keep it simple by just listing some of the relations that are trained, such as equivalence (A=B), comparison (A<B), distinction (A NOT B), opposition, hierarchy, spatial and temporal. E.g. you might train a relation of equivalence between YIG and the red square, and one of distinction between YIG and FDT. This series of relations is called a relational network.

When you train a relational network, a series of derived relations arises in addition to the trained ones. E.g. in the relational network I just described, where a relation of equivalence has been established between YIG and the red square, and one of distinction between YIG and FDT, one derived relation would be that of distinction between FDT and the red square. If you represent YIG by A, FDT by B and the red square by C, you can see that even though only 2 relations were trained – YIG (A) --> red square (C) and YIG (A) --> FDT (B) – a further four were derived: C --> A, B --> A, B --> C and C --> B. Here --> represents a relation in general, and not any particular relation.
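In case it helps to see that concretely, here’s a rough Python sketch of how I imagine representing the relational network and deriving the entailed relations. It only covers the same/different frames from my example, and the rules and names are my own simplification for illustration rather than a procedure from the literature:

```python
# A rough sketch of a relational network: each trained relation is a
# (left, relation, right) triple.  A = YIG, B = FDT, C = red square.
trained = [
    ("YIG", "same", "red_square"),   # equivalence, trained
    ("YIG", "different", "FDT"),     # distinction, trained
]

# Same and different are symmetrical, so the mutually entailed relation is the
# relation itself.  (For comparison it would not be: A < B entails B > A.)
MUTUAL = {"same": "same", "different": "different"}

def derive(trained):
    """Return the relations entailed by the trained ones (but not the trained ones themselves)."""
    relations = set(trained)
    changed = True
    while changed:
        changed = False
        # Mutual entailment: A same B entails B same A, and so on.
        for left, rel, right in list(relations):
            flipped = (right, MUTUAL[rel], left)
            if flipped not in relations:
                relations.add(flipped)
                changed = True
        # Combinatorial entailment through a shared stimulus.
        for a, rel1, b in list(relations):
            for a2, rel2, c in list(relations):
                if a != a2 or b == c:
                    continue
                if rel1 == "different" and rel2 == "different":
                    continue  # two distinctions entail nothing in this simple scheme
                rel = "same" if (rel1 == "same" and rel2 == "same") else "different"
                if (b, rel, c) not in relations:
                    relations.add((b, rel, c))
                    changed = True
    return relations - set(trained)

for left, rel, right in sorted(derive(trained)):
    print(left, rel, right)   # prints the four derived relations from my example
```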

Something else they do is to train contextual cues. A contextual cue can be thought of as something in your environment that has a meaning for you. Probably the most obvious example is language. So e.g. if you see the word dread in a particular context, let’s say a book about a pirate called Roberts, then depending on who you are, you might get an unpleasant feeling in the pit of your stomach, or you might start to expect a monologue on the merits of land wars in Asia. That is the general case. In the work I will be doing, the contextual cues will be initially meaningless stimuli, such as #### or &&&& (although they could be anything). In the procedures we’re hoping to emulate within the game, participants are trained to associate these stimuli with relations such as same, different, before, after, greater than, or opposite. E.g. #### could come to represent same and &&&& after.
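In code terms I picture the cue training simply ending up as a lookup from an arbitrary stimulus to the relation it has come to signal, something like this (placeholder names and values only):

```python
# A trained contextual cue is just an arbitrary stimulus that has come to
# signal a relation (values per the example above).
contextual_cues = {
    "####": "same",
    "&&&&": "after",
}

def relation_signalled(cue):
    """Return the relation a cue has been trained to stand for, or None if untrained."""
    return contextual_cues.get(cue)

print(relation_signalled("####"))   # -> same
print(relation_signalled("@@@@"))   # -> None, not yet trained
```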

What we’re trying to do is use the patterns of responding (more or less what we normally call thinking) that have been established by the training to guide players’ actions in a game. These patterns are the contextual cues and relational networks. This is kind of cool because we’re operating at an entirely abstract level, where relations between novel stimuli, and contextual cues that have been explicitly taught to people, are used to guide choices, rather than the way relations are normally presented to people, i.e. using natural language words, e.g. the rules of a game. Doing it this way ensures that you have created an entirely artificial verbal environment, and this is very important when it comes to control, and to shaping people’s verbal responding (what they do in response to objects in the environment that have a meaning for them) in ways that can give you useful information.

Specifically, what we want to look at is how rules (particular configurations of the contextual cues combined with relational networks) affect how people respond to changed circumstances. E.g. if we give people a rule in this way, and then change the state of the game so that the rule no longer reflects what is going on, how will people respond? Would there be a difference between how they respond compared to a group that wasn’t given that rule, and instead had to make their way through the game intuitively? We also want to look at how people with psychopathologies, such as anxiety or depression, perform in these studies. It’s starting to look as if excessive rule following is a large part of these conditions, and looking at how people respond to these situations might shed some light on why that is. So at the moment we’re thinking of using these trained relations as a guide that the player has to refer to when playing the game, e.g. correctly deriving a relation might open a box, or a contextual cue of before could prompt a given action.
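To sketch how I picture that last bit working in the game, the box-opening check might be as simple as the following (all the names are placeholders, and the relations set is the trained-plus-derived network from my earlier sketch):

```python
# Placeholder sketch: the box opens only if the relation signalled by the
# on-screen cue actually holds between the two stimuli the player picked.
relations = {
    ("YIG", "same", "red_square"), ("red_square", "same", "YIG"),
    ("YIG", "different", "FDT"), ("FDT", "different", "YIG"),
    ("red_square", "different", "FDT"), ("FDT", "different", "red_square"),
}

def try_open_box(cue_meaning, relations, first_pick, second_pick):
    """True (box opens) if the cued relation holds between the player's two picks."""
    return (first_pick, cue_meaning, second_pick) in relations

print(try_open_box("same", relations, "YIG", "red_square"))  # True: the box opens
print(try_open_box("same", relations, "YIG", "FDT"))         # False: it stays shut
```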

In terms of game complexity, we’re looking to keep things simple. We hope to start off with a straightforward 2D game that reflects the training and testing procedures, and if that’s successful, ramp up the complexity so we can do experiments on rule following. So I suppose what I’m asking is: would Panda be a good engine to use to develop something like this, and if not, what engines might be?