Psychological Experiments and Game Engines

Sorry I was a bit vague there. I was trying to list the functionality that would be required, in the hope of getting some feedback about which engines would provide it, while avoiding discussing what the game would actually be doing, because it takes a while to describe what we’re trying to do and I didn’t want to turn people off with a long post. I figure I might as well bite the bullet on that one though, so apologies in advance.

The basic idea behind this game is to turn the procedures used in a particular branch of psychology into a game, with a view to speeding up the experiments and making them more appealing to people. The general outline of these procedures is to train a series of relations between stimuli. The stimuli are usually nonsense syllables or abstract shapes, so your stimuli could be YIG, FDT and a red square. The relations that are trained between these stimuli are called relational frames. It gets pretty complex pretty fast, so I’ll keep it simple by just listing some of the relations that are trained, such as equivalence (A=B), comparison (A<B), distinction (A NOT B), opposition, hierarchy, spatial and temporal. E.g. you might train a relation of equivalence between YIG and the red square, and one of distinction between YIG and FDT. This series of relations is called a relational network.

When you train a relational network, a series of derived relations arise in addition to the trained ones. E.g. in the relational network I just talked about, where a relation of equivalence has been established between YIG and the red square, and one of distinction between YIG and FDT, one derived relation would be that of distinction between FDT and the red square. If you represent YIG by A, FDT by B and the red square by C, you can see that even though only 2 relations were trained – YIG (A) --> red square (C) and YIG (A) --> FDT (B) – a further four were derived: B --> A, C --> A, B --> C and C --> B. Here --> represents a relation in general, and not any particular relation.
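In case it helps, here’s the same example spelled out as a little Python sketch (my own illustration, with A, B and C standing in for YIG, FDT and the red square) showing how the four derived relations fall out of the two trained ones:

```python
# A = YIG, B = FDT, C = the red square. Purely illustrative, not the real procedure.

trained = {
    ("A", "C"): "same",       # equivalence trained between YIG and the red square
    ("A", "B"): "different",  # distinction trained between YIG and FDT
}

derived = {}

# Each trained relation entails one in the reverse direction
# ("same" and "different" both reverse to themselves).
for (x, y), rel in trained.items():
    derived[(y, x)] = rel

# Combining the two trained relations: A is the same as C but different from B,
# so B and C must be different (in both directions).
derived[("B", "C")] = "different"
derived[("C", "B")] = "different"

print(derived)
# {('C', 'A'): 'same', ('B', 'A'): 'different',
#  ('B', 'C'): 'different', ('C', 'B'): 'different'}
```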

Something else they do is to train contextual cues. A contextual cue can be thought of as something in your environment that has a meaning for you. Probably the most obvious example is language. So e.g. if you see the word dread in a particular context, let’s say a book about a pirate called Roberts, depending on who you are, you might have an unpleasant feeling in the pit of your stomach, or you might start to expect a monologue on the merits of land wars in Asia. This is the general case. In the context of the work I will be doing, contextual cues will be initially meaningless stimuli, such as #### or &&&& (although they could be anything). In procedures we’re hoping to emulate within the context of the game, participants in the experiments are trained to associate these stimuli with relations such as same, or different, or before, or after, or greater than, or opposite. E.g. #### could come to represent same and &&&& after.
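To make that a bit more concrete, here’s a rough sketch of how a single cue-training trial might be scored. The names are mine, and I’ve used same/different for both cues to keep it to one trial type, so don’t take it as the actual procedure:

```python
# Hypothetical trained meanings for the contextual cues (my own illustration).
CUE_MEANING = {"####": "same", "&&&&": "different"}

def feedback(cue, sample, chosen):
    """Reinforce the choice that fits the relation the cue is being trained to signal."""
    if CUE_MEANING[cue] == "same":
        correct = chosen == sample
    else:  # "different"
        correct = chosen != sample
    return "correct" if correct else "wrong"

# The sample YIG is shown alongside the cue "####"; the participant picks a comparison.
print(feedback("####", "YIG", "YIG"))  # correct - picked the matching stimulus
print(feedback("####", "YIG", "FDT"))  # wrong
```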

What we’re trying to do is use the patterns of responding (more or less what we normally call thinking) that have been established by the training to guide players’ actions in a game. These patterns are the contextual cues and relational networks. This is kind of cool because we’re operating at an entirely abstract level, where relations between novel stimuli, and contextual cues that have been explicitly taught to people, are used to guide choices, rather than the way relations are normally presented to people, i.e. using natural language words such as the rules of a game. Doing it this way ensures that you have created an entirely artificial verbal environment, and this is very important when it comes to control, and to shaping people’s verbal responding (what they do in response to objects in the environment that have a meaning for them) in ways that can give you useful information.

Specifically, what we want to look at is how rules (particular configurations of the contextual cues combined with relational networks) affect how people respond to changed circumstances. E.g. if we give people a rule in this way, and then change the state of the game so that the rule no longer reflects what is going on, how will people respond? Would they respond differently from a group that wasn’t given that rule and instead had to intuitively make their way through the game? We also want to look at how people with psychopathologies, such as anxiety or depression, perform in these studies. It’s starting to look as if excessive rule following is a large part of these conditions, and looking at how people respond to these situations might shed some light on why that is. So at the moment we’re thinking of using these trained relations as a guide that the player has to refer to when playing the game, e.g. correctly deriving a relation might open a box, or a contextual cue meaning before could prompt a given action.
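Here’s a very rough sketch of the sort of thing I mean by a rule gating an in-game action (again, the names and structure are just mine, for illustration):

```python
# Trained meanings of the contextual cues (illustrative).
CUE_MEANING = {"####": "same", "&&&&": "different"}

# Trained plus derived relations for the YIG / FDT / red-square network.
RELATIONS = {
    ("YIG", "red square"): "same",
    ("red square", "YIG"): "same",
    ("YIG", "FDT"): "different",
    ("FDT", "YIG"): "different",
    ("FDT", "red square"): "different",
    ("red square", "FDT"): "different",
}

def box_opens(cue, left, right):
    """The box opens only if the relation the cue signals actually holds
    between the two stimuli shown on it."""
    return RELATIONS.get((left, right)) == CUE_MEANING[cue]

print(box_opens("####", "YIG", "red square"))  # True - "same" holds between them
print(box_opens("&&&&", "YIG", "red square"))  # False - they are not different
```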

In terms of game complexity, we’re looking to keep things simple, and hope to start off with a simple 2D game that reflects the training and testing procedures, and if that’s successful, ramp up the complexity so we can do experiments on rule following. So I suppose what I’m asking is: would Panda be a good engine to use to develop something like this, and if not, what engines might be?
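In case it makes the requirements clearer, this is roughly what I imagine a single trial screen looking like. It’s an untested sketch based on my reading of the Panda3D DirectGUI examples, so the details may well be off:

```python
from direct.showbase.ShowBase import ShowBase
from direct.gui.OnscreenText import OnscreenText
from direct.gui.DirectGui import DirectButton

class TrialScreen(ShowBase):
    def __init__(self):
        ShowBase.__init__(self)
        # Sample stimulus and contextual cue shown at the top of the screen.
        OnscreenText(text="YIG   ####", pos=(0, 0.5), scale=0.12)
        # Two comparison stimuli the participant chooses between.
        DirectButton(text="FDT", scale=0.1, pos=(-0.5, 0, -0.3),
                     command=self.choose, extraArgs=["FDT"])
        DirectButton(text="red square", scale=0.1, pos=(0.5, 0, -0.3),
                     command=self.choose, extraArgs=["red square"])

    def choose(self, stimulus):
        print("chose:", stimulus)  # record / score the response here

TrialScreen().run()
```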