I really wanted to take this course AFTER Nature of Code (Dan Shiffman’s signature course which covers the simulation of physical systems and moves from there to intentional agents and neural networks) as an overview and primer. Unfortunately fate and the class-selection algorithm conspired against me – ironic, since my main reason for studying at ITP is to study algorithmic decision-making.
So now I have embarked on sweeping up as much general knowledge about algorithms as I can, while also building a specialized agent-based model that is far beyond my current programming capabilities.
The gateway to algorithmic wizardry was recommended to me by Heather — M. Tim Jones’ Artificial Intelligence: A Systems Approach. This week I skimmed the book to get the lay of the land before diving in for an in-depth review. I also worked through the first chapter, on Uninformed Search. As I learn more about these algorithms, I’ll come back and fill in some details.
Here are the algorithms it covers:
Some of these are familiar to me in a vague way from various readings I’ve done over the years, but I’m excited to review them in direct relation to one another, with many gaps filled in.
Algorithms in a Social Science Context
Peter Darch recommended The Stag Hunt and the Evolution of Social Structure by Brian Skyrms. The book proposes a variant on the Prisoner’s Dilemma which has the effect of encouraging cooperative behavior in certain circumstances. (The metaphor concerns the circumstances under which individual hunters will choose to hunt alone, to catch a rabbit they get to keep for themselves, or to hunt in a group, to catch a stag they must share with the other hunters. Increasing the number of hunters increases the chances of catching a stag, but also increases the number of hunters it must be shared among.) Adjusting the parameters (the chance of catching each animal, the caloric value of each animal or portion) produces circumstances in which it is rational for individual hunters to team up. Skyrms then demonstrates some agent-based models that reflect this dynamic. I am still thinking about what parts of this approach I can use in my work.
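To make the trade-off concrete for myself, here is a minimal Processing sketch of the payoff logic as I understand it. All of the parameter names and numbers (pHare, stagCalories, and so on) are placeholders I made up for experimentation, not Skyrms’ actual model.

```java
// Toy stag-hunt payoffs: hunt alone for a hare, or join a group for a stag.
// Every value here is an arbitrary placeholder.
float hareCalories = 3;       // a lone hunter keeps the whole hare
float stagCalories = 20;      // a stag must be split among the group
float pHare = 0.9;            // chance a lone hunter catches a hare
float pStagBase = 0.3;        // chance of a stag with a single hunter
float pStagPerHunter = 0.15;  // each additional hunter improves the odds

// Expected calories for a hunter who hunts alone.
float expectedHare() {
  return pHare * hareCalories;
}

// Expected calories for one hunter in a stag party of a given size.
float expectedStagShare(int groupSize) {
  float pStag = pStagBase + pStagPerHunter * (groupSize - 1);
  if (pStag > 1) pStag = 1;
  return pStag * stagCalories / groupSize;
}

void setup() {
  // Cooperation becomes rational once a shared stag beats a private hare.
  for (int n = 1; n <= 6; n++) {
    println("group of " + n + ": stag share = " + expectedStagShare(n)
            + " vs. hare = " + expectedHare());
  }
}
```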
Programming Challenges
I am well behind on the programming. My first task is to expand my previous ABM so that I can use it as a statistical benchmark. I struggled for quite a while to figure out how to make Processing emulate a web form (I even considered importing the parameters into Processing from an actual web form). Dan Shiffman turned me on to a Processing library called controlP5 that provides this functionality.
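For anyone hunting for the same thing, the basic pattern with a recent version of controlP5 looks roughly like this. The parameter names (harvestRate, numAgents) are just stand-ins for whatever my model actually exposes; controlP5 binds each slider to the sketch variable with the same name.

```java
import controlP5.*;

ControlP5 cp5;

// Model parameters; controlP5 links each slider to the field with the same name.
float harvestRate = 0.5;
int numAgents = 100;

void setup() {
  size(400, 300);
  cp5 = new ControlP5(this);
  cp5.addSlider("harvestRate")
     .setPosition(20, 20)
     .setSize(200, 20)
     .setRange(0, 1);
  cp5.addSlider("numAgents")
     .setPosition(20, 60)
     .setSize(200, 20)
     .setRange(10, 500);
}

void draw() {
  background(0);
  // Dragging a slider updates harvestRate / numAgents directly,
  // so the model can read them each frame like a web form's current values.
}
```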
My goal is to complete the statistical version by next week and to have built a basic spatial version that looks more like a classic ABM, with agents interacting in space on a grid.
Conceptual Challenges
I am still caught up in the question of how to define the fitness function. The version I have now (minimizing the Bayesian Regret of the population as a whole, measured across abstract “issues”) is interesting as a way of directly comparing different models. But there is no functionality behind this measure. The law is partly about constraining the range of citizen action – so I am still looking for an elegant way to use a single character string to represent the scope of agent behaviors, and to allow the ‘law’ to represent constraints on those behaviors.
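One direction I’m playing with, purely as a sketch: treat the set of possible behaviors as a fixed-order list, and let the ‘law’ be a string of the same length whose characters mark each behavior as permitted or forbidden. The behavior names here are hypothetical.

```java
// Hypothetical behavior list; position in the array is the behavior's index.
String[] behaviors = { "harvest", "trade", "steal", "kill" };

// The 'law' as a character string: '1' = permitted, '0' = forbidden.
// Here stealing and killing are outlawed.
String law = "1100";

boolean isLegal(int behaviorIndex) {
  return law.charAt(behaviorIndex) == '1';
}

void setup() {
  for (int i = 0; i < behaviors.length; i++) {
    println(behaviors[i] + (isLegal(i) ? " is permitted" : " is outlawed"));
  }
}
```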
This is converging somewhere in the neighborhood of modeling the Tragedy of the Commons…possibly. There could be a renewable resource (or a few), agents which harvest that resource, trade, possibly steal or kill, and have a happiness measure based on some combination of wealth and whether they have been abused by other agents. Hmm.
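If it does end up as a commons model, the agent state might start out as simply as this; the fields and the happiness weighting are placeholders, not design decisions.

```java
// A very rough placeholder for an agent in a commons-style model.
class Agent {
  float wealth = 0;     // accumulated from harvesting and trading
  int timesAbused = 0;  // incremented when another agent steals from or attacks this one

  void harvest(float resourceAvailable) {
    // Take at most one unit of the renewable resource per tick.
    wealth += (resourceAvailable < 1) ? resourceAvailable : 1;
  }

  float happiness() {
    // Arbitrary weighting: wealth helps, being abused hurts.
    return wealth - 2 * timesAbused;
  }
}
```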
One problem I keep bumping up against is that ABMs are designed to model behaviors that emerge from agents acting locally. But I’m trying to explore some dynamics associated with political activity, which (for some people at least) means thinking about the larger society in which one operates – exactly what agents aren’t supposed to do. But could they? Does it deepen the model to give the agents some capacity to think globally (and is it technically possible…)? I’m still working out the limits of this approach, not to mention exactly what I’m trying to do.