Weeks Four and Five Update

I really wanted to take this course AFTER Nature of Code (Dan Shiffman’s signature course, which covers the simulation of physical systems and moves from there to intentional agents and neural networks) as an overview and primer. Unfortunately, fate and the class-selection algorithm conspired against me – ironic, since my main reason for studying at ITP is to study algorithmic decision-making.

So now I am embarked on the process of trying to sweep up as much general knowledge about algorithms as I can, while also building a specialized agent-based model that is far beyond my current programming capabilities.

Algorithms in General

The gateway to algorithmic wizardry was recommended to me by Heather — M. Tim Jones’ Artificial Intelligence: A Systems Approach. This week I skimmed the book to get the lay of the land before diving in for an in-depth review. I also worked through the first chapter, on Uninformed Search. As I learn more about these, I’ll come back and fill in some details about them.

Here are the algorithms it covers:

  • Uninformed Search
    • Depth-First Search: Begin at the starting node of a decision tree and follow one branch as deep as it goes. Then back up and follow the next branch, continuing until a solution node is found. Not optimal (does not necessarily find the shortest path to the goal node) and only complete if there are no loops in the graph; otherwise it can get caught in a cycle and never find a solution. Potentially very slow if the graph is very deep.
    • Depth-Limited Search: As above, but the search is cut off at an arbitrary maximum depth. Not complete – the goal node may be deeper than the maximum depth – and not optimal – it returns the first successful path it finds even if there is a shorter one.
    • Iterative Deepening Search: Performs a depth-limited search of depth 1, then depth 2, etc. until it reaches a solution. Complete and optimal in unweighted graphs. It is the preferred algorithm when the depth of the solution is not known (the second sketch after this list shows the structure). Same time complexity as DFS and DLS – it can potentially search every node in the graph before finding the solution.
    • Breadth-First Search: Search the graph from the root node in order of distance from the root. Different from IDS in that each node that is searched must be stored – so it has a higher space complexity (higher memory requirements). They also differ in how they handle the queue of discovered but unsearched nodes – IDS uses a LIFO (Last-In-First-Out) implementation and BFS uses a FIFO (First-In-First-Out) implementation. I’m not sure yet what the significance is in terms of efficiency, but the first sketch after this list at least shows how small the difference is in code. BFS is complete and optimal. Preferred if the solution is known to be close to the starting node.
    • Bidirectional Search: Performs two breadth-first searches simultaneously, one from the starting node and one from the goal node. When both searches find a common node, the path from start to goal can be reconstructed. Complete and optimal, but only possible if the goal node is known in advance. The advantage is that each search only needs to go half the depth of the graph – and since the number of nodes grows exponentially with depth, this saves far more than half the time and space of an ordinary BFS.
    • Uniform-Cost Search: A method for finding the path with the lowest cost on a weighted graph. From the starting node, evaluate the path costs of each connected edge and put them in a priority queue, ordered from least cost to highest. Follow the path at the front of the queue and again evaluate the costs from that node, adding to the accumulated cost of the current path. Then reorder the priority queue, pick the lowest-cost path, and repeat. The solution is found when the path at the front of the priority queue ends at a goal node (the third sketch after this list uses exactly this structure). Optimal and complete if the path costs are non-negative. Time and space complexity are the same as BFS.
  • Informed Search
    • Best-First Search
    • A* Search
    • Hill-Climbing Search
    • Simulated Annealing
    • Tabu Search
  • Constraint-Satisfaction Algorithms
    • Generate and Test
    • Backtracking
    • Forward Checking and Look Ahead
    • Min-Conflicts Search
  • AI and Games
    • Minimax Algorithm
      • Minimax with Alpha-Beta Pruning
  • Knowledge Representation
  • Machine Learning
    • Decision Trees
    • Nearest Neighbor Learning
  • Evolutionary Computation
    • Genetic Algorithms
    • Genetic Programming
    • Evolutionary Strategies
    • Differential Evolution
    • Particle Swarm Optimization
  • Neural Networks
    • Supervised Neural Networks
      • Least-Mean-Squares
      • Back Propagation
      • Probabilistic Neural Networks
    • Unsupervised Neural Networks
      • Hebbian learning
      • Simple Competitive Learning
      • k-Means Clustering
      • Adaptive Resonance Theory
      • Hopfield auto-associative model
  • Intelligent Agents
  • Biologically Inspired and Hybrid Models
    • Artificial Immune Systems
    • Simulated Evolution
    • Lindenmayer Systems
    • Fuzzy Logic
    • Genetically Evolved Neural Networks
    • Ant Colony Optimization
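
Since the uninformed search family came up first, I tried to make the DFS/BFS distinction concrete for myself. Here is a minimal sketch in plain Java (Processing is Java underneath, so the same logic ports straight into a sketch); the graph and node names are invented for illustration. The frontier discipline really is the only difference between the two searches:

```java
import java.util.*;

// Uninformed search over an unweighted graph stored as an adjacency map.
// The frontier discipline is the only thing that changes: LIFO (stack)
// gives depth-first search, FIFO (queue) gives breadth-first search.
public class FrontierDemo {

    static List<String> search(Map<String, List<String>> graph,
                               String start, String goal, boolean lifo) {
        Deque<String> frontier = new ArrayDeque<>();
        Map<String, String> parent = new HashMap<>(); // doubles as the visited set
        frontier.add(start);
        parent.put(start, null);

        while (!frontier.isEmpty()) {
            // LIFO: take the most recently discovered node (dive deeper).
            // FIFO: take the oldest discovered node (finish each level first).
            String node = lifo ? frontier.removeLast() : frontier.removeFirst();
            if (node.equals(goal)) { // reconstruct the path by walking parents
                List<String> path = new ArrayList<>();
                for (String n = goal; n != null; n = parent.get(n)) path.add(0, n);
                return path;
            }
            for (String next : graph.getOrDefault(node, List.of())) {
                if (!parent.containsKey(next)) { // the visited check breaks cycles
                    parent.put(next, node);
                    frontier.addLast(next);
                }
            }
        }
        return null; // goal not reachable
    }

    public static void main(String[] args) {
        Map<String, List<String>> g = Map.of(
                "A", List.of("B", "C"),
                "B", List.of("D"),
                "C", List.of("D", "E"),
                "D", List.of("F"),
                "E", List.of("F"));
        System.out.println("DFS path: " + search(g, "A", "F", true));  // a deep path
        System.out.println("BFS path: " + search(g, "A", "F", false)); // a shortest path
    }
}
```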
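
Iterative deepening is then just a loop around a depth-limited DFS. A sketch of the structure, again with an invented graph:

```java
import java.util.*;

// Iterative deepening: run depth-limited DFS with limit 0, 1, 2, ...
// until the goal turns up. Memory stays proportional to the depth (like
// DFS), while completeness on unweighted graphs matches BFS.
public class IterativeDeepening {

    static boolean depthLimited(Map<String, List<String>> graph,
                                String node, String goal, int limit) {
        if (node.equals(goal)) return true;
        if (limit == 0) return false; // hit the depth bound, back up
        for (String next : graph.getOrDefault(node, List.of()))
            if (depthLimited(graph, next, goal, limit - 1)) return true;
        return false;
    }

    static int iterativeDeepening(Map<String, List<String>> graph,
                                  String start, String goal, int maxDepth) {
        for (int limit = 0; limit <= maxDepth; limit++)  // shallow levels get
            if (depthLimited(graph, start, goal, limit)) // re-searched each round,
                return limit;                            // but they are cheap
        return -1; // not found within maxDepth
    }

    public static void main(String[] args) {
        Map<String, List<String>> g = Map.of(
                "A", List.of("B", "C"),
                "B", List.of("D"),
                "C", List.of("E"),
                "D", List.of("F"));
        System.out.println("Goal found at depth " +
                iterativeDeepening(g, "A", "F", 10)); // prints 3
    }
}
```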
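
And uniform-cost search swaps the plain queue for a priority queue ordered by accumulated path cost. A sketch with invented edge weights:

```java
import java.util.*;

// Uniform-cost search on a weighted graph: the frontier is a priority
// queue keyed on accumulated path cost, so the cheapest known path is
// always extended first. Optimal as long as edge costs are non-negative.
public class UniformCost {

    static class Edge {
        String to; int cost;
        Edge(String to, int cost) { this.to = to; this.cost = cost; }
    }

    static class Path {
        String node; int cost;
        Path(String node, int cost) { this.node = node; this.cost = cost; }
    }

    static int cheapestCost(Map<String, List<Edge>> graph, String start, String goal) {
        PriorityQueue<Path> frontier =
                new PriorityQueue<>(Comparator.comparingInt((Path p) -> p.cost));
        Map<String, Integer> best = new HashMap<>(); // cheapest cost seen per node
        frontier.add(new Path(start, 0));

        while (!frontier.isEmpty()) {
            Path p = frontier.poll(); // always the lowest accumulated cost
            if (p.node.equals(goal)) return p.cost; // first goal pop is optimal
            if (best.containsKey(p.node) && best.get(p.node) <= p.cost) continue;
            best.put(p.node, p.cost);
            for (Edge e : graph.getOrDefault(p.node, List.of()))
                frontier.add(new Path(e.to, p.cost + e.cost));
        }
        return -1; // goal unreachable
    }

    public static void main(String[] args) {
        Map<String, List<Edge>> g = Map.of(
                "A", List.of(new Edge("B", 1), new Edge("C", 4)),
                "B", List.of(new Edge("C", 1), new Edge("D", 5)),
                "C", List.of(new Edge("D", 1)));
        System.out.println("Cheapest A->D cost: " + cheapestCost(g, "A", "D")); // 3
    }
}
```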

Some of these are familiar to me in a vague way from various readings I’ve done over the years, but I’m excited to review them in direct relation to one another, with many gaps filled in.

Algorithms in a Social Science Context

Peter Darch recommended The Stag Hunt and the Evolution of Social Structure by Brian Skyrms. The book proposes a variant on the Prisoner’s Dilemma that encourages cooperative behavior in certain circumstances. In the metaphor, individual hunters choose between hunting alone (to catch a rabbit, which they keep for themselves) and hunting in a group (to catch a stag, which they must share with the other hunters). Increasing the number of hunters increases the chances of catching a stag, but also increases the number of hunters it must be shared among. Adjusting the parameters (the chance of catching each animal, the caloric value of each animal or portion) produces circumstances in which it is rational for individual hunters to team up. Skyrms then demonstrates some agent-based models that reflect this dynamic. I am still thinking about what parts of this approach I can use in my work.
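To get a feel for the trade-off, I roughed out the arithmetic. Every number below is invented for illustration (these are not Skyrms’ parameters): a 60% chance at a 3000-calorie hare, a 25000-calorie stag split evenly, and a catch probability that rises quadratically with group size because stag hunting takes teamwork.

```java
// A toy expected-payoff calculation for the stag hunt, with made-up
// parameters. The quadratic catch probability (capped at 95%) is an
// assumed functional form, not taken from the book.
public class StagHunt {

    static double expectedStagShare(int hunters) {
        double pCatch = Math.min(0.95, 0.02 * hunters * hunters);
        double stagCalories = 25000;
        return pCatch * stagCalories / hunters; // even split among the group
    }

    public static void main(String[] args) {
        double hareValue = 0.6 * 3000; // solo hunting: keep the whole hare
        for (int n = 1; n <= 8; n++) {
            double stagShare = expectedStagShare(n);
            System.out.printf("%d hunters: stag share %.0f vs hare %.0f -> hunt %s%n",
                    n, stagShare, hareValue, stagShare > hareValue ? "stag" : "hare");
        }
    }
}
```

With these made-up numbers the stag becomes the rational choice at four hunters, and the per-hunter share starts shrinking again once the catch probability hits its cap, which is exactly the kind of knob-turning the book’s models do more rigorously.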

Programming Challenges

I am well behind on the programming. My first task is to expand my previous ABM so that I can use it as a statistical benchmark. I struggled for quite a while to figure out how to make Processing emulate a web form (I even considered importing the parameters into Processing from an actual web form). Dan Shiffman turned me on to a Processing library called controlP5 that provides this functionality.
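For reference, the basic pattern looks something like the sketch below (the parameter names are placeholders, not my model’s actual parameters). controlP5 binds each controller to the sketch variable of the same name, so moving a slider updates the field directly, and a button triggers the sketch method that shares its name:

```java
// A minimal Processing sketch using controlP5 as a stand-in web form.
import controlP5.*;

ControlP5 cp5;

// controlP5 binds each control to the sketch variable with the same name.
int populationSize = 100;
float mutationRate = 0.05;

void setup() {
  size(400, 300);
  cp5 = new ControlP5(this);
  cp5.addSlider("populationSize").setPosition(20, 20).setRange(10, 1000);
  cp5.addSlider("mutationRate").setPosition(20, 60).setRange(0.0, 1.0);
  cp5.addButton("runModel").setPosition(20, 100);
}

void draw() {
  background(0);
}

// Clicking the button invokes the method with the matching name.
void runModel() {
  println("running with pop=" + populationSize + ", mutation=" + mutationRate);
}
```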

My goal is to complete the statistical version by next week and to have built a basic spatial version that looks more like a classic ABM, with agents interacting in space on a grid.

Conceptual Challenges

I am still caught up in the question of how to define the fitness function. The version I have now (minimizing the Bayesian Regret of the population as a whole, measured across abstract “issues”) is interesting as a way of directly comparing different models. But the measure plays no functional role inside the model itself. The law is partly about constraining the range of citizen action – so I am still looking for an elegant way to use a single character string to represent the scope of agent behaviors, and to allow the ‘law’ to represent constraints on those behaviors.
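One possible encoding, sketched with invented behavior names: give each position in the string a fixed meaning (one behavior per position), and let the law be a mask over the same positions, where a marked character forbids that behavior. An agent’s effective repertoire is then the position-by-position intersection:

```java
// A sketch of a character-string "law" masking a character-string
// behavior repertoire. The positions (harvest, trade, steal, kill)
// and the encoding are hypothetical.
public class LawMask {

    // Position: 0 = harvest, 1 = trade, 2 = steal, 3 = kill
    static String applyLaw(String behaviors, String law) {
        StringBuilder allowed = new StringBuilder();
        for (int i = 0; i < behaviors.length(); i++) {
            // 'X' in the law overrides the agent's inclination at that position
            allowed.append(law.charAt(i) == 'X' ? '-' : behaviors.charAt(i));
        }
        return allowed.toString();
    }

    public static void main(String[] args) {
        String agent = "1111"; // inclined toward all four behaviors
        String law   = "..XX"; // the law forbids stealing and killing
        System.out.println(applyLaw(agent, law)); // prints "11--"
    }
}
```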

This is converging somewhere in the neighborhood of modeling the Tragedy of the Commons…possibly. There could be a renewable resource (or a few), agents which harvest that resource, trade, possibly steal or kill, and have a happiness measure based on some combination of wealth and whether they have been abused by other agents. Hmm.

One problem I keep bumping up against is that ABMs are designed to model emergent behaviors produced by agents acting locally. But I’m trying to explore some dynamics associated with political activity, which (for some people at least) means thinking about the larger society in which one operates – exactly what agents aren’t supposed to do. But could they? Does it just deepen the model to provide some capacity for the agents to think globally (and is it technically possible)? I’m still working out the limits of this approach, not to mention exactly what I’m trying to do.

 
