Game Designer & Programmer


AI Build 3

Since my last AI post, the project design has shifted a few times. Work on this project has been frustrating because every time I make progress on the build, the design shifts and I essentially have to restart. Luckily, the work I redo is pretty fast and painless. Anyway, the design/topic has gone in some interesting directions.

The project initially started as a study into whether I could create an AI that "learned" from the player and got better over time. I tried to apply this in its most basic form to Rock/Paper/Scissors by measuring how many times the player and the AI each won. I assigned values to the actions of picking rock/paper/scissors and altered them according to the actions the player took. The AI evaluated these values and made its decision based on the highest value.
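The post doesn't say what language the project is written in, so here is a minimal Python sketch of that first value-based approach; the class and method names are my own, not from the project. Each player pick bumps a value, and the AI counters whichever move has the highest value.

```python
MOVES = ["rock", "paper", "scissors"]
# maps each move to the move that beats it
BEATS = {"rock": "paper", "paper": "scissors", "scissors": "rock"}

class ValueAI:
    def __init__(self):
        # one value per player move, bumped each time the player picks it
        self.values = {m: 0 for m in MOVES}

    def record(self, player_move):
        self.values[player_move] += 1

    def choose(self):
        # predict the player's highest-valued move and play its counter
        predicted = max(MOVES, key=lambda m: self.values[m])
        return BEATS[predicted]

ai = ValueAI()
ai.record("rock")
ai.record("rock")
ai.record("paper")
print(ai.choose())  # prints "paper", the counter to the player's favorite "rock"
```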

I then tried to expand on this approach by converting the values into probabilities and giving the AI the ability to pick its move based on all of the probabilities, not just whichever had the highest value.
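A hedged sketch of that probabilistic version, again with names of my own choosing: instead of always countering the single most frequent move, the AI samples a prediction from the player's whole pick distribution and counters the sample.

```python
import random

MOVES = ["rock", "paper", "scissors"]
BEATS = {"rock": "paper", "paper": "scissors", "scissors": "rock"}

def choose_weighted(counts, rng=random):
    total = sum(counts.values())
    if total == 0:
        return rng.choice(MOVES)  # no history yet: pick uniformly
    # each move's probability is its share of the player's past picks
    weights = [counts[m] / total for m in MOVES]
    predicted = rng.choices(MOVES, weights=weights, k=1)[0]
    return BEATS[predicted]  # counter the sampled prediction

history = {"rock": 6, "paper": 3, "scissors": 1}
print(choose_weighted(history))
```

Sampling rather than taking the argmax makes the AI less predictable, which speaks to the post's later complaint that a player who recognizes how the AI works can beat it trivially.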

The problem with this application was a fundamental flaw in how rock/paper/scissors works. Even if the AI recognized that the player favored one option over another, or recognized patterns in the choices, rock/paper/scissors still has, at its core, an equal chance of any option being picked. The result is that there's no consistent way to guess the player's next input. The player-to-AI win rate was usually close to 1:1. Beyond that, the AI was abysmally easy to beat once the player recognized how it worked.

After that iteration, the project design took some weird turns that kept me from getting a build out for a while. On the premise that I wouldn't make any more progress with a rock/paper/scissors application, I attempted to "gamify" the project a bit more in an effort to introduce more player agency. The idea was (and still is) that player behaviors can be recorded more accurately if players have more control over what they do. In a random environment like r/p/s, where who wins is largely up to chance, individual play behaviors don't form and can't be evaluated.

The first attempt at adding more agency was to put the AI's probability evaluation into the context of a state machine. All of a sudden my project went from "develop an AI that learns from the player" to "develop a fighting game AI." I developed a state machine with four states: forward, backward, crouching, and neutral. Each state had a set of actions the player/AI could take that won, lost, or tied against other actions in the machine.

After the development of the state machine, it was universally agreed that the project had shifted too far out of scope and too far away from the initial idea. The next order of business was to lower the complexity. The design had to shift again to something with the simplicity of my initial project but enough player agency that it wasn't just a random guessing game.
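For illustration, the state machine could be sketched like this. The post only names the four states, so the per-state action lists and the win/lose/tie table below are placeholders of my own invention, not the project's actual data.

```python
# The four states named in the post; action names are hypothetical.
STATES = ["forward", "backward", "crouching", "neutral"]

# Placeholder: which actions are available in each state.
ACTIONS = {
    "forward":   ["advance_strike", "grab"],
    "backward":  ["retreat_poke", "block"],
    "crouching": ["low_sweep", "low_block"],
    "neutral":   ["jab", "wait"],
}

# Placeholder win/lose table, keyed by (my_action, their_action),
# in the spirit of the post's "won, lost, or tied" matchups.
OUTCOMES = {
    ("advance_strike", "retreat_poke"): "lose",
    ("grab", "block"): "win",
    ("low_sweep", "jab"): "win",
}

def resolve(my_action, their_action):
    # identical or unlisted matchups default to a tie
    if my_action == their_action:
        return "tie"
    return OUTCOMES.get((my_action, their_action), "tie")
```

The learning layer would then track per-state probabilities instead of one flat table, which is where the complexity ballooned past the project's scope.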

Another idea was to add time into the mix; maybe the AI needed to work in a real-time environment to get the results I wanted. Up until now, the project did all of its evaluations at a locked interval: the user would make a selection and then lock in their choice. The thought was that having the player play against the AI in real time would add the agency I needed, but this idea was scrapped for the same reason the last one was.

The current idea is an attempt to go just one level higher than base rock/paper/scissors, and the project objective has now shifted from "make a learning AI" to "make an AI that is difficult to beat." The current iteration includes a "tell." To give the player more agency and more to consider, so that they actually have to make an informed decision rather than a random choice, the player and the AI each select a choice that the opponent will be able to "see." This week, the player "tell" was implemented, and the AI changes its evaluation based on what the player "shows" it.

The "tell" alters the choice probabilities before the AI makes its evaluation.
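A sketch of what that biasing step might look like, assuming the AI keeps a probability per move; the function name and the 0.25 shift are arbitrary placeholders, much like the hand-tuned definitions the next paragraph describes wanting to move away from.

```python
MOVES = ["rock", "paper", "scissors"]
BEATS = {"rock": "paper", "paper": "scissors", "scissors": "rock"}

def apply_tell(probs, shown_move, shift=0.25):
    """Bias the AI's belief toward the move the player 'shows'.

    The shift amount is an arbitrary, hand-tuned constant.
    """
    biased = dict(probs)
    biased[shown_move] += shift
    total = sum(biased.values())
    return {m: p / total for m, p in biased.items()}  # renormalize to sum to 1

uniform = {m: 1 / 3 for m in MOVES}
biased = apply_tell(uniform, "rock")
# the AI then evaluates against the biased distribution
predicted = max(MOVES, key=lambda m: biased[m])
print(BEATS[predicted])  # prints "paper", countering the shown "rock"
```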

The problem with this is that the probabilities shift according to my arbitrary definitions; I need to figure out how to give all of that control to the AI. Moving forward, I plan to implement the AI's tell -- a UI element that updates at runtime and shows what the AI wants to show you. The player will then have to take that information and use it to decide what choice to make.

Build below.