AI Build 4
Between my last build and this one, I realized I had been approaching the AI evaluation from the wrong direction. Ultimately, the "why" is the same: decide which choice the player will make. The "how" was a bit off, however. Up until this point, I had treated the evaluation as a series of reactions the AI makes based on the actions the player takes. But since the player can ultimately make any choice they want, that method is very unreliable and easy to beat. I had never stopped to consider evaluating the reactions the player has to the actions the AI makes.
This week, as I implemented the AI tell, I completely rewrote the evaluation stage. Now, instead of altering the probabilities of the player's next action based only on the player's tell and previous choices, the AI alters those probabilities based on its own tell. Basically, the AI isn't just guessing what action the player will take; it's guessing what action the player will take in reaction to seeing a certain "tell". The probability attached to each action (rock/paper/scissors) is no longer just the probability of that action being chosen, but rather the probability of it being chosen when a given tell is shown.
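To make the idea concrete, here is a minimal Python sketch of what "probability of being chosen when a tell is shown" could look like. This is my own illustration, not the project's actual code: the `TellConditionedAI` class, the `learn_rate` parameter, and the assumption that there is one tell per action are all hypothetical.

```python
import random

ACTIONS = ("rock", "paper", "scissors")
COUNTER = {"rock": "paper", "paper": "scissors", "scissors": "rock"}

class TellConditionedAI:
    """Keeps one P(player action | AI tell) distribution per tell,
    rather than a single unconditional distribution."""

    def __init__(self, tells=ACTIONS, learn_rate=0.1):
        self.learn_rate = learn_rate  # hypothetical tuning parameter
        # Start each per-tell distribution uniform.
        self.probs = {t: {a: 1.0 / len(ACTIONS) for a in ACTIONS} for t in tells}

    def choose(self, tell):
        # Predict the player's reaction to this tell, then play its counter.
        dist = self.probs[tell]
        predicted = random.choices(ACTIONS, weights=[dist[a] for a in ACTIONS])[0]
        return COUNTER[predicted]

    def observe(self, tell, player_action):
        # Shift only this tell's distribution toward what the player played.
        dist = self.probs[tell]
        for a in ACTIONS:
            target = 1.0 if a == player_action else 0.0
            dist[a] += self.learn_rate * (target - dist[a])
```

The key point is that `observe` updates the distribution keyed by the tell that was showing, so a player who always throws paper after seeing the "rock" tell only skews that one distribution.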
This evaluation replaced the player tell evaluation, since the two evaluations were acting independently of each other. I'm still curious, however, about how players react to having a tell, so I added a toggle to turn that functionality on and off, along with a new evaluation that runs when the player tell is enabled. This evaluation considers the player and AI tells together: the probability for each action is no longer the probability that the action will occur when a certain tell is shown, but rather the probability that it will occur when a certain player tell is up at the same time as a certain AI tell. There are nine unique player/AI tell match-ups, and each one alters the action probabilities differently.
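The nine match-ups can be sketched as a table keyed by (player tell, AI tell) pairs. Again, this is an illustrative sketch, not the project's code; `joint_probs`, `adjust`, and the one-tell-per-action assumption are mine.

```python
ACTIONS = ("rock", "paper", "scissors")
TELLS = ACTIONS  # assumption: one distinct tell per action

# Nine (player_tell, ai_tell) match-ups, each with its own
# distribution over the player's possible actions.
joint_probs = {
    (pt, at): {a: 1.0 / len(ACTIONS) for a in ACTIONS}
    for pt in TELLS
    for at in TELLS
}

def adjust(player_tell, ai_tell, player_action, learn_rate=0.1):
    """Nudge only the distribution for this specific tell pairing."""
    dist = joint_probs[(player_tell, ai_tell)]
    for a in ACTIONS:
        target = 1.0 if a == player_action else 0.0
        dist[a] += learn_rate * (target - dist[a])
```

Because each pairing gets its own distribution, the table captures interactions the two independent evaluations could not, such as a player who bluffs only when their own tell matches the AI's.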
I am happy with this project, but there is a lot I think could have been better, and it boils down to implementation. I approached this as a developer rather than as a player. In other words, some of my implementation got really convoluted and unnecessarily complex because I knew how the AI worked, so I kept trying to program it to account for a player who knew how to beat it. A good example of this is the player tell evaluation code from my last post. If I were to continue with this project in the future, I'd also do away with the probabilities approach. Assigning probabilities is a "correct" way of doing it, but I ended up wishing my AI made a definitive choice about what to play instead of still having a chance of picking any of the options. Even though my implementation here was a reasonable way to do it, I'd like to eliminate the randomness from the project. I think that would result in a more polished and robust system that feels much more "life-like".
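The difference between the two approaches is small in code. A sketch of my own (the distribution numbers are made up): sampling from the distribution leaves residual randomness, while taking the most likely action is deterministic.

```python
COUNTER = {"rock": "paper", "paper": "scissors", "scissors": "rock"}

# A distribution the evaluation might have produced (made-up numbers).
dist = {"rock": 0.2, "paper": 0.5, "scissors": 0.3}

# Sampling: even the 0.2 option can still come up.
#   random.choices(list(dist), weights=dist.values())
# Deterministic: always commit to countering the most likely action.
predicted = max(dist, key=dist.get)
ai_move = COUNTER[predicted]
```

With the deterministic pick, the AI here would always read the player as throwing paper and answer with scissors.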
This is the end of this project, for now. It was a cool study into AI methods and a fresh break from the AI work I usually do -- movement AI, pathfinding, behavior trees/state machines. Hopefully I can return to this project in the future and try to apply it in a more fleshed-out context, such as a fighting-game bot. For now, the project files are on my GitHub and can be downloaded here.
Below is my final documentation on the project.