Artificial intelligence implementation has so far been a predominantly programmer-centric task, often producing complex systems that fail to portray believable human intelligence. This work investigates whether case-based reasoning (CBR) combined with human-expert demonstration allows artificial intelligence to be developed by observing actual human behavior. Case-based reasoning is already used in many systems outside the game industry, including customer support, molecular biology, and the NASA space shuttle program, and it has made its way into the game industry in projects such as real-time strategy games. For this artifact, the paper uses AI Sandbox, a capture-the-flag artificial intelligence engine, and jColibri, an open-source case-based reasoning framework, as a platform to test whether human-expert-based AI can outperform traditional finite-state-machine AI while requiring less development time and effort.