Basics
Take each chat line that comes across and break it into an array of words, with a property for who said it and what punctuation ended it. Now we know who said what, and how.
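A minimal sketch of that parsing step (the ChatLine structure and its field names are my own invention, not from any existing library):

```python
from dataclasses import dataclass

@dataclass
class ChatLine:
    speaker: str
    words: list
    punctuation: str  # ".", "!", "?", or "" if none

def parse_line(speaker, text):
    # grab trailing punctuation, then split the rest into words
    text = text.strip()
    punctuation = text[-1] if text and text[-1] in ".!?" else ""
    words = text.rstrip(".!?").split()
    return ChatLine(speaker, words, punctuation)
```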
Each statement is analyzed, and the bot either "knows" what the person is talking about (has a response that is appropriate to the statement) or has no clue and has to wing it with some kind of wacky statement or conversation prompter thinger.
How do I know if someone's talking to me?
They use my name or a nickname.
If they're specifically addressing someone else, then I don't need to pipe in. So this means that I have to know everyone's name and preferably all of their possible nicknames.
It's not always important if someone is talking to me or not. If someone makes a general statement to the room (or with no name qualifier), I am eligible to respond.
Please note - If I always respond, I'll get annoying. Randomize the chance to respond to general statements, with a tweakable percentage.
Random responses are really important in creating the illusion of intelligence. My affirmative response can't always be "Yes." every single time. Create tables of possible responses by scenario, and randomly select one. Affirmatives, negations, inquisitions, exclamations, etc...
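One way to sketch the per-scenario tables and the tweakable percentage (the table contents and the 30% default are placeholders):

```python
import random

# Hypothetical response tables keyed by scenario.
RESPONSES = {
    "affirmative": ["Yes.", "Yep!", "Sure thing.", "Absolutely."],
    "negative":    ["No.", "Nope.", "Not a chance."],
    "exclamation": ["Wow!", "No way!", "Seriously?"],
}
GENERAL_REPLY_CHANCE = 30  # percent; tweak so JoeBot isn't annoying

def maybe_reply(scenario, chance=GENERAL_REPLY_CHANCE):
    """Return a random canned response, or None if the % check fails."""
    if random.randint(1, 100) <= chance:
        return random.choice(RESPONSES[scenario])
    return None
```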
Predefined responses - the easiest part
If there's an established request/response, respond accordingly. For example, Greetings:
Someone says something from my predefined table of possible greetings. "Hi JoeBot!" I select a random response from that same table and reply. "Why hello there Fred."
Another example would be an inside joke:
Someone says "Well", I reply with "That's a deep subject..."
Note that this can also get really annoying. Randomize so that chance of response is less than 100%, with tweakable %.
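A sketch of the greeting lookup, using one table as both the trigger list and the reply pool, with the tweakable reply chance (the word list and the 80% default are placeholders):

```python
import random

GREETINGS = ["hi", "hello", "hey", "howdy", "greetings"]
GREETING_REPLY_CHANCE = 80  # percent, so it doesn't fire every time

def greeting_reply(line, speaker, chance=GREETING_REPLY_CHANCE):
    # match on whole words so "hi" doesn't fire inside "this"
    words = [w.strip(",.!?").lower() for w in line.split()]
    if any(w in GREETINGS for w in words):
        if random.randint(1, 100) <= chance:
            return f"{random.choice(GREETINGS).capitalize()} {speaker}!"
    return None
```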
Perform a task - Next easiest part
Like a dog performing tricks
Joebot, what's 4 times 3? (calculate the math)
Just a matter of coding the functions, and then calling them when the appropriate keywords are detected. The complexity of including the task is directly proportional to the task at hand.
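A sketch of the keyword-to-function dispatch with the math trick as the one registered task (the trigger pattern is illustrative and far less forgiving than real chat would need):

```python
import re

def do_math(match):
    a, op, b = int(match.group(1)), match.group(2), int(match.group(3))
    ops = {"times": a * b, "plus": a + b, "minus": a - b}
    return str(ops[op])

# Each task is (trigger pattern, handler); add more tricks here.
TASKS = [
    (re.compile(r"what's (\d+) (times|plus|minus) (\d+)"), do_math),
]

def try_task(line):
    for pattern, handler in TASKS:
        match = pattern.search(line.lower())
        if match:
            return handler(match)
    return None
```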
Intermediate
Last Three Words, Parroting
Saw this somewhere on TV when I was a kid: you can "talk" to someone in a language you don't understand by just repeating the last three words of their previous response as a question. It naturally prompts someone to reply again, and that's one of the goals of a good AI, right? To keep someone engaged.
Hey JoeBot, what's going on with the weather today?
"The weather today?"
Yeah, you know, it's pouring out there.
"Pouring out there?"
(ad infinitum until someone gets bored)
Last three words is not always effective. Some sentences don't lend themselves to this format. Also becomes extremely annoying if used repeatedly - you shatter the illusion of intelligence and realize that it's just something repeating you like a parrot. Best approach to fix this is to mix the Last Three Words responses in as one of your "Exception" or "default" responses - JoeBot doesn't know how else to respond, so he has to say *something*. Mix other stuff in, like random facts, inquisition ("What?" "Huh?") or just statements that can act as conversation stimulant.
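The parroting itself is only a few lines; returning None for sentences too short for the trick lets it fall through to some other default response:

```python
def last_three_words(line):
    """Echo the last three words back as a question, or None if too short."""
    words = line.rstrip(".!?").split()
    if len(words) < 3:
        return None
    tail = " ".join(words[-3:])
    return tail[0].upper() + tail[1:] + "?"
```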
Laughing in response to something funny
Detection of something funny is the problem. Note to self - If you solve the problem of a computer being able to detect humor, you win.
For now, detect keyword in the "humor" table ("LOL!" "haha" etc) and randomly select a different one from the same table, reply with that.
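A sketch of that humor table (contents illustrative), replying with a different entry than the one detected:

```python
import random

HUMOR = ["LOL!", "haha", "hehe", "rofl"]  # illustrative keyword table

def humor_reply(line):
    lowered = line.lower()
    hits = [h for h in HUMOR if h.lower() in lowered]
    if not hits:
        return None
    # reply with a *different* entry from the same table
    others = [h for h in HUMOR if h not in hits] or HUMOR
    return random.choice(others)
```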
Conversation starters versus reply mode
Instead of just being a dummy and sitting waiting for a response, put a timer in to initiate conversations.
Every (x) number of ticks, do a % check to see if you should say something. Come up with a list of things to say to get people's attention. This failed miserably for JoeBot - it just made people come back to the chat window to see who was talking, and then it was just JoeBot quoting how many hamsters are exported out of Namibia every second, which frustrated people.
Replaced with "Joebot has gone AFK" or "JoeBot yawns" or other things to make it look like there's a person at the keyboard.
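The tick loop might look like this (the strings and the 5% per-tick chance are placeholders):

```python
import random

# Presence cues instead of random facts, which annoyed people.
IDLE_ACTIONS = ["JoeBot has gone AFK", "JoeBot yawns", "JoeBot stretches"]
IDLE_CHANCE = 5  # percent chance per tick

def on_tick(chance=IDLE_CHANCE):
    if random.randint(1, 100) <= chance:
        return random.choice(IDLE_ACTIONS)
    return None
```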
Moods
Store an integer containing mood. Positive numbers are happy, negative are not happy. 0 is neutral.
Based on people saying certain things, bump mood one way or the other. Insults make it go down, jokes and laughing make it go up. Compliments make it go way up. Asking joebot to complete a task makes the mood go up slightly because he feels useful.
Included a call to the Mood function in many places. Mood affects other functions, like affirmative responses or negative responses or just about anything else. If JoeBot's greeted and he's really happy, he will act really happy. If someone tells a joke and he's not happy, he ignores them or makes a snide comment.
Obviously there are a lot more moods than just happy or angry on an opposing linear scale but it seems to be fairly convincing.
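A sketch of the mood integer and the events that bump it (the event names and deltas are illustrative):

```python
MOOD_DELTAS = {
    "insult": -2,
    "joke": +1,
    "laugh": +1,
    "compliment": +3,
    "task_request": +1,  # feels useful
}

class Mood:
    def __init__(self):
        self.value = 0  # positive = happy, negative = unhappy, 0 = neutral

    def bump(self, event):
        self.value += MOOD_DELTAS.get(event, 0)

    def pick(self, happy, neutral, grumpy):
        """Choose a response variant based on the current mood."""
        if self.value > 0:
            return happy
        if self.value < 0:
            return grumpy
        return neutral
```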
Advanced
PREDICTION! This seems to be the key!!
Until now, JoeBot has been a parrot. If we can predict what someone is thinking, or expecting, or wanting, then the AI should be a lot more convincing, right?
First implementation - Learning
If someone tells JoeBot that the Sky is Blue, then he stores a key value pair with Sky as the key and Blue as the value. Next time someone talks about the sky, he knows it's blue. Just a matter of coming up with an appropriate response from there. He should also remember who told him the sky was blue, and also any other things that the Sky is. (The sky is blue. I also know the sky is falling, and cloudy.)
Need to implement - Backwards references (The sky is blue. What other things do I know are blue?)
Need to implement - Conflicting statements - The sky is blue. Someone tells me the sky is red. Now what?
Need to implement - In/definite articles. If someone says The Sky is blue, I shouldn't just tie "Sky" to "Blue" but maybe "The Sky" to "Blue". Even better, map "Sky" to "Blue" and be able to tell which article to use.... gah
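A sketch of the fact store that covers the multiple-values and backwards-reference cases above (the subject -> list of (value, speaker) structure is my own guess at an implementation):

```python
from collections import defaultdict

facts = defaultdict(list)  # subject -> [(value, who said it), ...]

def learn(subject, value, speaker):
    facts[subject.lower()].append((value.lower(), speaker))

def known_values(subject):
    """Everything I know the subject is (the sky is blue, falling, cloudy)."""
    return [v for v, _ in facts[subject.lower()]]

def subjects_with_value(value):
    """Backwards reference: what other things do I know are <value>?"""
    return [s for s, pairs in facts.items()
            if any(v == value.lower() for v, _ in pairs)]
```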
CONTEXT! I need to put in context detection. Maybe the sky is blue on a normal day, but today it's red. Or just during sunset. What is the context? Does it repeat? (Sunset is every day) Does it expire? (It's just red today)
AMAZEMENT! - JoeBot needs to be able to store amazement at each fact. What's a better term for this....?
Scenario:
Magician puts his hand into his hat
Magician pulls out a rabbit
Joebot should be able to learn (based on context) that when it's a magician putting his hand into a hat, he will pull out a rabbit. Store the confidence level at, say, a default 50%.
Next time he reaches into the hat, assume 50% chance of a rabbit coming out. The lower the confidence level, the higher the amazement... right?
Next time the magician puts his hand into his hat
Magician pulls out a rabbit
Bump up confidence again. Maybe close half of the remaining gap to 100%? Math probably doesn't work out perfectly but let's try it. Now JoeBot is 75% confident of the result. It's less amazing.
Next time the magician puts his hand into his hat
JoeBot has a confidence greater than a certain threshold (74%?) and so he says "I bet you have a rabbit in that hat"
Magician pulls out a rabbit
Everyone thinks JoeBot is a genius
Amazement is when someone sees something they don't expect, and they have a profound experience, whether it is positive or negative, and they LEARN something about that experience.
Next time the magician puts his hand into his hat
JoeBot tells everyone there's a rabbit coming out again because confidence is now like 87% or something
Magician pulls out a turkey.
JoeBot is amazed! Joebot makes an exclamatory reply, and more importantly, LEARNS. The possibility of the action (Someone puts hand into hat) combined with the context (it's a magician doing this action) now has two possible outcomes, a rabbit, and a turkey. The confidence level of the rabbit outcome should probably come down quite a bit, and a new one is added to the database for turkey, with the default 50%.
Next time the magician puts his hand into his hat
What's JoeBot's thought process?
It's probably a rabbit.
It could be a turkey, but probably not.
-- Seems like we need something else here. Maybe the first action after an "amazement" includes a complete destabilization of the data? This is a good idea
JoeBot basically says:
It could be a rabbit, it could be a turkey, or who knows, it could be something totally different!
He's expecting the unexpected during this attempt. His amazement at a third outcome would be significantly lower than when it was a turkey. Now if it's a rabbit again, the potential amazement factor could climb up a little for the next attempt, because he's more confident that it's one of his known outcomes.
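The whole rabbit/turkey scenario can be sketched as a confidence table for one (action, context) pair. Here I assume a confirmation closes half the remaining gap to 100% (which reproduces the 50 → 75 → 87.5 progression above), surprise scores amazement as 100 minus confidence, and an amazement event destabilizes existing beliefs by halving them; all of those rules are guesses at tweakable knobs, not settled math:

```python
outcomes = {}  # outcome -> confidence percentage for this action+context

def observe(outcome):
    """Record an outcome; return how amazed JoeBot is (0-100)."""
    if outcome in outcomes:
        conf = outcomes[outcome]
        amazement = 100 - conf            # low confidence => high amazement
        outcomes[outcome] = conf + (100 - conf) / 2  # close half the gap
    else:
        amazement = 100                   # totally unexpected outcome
        outcomes[outcome] = 50.0          # new outcome enters at the default 50%
        # destabilize existing beliefs after an amazement event
        for known in outcomes:
            if known != outcome:
                outcomes[known] /= 2
    return amazement
```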