Statisticians are evil

I’ve made it my life’s goal to replace all statisticians with cute little robot bunnies. Watch the following video for a demo of my first prototype.

I developed a server in Prolog for the Nabaztag:tag bunny and hooked it up with a dialogue system I created during my master’s. It uses an unofficial Google API for speech recognition and generation. It’s sometimes quite slow because of the poor Wi-Fi connection, the inefficient polling of the Nabaztag, and the speech recognition. I have some ideas for improving its speed, though. Read on for a transcript of the dialogue with comments.

This dialogue demonstrates how a student or researcher might use the system to find out which statistical technique is most appropriate for the research question at hand.

Bunny Hello. How can I help you?
User Which statistical test should I use?

This interaction starts with the system greeting the user. The system’s initial goal is to find out how it can help the user. Once the system learns that the user needs help determining an appropriate statistical technique, it loads the relevant script and starts executing it. The initial goal is now completed, and the new goal becomes finding out which statistical technique the user should use.
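I won’t go into the actual implementation in this post, but to give a rough idea of how a script and a goal could be represented, here is a minimal sketch. All names below (script/2, resolved/2, pending_goal/1, next_question/1) are made up for illustration; the real server’s representation is richer.

```prolog
:- dynamic resolved/2.
:- dynamic pending_goal/1.

% A script is (roughly) a named list of questions to resolve.
script(choose_statistical_technique,
       [ number_of_dependent_variables,
         dependent_variable_type,
         number_of_independent_variables,
         independent_variable_type,
         parametric_assumptions_met
       ]).

% Ask the first question in the script that hasn't been resolved yet,
% and add a new goal: finding an answer to that question.
next_question(ScriptName) :-
    script(ScriptName, Topics),
    member(Topic, Topics),
    \+ resolved(Topic, _),
    !,
    format("Okay. Next question: ~w~n", [Topic]),
    assertz(pending_goal(resolve(Topic))).
```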

Bunny Let’s see. Okay. How many dependent variables are there?
User What is a dependent variable?

The system responds with “Let’s see” and “Okay”, indicating that it has understood the question and has loaded a script to answer it. To answer the user’s question, the system first has to ask a series of questions, such as “How many dependent variables are there?”, “Are these continuous or categorical variables?”, and so on. The first action in this script is to find out how many dependent variables there are, so the system asks this question first. When the system asks a question, it also adds a new goal, namely that of finding an answer to the question it has just asked.

However, instead of answering this question, the user asks a clarification question. This causes the system to try to answer the clarification first before continuing.

Bunny Okay. A dependent variable is the outcome or effect variable. How many dependent variables are there?

The system outputs “Okay.” to indicate it has understood the question. Because the system’s knowledge base contains a fact that answers this question, this fact is output. Now the system considers this question to be answered and continues with the script by re-asking how many dependent variables there are.

User Two.
Bunny Okay. Are the dependent variables continuous?

The user gives an answer and the system considers the question answered. A new fact is added to the system’s knowledge base. Now the second question in the script is asked.

User I mean, there is one dependent variable.

The user actually meant to say that there is only one dependent variable. The system is able to revise what was previously answered: it accepts the new information and retracts the conflicting old fact from its knowledge base. The script for finding out which statistical technique to use is reloaded, which causes the system to now ask a different question.
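Under the hood, this revision step comes down to a retract followed by an assert. A sketch, assuming the same made-up resolved/2 representation as above:

```prolog
:- dynamic resolved/2.

% Hypothetical revision step: when the user corrects an earlier answer,
% drop any conflicting fact before asserting the new one.
revise(Topic, NewAnswer) :-
    retractall(resolved(Topic, _)),
    assertz(resolved(Topic, NewAnswer)).

% "I mean, there is one dependent variable" would then lead to:
% ?- revise(number_of_dependent_variables, 1).
```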

Bunny Okay. Is the dependent variable continuous or categorical?

The next question in the script is whether the dependent variable is continuous or categorical.

User It is continuous.
Bunny Okay. How many independent variables are there?

The next two questions in the script are how many independent variables there are and, if there is only one, whether it is a continuous variable. The system asks questions one by one, so first it outputs: “How many independent variables are there?”.

User There is one continuous independent variable.

The system allows the user to provide answers to questions that haven’t been raised yet. In this case the user tells the system that there is one continuous independent variable. This answers the current question but also the next unasked one. The system is able to resolve this question without first asking it.
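One way to get this behaviour is to let a single utterance assert facts for more than one topic; the script loop then finds the next topic already resolved and skips it, as in the next_question/1 sketch above. Again just an illustration: interpret/2 below is a stand-in for the real interpretation step.

```prolog
:- dynamic resolved/2.

% Stand-in for the real interpretation step: one utterance can map to
% several facts at once.
interpret([there, is, one, continuous, independent, variable],
          [ resolved(number_of_independent_variables, 1),
            resolved(independent_variable_type, continuous)
          ]).

% Accepting the utterance asserts all of these facts, so the next pass
% over the script finds the variable-type topic already resolved and
% doesn't ask about it.
accept(Utterance) :-
    interpret(Utterance, Facts),
    forall(member(Fact, Facts), assertz(Fact)).
```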

Bunny Okay. Does the data meet assumptions for parametric tests?

The system has one last question to resolve before it can tell the user which statistical technique to use. The assumptions for parametric tests are that the underlying source populations are normally distributed and that the measures derive from an equal-interval scale. It makes sense for the user to ask the following follow-up question.

User How can I check for normality?

This question causes the system to load another script and temporarily abandon the current one.

Bunny Let’s see. Okay. Do you want to check for normality visually or do you want to test for normality?

The system outputs two grounding utterances, “Let’s see.” and “Okay.”, to indicate it has understood the question and is loading a new script. Then it asks the first question in the script it has just loaded.

User I want to test for normality.

One can check for normality by inspecting a graph such as a histogram, P-P plot or Q-Q plot. One can also test for normality by running a Kolmogorov-Smirnov test or a Shapiro-Wilk test. The user wants to use a test.

Bunny Okay. Do you have more than 50 cases?
User Yes, more than 50.

A user might answer the question using some superfluous words, or the speech recognition might not completely recognize the spoken input. In this case the system recognizes the word “yes”, but only after ignoring the other input. This means the system assumes it has understood the user but isn’t entirely sure.
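Roughly, the system scans the recognized words for something it can use, here a plain “yes” or “no”, and treats the amount of ignored input as a measure of how sure it is. A minimal, entirely illustrative sketch:

```prolog
% Spot a yes/no answer in a noisy word list; the more words had to be
% ignored, the less certain the system is that it understood the user.
yes_no(Words, Answer, IgnoredWords) :-
    member(Answer, [yes, no]),
    memberchk(Answer, Words),
    !,
    length(Words, Len),
    IgnoredWords is Len - 1.

% ?- yes_no([yes, more, than, '50'], Answer, Ignored).
% Answer = yes, Ignored = 3.
```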

Bunny Okay. You have more than 50 cases. You can test for normality by using the Kolmogorov-Smirnov test. Let’s find out which statistical technique you should use. Let’s see. Does the data meet assumptions for parametric tests?

The system uses grounding feedback (“You have more than 50 cases.”) to let the user know what it has understood. It has now gathered enough information to conclude, and inform the user, that normality can be tested using the Kolmogorov-Smirnov test. Then it returns to the issue of finding out which statistical technique is most appropriate for the research question at hand. It reloads the previous script, which causes the system to output “Let’s see”. The next unresolved question in the script deals with finding out whether the data meets the assumptions for parametric tests.

User Yes
Bunny Okay. You should use Pearson correlation or regression.

The system now informs the user of the appropriate technique to use. All goals are now reached and the system just waits for new input.

User La dee da dee da.
Bunny I don’t understand.

The user says something the system can’t make heads or tails of, and the system informs the user of this.

User What time is it?
Bunny Let’s see. Okay. The time is 24 minutes to 11 in the morning.

The user asks what time it is. The system contains a script for answering this question. This script is loaded and the system outputs “Let’s see”. The script is very short: it consists of a single action that executes some Prolog code, which finds out the current time by looking at the system clock and asserts this fact to the knowledge base. Now the system is able to resolve this question and outputs the time.
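For the curious, that clock-reading action boils down to something along these lines. The fact and predicate names are made up; get_time/1 and stamp_date_time/3 are standard SWI-Prolog.

```prolog
:- dynamic resolved/2.

% The script's single action: read the system clock and assert the
% current time so the question can be resolved.
current_time_action :-
    get_time(Stamp),
    stamp_date_time(Stamp,
                    date(_Y, _Mon, _D, Hour, Minute, _Sec, _Off, _TZ, _DST),
                    local),
    assertz(resolved(current_time, time(Hour, Minute))).
```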

User Tell me, if you can, what’s the time in London?
Bunny Do you want to know what the time in London is?

The system has to skip quite a lot of input before it encounters something that looks like a question. The recognition confidence is therefore quite low, which causes the system to explicitly ask what the user wants to know.

User Yes.
Bunny Let’s see. Okay. The time in London is 23 minutes to 10 in the morning.

The system can answer questions about times around the world by making use of a REST web service. Something similar to the above happens, only now the script’s action doesn’t query the system clock but calls this web service.
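That action looks roughly like the following. The URL and the shape of the JSON reply are placeholders, since I’m not naming the actual service here; http_open/3, json_read_dict/2 and get_dict/3 come with SWI-Prolog.

```prolog
:- use_module(library(http/http_open)).
:- use_module(library(http/json)).
:- dynamic resolved/2.

% Placeholder world-time action: ask a REST service for the time in a
% city and assert the answer. URL and reply format are made up.
time_in_city_action(City) :-
    format(atom(URL), 'http://example.com/time?city=~w', [City]),
    setup_call_cleanup(
        http_open(URL, Stream, []),
        json_read_dict(Stream, Reply),
        close(Stream)),
    get_dict(time, Reply, Time),
    assertz(resolved(time_in(City), Time)).
```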

User And Toronto?
Bunny Let’s see. Okay. The time in Toronto is 23 minutes to 5 in the morning.

Besides talking about statistics and the time, the system can also be taught by providing it with information that it will happily remember; more on that in a moment. First, “And Toronto?” shows that the system can resolve elliptical questions.
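The idea behind the ellipsis is simply to keep the most recent question around and swap in the new entity. A sketch with made-up predicates (last_question/1, resolve_ellipsis/2), not the actual server code:

```prolog
:- dynamic last_question/1.

% After answering "what's the time in London?" the system remembers:
last_question(time_in(london)).

% "And Toronto?" reuses that question frame with the new entity.
resolve_ellipsis(NewEntity, Question) :-
    last_question(Frame),
    Frame =.. [Predicate, _OldEntity],
    Question =.. [Predicate, NewEntity].

% ?- resolve_ellipsis(toronto, Q).
% Q = time_in(toronto).
```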

User I have a Ford Mustang.
Bunny Okay.

The user informs the system of a new fact. This fact is added to the system’s knowledge base.

User What kind of car do I have?
Bunny Okay. I don’t know what kind of car you have.

Although the system knows the user has a Ford Mustang, it doesn’t know that a Ford Mustang is a kind of car. Therefore it doesn’t know how to answer this question.

User A Ford Mustang is a kind of car.
Bunny Okay.

The system is informed of a new fact. Now it does know that a Ford Mustang is a kind of car.

User What kind of car do I have?
Bunny Okay. You have a Ford Mustang.

These two facts combined give the system the ability to derive new facts and answer this question.
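In Prolog terms, the exchange so far comes down to two facts and one rule that joins them. The notation is my own paraphrase of whatever the system actually asserts:

```prolog
has(user, ford_mustang).          % "I have a Ford Mustang."
kind_of(ford_mustang, car).       % "A Ford Mustang is a kind of car."

% "What kind of car do I have?" becomes a query over both facts.
user_has_kind(Kind, Thing) :-
    has(user, Thing),
    kind_of(Thing, Kind).

% ?- user_has_kind(car, What).
% What = ford_mustang.
```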

User Am I happy?
Bunny Okay. I don’t know if you are happy.

The system doesn’t know whether the user is happy or not. 

User If I have a Ford Mustang then I am happy.
Bunny Okay.

The user can not only state facts but also rules, which the system happily adds to its knowledge base.

User Am I happy?
Bunny Okay. You are happy.

The system is able to derive new facts from rules and facts it already knows.
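The happiness rule is just an ordinary Prolog clause on top of the earlier fact (again my own paraphrase):

```prolog
has(user, ford_mustang).          % "I have a Ford Mustang."

% "If I have a Ford Mustang then I am happy."
happy(user) :-
    has(user, ford_mustang).

% ?- happy(user).
% true.
```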

So what do you think? Should statisticians start worrying already? Leave your love for robot bunnies in the comments below.
