AI fails at mimicry

This Technology Review article, “A Turing Test for Computer Game Bots”, demonstrates the vast gulf that exists between computer algorithms and human behavior. The premise of the AI attempt:

Can a computer fool expert gamers into believing it’s one of them? That was the question posed at the second annual BotPrize, a three-month contest that concluded today at the IEEE Symposium on Computational Intelligence and Games in Milan.

The contest challenges programmers to create a software “bot” to control a game character that can pass for human, as judged by a panel of experts. The goal is not only to improve AI in entertainment, but also to fuel advances in non-gaming applications of AI. The BotPrize challenge is a variant of the Turing test, devised by Alan Turing, which challenges a machine to convince a panel of judges that it is a human in a text-only conversation.

But in this contest, even the “text-only conversation” is omitted:

For the contest, in-game chatting was disabled so that bots could be evaluated for their so-called “humanness” by “physical” behavior alone.

Moreover, the virtual playing field was tilted even further in favor of the bots:

And, to elicit more spontaneity, contestants were given weapons that behaved differently from the ones ordinarily used in the game.

Thus the set of circumstances is heavily stacked in the bots’ favor:

  1. a pre-determined and finite virtual world,
  2. with algorithmic physics and properties,
  3. an absence of linguistic requirements,
  4. where human and bot “behaviors” are identically constrained by the limits of the avatars’ game mechanics,
  5. and conditions (“weapons that behaved differently…”) designed to blunt the human players’ acumen.

So, what was the result?

Each expert judge on the prize panel took turns shooting against two unidentified opponents: one human-controlled, the other a bot created by a contestant. After 10 to 15 minutes, the judge tried to identify the AI. To win the big prize, worth $6,000, a bot had to fool at least 80% of the judges. As in last year’s competition, however, none of the participants was able to pull off this feat.

 

In the Turing Test, including this variant, the requirement is merely successful simulation, which is just mimicry. There is no requirement that the simulacrum embody any of the causal or inferential entailments or organizational properties of the system that it simulates. Indeed, the Test is specifically designed in a way that intentionally blocks any investigation of those properties.

Now, plainly, if the human mind were algorithmic, then the behavior that a human exhibits would, by definition, have a corresponding Turing machine. Further, by definition, such a machine is simulable by another computer algorithm. Indeed, there will be an arbitrarily large number of such simulations. Each of these simulations amounts to merely an exercise in curve-fitting.
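
To make the point concrete, here is a minimal sketch in Python (the Fibonacci behavior and the function names are illustrative choices of mine, nothing to do with the contest): three programs with entirely different internal organization whose observable behavior is identical.

    from functools import lru_cache

    def fib_iterative(n: int) -> list[int]:
        """Produce the first n Fibonacci numbers with a simple loop."""
        seq, a, b = [], 0, 1
        for _ in range(n):
            seq.append(a)
            a, b = b, a + b
        return seq

    @lru_cache(maxsize=None)
    def _fib(k: int) -> int:
        return k if k < 2 else _fib(k - 1) + _fib(k - 2)

    def fib_recursive(n: int) -> list[int]:
        """Produce the same behavior by memoized recursion."""
        return [_fib(k) for k in range(n)]

    def fib_closed_form(n: int) -> list[int]:
        """Produce the same behavior from Binet's closed-form formula."""
        phi = (1 + 5 ** 0.5) / 2
        return [round((phi ** k - (-phi) ** -k) / 5 ** 0.5) for k in range(n)]

    # Identical observable behavior, entirely different internal workings:
    assert fib_iterative(20) == fib_recursive(20) == fib_closed_form(20)

An observer restricted to the outputs alone cannot tell the loop from the recursion from the closed-form expression: the behavior underdetermines the mechanism.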

It could be argued that the human mind is indeed algorithmic, but that we just don’t know the proper Turing machine, and thus our simulations will be inexact. But this line of argument is vacuous insofar as it carries no evidence of algorithmicity. Since a simulation is devoid of any requirement that the simulacrum embody any of the entailment or organizational properties of the system that it simulates, even a good simulation can, by definition, entail nothing about what goes on inside the simulated system.

More succinctly: successful curve-fitting entails nothing about the internal workings of the system that generated the fitted curve.
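
The same point in the curve-fitting idiom, as a rough sketch (the two quadratic “mechanisms” here are hypothetical stand-ins of my own): a least-squares fit is a function of the observed data alone, so it cannot distinguish which mechanism produced the data.

    import numpy as np

    # Two different internal "mechanisms" that generate the same observable
    # curve: an explicit formula and a memorized lookup table.
    def mechanism_formula(t: int) -> float:
        return 3.0 * t ** 2 + 1.0

    _table = {t: 3.0 * t ** 2 + 1.0 for t in range(10)}

    def mechanism_table(t: int) -> float:
        return _table[t]

    ts = np.arange(10)
    y1 = np.array([mechanism_formula(t) for t in ts])
    y2 = np.array([mechanism_table(t) for t in ts])

    # The fitted coefficients depend only on the observed curve, so they
    # are identical for both mechanisms; the fit reveals nothing about
    # the internal workings that generated the data.
    assert np.allclose(np.polyfit(ts, y1, 2), np.polyfit(ts, y2, 2))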

 

This is of course the problem with simulations in general, and why simulations and models are entirely different species, with entirely different epistemological worth. Unlike simulations, models are by definition precisely those systems that provide a corresponding synonymy with the organization of entailments within the modeled system. The details of this correspondence constitute the Modeling Relation. Epistemologically, a model can answer “why?” questions about the system, because a model is specifically made up of the “becauses” of the system: its entailment organization. A simulation, on the other hand, cannot answer any “why?” questions about the system, because a simulation has no enforced synonymy of entailments with the simulated system. The epistemological value of a simulation, therefore, is limited at best to behavior, to curve-fitting. Or, as Louie wrote [1], “simulation describes; models explain”.
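
In the diagrammatic notation common in relational biology (a sketch; I am using the customary symbols: N for the natural system with its causal entailment c, F for the formal system with its inferential entailment i, and ε, δ for the encoding and decoding maps), the Modeling Relation is the requirement that the following diagram commute:

    \[
      \begin{array}{ccc}
        N & \xrightarrow{\ \varepsilon\ } & F \\
        {\scriptstyle c}\big\downarrow & & \big\downarrow{\scriptstyle i} \\
        N & \xleftarrow{\ \delta\ } & F
      \end{array}
      \qquad \text{i.e.,} \qquad
      c \;=\; \delta \circ i \circ \varepsilon .
    \]

A simulation is exempt from precisely this commutativity requirement: it must reproduce outputs, but no encoding and decoding bind its internal inferences to the system’s causal entailments.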

 

So, let us suppose that in a year, or ten, or twenty, someone wins the contest. What will that tell us about the internal workings of humans? Nothing, for the epistemological nature of simulation is such that it cannot tell us anything: it cannot answer any of our “why?” questions about the internal workings of humans.

 

I am sure that computer games will become better and better in their AI capacities, as they demonstrably have from past to present. Let us just keep in mind that simulations do not asymptotically approach models: there is no amount of curve-fitting improvement that can cause a simulation to change species into a model.

 

References:

[1] Louie, A.H. 2009. More Than Life Itself: A Synthetic Continuation in Relational Biology. Ontos-Verlag.

 
