An Example Autopig Based on an Emotion Sensor

(These are four email articles related to autopigs and current, known technology.)



To: mcactivism@yahoogroups.com
Date: Fri, 03 Jan 2003 22:08:10 -0500
From: "Allen L. Barker" <alb@datafilter.com>
Subject: ScienceDaily: Designing A Robot That Can Sense Human Emotion
 

This "emotion sensor" is a crude example of the sort of feedback
system that a mind control torture machine might use.  I have
called such machines "autopigs" in my essays such as
     http://www.datafilter.com/mc/autopig.html
     http://www.datafilter.com/mc/mentalFirewalls.html
The technology can be used to help people or to inflict harm and
torture.  Below, the example is given of the robot asking "is there
anything I can do to help?"  Instead of optimizing that, though, it
might be programmed to optimize discomfort, distress, and trauma.  "Is
there anything I can do to induce trauma or re-inflict it when I detect
it?"  Imagine such a machine, say, generating continuous voice-to-skull
phrases and measuring which ones had which emotional effects.
It could then even recombine the syntax of certain phrases in a genetic
algorithm designed to maximize a fitness function based on the torture
victim's measured distress.  And later provide the torturer with a list
full of tested and strongly conditioned "psychic driving" phrases.
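
To make the closed-loop structure concrete, here is a minimal sketch of the
kind of generic selection loop described above: candidate phrases are scored
by some external feedback signal, the top scorers are recombined and mutated,
and the loop repeats.  Everything here is illustrative -- the
measure_response() callback stands in for whatever sensor feedback closes the
loop and is not implemented, and all names are hypothetical.

    import random

    def evolve_phrases(seed_phrases, measure_response,
                       generations=20, population_size=30):
        """Generic genetic-algorithm loop over short word sequences.

        measure_response(phrase) -> float is an abstract placeholder for
        the external feedback signal; it is not implemented here.
        Assumes at least two seed phrases.
        """
        population = list(seed_phrases)
        for _ in range(generations):
            # Score every candidate with the feedback signal; keep the top quarter.
            ranked = sorted(population, key=measure_response, reverse=True)
            parents = ranked[:max(2, population_size // 4)]

            children = []
            while len(children) < population_size:
                a, b = random.sample(parents, 2)
                wa, wb = a.split(), b.split()
                cut = random.randint(1, min(len(wa), len(wb)))
                child = wa[:cut] + wb[cut:]        # crossover: splice word sequences
                if random.random() < 0.2 and len(child) > 1:
                    i, j = random.sample(range(len(child)), 2)
                    child[i], child[j] = child[j], child[i]   # mutation: swap two words
                children.append(" ".join(child))
            population = children

        return sorted(population, key=measure_response, reverse=True)

The loop itself is ordinary machinery; what it ends up optimizing depends
entirely on what the feedback signal measures.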
 
 

Designing A Robot That Can Sense Human Emotion
http://www.sciencedaily.com/releases/2002/12/021216070618.htm

[...]

The project has two basic parts, and both are
ambitious. One is to develop a system that
can accurately detect a person's psychological
state by analyzing the output of a variety of
physiological sensors. The other is to process
this information in real time (as it happens)
and convert it into a form that a computer or
robot can process.

[...]

The Vanderbilt researchers are using an approach
similar to that adopted by voice and
handwriting recognition systems. They are
gathering baseline information about each person
and analyzing it to identify the responses
associated with different mental states. One
advantage that the researchers have is the
recent advances in sensor technology.
"Extremely small, 'wearable' sensors have been
developed that are quite comfortable and
are fast enough for real-time applications,"
says Sarkar.

[...]

He and his research team have since supplemented
their measures of heart rate with
measures of skin conductance (affected by
variations in hand sweating) and facial muscle
activity (brow furrowing and jaw clenching).
They were able to combine this information to
produce a series of rules that allow a robot to
respond to information about a person's
emotional state. They have used these to program
a small mobile robot. The robot is initially
given a task of exploring the room. So it begins
moving randomly about on the floor. Then
physiological data of a person experiencing high
anxiety levels is sent to a processor that
detects the anxiety level and instructs the mobile
robot to move to a specific location and
say, "I sense that you are anxious. Is there
anything I can do to help?"
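
A rough sketch of the baseline-plus-rules pipeline described above: each
sensor channel is normalized against the person's own resting measurements,
and a simple rule fires the robot's scripted response when the combined
signals sit well above baseline.  The channel names, threshold, and functions
below are assumptions made for illustration, not details of the Vanderbilt
system.

    from statistics import mean, stdev

    def calibrate(baseline_samples):
        """Per-person baseline: mean and spread for each sensor channel.
        baseline_samples is a list of dicts, e.g.
        {"heart_rate": 72.0, "skin_conductance": 4.1, "facial_emg": 0.8}."""
        channels = baseline_samples[0].keys()
        return {ch: (mean(s[ch] for s in baseline_samples),
                     stdev(s[ch] for s in baseline_samples))
                for ch in channels}

    def anxiety_score(reading, baseline):
        """Average z-score across channels: how far the current reading sits
        above this person's own resting levels."""
        zs = [(reading[ch] - mu) / sd if sd else 0.0
              for ch, (mu, sd) in baseline.items()]
        return sum(zs) / len(zs)

    def react(reading, baseline, threshold=2.0):
        # Rule: combined signals well above baseline -> scripted response.
        if anxiety_score(reading, baseline) > threshold:
            return "I sense that you are anxious. Is there anything I can do to help?"
        return None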

In order to investigate additional psychological
states, Smith has created three simple tasks
- anagram, sound discrimination, and math problems
that systematically increase in difficulty -
that are designed specifically to make the
performer frustrated or bored. They will be adding
additional sensors, such as electroencephalogram
(EEG) brain wave monitors and additional
measures of cardiovascular activity. The next
challenge that the researchers face is finding
a way to discriminate between high levels of
anxiety and engagement. These two states are
accompanied by physiological responses that are
much closer to each other than either of
them is to low levels of anxiety or engagement.
"This is the really big one," says Smith.


--
Mind Control: TT&P ==> http://www.datafilter.com/mc
Home page: http://www.datafilter.com/alb
Allen Barker
 
 


To: mcactivism@yahoogroups.com
From: "Allen L. Barker" <alb@datafilter.com>
Date: Fri, 24 Jan 2003 16:39:49 -0500
Subject: [mcactivism] Re: Current Tech: Mass Production of Torture, Cruel, Inhumane &
 Degrading Treatment
 
 

The 1) "emotion sensor" combined with a 2) voice-to-skull device, a
3) computer running a simple genetic algorithm, and a 4) voice synthesizer
provides an excellent example of a modern, automated torture device.

Many people are used to hearing about each piece of technology alone, but
have not considered how such pieces can be combined.  The example is so
good because it is 1) simple, 2) closed-loop, 3) fully automated, and
4) technically feasible right now.  It also illustrates the linguistic and
triggering aspects of harassment with "voices," as well as what the harassers
hope is the cover of deniability provided by the abuse of psychiatry.

Once a person understands this example, he or she may then readily be able
to see further basic extensions to such techniques.  Automated "autopig"
harassment and torture systems ("invisible fencing" for human dissidents)
are particularly frightening because so few harassers can control and
manipulate so many human beings.  This allows for the construction of
distributed concentration camps.  That technology is here now, for better
or worse, so bringing it out at least takes away some of the deniability
the abusers of it try to exploit.

Automated harassment systems were even mentioned in a footnote back in
the 1974 Ervin report (http://www.datafilter.com/mc/ervinReportExcerpts.html).
The UCLA Violence Project, also from the early 70s, provides another example.
Monitors at a Nike missile base were to track the brain waves of citizens
and aversively "jolt" them if they committed thoughtcrimes.  See, for example,
http://www.datafilter.com/mc/uclaViolenceProject.html.  Everyone knows how
much computer technology has advanced since then, and so-called nonlethal
technology has advanced by several decades as well.

Of course an automated "autopig" system does not necessarily *only* use
automation.  You can have one monitor for every n prisoners in the system.
Such monitors can interject realtime, live harassment from time to time,
and update the harassment data file so the software can continue with the
automated harassment/conditioning later.

Consider such a basic, closed-loop system with state-of-the-art, classified
remote sensing technology and state-of-the-art neuroinfluencing technology.
For an idea of some of the current *open* technology in those areas, see
     http://www.datafilter.com/mc/thoughtInference.html
     http://www.datafilter.com/mc/nonlethalWeapons.html
     http://www.datafilter.com/mc/implants.html
Combine such a system with realtime monitors, as sadistic as torturers in
any Paraguayan detention center from the 70s.  Then include damage control
units which try to keep the victims from speaking out about the abuses and
from being believed.  That is the torture that many of the mind control
victims experience right now.  That is the state of "liberty" in the
so-called "free" USA.

It has to be stopped, one way or another, and it is not going to go away by
itself.  If unopposed it is only going to get more entrenched and that much
more difficult a yoke for our children to have to try to free themselves from.
If ignored it will only expand to more and more victims.  It is not going to
get *easier* to oppose than it is now.

For more information about my observations of such systems, see this series
of articles:

"Models of Synthetic Telepathy" -- The basic model structure written out,
plus general comments.  The later articles work from this model structure.
http://www.datafilter.com/alb/modelsOfSyntheticTelepathy.html

"Surreptitious Acoustic Signal Modulation, Voice Projection,
and Direct Brain Interface"  -- Ways that harassing audio voices can be
technologically sent to victims, plus general comments.
http://www.datafilter.com/alb/acousticModulationAndBrainInterface.html

"Working Models" -- Trying to select or create a model that lets you somehow
function in the American secret police state while under torture, plus more
general comments.
http://www.datafilter.com/alb/workingModelsUnderMindControlTorture.html

For more on the feedback aspects, see also this diagram I made a few years
back, "Feedback Disparity in Repressive Control Systems" at
http://www.datafilter.com/uva/ugly/feedbackDisparity.html
 

"We've been watching you a long time.  We know more about you than you
think." -- What an RTI blurted out, angry, when I told him he was "powerless."
 

---------

Finally, a postscript for mixed model fans.  Many people believe in psychic
phenomena, whether such powers exist or not.  Whatever the true case there,
the abusers of mind control technology play those beliefs as a psyop.  How
could you really tell if there were true psychic powers anyway, if you were
technologically manipulated and psyoped with fake psychics for, say, ten
years?  The abusers want to divide and conquer with their fake psychic tricks.
But I really do not care if real psychic powers exist or not.  I know the
technology does exist, and I know that what I see being used on me is
*definitely* technology.  So all people should want to expose the technology.
If there *were* psychic powers, kept secret with some omerta, then the
abusers of technology would exploit both that belief system and that secrecy.
They would reverse-engineer any biological/physical mechanism so they could be
the "king of the psychic hill."  They would suppress politically incorrect
psychics with cointelpro actions and psychic blinding weapons.  They would
try to convince technology victims it was all psychic so their cases wouldn't
stand a chance in court.
 

--
Mind Control: TT&P ==> http://www.datafilter.com/mc
Home page: http://www.datafilter.com/alb
Allen Barker
 
 



To: mcactivism@yahoogroups.com
From: "Allen L. Barker" <alb@datafilter.com>
Date: Fri, 07 Feb 2003 05:06:55 -0500
Subject: [mcactivism] microwave voice projector demonstrated at secret 1993 conference?
 

This is some information related to a reported demonstration of a
microwave voice projection device at the secret 1993 non-lethal
weapons conference.  It may be useful for a discovery motion in a
lawsuit or class action.  It might also be useful for FOIA requests
(though the FOIA has essentially been gutted at this point).  More
information is needed on the connection between the general term
(or euphemism) "Voice Synthesis" and a microwave device; I don't
know the reporter's source for the information.  And of course
they'll say the connection isn't there even if it is, because it is secret.

Here is a Nexus magazine quote from their Oct/Nov 1994 issue.  The
three links below all include it, along with related information.
     http://www.newdawnmagazine.com/Articles/Brain%20Zapping1.html
     http://www.angelfire.com/or/mctrl/part1.html
     http://216.239.57.1/etc...

   Directed-energy weapons currently being deployed include, for
   example, a microwave weapon manufactured by Lockheed-Sanders and
   used for a process known as "Voice Synthesis" which is remote
   beaming of audio (i.e., voices or other audible signals) directly
   into the brain of any selected human target. This process is also
   known within the U.S. government as "Synthetic Telepathy." This
   psychotronic weapon was demonstrated by Dr. Dave Morgan at the
   November, 1993 non-lethal weapons conference.
Here's the link to the table of contents of that back issue of Nexus:
     http://www.nexusmagazine.com/222.conts.html

Here's the schedule of talks at that 1993 non-lethal weapons conference.
Do you really think the FBI doesn't know all about this technology?  Janet
Reno was a lunch speaker.  Dave Morgan's technology presentation is titled
"Voice Synthesis."
     http://www.datafilter.com/mc/nonlethalsConference93.html

Here's the "DoD Draft Non-Lethal Weapons Policy" from July 21, 1994:
     http://www.heart7.net/mcf/mindnet/mn168.htm

   The term "adversary" is used above in it broadest sense, including
   those who are not declared enemies but who are engaged in
   activities we wish to stop. This policy does not preclude legally
   authorized domestic use of non-lethal weapons by U.S. military
   forces in support of law enforcement.
Here is a 1994 "Memorandum of Understanding Between Department of
Defense and Department of Justice on Operations Other Than War and Law
Enforcement."  The departments agree to share and jointly develop
"technology and systems applicable to both."
       http://www.namebase.org/foia/mou01.html

These are some email items online from a David C. Morgan at Lockheed
(david.c.morgan@lmco.com).  I am *not* certain that this is the same
Dave Morgan.  On the surface, at least, the project here seems to
involve managing the states and "behaviors" of a heat pump system
aboard a ship.  It is certainly something to think about, whether or
not this is the same Dave Morgan.  I have slightly reformatted some of
the excerpts for readability.

http://www.cc.gatech.edu/classes/RWL/Projects/lockheed/Fall%202000/emails.htm
http://www.rootnode.com/Members/jeff/lockheeddesign/frontpage/973817775/index_html

   Once we have a bunch of fully described objects (their individual
   information and how they fit into the ship structure via the system
   and subsystem format) and we know how they behave, we can create
   state tables that will contain the information that the defined
   behaviors allow. A valve state table would include fields for valve
   state (open or closed) and alarm status (in alarm or not in
   alarm). The structure of a heat pump state table would be
   identical, but the constraints imposed by the above behavior
   definitions would make it hold states of Cooling, Heating, Off, etc
   in the state field instead of the open or closed values that apply
   to valves. The alarm status would behave identically to the valve
   alarm status since the same behavior was assigned to each behavior
   set.
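
In other words, the state tables share one structure (a state field plus an
alarm field); only the set of legal values in the state field differs by
device type.  A toy rendering of that idea, using the numeric codes given in
the longer excerpt further below (the names here are assumed):

    from enum import Enum

    class Alarm(Enum):           # shared behavior, identical for every device type
        NOMINAL = 0
        IN_ALARM = -1

    class ValveState(Enum):      # legal states for a valve
        OPEN = 5
        CLOSED = 6

    class HeatPumpState(Enum):   # same table structure, different legal states
        OFF = 2
        HEATING = 8
        COOLING = 9

    # One row of each state table: identical fields, different allowed values.
    valve_row = {"device_id": "V-101", "state": ValveState.OPEN, "alarm": Alarm.NOMINAL}
    pump_row  = {"device_id": "HP-7", "state": HeatPumpState.COOLING, "alarm": Alarm.NOMINAL}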

---------

 http://www.cc.gatech.edu/classes/RWL/Projects/lockheed/Fall%202000/long.project.plan.html

   Introduction and Conception History

   In the Summer of 2000 Lockheed Martin requested the services of the
   RWL to design a flexible and robust information system for the
   management of mechanical devices.

   The integration of computers with devices is inevitable. With
   continually decreasing size, computers will be embedded into many
   devices to make them "smart". A refrigerator is an example of a device
   that will be made intelligent with the addition of a computer. The
   refrigerator will report when the milk is getting low, or when items
   stored within it are on sale at the online grocery store, or when the
   temperature has become too high.

   The intent of this project is to provide an information system for the
   data collected from these "smart" devices. The system will be
   flexible, allowing for many types of devices. The system will also be
   modular; new devices can be added at any time. Furthermore, the system
   will provide an interface to the data. The interface presents
   instantaneous status information and historical data for each
   device. The parameters of the system, such as user access and device
   properties, will be configurable. Finally, the system will be
   redundant.

   One might envision this system in a warship.  The devices would
   include every part on that warship.  At any point, designated members
   would be able to check the status of each device to be sure that
   everything is working properly.  They would also be able to view the
   history of each device.  This system could also work inside a car,
   plane, factory, a home, or numerous other places.  Basically, anywhere
   one can find an abundance of devices which may or may not work
   together but are all part of a large system.

   The tentative completion date is Summer 2001.

---------

   http://www.rootnode.com/Members/jeff/lockheeddesign/frontpage/973818100/index_html

   * Here's what I did:

   Needing to define behaviors that varied right down to field type, I
   decided that I would group information based on field types. I
   separated Sensors from the rest of the data (since sensors will
   return floats and everything else can be based off of
   integers). This gives a general picture of the values being used
   for state data - indexed value (on, off, open, closed, heating,
   cooling, etc), a raw value (a float for sensor data) and some
   method of dealing with alarm data (I simply used negative
   integers). Ultimately, this results in a description of all
   possible behavior values (with raw values covering all of the
   floats from the sensors).

   1) Create a BehaviorCharacteristics table with a unique
   BehaviorValue (which describes the way to interpret the state data)
   and a textual description of the behavior (more for readability
   than anything else).  Basically, this is a constraints table that
   will be used to limit the available options for which a given
   device type can assert its state as being. For now, I use -1 to
   indicate In Alarm and 0 to indicate Nominal (in the future, the
   actual value may vary for the negative number to help show
   diagnostic information). The remaining states are indexed 1 to 10
   for On, Off, Stand By, Operating, Open, Closed, Raw Number (for
   sensors), Heating, Cooling, Inoperable. Not all of these states are
   currently used.  With all potential state data defined, I can begin
   assigning allowable state assignments (behavior characteristics) to
   a particular behavior.

   2) Create a Behaviors table that describes which behaviors can
   assume which states. Alarm behavior can take on Nominal (0) or In
   Alarm (-1), so that's two rows (one that maps Alarm behavior to -1
   and one that maps Alarm behavior to 0).  A valve's state can be
   Open (5) or Closed (6), so that's two rows for the valve state
   behavior. A heat pump can be Cooling (9), Heating (8) or Off
   (2). Sensors provide raw data and therefore are only capable of the
   Raw Number behavior characteristic (7). With specific behaviors
   defined, I can begin mixing and matching my behaviors to describe
   my devices.

   3) Create a BehaviorSet table that groups multiple behaviors into a
   single behavior set. This requires a BehaviorSetID (not unique,
   I'll explain why in a moment), a BehaviorSetName (more for
   readability than anything else) and a TableName. The BehaviorSetID
   cannot be unique since it will have multiple behaviors mapped to
   the same BehaviorSet (a combination of BehaviorSetID and Behavior
   will work). The TableName provides a means of linking a specific
   behavior for a specific BehaviorSet to a specific table. So, here I
   could define a generic Heat Pump behavior set as having the heat
   pump state behavior (Cooling, Heating or Off) and the Alarm
   behavior (In Alarm or Nominal). I could also define a generic Valve
   behavior set as having the valve state behavior (Open or Closed)
   and the Alarm behavior.  With future expansion of the types of
   devices (and hence, behavior sets) I can reuse my existing
   definitions for behaviors and just define a new behavior set. For
   example - we are currently integrating a new type of valve that
   also has sensors imbedded in it. This is easy to represent in the
   database because all I need to do is define a new behavior set
   "Valve with Sensors" and assign it the behaviors of valve state
   (Open/Closed), Alarm (In Alarm/Nominal) and Sensor (Raw
   Number). The table names that are associated with each behavior for
   each behavior set make it easy to discern where I need to go to
   get the data. The next step is to bridge the gap between my defined
   behavior sets and my actual devices. Instead of assigning a
   behavior set directly to a device, I decided to use a little
   abstraction to allow for a bit more information to be discerned by
   end users. I created another table that maps device types to
   behavior sets. At first, this looks like an unnecessary step, but
   having valves from ten different manufacturers with all of them
   behaving the same makes it a little more difficult to track down
   information about any one type of valve. So, creating the mapping
   table allows for me to list specifics about a type of device (size,
   manufacturer, etc) that differentiate it from all other devices
   while still allowing it to have the exact same behavior.

   4) Create a device type to behavior set map and add extra fields
   for specific information on the device types. Link the DeviceTypeID
   (unique) to the devices table and the behavior set to the
   BehaviorSets table.

   The above structure allows me to easily define new behaviors, mix
   and match behaviors among device types, maintain individual
   information on a specific type of device even though it behaves
   just like another type of device, add/remove behavior
   characteristics from devices or behaviors themselves and gives me
   the ability to find out what tables I need to use (and the type of
   information they hold) to find out the state of any given type of
   device.

   * I'd have to think about it a bit to figure out which approach is
   better. Mine is more normalized and relational, yours more object
   oriented. They both have their merits.

   STORING HISTORY
   * To store history, we make copies of every table, adding a
   TimeStamp field. This is the primary key for these tables. All
   History is stored in these tables. It would be bad for the database
   to store history in the same places that we store current
   information because it would increase query times.  - Yes. Simple
   and effective.
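
Pulling the excerpts above together, here is a compact sketch of that
relational design as SQLite DDL driven from Python.  Column names follow the
excerpt where it gives them; the column types, the extra device-type fields,
and the single example history table are assumptions made for illustration.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
    -- 1) Constraints table: every legal state value.  -1 = In Alarm,
    --    0 = Nominal, 1..10 = On, Off, Stand By, Operating, Open, Closed,
    --    Raw Number, Heating, Cooling, Inoperable.
    CREATE TABLE BehaviorCharacteristics (
        BehaviorValue INTEGER PRIMARY KEY,
        Description   TEXT
    );

    -- 2) Which behaviors may assume which values, one row per legal pairing
    --    (e.g. valve state -> 5 Open and valve state -> 6 Closed).
    CREATE TABLE Behaviors (
        BehaviorName  TEXT,
        BehaviorValue INTEGER REFERENCES BehaviorCharacteristics(BehaviorValue)
    );

    -- 3) Behavior sets group behaviors; BehaviorSetID repeats across rows,
    --    and TableName records where the live data for that behavior is kept.
    CREATE TABLE BehaviorSets (
        BehaviorSetID   INTEGER,
        BehaviorSetName TEXT,
        BehaviorName    TEXT,
        TableName       TEXT
    );

    -- 4) Device types map to behavior sets and carry type-specific details
    --    (manufacturer, size, ...) while sharing the same behaviors.
    CREATE TABLE DeviceTypes (
        DeviceTypeID  INTEGER PRIMARY KEY,
        BehaviorSetID INTEGER,
        Manufacturer  TEXT,
        Size          TEXT
    );

    -- History: a copy of each current-state table with a TimeStamp key,
    -- kept apart from the live tables.  One example copy shown here.
    CREATE TABLE ValveStateHistory (
        TimeStamp TEXT PRIMARY KEY,
        DeviceID  TEXT,
        State     INTEGER,
        Alarm     INTEGER
    );
    """)

Queries against current state never touch the history copies, which is the
query-time point made at the end of the excerpt.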


--
Mind Control: TT&P ==> http://www.datafilter.com/mc
Home page: http://www.datafilter.com/alb
Allen Barker





Date: Sun, 01 Jul 2001 20:21:50 -0400
From: "Allen L. Barker" <alb@datafilter.com>
To: mc@topica.com
Subject: Re: [MC]  AI monitors


Because the link below unfortunately expired, and was replaced
with an article about mental illness, I am including the full
text of the earlier article below.  [I think the fear that an
AI will become smarter than us and turn against us is curious.
Before that happens some humans will develop AI and turn it
against other humans.  Is that somehow better?]


"Allen L. Barker" wrote:

> http://www.usatoday.com/news/acovwed.htm
>
> 06/20/2001 - Updated 10:08 AM ET
> Artificial intelligence isn't just a movie
>
> By Kevin Maney, USA TODAY
>
> [...]
>
> In Littleton, Colo., a company called
> Continental Divide Robotics (CDR) is
> a result of work done at two AI labs -
> one at the Massachusetts Institute
> of Technology and the other at the
> Colorado School of Mines. CDR is
> about to offer a system that can locate
> any person or object anywhere in
> the world and notify the user if that
> person or object breaks out of a
> certain set of rules.
>
> One of the first uses is for tracking
> parolees. The parolee would wear a
> pager-size device that uses Global
> Positioning Satellite technology to
> know where it is. Over wireless networks,
> the pager constantly notifies
> CDR's system about its location. If
> the parolee leaves a certain area or
> gets near a certain house, the CDR
> software will make decisions about
> the severity of the violation and
> whom to contact. That makes it more
> sophisticated than the electronic
> anklets now used on some parolees.
>
> CDR's technology sounds simple, but
> it can involve a number of fuzzy
> choices. If a child being tracked
> goes just outside his limits, the system
> might decide to wait to see if he comes
> right back in. And it might decide
> whether to send you a light caution
> or a major warning - or to call the
> police. "We are literally creating
> software that is reactive and proactive,"
> says Terry Sandrin, CDR's founder.
> "It has the ability to make decisions."
>
> [...]
>


06/20/2001 - Updated 10:08 AM ET
Artificial intelligence isn't just a movie

By Kevin Maney, USA TODAY

Steven Spielberg's forthcoming A.I.: Artificial Intelligence is only a
movie. Or is it? The movie, set in the near future, is about a
humanlike robot boy who runs on artificial-intelligence software - a
computer program that doesn't just follow instructions, as today's
software does, but can think and learn on its own. In some ways, the
character is a fantasy. It's no closer to reality than the alien in
Spielberg's earlier E.T. the Extra-Terrestrial. Yet artificial
intelligence is very real. It's far from re-creating a human brain,
with its power, emotions and flexibility, though that might be
possible in as little as 30 years. Today's AI can re-create slices of
what humans do, in software that can indeed make decisions.

In recent years, this so-called narrow AI has made its way into
everyday life. A jet lands in fog because of relatively simple AI
programmed into its computers. The expertise written into the program
looks at dozens of readings from the jet's instruments and decides,
much as a pilot would, how to adjust the throttle, flaps and other
controls.

Lately, AI has increasingly turned up in technology announcements. For
example:

     * Charles Schwab, the discount brokerage, recently said it
     has added AI to its Web site to help customers find
     information more easily.

     * AT&T Labs is working on AI that can make robots play
     soccer and computer networks more efficient.

     * A computer program called Aaron, unveiled last month,
     has learned to make museum-quality original paintings. "It's
     a harbinger of what's to come," says technology pioneer Ray
     Kurzweil, who has licensed Aaron and will sell it to PC users.
     "It's another step in the blurring of human and machine
     intelligence."

The commercial successes help fuel laboratory research that's pushing
the fringes of AI ever closer to the equivalent of human intelligence.
Software is getting better at cleverly breaking down the complex
decision-making processes that go into even the simplest acts, such as
recognizing a face. Hardware is marching toward brainlike capacity.

The fastest supercomputer, the IBM-built ASCI White at Lawrence
Livermore National Laboratory in California, has about 1/1000th the
computational power of our brains. IBM is building a new one, called
Blue Jean, that will match the raw calculations-per-second computing
power of a brain, says Paul Horn, IBM's director of research. Blue
Jean will be ready in four years.

"Like myself, a lot of AI researchers are driven by the pursuit of
someday understanding intelligence deeply enough to create
intelligences," says Eric Horvitz, who was a leading scientist
in AI while at Stanford University and is now at Microsoft Research
in Redmond, Wash. "Many of us believe we really are on a mission."

Horvitz and others also believe this is breakthrough time for AI,
when the mission spins into a wide variety of technologies.

As an area of research, AI has been around since it was first
identified and given its name during a conference at Dartmouth
University in 1956.  It hit a peak of excitement and media attention
in the mid-1980s, when AI was overhyped as a technology that was about
to change the world. One fervent branch at the time was expert systems
- building a computer and software that could re-create the knowledge
of an expert. A brewing company, for instance, could capture a master
brewer in software, possibly making human master brewers less
necessary.

The exuberance was hindered by a couple of snags that led to
disenchantment with AI. For one, computers of the time weren't
powerful enough to even come close to mimicking a human's processing
power.  Two, AI was trying to do too much. Creating a complete
intelligence was too hard - and still is.

Knowing one thing well

These days, that's less of a barrier. Computers have gotten
exponentially more powerful every year. Now, a PC is capable of
running some serious AI software. And AI researchers have learned to
aim at pieces of human capacity, building software that knows it can't
know everything but can know one thing really well. That's how IBM's
Deep Blue beat champion Gary Kasparov in chess. Together, the
developments have "led to a blossoming of real-world applications,"
Horvitz says.

Those applications are taking on all forms.

In Littleton, Colo., a company called Continental Divide Robotics
(CDR) is a result of work done at two AI labs - one at the
Massachusetts Institute of Technology and the other at the Colorado
School of Mines. CDR is about to offer a system that can locate any
person or object anywhere in the world and notify the user if that
person or object breaks out of a certain set of rules.

One of the first uses is for tracking parolees. The parolee would wear
a pager-size device that uses Global Positioning Satellite technology
to know where it is. Over wireless networks, the pager constantly
notifies CDR's system about its location. If the parolee leaves a
certain area or gets near a certain house, the CDR software will make
decisions about the severity of the violation and whom to
contact. That makes it more sophisticated than the electronic anklets
now used on some parolees.

CDR's technology sounds simple, but it can involve a number of fuzzy
choices. If a child being tracked goes just outside his limits, the
system might decide to wait to see if he comes right back in. And it
might decide whether to send you a light caution or a major warning -
or to call the police. "We are literally creating software that is
reactive and proactive," says Terry Sandrin, CDR's founder. "It has
the ability to make decisions."
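
The decision logic described for CDR reduces to a few graded geofence rules.
The sketch below is only an illustration of that shape; the distance math,
thresholds, grace period, and names are invented here, not taken from CDR.

    from dataclasses import dataclass

    @dataclass
    class Zone:
        center: tuple        # (latitude, longitude) in degrees
        radius_km: float

    def distance_km(a, b):
        # Crude flat-earth approximation; adequate over short distances.
        dlat = (a[0] - b[0]) * 111.0
        dlon = (a[1] - b[1]) * 111.0
        return (dlat ** 2 + dlon ** 2) ** 0.5

    def assess(position, allowed, forbidden, seconds_outside=0, grace_s=300):
        """Return an escalating action for the latest GPS fix."""
        if distance_km(position, forbidden.center) < forbidden.radius_km:
            return "call_police"                 # got near the prohibited location
        if distance_km(position, allowed.center) > allowed.radius_km:
            if seconds_outside < grace_s:
                return "wait"                    # just outside; see if they come right back
            return "major_warning" if seconds_outside > 2 * grace_s else "light_caution"
        return "ok"

A monitoring loop would call assess() on each position report and track how
long the violation has lasted before escalating.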

At AT&T Labs, scientist Peter Stone spends a lot of his time preparing
for Robocup, an annual robotic soccer challenge coming up in August.
This year, it will be in Seattle and will pit AI research labs against
one another. Rolling robots the size of pint milk cartons are armed
with sensors and AI software. Like real soccer players, each of the 11
robots on a team has to know its job but also must react to situations
and learn about the other team. At this point, the robots can pass the
ball a little but still mostly act on their own. Their capabilities
are improving quickly.

It seems frivolous, but getting AI-programmed robots to work as a team
to achieve something would have real-world implications. One would be
making the Internet more efficient. As Stone explains it, the Net is
made up of thousands of computerized routers all moving data around
but acting independently. If they could act as a team, they might
figure out better ways to transmit the data, avoiding clogged areas.

Aaron takes AI to the arts, which can be a little harder to
believe. But Aaron creates original work on a computer screen - quite
sophisticated work. Artist Harold Cohen taught the software his style
over 30 years, feeding in little by little the ways he decides color,
spacing, angles and every other aspect of painting.

After all that time, the program is finally ready, and computers are
powerful enough to make it work. While still in development, it won
fans such as computing legend Gordon Bell. Now, Kurzweil has licensed
it and plans to sell it for $19.95. Load it on a PC and let the artist
loose.

"There have been various experiments with having machines be an
artist, but nothing of this depth," Kurzweil says. "Cohen has created
a system that has a particular style but quite a bit of diversity - a
style you'd expect of a human artist."

Other uses of AI range from the amazing to the mundane.

Computer as companion

At Microsoft, Horvitz is trying to make your computer more of a
companion than an inanimate tool. His software lets the computer learn
about you. It learns who is important to you and who's not. It learns
how to tell if you're busy - maybe by how much you type, or by using a
video camera to see if you're staring at the computer screen or
putting golf balls across the carpet.

It can combine that information to help manage your workload. If an
e-mail comes in from someone very important, the computer will always
put it through. If it's from someone not so important and you're busy,
it can save the e-mail for later.

The software can do that with all kinds of information, including
phone calls coming in and going out of your office. The thinking at
Microsoft is that these capabilities might someday be a part of every
computer's operating system.
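
The triage rule in the two paragraphs above amounts to a couple of
conditions.  A toy version, with the learned sender score and the busy
signal as invented placeholders:

    def triage(importance, busy):
        """importance: learned 0.0-1.0 score for the sender;
        busy: boolean inferred from typing rate, camera, calendar, etc."""
        if importance >= 0.8:
            return "deliver_now"        # very important senders always get through
        if busy:
            return "hold_for_later"     # defer the rest while the user is busy
        return "deliver_now"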

Schwab's AI implementation seems less grand but no less helpful. It's
using AI technology from iPhrase that can comprehend a typed
sentence. More than just looking for key words, it can figure out what
you really mean, even if you make spelling mistakes. So you could
type, "Which of these has the most revenue?" and get the answer you
were looking for. Based on the page you have up, it would know what
you mean by "these." On Schwab's Web site, www.schwab.com, this is
supposed to help users find information.

Beyond all the near-term uses of AI, there's the nearly unfathomable
stuff.

The trends that brought AI from the failures of the mid-1980s to
breakthrough success 15 years later will continue. Computers will get
more powerful. Software will get more clever. AI will creep closer
toward human capabilities.

If you want a glimpse of where this is heading, look inside MIT's AI
lab.  Among the dozens of projects there is Cog. The project is trying
to give a robot humanlike behaviors, one piece at a time. One part of
Cog research is focused on eye movement and face detection. Another is
to get Cog to reach out and grab something it sees. Another involves
hearing a rhythm and learning to repeat it on drums.

A brain like a cat's

In Belgium, Starlab is attempting to build an artificial brain that
can run a life-size cat. It will have about 75 million artificial
neurons, Web site Artificialbrains.com reports. It will be able to
walk and play with a ball. It's supposed to be finished in 2002.

Labs all over the globe are working on advanced, brainlike AI. That
includes labs at Carnegie Mellon University, IBM and Honda in Japan.
"We're getting a better understanding of human intelligence," Kurzweil
says. "We're reverse-engineering the brain. We're a lot further along
than people think."

But can AI actually get close to human capability? Most scientists
believe it's only a matter of time. Kurzweil says it could come as
early as 2020. IBM's Horn says it's more like 2040 or 2050. AT&T's
Stone says his goal is to build a robotic soccer team that can
challenge a professional human soccer team by 2050. He's serious.

In many ways, an artificial brain would be better than a human
brain. A human brain learns slowly. Becoming fluent in French can take
years of study. But once one artificial brain learns to speak French,
the French-speaking software code could be copied and instantly
downloaded into any other artificial brain. A robot could learn French
in seconds.

A tougher question is whether artificial intelligence could have
emotions.  No one knows.

And a frightening question is whether AI robots could get smarter than
humans and turn the tables on us. Kurzweil, technologist Bill Joy and
others have been saying that's possible. Horn isn't so sure. Though
raw computing power might surpass the brain, he says, "that doesn't
mean it will have any of the characteristics of a human being, because
the software isn't there to do that."

Horvitz has a brighter outlook, which at least makes the AI discussion
more palatable. He says humans are always getting better at guiding
and managing computers, so we'll stay in control. "Most of us (in AI)
believe this will make the world a better place," he says. "A lot of
goodness will come of it."



--
MC:TT&P --> http://www.datafilter.com/mc
Allen Barker

 

