
Category Archives: Artificial Intelligence


This September, you can have a cute robotic girlfriend that kisses on command and runs on batteries for only $175. The only problem is she’s 15 inches tall and doesn’t even remotely pass for human.

Sega is releasing EMA (Eternal Maiden Actualization), an interactive robot that can sing, dance, blow kisses, and pass out business cards. Equipped with infrared sensors, EMA puckers up for a kiss when she senses a human head nearby, entering what her designers call “love mode”.

“Strong, tough and battle-ready are some of the words often associated with robots, but we wanted to break that stereotype and provide a robot that’s sweet and interactive,” said Minako Sakanoue, a spokeswoman for Sega Toys. “She’s very lovable and though she’s not a human, she can act like a real girlfriend.”

Of course, the best robot-girlfriend that money can buy is still the Japanese Honeydoll.

Japan is home to almost half the world’s 800,000 industrial robots, and predicts a $10-billion market for artificial intelligence in a decade.

[Reuters, via NZ Herald]


Asimo’s conversational skills just improved– he can now understand three different humans shouting at the same time.

Asimo’s new ability is still in the early stages– at present it’s only being used to judge rock-paper-scissors contests, with three human players calling out their choices at once. But the number of voices and the complexity of the sentences the software can deal with should grow in future.

Hiroshi Okuno at Kyoto University and Kazuhiro Nakadai at the Honda Research Institute in Saitama, both in Japan, designed the new software, which they call HARK. HARK uses an array of eight microphones to work out where each voice is coming from and to isolate it from other sound sources, then works out how reliably it has extracted each individual voice before passing it on to speech-recognition software to decode. That quality-control step is important: the other voices are likely to confuse speech-recognition software, so any parts of the sound file that contain a lot of background noise across a range of frequencies are automatically ignored when the patched-up recording of each voice is passed on.

The HARK system actually goes beyond normal human listening capabilities, Okuno told New Scientist. “It can listen to several things at once, and not just focus on a particular single sound source.”

Although the HARK software can’t comprehend 10 voices at once yet, Okuno and Nakadai say it can follow three players calling simultaneously at 70 to 80% accuracy when installed into Honda’s Asimo robot.
The array of eight microphones is placed around Asimo’s face and body, which helps it to accurately detect and isolate simultaneous voices. “The number of sound sources and their directions are not given to the system in advance,” says Nakadai.
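
For the technically curious, here is a toy sketch in Python of that quality-control step. This is not HARK’s actual code, just an illustration of the general idea of masking unreliable time-frequency bins before recognition; the spectrogram data and the 6 dB threshold are made up.

```python
import numpy as np

def reliability_mask(source_spec, residual_spec, snr_threshold_db=6.0):
    """Flag time-frequency bins where the separated voice stands out
    from the leftover background by at least snr_threshold_db."""
    snr_db = 10 * np.log10(source_spec**2 / (residual_spec**2 + 1e-12) + 1e-12)
    return snr_db >= snr_threshold_db

# Toy spectrograms: rows are frequency bins, columns are time frames.
rng = np.random.default_rng(0)
voice = rng.uniform(0.5, 1.0, size=(64, 100))       # one separated speaker
background = rng.uniform(0.0, 1.0, size=(64, 100))  # competing voices + noise

mask = reliability_mask(voice, background)

# Only reliable bins reach the recognizer; the rest are marked missing
# so heavy background noise cannot confuse the acoustic model.
features_for_asr = np.where(mask, voice, np.nan)
print(f"{mask.mean():.0%} of bins judged reliable")
```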

Guy Brown at the University of Sheffield, UK, is impressed with the work, although he points out that it is largely built from existing elements used to process sound, such as getting an array of microphones to localise a sound, and using automated software to block out difficult-to-interpret parts of a voice recording.

“The main achievement has been to embed this technology in a robot and to get it all working in a real-time, interactive manner,” Brown says.

Rock-paper-scissors uses a small vocabulary, making the task easier. “Clearly there’s a long way to go before we can match the performance of human listeners in ‘cocktail party’ situations,” he says. In fact, when Okuno and Nakadai tried using their software to follow several complicated sentences at once, as three people shouted out restaurant orders, it could only identify 30 to 40% of what was said.

Alexander Gutschalk in Germany has just conducted one of the first studies of brain activity when dealing with the cocktail party effect, and says future collaboration between neuroscientists and roboticists could make robots better party conversationalists.

You know what that means– the wise-cracking robot butler we saw in Rocky 4 might be just around the corner.


[New Scientist]

Facial recognition software is still in its infancy, since movement, lighting, and facial expression all interfere with a computer’s ability to recognize a face. But researchers at Carnegie Mellon University are working on software that creates 3-D face meshes that adjust to fit the subject and identify them, even if their face is partially obscured or on the move.
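
To give a flavor of how fitting a template to a partially hidden face can work (this is only a toy illustration in Python, not CMU’s actual method), the sketch below fits a transform using only the landmarks that are visible, then predicts where the occluded ones should be.

```python
import numpy as np

def fit_affine(template, observed, visible):
    """Least-squares affine transform mapping template landmarks onto
    the observed ones, using only the landmarks currently visible."""
    T = template[visible]                      # (k, 2) visible template points
    A = np.hstack([T, np.ones((len(T), 1))])   # extra column for translation
    params, *_ = np.linalg.lstsq(A, observed[visible], rcond=None)
    full = np.hstack([template, np.ones((len(template), 1))])
    return full @ params                       # predictions for all landmarks

# Toy example: 5 template landmarks; two are occluded in the observation.
template = np.array([[0, 0], [2, 0], [1, 1], [0, 2], [2, 2]], float)
observed = template * 1.2 + np.array([3.0, 4.0])  # subject moved and scaled
visible = np.array([True, True, True, False, False])

fitted = fit_affine(template, observed, visible)
print(np.round(fitted, 2))  # occluded landmarks are filled in by the model
```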

Unfortunately, this means it just got harder to ditch your robot ex-girlfriend.

[io9.com]

A bar-owner in Atlanta saw Robocop a few too many times, and invented a “Bum Bot” to annoy homeless people and force them to leave the area. The Bum Bot doesn’t look much like Peter Weller or a Terminator though– it looks kind of like that thing Capt. Pike would tool around in on Star Trek.

And now, here’s Stephen Colbert’s full investigative report on “Difference Makers: The Bum Bot”.

[O’Terrill’s Bar — Home of the Bum Bot]


Steven Spielberg and Dreamworks have announced plans to make a live action 3-D version of Ghost in the Shell. The news broke Monday, but I’ve been trying to sort out how I feel about it–and I think I’m excited. While Spielberg’s AI was flawed, it was the most spectacular view we’ve had of androids in American cinema in the past decade, and it’s time we get more cyborgs and androids on the big screen.

Spider-Man producer Avi Arad is also attached to the project, and the script is being written by Jamie Moss, a co-writer on the dirty cop drama Street Kings–which I enjoyed tremendously. Let’s hope this team can give the Major her due in 3-D.

io9.com has a robo-tastic new article about Japan’s plan to make robots an integrated part of everyday life. To compensate for the shortage of young workers willing to do menial tasks, the Japan Robot Association, the government, and several technology institutions drafted a formal plan to create a society in which robots live side by side with humans by the year 2010.

Takayuki Furuta, the director of the Future of Robotics Technology Center in Chiba, said that the country is on track to reach this goal, and that a primary goal of the collaboration is to establish international standards for humanoid robot software and hardware—in a similar manner to how techies determined what nuts and bolts and basic programs would comprise a standard computer so many years ago. Phase 1 (planning) and phase 2 (hardware) are complete as of March 2008; phase 3 (software) starts this month. “We’re going to be the first country in the world with an official robotics ministry,” he says.

In the US, he explains, there’s a strong emphasis on developing software, like artificial intelligence and programs for military tools and weapons. But Japan doesn’t have a military, so robotics research ends up going into applications for everyday life. And since Japan is a densely populated country with small living quarters, developing compact hardware for utilitarian humanoids becomes infinitely more important.

The initiative doesn’t end in 2010, but that’s the benchmark year by which they plan to have robots doing janitorial work, security, child care and client liaison work, and intelligent wheelchairs operating nationwide. Roboduties will expand to everything else—driving cars, cooking dinner, producing TV shows, marrying humans—by 2020.

Read the full article at io9.com.

Wow, wow, wow. Cynthia Breazeal and the team at MIT’s Media Lab have done it again with NEXI, a mobile, dexterous, social robot that displays emotions.

Click on this link, and allow Nexi to tell you more about herself.

Via Suicide Bots.

Part of the appeal of robots is making them do things that we don’t want to do or can’t do for ourselves– and now researchers at Toshiba have developed a small, mega-cute talking robot that can learn how to use our remote controls.

ApriPoco is an 8.4-inch-tall robot equipped with sensors that detect the infrared signals from remote controls. It responds to verbal commands by sending out its own infrared signal, and can learn a range of instructions.
While users might get upset if a conventional machine makes a mistake, the researchers hope that the robot’s child-like appeal will make people more patient and willing to help it learn.
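
The learning loop itself is conceptually simple. Here is a toy sketch of it in Python; all of the names here are hypothetical, since Toshiba hasn’t published ApriPoco’s software. The robot pairs a spoken phrase with the infrared code it just sensed, then replays that code whenever it hears the phrase again.

```python
# Hypothetical sketch of an ApriPoco-style learning loop, not Toshiba's code.
class RemoteLearner:
    def __init__(self):
        self.commands = {}  # spoken phrase -> captured infrared code

    def observe(self, phrase: str, ir_code: bytes) -> None:
        """User presses a remote button while telling the robot what it does."""
        self.commands[phrase] = ir_code

    def execute(self, phrase: str) -> bytes | None:
        """On a known voice command, return the stored code for replay."""
        return self.commands.get(phrase)

robot = RemoteLearner()
robot.observe("turn up the volume", ir_code=b"\x20\xdf\x40\xbf")
print(robot.execute("turn up the volume"))  # replays the learned code
print(robot.execute("mute"))  # None: the robot would ask to be taught
```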

Such interaction has proved to work well in trials, particularly with seniors– people who need the most help with the growing array of remote controls in our lives.

Toshiba hopes to develop the robot for a commercial launch but has not yet decided when it might go on sale.

See video of ApriPoco at work below:


Ray Kurzweil is sixty years old, but he believes The Singularity is near– and that he might live long enough to see it. WIRED interviewed Kurzweil about the extraordinary measures he is taking to prolong his life long enough to transfer his consciousness into a machine.

In addition to guarding his health, Kurzweil is writing and producing an autobiographical movie, with cameos from Alan Dershowitz and Tony Robbins. Kurzweil appears in two guises in the film– as himself and as an intelligent computer named Ramona, played by an actress. Ramona has long been the inventor’s virtual alter ego and the expression of his most personal goals. “Women are more interesting than men,” Kurzweil says, “and if it’s more interesting to be with a woman, it is probably more interesting to be a woman.”

He hopes one day to bring Ramona to life, and to have genuine human experiences, both with her and as her. “I don’t necessarily only want to be Ramona,” he says. “It’s not necessarily about gender confusion, it’s just about freedom to express yourself.”

Kurzweil’s movie offers a taste of the drama such a future will bring. Ramona is on a quest to attain full legal rights as a person. She agrees to take a Turing test, the classic proof of artificial intelligence, but although Ramona does her best to masquerade as human, she falls victim to one of the test’s subtle flaws: Humans have limited intelligence. A computer that appears too smart will fail just as definitively as one that seems too dumb. “She loses because she is too clever!” Kurzweil says.

The inventor’s sympathy with his robot heroine is heartfelt. “If you’re just very good at doing mathematical theorems and making stock market investments, you’re not going to pass the Turing test,” Kurzweil acknowledged in 2006 during a public debate with noted computer scientist David Gelernter. Kurzweil himself is brilliant at math, and pretty good at stock market investments. The great benefits of the singularity, for him, do not lie here. “Human emotion is really the cutting edge of human intelligence,” he says. “Being funny, expressing a loving sentiment — these are very complex behaviors.”



An elderly man committed suicide by programming a robot to shoot him in the head after building the machine from plans downloaded from the internet.

Francis Tovey, 81, who lived alone in Burleigh Heads on the Australian Gold Coast, was found dead in his driveway. According to the Gold Coast Bulletin, he had been unhappy about the demands of relatives living elsewhere in Australia that he should move out of his home and into care.

Notes left by Tovey revealed that he had scoured the internet for plans before constructing his complex machine, which involved a jigsaw power tool and was connected to a .22 semi-automatic pistol loaded with four bullets. It could fire multiple shots once triggered remotely. His notes suggested that Tovey chose to kill himself in the driveway because he knew there were workmen building a new house next door who would find his body.

The scheme worked, as carpenter Daniel Skewes heard gunshots and ran to Mr Tovey’s home. “I thought I heard three shots and when we ran next door he was lying on the driveway with gunshot wounds to the head,” Mr Skewes told the GCB.

A neighbour, who did not want to be named, told the newspaper that Mr Tovey had lived at his home on Gabrielle Grove since 1984. “He was a really marvellous man, an ideal neighbour and I will miss him greatly,” she said. “He was born in England, like I was, and we used to enjoy our tea together. He had visitors from England and family interstate from somewhere far away in Australia.

“There was no inkling of anything amiss, it is just very sad.”

We’ll never know the true extent of someone else’s secret pain.

From Times Online, UK edition.


A friendly dog can make older people feel less isolated — and it appears to make little difference if it’s a robot pup or the real thing.

Researchers at Saint Louis University in Missouri compared a 35-pound floppy-eared mutt named Sparky with Sony’s AIBO to see how residents of three U.S. nursing homes would respond.

“The most surprising thing is [the robot dog] worked almost equally well in terms of alleviating loneliness and causing residents to form attachments,” said Dr. William Banks, a professor of geriatric medicine who worked on the study reported in the Journal of the American Medical Directors Association.

The researchers studied 38 nursing home residents who were divided into three groups. One got regular visits from Banks’ pet Sparky, another got visits from the AIBO Entertainment Robot, and the third got no visits from either dog.

Banks said he had been sure Sparky would have the edge, but to his surprise, both dogs provided virtually equal comfort after seven weeks of visits.

While AIBO has been discontinued, Banks thinks similar robots could offer companionship for older people and might even be programmed to keep tabs on their owners, alerting emergency workers of a sudden fall.

“Loneliness is common in nursing homes. Robots may be very useful for people who cannot for whatever reason have access to a living dog,” Banks said. Many senior citizens are too frail to care for a pet or have had to give up their own animals when they went to the nursing home.

As technology improves, robots of all types are sure to be a popular remedy for human loneliness. But isn’t there something even more isolating about being able to shut off something you love once it becomes inconvenient?

[Reuters]


Asimov’s Three Laws of Robotics seem like the perfect guidelines for robot interaction with humans– but what about robots in combat? How do we program ethical behavior into a robot designed to engage in combat with a human? With our increasing reliance on unmanned aerial vehicles and iRobot’s surveillance and bomb-disarming bots, the question may not be so speculative in the years to come.

Researcher Ronald Arkin at the Georgia Institute of Technology’s Mobile Robot Laboratory has grappled with the issue in a new paper, “Governing Lethal Behavior: Embedding Ethics in a Hybrid Deliberative/Reactive Robot Architecture”.

Arkin reviewed the laws of combat through the ages, noting that in human-to-human combat, it is not proper to attack civilians or even soldiers who have laid down their weapons in surrender. But Arkin also formulated a machine-ready algorithm for ethical behavior, a tricky undertaking given that even humans find all-too-many potential actions in combat ethically murky at best.

Arkin’s approach was to first “describe the set of all possible behaviors capable of generating a discrete lethal response…that an autonomous robot can undertake.”

Then he formulated a set of ethical constraints — based on the Geneva Conventions and other largely agreed-upon ethical norms for war — and applied them to this set of lethal behaviors. The resulting set of ethically lethal actions could then be implemented through a number of architectures and even a pseudocode that Arkin offered.
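
Arkin’s paper offers its own formal pseudocode; the Python below is only a schematic reconstruction of that filtering step, with invented behavior and constraint names, to show the shape of the idea: start from the candidate lethal behaviors and keep only those that every ethical constraint permits.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Behavior:
    name: str
    target_is_combatant: bool
    target_has_surrendered: bool

# Each constraint is a predicate a permissible behavior must satisfy.
CONSTRAINTS = [
    lambda b: b.target_is_combatant,         # no attacks on civilians
    lambda b: not b.target_has_surrendered,  # no attacks on the surrendered
]

def ethically_permitted(candidates):
    """Filter the behavior set down to those every constraint allows."""
    return [b for b in candidates if all(c(b) for c in CONSTRAINTS)]

candidates = [
    Behavior("return fire", target_is_combatant=True, target_has_surrendered=False),
    Behavior("fire on crowd", target_is_combatant=False, target_has_surrendered=False),
]
for b in ethically_permitted(candidates):
    print(b.name)  # only "return fire" survives the filter
```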

This is, of course, a much-simplified and abstract approach, and it took Arkin nearly 100 pages to formalize such inherently loose concepts as return fire, ambush and other tried-and-true military tactics. Arkin said his work is only beginning, but he is optimistic about future developments.

In the words of James Cameron — “If a machine can learn the value of human life, maybe we can too.”

[GCN Insider]


Famed Artificial Intelligence expert Ray Kurzweil says, “I’ve made the case that we will have both the hardware and the software to achieve human level artificial intelligence with the broad suppleness of human intelligence including our emotional intelligence by 2029.”

Kurzweil also predicts that humans and machines will eventually merge by means of devices embedded in people’s bodies to keep them healthy and improve their intelligence.

“We’ll have intelligent nanobots go into our brains through the capillaries and interact directly with our biological neurons.” The nanobots, he said, would “make us smarter, remember things better and automatically go into full emergent virtual reality environments through the nervous system”.

Kurzweil is one of 18 influential thinkers chosen to identify the great technological challenges facing humanity in the 21st century by the US National Academy of Engineering. The experts include Google founder Larry Page and genome pioneer Dr Craig Venter.

The challenges were announced at the annual meeting of the American Association for the Advancement of Science in Boston, which concludes today.

The 14 challenges facing humanity are:

– Make solar energy affordable
– Provide energy from fusion
– Develop carbon sequestration
– Manage the nitrogen cycle
– Provide access to clean water
– Reverse engineer the brain
– Prevent nuclear terror
– Secure cyberspace
– Enhance virtual reality
– Improve urban infrastructure
– Advance health informatics
– Engineer better medicines
– Advance personalised learning
– Explore natural frontiers

From BBC News.


David Levy continues to titillate the media and grab headlines with his book Love and Sex With Robots.

Levy predicts that sex with robots will be possible in the next five years, and that in the near future there will be a demand for androids with personality programming sophisticated enough for people to develop real relationships with them and eventually fall in love.

But some experts argue that Levy’s ideas are far-fetched. Frederic Kaplan, who programmed Aibo’s robo-brain, wonders whether we even want robots made in our own image. “Human-machine interactions will be interesting in their own right, not as ‘simulation’ of human relations.”

What Kaplan doesn’t realize is that there will be plenty of demand for both kinds of robots.

A company in Japan, Axis, has already produced the world’s first rudimentary sexbot called Honeydolls–check out my investigation of their uncanny valleys here.

Meanwhile in the US, the Real Doll is proving to be quite popular, as well as the Cyborgasmatrix dolls.

New York-based sexologist Yvonne K. Fulbright acknowledges that sexbots will probably find a niche market, especially with men seeking to fulfill fantasies their flesh-and-blood partners might be refusing. “But there will be a real stigma attached to sex robots. People are still going to feel like losers if that is their last resort,” she said.

I’ve said before that there will be a definite stigma attached to having sex with robots, but that won’t keep certain lonelyhearts from falling in love with their robot-mates. If you want proof, check out the pitiable lives of these Real Doll owners in the documentary Guys and Dolls.

Sylvain Calinon and his colleagues at the Swiss Federal Institute of Technology in Lausanne have created software that allows them to teach a humanoid robot new tasks – such as how to move chess pieces – simply by guiding its limbs through the necessary motions.

“We are taking insights from studies in developmental psychology and humans’ capabilities at transferring skills,” says Calinon. In humans, parents “mould” children when teaching them how to hold a pen, for example. Calinon hopes the software will allow consumers to teach domestic robots to do novel chores, something other roboticists agree would be useful.

“Rather than having a fixed behaviour repertoire, a [domestic] robot needs to learn from its owners and adapt to new conditions,” says Kerstin Dautenhahn, an expert in artificial intelligence at the University of Hertfordshire in the UK.

The 60-centimetre-tall robot, called HOAP-3, is made by Fujitsu. HOAP-3 is programmed with the dimensions of its arms and torso and the angles through which its joints can move, and uses this information when figuring out exactly how to adapt its movements to suit a new situation. When it is physically guided through chess moves, for example, it constructs a 3D model of the environment which includes the position its hand must be in to pick up the chess piece. It can even decide to try new motions, such as bending at the waist, to reach some pieces. “That’s a function the robot generates – we haven’t taught it that,” says Calinon.
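
Calinon’s actual system models the demonstrations statistically; the Python below is a deliberately simpler sketch of the same record-and-generalize idea, with invented trajectory data. Several guided demonstrations are recorded as joint trajectories, time-normalized to a common length, and averaged into one reference motion, with the per-step variance kept as a measure of how strictly the robot should track it.

```python
import numpy as np

def resample(traj, n=50):
    """Time-normalize one demonstration to n samples per joint."""
    t_old = np.linspace(0, 1, len(traj))
    t_new = np.linspace(0, 1, n)
    return np.column_stack([np.interp(t_new, t_old, traj[:, j])
                            for j in range(traj.shape[1])])

def learn(demos):
    """Average guided demonstrations into one reference motion, keeping
    per-step variance as a measure of how strictly to reproduce it."""
    stacked = np.stack([resample(d) for d in demos])
    return stacked.mean(axis=0), stacked.var(axis=0)

# Toy data: three hand-guided demos of a 2-joint reach, varying in length.
rng = np.random.default_rng(1)
demos = [np.cumsum(rng.normal(0.01, 0.002, size=(n, 2)), axis=0)
         for n in (45, 50, 55)]

mean_traj, variance = learn(demos)
print(mean_traj.shape)  # (50, 2): the motion the robot will reproduce
```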

From New Scientist (subscription required to read full article)


Dr. Caroline West, a senior lecturer in philosophy at Sydney University, says we should already be thinking about what will happen when humanoids develop the ability to reason and integrate into society. If humanoids become as intelligent and capable of feeling as humans, should they be given the same rights? The question cuts to the heart of what a “person” is.

“It could happen tomorrow, it could happen in 50 years, it could happen in 100 years,” says Professor Mary-Anne Williams, head of the innovation and research lab at the University of Technology, Sydney. “People and animals are just chemical bags, chemical systems, so there’s no technical reason why we couldn’t have robots that truly have AI.”

Professor Williams believes a unique form of robotic emotion could even evolve one day. “You could argue some robots can mimic (emotions) already,” she says. “But because a robot will experience the world differently to us it will be quite an effort for the robot to imagine how we feel about something.”

“One of the things we’ll want robots to do is communicate. But in order to have a conversation you need the capability to build a mental model of the person you’re communicating with. And if you can model other people or other systems’ cognitive abilities then you can deceive.”

Humans generally anticipate how another person might feel about something by thinking about how it would affect them. People who don’t have the ability to empathize can become psychopaths.

“I think there is a danger of producing robots that are psychopathic,” Prof Williams says.

Of course, Isaac Asimov formulated the Three Laws to try to prevent robots from harming humans, but Professor Williams says this is easier said than done– especially when there are robots already trained to kill on the battlefield in Iraq.

“You need a lot of cognitive capability to determine harm if you’re in a different kind of body. What will we do when we have to deal with entities … who have perceptions beyond our own and can reason as well as we can, or potentially better?”

Dr. Caroline West says, “If something is a person then it has serious rights, and what it takes to be a person is to be self-conscious and able to reason. If silicon-based creatures get to have those abilities then they would have the same moral standing as persons. Just as we think it’s not okay to enslave persons, so it would be wrong to enslave these robots if they really were self-conscious.”

Via TechNewsWorld.

Toyota is targeting the early 2010s for the development of a viable human-assistance robot, and unveiled its two latest robot creations on Thursday– one of them plays the violin.

The Violin-Playing Robot played “Pomp and Circumstance” at Toyota’s press conference in Tokyo on Thursday. AP News said of the performance, “Compared to a virtuoso, its rendition was a trifle stilted and, well, robotic.” But the demonstration wasn’t meant to display musical mastery; it was meant to show the robot’s command of fine movements in its joints and fingers– which proved precise enough to produce a tremolo effect.


The Mobiro is a motorized wheelchair that can cope with uneven surfaces and turn on the spot. The rider steers Mobiro with controls in the armrest, but the chair can also be summoned from afar by remote control, avoiding obstacles on its way to its owner. Mobiro has a 20-kilometer range, and field tests will begin in the second half of 2008.

Toyota also showed off Robina, a robot that was unveiled earlier this year. Robina is designed for face-to-face communication with humans, and served as a guide at the Toyota Kaikan Exhibition Hall in Toyota City in August. She navigates through obstacles automatically, and is capable of signing autographs for her adoring fans.

While robot tricks like playing the violin and signing autographs are cute for now, Toyota’s end goal is for robots like Robina to be used in hospitals and at home. Toyota President Katsuaki Watanabe said robotics will be a core business for the company in coming years, predicting high demand from Japan’s rapidly aging society. Toyota will test out its robots at hospitals, Toyota-related facilities and other places starting next year.

The company hopes to put what it calls “partner robots” to real use by 2010.

See more pics and video at Akihabara News.

Via PC World and AP News. Hi-Res photos from Akihabara News.

At $75,000, I have to admit this guy’s just a little bit out of my price range (I’m the kind of girl that bought I-Cybie instead of Aibo). Neiman Marcus is selling a robotic Swami head that can recognize faces and carry on conversations. The Swami has an impressive character engine that runs off a PC (which I assume you need to hide behind a red curtain to maintain the illusion), and his head has more than 30 robotic micro motors and microcamera eyes. No word on who is responsible for this creation, but I’m assuming it’s a Hanson Robotics special.

via Techdigest.tv


Instead of following the standard neural network model, University of Sussex Centre for Computational Neuroscience and Robotics research student Rachel Wood has designed a robot brain with a homeostatic network that exhibits the A-not-B error, a developmental mistake characteristic of babies between 7 and 12 months old.

The idea is that if AI can make the same developmental cognitive mistakes that we do as humans, it could be a critical step towards advanced AI– robots need to learn stability before they can achieve mental flexibility and adapt.

Meanwhile at University College London, researchers have created a program that sees the same optical illusions as humans.

The only question now is whether we want machines to emulate all of our flaws or try to improve upon them.

via New Scientist (subscription required to view full article)