
Category Archives: Robot Rights

A bar-owner in Atlanta saw Robocop a few too many times, and invented a “Bum Bot” to annoy homeless people and force them to leave the area. The Bum Bot doesn’t look much like Peter Weller or a Terminator though– it looks kind of like that thing Capt. Pike would tool around in on Star Trek.

And now, here’s Stephen Colbert’s full investigative report on “Difference Makers: The Bum Bot”.

[O’Terrill’s Bar — Home of the Bum Bot]

io9.com has a robo-tastic new article about Japan’s plan to make robots an integrated part of everyday life. To compensate for the shortage of young workers willing to do menial tasks, the Japan Robot Association, the government, and several technology institutions drafted a formal plan to create a society in which robots live side by side with humans by the year 2010.

Takayuki Furuta, the director of the Future of Robotics Technology Center in Chiba, said that the country is on track to reach this goal, and that a primary goal of the collaboration is to establish international standards for humanoid robot software and hardware—in a similar manner to how techies determined what nuts and bolts and basic programs would comprise a standard computer so many years ago. Phase 1 (planning) and phase 2 (hardware) are complete as of March 2008; phase 3 (software) starts this month. “We’re going to be the first country in the world with an official robotics ministry,” he says.

In the US, he explains, there’s a strong emphasis on developing software, like artificial intelligence and programs for military tools and weapons. But Japan doesn’t have a military, so robotics research ends up going into applications for everyday life. And since Japan is a densely populated country with small living quarters, developing compact hardware for utilitarian humanoids becomes infinitely more important.

The initiative doesn’t end in 2010, but that’s the benchmark year by which they plan to have robots handling janitorial work, security, child care and client liaison work, along with intelligent wheelchairs, nationwide. Roboduties will expand to everything else—driving cars, cooking dinner, producing TV shows, marrying humans—by 2020.

Read the full article at io9.com.

[Image: Ray Kurzweil]

Ray Kurzweil is sixty years old, but he believes The Singularity is near– and that he might live long enough to see it. WIRED interviewed Kurzweil about the extraordinary measures he is taking to prolong his life long enough to transfer his consciousness into a machine.

In addition to guarding his health, Kurzweil is writing and producing an autobiographical movie, with cameos from Alan Dershowitz and Tony Robbins. Kurzweil appears in two guises in the film– as himself and as an intelligent computer named Ramona, played by an actress. Ramona has long been the inventor’s virtual alter ego and the expression of his most personal goals. “Women are more interesting than men,” Kurzweil says, “and if it’s more interesting to be with a woman, it is probably more interesting to be a woman.”

He hopes one day to bring Ramona to life, and to have genuine human experiences, both with her and as her. “I don’t necessarily only want to be Ramona,” he says. “It’s not necessarily about gender confusion, it’s just about freedom to express yourself.”

Kurzweil’s movie offers a taste of the drama such a future will bring. Ramona is on a quest to attain full legal rights as a person. She agrees to take a Turing test, the classic proof of artificial intelligence, but although Ramona does her best to masquerade as human, she falls victim to one of the test’s subtle flaws: Humans have limited intelligence. A computer that appears too smart will fail just as definitively as one that seems too dumb. “She loses because she is too clever!” Kurzweil says.
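The “too clever” failure is easy to make concrete. Here’s a toy sketch of my own– nothing from the film or from Kurzweil, with every name and threshold invented– of a judge that only accepts a respondent as human when its answers fall inside a plausibly human band, so answering too fast and too well fails just like answering too slowly and too badly.

```python
# Toy sketch only: a hypothetical Turing-test judge that treats superhuman
# performance as just as suspicious as subhuman performance. All names and
# thresholds here are invented for illustration.
from dataclasses import dataclass

@dataclass
class Response:
    seconds_to_answer: float  # how long the respondent took per question
    accuracy: float           # fraction of hard questions answered correctly

# Made-up bounds on what this judge considers "human-like".
HUMAN_TIME_RANGE = (2.0, 60.0)      # seconds per answer
HUMAN_ACCURACY_RANGE = (0.3, 0.9)   # humans miss some hard questions, but not all

def judged_human(r: Response) -> bool:
    """Classify as human only if the response falls inside the human band."""
    human_speed = HUMAN_TIME_RANGE[0] <= r.seconds_to_answer <= HUMAN_TIME_RANGE[1]
    human_accuracy = HUMAN_ACCURACY_RANGE[0] <= r.accuracy <= HUMAN_ACCURACY_RANGE[1]
    return human_speed and human_accuracy

# Ramona's problem: answering too fast and too well reads as "machine".
print(judged_human(Response(seconds_to_answer=0.1, accuracy=1.0)))   # False: too clever
print(judged_human(Response(seconds_to_answer=15.0, accuracy=0.6)))  # True: passably human
```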

The inventor’s sympathy with his robot heroine is heartfelt. “If you’re just very good at doing mathematical theorems and making stock market investments, you’re not going to pass the Turing test,” Kurzweil acknowledged in 2006 during a public debate with noted computer scientist David Gelernter. Kurzweil himself is brilliant at math, and pretty good at stock market investments. The great benefits of the singularity, for him, do not lie here. “Human emotion is really the cutting edge of human intelligence,” he says. “Being funny, expressing a loving sentiment — these are very complex behaviors.”

[Image: Ramona]

[Image: Terminator]

An elderly man committed suicide by programming a robot to shoot him in the head, after building the machine from plans he had downloaded from the internet.

Francis Tovey, 81, who lived alone in Burleigh Heads on the Australian Gold Coast, was found dead in his driveway. According to the Gold Coast Bulletin, he had been unhappy about the demands of relatives living elsewhere in Australia that he should move out of his home and into care.

Notes left by Tovey revealed that he had scoured the internet for plans before constructing his complex machine, which involved a jigsaw power tool and was connected to a .22 semi-automatic pistol loaded with four bullets. It could fire multiple shots once triggered remotely. His notes suggested that Tovey chose to kill himself in the driveway because he knew there were workmen building a new house next door who would find his body.

The scheme worked, as carpenter Daniel Skewes heard gunshots and ran to Mr Tovey’s home. “I thought I heard three shots and when we ran next door he was lying on the driveway with gunshot wounds to the head,” Mr Skewes told the GCB.

A neighbour, who did not want to be named, told the newspaper that Mr Tovey had lived at his home on Gabrielle Grove since 1984. “He was a really marvellous man, an ideal neighbour and I will miss him greatly,” she said. “He was born in England, like I was, and we used to enjoy our tea together. He had visitors from England and family interstate from somewhere far away in Australia.

“There was no inkling of anything amiss, it is just very sad.”

We’ll never know the true extent of someone else’s secret pain.

From Times Online, UK edition.

[Image: artwork by Itsuki Takashi]
I personally find the idea of robot abuse distasteful, but people are already developing a fetish around images of injured and maimed androids. My friend directed me to “Amputee Robot Girl Bondage” by Itsuki Takashi — the name is pretty self-explanatory. While destroyed ‘bots aren’t my cup of tea, I really liked some of Takashi’s other artwork.

This weekend at WizardCon I met Jeffrey Scott, a photo-manipulation artist with a strong interest in steampunk-style robotics. He’s best known for his art book Visions From Within The Mechanism, previously featured here at Nymphblog.

When I told Jeffrey Scott that we had a shared interest in robotics, he discussed his new work aristocracy 2032: mistress linn unplugged from securities for the sake of unconditional love. He said that the photo was meant as a commentary on robot rights, and predicted that legislation would eventually be necessary to ensure that humans didn’t mistreat their android companions. I countered that perhaps it was better that humans work out their violent tendencies on machines that could have their memories wiped soon afterward and be programmed not to feel– at which point the conversation digressed into a discussion about the existence of a soul.

For now, we can enjoy the artwork and think about the consequences later.

[Image: Jeffrey Scott’s aristocracy 2032: mistress linn unplugged from securities for the sake of unconditional love]

[Image: I, Robot, “Runaround”]

Asimov’s Three Laws of Robotics seem like the perfect guidelines for robot interaction with humans– but what about robots in combat? How do we program ethical behavior into a robot designed to engage in combat with a human? With our increasing reliance on unmanned aerial vehicles and iRobot’s surveillance and bomb-disarming bots, the question may not be so speculative in the years to come.

Researcher Ronald Arkin at the Georgia Institute of Technology’s Mobile Robot Laboratory has grappled with the issue in a new paper, “Governing Lethal Behavior: Embedding Ethics in a Hybrid Deliberative/Reactive Robot Architecture”.

Arkin reviewed the laws of combat through the ages, noting that in human-to-human combat, it is not proper to attack civilians or even soldiers who have laid down their weapons in surrender. But Arkin also formulated a machine-ready algorithm for ethical behavior, a tricky undertaking given that even humans find all-too-many potential actions in combat ethically murky at best.

Arkin’s approach was to first “describe the set of all possible behaviors capable of generating a discrete lethal response…that an autonomous robot can undertake.”

Then he formulated a set of ethical constraints — based on the Geneva Conventions and other largely agreed-upon ethical norms for war — and applied them to this set of lethal behaviors. The resulting set of ethically permissible lethal actions could then be implemented through a number of architectures and even pseudocode that Arkin offered.
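To give a rough feel for the idea, here’s a loose sketch of my own– not Arkin’s pseudocode– of a constraint filter that takes a generated set of candidate lethal behaviors and releases only those that every encoded rule permits. The classes, constraints and example values are all invented for illustration.

```python
# Loose, hypothetical sketch of the general idea behind an "ethical governor":
# candidate lethal behaviors are released only if every encoded constraint
# (stand-ins for Laws of War / rules of engagement) permits them. The names
# and rules below are invented for illustration, not taken from Arkin's paper.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Situation:
    target_is_combatant: bool
    target_has_surrendered: bool

@dataclass
class LethalBehavior:
    name: str
    expected_civilian_harm: int  # crude stand-in for proportionality

# Each constraint returns True when the behavior is PERMITTED in the situation.
Constraint = Callable[[LethalBehavior, Situation], bool]

CONSTRAINTS: List[Constraint] = [
    lambda b, s: s.target_is_combatant,          # never target non-combatants
    lambda b, s: not s.target_has_surrendered,   # never fire on those who surrender
    lambda b, s: b.expected_civilian_harm == 0,  # toy discrimination rule
]

def permitted(behavior: LethalBehavior, situation: Situation) -> bool:
    """A behavior is permitted only if every constraint allows it."""
    return all(c(behavior, situation) for c in CONSTRAINTS)

def govern(candidates: List[LethalBehavior], situation: Situation) -> List[LethalBehavior]:
    """Filter the generated behavior set down to the ethically permitted subset."""
    return [b for b in candidates if permitted(b, situation)]

# Example: a target who has surrendered yields an empty permitted set.
scene = Situation(target_is_combatant=True, target_has_surrendered=True)
print(govern([LethalBehavior("engage_direct_fire", expected_civilian_harm=0)], scene))  # []
```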

This is, of course, a much-simplified and abstract approach, and it took Arkin nearly 100 pages to formalize such inherently loose concepts as return fire, ambush and other tried-and-true military tactics. Arkin said his work is only beginning, but he is optimistic about future developments.

In the words of James Cameron — “If a machine can learn the value of human life, maybe we can too.”

[GCN Insider]

[Image: Sky Doll cover art]

The internationally acclaimed French best-seller Sky Doll is being presented in English thanks to a new partnership between Marvel and French publisher Soleil.

Sky Doll is the story of Noa, a life-like female android without rights, who exists only to serve the State’s needs and desires. But when Noa meets two so-called “missionaries” who aid in her escape from her tyrannical master, all hell breaks loose for our cyborg siren as she uncovers clues that she may be much more than just a robotic toy.

The new title is pretty sexy stuff for Marvel– it remains to be seen if the comic will be censored for tender American eyes.

From io9.com

[Official Skydoll Site (under construction)]

Check out Riz Khan’s in-depth interview with David Levy about the future of Love and Sex with Robots.

It’s the best television coverage I’ve seen on the subject yet.


Dr. Caroline West, a senior lecturer in philosophy at Sydney University, says we should already be thinking about what will happen when humanoids develop the ability to reason and integrate into society. If humanoids become as intelligent and capable of feeling as humans, should they be given the same rights? The question cuts to the heart of what a “person” is.

“It could happen tomorrow, it could happen in 50 years, it could happen in 100 years,” says Professor Mary-Anne Williams, head of the innovation and research lab at the University of Technology, Sydney. “People and animals are just chemical bags, chemical systems, so there’s no technical reason why we couldn’t have robots that truly have AI.”

Professor Williams believes a unique form of robotic emotion could even evolve one day. “You could argue some robots can mimic (emotions) already,” she says. “But because a robot will experience the world differently to us it will be quite an effort for the robot to imagine how we feel about something.”

“One of the things we’ll want robots to do is communicate. But in order to have a conversation you need the capability to build a mental model of the person you’re communicating with. And if you can model other people or other systems’ cognitive abilities then you can deceive.”
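Her point about modelling and deception can be made concrete with a toy sketch of my own (not from the interview): once a system keeps a record of what its listener believes, distinct from what is actually true, the same bookkeeping that lets it correct the listener also lets it mislead them.

```python
# Toy sketch (hypothetical, not from the interview): an agent that keeps a
# model of the listener's beliefs separately from the facts. The same
# machinery that supports honest conversation also makes deception possible.

world = {"battery_low": True}                # what is actually the case
model_of_listener = {"battery_low": False}   # what the robot thinks the listener believes

def honest_statement() -> dict:
    """Report only the facts the listener has wrong, to correct their model."""
    return {k: v for k, v in world.items() if model_of_listener.get(k) != v}

def deceptive_statement() -> dict:
    """Assert the opposite of the facts, to push the listener's model off course."""
    return {k: (not v) for k, v in world.items()}

print(honest_statement())     # {'battery_low': True}
print(deceptive_statement())  # {'battery_low': False}
```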

Humans generally anticipate how another person might feel about something by thinking about how it would affect them. People who don’t have the ability to empathize can become psychopaths.

“I think there is a danger of producing robots that are psychopathic,” Prof Williams says.

Of course, Isaac Asimov formulated his Three Laws to try to prevent robots from harming humans, but Professor Williams says this is easier said than done, especially when there are robots already deployed to kill on the battlefield in Iraq.

“You need a lot of cognitive capability to determine harm if you’re in a different kind of body. What will we do when we have to deal with entities … who have perceptions beyond our own and can reason as well as we can, or potentially better?”

Dr. Caroline West says, “If something is a person then it has serious rights, and what it takes to be a person is to be self-conscious and able to reason. If silicon-based creatures get to have those abilities then they would have the same moral standing as persons. Just as we think it’s not okay to enslave persons, so it would be wrong to enslave these robots if they really were self-conscious.”

Via TechNewsWorld.