In chapter 14 there is a passage on page 235 that pushes the question of autonomous robotic development. It is undeniably intriguing, but I feel like the demand for a higher and higher form of artificial intelligence merely romanticizes the purpose of robots, making them closer to their idealized science fiction form. Taking away "the meat" in favor of computer function and, presumably, more efficient work is no new idea. Ask anyone from Flint, Michigan: machines can build a car much faster than a man can. However, those hulking, car-building machines are merely programmed to do the job; they aren't figuring it out or refining their own tactics.
That being said, it was suggested in the chapter that robots could, theoretically, be sent to the moon (a suggestion made around the height of moon exploration in the 1960s and '70s) to collect data and materials, eventually reproduce, in a way, and continue working for as long as needed. This seems not only flawed but dangerous. The demand to match the science fiction imagination is a frivolous one. It would be a landmark discovery if computers ever did obtain a "sense of self," but juxtapose that with the already established human sense of self. Robots could form judgments, biases, traits, and mannerisms. Maybe I've seen too many movies, but it doesn't seem necessary to provide computers with fully functioning minds, complete with free will. It just seems dangerous, though glamorous. At the end of the day, machines are here to do our bidding, practically speaking, and they should perform as far as they are programmed, not beyond and not of their own devices.
Tuesday, January 29, 2008
1 comment:
Okay, but can you be more specific in identifying what the exact dangers are in this scenario?