Wednesday, April 22, 2009

The "Post-Human"/Tech. Singularity

Vernor Vinge defines the Singularity as a point in the future when society, science, and the economy are changing so rapidly that people will be unable to predict or conceive of anything beyond it. (The Singularity hinges largely on technology developing at unprecedented rates in areas such as nanotechnology, neuroscience, and Artificial Intelligence.) This rapid technological advancement has led some to argue that technology and machines will inevitably triumph over human intelligence. For example, in the KurzweilAI.net article, the author asserts that, given the progress he has observed in computer software, he would be surprised if something superior to human intelligence were not created after 2030.

2030? That's only two decades away. But considering that our society cannot function (or that some have truly internalized the belief that we cannot function) without reliance on some form of technology, perhaps I should not be so surprised by this estimate. Nevertheless, won't serious ethical and moral implications prevent, or at least put off, the time when artificial/technological intelligence is considered greater than, or more valued than, human intelligence?

And what if something greater than human intelligence is created? Will there no longer be a need to improve one's mind or invest in one's education? As was said in the group presentations, if the intelligence society values is determined solely by one's technological capabilities, won't the poor, and those who cannot access or invest the greatest amounts of money in this new artificial technology, be left out in the cold and unable to access knowledge?

4 comments:

  1. I feel as if we have already created something greater than human intelligence; however, we still have to give that technology the commands to act. At the same time, many older people are absolutely clueless when it comes to working computers. This raises the question of whether technology is not already outpacing human intelligence, as past generations are unable to keep up with the advancements we see. By the time 2030 comes around, will technology be so advanced that we may not be proficient with it, while the youth of that time could be experts, much like we're seeing in the present day?

  2. I don't believe that artificial intelligence is considered greater than human intelligence. I believe that it has the computational and mathematical capacity that allows for faster processing, but humans are still necessary to give the "human" touch to these robots. Human intelligence, I believe, includes the ability to feel, understand, and empathize, which will never be seen as lesser than A.I. For that reason, I don't believe the value of education will ever be forgotten.

  3. I think the point at which artificial intelligence starts to threaten humans is not when it becomes more intelligent than us, but rather when the computer can decide for itself which commands to obey or not obey. It's not the raw intelligence that makes the technological singularity an issue; it is the intelligence combined with reason. I think that day might come sooner than we expect. Even though older people are falling behind in terms of technology, the work going on at the highest levels is extremely advanced.

  4. Initially, new technologies will always be too expensive for the poor to afford. As is the case with any technology, however, they eventually become accessible even to those with meager incomes. Cell phones, for instance, could at one point only be had by the wealthy. Now, even those on minimum wage can afford a cell phone plan. Similarly, intelligence-enhancing devices should eventually be available to just about everybody as time passes and the cost of technology decreases. However, there will probably be a (long) period before such devices become available to those with average or below-average financial means.
