Vernor Vinge defines the Singularity as a point in the future when society, science, and the economy are changing so rapidly that people will be unable to predict or even conceive of anything beyond it. (The Singularity is largely premised on technology developing at unprecedented rates in areas such as nanotechnology, neuroscience, and Artificial Intelligence.) This rapid advancement leads some to argue that technology and machines will inevitably triumph over human intelligence. For example, in the KurzweilAI.net article, the author asserts that, given the progress he has observed in computer software, he would be surprised if something superior to human intelligence has not been created by 2030.
2030? That's only two decades away. But considering that our society cannot function (or at least many of us have internalized the belief that we cannot function) without reliance on some sort of technology, perhaps I should not be so surprised by this estimate. Nevertheless, will serious ethical and moral implications not prevent, or at least put off, the time when artificial/technological intelligence is considered greater than, or more valued than, human intelligence?
And what if something greater than human intelligence is created? Will there no longer be any need to improve one's mind or invest in one's education? As noted in the group presentations, if the intelligence society values is determined solely by one's technological capabilities, will not the poor, and those who cannot access or invest the most money in this new artificial technology, be left out in the cold and unable to access knowledge?