KDnuggets : Polls : When will robots be smarter than humans? (Mar 2002)
Poll
When will robots be smarter than humans? [361 votes]

in 25 years (44) 12%
in 50 years (50) 14%
in 100 years (47) 13%
in 1000 years or more (25) 7%
never (176) 49%
don't know (19) 5%

Comments

  • Gabor Melli, Mar 6th 2002
    Subject: Never say never -> Turing Test
My simple answer to the poll is "never say never" unless you can prove it. Prove that robots will not become 'smarter' than humans and you become the next millennium's Gödel! :-) The topic of mechanical robot intelligence, however, is quite removed from the field of data mining. A future robot will likely benefit from machine learning on massive datasets, but how about defining a more tangible Turing Test within the bounds of our field? How soon, for example, will a data mining system win a KDD Cup? The test would be to read in both the data and the task definition, and to return an objectively verifiable answer within a certain deadline. Some assumptions would be necessary, of course, such as using a recent KDD Cup task (not a moving target), having the task expressed in some XML/PMML-like language, and limiting the time to a realistic bound (not one year). How long it will take for a data mining system to win a KDD Cup will depend on how much research there is into data mining automation. As our society becomes more wired (more data and e-commerce), and computational power and memory become cheaper, it appears that data mining tasks will continue to increase the wealth they create. My guess is that the human capitulation could occur within the decade. As a data miner I'd welcome the new calculator, as a researcher I look forward to creating it, and as an investor... after these past two years, who knows. ;-)

  • Robin, Mar 19th 2002, Subject: Smart?
I think we need a well-developed definition of what constitutes "smarter". Interesting discussion herein, but I believe that the combination of a) human intelligence and b) automated intelligence storage, retrieval and application is, in most situations, likely to outweigh the value of either option by itself.

  • Tim Ellis, Mar 16th 2002, Subject: When do robots out gun us
Yeah, data processing is getting better: designed by us, to emulate us, for us, to think it can outdo us. Trouble is, it's us. How do we design, even loosely, a system that can out-design us? Robots are merely an extension of what we know. Okay, they don't go to Starbucks or take in a sunny day, but they don't know what we haven't told them either. Therefore, I would assume it would be tough for robots to outgun us; at the worst, we can pull their plug.

  • Kosmas Karadimitriou, Mar 15th 2002, Subject: The fallacy of linear progress
Let's assume that humans measure 1,000,000 on some "smartness" scale that includes but is not limited to IQ, EQ, strategic thinking, creativity, values, inspiration, etc., that is, everything that relates to "mind". Now let's also assume that robots currently measure a mere 3 on the same scale. If the technology progresses LINEARLY (say, adding 1 point to robot smartness per year), it would take an awfully long time for the robots to catch up with humans (roughly a million years, which is equivalent to "never", because over such a long time there is a high likelihood that we'd first destroy ourselves and reset the whole process). However, if the technology advances EXPONENTIALLY, it would take only 20-30 years to reach the same point. Well, guess what: history shows that THE TECHNOLOGY DOES ADVANCE EXPONENTIALLY!! We have actually just reached the point where we are starting to see this in our everyday lives, since big advances now take just a few years instead of centuries. This simple realization about the rate of progress can drastically alter people's view of the future and their answers to such questions. I was quite surprised to see that half of your very technology-aware readers answered "never" to your poll.
For further reading, I would recommend visiting Kurzweil's Web site at www.kurzweilai.net and also reading about the Singularity at www.sysopmind.com/tmol-faq/meaningoflife.html. Maybe you will not agree with everything, but you'll find some very interesting thoughts in both.
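    (Editor's note: the arithmetic behind the linear-vs-exponential comparison above can be sketched in a few lines. The doubling-once-per-year rate is an illustrative assumption; the comment only says "exponentially".)

    ```python
    import math

    # Hypothetical "smartness" scale from the comment above:
    # humans score 1,000,000; robots currently score 3.
    HUMAN_LEVEL = 1_000_000
    ROBOT_LEVEL = 3

    # Linear progress: +1 point per year until the gap closes.
    linear_years = HUMAN_LEVEL - ROBOT_LEVEL

    # Exponential progress: assume the robot score doubles each year,
    # so we need the smallest t with ROBOT_LEVEL * 2**t >= HUMAN_LEVEL.
    exponential_years = math.ceil(math.log2(HUMAN_LEVEL / ROBOT_LEVEL))

    print(linear_years)       # 999997 years -- effectively "never"
    print(exponential_years)  # 19 years -- within the comment's 20-30 year ballpark
    ```

    Under these assumptions the linear path takes close to a million years while the exponential path takes under two decades, which is the whole point of the comment.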

  • Daube, Mar 14th 2002, Subject: USA....
    Si cela devait arriver un jour, je préfère dire aujourd'hui d'une manière toute solennelle que nous ne serons plus en vie pour le voir; les robots auront éradiqué cette race ridicule, faible et stupide qu'est la nôtre....je pense d'ailleurs qu'ils commenceront par les Américains....
    (Translation by the Editor: If that would happen one day, I prefer to solemnly say today that we will no longer be alive to see it; the robots will eradicate this ridiculous, weak and stupid race of ours... Besides, I think that they will start with the Americans... )

  • Jeremy Tucker, Mar 8th 2002, Subject: robot smartness
I'm not sure if this applies, but to me, the one thing keeping robots from becoming smarter than humans is humans themselves.
    I understand the principle behind a robot or a computer being able to learn, to store large banks of information, and to react to some of it by proposing valuable and possible outcomes. The one thing they will always lack, though, is emotional IQ, if you will. The numerous articles I've read, and the continual emphasis on social skills and emotional management in the workplace and in cross-functional management teams, both in university courses and from people I've spoken with, make me doubt that a robot will ever overcome the complexity of human nature.
    Another thought, if ever they did become smarter, I'd be very interested to see what happens to doctors, lawyers, managers, truck drivers and so on.

  • Jim Krueger, Mar 7th 2002, Subject: Robot vs. Human "Smartness"
    The impending confluence of quantum computing, nanotechnology, AI/ML, and biotechnology/HGP, among other disciplines, will produce robots that are "smarter" than humans on an overwhelming preponderance of, but not all, dimensions. Since the term "smartness" is vague and multi-dimensional, any comprehensive response to this question must be multi-dimensional (and, perhaps vague) as well. Robot superiority for some subset of smartness dimensions has already been achieved, other dimensions will be achieved soon, still others will be achieved at some point later in the future, but a small percentage never will be.
    Also, I have seen no mention of individual vs. collective smartness. How does collective smartness compare to individual smartness across these dimensions and in totality?

  • F.W. Poley, Mar 6th 2002, Subject: Superhuman AI, When?
    My vote is....NOW. If we use the psychometric definition of intelligence and compare the PROFILE of robo sapiens to homo sapiens, the knowledge exists NOW to build a robot to surpass OVERALL human measured intelligence.

  • J Cheng, Mar 6th 2002, Subject: Intelligence explosion?
If humans could create robots that are superior to human beings, then those robots could create robots better than themselves; this would lead to an "intelligence explosion". In no time, the whole universe would be solved, and humans would (indirectly) have created God.
    You can call it a religion, but I don't think the whole universe will ever be solved.

  • Karl Brazier, Mar 6th 2002, Subject: Humans and robots
I vote for robots having already surpassed our intellect. My reasoning is as follows. First of all, I entirely disagree with the respondent who says that robots can't surpass humans because of values etc. These are surely products of our mechanics - chemical and electrical messages, neurons, synapses and the like - any other explanation seems to demand an unacceptable resort to some kind of metaphysics or religion, and a failure against the test of Occam's razor. Thus, I can see no principle that prevents values etc. from being emergent properties of robot intelligence - exactly how I'd see them in us. This seems to lead to the idea that humans are a subset of robots, so robot intelligence must be at least at the same level as human intelligence, because the former includes the latter.
    So, to get back to the question: when could robot intelligence surpass that of humans? If humans are a subset, then the answer is, as soon as we add one tiny bit of artificial processing power to that of humans, which we did long ago. Note that as no systems in the real world are truly isolated, there seems to be justification for thinking of all robots as one (i.e., when we're looking for the one that has the largest intelligence for comparison with humans), as they cannot avoid communicating with each other. Of course, it stands to be argued that what they communicate must include some useful information, and my working assumption is that the message "here is a message" constitutes useful information.

  • Adel Atawy, Mar 5th 2002, Subject: Why should we be any different ?
When we look a little more closely at a human brain, we won't find anything more than some differently structured memory, adaptive wiring and some gates that can be easily modelled... Whatever you find in it, we can model it, we can implement it... It's just the massive parallelism and high reliability that make us a little different from the normal computers we might have in the near future. BUT it will happen, some day.
    Researchers are already working on adaptive processors that can rewire themselves (in some form), and protein memory will give us a great push. Leaving AI's twisted thinking aside, we can, hardware-wise, make human brains... soon.
    A six-month-old child has a brain that was developed over 6+9 months of continuous work. Can't we create a computer in more than one year that can do the same?? (Recall how long it takes to manufacture a Cray core, or even a Pentium IV: a few seconds!!)
    Be optimistic... we will soon be ruled by these things...

  • Jules Gilbert, Mar 5th 2002, Subject: robots surpassing human intellect
    Sorry, but since 40% of your voters said 'never', that tells me that 60% of that same audience are foolish, almost beyond measure.
It is true that raw 'academic' intelligence (the ability to acquire and manipulate knowledge) is not unique to humans, and indeed, we are beginning to make machinery (i.e., computer programs) which can match, and even surpass, people EVEN TODAY in limited ways.
    But real wisdom, having values, choosing one future over another, deciding to be honorable and not to be foolish: these are and ALWAYS will be unique to people, as we are creatures made by a God who loves us. People and some animals mate for life; find me a computer program that can appreciate the love of a good woman!
    For R. Brooks to speculate this way says a lot about him, and for his sake I hope that he's just trying to sell his book, and that he doesn't actually believe his hypothesis.
    If he had said instead that mechanical learning systems could EVENTUALLY be as wise as, say, a lawyer deciding on the strategy needed to win a difficult case, or as learned as a skilled physician choosing the best therapy for a resistant disease, or even able to build a replacement AI system capable of improving and replacing itself, then I could accept it: all these are (or soon will be) tasks that machines can or could accomplish.
    But (using the legal strategy example), that attorney would be at a real disadvantage if the opposing counsel were to employ similar technology! Then it would again be two evenly matched opponents, and the merits of the case would presumably be very relevant in deciding the outcome.

  • Editor, Mar 4th 2002, Subject: Robots and humans
In a new book, Rodney Brooks, director of the MIT AI Lab, argues that in the near future researchers will create robots that can rival or surpass human intellect. When do you think it will happen, if ever?

Copyright © 2002 KDnuggets.