Thursday 29 December 2011

How the future looked this week, Dec. 29th, 2011


With 2012 just around the corner, a quick look at what was happening on PhutureNews this week. My personal favorite was Justin Bieber being elected as the 52nd President of the United States in 2053...nearly 30% of people voted that they thought this could happen, but the more interesting question was whether a naturalized American (vs. natural born, whatever that means) could run for the highest office. 'Course, after watching the Bieb on David Letterman last night, I have to say I think the chances of him running for the Oval Office are higher than 20%; as Dave pointed out, “you’re only 17, your whole life is ahead of you, you can do anything!” I think Dave had been reading PhutureNews :)

In other news, 20% of people think there is a real chance of the world ending with the close of this cycle of the Mayan calendar on Dec. 23rd, 2012 – better enjoy this year, folks! And if that doesn’t finish us off, over 70% of people voted that there would be a major cyber war before 2020, assuming the Georgian war didn’t count.

On a more positive note (and I always like to end on a positive note!), over 90% of people voted that they thought it likely there would be an Olympics in Africa before 2040...heck, half of the schedule is already filled up, so they better get a move on. And, interestingly, less than 15% of people thought the IOC might lift its drug bans at the Olympics in the near future, although we’ll have to see what they say about the various prosthetics that are on their way!

Got ideas for stories from the future you’d like people to vote on? Submit them at PhutureNews!

Wednesday 28 December 2011

More human than a human

Soon machines may be more human than a human - in the latest Loebner Prize competition, an annual running of the Turing test, some machines came close to outscoring actual humans on judges’ ratings of “how much do you think this is a human.”

Back in 1950 one of the giants of computational theory, Alan Turing, proposed replacing the question “can machines think?” with a test in which a human chats with a machine (Turing had been talking about machine intelligence as early as 1947). If the human couldn’t tell whether they were talking to a human or a machine, he reasoned, shouldn’t we consider whoever was on the other side intelligent? For instance, if a spaceship appeared out of the sky and landed and we began chatting with whoever was inside, and they carried on a conversation with us like a sentient being, wouldn’t we consider this an intelligence? But there seems to be a prejudice on our side when it comes to things we create...

ELIZA was the first computer program to fool some humans, back in 1966 - though not a majority of them - and it is the ancestor of today’s chatbots. Enter the Loebner Prize competition, which runs the Turing test each year for a $100k grand prize...so far there have been no clear winners, although the entries are getting closer and closer (you can try chatting with the 2011 Loebner Prize winner, Rosette, here). The “More human than a human” article I put up on PhutureNews describes the day when a machine finally wins the Loebner grand prize. The new chatbots being created this year may give the contest a run for its money in 2012, with SuperChatBot being trained up on social media comments as a data source. I guessed 2017 as the year when machines will finally beat the Turing test, and so far 91% of people agree with this prediction.

Already the Cyberlover malware chatbot convinces lonely people across the web that they are chatting with a real human being, emerging as the first “valentine risk”...in the not-very-distant future it will be impossible to tell whether an email, text, or even voice conversation is with a real human being, opening up some very interesting and potentially dangerous issues.
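Incidentally, ELIZA-style chatbots are surprisingly simple under the hood: they match your input against a handful of patterns and echo fragments of it back inside canned templates, with no understanding at all. Here’s a minimal sketch of that pattern-and-reflection trick in Python (the rules and responses below are my own illustrative inventions, not Weizenbaum’s actual DOCTOR script):

    import random
    import re

    # ELIZA-style rules: a regex to match the user's input, and templates
    # that echo the captured fragment back. Purely illustrative rules.
    RULES = [
        (r"i need (.*)", ["Why do you need {0}?", "Would {0} really help you?"]),
        (r"i am (.*)",   ["How long have you been {0}?", "Why do you say you are {0}?"]),
        (r"my (.*)",     ["Tell me more about your {0}."]),
        (r"(.*)\?",      ["Why do you ask?", "What do you think?"]),
    ]
    DEFAULTS = ["Please go on.", "I see.", "How does that make you feel?"]

    # First-person words get "reflected" into second person before being echoed,
    # so "my job" comes back as "your job".
    REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are", "you": "me"}

    def reflect(fragment):
        return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

    def respond(text):
        for pattern, templates in RULES:
            match = re.match(pattern, text.lower().strip())
            if match:
                args = [reflect(g) for g in match.groups()]
                return random.choice(templates).format(*args)
        return random.choice(DEFAULTS)

    # A tiny chat loop: type "quit" to stop.
    if __name__ == "__main__":
        while True:
            line = input("> ")
            if line.lower() == "quit":
                break
            print(respond(line))

Say “i am worried about my exams” and it answers “How long have you been worried about your exams?” - which feels eerily attentive until you notice it would respond the same way to complete gibberish that happens to fit a pattern. The modern chatbots entering the Loebner Prize are far more elaborate, but it’s the same basic game of reflecting us back at ourselves.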

The bigger question is this: after IBM’s Deep Blue beat Kasparov at chess back in the ’90s, and Watson, another IBM creation, beat the pants off humans at Jeopardy earlier this year, when a machine finally beats the Turing test, at what point do we need to begin considering machines as intelligent in their own right? If you prick us, do we not bleed? And when do we need to begin considering the rights of such machine intelligences...?