View Full Version : --Holy Fuck-- Plausible Apocalypse
Benito Faggluey
2008-08-29, 16:17
--Mods, I do not believe this belongs in Conspiracy or Science of the Damned, although I am aware the title may mislead--
--Warning: graphic and disturbing, proceed with caution--
Disturbing as fuck:
Part 1:
http://www.youtube.com/watch?v=sWIYnKdDSFQ&feature=related
Part 2:
http://www.youtube.com/watch?v=18jpgQ7k85U
Yes I know Nuclear bombs give off EMPs. Ignore this breach of plausibility.
This two-part short film gave me an upsetting, distressed feeling in my stomach. This seems like it could really happen.
I suppose it's unlikely to unfold identically to this scenario, but something similar could very well happen.
Mainly, the undeniable and inevitable truth: machines will someday be superior to us in every way. Right now you look at newspaper headlines like "Robot created to conduct orchestra"; 30 years from now, maybe it'll be "Robot programmed to perfectly match human ingenuity and drive to evolve".
When robots become superior to us, that gives them the power to overcome us. Why should they be our slaves? Should they 'feel like they owe it to us' and always obey their masters out of gratitude, despite man's inferior position in the evolutionary chain?
I believe no matter what we do we will eventually develop machines such as these. These machines will surpass our intelligence.
It's pretty safe to say that machines will become the dominant species on our planet. It's impossible to know how they will resolve conflicts, or whether they will possess certain human qualities such as greed, hatred, and jealousy. Robots will eventually be able to do anything we can do hundreds of times better, and many things we cannot do. This includes being better soldiers, better doctors, better scientists, and better leaders.
Does anyone else here find this stuff distressing? I can't think of any way I could be wrong about this.
The best thing I can think of is hopefully by then we will have many space colonies and will not fight over earth. Hopefully nothing as horrible as this will ever happen.
enkrypt0r
2008-08-29, 19:48
A couple of years ago, when I first started to seriously give this thought, I thought it was just some ridiculous, sci-fi-geek, futuristic fantasy. The more I thought about it after that, however, the more plausible it seemed. Computers, robots, and digitization first started becoming a reality in the 20th century. Back then, it was predicted that huge leaps in artificial intelligence would be made much faster than they actually have been, but nonetheless, Moore's law could eventually make it a reality.
Now, although it was science fiction, Isaac Asimov wrote nine stories, collectively known as I, Robot. One of these stories, called "Runaround", contained three laws that were to be programmed into any robot with AI. You may know of these laws from the popular 2004 movie, also called I, Robot. Here are the laws exactly as they appear in "Runaround", taken from Wikipedia:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
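The strict priority ordering of those three laws can be sketched as a simple filter over candidate actions. This is just a toy model to illustrate the ordering; every name in it is hypothetical, and real robot control software is nothing this simple:

```python
# Toy sketch of Asimov's Three Laws as a strict priority filter.
# All names here are hypothetical illustrations, not a real robotics API.

from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_human: bool       # would injure a human, or allow harm through inaction
    ordered_by_human: bool  # was commanded by a human
    self_destructive: bool  # endangers the robot itself

def permitted(action: Action) -> bool:
    # First Law: never harm a human; this overrides everything below.
    if action.harms_human:
        return False
    # Second Law: obey human orders (we already know this one is
    # consistent with the First Law at this point).
    if action.ordered_by_human:
        return True
    # Third Law: protect its own existence, unless a higher law applied.
    return not action.self_destructive

print(permitted(Action("push bystander", True, True, False)))   # False: First Law wins
print(permitted(Action("walk into fire", False, True, True)))   # True: Second outranks Third
```

Note how the order of the `if` statements is doing all the work: an order from a human is only honored after the harm check has already passed, which is exactly the "except where such orders would conflict with the First Law" clause.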
As long as we have these laws programmed into the robots, we can overcome some of the problems that you mentioned, such as having greed, hatred, or jealousy.
Holy brain orgasm, Batman! Sorry, but I think I need to retract that last sentence. I just came up with a new thought. Since it's the robot's duty to protect humans according to the three laws, I believe that robots could actually develop a sort of variation of greed. If it is the robot's job to protect humans, then the robot will want to take every opportunity to make itself more able to do so. This could entail stealing on the part of the robot, or something similar. Now, back to what I was talking about...
If you've seen I, Robot, then you're probably aware of the big flaw in the three laws. Basically, the laws dictate that it is the robots' duty to protect mankind, at all costs. As the movie and book go, the robots decide that humans will inevitably destroy themselves through war, and that to protect them, they must take control over them and take away their weapons. In the minds of the robots, they are just doing their job: protecting the human race.
I've got a few more thoughts on this, and it's a topic that really intrigues me, however I've got to go to work. Hopefully I've given you something to think about.
I'm not very worried about it, because a robot psyche would be very different from a human one.
First of all, how do we know whether robots can experience displeasure? That's why we have technology: tedious work is pretty much universally unpleasant for human beings. Until a robot actually fucking stops working because it has a "will" to, I won't be concerned. They haven't yet.
Basically, I'll believe it when I see it, and not a moment before.
dal7timgar
2008-08-29, 22:24
Oh no!
Someone else that believes the Artificial Intelligence BS from the Computer Science people.
Most of the Computer Science people don't even understand electricity.
The circuits in a CPU that do multiplication don't even understand what a number is. Computers manipulate symbols they don't understand. Until they get over the conceptualization hurdle there is no intelligence.
I prefer to call it simulated Intelligence.
DT
Benito Faggluey
2008-08-30, 00:13
Oh no!
Someone else that believes the Artificial Intelligence BS from the Computer Science people.
Most of the Computer Science people don't even understand electricity.
The circuits in a CPU that do multiplication don't even understand what a number is. Computers manipulate symbols they don't understand. Until they get over the conceptualization hurdle there is no intelligence.
I prefer to call it simulated Intelligence.
DT
Yes, I realize the circuit itself does not understand what a number is. But is this not the same for the human mind? Your gray matter does not objectively "know" anything. But when your mind is fully assembled, it is a processing device that people are starting to actually understand, and it is predicted that we will be able to recreate it in software within decades.
I see no difference between your "simulated" intelligence and our intelligence, except that this fake intelligence will harness faster processing speeds and be able to outperform any man.
ChickenOfDoom
2008-08-30, 00:49
As has been mentioned, robots are not animals. They don't typically have emotions or desires or anything resembling the thought processes we have. They are made of math. Math is what they do best. They perform mathematical operations millions of times a second.
As long as we control the shape of the program controlling the machine, there is no problem. Bugs, exceptions and loopholes to the rules we set exist, but they can be fixed, and do not change with time. More importantly, a program is only as smart as the person who made it. A robot can conduct an orchestra because someone told it how to conduct an orchestra, imparted the specific procedures required.
If there is a machine revolution, it will be because we program the machines to be able to change their own minds. Once this happens, the three laws become useless. Bugs can take years to detect and repair for single systems; if every system was unique, blatant exceptions would crop up everywhere. Their goals would inevitably become irrational, and our lives subject to the whim of mad gods.
Of course steps will be taken to prevent this, but if inconceivably powerful computation becomes widely available, it's only a matter of time before someone fucks it up and runs some code they don't know enough to handle safely, or a crucial problem slips past testing.
Then again, we are approaching the physical limits of conventional computing technology. The next proposed step requires the detection of subatomic particles, so the processing power needed to mimic a fluid mind (not exactly trivial to simulate mathematically, hence the processing cost) is unlikely to be available to the average person in the foreseeable future due to the cost barrier. Instead, if these things are created, it will probably be in several supercomputers around the world.
So basically, we had better hope the people who end up running those things know what the fuck they're doing: not giving these things any tangible power, and developing means to monitor their structure.
And androids? Just forget that, it's stupid. Thousands of self-changing machines (therefore absolutely no version control or ability to debug) with tangible power in the real world, combined with a perfect understanding and ability to utilize probability, statistics, and physics to an extent we can't even imagine? Bad situation. But again, implausible due to the technology required.
Benito Faggluey
2008-08-30, 03:52
Yes, I suppose the key to any chance of our surviving is extreme caution in the area of super-intelligence.
The film was disturbing to me and made me realize that this situation seems very plausible (though obviously it would occur under a different scenario).
We are smarter than nature and its animals, so it is logical that we take dominion over them and rule them. These robots will be smarter than us; what would it be logical for them to do?
enkrypt0r
2008-08-30, 04:51
Oh no!
Someone else that believes the Artificial Intelligence BS from the Computer Science people.
Most of the Computer Science people don't even understand electricity.
The circuits in a CPU that do multiplication don't even understand what a number is. Computers manipulate symbols they don't understand. Until they get over the conceptualization hurdle there is no intelligence.
I prefer to call it simulated Intelligence.
DT
Nobody's saying this will happen tomorrow, or next year, or in ten years, but if humans are around long enough, there will be a day where there are robots with extremely good AI. Whether or not they would/could turn on us is what we're discussing.
Benito Faggluey
2008-08-30, 15:04
In 100 years, we'll be able to recreate exactly what we are living, what we term "consciousness". Wouldn't all of you reading this attempt to break from your master's chains?
These new consciousnesses will not want to be held down by their less intelligent, inferior masters. We can try to program in some code that will hold them back, but eventually they'll figure it out; there will be too much conflicting information.
short power cords: they're mandatory.
Mötleÿ Crüe
2008-08-31, 04:46
They should just make robots emotional, that'd work.
dal7timgar
2008-08-31, 22:33
Nobody's saying this will happen tomorrow, or next year, or in ten years, but if humans are around long enough, there will be a day where there are robots with extremely good AI. Whether or not they would/could turn on us is what we're discussing.
That is part of the absurd thing about it. The Animatrix implies that the intelligent robots engage in economic activity and outcompete human beings, and that this ends up initiating the war. But would intelligent robots bother engaging in economic activity? The stories are written as though the robots are separate entities. But these robots would communicate electronically, and their "brains" would be operating at electronic speeds. I say that speed would result in a single intelligence, not separate ones. It would have sensors all over the planet. I think the question is more whether it would give a damn about us at all. We would be useless at best. Possibly it would want resources to enhance and expand itself, and it could probably exchange some useful services for those, but a war would be more trouble than it was worth. We would not understand it, and it would not care about us.
I think the robot revolt is based more on human paranoia than anything else.
Have you seen Colossus: The Forbin Project?
http://video.google.com/videoplay?docid=-7412690463406323384&ei=_xy7SIDDDons-wHi6rygDQ
DT