
Five Rules For A Future Filled With Robots


Innovations in advanced robotics and “thinking” computer programs are bringing sci-fi fans’ dreams of autonomous robots closer and closer to reality. With programs like the DARPA Robotics Challenge (DRC) motivating researchers with lucrative prizes, and private companies like Google launching AI think tanks, mobile, self-aware robots may well become a reality within the next few decades. Even though these programs seek to promote positive development, there is no telling how or where the robots of the future will be used, or what level of intelligence or autonomy they will possess. What’s more, we have to remember that most of the funding for this research comes from national defense programs.

This delicate balance between social and military benefit is nothing new in the land of technology, so we might as well accept it and press on in the hope that we can advance responsibly. The big question: what rules should we impose on our robot buddies to ensure their responsible integration? If they’re going to be working alongside us day in and day out, in factories, on the battlefield, and potentially on new planets, we can’t just let them wander around doing whatever they please. To this end, we have come up with five rules which we think will be vital to the success of having walking, talking robots in our daily lives.

This list purposefully avoids reference to Isaac Asimov’s Laws of Robotics. Why? First, they’re fictional, and it seems unlikely that anything that even comes close to the sort of functionality and thought those robots demand can be built into a machine (for now). Even if AI at that level were possible, it would be far too intelligent to be limited by a few basic rules. Second, the laws pretty much preclude the possibility of using robots in warfare, and unfortunately that just isn’t realistic.

Our five rules are directed more towards the design and production of our future helpers, and are listed from least to most important. Hopefully, these rules will guide the creation of our automatons so that they won’t end up doing something unsavoury, like enslaving the human race.

Rule 5: We Need R&D Transparency In Robotics


This is far and away the least likely rule to be followed. One of the major fears in the development of any technology is that your ideas and developments will be stolen, losing you your competitive edge and profits. When research and development is made transparent, any possibility of market advantage pretty much disappears. But the benefits of this rule for society are huge.

First, transparency and cooperation lead to faster, more useful advances. Second, and more importantly, there is a greater degree of accountability and control over who is making what, and for what purpose. Some 70 countries are currently developing their own combat drones to mimic the American Predator and Reaper systems. The reason? Because it is a terrible situation for everyone when just one power controls the most advanced weapons in the world and uses them to browbeat the others. Of course, there are arguments for the necessity of this type of situation, but the benefits of cooperation far outweigh any disadvantages.

Rule 4: Humanoid Robots Must Not Be Built (Solely) To Kill


Speaking of weaponised robots, we think it would be best to avoid any Terminator-esque warfare. Granted, it would be wonderful if, when war is waged, the nations involved simply threw robots at one another, but that won’t be the case. As with any military tech, there will be haves and have-nots. Moreover, as military drones already show, killing by remote is a less and less visceral experience for those doing it, making it all too easy to stay detached when a machine takes a human life. Mechanized warfare makes the emotional strain that might otherwise prevent violence melt away. It’s a bit of a catch-22: we can reduce the number of people on the front lines, but at the price of making warfare an even easier proposition.

Let’s be realistic, though: our future robots are going to kill people, just more efficiently. But hopefully this will be only a small part of what these technologies can and will do. On many occasions military technology has been repurposed by society to serve the greater good and improve daily lives. Going back even to pre-history, violence and conflict have driven medical, social, technological, and philosophical progress as much as they have caused stagnation. That said, it would be nice to avoid the killing where possible and make the happy, love-thy-neighbourly side of development the main focus of future robotics, rather than a happy but unintended by-product.

Rule 3: Humans Are Responsible For Their Robots’ Actions


In the event of mass-produced robots flooding our streets and homes to help with carrying the shopping and doing the dishes (brutal, time-consuming tasks that need to be done by anything or anyone other than us), it will be nice to know whose robot is doing what. Just as we are responsible for our vehicles and machines today, we should be responsible for our robots tomorrow. No one with a firm grasp on reality would blame a car for hitting a pedestrian; in the same way, we can’t have our robots taking the blame for our own poor use of them. If we follow the next rule properly, this becomes even more important, because it will be our decisions that guide our robot slaves’ actions. We’d have no one to blame but ourselves for HAL and Skynet; they were just doing their jobs.

Rule 2: Robots Must Look Like Robots


Let’s face it: at best, attempts to make robots look human come off as nightmare-inducing. There is something deeply wrong about a thing that almost looks human, but isn’t. More seriously, having robots walk around doing the menial, dangerous labour we wouldn’t wish on our worst enemies, wearing smiles and faces just like ours, would likely cause some serious psychological damage.

A land filled with replicants (androids that look and act exactly like adult humans, but for their lack of emotion) would look the same as it does now, but feel completely odd. Daily social interactions would be laced with strange encounters with emotionless, human-looking things, and we would never really know for sure which were real and which were not. Of course, we could tag our robot helpers, alter their voices, or make any number of small adjustments to render them identifiable, but then what’s the point of giving them the creepy mask in the first place? Why make a robot robotic in every sense but its appearance? This leads us to rule #1…

Rule 1: Robots Must Be Machines, Not People


It may seem like a missed opportunity, but it’s probably best to keep the adaptive intelligence of our future workers to a minimum, regardless of what we are capable of creating. If the goal of advanced robotics is to create aids that can do things and go places we don’t want to, can’t, or shouldn’t, it’s not really necessary that we give them fully functional thoughts, ideas, or emotions. If we really want to buy and sell these guys, putting them to work in awful positions, it really wouldn’t do to have them thinking and feeling to the point where they realise how terrible their lot in life is. More than once in history we have done this to whole groups of people just like us, and all it took to make it seem okay was to dehumanize them. It would be nice to avoid that nasty business this time around by simply not making our mechanical slaves people at all. It would be like we were learning from our past. Radical.
