Will Robots Replace Humans!? | Happy Birthday to the “Father of Robotics”

To be honest, I think in board games like Go, AlphaGo can already replace the average player :smirk:
It only took two and a half years for AlphaGo to become this powerful.

1 Like

It reminds me of the TV series Westworld :grinning:.

I like Dolores! But I don’t think robots will take over the world. Maybe in the future humans will live for 150 or 200 years, or even 1,000, as half-robots :grin:

2 Likes

Only labour-intensive tasks; currently AI is no match for the human brain…

2 Likes

I love my robotic vacuum (mainly because I hate vacuuming myself :joy:), so thank you, Joseph Engelberger!

3 Likes

Depends which humans you compare the robot to. :smile:

2 Likes

I think we are still about 30-50 years away from robots becoming somewhat of a norm.

1 Like

Yes, I think robots will become significant competition for humans, and the two will then co-evolve.

Take, for example, the domestication of the wolf into the dog. Originally the wolf had to hunt for food, live in the cold, and face constant danger. Now dogs have food brought to them in great variety and quantity and live in warm, safer environments. That would imply the wolf dominated and won control over humans, right?

No, that is not how humans see it: they see themselves as in control of the dog. They keep dogs as pets, use them for certain tasks, and decide (for the most part) where and when for the dog.

So neither is wrong and neither is right.

As technology improves, the artificial life it will become will evolve partly by itself (machine learning on top of human programming) and partly through humans adding technology over time. Humans will also evolve to make use of robots.

There will end up being a co-dependence, just as evolution has produced for nearly all life on this planet. A life form which gains a benefit from other life forms will reach an equilibrium with them.

Another example is the lion and the antelope. If lions become too powerful and too numerous, they eat the antelope to extinction. If the antelope got too good at evading the lion, their population would increase and over-graze the land, because older / slower / weaker antelope would still be eating. Instead, the grass, the antelope and the lion have co-evolved to keep a balance.

There have, however, been a number of mass extinctions, because co-evolution means that if a massive event disrupts one form of life, it disrupts the other co-evolved lifeforms too. This may happen in the future where certain robots need humans: if humans die, those robots also die, but other robots which can manage without humans can continue.

There is no predicting the outcome here. One possibility is that humans make robots which drive humans extinct, and those robots then go extinct themselves; life on the planet simply keeps going, another intelligent life form evolves, perhaps makes robots of its own which drive it extinct in turn, and so life does it again. Eventually an intelligent life form will evolve which keeps humans, and does so in a manner where humans and that intelligent life are in equilibrium. But still, the species which now exists buying Anker would have been long extinct.

3 Likes

I see two breakthroughs due to the current limitations.

Current limitations of humans:

  • intelligence is a function of many factors but is ultimately limited by evolution, which runs very slowly: each generation gets only one fresh opportunity to be a little different.
  • intelligence needs big problems to exercise the plasticity of the human brain and form new synaptic pathways, so it requires education and stimulating tasks. Global warming will force many to focus on basic needs instead (e.g. moving out of Florida).
  • evolution needs competition. If a mediocre, useless human can live a full life, evolution is never given a chance to deselect them, so they are allowed to dominate.
  • social inequality means that when evolution produces a better human, they are not given a fair chance to succeed. Misogyny, religion and politics all act to squash a human’s chances. Money makes money, debt makes debt, laws are based on popularity and so slow innovation, and religion proposes myth as fact.

Current limitations of robots:

  • made by limited humans.
  • humans cannot make something as intelligent as themselves.
  • limited by manufacturing capability, such as the ability to sense and fine dexterity.
  • limited by energy supply (humans eat, but robots need batteries which need recharging).

Where I see a leapfrogging potential:

  • Global warming will tend to deselect the mediocre, useless human; unfortunately that will manifest as mass human suffering.
  • Eventually humans will make self-learning algorithms for robots, so they can learn at their own speed rather than at the speed of humans’ ability to improve robot intelligence. Once this occurs, robot intelligence will grow exponentially.
  • Eventually robots will make robots, and then the physical dexterity and energy supply problems will improve exponentially.
  • Robots will produce competition for humans, so humans will be forced to evolve faster (fewer mediocre humans around to dilute the gene pool).
  • If humans get better at manipulating their own DNA, they can remove some of the barriers to intelligence and evolve faster.

So ultimately I see a future intelligent robot co-existing with a future, more intelligent human; they will probably end up co-dependent, with robots using humans for whatever humans are good at, and vice versa.

2 Likes

I think home automation and driverless cars open up the problem of hackers taking control.

Imagine future ransomware along the lines of “pay me $ bitcoin or I’ll destroy the contents of your freezer” or “pay me $ bitcoin and your daughter on the freeway right now won’t die”.

2 Likes

Don’t forget about HomePod!

I think they will certainly become more the norm in 20-30 years’ time, as human replacements in higher-danger job roles, medical roles and menial positions, and as companions (such as for the elderly / lonely).

Despite this, I think they will be prevented from becoming as advanced as human intelligence (via their AI), as many articles have already been written on the dangers of allowing unchecked, self-developing AI.

As long as we don’t get to this stage, it could be interesting;

1 Like

So if someone in the past had tried to stop humans, how could they have? Say you are a Neanderthal and those Homo sapiens are getting too powerful. How would you prevent it?

The statistical odds from the fossil record are that our species’ chances are very low.

So statistically it is not whether humans will become dominated, but who dominates us. Currently we are evolving very slowly, with an average reproductive cycle of around 30 years; with reproductive cycles getting longer and male fertility dropping, evolution is slowing further.

Robots are limited for reasons which are easier to fix than the limits on human evolution. So perhaps robots will not dominate us, but domination by something is statistically likely. Homo sapiens with current intelligence is only about 200,000 years old, so our ultimate fate is far from decided.

Our sun is likely to keep life going for another 1 billion years, or about 5,000 times the duration of modern humans so far. Plenty of time for humans to become dominated by robots.

3 Likes

You think we can control this?

So, an example from today’s news:

I don’t want to get into the specifics of that example, but it illustrates the creeping movement of technology. You cannot halt AI; you can only slow and stutter it via control, and any policy we produce may control a decade or two of its pace. The key turning points will be when we give robots self-learning, so they are not limited by what humans can teach, and when robots can make robots, so they are not limited by what humans can make.

So it is not if but when.

I thought the Sun would keep us going for another 7 billion years. I think overpopulation comes first; the challenge will be moving to new territory.

To an extent yes, via what the AI is taught by its human programmer(s). AI has its place, but the intelligence is artificial, as its name says.

We can see Japanese beauty robots in many places. These robots can be very realistic, but they sometimes look very scary, because they fall into the uncanny valley.

  1. Human visual perception reacts to the almost-human appearance with negative emotions.
  2. A sense of insecurity about the imperfections of the technology.
    Even if we make a robot look exactly like a person, as long as we realize it is a robot rather than a real human, we will feel discomfort.
    So I think people may not have robot wives in the future :smirk:

2 Likes

Machine learning is where we allow the robot to program itself.
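
For anyone curious what “programming itself” looks like at the smallest scale, here is a minimal, hypothetical sketch in Python: a random hill-climbing loop where a robot improves its own control parameters purely from trial-and-error feedback. The task and the score function are made up for illustration; real robots use far more sophisticated learning, but the principle of improving behaviour without a human rewriting the rules is the same.

```python
import random

# Toy "self-programming" loop: the robot starts with random control
# parameters and improves them purely from trial-and-error feedback,
# with no human rewriting the rules. (Hypothetical task and scoring.)

def score(params):
    # Stand-in for real-world feedback, e.g. how far the robot walked.
    target = [0.5, -0.2, 0.9]
    return -sum((p - t) ** 2 for p, t in zip(params, target))

params = [random.uniform(-1, 1) for _ in range(3)]
best = score(params)

for step in range(1000):
    # Propose a small random tweak to the current behaviour.
    candidate = [p + random.gauss(0, 0.1) for p in params]
    s = score(candidate)
    if s > best:  # Keep the tweak only if behaviour improved.
        params, best = candidate, s

print("learned parameters:", [round(p, 2) for p in params])
```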

Once a robot can make robots, they can evolve without us.

That’s the part I don’t think will be allowed to fully occur: once you lose control over the robot by allowing it to essentially think and be creative by itself (with no kill switch or boundaries), dangers can arise. Hollywood may have a penchant for going over the top in films, but quite a few robot-oriented movies make good points about the negatives that could occur if robots are given unchecked self-learning abilities.

Humans can be bad enough, but are (to an extent, for some) guided by emotions and the value of life; just look at the lengths some go to in protecting other species. Yet would those species return the favour, and for that matter, would robots?