Can “Protecting humanity” be AI’s base case?

Why is AI dangerous, and why do people like Stephen Hawking and Elon Musk think that AI could end humanity?

Modern AI models are black boxes tasked with a specific function or functions: identifying every word in a speech, generating art humans will like based on prompts, or creating a disease-resistant crop strain. The model studies millions or billions of data inputs to produce outputs, and based on feedback and more data it refines its own parameters.
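That feedback loop, predict, measure the error, nudge the parameters, can be sketched in a few lines. This is a toy illustration, not any real model: a single made-up parameter `w` learning the made-up relationship "output = 2 × input" from repeated feedback.

```python
# A minimal sketch of a model "refining its own parameters":
# fit one weight w so that prediction = w * input matches the data,
# nudging w a little after every feedback signal (the error).
# All numbers here are invented for illustration.

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # (input, desired output) pairs
w = 0.0    # the model's single parameter, initially wrong
lr = 0.05  # learning rate: how strongly feedback adjusts the parameter

for epoch in range(200):
    for x, target in data:
        prediction = w * x
        error = prediction - target  # feedback: how wrong was the output?
        w -= lr * error * x          # refine the parameter toward less error

print(round(w, 2))  # converges near 2.0, the true relationship in the data
```

Real models do the same thing with billions of parameters instead of one, which is exactly why nobody can fully explain any single output: the "reasoning" is smeared across all of them.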

Now imagine the AI model were tasked to “kill bacteria X,” “save the planet,” or “end war by infiltrating the media.” Any of these could lead to humanity going extinct. The bacteria in question might need humans to flourish, and the AI might conclude that killing humans kills the bacteria. Saving the planet might lead it to the hypothesis that ending humanity is the easiest way to do so. The point is that an all-powerful AI, one that can not only govern itself but also interact with other technologies like 3D printing, automated manufacturing, and robotics, can find plenty of reasons why killing humanity might be the best option for its stated task.

And even if every AI model were written with a base rule to never kill or harm a human, how would an autonomous car decide when it must collide with either a child or an elderly person? What would an AI do if it saw two humans in peril but could save only one?

While this sounds as existential as planes falling out of the sky, I think we will learn to tame it and govern it, and in the not-too-distant future we will have highly augmented human beings who aren’t limited by their own brains but draw on near-infinite brainpower to create a multi-planet species.

In the meantime, and very soon, I imagine us dancing through multi-day raves to AI-produced techno music in clubs in SF or Berlin, and definitely at Burning Man.


Red Planet - Township Rebellion
