The problem with Hollywood's A.I. is that there's no true danger or real substance to the reality it presents, or to its on-screen humanity. In actuality, if and when artificial intelligence (or even superintelligence) becomes a reality and then a danger to our survival, humanity as we know it will already be far gone. You don't need Stephen Hawking or Elon Musk to tell you this; it's the simple difference between autonomous thinking and our own complex subservience to it. Just look at how involved human beings are in the creation of driverless cars, or examine China's integration of A.I. into nuclear submarines. Artificial intelligence has taken the world by storm, and if you don't think there's at least a little danger in that, then you're sadly mistaken. But is it all that dangerous when compared to the likes of ecological disasters and, simply put, space?
For one, autonomy, whether in human thinking or in the mechanical world, can be among many existential risks, as the obvious assertion goes: you don't have to be intelligent to be dangerous. It's not technology's smarts, or even its ever-growing reach, but its autonomy (the ability to act freely) that makes it a terrifying asset. For those more interested in the tech itself, you might be wondering just what makes A.I. intelligent. That's a broad question spanning a wide variety of topics, but more often than not it comes down to turning something into someone through artificial cognitive function; A.I. is merely a mechanical form that mimics real life. That alone may have its negative aspects when you think broadly, but there are still plenty of things more dangerous to humanity than A.I...
While artificial intelligence itself may be the quintessential spark in human scientific and mechanical endeavors, I feel that nanotechnology is far more capable (and terrifying). Yes, it has some remarkably positive connotations in the likes of medicine, but when placed into the real world, nanotech could be used in menacing ways by the military or by corporations.
Nothing would be able to stop the growth and potential power inherent in nanotech if it were used maliciously, or to spread dread on behalf of overzealous governments; it's among the things more dangerous to humanity than A.I. Think about the possibilities nanotechnology provides: one could design a self-replicating substance, or "gray goo," that grows and grows without being stopped. Just read Wired's "Why the Future Doesn't Need Us" for more insight into the evil avenues of nanotech.
Even governmental control can be among the things more dangerous to humanity than A.I. As it stands, the world's pretty screwed up, and that's not Trump's fault. Worldwide, there's mass concern over oppression and fundamental rights. As the future approaches and grows more complex, we can begin to see just how badly our next hundred years may unravel if we can't correct contemporary failures, like oppression for starters.
There's also world hunger and mass violence to think about; if the state of our world gets any worse than it already is, you can bet the likelihood of A.I. becoming more dangerous than a country-wide mob is slim to none. Oh, and think of the potential for a worldwide collapse in governance: monetary systems, government, everything simply kaput overnight. That's much more terrifying than your Echo Dot gaining sentience.
Any type of interstellar object could easily wipe out the entire human race; an asteroid alone can just as easily do the same, if not worse. What you may be used to seeing are tiny rocks, maybe even clumps of ice (which are comets, not asteroids), but asteroids range in size from micro-dust to gargantuan, near-planetary bodies.
Just imagine the sight of a moon-sized asteroid on a collision course with Earth; while you may think we're ready for something like that, honestly, we aren't ready for shit. Asteroids are among the many things more dangerous to humanity than A.I., simply because we'd have zero resources to assist us if that were to happen. Just pray we don't see any asteroids on the horizon any time soon.
Elon Musk may have been the one to say it, and I quote from his Twitter page:
"We need to be super careful with AI. Potentially more dangerous than nukes."
That was posted in 2014, nearly four years ago, and this is still a topic of discussion. Well, sorry to break it to you, Mr. Musk, but nukes are still the most destructive items in existence and are very much among the things more dangerous to humanity than A.I.
Yes, artificial minds can have plenty of negative aspects if used in the wrong light, but a nuclear weapon? No matter how one uses them, or even doesn't, they're still all-powerful machines that can decimate parts of the globe within seconds, and the world currently has at least 3,000 of these operational. Yikes.
This may be a little outlandish and, quite frankly, dependent on a more sci-fi notion of reality than on reality itself, but it's still far more terrifying than artificial intelligence. Odds are, though, that life on other planets is a real thing; just look at this Kialo discussion page, which pretty much settles the question "Do aliens exist?"
Hell, if you really think about it, A.I. could be considered "alien" by our modern connotations, but when it comes to the things more dangerous to humanity than A.I., aliens win by a long shot. Here's a discussion question for you: would you rather combat super intelligent robots, or a super advanced alien race? If you were unsure, "robots" is the correct answer.
Did you ever see The Day After Tomorrow? Much like in the movie, humanity could very well be a goner, or at least a huge chunk of life on Earth could be, when pitted against a supermassive climate-based catastrophe. There's a great deal we simply don't yet know about the climate and its overall alterations. When considering things more dangerous to humanity than A.I., it's pretty obvious how this can only get worse if left unchecked.
The further it progresses (and worsens), the less likely we are as a whole to be prepared for the inevitable outcome. We could witness the next ice age, or worst of all, a tidal event of mass weather patterns. Imagine if a searing heat wave took up residence in the north, wiping out the polar ice caps and effectively sending a wave of melted ice across the surface of the planet. Not even artificial intelligence could stop that existential threat from happening.
This may go hand in hand with the alterations of climate, but ecological disaster can very well be another addition to the things more dangerous to humanity than A.I., especially when you take into account the possibilities climate change has to offer. Coalesced, the two together could start an ice age event, rendering technology a useless formality for humanity's continuance. Separately, ecological disasters come in multitudes and are far more dangerous than even climate change itself.
By way of the planet alone, earthquakes, tsunamis, mudslides, and hurricanes are merely four of the many ways our own Earth can kill us all off. Again, A.I. would be able to do nothing in this instance, either in helping us or in helping itself, rendering it useless. Now you may be wondering what the worst possible ecological disaster known to man is...
Ahh, the supervolcano. It hasn't quite been proven yet, but many scientists speculate that the Permian–Triassic extinction event may have been caused by this very ecological occurrence, which wiped out nearly 90 percent of the planet's species at the time. Still think A.I. is scary?
Any type of eruption is, by human standards, a drastic natural event that can severely disrupt order in a multitude of ways. A supervolcanic eruption is over ten times worse, and could lead to the downfall of political infrastructures, biosphere degradation, and the depletion of natural food sources. There's also nothing we know of that can help in such a situation, so its threat to humanity is large and plausible. While another Permian–Triassic extinction is highly unlikely in our lifetimes, the supervolcano is just another one of the many things more dangerous to humanity than A.I.
Despite the fact that artificial intelligence may have its own class of negative attributes, there's an entirely different field of science that deals with the creation of synthetic life forms through scientific theory. Epitomized by the likes of Black Mirror, technology has its negatives in the context of the future, but science is much more terrifying when put to the ultimate test: creation.
Synthetic biology is another addition to the things more dangerous to humanity than A.I., for such scientific endeavors could lead to the creation of deadly viruses, or a superweapon capable of ripping the planet to pieces. It's not so scary from today's point of reference, but given time, synthetic biology and the realities of the bioweapon are there (and real).
With that being said, a pandemic in and of itself is another addition among the things more dangerous to humanity than A.I., for the very connotations behind it are a science fiction writer's dream come true. From the early onset of a major virus, like influenza, AIDS, or even the black plague, to something even deadlier, like the aforementioned bioweapon, there's plenty of potential for the negatives when unearthing the concepts behind a pandemic.
Otherwise known as the "Black Death," the 1300s mass plague effectively killed off an estimated 75–200 million people! Do you think any artificial intelligence could realistically come close to that? It may have happened over a period of 50 years or so, but an epidemic as frightening as the black plague is still far more terrifying than the likes of A.I. And hell, let's throw the possibility of a zombie apocalypse into the ring too; it's clear we'd be screwed in any situation of the like.
Unknown Cosmic Anomalies
Look, if you're more worried about artificial intelligence being a potential downfall of society, then maybe you haven't looked up into the night sky before. There are more stars out there than grains of sand on our planet, which means there's an untold list of dangers beyond the obvious alien attack or asteroid calamity.
From gravitational anomalies to the more obvious chance of a black hole overtaking our world, these are real things that are more dangerous to humanity than A.I., and more readily plausible. There are anomalies floating about in the cosmos, widely unheard of still to this day, some of which have the potential to obliterate our planet on a whim. Solar flares, cosmic radiation, a tiny burst of gamma rays, rogue stars, hell, a freaking supernova could all very well wipe out the planet without any of us even knowing it. Not to mention our own sun could easily destroy the human race. Space has its own long list of potentially destructive calamities that make A.I. look like nothing more than a small speed bump on the road to humanity's destruction. At least A.I. could teach us a thing or two before wiping us out, unlike a magnetic solar storm, which would take us out in a matter of minutes.