The Singularity is upon us… (part deux)

I read this really interesting article titled “The AI Revolution: The Road to Super-intelligence”. It identifies three “calibers” of AI:

  • Artificial Narrow Intelligence (ANI) or Weak AI: this is where we are today with IBM Watson, Apple’s Siri, etc. Narrowly focused AIs that do one thing very well, like beating people at chess.
  • Artificial General Intelligence (AGI) or Strong AI: human-level intelligence that can perform any intellectual task a human can.
  • Artificial Super Intelligence (ASI): anything from a computer that’s just a little smarter than a human to one that’s trillions of times smarter, across the board.

It stipulates that the human tendency is to predict the future based on past history. We tend to take a linear view of the past and say, for example, “I invest in the stock market because stocks have returned an average of 10% annually for the past 100 years.” However, what does every prospectus say? Past performance is not a guarantee of future returns. What if we are reaching a point in time where everything changes? A further stipulation is that once one or more AGIs are attained, the leap to ASI will occur extremely quickly due to the Law of Accelerating Returns (a Ray Kurzweil construct). Basically, it means that each advancement accelerates the pace of further advancement. The bottom line is that many respected technologists are predicting we are 10-20 years away from AGI, which would put us 20-30 years away from ASI.
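To make the linear-versus-exponential gap concrete, here is a toy Python calculation (my own illustration, not from the article): forecasting a compounding quantity by extending its most recent increment, the way we instinctively extrapolate, misses the real outcome by orders of magnitude.

    # Toy illustration: a quantity that doubles every period (exponential)
    # versus a "linear view" forecast that just extends the latest increment.
    progress = 1.0
    history = [progress]
    for _ in range(10):
        progress *= 2                  # actual compounding growth
        history.append(progress)

    recent_step = history[-1] - history[-2]            # last observed increment
    linear_forecast = history[-1] + 10 * recent_step   # "past performance" view
    actual = history[-1] * 2 ** 10                     # what compounding delivers

    print(f"linear forecast after 10 more periods: {linear_forecast:,.0f}")  # 6,144
    print(f"actual after 10 more periods:          {actual:,.0f}")           # 1,048,576

The linear forecast isn’t just wrong, it’s wrong by more than two orders of magnitude, and the gap widens every period.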

I was discussing this with my son, Christian (CJ), who is a developer at BazaarVoice. He brought up the implications: what happens when you have an intelligence that can break any encryption technique, so that it knows everything about any person, business, or government that is stored digitally? And what happens if it acts on this information? It could move markets, topple governments, destroy people. Using a feedback loop of increasingly intelligent self-improvement, the ASI could advance its capabilities exponentially.

Despite these potentially dire scenarios, it’s likely that, at first, the ASI would be dependent on people, since we install, maintain, and repair hardware, power grids, and so on. We might develop an economic relationship in which we trade the ASI things it needs in return for things we need (benevolence?). CJ says that we would effectively be the creators of a new god for humanity. But, he asks, in the end what would make that god interested in humanity? As it gained independence from human resources, why would it continue to interact with us? Would it reach a state of transcendence devoid of humanity? Would it see humanity as an existential threat at some point?

So my question to him was: is it even ethical for us to be pursuing AI knowing that the result could be an ASI? Should we have a code of ethics governing such pursuit? How do we protect, at a minimum, people’s privacy? And, of course, as with nuclear weapons, what happens if we let bad actors attain AGI/ASI first? Then, of course, our conversation went metaphysical: the very nature of the universe, whether there is a single universe, and whether an ASI could create other universes that produce other ASIs. Pretty mind-boggling. But we’re on a precipice, and most people are unaware. And when it happens, it will likely just become the new normal.
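Before moving on, here is a crude illustration of the self-improvement feedback loop CJ described (my own toy sketch, not anything from our conversation): if each gain in capability also speeds up the next gain, growth doesn’t merely compound, it runs away after a finite number of steps.

    # Toy model of recursive self-improvement: the growth increment is
    # proportional to the square of current capability, i.e. smarter systems
    # improve faster. Unlike plain exponential growth, this blows up quickly.
    capability, k = 1.0, 0.1
    for step in range(1, 101):
        capability += k * capability ** 2   # the feedback loop
        if capability > 1e12:               # "trillions of times" the baseline
            print(f"runaway growth reached at step {step}")
            break
    else:
        print(f"capability after 100 steps: {capability:.3g}")

With these (arbitrary) parameters the threshold is crossed in under twenty steps, which is the intuition behind the claim that the AGI-to-ASI leap could be startlingly fast.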

Meanwhile, what are you doing to protect your information? Do you know what encryption is being used on your machine? Do you know who is connecting to your system, and from where? Are you prepared for the new normal?
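If you run SQL Server, those questions have concrete answers. Here is a minimal Python sketch (my own, not our product) that lists who is connected to the machine and asks the server whether each TDS session is actually encrypted. It assumes the third-party psutil and pyodbc packages are installed, an ODBC driver for SQL Server is present, and “MyServer” is a hypothetical server name you would replace with your own.

    # Sketch: answer "who is connecting?" and "is my traffic encrypted?"
    # Assumes: pip install psutil pyodbc; an ODBC driver for SQL Server;
    # VIEW SERVER STATE permission for the DMV query. "MyServer" is a
    # placeholder -- substitute your own server.
    import psutil
    import pyodbc

    # 1. Who is connecting to this machine? List established TCP sessions.
    for conn in psutil.net_connections(kind="tcp"):
        if conn.status == psutil.CONN_ESTABLISHED and conn.raddr:
            print(f"{conn.raddr.ip}:{conn.raddr.port} -> local port {conn.laddr.port}")

    # 2. Is the SQL Server traffic encrypted? Ask the server itself:
    #    sys.dm_exec_connections reports encrypt_option per session.
    cn = pyodbc.connect("DRIVER={ODBC Driver 17 for SQL Server};"
                        "SERVER=MyServer;Trusted_Connection=yes")
    for row in cn.execute("SELECT session_id, client_net_address, encrypt_option "
                          "FROM sys.dm_exec_connections"):
        print(row.session_id, row.client_net_address, row.encrypt_option)

If encrypt_option comes back FALSE for your application’s sessions, your TDS traffic is crossing the wire in the clear, which is exactly the gap the offering below addresses.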


Dramatically accelerate and secure your end-to-end SQL Server network traffic for:

  • SQL Server Tabular Data Stream (TDS)
  • Windows SMB File Transfer
  • Database Replication
  • Thick Clients / Client-Server / 2-Tiered Applications

Through our easy-to-deploy, endpoint-based software.
No configuration / No downtime.

View our White Papers
Free Trial >