Unchecked Artificial Intelligence Will Bring On Human Extinction

Futurist Michael Vassar explains why it makes sense to conclude that the creation of greater-than-human artificial intelligence would doom humanity; the only thing that could save us would be due caution and a framework put in place to prevent such an outcome. Yet Vassar notes that artificial intelligence itself isn’t the greatest risk to humanity. Rather, it’s “the absence of social, intellectual frameworks” through which experts making key discoveries and drawing analytical conclusions can swiftly and convincingly communicate those ideas to the public.