Futurist Michael Vassar explains why it makes sense to conclude that the creation of greater-than-human artificial intelligence would doom humanity. The only thing that could save us, he argues, is due caution and a framework designed to prevent such an outcome. Yet Vassar notes that artificial intelligence itself isn't the greatest risk to humanity. Rather, it's "the absence of social, intellectual frameworks" through which experts making key discoveries and drawing analytical conclusions can swiftly and convincingly communicate those ideas to the public.