Nvidia plans to expand beyond computer graphics into the field of artificial intelligence. To that end, the company is unveiling an unusual processor and computer designed to solve scientific problems at extremely high speed.
Tesla P100 – a high-performance chip
On Tuesday, the chip maker said it designed the new Tesla P100 chip for use in corporate data centers. Each chip packs 15 billion transistors to achieve very high performance. That transistor count is almost double that of Nvidia's prior high-end graphics processor, and of some of the new server chips Intel announced a week ago, the report said.
At Nvidia’s annual technology conference, chief executive Jen-Hsun Huang said, “It’s the largest chip that has ever been made.”
Unnamed cloud computing services will be the initial buyers of the chip, Huang predicts, and by next year the chips will arrive in servers sold by other companies. Meanwhile, Nvidia plans to offer a computer of its own, priced at $129,000, that will come with eight Tesla P100 chips and software for artificial intelligence applications.
Nvidia has named the new computer the DGX-1, and Huang claims it can process artificial intelligence tasks as fast as 250 servers powered by general-purpose chips such as Intel's, which would cost far more. A typical job that takes 150 hours to complete on one standard server would take just two hours on the DGX-1, says Huang.
Nvidia focusing on machine learning
For several years now, Nvidia has been trying to find uses for its GPUs, or graphics processing units, beyond video games. These chips feature hundreds of simple processors, compared with the one to 22 large calculating engines found on a typical microprocessor, the report says. Many large supercomputers already use them to solve scientific problems.
Artificial intelligence has recently become Huang's focus, and more specifically a technique called machine learning, which helps computers recognize images and spoken language. With conventional image recognition, programmers must explicitly define the characteristics of a face; machine-learning software instead lets computers learn to pick out faces by surveying huge quantities of so-called training photos, says the report.
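The contrast between hand-coded rules and learning from examples can be illustrated with a toy sketch. The following is not Nvidia's software, just a minimal nearest-centroid classifier in plain Python: the feature vectors and the "face"/"not_face" labels are invented for illustration, and the program never contains an explicit definition of a face, only prototypes averaged from labeled training examples.

```python
# Toy machine-learning sketch: a nearest-centroid classifier.
# Rather than hand-coding what a face looks like, the program
# learns one prototype vector per label from training examples.
# Features and labels below are hypothetical, for illustration only.

def train(examples):
    """Average the feature vectors for each label into a prototype."""
    sums, counts = {}, {}
    for features, label in examples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, value in enumerate(features):
            acc[i] += value
        counts[label] = counts.get(label, 0) + 1
    return {label: [v / counts[label] for v in acc]
            for label, acc in sums.items()}

def classify(prototypes, features):
    """Return the label whose prototype is closest (squared distance)."""
    def dist(proto):
        return sum((a - b) ** 2 for a, b in zip(proto, features))
    return min(prototypes, key=lambda label: dist(prototypes[label]))

# Pretend each "image" is three made-up numbers, e.g.
# [brightness, symmetry, edge density].
training = [
    ([0.9, 0.8, 0.7], "face"),
    ([0.8, 0.9, 0.6], "face"),
    ([0.2, 0.1, 0.3], "not_face"),
    ([0.1, 0.2, 0.2], "not_face"),
]
model = train(training)
print(classify(model, [0.85, 0.8, 0.65]))  # prints: face
```

Real systems learn millions of parameters from millions of photos rather than two averaged prototypes, which is exactly the kind of repetitive arithmetic that maps well onto the hundreds of simple processors on a GPU.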
Nvidia is already a seller of GPUs and special-purpose computers that use machine learning to help cars map their surroundings, detect hazards and drive on their own.