China's Gigantic Multi-Modal AI Is No One-Trick Pony


When OpenAI's GPT-3 model made its debut in May 2020, its performance was widely considered the literal state of the art. Capable of generating text indistinguishable from human-crafted prose, GPT-3 set a new standard in deep learning. But oh, what a difference a year makes. Researchers from the Beijing Academy of Artificial Intelligence (BAAI) announced on Tuesday the release of their own generative deep learning model, Wu Dao, a mammoth AI that appears capable of doing everything GPT-3 can do, and more.

First off, Wu Dao is flat-out enormous. It has been trained on 1.75 trillion parameters (essentially, the model's self-selected coefficients), a full ten times more than the 175 billion GPT-3 was trained on, and 150 billion parameters more than Google's Switch Transformer.
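A quick back-of-the-envelope check of those figures in Python (the 1.6 trillion count for the Switch Transformer is drawn from Google's published paper, not from this article):

# Parameter counts cited above, written out as plain integers
wu_dao = 1_750_000_000_000              # 1.75 trillion
gpt3 = 175_000_000_000                  # 175 billion
switch_transformer = 1_600_000_000_000  # 1.6 trillion (Google's published figure)

print(wu_dao / gpt3)                        # 10.0 -> "a full ten times more"
print((wu_dao - switch_transformer) / 1e9)  # 150.0 -> "150 billion parameters more"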

To train a model on this many parameters, and to do it quickly (Wu Dao 2.0 arrived just three months after version 1.0's release in March), the BAAI researchers first developed FastMoE, an open-source learning system akin to Google's Mixture of Experts. The system, which runs on PyTorch, allows the model to be trained both on clusters of supercomputers and on conventional GPUs. That gives FastMoE more flexibility than Google's system: because FastMoE does not require proprietary hardware such as Google's TPUs, it can run on off-the-shelf hardware, supercomputing clusters notwithstanding.
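For intuition about why a mixture-of-experts design makes trillion-parameter training tractable, here is a minimal generic sketch in plain PyTorch. This is not FastMoE's actual API; the expert count, top-1 gating scheme, and layer sizes are illustrative assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyMoE(nn.Module):
    """Minimal mixture-of-experts layer: a gate routes each token to its
    top-1 expert, so only a fraction of the total parameters are active
    for any given input."""
    def __init__(self, d_model=64, num_experts=4):
        super().__init__()
        self.gate = nn.Linear(d_model, num_experts)  # scores each expert per token
        self.experts = nn.ModuleList(
            nn.Linear(d_model, d_model) for _ in range(num_experts)
        )

    def forward(self, x):  # x: (tokens, d_model)
        weights = F.softmax(self.gate(x), dim=-1)  # (tokens, num_experts)
        top_w, top_idx = weights.max(dim=-1)       # top-1 routing decision
        out = torch.zeros_like(x)
        for i, expert in enumerate(self.experts):
            mask = top_idx == i                    # tokens routed to expert i
            if mask.any():
                out[mask] = top_w[mask].unsqueeze(-1) * expert(x[mask])
        return out

moe = TinyMoE()
tokens = torch.randn(8, 64)
print(moe(tokens).shape)  # torch.Size([8, 64])

Because each token activates only one expert, adding more experts grows the parameter count without growing the per-token compute, which is the basic trade that systems in this family exploit.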

With all that computing power comes a whole host of capabilities. Unlike most deep learning models, which perform a single task (writing copy, generating deepfakes, recognizing faces, winning at Go), Wu Dao is multi-modal, similar in theory to Facebook's anti-hate-speech AI or Google's recently released MUM. BAAI researchers demonstrated Wu Dao's ability to perform natural language processing, text generation, image recognition, and image generation tasks during the lab's annual conference on Tuesday. The model can not only write essays, poems, and couplets in traditional Chinese, it can also generate alt text from a static image and produce nearly photorealistic images from natural language descriptions. Wu Dao also showed off its ability to power virtual idols (with a little help from Microsoft spin-off XiaoIce) and predict the 3D structures of proteins, like AlphaFold.
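To make "multi-modal" concrete, here is a toy sketch of the common pattern of projecting text and images into one shared embedding space so a single model can relate prose to pictures. This is a generic illustration, not Wu Dao's architecture; every name and size below is made up.

import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyMultiModal(nn.Module):
    """Toy multi-modal model: separate encoders map text tokens and
    image patches into one shared space, enabling tasks like alt text
    for an image or an image for a description."""
    def __init__(self, vocab=1000, patch_dim=48, d_shared=32):
        super().__init__()
        self.text_enc = nn.Embedding(vocab, d_shared)
        self.image_enc = nn.Linear(patch_dim, d_shared)

    def similarity(self, token_ids, patches):
        t = self.text_enc(token_ids).mean(dim=0)  # pool text to one vector
        v = self.image_enc(patches).mean(dim=0)   # pool patches to one vector
        return F.cosine_similarity(t, v, dim=0)

model = TinyMultiModal()
score = model.similarity(torch.randint(0, 1000, (5,)), torch.randn(9, 48))
print(score.item())  # one number: how well the caption and the image align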

"The way to artificial general intelligence is big models and big computer," Dr. Zhang Hongjiang, chairman of BAAI, said during Tuesday's conference. "What we are building is a power plant for the future of AI. With mega data, mega computing power, and mega models, we can transform data to fuel the AI applications of the future."
