What is Beijing Academy of Artificial Intelligence (BAAI), China’s GPT-3 Rival?
Beijing’s plans for artificial intelligence are serious, perhaps more ambitious than those of OpenAI or DeepMind. Established in November 2018 in Beijing, China, the Beijing Academy of Artificial Intelligence (BAAI) is a non-profit research institute dedicated to promoting collaboration between academia and industry. Does that sound familiar?
Its aim is also to foster top talent and to focus on long-term research into the fundamentals of AI technology. As a collaborative hub, BAAI counts leading AI companies, Chinese universities and research institutes among its founding members. What happens in China doesn’t stay in China any longer.
- BAAI introduced its superscale intelligence model ‘Wu Dao 1.0’, which may well out-innovate GPT-3, the OpenAI model now being monetized by Microsoft.
- You can visit their site here. The current leader of the team is Ji Rong Wen.
- Technically speaking, the WuDao 2.0 natural language processing model has 1.75 trillion parameters, topping the 1.6 trillion that Google unveiled in a similar model in January.
- BAAI is essentially updating its model faster than OpenAI.
China is pressuring Alibaba to sell the SCMP, which could make it yet another propaganda device for the CCP.
China has been pouring money into AI to try to close the gap with the US, which maintains an edge thanks to its dominance in talent and semiconductors, among other advantages such as its top universities. China can’t buy its way to AI dominance, but within a generation it could fill the talent gap. Getting talent through immigration doesn’t appear to be China’s way.
China wants to be seen as the supreme power in next-gen AI, so many of the articles will state that China has surpassed (x rival in the United States).
Still, BAAI shows evidence of China’s significant concentration on becoming the world leader in AI by 2030. The Beijing Academy of Artificial Intelligence, styled as BAAI and known in Chinese as 北京智源人工智能研究院, launched the latest version of Wudao 悟道, a pre-trained deep learning model.
It’s not all about quantity, but China’s claim is interesting. Wudao has 150 billion more parameters than Google’s Switch Transformer, and is 10 times the size of OpenAI’s GPT-3, which is widely regarded as the best model in terms of language generation.
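The parameter gaps quoted above can be verified with simple arithmetic:

```python
# Parameter counts as reported in the article.
wudao_2 = 1.75e12            # Wudao 2.0: 1.75 trillion parameters
switch_transformer = 1.6e12  # Google's Switch Transformer (January 2021)
gpt3 = 175e9                 # OpenAI's GPT-3: 175 billion parameters

gap_vs_google = wudao_2 - switch_transformer  # 150 billion
ratio_vs_gpt3 = wudao_2 / gpt3                # 10x

print(f"Wudao 2.0 exceeds Switch Transformer by {gap_vs_google / 1e9:.0f} billion parameters")
print(f"Wudao 2.0 is {ratio_vs_gpt3:.0f}x the size of GPT-3")
```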
Can China Catch up in Deep Learning?
- Unlike conventional deep learning models that are usually task-specific, Wudao is a multi-modal model trained to tackle both text and image, two dramatically different sets of problems.
- At BAAI’s annual academic conference on Tuesday, the institution demonstrated Wudao performing tasks such as natural language processing, text generation, image recognition, image generation, etc.
China’s focus on multi-modal modeling is notable. However, comparing Wudao with GPT-3 is like comparing apples and oranges.
What can Wudao AI do?
The model is capable of writing poems and couplets in traditional Chinese styles, answering questions, writing essays, generating alt text for images, and generating corresponding images from natural-language descriptions with a decent level of photorealism.
It is even able to power “virtual idols” with the help of XiaoIce, a Chinese company spun off from Microsoft, so there can be voice support too, in addition to text and image.
Deep Learning Buzzwords
- “Multi-modal model” is currently a buzzword within the deep learning community, with researchers increasingly wanting to push the boundary towards what’s known as artificial general intelligence, or, simply put, AIs that are more than incredibly smart one-trick ponies.
- Google’s MUM, or Multitask Unified Model, unveiled two weeks ago at the Silicon Valley giant’s annual developer conference, is one recent example of a multi-modal model: it can answer complex questions and distill information from both text and images.
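To make the multi-modal idea concrete, here is a toy sketch — not MUM’s or Wudao’s actual architecture — of the basic pattern: each modality is encoded into a fixed-size vector, and the vectors are fused into one joint representation that a downstream task head can consume. The “encoders” below are deliberately trivial stand-ins for real neural networks.

```python
def embed_text(text, dim=4):
    # Toy "text encoder": fold character codes into a fixed-size vector.
    # A real model would use a trained transformer here.
    v = [0.0] * dim
    for i, ch in enumerate(text):
        v[i % dim] += ord(ch) / 100.0
    return v

def embed_image(pixels, dim=4):
    # Toy "image encoder": fold pixel intensities into a fixed-size vector.
    # A real model would use a trained vision backbone here.
    v = [0.0] * dim
    for i, p in enumerate(pixels):
        v[i % dim] += p / 255.0
    return v

def fuse(text_vec, image_vec):
    # Simplest possible fusion: concatenate the modality embeddings.
    return text_vec + image_vec

# One joint representation covering both text and image.
joint = fuse(embed_text("a cat"), embed_image([12, 200, 34, 90]))
print(len(joint))
```

The point of the sketch is only the shape of the pipeline: separate per-modality encoders feeding a shared representation, which is what lets one model handle text and images together.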
Wudao does suggest that China’s organization around artificial intelligence research is far stronger now, as of 2021. The latest iteration of the project was led by the non-profit research institute Beijing Academy of Artificial Intelligence (BAAI) and developed with more than 100 scientists from multiple organizations.
While China does not yet have the talent depth of the U.S. or its university pipeline of international students, it is starting to build a collaboration network and the funding to move fast in this domain through the 2020s and 2030s.
Why is Bigger Theoretically Better?
In multi-modal models, the total number of parameters matters.
Parameters are the variables a machine learning model learns during training. As the model trains, its parameters are refined so the algorithm gets better at finding the correct outcome over time. Once a model is trained on a specific data set, such as samples of human speech, what it has learned can then be applied to solving similar problems.
In general, the more parameters a model contains, the more sophisticated it is. However, creating a more complex model requires time, money, and research breakthroughs.
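As a rough illustration of what “parameters” means, the snippet below counts the weights and biases in a toy fully connected network (the layer sizes are arbitrary assumptions, chosen only for the example):

```python
def dense_layer_params(n_in, n_out):
    """Parameters of one fully connected layer: a weight matrix plus biases."""
    return n_in * n_out + n_out

# A toy 3-layer network: 512 -> 1024 -> 1024 -> 10.
layer_sizes = [512, 1024, 1024, 10]
total = sum(dense_layer_params(a, b)
            for a, b in zip(layer_sizes, layer_sizes[1:]))
print(total)  # roughly 1.6 million parameters for this tiny network
```

Models like GPT-3 or Wudao follow the same accounting, just with far larger and more numerous layers, which is how the totals climb into the hundreds of billions and trillions.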
Can Supercomputers Super-charge Multi-modal AI?
This model with 1.75 trillion parameters is already the 2.0 version of Wudao, whose first version was just launched less than 3 months ago. One of the main reasons the Chinese researchers made progress quickly was that they were able to tap into China’s supercomputing clusters, with the help of a few of its core members who also worked on the national supercomputing projects.
The BAAI has a scholarship program. BAAI supports researchers to solve the most basic and most important problems of AI development. The “BAAI Scholars Program”, which funds researchers from universities, research institutes, and enterprises, focuses on the original innovations and core technologies of AI.
China’s Emphasis on the Race to AI
In an era of fast-evolving AI models and China’s bet on deeper pockets, BAAI researchers claim to have broken the record set in January by Google’s Switch Transformer, which has 1.6 trillion parameters. OpenAI’s GPT-3 model made waves last year when it was released with 175 billion parameters, making it the largest NLP model at the time.
While China accuses the U.S. of adversarial activities, it is actually China that is fueling the propaganda around this arms race to AI supremacy. Its heavy-handed PR and might-makes-right diplomacy toward small nations is unfortunately one of the things that costs China international respect.
How Good is Wu Dao 2.0?
Regardless of China’s lack of social and diplomatic sophistication outside of its realm, let’s examine how good WuDao 2.0 could be.
WuDao 2.0 covers both Chinese and English with skills acquired by studying 4.9 terabytes of images and texts, including 1.2 terabytes each of Chinese and English texts. It already has 22 partners, including smartphone maker Xiaomi, on-demand delivery service provider Meituan and short-video giant Kuaishou.
“These sophisticated models, trained on gigantic data sets, only require a small amount of new data when used for a specific feature because they can transfer knowledge already learned into new tasks, just like human beings,” said Blake Yan, an AI researcher from Beijing. According to some academics in the field:
- Such models act as strategic infrastructure for AI development
- Large-scale pre-trained models are one of today’s best shortcuts to artificial general intelligence
How was Wudao Developed, and How does it Compare with Google?
- BAAI researchers developed and open-sourced a deep learning system called FastMoE, which allowed Wudao to be trained on both supercomputers and regular GPUs with significantly more parameters, giving the model, in theory, more flexibility than Google’s take on the MoE, or Mixture of Experts.
- This is because Google’s system requires the company’s dedicated TPU hardware and distributed training framework, while BAAI’s FastMoE works with at least one industry-standard open-source framework, namely PyTorch, and can be operated on off-the-shelf hardware.
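FastMoE’s internals are not detailed here, but the Mixture-of-Experts idea both systems build on can be sketched in a few lines: the model holds many expert sub-networks, and a learned gate routes each input to only one of them, so total parameters grow with the number of experts while compute per token stays roughly constant. The following is a toy top-1-gating illustration with made-up weights, not FastMoE’s or Google’s actual implementation:

```python
import random

random.seed(0)

NUM_EXPERTS, DIM = 4, 3

# Each "expert" here is just a weight vector; in a real MoE layer each
# expert is a full feed-forward sub-network.
experts = [[random.uniform(-1, 1) for _ in range(DIM)] for _ in range(NUM_EXPERTS)]
gate    = [[random.uniform(-1, 1) for _ in range(DIM)] for _ in range(NUM_EXPERTS)]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def moe_forward(x):
    # Top-1 gating: score every expert, but run only the best-scoring one.
    scores = [dot(g, x) for g in gate]
    best = max(range(NUM_EXPERTS), key=lambda i: scores[i])
    return best, dot(experts[best], x)

expert_id, y = moe_forward([0.5, -0.2, 0.9])
# Parameters scale with NUM_EXPERTS; each input activates one expert only.
print(expert_id, round(y, 3))
```

This routing trick is what lets parameter counts reach the trillions without a proportional increase in per-token compute, which is why both Switch Transformer and Wudao lean on it.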
AI Innovation will Accelerate Due to National Competition between China and America in the 2020s and 2030s
BAAI is a matter of national pride for China. BAAI is funded by the Beijing government, which put 340 million yuan (US$53.3 million) into the academy in 2018 and 2019 alone, pledging to continue its support, a Beijing official said in a 2019 speech. The actual numbers in 2021 could be much higher.
In 2022 the United States is likely to approve additional funding for AI and R&D to stay in front of China for as long as possible. However, with China’s economy on a trajectory to overtake the U.S.’s in the late 2020s, that sort of race seems likely to inevitably favor China in the 21st century.
Hopefully, by the time China achieves technological sophistication, its global behavior, diplomacy, and treatment of other countries and peoples will have improved considerably.
Synced covered this really well; you can read their article here.