On Monday, Ars Technica hosted the Ars Frontiers virtual conference. In our fifth session, we covered “The Lightning Onset of AI – What Suddenly Changed?” The panel featured a conversation with Paige Bailey, Principal Product Manager for Generative Models at Google DeepMind, and Haiyan Zhang, General Manager of Gaming AI at Xbox, moderated by Benj Edwards, Ars Technica’s AI reporter.
The panel was originally broadcast live, and you can now watch a recording of the entire event on YouTube. The “Lightning Onset of AI” segment begins at the 2:26:05 mark in the broadcast.
With “AI” being such a vague term, meaning different things in different contexts, we started the discussion by asking the panelists how they define AI and what it means to them. “I like to think of AI as helping to derive patterns from data and using them to predict insights,” Bailey said. “It’s nothing more than extracting insights from data and using them to make predictions and to provide more useful information.”
Zhang agreed, but from a video game perspective, she also views AI as an evolving creative force. For her, AI is not just about analyzing data, detecting patterns, and classifying them. It also develops abilities in creative language, image generation, and coding. Zhang believes this transformative power of AI can elevate and stimulate human creativity, especially in video games, which she sees as “the ultimate expression of human creativity.”
Next, we delved into the panel’s key question: What has changed to bring about this new era of AI? Is it all just hype, perhaps driven by ChatGPT’s high visibility, or has some major technological breakthrough ushered in this new wave?
Zhang pointed to advances in model design and the vast amounts of data now available for training: “We’ve seen breakthroughs in model architecture with transformer models, the availability of large datasets to then train these models, and thirdly, the availability of hardware such as GPUs and TPUs to really be able to take that data and train the models at new capabilities of compute.”
Bailey echoed these sentiments, adding a notable nod to open source contributions: “We also have this vibrant community of open source tinkerers who are taking models like LLaMA and fine-tuning them with high-quality instruction-tuning and RLHF datasets.”
When asked to explain the importance of open-source collaboration in accelerating the progress of AI, Bailey mentioned the widespread use of open-source machine learning frameworks such as PyTorch, JAX, and TensorFlow. She also emphasized the importance of sharing best practices, saying, “I definitely think this machine learning community only exists because people share their ideas, opinions, and code.”
When asked about Google’s plans for open source models, Bailey pointed to Google Research’s resources on GitHub and highlighted its partnership with Hugging Face, the online AI community. “I don’t want to give away anything that might be coming down the pike,” she said.
Generative artificial intelligence on game consoles, and the dangers of artificial intelligence
As part of a conversation about developments in AI hardware, we asked Zhang how long it would be before generative AI models could run natively on consoles. She said she was excited about the prospect and noted that a hybrid cloud-client configuration might come first: “I think it will be a combination of working on AI inference in the cloud, working collaboratively with local inference, to bring the best player experiences to life.”
Bailey noted progress in shrinking Meta’s LLaMA language model to run on mobile devices, hinting that a similar path might open up the possibility of running AI models on game consoles as well: “I would love to have a highly customized large language model running on a portable device, or running on my game console, that maybe makes a boss particularly hard for me to beat, but might be easier for somebody else to beat.”
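Shrinking a large language model to fit on a phone or console usually starts with weight quantization: storing weights as small integers plus a scale factor instead of 32-bit floats. Neither panelist described a specific method, so the following is only a minimal sketch of symmetric per-tensor int8 quantization, a common baseline technique, not Meta’s or Google’s actual pipeline:

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Symmetric per-tensor int8 quantization: map floats to
    [-127, 127] integers plus one float32 scale factor."""
    scale = np.abs(weights).max() / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover an approximation of the original float weights."""
    return q.astype(np.float32) * scale

# A toy "layer" of weights: int8 storage is 4x smaller than float32.
w = np.random.randn(256, 256).astype(np.float32)
q, scale = quantize_int8(w)
w_approx = dequantize(q, scale)
max_err = np.abs(w - w_approx).max()  # rounding error is at most scale / 2
```

Real on-device runtimes go further (per-channel scales, 4-bit formats, quantization-aware fine-tuning), but the core trade of precision for memory is the same.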
Continuing, we asked whether a generative AI model running locally on a smartphone would cut Google out of the equation. “I think there’s probably room for a variety of options,” Bailey said. “I think there have to be options available for all of these things to coexist meaningfully.”
When discussing societal risks from AI systems, such as misinformation and deepfakes, both speakers said their companies are committed to the responsible and ethical use of AI. “At Google, we take great care to make sure that the models we produce are responsible and behave as ethically as possible, and we really integrate our responsible AI team from day one whenever we train models, from curating our data to making sure the right pre-training mix is created,” Bailey explained.
Despite her earlier enthusiasm for open-source models and locally run AI, Bailey noted that API-based AI models that run only in the cloud may be safer overall: “I think there is a significant risk of misuse of models in the hands of people who may not necessarily understand or be aware of the risks. That’s also part of the reason why it sometimes helps to prefer APIs over open source models.”
Like Bailey, Zhang also discussed Microsoft’s approach to responsible AI, but also noted the ethical challenges specific to gaming, such as ensuring that AI features are inclusive and accessible.
Audience questions about artificial general intelligence and dataset sources
Towards the end of the session, Bailey and Zhang took questions from our audience, which were submitted through YouTube comments and selected by the moderator. Due to the length of the answers and lack of time, they only answered two, but the questions addressed popular concerns about artificial intelligence.
The first question concerned artificial general intelligence (AGI), which many define as an AI agent capable of performing virtually any intellectual task that a highly skilled human can. A viewer asked, “How do we allay fears about artificial general intelligence?”
Bailey noted that while AGI is an “ill-defined” term, progress has been made toward models that can handle a wide range of tasks. “If we’re just talking about creating a generally useful model, we’re almost there,” Bailey said. However, she said the kind of general artificial intelligence often depicted in science fiction “is still a long way off, if it ever arrives, and is something that we as an industry need to pay attention to and start building processes around.”
Bailey stressed the need for responsible AI features and safeguards, and urged the industry to work on clearer definitions of AI to help lawmakers develop effective regulatory standards. She also addressed concerns about replacing jobs with AI, emphasizing that AI is likely to boost productivity and create new roles, likening its potential use to “having a little graduate student as part of my little research lab.”
Echoing Bailey’s sentiments, Zhang encouraged public dialogue about AI’s impact and role in society. She particularly focused on people’s tendency to anthropomorphize artificial intelligence, using the classic game Pac-Man to illustrate the point: “I mean, when you play Pac-Man and those ghosts are chasing you, you’re cursing those ghosts as if they’re alive, and you’re projecting humanity and personality onto these synthetic beings, which are just rule-based algorithms, right?”
In the final part of the discussion, panelists were asked if they had any concerns about AI models being trained using public data without the creators’ consent—essentially, scraping internet content to fuel these powerful models.
Bailey provided an example of how her team at Google has dealt with this problem in their Bard project. They train their models on publicly available data and code, but have built the concept of recitation checking into their tool. This means that if any generated code matches something in a public repository such as GitHub, the model supplies the URL back to the source, giving attribution to the author and identifying the license that covers the code. This, according to Bailey, also helps users discover projects and functions they may not be aware of.
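Bailey didn’t describe how recitation checking is implemented, but the idea can be imagined as an n-gram lookup against an index of licensed public code. Everything below, including the index structure, the `index_snippet` helper, and the example URL, is hypothetical and just sketches the concept:

```python
# Hypothetical sketch of a "recitation check": if generated output
# overlaps an indexed public snippet, return its source URL and license.

def ngrams(tokens, n=8):
    """All contiguous n-token windows of a token list, as a set."""
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

# Toy index mapping code n-grams to (url, license) for public snippets.
PUBLIC_INDEX = {}

def index_snippet(code: str, url: str, license_name: str, n=8):
    """Add every n-gram of a public snippet to the lookup index."""
    for gram in ngrams(code.split(), n):
        PUBLIC_INDEX[gram] = (url, license_name)

def attribute(generated: str, n=8):
    """Return (url, license) pairs for any indexed n-grams recited
    verbatim in the generated text."""
    return {PUBLIC_INDEX[g] for g in ngrams(generated.split(), n)
            if g in PUBLIC_INDEX}

# Example: index one "public" snippet, then check a model output
# that happens to reproduce it verbatim.
snippet = ("def quicksort(xs): return xs if len(xs) < 2 else "
           "quicksort([x for x in xs if x < xs[0]]) + rest")
index_snippet(snippet, "https://github.com/example/algos", "MIT")
hits = attribute("here is one way to sort a list " + snippet)
```

A production system would match on normalized tokens and much larger corpora, but the core behavior, flagging verbatim overlap and surfacing its origin and license, is the same.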
“I think there are ways to provide credit, and then also to be careful and only include data that the authors have expressly licensed for your pre-training mix,” Bailey said.
Zhang, in turn, cited Bing Chat as an example of a Microsoft AI product that cites public data. She said Microsoft designed the Bing AI platform from the start to ground all information in actual internet sources and to attribute every answer to its original page: “I think this is central to how we think about developing products with generative AI [at Microsoft]. With these different forms of GPT models, we think about that inclusion and make sure every creator and every contributor is supported and feels that their content is respected.”