This Week in AI, July 31: AI Titans Pledge Responsible Innovation • The Beluga Invasion

"This Week in AI" on KDnuggets provides a weekly roundup of the latest happenings in the world of artificial intelligence. Covering a wide range of topics, from recent headlines and scholarly articles to educational resources and spotlight research, the post is designed to keep readers up to date and informed about the ever-evolving field of AI.



Hitting the mark with AI
Image created by Editor with BlueWillow

 

Welcome to the inaugural edition of "This Week in AI" on KDnuggets. This curated weekly post aims to keep you abreast of the most compelling developments in the rapidly advancing world of artificial intelligence. From groundbreaking headlines that shape our understanding of AI's role in society to thought-provoking articles, insightful learning resources, and spotlighted research pushing the boundaries of our knowledge, this post provides a comprehensive overview of AI's current landscape. Expect a diverse range of topics that reflect the vast and dynamic nature of the field. This is just the first of many weekly updates to come. Stay tuned and happy reading!

 

Headlines

 
The "Headlines" section discusses the top news and developments from the past week in the field of artificial intelligence. The information ranges from governmental AI policies to technological advancements and corporate innovations in AI.

 
AI Titans Pledge Responsible Innovation Under Biden-Harris Administration

The Biden-Harris Administration has secured voluntary commitments from seven leading AI companies (Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI) to ensure the safe, secure, and transparent development of AI technology. These commitments rest on three principles fundamental to the future of AI: safety, security, and trust. The companies have agreed to conduct internal and external security testing of their AI systems before release, share information on managing AI risks, and invest in cybersecurity. They have also committed to developing technical mechanisms that let users know when content is AI-generated, and to publicly reporting their AI systems' capabilities, limitations, and areas of appropriate and inappropriate use. This move is part of a broader commitment by the Biden-Harris Administration to ensure AI is developed safely and responsibly, and to protect Americans from harm and discrimination.

 
Stability AI Unveils Stable Beluga: The New Workhorses of Open Access Language Models

Stability AI and its CarperAI lab have announced Stable Beluga 1 and Stable Beluga 2, two powerful, open-access large language models (LLMs). The models are based on the original LLaMA 65B and LLaMA 2 70B foundation models, respectively, and were fine-tuned on a new synthetically generated dataset using supervised fine-tuning (SFT) in standard Alpaca format. The training methodology was inspired by Microsoft's paper "Orca: Progressive Learning from Complex Explanation Traces of GPT-4." Despite being trained on one-tenth the sample size of the original Orca work, the Stable Beluga models demonstrate exceptional reasoning ability across varied benchmarks. As of July 27, 2023, Stable Beluga 2 held the top spot on the leaderboard, with Stable Beluga 1 in fourth place.

 
Spotify CEO Hints at Future AI-Driven Personalization and Ad Capabilities

During Spotify's second-quarter earnings call, CEO Daniel Ek hinted at the potential introduction of additional AI-powered functionality to the streaming service. Ek suggested that AI could be used to create more personalized experiences, summarize podcasts, and generate ads. He highlighted the success of the recently launched DJ feature, which delivers a curated selection of music alongside AI-powered commentary about the tracks and artists. Ek also mentioned the potential use of generative AI to summarize podcasts, making it easier for users to discover new content. Furthermore, Ek discussed the possibility of AI-generated audio ads, which could significantly reduce the cost for advertisers to develop new ad formats. These comments come as Spotify seeks a patent for an AI-powered "text-to-speech synthesis" system, which can convert text into human-like speech audio that incorporates emotion and intention.

 

Articles

 
The "Articles" section presents an array of thought-provoking pieces on artificial intelligence. Each article dives deep into a specific topic, offering readers insights into various aspects of AI, including new techniques, revolutionary approaches, and ground-breaking tools.

 
ChatGPT Code Interpreter: Do Data Science in Minutes

This KDnuggets article introduces ChatGPT's Code Interpreter plugin, a tool that can analyze data, write Python code, and build machine learning models. The author, Natassha Selvaraj, demonstrates how the plugin can automate various data science workflows, including data summarization, exploratory data analysis, data preprocessing, and model building. The Code Interpreter can also be used to explain, debug, and optimize code. Natassha emphasizes that while the tool is powerful and efficient, it should be treated as a baseline for data science tasks, as it lacks domain-specific knowledge and cannot handle large datasets residing in SQL databases. She suggests that entry-level and aspiring data scientists should learn to leverage tools like Code Interpreter to make their work more efficient.

 
Textbooks Are All You Need: A Revolutionary Approach to AI Training

This KDnuggets article discusses a new approach to AI training proposed by Microsoft researchers, which involves using a synthetic textbook instead of massive datasets. The researchers trained a model called Phi-1 entirely on a custom-made textbook and found that it performed impressively well in Python coding tasks, despite being significantly smaller than models like GPT-3. This suggests that the quality of training data can be as important as the size of the model. The Phi-1 model's performance also improved when fine-tuned with synthetic exercises and solutions, indicating that targeted fine-tuning can enhance a model's capabilities beyond the tasks it was specifically trained for. This suggests that this textbook-based approach could revolutionize AI training by shifting the focus from creating larger models to curating better training data.

 
Latest Prompt Engineering Technique Inventively Transforms Imperfect Prompts Into Superb Interactions For Using Generative AI

The article discusses a new technique in prompt engineering that encourages the use of imperfect prompts. The author argues that the pursuit of perfect prompts can be counterproductive and that it's often more practical to aim for "good enough" prompts. Generative AI applications use probabilistic and statistical methods to parse prompts and generate responses. Therefore, even if the same prompt is used multiple times, the AI is likely to produce different responses each time. The author suggests that rather than striving for a perfect prompt, users should make use of imperfect prompts and aggregate them to create effective prompts. The article references a research study titled "Ask Me Anything: A Simple Strategy For Prompting Language Models" which proposes a method of turning imperfect prompts into robust ones by aggregating the predictions of multiple effective, yet imperfect, prompts.
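The aggregation idea behind "Ask Me Anything" can be sketched with a toy example: pose several imperfect phrasings of the same question to a model and take a majority vote over the answers, so that no single flawed prompt dominates. The `ask_model` function below is a hypothetical stand-in for a real generative-AI call, and the canned responses are illustrative only; this is a minimal sketch of the voting step, not the full method from the paper.

```python
from collections import Counter

def aggregate_answers(prompts, ask_model):
    """Query the model with several imperfect prompts and majority-vote the answers."""
    answers = [ask_model(p) for p in prompts]
    winner, _count = Counter(answers).most_common(1)[0]
    return winner

def ask_model(prompt):
    # Hypothetical stand-in for a real model API; responses are hard-coded
    # to illustrate that differently phrased prompts can disagree.
    canned = {
        "Is Paris the capital of France? Answer yes or no.": "yes",
        "France's capital city is Paris, true or false?": "yes",
        "Capital of France?": "Paris",  # an off-format (imperfect) response
    }
    return canned[prompt]

prompts = [
    "Is Paris the capital of France? Answer yes or no.",
    "France's capital city is Paris, true or false?",
    "Capital of France?",
]
print(aggregate_answers(prompts, ask_model))  # majority answer: "yes"
```

In practice, the paper's approach reformulates prompts into a common question-answering format before aggregating, so that the votes are comparable; the majority vote shown here is the simplest possible aggregator.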

 

Learning Resources

 
The "Learning Resources" section lists useful educational content for those eager to expand their knowledge in AI. The resources, ranging from comprehensive guides to specialized courses, cater to both beginners and seasoned professionals in the field of AI.

 
LLM University by Cohere: Your Gateway to the World of Large Language Models

Cohere's LLM University is a comprehensive learning resource for developers interested in natural language processing (NLP) and large language models (LLMs). The curriculum first builds a solid foundation in NLP and LLMs, then applies that knowledge to developing practical applications. It is divided into four main modules: "What are Large Language Models?", "Text Representation with Cohere Endpoints", "Text Generation with Cohere Endpoints", and "Deployment". Whether you're a new machine learning engineer or an experienced developer looking to expand your skills, LLM University offers a thorough guide to the world of NLP and LLMs.

 
Free From Google: Generative AI Learning Path

Google Cloud has released the Generative AI Learning Path, a collection of free courses spanning everything from the basics of generative AI to more advanced tools like Generative AI Studio. The learning path includes seven courses: "Introduction to Generative AI", "Introduction to Large Language Models", "Introduction to Image Generation", "Attention Mechanism", "Transformer Models and BERT Model", "Create Image Captioning Models", and "Introduction to Generative AI Studio".

 

Research Spotlight

 
The "Research Spotlight" section highlights significant research in the realm of AI, covering breakthrough studies, new theories, and the potential implications and future directions of the field.

 
The Role of Large Language Models in the Evolution of Data Science Education

The research paper titled "The Role of Large Language Models in the Evolution of Data Science Education" discusses the transformative impact of Large Language Models (LLMs) on the roles and responsibilities of data scientists. The authors argue that the rise of LLMs is shifting the focus of data scientists from hands-on coding to managing and assessing analyses performed by automated AI systems. This shift necessitates a significant evolution in data science education, with a greater emphasis on cultivating diverse skillsets among students. These include creativity informed by LLMs, critical thinking, programming guided by AI, and interdisciplinary knowledge.

The authors also propose that LLMs can play a significant role in the classroom as interactive teaching and learning tools, contributing to personalized education and richer learning experiences. However, integrating LLMs into education requires careful consideration, balancing their benefits with the need to foster complementary human expertise and innovation. The paper suggests that the future of data science education will likely involve a symbiotic relationship between human learners and AI models, where both entities learn from and enhance each other's capabilities.