This Week in AI, August 18: OpenAI in Financial Trouble • Stability AI Announces StableCode

"This Week in AI" on KDnuggets provides a weekly roundup of the latest happenings in the world of Artificial Intelligence. Covering a wide range of topics from recent headlines, scholarly articles, educational resources, to spotlight research, the post is designed to keep readers up-to-date and informed about the ever-evolving field of AI.



Image created by Editor with Midjourney

 

Welcome to this week's edition of "This Week in AI" on KDnuggets. This curated weekly post aims to keep you abreast of the most compelling developments in the rapidly advancing world of artificial intelligence. From groundbreaking headlines that shape our understanding of AI's role in society to thought-provoking articles, insightful learning resources, and spotlighted research pushing the boundaries of our knowledge, this post provides a comprehensive overview of AI's current landscape. Stay tuned and happy reading!

 

Headlines

 
The "Headlines" section discusses the top news and developments from the past week in the field of artificial intelligence. The information ranges from governmental AI policies to technological advancements and corporate innovations in AI.

 
ChatGPT In Trouble: OpenAI may go bankrupt by 2024, AI bot costs company $700,000 every day

OpenAI is facing financial trouble due to the high costs of running ChatGPT and other AI services. Despite rapid early growth, ChatGPT's user base has declined in recent months. OpenAI is struggling to effectively monetize its technology and generate sustainable revenue. Meanwhile, it continues to burn through cash at an alarming rate. With competition heating up and enterprise GPU shortages hindering model development, OpenAI needs to urgently find pathways to profitability. If it fails to do so, bankruptcy may be on the horizon for the pioneering AI startup.

 
Stability AI Announces StableCode, An AI Coding Assistant for Developers

Stability AI has released StableCode, its first generative AI product optimized for software development. StableCode incorporates multiple models trained on over 500 billion tokens of code to provide intelligent autocompletion, respond to natural language instructions, and manage long spans of code. While conversational AI can already write code, StableCode is purpose-built to boost programmer productivity by understanding code structure and dependencies. With its specialized training and models that can handle long contexts, StableCode aims to enhance developer workflows and lower the barrier to entry for aspiring coders. The launch represents Stability AI's foray into AI-assisted coding tools amidst growing competition in the space.
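
If you want to try StableCode yourself, the models are published on Hugging Face. Below is a minimal sketch of prompting the completion model with the transformers library; the model ID and generation settings follow Stability AI's model card as I understand it, and are assumptions rather than part of the announcement.

```python
# Minimal sketch: code autocompletion with StableCode via Hugging Face
# transformers. The model ID below is the completion model Stability AI
# published; check the model card if it has moved.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "stabilityai/stablecode-completion-alpha-3b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16)

# Ask the model to continue a partially written function
prompt = "def fibonacci(n):\n"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64, temperature=0.2, do_sample=True)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```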

 
Introducing Superalignment by OpenAI

OpenAI is proactively working to address potential risks from superintelligent AI through their new Superalignment team, which is using techniques like reinforcement learning from human feedback to align AI systems. Key goals are developing scalable training methods leveraging other AI systems, validating model robustness, and stress testing the full alignment pipeline even with intentionally misaligned models. Overall, OpenAI aims to show machine learning can be conducted safely by pioneering approaches to responsibly steer superintelligence.
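
To make the reference to reinforcement learning from human feedback concrete: the heart of a typical RLHF pipeline is a reward model trained on human preference pairs. The sketch below shows the standard pairwise (Bradley-Terry) loss on toy scores; it is a generic illustration of the technique, not OpenAI's code.

```python
# Toy illustration of the pairwise preference loss used to train an RLHF
# reward model: push the score of the human-preferred response above the
# score of the rejected response for each pair.
import torch
import torch.nn.functional as F

# Hypothetical reward-model scores for (preferred, rejected) response pairs
preferred_scores = torch.tensor([1.8, 0.4, 2.1])
rejected_scores = torch.tensor([0.9, 0.7, -0.3])

# Bradley-Terry style loss: -log sigmoid(r_preferred - r_rejected)
loss = -F.logsigmoid(preferred_scores - rejected_scores).mean()
print(f"preference loss: {loss.item():.4f}")
```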

 
Learn as you search (and browse) using generative AI

Google is announcing several updates to its Search Generative Experience (SGE) capabilities, including hover definitions for science/history topics, color-coded syntax highlighting for code overviews, and an early experiment called "SGE while browsing" that summarizes key points and helps users explore pages when reading long-form content on the web. These aim to enhance understanding of complex topics, improve digestion of coding information, and aid navigation and learning as users browse. The updates represent Google's continued efforts to evolve its AI search experience based on user feedback, with a focus on comprehension and extracting key details from complex web content.

 
Together.ai extends LLaMA-2 to a 32K context window

LLaMA-2-7B-32K is an open-source, long context language model developed by Together Computer that extends the context length of Meta's LLaMA-2 to 32K tokens. It leverages optimizations like FlashAttention-2 to enable more efficient inference and training. The model was pre-trained using a mixture of data including books, papers, and instructional data. Examples are provided for fine-tuning on long-form QA and summarization tasks. Users can access the model via Hugging Face or use the OpenChatKit for customized fine-tuning. Like all language models, LLaMA-2-7B-32K can generate biased or incorrect content, requiring caution in use.
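
For those who want to experiment, a minimal sketch of loading the model from the Hugging Face Hub is below. The repository ID and the trust_remote_code flag follow the published model card; in practice, long-context inference also benefits from the FlashAttention-2 setup described there.

```python
# Minimal sketch: load Together's 32K-context LLaMA-2 variant from the Hub.
# trust_remote_code is required because the repo ships custom attention code.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "togethercomputer/LLaMA-2-7B-32K"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, trust_remote_code=True, torch_dtype="auto"
)

# With a 32K window, an entire long document fits in a single prompt
long_document = "..."  # placeholder for a document of up to ~32K tokens
prompt = f"{long_document}\n\nSummarize the document above in three sentences:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```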

 

Articles

 
The "Articles" section presents an array of thought-provoking pieces on artificial intelligence. Each article dives deep into a specific topic, offering readers insights into various aspects of AI, including new techniques, revolutionary approaches, and ground-breaking tools.

 
LangChain Cheat Sheet

With LangChain, developers can build capable AI language-based apps without reinventing the wheel. Its composable structure makes it easy to mix and match components like LLMs, prompt templates, external tools, and memory. This accelerates prototyping and allows seamless integration of new capabilities over time. Whether you're looking to create a chatbot, QA bot, or multi-step reasoning agent, LangChain provides the building blocks to assemble advanced AI rapidly.
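
As a taste of that composability, here is a minimal sketch of a prompt template feeding an LLM inside a chain. It assumes an OPENAI_API_KEY is set in the environment, and the imports reflect the 2023-era LangChain API, which may have since moved.

```python
# Minimal sketch of LangChain's composable pieces: a reusable prompt
# template wired to an LLM through a chain.
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

# Reusable prompt with a single input variable
prompt = PromptTemplate(
    input_variables=["topic"],
    template="Explain {topic} to a junior developer in two sentences.",
)

chain = LLMChain(llm=OpenAI(temperature=0.7), prompt=prompt)
print(chain.run(topic="vector databases"))
```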

 
How to Use ChatGPT to Convert Text into a PowerPoint Presentation

The article outlines a two-step process for using ChatGPT to convert text into a PowerPoint presentation, first summarizing the text into slide titles and content, then generating Python code to convert the summary to PPTX format using the python-pptx library. This allows rapid creation of engaging presentations from lengthy text documents, overcoming tedious manual efforts. Clear instruction is provided on crafting the ChatGPT prompts and running the code, offering an efficient automated solution for presentation needs.
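
The second step of that workflow is plain python-pptx. A minimal sketch of turning a model-generated summary into slides might look like the following, where the slide data is a stand-in for whatever ChatGPT returns.

```python
# Minimal sketch: turn (title, bullet-content) pairs from a ChatGPT summary
# into a .pptx file using python-pptx (pip install python-pptx).
from pptx import Presentation

# Stand-in for the structured summary ChatGPT produces in step one
slides_data = [
    ("Quarterly Results", "Revenue grew 12%\nCosts held flat"),
    ("Next Steps", "Expand the pilot program\nHire two engineers"),
]

prs = Presentation()
layout = prs.slide_layouts[1]  # built-in "Title and Content" layout

for title, body in slides_data:
    slide = prs.slides.add_slide(layout)
    slide.shapes.title.text = title
    slide.placeholders[1].text = body  # body placeholder on this layout

prs.save("presentation.pptx")
```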

 
Open challenges in LLM research

The article provides an overview of 10 key research directions to improve large language models: reducing hallucination, optimizing context length/construction, incorporating multimodal data, accelerating models, designing new architectures, developing GPU alternatives like photonic chips, building usable agents, improving learning from human feedback, enhancing chat interfaces, and expanding to non-English languages. It cites relevant papers across these areas, noting challenges like representing human preferences for reinforcement learning and building models for low-resource languages. The author concludes that while some issues like multilinguality are more tractable, others like architecture will require more breakthroughs. Overall, both technical and non-technical expertise across researchers, companies and the community will be critical to steer LLMs positively.

 
Why You (Probably) Don’t Need to Fine-tune an LLM

The article argues that fine-tuning a large language model is rarely the right first step. For most applications, lighter-weight techniques such as prompt engineering, few-shot examples, and retrieval-augmented generation can achieve the desired behavior at far lower cost, while fine-tuning demands labeled data, compute, and ongoing maintenance. Fine-tuning is best reserved for cases where these simpler approaches have genuinely been exhausted.

 
Best Practices to Use OpenAI GPT Model

The article outlines best practices for obtaining high-quality outputs when using OpenAI's GPT models, drawing on community experience. It recommends providing detailed prompts with specifics like length and persona; multi-step instructions; examples to mimic; references and citations; time for critical thinking; and code execution for precision. Following these tips on instructing the models, such as specifying steps and personas, can lead to more accurate, relevant, and customizable results. The guidance aims to help users structure prompts effectively to get the most out of OpenAI's powerful generative capabilities.
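
To see several of those tips in one place, here is a minimal sketch combining a persona, explicit steps, and a length constraint. It uses the 2023-era openai.ChatCompletion interface; newer SDK versions expose a different client, so treat the call shape as an assumption tied to that release.

```python
# Minimal sketch of the article's tips in practice: persona, multi-step
# instructions, and an explicit length limit in a single request.
import openai

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        # Persona: tell the model who it should be
        {"role": "system",
         "content": "You are a senior data engineer who explains trade-offs plainly."},
        # Multi-step instructions with a length constraint
        {"role": "user",
         "content": (
             "Step 1: List three ways to deduplicate records in a data pipeline.\n"
             "Step 2: Recommend one and justify the choice.\n"
             "Keep the full answer under 150 words."
         )},
    ],
)
print(response.choices[0].message.content)
```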

 
We're All Wrong About AI

The author argues that current AI capabilities are underestimated, using examples like creativity, search, and personalization to counter common misconceptions. He states that AI can be creative by recombining concepts, not merely generating random ideas; it is not just a supercharged search engine like Google; and it can develop personalized relationships, not just generic skills. While unsure which applications will prove most useful, the author urges an open mind rather than dismissiveness, emphasizing that the best way to determine AI's potential is by continued hands-on exploration. He concludes that our imagination around AI is limited and its uses likely far exceed current predictions.

 

Tools

 
The "Tools" section lists useful apps and scripts created by the community for those who want to get busy with practical AI applications. Here you will find a range of tool types, from large comprehensive code bases to small niche scripts. Note that tools are shared without endorsement, and with no guarantee of any sort. Do your own homework on any software prior to installation and use!

 
MetaGPT: The Multi-Agent Framework

MetaGPT takes a one-line requirement as input and outputs user stories, competitive analysis, requirements, data structures, APIs, documents, and more. Internally, MetaGPT includes product managers, architects, project managers, and engineers. It provides the entire process of a software company, along with carefully orchestrated SOPs.
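
A one-line requirement really is the whole interface. The invocation below follows the project's README at the time of writing (a startup.py entry point in the cloned repo); check the repository for the current CLI, as this is an assumption that may have changed.

```python
# Minimal sketch: hand MetaGPT a one-line requirement from Python. Assumes
# the repo is cloned and dependencies installed; "startup.py" is the entry
# point documented in the 2023 README.
import subprocess

requirement = "Build a CLI-based snake game"
subprocess.run(["python", "startup.py", requirement], check=True)
```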

 
GPT LLM Trainer

The goal of this project is to explore an experimental new pipeline for training a high-performing, task-specific model. We try to abstract away all the complexity, so it's as easy as possible to go from an idea to a performant, fully-trained model.

Simply input a description of your task, and the system will generate a dataset from scratch, parse it into the right format, and fine-tune a LLaMA 2 model for you.
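
The first stage of that pipeline, synthesizing a dataset from a one-line task description, is easy to picture. The sketch below is a generic illustration using the 2023-era openai SDK, not the notebook's actual code; the prompt wording is an assumption, and a robust version would validate the model's output.

```python
# Illustrative sketch of dataset generation from a task description, the
# first stage the notebook automates. Assumes the model returns valid JSON.
import json
import openai

task = "Classify customer emails as billing, technical, or general inquiries."

response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[{
        "role": "user",
        "content": (
            f"Task: {task}\n"
            "Generate 5 training examples as a JSON list of objects "
            'with "prompt" and "response" keys.'
        ),
    }],
)
examples = json.loads(response.choices[0].message.content)
print(f"Generated {len(examples)} examples for fine-tuning")
```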

 
DoctorGPT

DoctorGPT is a Large Language Model that can pass the US Medical Licensing Exam. This is an open-source project with a mission to provide everyone with their own private doctor. DoctorGPT is a version of Meta's Llama2 7-billion-parameter Large Language Model that was fine-tuned on a Medical Dialogue Dataset, then further improved using Reinforcement Learning & Constitutional AI. Since the model is only 3 gigabytes in size, it fits on any local device, so there is no need to pay for an API to use it.
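
Running a 7-billion-parameter model in roughly 3 GB implies a quantized build. Below is a generic sketch of local inference on a quantized Llama2-family model using llama-cpp-python; the model path is a placeholder, and the actual DoctorGPT weights and format should be taken from the project's repository.

```python
# Generic sketch of local inference on a quantized 7B model with
# llama-cpp-python (pip install llama-cpp-python). The model path is a
# placeholder, not the actual DoctorGPT distribution.
from llama_cpp import Llama

llm = Llama(model_path="./doctorgpt-7b-q4.bin", n_ctx=2048)

output = llm(
    "Patient reports a persistent dry cough for three weeks. "
    "What follow-up questions should a clinician ask?",
    max_tokens=256,
    temperature=0.7,
)
print(output["choices"][0]["text"])
```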