Post GPT-4: Answering Most Asked Questions About AI

Is AI overhyped, or is there a valid reason to be afraid?

Image by Author


We live in both exciting and strange times. Generative AI tools like ChatGPT have changed everything. Companies like Google are under competitive pressure for the first time in years, there is uncertainty in the job market, and open-source development is firing on all cylinders. It is hard to keep up with the pace of AI development, and with the misinformation that comes with it.

In this blog, I will try to answer some of the most frequently asked questions about AI. These answers are based on opinions I have formed while writing and reading about recent developments in AI.


Which one is better: Open Source or Closed Source? 


In my opinion, both open-source and closed-source AI development are necessary. You need to understand that the backbone of ChatGPT is the Transformer architecture, which was introduced by a team at Google Brain and has open-source implementations. Without open-source development, innovation would slow dramatically. There are many community-led projects that big corporations rely on.

On the other hand, closed-source companies have the teams, resources, and capital to build polished products. In OpenAI's case, DALL-E 2 and ChatGPT require many GPUs, and the cost of experimentation alone can run into the millions of dollars. The result is a clean, stable application.

If you ask me, I would say open source is better. Open-source projects are publicly available and transparent, they drive innovation, and developers can still earn money by selling licenses or offering additional features.


Will AI replace tech workers and artists entirely?


No, not entirely. Let me explain in simple terms. AI is here to assist us, not to eliminate jobs outright. What we will see is a huge cultural change in the workplace: people who leverage AI tools will gradually replace those who stick to purely manual workflows.

I know DALL-E 2, Midjourney, ChatGPT, and GPT-4 are impressive, but trust me, they are not better than an average skilled human. ChatGPT makes mistakes, and it does not understand complex tasks and concepts. For example, if you ask ChatGPT to build a complete application with multiple integrations, it will fail to grasp the whole picture. You have to make many manual changes to get things right.


What are the potential risks of generative AI, and how can you avoid them? 


  1. Copyright issues: these models are trained on public (and some private) data that is covered by copyright law. Your hard work may be used by a company to build a product without you receiving any compensation. Passing AI legislation could help address this.
  2. Security and privacy: ChatGPT has grown enormously, and keeping such a gigantic system secure is hard. There have been incidents where users reported seeing other people's chat history. Apart from that, you are granting OpenAI access to your conversations, which is a concern for companies. You can mitigate this by building your own ChatGPT-style application with open-source models and toolkits. Check out OpenChatKit: Open-Source ChatGPT Alternative.
  3. Plagiarism: educational institutions are struggling as students use these tools to complete assignments, build projects, and even write theses. Free tools like OpenAI's AI Text Classifier can help teachers detect generated work. You can also check 5 Free Tools For Detecting ChatGPT, GPT3, and GPT2.
  4. Misinformation and abuse: large language models like ChatGPT can be used for mass misinformation campaigns or online abuse. Watermarking generated text can help detect such misuse.
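To make the watermarking point concrete, here is a minimal toy sketch of a "green list" style watermark: the generator only samples from a pseudo-random subset of the vocabulary derived from the previous token, and a detector that knows the seeding rule measures how often text lands in that subset. All names and parameters here (VOCAB, GREEN_FRACTION, the hashing rule) are my own illustrative stand-ins, not any production scheme.

```python
import hashlib
import random

# Toy vocabulary and watermark strength; real systems use the model's
# actual token vocabulary and a soft bias on logits instead.
VOCAB = [f"tok{i}" for i in range(1000)]
GREEN_FRACTION = 0.5

def green_list(prev_token: str) -> set:
    # Seed a PRNG with a hash of the previous token, so the generator
    # and the detector derive the same "green" half of the vocabulary.
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    return set(rng.sample(VOCAB, int(len(VOCAB) * GREEN_FRACTION)))

def generate_watermarked(length: int, seed: int = 0) -> list:
    # Stand-in for a language model: samples uniformly, but only
    # from the green list of the previous token.
    rng = random.Random(seed)
    tokens = ["<start>"]
    for _ in range(length):
        tokens.append(rng.choice(sorted(green_list(tokens[-1]))))
    return tokens[1:]

def green_ratio(tokens: list) -> float:
    # Detector: fraction of tokens found in their predecessor's green
    # list. Roughly 0.5 for ordinary text, close to 1.0 if watermarked.
    hits, prev = 0, "<start>"
    for tok in tokens:
        hits += tok in green_list(prev)
        prev = tok
    return hits / len(tokens)

watermarked = generate_watermarked(200)
rng = random.Random(42)
plain = [rng.choice(VOCAB) for _ in range(200)]
```

In this toy version the watermarked text scores a green ratio near 1.0 while unwatermarked text hovers around 0.5; real schemes soften the bias so text quality is preserved and use a statistical test (a z-score on the green count) rather than a hard threshold.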


Why do Elon Musk and other tech leaders want to pause the development of AI for 6 months? 


An open letter, signed by Elon Musk and 11,761 other individuals, including AI experts, was issued by the non-profit Future of Life Institute. The letter calls for a six-month pause in the development of advanced AI, urging labs to avoid training any system more powerful than OpenAI's recently launched GPT-4.

In other words, these leaders believe that AI systems with human-competitive intelligence can pose profound risks to society and humanity.

First of all, it is impossible to stop development. How would they stop open-source projects, or work being done in countries like China? The cat is out of the bag. What we can do is work toward making AI safe and secure.

I also believe there is a business angle to this open letter. Many companies have failed to launch applications as successful as GPT-4, and six months of breathing room would help them catch up with Microsoft and OpenAI.


What is next? Will we be able to see AGI in our lifetime? 


We will see a lot of development in multimodality, where models can accept images, video, and audio as input and produce text, images, and audio as output. For example, if you ask an AI to write a technical blog, it will combine text, code blocks, and images into a complete post ready to publish. Or you will be able to talk to an AI like a person, and it will respond with audio, like Jarvis from Iron Man.

In the future, you will see more adoption of AI in our working lives, and it will open up new fields of study, like prompt engineering.

What I know for sure is that we are far away from AGI (Artificial General Intelligence), a self-aware machine that can think and decide on its own. Today's models and AI applications are built on human-generated data, and for AI to exceed humans at every level, it would need to learn on its own. So I do not expect to see AGI in my lifetime, though I remain hopeful.

Should you be afraid of AGI? I guess time will tell.

Abid Ali Awan (@1abidaliawan) is a certified data science professional who loves building machine learning models. Currently, he is focusing on content creation and writing technical blogs on machine learning and data science technologies. Abid holds a Master's degree in Technology Management and a bachelor's degree in Telecommunication Engineering. His vision is to build an AI product using a graph neural network for students struggling with mental illness.