ChatGPT: Simple questions, not-so-simple answers

Manoj Pandey*

ChatGPT is the new buzzword on the technology scene. People are fascinated by this new artificial intelligence tool, and while many are using it to improve their skills and efficiency, and to make money, others are busy discussing its potential and pitfalls.

Let me join the second brigade, put some questions on a platter, and then try to answer them. The answers are going to be generalised, not because the questions can't be answered in specifics but because there are numerous possibilities and each has a rather complex technical reason behind it.

First question first:

What is ChatGPT? What are other similar technologies being developed?

You must have encountered chatbots on websites, which respond to customer queries. Chances are that you have used at least one of the common digital assistants – Google Assistant on your Android phone, Siri on an Apple device, and a standalone assistant such as Alexa.

ChatGPT is an advanced chatbot. It is thousands of times, and for some tasks millions of times, more advanced than these digital assistants, because it works on a new level of artificial intelligence (AI).

Let's see how AI makes ChatGPT so powerful. A chatbot that answers customer queries either offers a set of questions from which the customer chooses one at a time, or allows the customer to pose their own query. In both cases, the chatbot has a database of ready-made answers mapped against all possible queries. It cannot answer if you raise a query that is not mapped to a specific answer.

Now suppose the chatbot has been trained, through a computer program, to find answers relating to any relevant expression (i.e., a keyword); it will be that much more efficient in answering queries, won't it? Google has evolved to a level where it can understand the intent behind queries (i.e., it has semantic capabilities) and generally puts forth the most relevant search results.
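The difference between these two pre-GPT chatbot styles can be sketched in a few lines of code. Everything below – the questions, answers and function names – is hypothetical, invented purely for illustration:

```python
# Style 1: fixed question-answer pairs; fails on anything unmapped.
faq = {
    "what are your store hours?": "We are open 9am-6pm, Monday to Saturday.",
    "how do i track my order?": "Use the tracking link in your confirmation email.",
}

def scripted_bot(query):
    # Only an exact (case-insensitive) match of a stored question works.
    return faq.get(query.lower().strip(), "Sorry, I don't understand that question.")

# Style 2: keyword matching; answers any query containing a known keyword.
keyword_answers = {
    "hours": "We are open 9am-6pm, Monday to Saturday.",
    "track": "Use the tracking link in your confirmation email.",
    "refund": "Refunds are processed within 7 working days.",
}

def keyword_bot(query):
    for keyword, answer in keyword_answers.items():
        if keyword in query.lower():
            return answer
    return "Sorry, I don't understand that question."

# The scripted bot fails on a rephrased query; the keyword bot copes.
print(scripted_bot("When are your opening hours?"))  # falls back to "don't understand"
print(keyword_bot("When are your opening hours?"))   # finds the 'hours' answer
```

Neither style generates anything new: both only retrieve answers someone prepared in advance, which is exactly the limitation the next step removes.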

Now, if the program behind the chatbot has been trained on billions of inputs in such a way that it can generate the best answer to a given query (a process generally called machine learning), it will look as if a human is answering the queries in the background. One more step takes it closer to human beings: when the program has been fed billions of possibilities and trained to understand text and images the way humans do. Such a program seems to understand queries made in human language (e.g. Can you tell which method is better…?), and it produces answers in human language. Remember, no answers have been specifically prepared and stored in this program's database for any possible query. Ask it a meaningless question and it will produce an intelligent-looking answer!
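To make the retrieval-versus-generation distinction concrete, here is a deliberately crude sketch: instead of looking up a stored answer, the program composes a new sentence word by word from patterns it has 'learned'. (Real GPT models do this with neural networks trained on billions of inputs; the tiny training sentences and function names below are invented for illustration only.)

```python
import random

# Tiny 'training data' (invented for illustration).
training = [
    "the chatbot answers customer queries quickly",
    "the chatbot generates answers in human language",
    "the program understands queries in human language",
]

# Learn which words follow which: word -> list of observed next words.
follows = {}
for sentence in training:
    words = sentence.split()
    for a, b in zip(words, words[1:]):
        follows.setdefault(a, []).append(b)

def generate(start, length=6, seed=0):
    """Compose a new sentence word by word from the learned patterns."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:  # no word was ever seen after this one
            break
        out.append(rng.choice(options))
    return " ".join(out)

# The output is not retrieved from a database; it is assembled afresh,
# so it may be a sentence that never appeared in the training data.
print(generate("the"))
```

This toy chain only looks one word back; the leap to GPT lies in looking at the whole context of the conversation, which is what makes its generated answers read like a human's.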

Notice how the computer has moved from blurting out ready-made answers to generating responses, because it has been trained to do so; and from being dumb to understanding human language and responding in kind (in a way, becoming autonomous). When a chatbot is that powerful, it is ChatGPT: a chatbot built on a Generative Pre-trained Transformer. For those who have not used ChatGPT, it is a chat facility provided on the website of the company OpenAI. As in the Google search box, you type your query and it comes out with answers. It is free for common use, but its latest version (GPT-4) is premium, and its interfaces (APIs, plug-ins) are also being provided to others so they can build their own intelligent chatbots, apps or programs.

This is how much ChatGPT can help when you want to praise a magazine so that your article gets published!

ChatGPT has not evolved all of a sudden. The concept of language modelling is over seven decades old. Starting with primitive models and small data sets, the field advanced to the point where a program could be trained on millions of bits of data to understand linguistic patterns in text created by humans. Then it could be trained unsupervised, and then it could be brought to a level at which errors in its understanding of human language came down to an acceptable level. That is when it was released to the public in the form of ChatGPT.

A number of technology labs and companies have been working on AI for decades. Ecommerce companies have been using AI to analyse consumer behaviour, social media platforms to serve content similar to what you have been watching, and so on.

Research on 'understanding' human language has also been in use for some years. For example, Google has evolved an advanced language model, called LaMDA, which it uses to produce relevant search results. Google has also released 'Bard', a language tool similar to ChatGPT. Many other companies boast similar capabilities in their apps, which have specialised applications. Making use of the capabilities of GPT technology, many apps and programs are being developed to do professional tasks (e.g. coding, video and image editing, market research) in a jiffy and often with little professional skill.

What makes ChatGPT exceptional is its language processing capability. Since it is free and broad in scope, it has drawn much more attention than other AI tools and apps.

Is ChatGPT a type of search engine? If not, does it pick up information from the web to answer questions?

ChatGPT in some ways behaves like a search engine with a higher level of intelligence, but GPT and search engines work differently. In response to user queries, the search engine gives out actual resources on the web, in the form of links, small snippets, images, videos, etc. On the other hand, a GPT tool uses the information fed into it (mostly in the form of web resources) and generates its own responses based on its training and other factors. 

So, if you ask Google "How should I dress for a marriage party?", Google will bring you links to websites, YouTube videos, blogs, social media posts, etc. – whatever the Google algorithm 'thinks' is the best fit for your query. You can make your query more specific to get better results.

In response to the same query, ChatGPT will come up with suggestions on what dress to wear, as if you are being guided by an expert. If you tell it that this party will be in London and that you are looking for the best costume irrespective of price and that you like pastel colours, and that it is your sister’s marriage, its reply will be customised to these prompts. 

How efficient is ChatGPT? How dependable is it?

The quality and reliability of the responses given by a GPT tool depend on hundreds of factors, especially how advanced the AI system behind the chatbot is, and how much data, and of what quality, it has been fed for training.

It has been found empirically that while the first public version of ChatGPT (based on GPT-3.5), released last November, was less reliable, the new version, GPT-4, is much more reliable and accurate.

If you asked me this question, I would call it absurd. I can tell you what Gandhi ji said about the cow, but how can I answer what he did not say? But ChatGPT answered this question, and so well! Cunning, no?

ChatGPT is good at some tasks and very bad at others. It does a fine job of producing a first draft, say, of an article or a script. It can easily write emails, short notes, article summaries, etc. It is also a great tool for producing an outline (e.g. for a research paper, a presentation, a long article, a business strategy or a course curriculum). It summarises long documents rather well. It also produces workable program code in different computer languages, though the code still needs human checking.

The best aspect of ChatGPT, perhaps, is ideation. Even if its responses are not always of high quality and fully relevant, it puts forth ideas that the user might not have thought about. 

ChatGPT is highly creative – it can 'think' beyond human imagination, sometimes because it can combine things in ways humans would not. In less than a minute, it can write a short novel with the locale, plot, characters and ending of your choice. Ask it to write a limerick lampooning the political leader you don't like, crack a joke, tell a soothing story when you are down, draft a questionnaire for your interview with a chosen celebrity, edit your essay to match the style of Bertrand Russell or your story to match the style of Hemingway…

GPT and related technologies are opening new possibilities in image/video editing. On many tools (e.g. DALL·E, Stable Diffusion) you can create photo-realistic images just by giving text prompts such as 'create a night sky with clouds, stars on one side and a lightning strike on a tree'. The creative possibilities of GPT are mind-boggling.

ChatGPT has found numerous applications, and new use cases are being explored every day. It is being used effectively by companies to improve customer response systems. Users have reported it to be efficient in data processing and visualisation, search engine optimisation of websites, creating business-related documents and legal papers, and other tasks that otherwise take a lot of time and effort. Thousands of small-time techies, students and others are earning money by offering online services using ChatGPT and similar AI tools. This has also led to mass production of low-quality articles, computer art, videos, academic and research papers, and so on.

In some tasks, ChatGPT is found to be not up to the mark. For example, it can generate a poem or a joke or a fictional story based on your inputs, but these usually lack the human creative touch. Since its responses are based on the data it has been trained on, it cannot 'imagine' a new way to tackle a situation, generate an earth-shaking new idea, or create a strategy no one has thought of to date. We could keep discussing possible applications of ChatGPT on and on, but let me end this reply by saying that ChatGPT works best when it is used as an assistant rather than depended upon solely. It can give new ideas, perform routine tasks in a jiffy and thus save time, and help in coding and research, but it cannot 'think' – not yet.

Will ChatGPT lead to loss of employment, because it will quickly and efficiently do jobs that humans do at present?

Yes and no. 

Like other major technological developments, it will disrupt existing systems – in businesses, governments, institutions, the media and elsewhere, wherever humans are at present doing 'white collar' tasks.

It is certain that many jobs will eventually die out due to extensive use of ChatGPT and similar technologies, and that new jobs will arise. The speed with which that happens, and the extent to which it impacts the labour market, will depend on the type of job, location, language, etc.

Let's recall how computerisation impacted the jobs that were being carried out in offices. Computers took away the jobs of typists, stenographers, type-setters, printers, etc. Newspapers no longer need type-setters, and offices seldom need shorthand skills. But the earlier jobs of typing and proofing documents, sending them to customers, etc. are now done much more neatly, efficiently and quickly with the help of computing devices.

New jobs of data entry operators, page makers, etc. have arisen – and in greater numbers. Machines and computing tools also need a new set of technicians to maintain and secure them. The enormous amount of data generated today by technology itself needs lakhs of professionals such as data analysts, data scientists and coders. In addition, millions of people are gainfully employed in providing remote services in ways that were not possible before.

Therefore, a quick and large-scale shift in jobs might occur when ChatGPT is adopted in a big way. The technological have-nots are likely to lose out because they will be slow in acquiring new skills, for new jobs as well as for existing ones.

An optimistic view here is that technology can become an enabler as much as a disrupter. It has been seen that those with poor educational backgrounds and skills adopt low-level technology faster than their well-to-do counterparts.

ChatGPT is too new to have generated empirical studies on how this technology empowers or further disempowers those with poor resources and skills. MIT did, however, recently carry out a study of over 400 college-educated professionals on how much ChatGPT empowers users with different capabilities. It found that the use of ChatGPT improved the performance of poor performers significantly, while improving the efficiency of high performers only marginally.

In India, we have seen that the urban Indian youth – the poor and not-so-poor among them – have been exceptionally good at adopting the mobile phone and social media. In the social media and social sharing arena, they have been able to overcome their educational and linguistic limitations by adopting short-form video in their mother tongue. They do not seem to suffer much (due to their perceived technological backwardness) in taking computer-based competitive examinations. On the other hand, they are found to be very poor at adopting technology when it comes to academics (is it mostly because academic studies look dull and burdensome?) and advanced skills that need proficiency in English.

Timely government interventions can perhaps reduce the likely impact of GPT on the underprivileged. For example, GPT and other AI technologies and tools can be introduced immediately in schools and technical institutes, and free training/skilling/re-skilling courses can be started for job aspirants and low-level workers.

Is it a socially safe technology? Will it not lead to cultural dominance by advanced societies/ nations?

This is also a serious concern. There are many reports that ChatGPT treats different religions, ethnicities and racial backgrounds differently, being less sensitive to some than to others.

Disinformation, the spread of hateful and polarising narratives, cultural dominance – these are risks inherent in the way GPT works: it is based on the information provided to it by the owner of that app, program or bot. So, even if the primary developers such as OpenAI, Microsoft and Google take care of social and ethical aspects, the service providers may not be that discreet. People intent on making money at all costs will certainly like this technology for pushing undesirable content and providing harmful services. Besides, criminals, people with extreme views, and deviant state actors (i.e., governments and their agencies) will use this technology to the detriment of society.

I believe that ChatGPT has interned at an Indian news channel. If you think that AI is not yet like a human, I'd like to agree. In that case, there was an Indian TV reporter in the team that trained it.

Much like what is seen on social media, intelligent chatbots can engage people in harmful ways and manipulate them. They can be programmed to exploit people's fears and weaknesses, and to blackmail them. Some can be made to convince people to commit crimes, take extreme actions or follow a cult. There is at least one confirmed report of a person who committed suicide after a long chat with an AI chatbot – such events could become widespread as people start depending on these tools for answers to their emotional problems.

ChatGPT's ability to produce answers to all types of queries is creating a headache for academic and research bodies. Students are creating presentations and doing assignments using ChatGPT rather than applying their minds. Instead of making an effort to solve questions, students are reported to be using ChatGPT to create 'made-easy' answers. Even research papers and scholarly articles are reportedly being created partly or wholly through ChatGPT. The tool has been banned or restricted by a number of academic institutions in many countries, and many more are likely to follow. There is also talk in some communities of restricting its availability. Governments not on good terms with the USA are uncomfortable with its potential to spread information that paints them in a bad light.

While acknowledging these perceptible risks, I would like to agree with those who think that the potential harm to society from ChatGPT will not be as great as that from social media and the dark web. Isn't it sad that tech giants, governments and oversight organisations have not been able to regulate those? That is why there is a call to pause further development of GPT (discussed in detail below).

It is also argued that, like Google Search and Google Maps – the two people-oriented technologies that changed the way humans find information and places in the real world – this technology will change the way humans create digital resources.

Some experts advise focussing on, and promoting, the positives as much as being concerned about potential harms. For example, if ChatGPT has the ability to generate 'artificial', spurious and harmful content, it also has the potential to become a powerful tool in the hands of authorities for detecting fake news, contraband items and plagiarism. The analytical powers of GPT can be used to generate alerts and warnings about harmful trends arising in different fields.

Will ChatGPT open the floodgates for the development of technologies that could start dominating the human race?

You might agree with me that, either deliberately or by a serious accident, technology in one form or another will one day overwhelm humanity. Be it genetic engineering, space travel or atomic energy, such technologies have the potential to slip out of the hands of sensible humans. There are, or will be, checks and balances on their research and usage. However, artificial intelligence is the one actor that can directly overwhelm humans and can also snatch other critical technologies from them.

Luckily, GPT – as it is today – is not too dangerous from this point of view. Yet it is a stepping stone towards artificial general intelligence: the ability to understand and learn intellectual tasks, even unfamiliar ones, much like a human being. That is why a large number of experts have issued an open letter calling for a pause in the development of GPT beyond GPT-4. They want a moratorium of just six months, and hope that in that time tech labs, international authorities and governments will come together to formulate a regulatory framework.

The fears raised by these concerned citizens range from employment to social impact to human existence. Though the dangers are not imminent, timely caution could save the human race from annihilation by technology.

I cannot present the concerns about GPT better than the open letter does, so let me quote it: "… Contemporary AI systems are now becoming human-competitive at general tasks, and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders. Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable…"


I asked an AI-enabled image creator to make an image of a garden at sunrise, with six hibiscus flowers and a branch in the foreground.


*Manoj Pandey is a former civil servant. He does not like to call himself a rationalist, but insists on scrutiny of apparent myths as well as what are supposed to be immutable scientific facts. He maintains a personal blog, Th_ink.

Disclaimer: The views expressed in this article are the personal opinion of the author and do not reflect the views of the publisher, which does not assume any responsibility for the same.

