ChatGPT – the AI game changer?
We’re always keen on exploring the latest technologies and available tools. That is why we set up a live discussion between two technology experts with a special interest in AI. This article draws together the main points of their conversation, including likely uses for ChatGPT, as well as its current limitations and potential ethical issues.
ChatGPT is the latest AI chatbot tool to catch the public’s attention. It’s definitely a game changer – but while many are thrilled at the possibilities, others can only think of nightmare fictional AIs, like Skynet or the Matrix. Whatever your viewpoint, there’s no denying ChatGPT’s popularity. Within a week of its launch at the end of November 2022, more than one million users had signed up. Just weeks later, that number had passed 100 million (source).
Naturally, at Boldare, we’re curious about the latest technologies and available tools, which is why, together with our Agile Product Builders community, we set up a live discussion between Krzysztof Osinski, Senior Vice President of Research & Development at DTiQ (a Boldare client), and Romuald Członkowski, Boldare’s Customer Success Guide.
What is ChatGPT?
Created by OpenAI, the prototype ChatGPT was launched on November 30, 2022. It is a generative pre-trained transformer (GPT): a chatbot that completes text based on the prompts it receives. It applies machine learning techniques from natural language processing and can be thought of as a powerful human language simulator. ChatGPT builds on OpenAI’s previous language models and was trained with human supervision.
In response to questions and prompts, ChatGPT is capable of producing detailed, coherent text and well-articulated answers – outputs can include articles and commentaries, business strategies, software code, etc. The potential is clear, but what are we seeing in this chatbot’s first few months?
Current uses of ChatGPT
To start with, as Krzysztof Osinski says, the most obvious use is as a customer service chatbot, finding requested information and dealing with customer inquiries. The key difference is that current chatbots are not smart – they are very limited in the answers they can provide and usually, the customer sees them as a barrier to be overcome on the way to talking to a human being. ChatGPT has the potential to engage in much more ‘human’ conversations. The benefits will be quicker service, possibly more accurate information, and cost savings for businesses.
We are already seeing some very mainstream adoption. Microsoft has partnered with OpenAI and is using ChatGPT with the Bing search engine (source). The result? Bing now does more than provide a simple list of links; it responds to your search queries with summarized textual answers, drawing on sources from across the internet. It is more of a chat, in which you might ask follow-up questions, even have a ‘conversation’ – a very different search experience.
Looking ahead – the likely impact of AI chatbots
ChatGPT is clearly a level up from previous chatbots and machine learning-powered tools. While it does have some current drawbacks (more on those below), a wider vista of potential applications is opening up, as discussed by Krzysztof and Romuald:
- Medical and health-related services – product-related medical portals already exist, but ChatGPT points the way toward online diagnosis. Users will detail their symptoms and the portal/bot will identify their condition and suggest next steps (obviously including referral to a human doctor where necessary). In fact, another tool launched last year – BioGPT – has been trained specifically on 15 million PubMed abstracts. Although not yet available to the general public, the next version of BioGPT will have 1.5 billion parameters and an accuracy level of 81% (source).
- Legal services – similar to the medical applications (both the medical and legal sectors rely on enormous quantities of domain-specific information), we can expect to see legal advice portals in the future.
- Copywriting – Text, articles, even books can be produced using ChatGPT. Although accuracy and perspective are issues here, in the future, writers may have to find ways to distinguish their work from that produced by technology.
- Software and coding – ChatGPT will produce code. As with text, that code then requires an expert check but Krzysztof foresees a (near) future in which you simply input the user stories and ChatGPT (or similar) will write the code for an app to address those user needs. This is a real opportunity for software houses and DIY no-code development platforms. When OpenAI releases the ChatGPT API for commercial use and not just research, expect to see a wide variety of conversation-based no-code development options.
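Krzysztof’s “user stories in, code out” idea can be sketched as a simple prompt-assembly step. The following is a hypothetical illustration, not a real integration: the `build_codegen_request` function, the model name, and the message format are our own assumptions (mirroring the chat-style conventions OpenAI has described), and actually sending such a request would require an API key and the official client library.

```python
# Hypothetical sketch: packaging user stories into a chat-style
# code-generation request. No network call is made here; the payload
# is simply assembled in the shape a chat API might expect.

def build_codegen_request(user_stories, model="gpt-3.5-turbo"):
    """Assemble a chat-style request asking the model to write app code.

    The model name and message schema are illustrative assumptions.
    """
    story_list = "\n".join(f"- {story}" for story in user_stories)
    return {
        "model": model,
        "messages": [
            {"role": "system",
             "content": "You are a senior software developer. Reply with code only."},
            {"role": "user",
             "content": "Write a web app that satisfies these user stories:\n"
                        + story_list},
        ],
    }

request = build_codegen_request([
    "As a customer, I want to track my order status.",
    "As an admin, I want to export orders as CSV.",
])
print(request["messages"][1]["content"])
```

The point of the sketch is that the ‘no-code’ part is just prompt construction: the hard work (and the expert review Krzysztof mentions) happens on the output side.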
In Krzysztof’s own sector – DTiQ is a leader in loss prevention and video surveillance in retail – he anticipates huge potential for simplifying the work of auditors trying to trace business losses. ChatGPT or an equivalent will conduct deep data searches and analysis to help auditors and investigators identify incidents and possible losses more quickly and efficiently. There are many possible uses - it all depends on the information a chatbot is trained on.
Will we use ChatGPT responsibly?
As with any new tool or technology, questions arise over its ethical and responsible use. This is especially so for ChatGPT because some of its outputs would be defined as ‘creative’ if produced by a human. Shortcuts to creativity always raise ethical questions.
Unsurprisingly, it is one of these potentially unethical uses that drove a lot of the early press attention for ChatGPT: students using the ChatGPT interface to produce essays. Given that our global academic culture is still largely based on the regurgitation and interpretation of knowledge in essay form, it is possible to use ChatGPT very unethically, claiming credit for the chatbot’s work.
Another issue derives from the potential medical and legal services applications mentioned above. Such services will function more usefully with access to patient/client information and records. This then raises privacy and security issues for both users and providers.
It’s not an issue of whether the technology is good or bad – it’s neither – it’s a question of how the technology is used. If people take the results on faith, without understanding the inherent limitations, we have a problem: consider the developer who simply accepts ChatGPT’s code, or the patient who expects ChatGPT to write them a prescription.
The potential drawbacks of ChatGPT
One reason fact-checking ChatGPT’s results is so important is that the results of any query are not guaranteed to be accurate (in fact, the bot is programmed to tell users to independently verify any text it produces). This was highlighted by OpenAI CEO Sam Altman in a tweet shortly after the launch:
“ChatGPT is incredibly limited, but good enough at some things to create a misleading impression of greatness. It’s a mistake to be relying on it for anything important right now. it’s a preview of progress; we have lots of work to do on robustness and truthfulness.”
Add to this the fact that ChatGPT has been trained on a dataset that only extends to 2021 – when compiling results and responses to our questions, it knows little about anything that has happened since then.
How will human roles evolve?
As we create automated tools to take on traditionally human roles, it’s logical that our role as humans will change. To use some of the above examples, developers and website builders may have a more architectural role in terms of software and digital design.
Likewise, auditors and analysts will become investigators. As we ‘outsource’ our data-handling to our digital creations, it becomes more important that we select the right data to be handled – in other words, knowing what questions to ask ChatGPT (and how best to ask them) will become a sought-after skillset. We will become curators, choosing which data to feed the AI in order to get the most accurate and useful outputs.
We can already see institutions evolving their procedures and systems to adapt to the reality of ChatGPT.
For example, to return to the press frenzy around students using ChatGPT to produce essays, Harvard University is looking at the potential acceptable uses of the technology instead of simply imposing a ban (source). Likewise, ChatGPT content is permissible in essays for the International Baccalaureate, to be cited like any other source or reference (source).
A chatbot future?
With its high adoption and usage rates, plus the press and popular attention, there’s no doubt ChatGPT has made an impact. ChatGPT and its successors/offspring have the potential to change the job market, change how we provide or access services, even change the structure of society. However, as Krzysztof points out, while this may be the next big technological quantum leap, significant changes are years away, not months.
We are currently in the ‘what if…?’ stage and the potential answers to that question are undeniably exciting.