Damien Deighan is CEO and Founder of Data Science Talent. With 20+ years’ experience in recruiting, Damien has served the world’s largest blue chip organisations and placed staff in over 20 countries. Damien is co-host of the popular Data Science Conversations podcast, which features insights from the world’s leading academics in data science.
Here, Damien reflects on issues organisations have experienced after adopting ChatGPT. Following incidents such as Samsung’s security breaches, how can enterprise companies take advantage of LLMs safely and effectively?
Since the launch of ChatGPT on 30th November 2022, the pace of development in the generative AI space has been incredible.
On 21st February 2023, Bain & Company announced a services alliance with OpenAI, with the intention of embedding OpenAI’s technologies (ChatGPT, DALL·E, and Codex) into their clients’ operations, having already done so across their own 18,000-strong workforce. Coca-Cola was swiftly announced as the first major corporate to engage with the new alliance, although interestingly no other major corporation has announced its involvement since.
Just four weeks later, OpenAI announced plugins for ChatGPT, with popular platforms such as Wolfram, Expedia, Klarna and OpenTable revealed as the first third-party integrations.
Microsoft’s heavy investment in OpenAI, its rapid deployment of ChatGPT across its product range, and its standing as a trusted provider of corporate software applications might suggest that deep integration of Microsoft/OpenAI products into large companies is inevitable.
However, this is not necessarily how things are likely to pan out. Two things happened in March 2023 that give us some clues as to what might happen next instead.
What the Samsung incident means for internal use of LLMs in business
In early April, several tech publications reported that Samsung employees had leaked sensitive corporate data via ChatGPT three times within 20 days. The leaks included a transcript of an internal meeting recording and source code for a new program in the company’s semiconductor business unit. The problem is that in each instance, employees chose to input proprietary information into a third-party platform, removing that information from Samsung’s control and putting company IP at risk.
Samsung’s immediate response was to limit the use of ChatGPT and to announce that it is developing its own AI for internal use.
ChatGPT is an incredible piece of technology, and its use in business can help drive significant leaps in productivity. However, the Samsung incident is also a clear warning to enterprise leaders of the importance of ensuring proper use of ChatGPT so that company information and IP are not shared in this way.
BloombergGPT and the development of domain-specific LLMs
In addition to security concerns, another issue with generic closed LLMs is their performance in tightly regulated industries where a high level of accuracy is critical. On 30th March, Bloomberg announced it had developed its own LLM and published the accompanying paper, “BloombergGPT: A Large Language Model for Finance”. Initially, BloombergGPT is intended to be a finance-specific internal AI system, with plans to later make it available to customers through the Bloomberg Terminal. BloombergGPT can perform a variety of NLP tasks relevant to the finance industry, including sentiment analysis, news classification and question answering.
Unlike generic LLMs such as ChatGPT, the model is trained on a combination of curated general web content and internal financial datasets. Bloomberg’s huge archive of news and financial documents, collected over a 40-year period, means that high-quality, clean data is at the core of the model’s training. This should result in a system that outperforms a generic LLM in the specific domain of finance, and this drive for accuracy is at the centre of Bloomberg’s initiative.
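BloombergGPT itself is not publicly available, but the flavour of task it targets is easy to illustrate with an openly available finance model. The sketch below runs sentiment analysis over a couple of invented headlines using FinBERT (ProsusAI/finbert) from Hugging Face, purely as a stand-in:

```python
# A minimal sketch of finance-specific sentiment analysis, the kind of NLP
# task BloombergGPT is built for. FinBERT is an open stand-in model here.
from transformers import pipeline

sentiment = pipeline("text-classification", model="ProsusAI/finbert")

headlines = [  # invented examples
    "Company X beats quarterly earnings expectations",
    "Regulator opens investigation into Company Y's accounting practices",
]

for text in headlines:
    result = sentiment(text)[0]
    print(f"{text!r} -> {result['label']} ({result['score']:.2f})")
```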
Using pre-trained foundation models deployed with private, domain-specific datasets
In addition to the Samsung incident, OpenAI itself experienced a major data breach on 20th March. During an outage, personal data belonging to 1.2% of ChatGPT Plus subscribers was exposed, including payment-related information. The breach was caused by a bug in an open-source library that allowed some users to see titles from other active users’ chat history. This led to ChatGPT being temporarily banned in Italy, and on 13th April Spain announced it was investigating OpenAI over a suspected breach of data protection rules. All of this further highlights the need for large companies to tread carefully in the early stages of LLM adoption.
Do the security risks and accuracy concerns presented by generic LLMs mean that most large companies will follow Bloomberg and develop their own LLMs from scratch?
No, probably not.
The cost of building an LLM from scratch is significant. It may make sense for Bloomberg, because their LLM will be central to their Terminal product, which comes with a $27k-per-year subscription charge, but most large corporations will not be able to justify the time and money involved in developing something similar.
There is a growing number of start-ups offering pre-trained LLMs that any company can commercially customise into its own domain-specific LLM.
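As a concrete sketch of what that customisation can look like, the snippet below fine-tunes a small open-source model on a private text corpus using LoRA adapters via Hugging Face’s peft library. The model name, file name and hyperparameters are placeholders, not recommendations:

```python
# A minimal sketch of customising an open-source foundation model on a
# private, domain-specific corpus with parameter-efficient fine-tuning (LoRA).
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

base = "EleutherAI/gpt-neo-125m"  # small stand-in for a larger open model
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base)

# Freeze the base weights and train only small low-rank adapter matrices.
model = get_peft_model(model, LoraConfig(r=8, lora_alpha=16, task_type="CAUSAL_LM"))

# "internal_docs.txt" is a placeholder for proprietary in-house text.
data = load_dataset("text", data_files={"train": "internal_docs.txt"})["train"]
data = data.map(lambda batch: tokenizer(batch["text"], truncation=True, max_length=512))

Trainer(
    model=model,
    args=TrainingArguments(output_dir="domain-llm", num_train_epochs=1),
    train_dataset=data,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
).train()
```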
Microsoft recently announced that customers of Azure Machine Learning can build and operate open-source foundation models through its partnership with Hugging Face. A few weeks later, AWS launched Bedrock, which allows users to customise a range of foundation models via an API, including Amazon Titan, Jurassic-2 (AI21 Labs), Claude (Anthropic) and the open-source Stable Diffusion (Stability AI).
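For illustration, this is roughly what invoking one of those models through the Bedrock runtime API looks like in Python with boto3. The model ID, region and request body shown are assumptions based on Anthropic’s Claude text-completion interface, and may differ as the service evolves:

```python
# A minimal sketch of invoking a foundation model via the Amazon Bedrock
# runtime API. Model ID, region and body format are illustrative assumptions.
import json

import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

response = bedrock.invoke_model(
    modelId="anthropic.claude-v2",  # assumed ID; check which models your account can access
    body=json.dumps({
        # Claude's legacy text-completion format: a Human/Assistant prompt
        "prompt": "\n\nHuman: Summarise the key risks in our Q3 filing.\n\nAssistant:",
        "max_tokens_to_sample": 300,
    }),
)

print(json.loads(response["body"].read())["completion"])
```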
The big decision a company needs to make is whether to adopt a closed foundation model or an open-source one.
Conclusion – is a hybrid approach to LLMs most likely?
It’s difficult to predict exactly how things will evolve, given that we are right at the start of the Age of AI. The allure of powerful generic LLMs such as ChatGPT and its closed-source competitors will be high in the short term, as open-source offerings will take some time to catch up.
It’s probable that many companies will formally adopt the likes of Microsoft 365 Copilot to drive efficiencies in their general internal operations, and allow their employees to use ChatGPT within certain bounds.
However, in regulated industries in particular, I suspect few large companies will be comfortable using private datasets with closed models, and they may not be able to comply with laws such as GDPR if they go down this route. For interactions that require an LLM to interface with sensitive customer data or other proprietary internal datasets, open source will likely be the winner.
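As a final sketch, this is what the fully open-source route looks like at its simplest: inference runs entirely on infrastructure the company controls, so prompts containing sensitive data never leave it. The tiny GPT-Neo model is just a stand-in for whichever open model a company would actually deploy:

```python
# A minimal sketch of fully local inference with an open-source model:
# nothing in the prompt is sent to a third-party service.
from transformers import pipeline

generator = pipeline("text-generation", model="EleutherAI/gpt-neo-125m")

prompt = "Summarise the following internal incident report:\n..."
print(generator(prompt, max_new_tokens=100)[0]["generated_text"])
```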
One thing I can confidently predict: the key ideas in this article will probably be out of date within two weeks of publication!