
What is content AI? Why should we care about it and how might it be used within the financial services industry?

Written by Carden Calder | Mar 14, 2023 4:13:04 AM

Whether you love it or hate it, Artificial Intelligence (AI), and content AI specifically, is here to stay, dominating chatter on the airwaves and from classroom to boardroom.

ChatGPT, the most reported-on AI platform, was crowned the fastest-growing consumer app in history by Swiss bank UBS in a 3 February analyst note based on SimilarWeb data.

We’re talking about an estimated 100 million active users in January, just two months after the launch. For reference, TikTok (the previous title holder) took nine months to hit that number; Instagram, two and a half years.  

(Scroll to the end to see our email newsletter vs the ChatGPT version...)

At BlueChip’s inaugural Summer Soiree, we drilled Danielle Stitt on content AI and ChatGPT and their potential impact on communication, marketing and financial services.

Here are our top 3 takeaways: 

What is content AI? Why should we care about it and how might it be used within the financial services industry? 

Content AI is the use of artificial intelligence techniques such as natural language processing, machine learning, and deep learning to create human-like content such as written copy, images, audio, and video.

But it goes much deeper than that. 

Content AI now has the capacity to generate a wide variety of content, from news articles and social media posts to product descriptions and marketing materials. Though automated, the content created is almost indistinguishable from human-generated content, and the gap between the two continues to narrow as the machines learn and evolve (cue the title sequence for Terminator 2).

Jokes aside, the potential of content AI in the workplace is, as it stands, largely untapped. The ability to automate creative tasks significantly reduces reliance on otherwise expensive human hours, allowing companies to create and distribute more content in less time. Even if the automated content is not completely up to scratch, it’s sophisticated enough to get 70% of the way there before a human takes over.

It might surprise you to learn that several high-profile publications already use AI to generate content, too. Forbes uses a tool called Quill to write earnings reports, and The Associated Press uses Wordsmith to write thousands of articles about finance and sport. These systems can’t produce articles autonomously just yet – to get something legible out of them, someone has to feed in statistics. Each system, having learnt basic rules about grammar and structure, then determines which data carries the most weight and repurposes it into an article.
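Neither Quill nor Wordsmith is open source, so the following is only a hypothetical Python sketch of the general "statistics in, sentence out" idea behind template-based data-to-text generation; every name, field and phrase here is invented for illustration, not either product’s actual code:

```python
# Hypothetical sketch of template-based data-to-text generation.
# Real systems rank many facts and vary their phrasing; this only
# shows the basic flow of turning figures into a readable sentence.

def earnings_sentence(company: str, quarter: str,
                      revenue_m: float, prior_revenue_m: float) -> str:
    """Turn a row of earnings figures into a one-sentence report."""
    change = (revenue_m - prior_revenue_m) / prior_revenue_m * 100
    direction = "up" if change >= 0 else "down"
    return (f"{company} reported revenue of ${revenue_m:,.0f} million for "
            f"{quarter}, {direction} {abs(change):.1f}% on the prior "
            f"corresponding period.")

print(earnings_sentence("Acme Corp", "Q2 2023", 120.0, 100.0))
# Acme Corp reported revenue of $120 million for Q2 2023, up 20.0% on
# the prior corresponding period.
```

Production systems layer ranking logic on top of templates like this, deciding which figures carry the most weight before slotting them into learned sentence patterns.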

For financial services organisations, I can see content AI platforms like ChatGPT, Jasper, Rytr, Copy.ai and Writesonic helping to inform credit decisions, support risk management across structured and unstructured data sets, prevent fraud (for example, flagging a credit card charge that falls outside a customer’s normal spending patterns – a toy sketch of that idea follows below), drive high-frequency trading that scans social media content, and improve customer service and personalisation.
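To make the fraud-prevention example concrete, here is a minimal sketch assuming the simplest possible statistical rule (a z-score over past transaction amounts); production fraud systems use far richer models and many more signals, and every name and number here is hypothetical:

```python
from statistics import mean, stdev

def flag_unusual_charge(history: list[float], charge: float,
                        threshold: float = 3.0) -> bool:
    """Flag a charge sitting more than `threshold` standard deviations
    above the customer's historical spending - a toy anomaly check."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return charge != mu  # no variation in history: any change is unusual
    return (charge - mu) / sigma > threshold

# A customer who normally spends roughly $20-$60 per transaction:
past_spending = [25.0, 40.0, 32.5, 58.0, 21.0, 44.0, 37.0]
print(flag_unusual_charge(past_spending, 49.0))   # False - within the usual range
print(flag_unusual_charge(past_spending, 950.0))  # True - flag for review
```

In practice the "normal pattern" would come from a model trained on merchant, location and timing features, not a single threshold, but the principle is the same: learn what normal looks like and flag departures from it.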

 

What impact might we see from content AI on financial services companies’ and leaders’ reputations?  

In short, significant.  

On the positive side, content AI has the potential to help everyone from insurers and superannuation funds to fund managers and retail financial services organisations completely streamline communication and improve the customer experience. Just one example: chatbots, automated replies and personalised email content curation are massive time savers, freeing the customer service team to spend more time on less common or trickier questions.

On the flip side, I think there are significant risks, both operational and reputational. A simple example is that right now AI-generated content can come across as insincere or robotic. A case in point is the online American media giant CNET being outed for using content AI to write articles on personal finance, then being forced to issue public corrections on more than half of those stories because they had been published with fairly serious plagiarism and errors.

Content AI also raises concerns about data privacy, security, and the ethical dilemmas of automated decision-making algorithms. Financial services companies and leaders need to be very transparent about how they collect, store, and use data, and ensure any AI-powered communication strategies are aligned with their values, principles, and regulatory requirements at all levels. Failure to do so can seriously damage reputations and erode trust amongst stakeholders.

 

What might be some potential risks for financial services businesses using content AI? Might the use of content AI lead to an increase in cyber risk? 

One potential risk is the security of the AI system itself. If the AI system is not properly secured, it could be vulnerable to cyber-attacks, such as data breaches, malware infections, or denial-of-service attacks. Hackers could exploit vulnerabilities in the AI system to steal sensitive data or disrupt business operations. 

Another risk is the quality of the content generated by an artificial intelligence system like ChatGPT. If the AI system is not properly trained or supervised, it could generate inaccurate, misleading, or inappropriate content, which could harm the reputation of the financial services business or even lead to legal or regulatory issues at scale. 

To reduce these risks, financial services businesses should ensure that their AI systems are properly secured, trained, and monitored. This will include regular vulnerability testing of any processes that involve artificial intelligence, and putting in place safeguards such as access controls, data encryption, and intrusion detection systems. If you haven’t already, you’re going to need to go further than just having procedures to respond to an (inevitable) cyber breach: plan a scenario around AI and, ideally, practise the management team’s response. There’s not much point doing, say, penetration testing if you haven’t also operationally tested the management response to the kind of novel ethical dilemmas artificial intelligence, ChatGPT and cybercrime will present.

 

(Wo)man vs the machine

In a parallel universe, ChatGPT also wrote our latest edition of 'Take a beat Tuesday' - our monthly newsletter. See how it did below and how it stacks up against our efforts.