Guide to Artificial Intelligence in Jersey's Finance Industry



The financial and professional services industry is undergoing continued change because of emerging technologies. Current research focuses mainly on how these technologies will change the industry at a higher level, with little detail given on how the day-to-day work of employees will be impacted and what practical skills they will need to perform this work and remain relevant in the job market.

Companies that embrace Artificial Intelligence (AI) and support upskilling to help professionals adapt to, and benefit from, changes brought about by AI will have the most success attracting — and retaining — top talent.

Financial services stand out as the only industry in which the share of professionals with AI skills and the speed at which they are adding AI skills to their profiles is above average. This is an example of how industries, beyond tech, have the potential to be not only early adopters but drivers of AI innovation.

Jersey Finance’s fintech ambition is to strive to be the easiest international finance centre to do business with remotely, in a digital world. Our fintech strategy centres around two overarching strategic ‘drivers’: ‘enhancing client experience’ and ‘driving efficiency.’

These drivers have been put in place to guide our approach to fintech, ensuring that Jersey’s finance industry can continue to deliver world-class services to international clients in an increasingly virtual world. They also empower firms to support greater productivity and satisfy growing regulatory and reporting requirements, as well as skills shortages.

Working in partnership with Grant Thornton UK, we are proud to launch a new series of insights into AI, diving into the key topics surrounding the use of AI by the financial services community, both globally and specifically in Jersey.

Released over the coming months, the series will examine AI from a variety of viewpoints, covering topics from the genesis of AI and the key technologies in use today to the skills that will be needed in the future.

The Evolution of AI

The rapid rise of AI has not come out of the blue. The history of AI as a field of systematic research traces its origins back as far as the 1950s, when the term ‘artificial intelligence’ was originally coined. The evolution of AI since then can be defined across several distinct phases, each marked by significant advancements in theory, methodology and application. The continued development has been characterised by several distinct periods of intense growth, decline and resurgence.

AI - A Potted History

The ‘First Golden Age’ of AI began in the mid–20th century, fuelled by optimism and significant advances in computational power and algorithms, leading to pioneering programs in problem-solving and logic.

History of AI

1950s

“Can machines think?”

This was the question posed by Alan Turing in his 1950 paper ‘Computing Machinery and Intelligence’. While the term ‘artificial intelligence’ wasn’t coined until 1956 by John McCarthy, the 1950s marked the start of the first Golden Age of Artificial Intelligence.

In the 1950s, Turing also introduced what we now know as the Turing Test: if a machine could trick the human it was chatting with into believing it too was human, the machine could be assumed to be capable of thinking.


1950 – 1970

Between the 1950s and the 1970s, work on AI flourished as the capacity of computers continued to grow, with programs such as ELIZA showing promise that artificial intelligence could be expected in the near future.


1970s

The first ‘AI winter’, in which optimism about AI declined, occurred during the 1970s, when the technological and theoretical limitations of the period were recognised. Computers, at the time, were unable to store enough information or process it fast enough for AI to be a reality.


1980s

During the 1980s there was prolific development of expert systems, which led to a second surge of AI, driven by the advent of machine learning. Expert systems are designed to solve complex problems by applying if-then rules drawn from a knowledge base. Towards the end of the 1980s, the limitations of these systems in acquiring knowledge led to the second AI winter.


1990 – 2000

A bottleneck in expert systems meant that, until the early 2000s, there was little movement in AI and the second AI winter continued.


2000s onwards

Increasing IT capabilities drove a surge in AI development. In this period, big data and deep learning started to become mainstream, laying the groundwork for other AI applications and for the development of generative AI in the form we know it today.


Glossary of Key AI Terms

Artificial general intelligence (often shortened to general intelligence)

The ability to accomplish virtually any goal or cognitive task, including learning, at a level equivalent to human intelligence and without human input

Artificial intelligence

Non-biological intelligence

Backpropagation

The algorithms that enable artificial neural networks to learn, through a process of incrementally reducing the error between known outcomes and model predictions during training cycles

Deep learning

A concept loosely based on the brain that recognises patterns in data to gain insight beyond the ability of humans; for example, to distinguish between the sonar acoustic profiles of submarines, mines and other sea life, a deep learning system doesn’t require human programming to tell it what a certain profile is, but it does need large amounts of data

Deep neural network

Uses sophisticated mathematical modelling to process data in complex ways, through a greater number of layers than a neural network

Generative models

Existing data is used to generate new information; for example, predictive text looks at past data to predict the next word in a sequence

Intelligence

The ability to achieve complex goals

Narrow intelligence

The ability to achieve a narrow set of goals, such as playing chess

Natural language processing (NLP)

When a computer interprets and understands human language and the way and context in which it’s spoken or written; the aim is to deliver more human-like outputs or responses

Neural network

A group of interconnected ‘neurons’ that have the ability to influence each other’s behaviour

Machine learning

The ability of a machine to learn without being programmed; the algorithms used improve through experience, either predictively using historic data or generatively using new data

Predictive analytics and models

Similar to machine learning but narrower in scope, predictive analytics has a very specific purpose, which is to use historical data to predict the likelihood of a future outcome; for example, risk-based models on when a stock may fall

Reinforcement learning

A type of machine learning technique that enables an AI system to learn in an interactive environment by trial and error using feedback from its own actions and experiences

Robotic process automation (RPA)

Software that’s built to automate a sequence of primarily graphical repetitive tasks

Supervised learning

Supervised learning uses labelled datasets to train algorithms in order to predict outcomes and recognise patterns.

This section will look at the specific technologies that make up Artificial Intelligence (AI) as well as capabilities that are expected to be developed in the future.

Current Technologies

AI is typically classified according to how the machine learning algorithm enhances its capabilities. The field of AI consists of three primary approaches. These approaches are referred to as ‘Supervised Learning,’ ‘Unsupervised Learning,’ and ‘Reinforcement Learning,’ and give AI systems the flexibility to tackle diverse problems within varied applications:

Supervised Learning

Supervised Learning is a type of machine learning approach that involves predicting the value of a pre-defined target/outcome based on known inputs. This approach requires labelled data for the model training, which allows the algorithm to capture the relationship between the input labels (features) and the target outcome. Supervised learning can be seen in action in your email folder: separating spam emails into a spam folder in your inbox is a type of machine learning that uses supervised learning.
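The spam example above can be sketched end to end. The snippet below is a minimal, illustrative supervised learner: the labelled emails, the two hand-crafted features and the nearest-centroid decision rule are all invented for the sketch, not taken from any production filter.

```python
# Minimal supervised-learning sketch: a nearest-centroid spam classifier
# trained on labelled (email, label) pairs. Features and data are invented.

SPAM_WORDS = {"win", "free", "prize", "urgent"}

def features(email: str) -> tuple[float, float]:
    """Turn a raw email into two numeric features (the model's inputs)."""
    words = email.lower().split()
    spam_hits = sum(w.strip("!.,:?") in SPAM_WORDS for w in words)
    caps = sum(c.isupper() for c in email) / max(len(email), 1)
    return (spam_hits / max(len(words), 1), caps)

def train(examples: list[tuple[str, str]]) -> dict[str, tuple[float, float]]:
    """Learn one feature centroid per label from the labelled examples."""
    sums: dict[str, list[float]] = {}
    counts: dict[str, int] = {}
    for email, label in examples:
        f = features(email)
        s = sums.setdefault(label, [0.0, 0.0])
        s[0] += f[0]
        s[1] += f[1]
        counts[label] = counts.get(label, 0) + 1
    return {lab: (s[0] / counts[lab], s[1] / counts[lab]) for lab, s in sums.items()}

def predict(model: dict, email: str) -> str:
    """Assign the label whose centroid is closest to the email's features."""
    f = features(email)
    return min(model, key=lambda lab: (model[lab][0] - f[0]) ** 2
                                      + (model[lab][1] - f[1]) ** 2)

training_data = [
    ("WIN a FREE prize now!!", "spam"),
    ("URGENT: claim your free prize", "spam"),
    ("Minutes from yesterday's board meeting attached", "ham"),
    ("Lunch at noon on Thursday?", "ham"),
]
model = train(training_data)
print(predict(model, "Free prize: you WIN!"))    # -> spam
print(predict(model, "Agenda for the meeting"))  # -> ham
```

The essential point is that the target labels (‘spam’/‘ham’) are known in advance: the algorithm only has to capture the relationship between inputs and outcome.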

Unsupervised Learning

In contrast to supervised learning, unsupervised learning identifies patterns and relationships in data without predefined outcomes, making it exploratory and useful when there isn’t much labelled data, or when labelling data is expensive. It can include cluster analysis for grouping similar observations and association analysis for identifying relationships between variables. Such methods can provide insights such as “customers interested in X often show interest in Y and Z as well”.
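A minimal sketch of the cluster-analysis idea, assuming a toy list of monthly customer spend figures: a one-dimensional k-means loop groups the unlabelled values around centroids. The data and the initial centroids are invented so that the run is deterministic.

```python
# Unsupervised-learning sketch: 1-D k-means on hypothetical monthly customer
# spend. No labels are given; groups emerge from the data itself.

def kmeans_1d(values, centroids, iters=10):
    """Group values around the centroids by repeated assign/update steps."""
    for _ in range(iters):
        clusters = [[] for _ in centroids]
        for v in values:
            # assign each value to its nearest centroid
            nearest = min(range(len(centroids)), key=lambda i: abs(v - centroids[i]))
            clusters[nearest].append(v)
        # move each centroid to the mean of its assigned values
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids, clusters

spend = [12, 15, 14, 90, 95, 88, 300, 310]  # three natural spending tiers
centroids, clusters = kmeans_1d(spend, centroids=[10.0, 100.0, 250.0])
print(centroids)  # roughly [13.7, 91.0, 305.0]
```

The three recovered groups (low, medium and high spenders) were never named in the data, which is exactly what makes the approach exploratory.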

Reinforcement Learning

This technique involves an ‘agent’ exploring an environment to identify optimal actions through trial and error, relying on a reward function for feedback. This approach is beneficial in settings with unknown optimal actions and dynamic problem types like robotics or game playing and is used in areas such as autonomous vehicles and dynamic pricing within financial services, where the environment frequently changes.
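The trial-and-error loop can be sketched with tabular Q-learning on a toy five-cell ‘corridor’ environment, an assumption made purely for the example rather than a financial-services scenario: the agent is rewarded only for reaching the final cell, and learns from that feedback which action is best in each cell.

```python
# Reinforcement-learning sketch: tabular Q-learning on a 5-cell corridor.
# The agent starts in cell 0 and earns a reward of 1 only on reaching cell 4.
import random

N_STATES, GOAL = 5, 4
ACTIONS = (-1, +1)                 # left, right
alpha, gamma, epsilon = 0.5, 0.9, 0.2
Q = [[0.0, 0.0] for _ in range(N_STATES)]  # value estimate per (state, action)

random.seed(0)
for _ in range(200):               # training episodes
    s = 0
    while s != GOAL:
        # epsilon-greedy: mostly exploit the current estimates, sometimes explore
        if random.random() < epsilon:
            a = random.randrange(2)
        else:
            a = max((0, 1), key=lambda i: Q[s][i])
        nxt = min(max(s + ACTIONS[a], 0), N_STATES - 1)
        reward = 1.0 if nxt == GOAL else 0.0
        # Q-learning update: nudge the estimate toward
        # reward + discounted best future value
        Q[s][a] += alpha * (reward + gamma * max(Q[nxt]) - Q[s][a])
        s = nxt

policy = ["left" if q[0] > q[1] else "right" for q in Q[:GOAL]]
print(policy)  # the learned policy heads right, toward the goal
```

No cell was ever labelled with the ‘correct’ action; the reward function alone shaped the behaviour, which is what distinguishes reinforcement learning from the two approaches above.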

Other classifications of AI

In addition to the three primary approaches, there are a number of other classifications of AI which make up the full suite of technologies.

Machine Learning

Traditional machine learning algorithms are designed to perform tasks like prediction or classification by analysing input data to identify underlying patterns. Various algorithms have been developed in traditional machine learning, with differing functions depending on the nature of the data inputs and the specific task that the algorithm is applied to. To function effectively, traditional machine learning relies on extracting specific characteristics from data through a process called feature engineering, which is where raw data is transformed into data that can be used by the machine learning algorithms.
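Feature engineering can be made concrete with a small sketch: the raw transaction records below are invented for illustration, and the derived features, the night-time cut-off and the home-country code are all assumptions rather than an established standard.

```python
# Feature-engineering sketch: turning raw (invented) transaction records into
# the numeric inputs a traditional machine-learning model expects.
import math
from datetime import datetime

HOME_COUNTRY = "JE"  # assumed home jurisdiction for the example

def engineer(tx: dict) -> dict:
    """Derive model-ready numeric features from one raw transaction."""
    hour = datetime.fromisoformat(tx["timestamp"]).hour
    return {
        "log_amount": math.log1p(tx["amount"]),  # compress the skewed amount scale
        "is_night": int(hour < 6),               # flag 00:00-05:59 transactions
        "is_foreign": int(tx["country"] != HOME_COUNTRY),
    }

raw = [
    {"amount": 250.0, "timestamp": "2024-03-01T02:15:00", "country": "JE"},
    {"amount": 40.0,  "timestamp": "2024-03-01T13:05:00", "country": "JE"},
    {"amount": 980.0, "timestamp": "2024-03-02T03:40:00", "country": "XX"},
]
features = [engineer(t) for t in raw]
print(features[2])  # the late-night foreign transaction: is_night=1, is_foreign=1
```

It is this manual transformation step that deep learning, discussed next, aims to automate.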

Deep Learning

While machine learning falls under the umbrella of AI, deep learning can be considered a subset of machine learning. In contrast to traditional machine learning, deep learning automates the process of extracting information from input data, removing the need for human intervention in feature engineering. By doing so, deep learning can automatically determine the most optimal features for the task at hand, as opposed to the data being assigned features externally by a human.

Generative AI

This type of AI refers to a class of AI systems designed to generate new content or data that resembles and, in some cases, is indistinguishable from human-created content. It is a subset of deep learning. At its core, generative AI works by learning patterns and structures from existing data and then using that knowledge to create new content. This content can span various domains, such as images, text, audio, and video, or a combination of those domains.

Anticipated and Expected Future Capabilities

Predicting the trajectory of AI development is challenging due to its rapid pace, but several trends are increasingly probable. The anticipated developments in AI are set to deeply impact the finance sector in a myriad of ways, from enhancing customer service to redefining how risk is assessed. In this section we discuss how AI adoption is likely to scale over the next two, four, and six years, and its implications for the future landscape of financial services.

Short-Term (Over the next 2 years):

Refinement of Generative AI

The evolution of Generative AI models is expected to accelerate, with models becoming more nuanced in understanding context, irony, and the subtleties of human communication.

Advances in Computer Vision

Progress in computer vision will likely yield models that are not only more accurate but also more efficient, capable of running on devices like smartphones and Internet of Things (IoT) devices. This can lead to more real-time applications, such as instant visual translations or advanced augmented reality experiences.

Ethics and Regulation

As AI becomes more pervasive, there will be a greater need for ethical guidelines and regulatory frameworks to manage issues of bias, privacy, and fairness. Expect more institutions to form AI ethics boards, and governments to begin drafting and implementing regulations to control AI deployment, especially in sensitive areas such as surveillance, facial recognition, and personal data usage.

Automated Customer Service

AI-driven chatbots and virtual assistants will become more sophisticated, handling a wider array of customer inquiries and transactions, which will help financial institutions reduce costs and improve customer satisfaction.

Fraud Detection and Security

Enhanced machine learning models will provide more accurate detection of fraudulent activities by recognising patterns across vast datasets that human analysts might miss.

Algorithmic Trading

AI will continue to be integrated into trading strategies, improving the speed and efficiency of market transactions, and enabling high-frequency trading firms to capitalise on minute market changes.

Regulatory Compliance (RegTech)

With advanced technologies like artificial intelligence and machine learning, RegTech can provide real-time monitoring of transactions and activities. This helps in early detection of suspicious activities and ensures timely reporting to regulatory bodies.


Low-code and No-code AI Platforms

As AI improves, we will see transformative tools designed to make AI accessible to a broader audience, including those without specialised technical skills. By employing graphical interfaces and pre-built elements, low-code platforms offer a balance that appeals to users with some technical knowledge, allowing for minimal coding in developing AI applications.

Cyber Security

The deployment of AI offers potent tools for real-time threat detection and monitoring, fundamentally transforming how financial institutions protect their assets and assess the security of their IT infrastructure. However, the swift integration of AI necessitates that the industry proactively addresses potential vulnerabilities introduced by these systems, such as susceptibility to novel forms of cyber-attacks and challenges to data integrity. To effectively manage these risks, institutions must develop robust AI governance frameworks and invest in specialised cybersecurity measures. This proactive approach is essential for maintaining trust and securing financial transactions in the increasingly digital landscape of financial services.

 

Medium-Term (In the next 4 years):

Personalised AI Services

Personalisation engines will become more adept at predicting individual needs and preferences, enabling hyper-personalised recommendations in retail, adaptive learning plans in education, and customised treatment in healthcare. These systems would leverage continuous feedback loops to improve their accuracy over time.

For example, a financial advisory firm could use AI to provide personalised investment advice to its clients. The technology analyses the information provided by the client, and through continuous monitoring of financial markets and economic indicators alongside the client’s risk appetite and financial goals, will provide personalised investment recommendations. Through continuous learning, AI adapts to changes in the client’s situation and provides alerts to help them stay informed and make proactive decisions.

Credit Scoring and Risk Management

Machine learning models will incorporate a broader set of data points, including non-traditional and unstructured data, to evaluate credit risk more accurately and offer personalised lending rates.

 

Personalised Banking

AI will enable more personalised banking experiences through sophisticated algorithms that analyse an individual’s spending habits, investment preferences, and financial goals to provide customised advice and product recommendations.

 

Regulatory Compliance (RegTech)

AI applications will streamline regulatory compliance by automating the tracking and reporting of financial transactions, assisting institutions in staying abreast of regulatory changes and reducing compliance-related costs.

Long-Term (Over 6 years):

AI Governance

At the international level, we will see the emergence and evolution of AI governance bodies similar to the World Trade Organisation for trade or the International Atomic Energy Agency for nuclear energy, which establish and monitor compliance with global AI standards.

 

Autonomous Systems

Developments in AI could lead to autonomous systems designed to manage clients’ investments over the long term with minimal human intervention.

For example, using autonomous systems, the client provides their financial goals, risk tolerance and investment preferences during the onboarding process and the AI uses this data to create a personalised investment strategy, automatically rebalancing portfolios based on intelligence from financial markets and the client’s evolving goals. The system can identify and capitalise on short-term market opportunities while adhering to the long-term investment strategy.

 

Breakthroughs in Artificial General Intelligence (AGI)

Steps towards AGI could be seen in the form of more ‘transferable’ intelligence, where an AI trained in one domain can apply its understanding to another without starting from scratch. This cross-domain learning ability would be a significant step toward more generalised forms of AI.

 

Wealth Management and Robo-advisors

The sophistication of AI in analysing market trends and managing investment portfolios will lead to more widespread adoption of robo-advisors, which will provide personalised investment advice at a fraction of the cost of human advisors.

 

Quantum Computing in Finance

If quantum computing advances as anticipated, it could solve complex financial models exponentially faster than classical computers, potentially revolutionising areas like asset pricing, portfolio optimisation, and risk assessment.

 

Decentralised Finance (DeFi)

DeFi is a financial system that operates on a decentralised network of computers rather than a central authority such as a bank or government institution. AI might play a pivotal role in this emerging space by providing intelligent contract management, risk assessment, and liquidity analysis, further removing the need for traditional financial intermediaries.

From transparency issues to concerns surrounding bias, privacy, accountability, and systemic risks, the ethical landscape of AI in finance is varied and complex.

Addressing these ethical considerations is not only a task for individual firms but requires action from multiple stakeholders, including regulators, industry professionals, developers, and governments, to ensure responsible deployment and maximise societal benefits.

With these challenges come opportunities, and Jersey, with our close connections between the industry, the JFSC and the Government, is ideally suited to build on our strong reputation as a trusted provider of professional services and become a leader in the ethical use of AI in financial services.

 

 

Key aspects to consider in relation to the use of AI in an ethical manner:

Transparency

Transparency refers to the ability to understand and explain the decisions made by AI algorithms. As an example, AI models used for investment trading can be designed to produce not only trading recommendations but also explanations for the underlying reasoning behind the recommendation. This can involve generating an auditable report that outlines the key factors, indicators, and market conditions that influenced an investment decision.

This transparency can help in validating the integrity of the AI-driven decisions and actions, ensuring that they align with regulatory requirements and market expectations. Having the ability to review the decision-making rationale can assist in identifying potential biases or anomalies in automated actions and decisions allowing for measures to be taken to address any issues.
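One way such an auditable report could look is sketched below: a deliberately simple, hypothetical rule-based recommender that returns the factors behind each output alongside the recommendation itself. The rules, thresholds and input fields are invented for illustration and are not a real trading model.

```python
# Transparency sketch: a hypothetical rule-based recommender that emits an
# auditable explanation alongside each trading recommendation.

def recommend(signal: dict) -> dict:
    """Return a recommendation plus the factors that drove it."""
    reasons = []
    score = 0
    if signal["pe_ratio"] < 15:
        score += 1
        reasons.append(f"P/E {signal['pe_ratio']} below value threshold 15")
    if signal["momentum_30d"] > 0.05:
        score += 1
        reasons.append(f"30-day momentum {signal['momentum_30d']:+.0%} above +5%")
    if signal["volatility"] > 0.40:
        score -= 1
        reasons.append(f"volatility {signal['volatility']:.0%} above 40% risk limit")
    action = "buy" if score >= 2 else "hold" if score >= 0 else "avoid"
    return {"action": action, "score": score, "reasons": reasons}

report = recommend({"pe_ratio": 12.0, "momentum_30d": 0.08, "volatility": 0.22})
print(report["action"])  # buy
for r in report["reasons"]:
    print(" -", r)       # the auditable rationale behind the recommendation
```

Real AI models are far less interpretable than explicit rules like these, which is precisely why dedicated explanation and audit mechanisms are needed around them.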

Data Traceability

Understanding the origin and quality of the data used to train and validate algorithms is key to the ethics of AI. These datasets may contain sensitive information and may be subject to biases or inaccuracies. For instance, AI systems can track the lineage of financial data, documenting the sources from which information has been received, such as market feeds, customer transactions, or economic indicators. Additionally, AI algorithms can be trained to flag potential biases or inaccuracies within the data, providing transparency into the quality and integrity of the information used to arrive at a decision or action.

Decision Traceability

Tracing the decision-making process of AI systems to understand how inputs are transformed into outputs is also key for transparency. This includes documenting the sequence of calculations, rules, or features used by AI algorithms to generate predictions or recommendations.

In risk assessment, AI algorithms can provide a clear audit trail of the factors and data points that contributed to the assessment of financial risk, offering transparency into the decision-making process. This can include the identification of key variables, statistical models used, and the underlying rationale for risk predictions.

User Interface Design

AI systems should have user-friendly interfaces that facilitate understanding and interaction. Visualisations, dashboards, and interactive tools can help users explore AI outputs, interpret results, and gain insights into underlying patterns or trends.

Communication and Disclosure

When using AI, financial institutions should explain transparently to external stakeholders, including clients, regulators, and the public, how AI is used in their products and services, as well as its limitations, risks, and potential impacts.

Continuous Monitoring and Auditing

Regular assessments of model performance, data quality, and compliance with ethical and regulatory standards are needed to maintain transparency and ensure the responsible deployment of AI. This could form part of a compliance monitoring programme, or similar review, to ensure businesses are complying and managing their risks on an ongoing basis.

Bias and Fairness

Bias in AI refers to systematic errors or inaccuracies in algorithmic predictions that typically result from skewed or unrepresentative training data, imbalanced data, algorithmic design choices, or implicit assumptions. These biases can manifest in various forms, such as underrepresentation or misrepresentation of certain groups, unequal treatment based on protected attributes, or reinforcement of existing societal stereotypes.

Fairness in AI refers to the absence of discrimination or bias in algorithmic decision-making processes. A fair AI system ensures that individuals from different groups are treated similarly under similar circumstances. This may involve defining appropriate fairness criteria, such as ensuring that the false rejection rates are balanced across different demographic segments, and developing AI algorithms that satisfy these criteria.

One of the key concerns in the use of AI algorithms is the potential for biases. These biases can stem from historical data, societal prejudices, or even the design of the algorithms themselves. A common example: historical lending data may disproportionately favour certain demographic groups, and AI algorithms trained on this data may perpetuate those biases. As a result, individuals from historically disadvantaged groups may face obstacles in accessing credit, despite their creditworthiness. To address concerns like these, financial institutions need to implement measures to mitigate biases in AI algorithms used as part of their operations.

One approach involves using bias detection and mitigation techniques to assess the fairness of AI algorithms and rectify any potential disparities. As part of the development of systems, AI developers can incorporate fairness-aware machine learning methods to actively address biases during the training and deployment of AI algorithms. In addition, regulatory bodies can work with industry stakeholders to establish guidelines and standards for fair and transparent use of AI within financial services. This may involve requiring financial institutions to regularly assess and report on the fairness of their AI algorithms, ensuring that decisions are free from discriminatory biases.

Diverse and Representative Data

The first step in addressing bias in AI models is ensuring that training data used to develop algorithms is diverse and representative of the full range of the population. This involves collecting data from a wide range of sources and curating datasets accordingly. For example, in the context of facial recognition technology, ensuring diversity includes collecting a wide range of facial data from individuals with varying skin tones, ethnicities, ages, and gender identities. By curating these datasets, AI algorithms can be trained to recognise and classify facial features with greater accuracy and fairness, minimising the risk of biased outcomes.

Bias Detection Techniques

Statistical methods and analysis tools can be used to identify and quantify biases present in AI algorithms. These techniques may involve analysing the distribution of outcomes across different demographic groups, assessing the impact of sensitive features on algorithmic predictions, or conducting fairness audits to identify disparities or discriminatory patterns. Common fairness metrics include disparate impact, equalised odds, and predictive parity, which evaluate whether the outcomes of an algorithm exhibit parity or fairness across different groups.

For example, for automated CV screening for job applications, bias detection techniques may involve analysing the distribution of interview call-backs across various demographic groups to identify potential disparities.
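That call-back analysis can be made concrete with the disparate-impact metric mentioned above. The sketch below computes it on invented screening outcomes; the 0.8 ‘four-fifths’ threshold is a widely used rule of thumb for flagging disparities rather than a universal legal standard.

```python
# Bias-detection sketch: disparate-impact ratio on invented CV-screening
# outcomes. Ratios below the four-fifths (0.8) rule of thumb warrant review.

def selection_rate(outcomes, group):
    """Fraction of applicants in `group` who received a call-back."""
    rows = [o for o in outcomes if o["group"] == group]
    return sum(o["callback"] for o in rows) / len(rows)

def disparate_impact(outcomes, protected, reference):
    """Ratio of the protected group's selection rate to the reference group's."""
    return selection_rate(outcomes, protected) / selection_rate(outcomes, reference)

# Invented data: group A gets 40/100 call-backs, group B only 20/100
outcomes = (
    [{"group": "A", "callback": 1}] * 40 + [{"group": "A", "callback": 0}] * 60 +
    [{"group": "B", "callback": 1}] * 20 + [{"group": "B", "callback": 0}] * 80
)
ratio = disparate_impact(outcomes, protected="B", reference="A")
print(f"{ratio:.2f}")  # 0.50, well below 0.8, so the screen warrants review
```

Related metrics such as equalised odds and predictive parity follow the same pattern: compare an outcome statistic across demographic groups and flag material gaps.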

Human Oversight and Intervention

Integrating human oversight and intervention mechanisms into AI systems can help detect and correct biases that may not be apparent from data alone. Instead of relying solely on accuracy, using alternative evaluation metrics can provide a more comprehensive assessment of model performance, especially on imbalanced datasets.

When including human oversight as a bias prevention mechanism, it is important to recognise that humans can also be blind to biases and therefore shouldn’t be the sole measure used to defend against biases in AI algorithms.
