AI: What are LLMs – and what can they do for asset management?

AI models such as Large Language Models (LLMs) will revolutionise working and learning methods. This also applies to asset management. In this interview, Stefan Fröhlich explains how these models work and where their opportunities and challenges lie.

Interview with Stefan Fröhlich, Portfolio Manager Systematic Equities


Stefan, Large Language Models (LLMs) were developed to understand and generate human language. How do they do this?

LLMs are neural networks that specialise in predicting the next word in a text. They are based on the Transformer architecture, which Google researchers introduced in 2017 in the paper "Attention Is All You Need". The model consists of several components, including "word embedding", i.e. the conversion of words into numerical vectors. An encoder then processes these vectors and a decoder generates text from them. The key element behind the breakthrough of LLMs, however, is the "attention head". It evaluates the importance of each word relative to all other words in the sequence in order to understand the context and filter out the relevant information.

Sources: Zürcher Kantonalbank, Vaswani et al. (2017): «Attention Is All You Need»
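The attention mechanism described above can be sketched in a few lines. This is a minimal, illustrative version of the scaled dot-product attention from Vaswani et al. (2017), with a toy three-word "sentence"; the random embeddings stand in for real learned word vectors.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """One attention head: weigh each word's value vector by its
    relevance to every other word in the sequence."""
    d_k = Q.shape[-1]
    # Similarity of each query with each key, scaled for stability
    scores = Q @ K.T / np.sqrt(d_k)
    # Softmax turns the scores into weights that sum to 1 per row
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # Each output is a relevance-weighted mix of all value vectors
    return weights @ V, weights

# Toy "sentence" of 3 words, each embedded as a 4-dimensional vector
rng = np.random.default_rng(0)
X = rng.normal(size=(3, 4))
out, w = scaled_dot_product_attention(X, X, X)
print(w)  # each row: how much one word attends to the others
```

In a real Transformer, many such heads run in parallel, and the query, key and value vectors are produced by separate learned projections rather than being the raw embeddings themselves.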

Training LLMs requires powerful graphics processing units (GPUs). How does the training work?

Training is a two-stage process: "pre-training" of the base model followed by "fine-tuning". During pre-training, the model is typically trained once a year with large amounts of data, often drawn from the internet. This requires thousands of GPUs and is correspondingly expensive. During fine-tuning, experts create model answers and evaluate the LLM's responses. Fine-tuning is carried out much more frequently, for example once a week, and human feedback enables continuous improvement of the model. Through this combination of machine learning with human feedback and oversight, LLMs can be optimised to give better answers and to behave in an ethically correct way.
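The two stages can be illustrated with a deliberately tiny toy model. This is not how a real LLM is trained; the `ToyLM` class, its bigram statistics and the feedback scores are hypothetical stand-ins that only mirror the idea of statistical pre-training followed by human-rated fine-tuning.

```python
from collections import Counter, defaultdict

class ToyLM:
    """A tiny bigram 'language model': predicts the next word most
    frequently seen after a given word. Purely illustrative."""
    def __init__(self):
        self.counts = defaultdict(Counter)

    def pretrain(self, corpus):
        # Stage 1: learn next-word statistics from raw text
        for sentence in corpus:
            words = sentence.split()
            for a, b in zip(words, words[1:]):
                self.counts[a][b] += 1

    def predict(self, word):
        nxt = self.counts.get(word)
        return nxt.most_common(1)[0][0] if nxt else None

    def fine_tune(self, feedback):
        # Stage 2: human raters up- or down-weight continuations
        for word, continuation, score in feedback:
            self.counts[word][continuation] += score

lm = ToyLM()
lm.pretrain(["the model hallucinates",
             "the model hallucinates",
             "the model predicts the next word"])
print(lm.predict("model"))   # "hallucinates" – learned from raw data
lm.fine_tune([("model", "predicts", +5),
              ("model", "hallucinates", -5)])
print(lm.predict("model"))   # "predicts" – steered by human feedback
```

The point of the sketch: pre-training fixes what the model has seen, while fine-tuning with human ratings shifts which of the learned continuations it prefers.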

The human brain stores information by changing the synapses between brain cells. Where is knowledge stored in LLMs?

An LLM consists of many interconnected neural layers. These connections have different strengths, which are represented by so-called weights. Each weight determines how strongly the signal is transmitted from one neuron to the next. These weights are optimised during the training process with large amounts of data. The knowledge in large LLMs is thus stored in the weights and connections as well as in the structure of the neural networks. GPT-4 reportedly has around 1.7 trillion weights. By comparison, the human brain has around 100 trillion synapses. However, this gap could close quickly, as the development of LLMs is progressing rapidly and the models are getting bigger and bigger.
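The role of weights can be made concrete with a single layer. This is a minimal sketch, not an LLM: four input neurons feed three output neurons, and each entry of the `weights` matrix is one connection strength of the kind described above. The random values stand in for what training would learn.

```python
import numpy as np

# Each weight[i, j] sets how strongly input neuron i's signal
# is passed on to output neuron j.
rng = np.random.default_rng(1)
weights = rng.normal(size=(4, 3))   # 4 inputs -> 3 outputs
bias = np.zeros(3)

def layer(x):
    # Weighted sum of incoming signals, then a nonlinearity (ReLU)
    return np.maximum(0.0, x @ weights + bias)

x = np.array([1.0, 0.5, -0.2, 0.1])
print(layer(x))   # the three output neurons' activations
```

Training nudges every such weight so that the network's outputs fit the data better; in a large LLM, "knowledge" is nothing more than the learned values of all these weights, stacked across many layers.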

The psychologist Daniel Kahneman describes human thinking in terms of two systems. The instinctive system works quickly and automatically, without conscious effort. The logical system, on the other hand, enables us to solve complex tasks step by step. What does this look like with LLMs?

Current LLMs such as GPT-4 are impressively powerful in the area of the first system. They can process large amounts of text quickly and are particularly suitable for tasks that require fast, automatic responses. However, LLMs lack a true equivalent to Kahneman's logical system. They cannot solve problems in the same step-by-step, logical way as a human being who proceeds consciously and methodically. AI research is therefore increasingly moving towards AI operating systems (AI-OS), which integrate various specialised artificial intelligences and applications and coordinate their collaboration. These specialised apps are focused on specific tasks, and by integrating them with the AI-OS, complex problems can be solved both quickly and intuitively as well as logically and methodically.
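The coordination idea behind an AI-OS can be sketched as a simple dispatcher. Everything here is hypothetical: the app names, the toy "calculator" and "summariser", and the pre-computed plan stand in for an LLM that would decompose a problem into steps and route each step to a specialised system.

```python
def calculator_app(task):
    # Toy arithmetic evaluator (no builtins, expressions only)
    return eval(task, {"__builtins__": {}})

def summary_app(task):
    # Stand-in summariser: truncates instead of summarising
    return task[:40] + "..."

APPS = {"calculate": calculator_app, "summarise": summary_app}

def ai_os(plan):
    """Work through a step-by-step plan, delegating each step to
    the specialised app that can handle it – the methodical,
    'System 2'-like part of the architecture."""
    return [APPS[app_name](task) for app_name, task in plan]

plan = [("calculate", "2 * 21"),
        ("summarise", "Quarterly revenue grew on strong demand in Europe")]
print(ai_os(plan))
```

The design point is the separation of concerns: the coordinator only decides *which* specialist handles *which* step, so intuitive tasks and methodical tasks can each go to the component best suited for them.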

Are AI systems actually intelligent in our sense?

This is a hotly debated topic with sometimes contradictory opinions. AI models do not have an opinion or awareness of their own. But they can offer an informed perspective based on current discussions and research. AI pioneer Geoffrey Hinton illustrates this with the following example:

Sources: CBS Mornings, 1.3.23, Zürcher Kantonalbank

The correct answer given by GPT-4 shows that LLMs now provide astonishing answers. In order to correctly predict the next word in a sentence, an LLM must understand the text. However, LLMs are not yet able to understand the world as comprehensively and deeply as humans. A major difference lies in how LLMs and humans learn. LLMs are trained almost exclusively on text data. Humans, on the other hand, learn not only through language, but above all through direct experience. This experience enables us to understand complex relationships, feel emotions and find creative solutions.

LLMs not only open up opportunities, but also harbour dangers. The AI mastermind you mentioned, Geoffrey Hinton, recently warned that superintelligence will arrive sooner than expected. How do you categorise this?

This warning is all the more remarkable given that Geoffrey Hinton is known as a cautious and thoughtful researcher. It is aimed at motivating governments and companies to take the development and regulation of artificial intelligence seriously. The main danger Hinton sees lies in the fact that AI models are, in a sense, immortal: they can be reproduced at will and copied within seconds. Humans, on the other hand, can only exchange information slowly, and their knowledge is lost when they die. This asymmetric dynamic harbours significant risks, as powerful AI systems could be misused without adequate controls, for example to spread disinformation, manipulate opinions, carry out cyberattacks or violate privacy.

Asset management could also benefit from the use of transformers and LLMs. What are application examples?

The Transformer architecture underlying LLMs can be used to predict time series and future trends. In the Systematic Equity Team in Asset Management at Zürcher Kantonalbank, we are currently developing a model for forecasting equity returns with the help of Transformers. LLMs can also be used to analyse companies' quarterly reports. By assessing the sentiment in company reports, news and conference calls, investors can gain a better understanding of the mood surrounding a company and its potential future development.
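The principle behind sentiment scoring can be shown with a deliberately simple dictionary-based scorer. A real pipeline would use an LLM rather than keyword lists, and the word lists below are illustrative, not a production lexicon; but the idea is the same: condense report language into a signed score.

```python
# Toy sentiment lexicons – illustrative only
POSITIVE = {"growth", "record", "beat", "strong", "improved"}
NEGATIVE = {"decline", "miss", "weak", "impairment", "loss"}

def sentiment_score(text):
    """Return a score in [-1, 1]: +1 purely positive language,
    -1 purely negative, 0 if no sentiment words are found."""
    words = [w.strip(".,").lower() for w in text.split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    return 0.0 if pos + neg == 0 else (pos - neg) / (pos + neg)

print(sentiment_score("Record revenue and strong margin growth."))  # 1.0
print(sentiment_score("Weak demand led to a loss."))                # -1.0
```

Scored over many reports and calls, such a signal can be tracked over time or compared across companies; an LLM-based scorer simply replaces the crude word counting with a contextual reading of the text.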

So LLMs also support equity research?

Indeed, support for stock analysis is another important area of application for LLMs. They can process and summarise large volumes of data from various reports. This provides analysts with a comprehensive source of information and facilitates the processing and evaluation of the available data, enabling them to make more informed decisions and base their recommendations on a broader information base.