AI in Investment Management

 

Artificial intelligence (AI) has emerged as a significant factor in investment management. The intersection of fundamental investment and machine learning prompts a crucial question: what can each field learn from the other? Each, it turns out, has much to gain. BALANSTONE, a leading research organization, has undertaken a comprehensive analysis of machine learning and deep neural network technologies in recent years, an effort that has culminated in the development of a new AI tool for fundamental investment. In this article, we present our findings and discuss the critical elements involved in developing AI for fundamental investment research.

Evolution of AI (ML and DNN)

The realm of artificial intelligence (AI) has witnessed significant progress in recent years, and machine learning (ML) has advanced alongside it. Text analysis and recommender systems have evolved substantially over the last decade. Deep neural networks (DNN) in particular have made remarkable strides, with numerous successful applications reported in areas such as natural language processing and image recognition. The recent success of deep reinforcement learning is especially notable: it has outperformed the world's top-ranked professionals in games such as Go, a significant milestone on the path toward super-human-level AI.

The success of artificial intelligence (AI) has been driven by various catalysts, including advancements in model design, the availability of large datasets for training, and improvements in computing power. These elements have coalesced to give rise to the theme of “AI replaces humans.” 

Model design has evolved to incorporate sophisticated algorithms that enable machines to perform tasks once exclusively the domain of humans. This has been facilitated by the vast amounts of data now available to train machine learning models, data that has become increasingly accessible with the advent of technologies such as cloud computing and the Internet of Things.

https://goo.gl/FBX5j3

Fundamental Active Investment

Artificial intelligence (AI) in investment is not a new concept, but professionals in the fundamental investment industry are still wondering how the technology will transform their work and how it can revolutionize active investment. The potential impact of AI is already evident in the stock market, and the industry is undergoing a structural shift. To increase returns and precision while minimizing risk, fundamental managers have intensified the bottom-up information-gathering process and introduced new strategies such as focus and engagement. Some, however, have reached the limit of acquiring legitimate early information, leading to insider trading cases. Meanwhile, rule-based and low-cost funds are gaining popularity, bringing increased scrutiny of the high fees charged by hedge funds and private equity. Although the fundamental approach to investment has diversified, the core approach of fundamental research remains mostly unchanged. Innovation in fundamental investment research has not progressed at the same pace as in other industries, even as the industry seeks new opportunities such as ESG and other initiatives.

During our exploration into the feasibility of utilizing AI for fundamental investment, we encountered a question that persisted both before and after our research. It is important to note that fundamental investment differs structurally and conceptually from what is typically referred to as ‘quantitative investment.’ Through our journey of development, we have gained several unique insights and clues that have helped us better understand the role of AI in this field.

Artificial Fundamental Investment Research Intelligence (AFIRI) 

Over the years, algorithmic program trading has made significant progress, and the application of AI to such algorithms has produced outstanding results. The technology swiftly evaluates trading positions and selects the most appropriate next moves without any human intervention. This system is capable of executing high-speed trading that is impossible for any human to accomplish.

There are even performance comparison sites available that evaluate the effectiveness of AI trading platforms. These sites provide a development environment and allow participants to compete, earn rewards, and contribute crucial information. As technology continues to advance, trading will become more and more automated, and human traders will have to adapt to new roles as developers and managers of trading machines.

AI-powered systems are increasingly being used in investment research. However, it is important to note that AI for fundamental investment research differs from automated AI trading systems. Our primary objective was to develop a process for intrinsic value-oriented investment. This means that we did not aim to learn from successful AI trading strategies and apply them to our process. 

It is essential to understand that intrinsic-value-oriented fundamental investment is not built on a string of consecutive successful trades. Rather, intrinsic value is the dominant factor that governs the process of fundamental investment management. Price, by contrast, is merely a tag attached to the business, which is the actual system that delivers excess returns. Because of speculative activity and the market-making function, price naturally fluctuates to balance demand and supply continuously. Price movements must therefore be understood as a crucial factor when acting, but they are not the dominant, primary factor governing fundamental investing decisions.

To clarify further, let us consider what would happen if a portfolio’s objectives were set to maximize the short-term risk-adjusted return. Such objectives would be based on the rationale that short-term gains should be prioritized over long-term value creation. This approach would likely lead to a focus on short-term trading strategies that aim to achieve quick profits, without considering the intrinsic value of the underlying assets. Consequently, this approach may not lead to sustainable long-term returns.

Consider the following view: “In order to achieve success in the long term, it is imperative to succeed in the short term as well. This is because the long-term return is essentially a cumulative sum of short-term returns. As a result, it is critical to manage and regulate short-term returns and to evaluate them as the most vital performance indicator. By doing so, we can guarantee a prosperous investment process in the long term.”

We question this view. The idea that there is no real difference between short-term trading and long-term investment, beyond the time horizon, does not hold: the process and approach of short-term trading are completely different from those of fundamental investment. The best investment is not necessarily the same as a successful trading strategy. A trading strategy is focused on predicting the price; it is mainly a price-oriented decision generator, and the intrinsic value of an asset plays no significant role in its decisions. Accordingly, AI in trading is most often used to analyze and predict relative price behaviors across different assets.

Our research has revealed that an AI framework designed and optimized for trading purposes cannot be reused for fundamental investment research, because the objectives differ. AI can still assist traders in executing orders, depending on the execution process. To keep each AI associated with its specific task, it helps to give each a distinct name. For instance, automated AI algorithmic trading can be termed Artificial Trading Intelligence (ATI). Similarly, when AI is used for fundamental investment management and research, it should be referred to as Artificial Fundamental Investment Research Intelligence (AFIRI). This ensures that each AI is tied to the specific process it serves, and that the process is accurately modeled for optimal performance.

Framing the General Process of AI

Gaining a comprehensive understanding of the AI process flow is crucial for effective data processing in each step of the pipeline. It can help to clarify the types of data required and how they should be processed. The AI process flow typically consists of five fundamental steps that must be followed to achieve the desired results.

  1. Environment
  2. Representation of Environment
  3. ML/DL Process
  4. Reasoning
  5. Prediction

Creating an AI model is a complex process that involves several steps. The first step involves selecting a specific activity to focus on and creating a representation of it that can be processed by machine learning frameworks or deep neural networks. This representation is created by converting the activity into a format that the model can understand.

Once the representation is created, hyperparameters are set and tuned to optimize the model’s performance. A dataset is then fed into the training process to develop the model’s knowledge while minimizing error: the model’s outputs are compared against the data, and the resulting errors are used to update the model’s parameters.

After the model is trained, it is tested to ensure that it can accurately predict outcomes on new datasets. The model’s performance is evaluated, and any necessary adjustments are made to optimize its accuracy.
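
To make the flow concrete, the following is a minimal, purely illustrative sketch, assuming Python with scikit-learn and synthetic data (none of which is prescribed by the process above), that walks through the five steps: a toy environment, a numeric representation, an ML training process, reasoning over held-out results, and prediction on new observations.

    # Minimal sketch of the five-step AI process flow described above.
    # Assumes scikit-learn; all data and parameter choices are illustrative only.
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.preprocessing import StandardScaler
    from sklearn.neural_network import MLPClassifier
    from sklearn.metrics import accuracy_score

    # 1. Environment: a toy activity producing raw observations and outcomes.
    rng = np.random.default_rng(0)
    raw_observations = rng.normal(size=(1000, 20))
    outcomes = (raw_observations[:, :5].sum(axis=1) > 0).astype(int)

    # 2. Representation: convert the environment into a format the model understands.
    scaler = StandardScaler()
    features = scaler.fit_transform(raw_observations)
    X_train, X_test, y_train, y_test = train_test_split(
        features, outcomes, test_size=0.2, random_state=0)

    # 3. ML/DL process: set hyperparameters and train while minimizing error.
    model = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
    model.fit(X_train, y_train)

    # 4. Reasoning: evaluate on held-out data and adjust if accuracy is poor.
    accuracy = accuracy_score(y_test, model.predict(X_test))
    print(f"held-out accuracy: {accuracy:.2f}")

    # 5. Prediction: apply the trained model to new observations.
    new_observation = scaler.transform(rng.normal(size=(1, 20)))
    print("prediction:", model.predict(new_observation))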

Building AFIRI starts with framing the investment process and identifying its distinct characteristics so that the right environment is selected for the model. Instead of simply expanding the dataset, we focus on the right elements, which reduces the time required to tune the model’s hyperparameters and yields a more robust neural network. By carefully selecting the environment and optimizing the model’s parameters, we can create an investment AI that makes accurate predictions and delivers valuable insights.

Artificial intellectualization is a challenging process that requires balancing the cognitive functions of artificial intelligence against the human intelligence that serves as its role model. To create AFIRI, we need to carefully analyze the current process of value-oriented investment decisions and understand its properties. This gives the intellectualization approach a solid and consistent framework.

We also need to understand the gap in the decision process between fundamental investment research and ML/DNN tasks. By doing so, we can compare and contrast the two steps of the decision-making process that are similar but distinct from each other. This understanding will help us create a more effective investment AI that can provide valuable insights and make accurate predictions.

AI in Investment Management 2

Difficult Judgement and Complex Decision

In order to make sound decisions, two crucial steps must be understood: difficult judgment and complex decision-making. This framework can also be applied to assess the feasibility of autonomous driving functions, which some have hailed as the harbinger of a perfect world, a claim about which we remain skeptical. Despite the apparent similarity between difficult judgment and complex decision-making, it is important to ask the right question when evaluating AFIRI’s approach. Specifically, we need to consider:

“Is fundamental investment research a difficult judgment or a complex decision?”

To gain a deeper understanding of the intricacies involved in making complex decisions and difficult judgments, one can look to the field of artificial intelligence. In particular, examining the cases of image recognition and AlphaGo can provide valuable insights into the challenges and capabilities of AI. By delving into these examples, we can gain a greater appreciation for the incredible feats that can be accomplished through the power of advanced computing and machine learning.

Image Recognition

Developing a pipeline that can effectively handle the parameters of the original data is one of the most challenging aspects of training a network for image recognition. To overcome this challenge, the introduction of the convolutional neural network architecture and other network structure components, such as activation functions, regularization, and residual learning, has been a significant breakthrough.

Two-dimensional image data is represented digitally as a vector of pixel values. Using a plain-vanilla deep neural network for image recognition, however, would result in an excessively large network that would be difficult to handle. For instance, a high-definition 1920 × 1080 image comprises over 2 million pixels. If the first fully connected layer of a plain-vanilla DNN contained around 100k neurons (roughly a 1/20 reduction from the 2 million inputs), that single layer alone would require on the order of 2 × 10^11 weights, followed by further layers with generally decreasing numbers of neurons.

Training such a network with millions of pictures as a training dataset would require an unrealistically large computing capacity, making it practically impossible to complete the job. Therefore, convolutional neural networks were introduced as a solution to the problem. They are designed to handle the parameters of image data more effectively than traditional deep neural networks.
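
A rough calculation makes the scale problem concrete. The sketch below compares first-layer parameter counts for a fully connected network and a convolutional network on a 1920 × 1080 single-channel image; the 100k-neuron first layer and the 64 filters of size 3 × 3 are assumptions made only for the sake of the arithmetic.

    # Back-of-the-envelope comparison of first-layer parameter counts for a
    # 1920 x 1080 grayscale image; layer sizes here are illustrative assumptions.
    pixels = 1920 * 1080                      # ~2.07 million input values

    # Plain fully connected first layer with 100k neurons: every neuron is
    # wired to every pixel, so the weight count grows multiplicatively.
    dense_neurons = 100_000
    dense_params = pixels * dense_neurons + dense_neurons   # weights + biases
    print(f"fully connected first layer: {dense_params:.3e} parameters")   # ~2.1e11

    # Convolutional first layer: 64 filters of size 3x3 share their weights
    # across the whole image, so the count is independent of image size.
    filters, kernel = 64, 3
    conv_params = filters * (kernel * kernel * 1) + filters  # weights + biases
    print(f"convolutional first layer: {conv_params} parameters")          # 640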

AlphaGo

Board games such as Go pose a significant challenge due to the sheer number of possible sequences of moves, roughly b^d, where b is the game’s breadth (the number of legal moves per position) and d is its depth (the length of the game). For Go, b ≈ 250 and d ≈ 150, giving on the order of 250^150 possible sequences, which even the largest and most advanced computers cannot search exhaustively.
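
To put b^d into perspective, a few lines of arithmetic (illustrative only) show how large 250^150 actually is.

    # Rough magnitude of the Go game tree, b^d, with breadth b ~ 250 and depth d ~ 150.
    import math

    b, d = 250, 150
    digits = d * math.log10(b)                 # decimal digits in 250**150
    print(f"250^150 is a number with about {digits:.0f} digits")   # ~360 digits
    # For comparison, the number of atoms in the observable universe is commonly
    # estimated at around 10^80, i.e. a number with only about 81 digits.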

To address this challenge, AlphaGo introduced a unique hybrid approach that combined Monte Carlo Tree Search with a trained neural network, which served as an intuitive predictive engine for future legal positions of stones. Additionally, AlphaGo employed convolutional layers that treated the board position as a 19 × 19 image to create a representation of the position, reducing the search tree’s effective depth and breadth.

Over time, AlphaGo evolved into AlphaGo Zero, a version trained entirely through self-play without human game records, which in turn became the generalized board-game version, AlphaZero. The network is trained using reinforcement learning and random play, without any prior domain knowledge beyond the rules of the game.

One of the special features of board games is their simple and static rule set, which makes them an ideal task for training algorithms. Self-play simulations and self-generated data allow for unlimited reinforcement learning, making unsupervised reinforcement learning an effective approach. However, it is worth noting that this approach is not feasible for the real world due to its complexity and unpredictability.

What definition should be given to those tasks of difficult judgment?

Both tasks succeeded in cracking the key bottleneck of processing large volumes of data, aided by the increased processing power of computing systems. AlphaGo utilized a unique hybrid architecture with convolutional layers, while image recognition relied on various innovative architectural approaches, such as ReLU activation functions, dropout, shortcut connections, and convolutional layers.

It is worth noting that these approaches were uniquely developed to handle the specific datasets involved, rather than being general techniques for processing large volumes of data. The datasets in both cases shared the feature of ingesting all variables with possible causality, resulting in a complete and isolated dataset without the need for finding or adding missing relevant variables.

Furthermore, the datasets in both cases were homogeneous: Go is represented by the locations of black and white stones, while image recognition uses pixel data. While big data is commonly characterized by the 3Vs of volume, velocity, and variety, the datasets in these cases had only the first two, volume and velocity, leaving no need for new techniques to handle heterogeneous compositions of data.

Difficult Judgment

Our definition of difficult judgment for artificial intelligence is one that has the following three attributes: 1) a large processing volume, 2) complete representation, and 3) a homogeneous dataset.

While 2) and 3) are given, 1) presents the main challenge in a difficult judgment. However, the development of new pipelines and techniques on advanced hardware and semiconductors has helped overcome the challenge of processing large volumes.

The advancements in mathematical approaches have been remarkable, but they have also been enhanced by engineered pipelines and novel functions inspired by the cognitive and logical reasoning processes of humans. We will delve into this topic later. Additionally, the unique closed and controlled conditions have been a major driving force. Hence, comprehending the approaches and conditions that have successfully resolved difficult judgments can serve as a guide for promising new ideas and directions in the future.

Complex Decision

It is important to understand that making complex decisions is not the same as making difficult judgments. The decision-making process involves two key steps: judgment and decision. The judgment stage is only one part of the entire process, and several other factors need to be considered before reaching a final decision.

There are several key characteristics of a complex decision in machine learning and deep neural networks. These elements include:

  • Incomplete Representation

There are two types of incompleteness when it comes to making complex decisions. Type 1 refers to the lack of explicit breadth in the input data; recall that the second attribute of difficult judgment, complete representation, is the opposite of this type of incompleteness. Complex decisions rely on aggregating fragmented dots of information and views, so an explicit definition of the full input set is generally not available. Complete input data, by contrast, covers all the information and views that have any causal relationship with the final decision.

However, in the real world, it is impossible to gather data that is explicit and complete enough to cover everything, and to process such a dataset ideally to train an ML/DNN model. This is where Type 2 incompleteness comes in. This type relates to unstructured, descriptive information that is difficult to convert into a dataset and has limited consistency and availability. Consider the impression left by an interview with a person. If the interviewer reports it using a template of scores in stereotypical categories, how accurately and effectively can that stylized format capture all the relevant nuances of the impression for the best decision-making? Is there a proven system to measure its effectiveness and to continuously improve it for better results?

Some might question whether unstructured-text mining techniques such as topic modeling (e.g., LDA) or sentiment analysis could be applied to Type 2 to perform the conversion. While such methods do convert analog, descriptive information, and have already established successful cases of AI processing unstructured data, they have a limited scope of application. They are intended to classify large volumes of unstructured data or to find commonalities within it, which makes it hard to identify significant information that appears only infrequently. They are especially good at finding what is popular within big data. Human professionals with domain insight, by contrast, can identify a small but significant clue of logic in a small piece of unstructured data within the big data lake, presumably by connecting multiple independent cognitive networks.

  • The Requirement for a Body of Highly Intellectual Knowledge

Making complex decisions involves dealing with uncertainty. To improve the quality of decision-making, humans undergo education to gain knowledge and expertise in multiple subjects. For example, a certified investment professional must have a body of knowledge (BoK) of investment management and research, covering areas such as economics, accounting, financial management, derivatives, and corporate governance. The BoK integrates these subjects and applies them to make informed predictions.

The BoK is not static: it is shaped by professional experience and continues to develop. This raises questions about the cognitive ability of humans to learn from experience and gain expert insight. In complex decision-making, the practitioner’s network advances through a kind of intellectual reinforcement learning accumulated as professional work experience.

From the perspective of a neural network structure, this means that multiple networks exist, each processing a specific subject. To integrate these subjects, the overall network would take the form of an ensemble learning structure (a minimal sketch of such a structure appears at the end of this list). However, this can make the processing pipeline complicated and difficult to set up and compute.

The neural network also needs the ability to self-improve and learn from the dynamic nature of BoK. There is growing interest in new architectural approaches such as progressive and sequential machine learning to improve neural network training.

  • Uncertainty

Uncertainty is a complex and challenging factor that can make decision-making difficult. In the investment industry, investors often differentiate between risk and uncertainty. While risk can be quantified and measured, uncertainty is difficult to predict and can be triggered by unforeseen events. For instance, past market crashes have been caused by a rise in fear of uncertainty.

To distinguish uncertainty from risk, we propose the ‘Haunted House Theory.’ The theory suggests that, when faced with an uncertain event, one may be unable to measure or predict what will happen. In such a situation, outcomes are unmeasurable and unpredictable, the very antithesis of quantifiable risk.

In complex decision-making, it can be challenging to choose the best course of action when faced with an unmeasurable uncertain event. For instance, when predicting using artificial intelligence, the lack of data volume and reliability can lead to a large margin of error. This is similar to walking in total darkness and having to decide which way to go when the floor may disappear at any moment.

Ultimately, uncertainty can lead to a high possibility of a large market crash. Thus, it’s essential to understand the difference between risk and uncertainty and to develop strategies to manage uncertainty effectively.
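
Returning to the ensemble structure mentioned under the body-of-knowledge point above, the following is a minimal sketch, assuming PyTorch, of how several subject-specific networks might feed a single combining decision head. The subject names, feature widths, and layer sizes are invented for illustration and are not a description of AFIRI's actual architecture.

    import torch
    import torch.nn as nn

    class SubjectNet(nn.Module):
        """A small network dedicated to one subject of the body of knowledge."""
        def __init__(self, in_features, out_features=8):
            super().__init__()
            self.net = nn.Sequential(nn.Linear(in_features, 16), nn.ReLU(),
                                     nn.Linear(16, out_features), nn.ReLU())

        def forward(self, x):
            return self.net(x)

    class EnsembleDecision(nn.Module):
        """Integrates the subject networks into a single decision output."""
        def __init__(self, subject_dims):
            super().__init__()
            self.subjects = nn.ModuleDict(
                {name: SubjectNet(dim) for name, dim in subject_dims.items()})
            self.head = nn.Linear(8 * len(subject_dims), 1)

        def forward(self, inputs):
            parts = [self.subjects[name](x) for name, x in inputs.items()]
            return torch.sigmoid(self.head(torch.cat(parts, dim=-1)))

    # Hypothetical subjects with different feature widths, and dummy input data.
    dims = {"economics": 5, "accounting": 12, "governance": 4}
    model = EnsembleDecision(dims)
    batch = {name: torch.randn(2, dim) for name, dim in dims.items()}
    print(model(batch).shape)   # torch.Size([2, 1])

Even in this toy form, the complications mentioned above are visible: every subject needs its own data pipeline, and the combining head must be adjusted whenever a subject network changes.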

Back to the Original Question

“Is fundamental investment research a difficult judgment or a complex decision?”

When considering an investment decision, it is essential to review the fundamental research activity. This can prove to be a challenging task, but it becomes much more manageable if you are an experienced bottom-up analyst or portfolio manager with a long-term investment horizon. Traders, on the other hand, might find it more difficult.

While the final decision will depend on the valuation and the reliability of the forecast, producing the financial forecast itself depends on various fragmented elements. These include qualitative and subjective information as well as quantitative data and the framework of the financial model. The qualitative information is descriptive and is formed by combining insights from various activities, such as meeting with company executives, evaluating the competitiveness and strategic risks of individual products and services, and analyzing their financial results.
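
For the purely quantitative side, one widely used framework is a discounted cash flow model. This article does not prescribe any particular model; the sketch below is only a minimal illustration, with hypothetical figures, of how a forecast feeds an intrinsic-value estimate.

    # Minimal, purely illustrative sketch of the quantitative side of a fundamental
    # forecast: project free cash flows and discount them to an intrinsic value.
    # All figures here are hypothetical assumptions.
    def intrinsic_value(projected_fcf, discount_rate, terminal_growth):
        """Sum of discounted forecast cash flows plus a discounted terminal value."""
        value = sum(fcf / (1 + discount_rate) ** year
                    for year, fcf in enumerate(projected_fcf, start=1))
        terminal = projected_fcf[-1] * (1 + terminal_growth) / (discount_rate - terminal_growth)
        return value + terminal / (1 + discount_rate) ** len(projected_fcf)

    # Hypothetical inputs: five years of forecast free cash flow, 8% discount rate,
    # 2% perpetual growth after the forecast period.
    print(round(intrinsic_value([100, 108, 115, 120, 123], 0.08, 0.02), 1))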

It is essential to understand that each investment case requires a different mix of information and data. Therefore, it is virtually impossible to define a complete data representation for investment judgment. Even if someone could set a complete dataset in the future, it would need significant cleansing or updating with new information/data.

Investment research requires a customized approach for every investment case and cannot be done with a one-size-fits-all method. As a result, it falls under Type 1 incompleteness and is unlikely ever to escape it.

In the case of Type 2, imagine presenting a fundamental investment case to a team. The aim of the presentation is to communicate the recommended action (or rating), valuation, and risk assessment, and receive approval from the team or portfolio manager. However, the data used in the investment decision-making process is not limited to figures alone. It also includes the structured logic supporting the certainty of those figures, which is even more important.

The information available is descriptive and lacks a clear structure. It is composed of multiple nuanced ideas that require careful consideration. However, relying on standardized categories to score this information is an approximation that might overlook the unique distinctiveness of fundamental investment research. This, in turn, could result in losing the invaluable essence of the descriptive information that supports intrinsic value.

Furthermore, there is a conflict between being explicit and being nuanced. The representation of data is often incomplete, unreliable, and inaccurate when it comes to capturing the subtleties of ideas. As a result, advanced fundamental investors face a significant challenge when trying to utilize new cutting-edge information technology while thinking and acting explicitly. They must be able to navigate these complexities while ensuring that they make informed decisions based on accurate and reliable data.

One innovative application of Latent Dirichlet Allocation (LDA) is worth noting here. It involves analyzing unstructured data from investment articles and market news, with the aim of enhancing Artificial Trading Intelligence (ATI) rather than Artificial Fundamental Investment Research Intelligence (AFIRI).

The machine learning algorithm used in this application analyzes the market herding activity, predicts market moves, and identifies what influences market trends. It is important to note that this analysis is not related to the intrinsic value of assets. Instead, the algorithm is designed to provide insights into the activities of the market, rather than building an AI perspective on the fundamental value.
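
As a generic illustration of this kind of processing, the sketch below, assuming scikit-learn and a handful of invented headlines, extracts topics from market-news text. The output surfaces common themes and market activity, which is exactly the popularity-oriented signal discussed above, not an assessment of intrinsic value.

    # Minimal sketch of LDA-style topic extraction from market news headlines,
    # assuming scikit-learn; the headlines and topic count are illustrative only.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.decomposition import LatentDirichletAllocation

    headlines = [
        "central bank raises rates as inflation stays elevated",
        "chipmaker beats earnings forecast on strong data center demand",
        "oil prices fall on weaker global growth outlook",
        "regulator probes merger in semiconductor industry",
        "bond yields climb after hawkish central bank comments",
    ]

    # Bag-of-words representation of the unstructured text.
    vectorizer = CountVectorizer(stop_words="english")
    counts = vectorizer.fit_transform(headlines)

    # Fit a small topic model and print the top words per topic.
    lda = LatentDirichletAllocation(n_components=2, random_state=0)
    lda.fit(counts)
    vocab = vectorizer.get_feature_names_out()
    for k, weights in enumerate(lda.components_):
        top = [vocab[i] for i in weights.argsort()[-5:][::-1]]
        print(f"topic {k}: {top}")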

Investment research is a vast field that encompasses many categories, and it requires a solid understanding of analytical frameworks, tools, and the latest industry insights. Moreover, having in-depth knowledge of specific industries is crucial to make informed investment decisions. Continuous learning is also necessary to keep up with the constantly changing landscape of industries and companies, as well as the latest analytical techniques.

Investment decisions are made in a world of uncertainty, where the outcome can never be predicted with complete accuracy. However, investors not only consider this uncertainty, but also evaluate and investigate it using a specific brain network. The result of this analysis is then used as an input vector for another decision network. Instead of being a condition of the decision, uncertainty becomes a target of analysis for the decision-making process.

In cases where uncertainty is substantial, such as during a major market crash, evaluation becomes challenging due to the lack of relevant information and data. Investors face the daunting task of making decisions with incomplete and uncertain information.

Overall, fundamental investment research is a complex decision-making process. It involves analyzing a vast amount of data and information to make informed decisions. This complexity has significant implications for developing DNN/ML as AFIRI. While new AI can answer difficult judgments by adopting a new network architecture and functions, complex decisions cannot be made easier using the same approach. Different approaches and expertise are required for development.

Is the development of AFIRI deemed an unattainable target? If not, what measures can be put in place to surmount the challenge? A valuable approach to unearthing clues that could aid in tackling this challenge lies in an exploration of the history of AI.

 

AI in Investment Management 3

Catalysts for Success: Cognitive and Logical Process

David Hubel and Torsten Wiesel were awarded the Nobel Prize in Physiology or Medicine in 1981 for their groundbreaking discoveries about how the human visual system processes information. Their findings were instrumental in solving the complex task of image recognition in AI.

Their research revealed that the visual cortex is made up of numerous small, independent receptive fields that work together to recognize complex patterns. They also discovered that individual visual neurons respond to specific line orientations and that different functions work in unison to recognize images.

Their research inspired the neocognitron, an artificial neural network used to recognize handwritten characters in the 1980s. The neocognitron was further developed into the LeNet-5 architecture in 1998, which has had a profound influence on the convolutional neural networks (CNNs) used in image recognition AI today.

The original AlphaGo, which used CNNs, was developed with two independent networks forming a pipeline for playing Go. The policy network determined the next position to play, while the value network evaluated the entire position. This dual-network architecture is similar to the logical process human players use when playing Go.

 

[Figure] The original AlphaGo: policy network and value network (Source: DeepMind Technologies Limited)
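
To make the dual-network idea concrete, here is a toy sketch, assuming PyTorch, of a separate policy network and value network operating on a board encoded as a 19 × 19 three-channel image. The layer sizes are illustrative and this is not DeepMind's actual architecture.

    import torch
    import torch.nn as nn

    # Policy network: proposes a probability distribution over the 361 board points.
    policy_net = nn.Sequential(
        nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
        nn.Flatten(), nn.Linear(16 * 19 * 19, 361), nn.Softmax(dim=-1))

    # Value network: scores the whole position with a single number in [-1, 1].
    value_net = nn.Sequential(
        nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
        nn.Flatten(), nn.Linear(16 * 19 * 19, 1), nn.Tanh())

    # A board encoded as a 19 x 19 "image" with 3 channels
    # (own stones, opponent stones, empty points); here an empty board.
    board = torch.zeros(1, 3, 19, 19)
    print(policy_net(board).shape)   # torch.Size([1, 361])
    print(value_net(board).shape)    # torch.Size([1, 1])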

 

Analysis of the Logical & Cognitive Process of Human

LeNet-5 and AlphaGo are two examples of AI models that were developed by taking inspiration from the human brain’s logical processes. These models were designed to tackle challenging decision-making problems that are difficult for both humans and machines. This is because the human brain has the advantage of millions of years of evolution, which has made it an efficient processor of complex tasks. However, despite having fewer neurons than the human brain, AI models can still learn and be inspired by human processes to develop more efficient decision-making models.

It’s important to note that AI models are not exact replicas of the human brain. They work differently, and we still don’t fully understand how human neurons work. However, we can still learn from and be inspired by human processes to develop successful AI models.

This approach is particularly relevant for complex decision-making, which is challenging even for top human experts. Unlike image or voice recognition tasks, complex decision-making requires a deeper understanding of human logic and expertise. As such, it’s crucial for fundamental investment managers and data scientists to work together to develop AI models that can accurately predict investment outcomes.

However, this is not a simple process. Investment managers tend to focus on individualized research processes, where they carefully analyze and identify investment opportunities. In contrast, data scientists prioritize big data and market forecasting techniques to make predictions. To create a successful AI model, both groups need to think strategically and acquire missing expertise from the other end. They must also define the value-orientation to prevent the AI from becoming a purely quantitative process tool. 

Overall, the success of AI models for investment research depends on breaking down silos and merging the expertise of investment managers and data scientists. This will require a collaborative effort and a willingness to learn from each other’s expertise. In the end, this will lead to the development of more efficient and accurate AI models that can enhance the fundamental investment research process.

Future of AI in Investment Management

The advent of a new generation of professionals who possess a profound understanding of investment research and technology expertise is poised to revolutionize the fundamental investment business. These individuals possess a unique blend of skills that can drive fundamental investment to new heights, making the goal of “AI in investment management” a reality. We are dedicated to continuing our research and development efforts, and we are eager to showcase our capabilities to partners who share our vision for the future of next-generation active fundamental investment.