Enhancing Business Verification with AI: A New Frontier in the Global Fight Against Fraud?
Damian Borth, Professor of Artificial Intelligence & Machine Learning at the University of St. Gallen, explains how harnessing open, standardized, and high-quality legal entity data within AI models can enable more transparent, efficient, and secure business interactions across the globe.
Author: Damian Borth, Professor of AI & ML at the University of St. Gallen
Date: 2024-04-29
In today's global digital economy, verifying the identity of legal entities has never been more critical or challenging. In response, there is growing interest in the potential for artificial intelligence (AI) technology to automate entity verification and monitoring. By increasing the efficiency and effectiveness of critical processes, the risk of fraud and other criminal activity can be reduced—contributing to a safer business environment for all.
Yet challenges remain. Many current AI applications are held back because the underlying data is not standardized, readily consumable, or shareable. This not only wastes valuable computing power but also compounds systemic errors.
Several recent trends, developments, and initiatives across data standards and transaction monitoring address the challenges that have traditionally inhibited anti-money laundering (AML) efforts. Can you explain the role AI and machine learning technology can play?
AI and machine learning hold considerable promise for addressing AML challenges by enhancing the efficiency and effectiveness of transaction monitoring and compliance processes. They can analyze vast datasets to identify complex patterns and anomalies that indicate fraudulent activity, significantly improving the detection of suspicious transactions. Moreover, AI can adapt and learn from new data, making it invaluable in an evolving AML landscape where fraudsters and monitoring entities try to outperform each other.
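To make this concrete, the sketch below shows one common pattern for flagging suspicious transactions: an unsupervised Isolation Forest over a few engineered features. The feature set, the synthetic data, and the contamination rate are illustrative placeholders, not a production AML configuration.

```python
# Minimal sketch: unsupervised anomaly detection over transaction features.
# All feature names, distributions, and thresholds are hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Toy transaction features: [amount, hour_of_day, transactions_per_day].
normal = rng.normal(loc=[120.0, 13.0, 4.0], scale=[40.0, 3.0, 1.5], size=(1000, 3))
suspicious = rng.normal(loc=[9500.0, 3.0, 40.0], scale=[500.0, 1.0, 5.0], size=(10, 3))
X = np.vstack([normal, suspicious])

# 'contamination' encodes the expected share of anomalies; in practice
# it would be tuned against historical, investigator-labeled cases.
model = IsolationForest(n_estimators=200, contamination=0.01, random_state=0)
labels = model.fit_predict(X)  # -1 = anomaly, 1 = normal

flagged = np.where(labels == -1)[0]
print(f"Flagged {len(flagged)} of {len(X)} transactions for human review")
```

Note that a model like this only ranks candidates; the flagged transactions still go to human analysts, which is exactly the human-oversight point raised below.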
How can combining AI technology and high-quality data help quantify and manage global business risks better?
AI and high-quality external data sources can significantly enhance risk management by improving the accuracy of business verification. AI algorithms can also automate the monitoring of legal entities' data, contributing to a safer global financial environment by minimizing the risk of fraud.
As business operations become more automated, how would you assess the balance between the added value and risks of AI technology?
The automation of identification processes, enabled by AI and machine learning, offers increased efficiency and improves data accuracy, enhancing regulatory compliance and bolstering trust in business transactions.
However, it might also introduce risks such as potential systemic errors, biases, or cybersecurity vulnerabilities. Balancing automation with human oversight and ensuring robust security measures is essential to mitigating these risks.
How important is open, reliable, standardized, and high-quality data for the AI community?
Simply put, it is foundational and has always been considered vital to successfully developing AI systems. Such data ensures that AI models are trained on accurate information, leading to more effective and trustworthy outcomes. Standardization facilitates interoperability among different AI systems and enhances the reproducibility of AI research. High-quality data also reduces biases and improves the models' decision-making capabilities, which is crucial for applications in sensitive areas such as finance and legal systems.
You have collaborated with GLEIF on a model to identify and suggest the correct legal form of an entity. Could you summarize the key findings?
First, it is important to recognize that identifying and understanding an entity's legal form is key to many financial and business processes. Yet the many different legal forms between and within jurisdictions introduce significant complexity. The ability to automatically identify the legal form of a company and link it to its corresponding Entity Legal Form (ELF) code can therefore unlock myriad benefits: increasing transparency, lowering risk, and improving operational efficiency.
Our collaboration with GLEIF resulted in an AI model, known as Legal Entity Name Understanding (LENU), capable of accurately predicting an entity's legal form using just its name and jurisdiction. The language model we trained learned to associate specific patterns in legal names and jurisdiction-specific nomenclature with the corresponding legal forms. The model's high accuracy showcases AI's potential to enhance the reliability of business data. This could not only speed up the LEI issuance process but also significantly reduce manual verification efforts.
We summarized our findings in a scientific research paper, “Transformer-based Entity Legal Form Classification.” The study highlights the significant potential of Transformer-based models in advancing data standardization and integration. Introducing the entity's legal form via standardized data items adds confidence to entity linkage tasks, enabling robust mappings across multiple datasets, since each entity can have only one legal form.
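As a rough illustration of the approach, not the actual LENU implementation, the sketch below sets up a transformer classifier that maps a jurisdiction-prefixed legal name to an ELF code. The model choice, the two-label set, and the example record are hypothetical; the real system is trained on the full GLEIF dataset against the ISO 20275 code list.

```python
# Hedged sketch of transformer-based legal form classification.
# The label subset and input record are illustrative placeholders.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

elf_codes = ["2HBR", "V2YH"]  # hypothetical label subset; full list is ISO 20275
tok = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-multilingual-cased", num_labels=len(elf_codes)
)

# Prepending the jurisdiction lets the model learn country-specific
# naming patterns (e.g., legal-form suffixes that only occur in DE).
text = "DE [SEP] Muster Beteiligungs GmbH"
inputs = tok(text, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits
pred = elf_codes[int(logits.argmax(dim=-1))]
print(f"Predicted ELF code: {pred}")  # meaningful only after fine-tuning
```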
How could standardized legal entity data, such as the LEI, contribute to the AI research and development ecosystem?
Standardized LEI data enriches AI research by providing a global, consistent dataset for training and testing AI models in financial and legal contexts. This uniformity improves model reliability across jurisdictions and enhances the performance of AI solutions. LEI datasets can also facilitate AI research in areas such as fraud detection, entity verification, and regulatory compliance. By serving as a benchmark, LEI data can play a key role in evaluating AI models in the financial industry.
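For example, standardized records can be pulled programmatically from GLEIF's public LEI API and reused as training or evaluation data. The sketch below assumes the documented v1 JSON:API endpoint; the country filter and page size are illustrative.

```python
# Minimal sketch: fetching standardized LEI records from GLEIF's public API.
# Endpoint and filter syntax follow GLEIF's documented JSON:API; the
# specific query parameters here are illustrative choices.
import requests

url = "https://api.gleif.org/api/v1/lei-records"
params = {"page[size]": 10, "filter[entity.legalAddress.country]": "CH"}
resp = requests.get(url, params=params, timeout=30)
resp.raise_for_status()

for record in resp.json()["data"]:
    entity = record["attributes"]["entity"]
    print(record["attributes"]["lei"], "|",
          entity["legalName"]["name"], "|",
          entity.get("legalForm", {}).get("id"))  # ELF code, if present
```

Because every record carries the same schema regardless of jurisdiction, such a feed can serve directly as a consistent benchmark corpus for entity-resolution and verification models.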
What is your message for the future?
The future of AI, empowered by standardized and open data, holds immense potential for transforming the financial and legal sectors. By harnessing this synergy, we can drive rapid improvements that make global financial systems more transparent, efficient, and secure. This evolution promises enhanced regulatory compliance, reduced fraud, and a deeper understanding of complex financial networks.
In summary, we will see and understand more about a world that is not easily accessible now.
Prof. Dr. Damian Borth is director of the Institute of Computer Science at the University of St. Gallen, where he holds a full professorship in Artificial Intelligence and Machine Learning (AIML). Previously, Damian was the founding director of the Deep Learning Competence Center at the German Research Center for Artificial Intelligence (DFKI) in Kaiserslautern, where he was also PI of the NVIDIA AI Lab at the DFKI.
Damian's research focuses on representation learning with deep neural networks in domains such as computer vision, remote sensing, and financial audit. His work has been awarded the ACM SIGMM Test of Time Award in 2023, the Google Research Scholar Award in 2022, the NVIDIA AI Lab at GTC 2016, the Best Paper Award at ACM ICMR 2012, and the McKinsey Business Technology Award in 2011. Currently, Damian serves as a member of the board of trustees at the International Computer Science Institute (ICSI) in Berkeley, California, the board of the German Data Science Society, the advisory committee of the Roman Herzog Institute, and the advisory board of the HSG Institute of Behavior Science & Technology.
Damian did his postdoctoral research at UC Berkeley and the International Computer Science Institute (ICSI) in Berkeley, where he was involved in big data projects at the Lawrence Livermore National Laboratory. He received his PhD from the University of Kaiserslautern and the German Research Center for Artificial Intelligence (DFKI). During that time, Damian was a visiting researcher at the Digital Video and Multimedia Lab at Columbia University in New York City, USA.