Computer vision, NLP, and other domains have seen notable success with machine learning (ML) approaches based on deep neural networks (NNs). However, the age-old tension between interpretability and efficiency presents formidable obstacles. The ability to question, understand, and trust deep ML approaches depends on their interpretability, often described as the degree to which a person can grasp how a conclusion was reached.
Bayesian networks, Boltzmann machines, and other probabilistic machine learning models are considered “white boxes” because they are inherently interpretable. One way these models support interpretation is by using probabilistic reasoning to uncover hidden causal links, which aligns with how human minds reason statistically. Unfortunately, state-of-the-art deep NNs outperform these probabilistic models by a considerable margin. It seems that current ML models cannot simultaneously achieve high efficiency and interpretability.
Thanks to the exponential growth of quantum and conventional computing, a new tool has emerged to address the efficiency-versus-interpretability conundrum: the tensor network (TN). A TN is a contraction of multiple tensors, with its network structure defining how the tensors are contracted.
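As a minimal sketch of what "contracting" a tensor network means (the shapes and three-tensor ring structure here are illustrative assumptions, not from the paper), consider three small tensors whose shared indices are summed over:

```python
import numpy as np

# A tiny tensor network: three tensors joined in a ring.
# Index labels follow einsum convention; shapes are illustrative.
A = np.random.rand(2, 3)      # indices (i, j)
B = np.random.rand(3, 4)      # indices (j, k)
C = np.random.rand(4, 2)      # indices (k, i)

# Contracting the network means summing over every shared index:
# result = sum_{i,j,k} A[i,j] * B[j,k] * C[k,i]
scalar = np.einsum("ij,jk,ki->", A, B, C)

# The same contraction done pairwise: two matrix products plus a trace.
check = np.trace(A @ B @ C)
assert np.isclose(scalar, check)
```

The network structure (which indices are shared between which tensors) fixes the result, but the order in which pairs are contracted can change the computational cost dramatically; choosing a good contraction order is a central concern in TN algorithms.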
A new paper from Capital Normal University and the University of Chinese Academy of Sciences examines the encouraging developments in TNs toward efficient and interpretable quantum-inspired ML. The authors' “TN ML butterfly” summarizes the benefits of TNs for quantum-inspired machine learning in two main areas: interpretability through quantum theories and efficiency through quantum methods. On the interpretability side, TNs combined with quantum theories such as entanglement theory and quantum statistics can support a probabilistic framework that goes beyond classical information-theoretic or statistical descriptions.
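As a toy illustration of the kind of entanglement-theoretic quantity such a framework draws on (the specific state and bipartition here are assumptions for illustration, not taken from the paper), the entanglement entropy of a two-qubit Bell state can be read off from its Schmidt decomposition:

```python
import numpy as np

# Bell state (|00> + |11>)/sqrt(2), written as a length-4 amplitude vector.
bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)

# Reshape into a matrix whose rows/columns index subsystems A and B.
psi = bell.reshape(2, 2)

# Singular values of this matrix are the Schmidt coefficients.
s = np.linalg.svd(psi, compute_uv=False)
p = s**2                                    # probabilities of Schmidt modes

# Von Neumann entanglement entropy in bits (epsilon guards log(0)).
entropy = -np.sum(p * np.log2(p + 1e-12))
```

For this maximally entangled state the entropy is 1 bit; a product state would give 0. Quantities like this give TN models a built-in, physically meaningful notion of how much correlation the model carries across any cut.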
On the efficiency side, quantum-inspired TN ML approaches can run efficiently on both classical and quantum computing platforms, thanks to robust quantum-mechanical TN algorithms and substantially improved quantum computing technology. In particular, pretrained generative transformers have recently seen remarkable development, leading to unprecedented increases in computational power and model complexity and presenting both opportunities and challenges for TN ML. In the face of new artificial intelligence (AI) built on pretrained generative transformers, the ability to interpret results will be more important than ever, allowing for more effective investigation, safer control, and better utilization.
The researchers believe that, as we move through the current noisy intermediate-scale quantum (NISQ) era toward true quantum computing, TNs are rapidly becoming a leading mathematical tool for investigating quantum AI from various angles, including theories, models, algorithms, software, hardware, and applications.
Dhanshree Shenwai is a Computer Science Engineer with solid experience in FinTech companies covering Finance, Cards & Payments, and Banking, and a keen interest in AI applications. He is excited to explore new technologies and advancements in today's evolving world that make life easier for everyone.