Tensor contractions are used to solve problems across many fields of research, including model counting, quantum circuits, graph problems, and machine learning. To minimize the computational cost, however, it is important to find a good contraction order. Consider computing the product of a sequence of matrices A, B, and C: the result is always the same, but the computational cost differs depending on the matrix dimensions and the order of multiplication. For tensor networks, the cost of contraction grows rapidly as the number of tensors increases, so the path that determines which two tensors to contract at each step is crucial for reducing computation time.
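The matrix-chain example can be made concrete by counting scalar multiplications. The sketch below uses hypothetical dimensions chosen to make the gap obvious; the dimensions and the helper `matmul_cost` are illustrative, not from the paper:

```python
def matmul_cost(p, q, r):
    # Multiplying a (p x q) matrix by a (q x r) matrix costs p*q*r
    # scalar multiplications.
    return p * q * r

# Hypothetical shapes: A is 10x1000, B is 1000x5, C is 5x500.
cost_AB_then_C = matmul_cost(10, 1000, 5) + matmul_cost(10, 5, 500)      # (AB)C
cost_BC_then_A = matmul_cost(1000, 5, 500) + matmul_cost(10, 1000, 500)  # A(BC)

print(cost_AB_then_C)  # 75000
print(cost_BC_then_A)  # 7500000
```

The two orders produce the same product, but (AB)C is a hundred times cheaper here, which is exactly the effect a good contraction path exploits at tensor-network scale.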
Previous work has focused on finding efficient contraction paths (CPs) for tensor hypernetworks. One existing method uses simulated annealing and a genetic algorithm, which outperform the standard greedy approach on smaller networks. A second method is graph decomposition, which uses line-graph (LG) and factor-tree (FT) techniques: LG applies structured graph analysis to find a contraction order, while FT is used in preprocessing to handle high-rank tensors. A third method combines reinforcement learning (RL) with graph neural networks (GNNs) to find an efficient path and has been evaluated on real and synthetic quantum circuits.
A team of researchers has introduced a novel method to improve tensor contraction paths using a modified standard greedy algorithm with an improved cost function. The cost function that the standard greedy algorithm (SGA) uses to select pairwise contractions at each step is simple, depending only on the sizes of the two input tensors and the output tensor. To overcome this limitation, the proposed method scores pairwise contractions using more information, providing multiple cost functions to cover a wide range of problems. The method outperforms the state-of-the-art greedy implementation in Optimized Einsum (opt_einsum) and, in some cases, outperforms methods such as hypergraph partitioning combined with greedy search.
The researchers used the SGA in opt_einsum to efficiently find CPs for large numbers of tensors. The CP is calculated in three phases:
- Computing Hadamard products, i.e., element-wise multiplications of tensors that share the same set of indices.
- Contracting the remaining tensors by selecting the lowest-cost pair at each step until all contractions are done.
- Computing outer products by selecting the pair that minimizes the sum of the input sizes at each step.
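The greedy pairwise-selection phase described above is what opt_einsum's `greedy` optimizer performs; NumPy exposes a similar greedy path finder through `np.einsum_path`, shown here as an illustrative stand-in with arbitrary shapes:

```python
import numpy as np

# Three tensors forming a chain contraction (illustrative shapes only).
A = np.random.rand(8, 32)
B = np.random.rand(32, 64)
C = np.random.rand(64, 4)

# optimize="greedy" picks the lowest-cost pairwise contraction at each step.
path, report = np.einsum_path("ij,jk,kl->il", A, B, C, optimize="greedy")
print(path)    # a list like ['einsum_path', (i, j), (i, j)] giving the pair chosen per step
print(report)  # a human-readable summary with estimated FLOP counts
```

Each tuple in `path` names the positions of the two operands contracted at that step, which is exactly the pairwise decision the cost function has to make.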
Furthermore, the modified greedy algorithm takes cost functions as parameters, unlike the SGA, which uses only one cost function. Different cost functions are then tried at runtime, and the most suitable one is selected to generate better CPs.
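The idea of parameterizing the greedy search by a cost function and picking the best one per problem instance can be sketched as follows. The cost functions, index sizes, and the simplifying assumption of a full contraction (no free output indices) are all hypothetical; the paper's actual cost functions are not reproduced here:

```python
from itertools import combinations
from math import prod

# Hypothetical cost functions scoring a candidate pair by the sizes of the
# output tensor and the two inputs; these are illustrative, not the paper's.
COST_FNS = {
    "size-removed": lambda out, a, b: out - a - b,  # the standard greedy heuristic
    "output-size":  lambda out, a, b: out,          # an illustrative alternative
}

def greedy_total_flops(inputs, sizes, cost_fn):
    """Run one greedy pass using cost_fn to rank candidate pairs; return the
    total scalar multiplications along the resulting contraction path.

    inputs: list of index sets, e.g. [{"a","b"}, {"b","c"}]
    sizes:  dict mapping each index label to its dimension size.
    """
    size = lambda ix: prod(sizes[i] for i in ix)
    remaining = list(inputs)
    total = 0
    while len(remaining) > 1:
        best = None
        for i, j in combinations(range(len(remaining)), 2):
            a, b = remaining[i], remaining[j]
            # Indices appearing in other tensors must survive this contraction.
            others = set().union(*(t for k, t in enumerate(remaining) if k not in (i, j)))
            out = (a | b) & others
            score = cost_fn(size(out), size(a), size(b))
            if best is None or score < best[0]:
                best = (score, i, j, out)
        _, i, j, out = best
        total += size(remaining[i] | remaining[j])  # multiplications for this pairwise step
        remaining.pop(j)
        remaining.pop(i)
        remaining.append(out)
    return total

# Try every cost function on the instance and keep the cheapest path.
inputs = [{"a", "b"}, {"b", "c"}, {"c", "d"}]
sizes = {"a": 8, "b": 2, "c": 64, "d": 4}
best_fn = min(COST_FNS, key=lambda n: greedy_total_flops(inputs, sizes, COST_FNS[n]))
```

The outer `min` over `COST_FNS` mirrors the multiple-cost-function idea: each candidate cost function drives its own greedy pass, and the one producing the cheapest path is selected for that problem instance.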
To evaluate the multiple-cost-function approach, CPs are calculated for 10 problems, several algorithms are compared, and failures are measured for each algorithm. The researchers conducted two experiments. In the first, 128 paths are computed with each algorithm for each problem instance; the objective is to assess solution quality without considering computation time. In the second, the limit is not the number of paths but the computation time, which is capped at 1 second; the goal is to show the trade-off between time and quality when quickly finding an efficient path in practical scenarios.
In conclusion, the researchers proposed a novel approach to improving tensor contraction paths using a modified standard greedy algorithm. A multiple-cost-function approach is used in which each cost function is evaluated for each problem instance and the best one is selected to compute the CP. Compared with opt_einsum's standard greedy and random algorithms, and with hypergraph partitioning combined with greedy search, the proposed method finds efficient CPs in less time and solves complex problems on which the other methods fail.
Review the Paper. All credit for this research goes to the researchers of this project.
Sajjad Ansari is a final-year student at IIT Kharagpur. As a technology enthusiast, he delves into the practical applications of AI, with a focus on understanding AI technologies and their real-world implications. His goal is to articulate complex AI concepts in a clear and accessible way.