# Tutorial

What are tensors and tensor networks to begin with? A tensor is a generalization of a matrix, i.e., a multidimensional array. The rank of a tensor is the number of indices required to label a component. Thus matrices have rank 2 (a row and a column index), and vectors have rank 1 (a single index addressing a component). In GuiTeNet, tensors are abstractly represented as shown in the figure. Each orange leg represents one dimension; thus, the rank of the tensor in the figure is 4. The zero inside the black disk serves as a unique identification number for the tensor.
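In NumPy terms, a tensor is simply an `ndarray`, and its rank is the number of axes. A minimal illustration (the dimension sizes are arbitrary example values):

```python
import numpy as np

# A rank-4 tensor: a multidimensional array addressed by four indices.
# The sizes (2, 3, 4, 5) are arbitrary example values.
T = np.zeros((2, 3, 4, 5))

print(T.ndim)   # the rank, i.e., number of indices: 4
print(T.shape)  # size of each dimension: (2, 3, 4, 5)
```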

A tensor network consists of a collection of tensors together with certain operations on them, such as contractions (which generalize matrix-matrix multiplications) or the splitting of a tensor via so-called SVD or QR decompositions. Tensor networks (for example, matrix product states) are an essential framework for the analysis and simulation of strongly correlated quantum systems; see the introduction by Orús (2013), the more in-depth overview by Verstraete, Murg, and Cirac (2008), or the review by Schollwöck (2011).

GuiTeNet facilitates the graphical construction of tensor networks and associated operations, and simultaneously generates source code for these operations.

## Create a new tensor

Add a new tensor to the network by drag-and-dropping the blue circle. Initially, the new tensor has no legs yet. The blue "create tensor" symbol reappears at its default location afterwards.

## Attach tensor legs

Attach a new leg to a tensor by "pulling" it out of the tensor, i.e., drag-and-dropping the tensor and simultaneously holding the Shift key. Each tensor and its legs can still be freely moved around within the GUI window.

## Contract tensors

Tensor contractions merge tensors by summing over shared indices, generalizing matrix-matrix multiplication. Recall that a "conventional" matrix-matrix multiplication $$C = A B$$ is defined as $$c_{ik} = \sum_j a_{ij} b_{jk}$$: the second "leg" (dimension) of $$A$$ and the first "leg" (dimension) of $$B$$ are contracted (summed over). The following figure shows a matrix-matrix multiplication as it appears in GuiTeNet; the matrices $$A$$, $$B$$ and $$C$$ are the tensors labeled 0, 1 and 2, respectively. The tips of to-be-contracted (connected) legs change color from green to red.
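The matrix-matrix multiplication above can be written as a contraction with NumPy's `einsum`, using its interleaved calling convention (the one GuiTeNet's generated code uses); the matrix sizes here are arbitrary example values:

```python
import numpy as np

# Matrix-matrix multiplication c_ik = sum_j a_ij * b_jk via einsum.
# Index label 1 (the "j" index) is shared and therefore summed over.
A = np.arange(6.0).reshape(2, 3)   # legs labeled (0, 1), i.e., (i, j)
B = np.arange(12.0).reshape(3, 4)  # legs labeled (1, 2), i.e., (j, k)

C = np.einsum(A, (0, 1), B, (1, 2), (0, 2))  # contract the shared leg 1

# The result agrees with the ordinary matrix product.
print(np.allclose(C, A @ B))  # True
```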

A general contraction allows several indices (legs) to be contracted simultaneously. Specify a contraction by connecting the tips of tensor legs, and click the "Contract" button to perform the contraction. The output tensor inherits the free legs (with green tips) from the input tensors; the ordering of these legs (red labels) follows the identification numbers of the input tensors, that is, first the free legs of the tensor with the smallest ID, then the free legs of the tensor with the second-smallest ID, and so on.

Besides visualizing tensor network operations, GuiTeNet also generates corresponding source code (currently Python/NumPy; support for additional programming languages is planned). Contractions translate conveniently into calls of NumPy's einsum function. For the above example, the generated code reads

```python
T4 = np.einsum(T0, (0, 1, 2), T1, (0, 1, 3), T2, (0, 4), T3, (4, 5), (2, 3, 5))
```

The manuscript (arXiv:1808.00532) explains the logic of the code in more detail.
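To see the generated line in action, one can feed it random tensors; the dimension sizes below are arbitrary example values, not prescribed by GuiTeNet:

```python
import numpy as np

# Runnable sketch of the generated einsum call with random input tensors.
# Axes sharing an index label are summed over: label 0 is shared by
# T0, T1, T2; label 1 by T0, T1; label 4 by T2, T3.  The free labels
# 2, 3, 5 form the output tensor.
d = [2, 3, 4, 5, 6, 7]  # sizes assigned to index labels 0..5 (example values)
T0 = np.random.rand(d[0], d[1], d[2])
T1 = np.random.rand(d[0], d[1], d[3])
T2 = np.random.rand(d[0], d[4])
T3 = np.random.rand(d[4], d[5])

T4 = np.einsum(T0, (0, 1, 2), T1, (0, 1, 3), T2, (0, 4), T3, (4, 5), (2, 3, 5))

print(T4.shape)  # free legs 2, 3, 5 -> (4, 5, 7)
```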

## Split a tensor

Splitting a tensor by a so-called QR or singular value decomposition (SVD) is a ubiquitous operation in tensor network algorithms. The first step is the "matricization" of the tensor: a subset of legs is grouped together into one "fat" leg and the remaining (complementary) legs into a second "fat" leg. The two fat legs are interpreted as the rows and columns of a matrix, which is then decomposed.
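In NumPy, matricization amounts to a transpose (to order the legs) followed by a reshape (to merge each group into one fat leg). A small sketch with example dimensions and an example grouping of legs (0, 2) as rows and (1, 3) as columns:

```python
import numpy as np

# Matricization of a rank-4 tensor: group legs (0, 2) into the rows
# and legs (1, 3) into the columns of a matrix.
T = np.random.rand(2, 3, 4, 5)

# Bring the row legs to the front, then merge each group into one fat leg.
M = T.transpose(0, 2, 1, 3).reshape(2 * 4, 3 * 5)

print(M.shape)  # (8, 15)
```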

The figure below illustrates this process (as it appears in GuiTeNet) for the QR decomposition of a tensor with five legs. First, right-click a tensor to initiate the splitting operation. An overlay window then asks for the ordering and partitioning of the dimensions attributed to the rows and columns in the matricization process. In the example, the "row" consists of dimensions 0, 3, 2 (in this order) and the "column" of dimensions 1, 4 (in this order). After the decomposition, the resulting Q and R matrices are reshaped to restore the original dimensions, with an additional dimension for the shared bond (the last dimension of Q, the first dimension of R). Thus the dimensions 0, 1, 2 of Q match the original dimensions 0, 3, 2, and dimensions 1, 2 of R the original dimensions 1, 4.
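The whole splitting step can be sketched in NumPy as follows. This is a minimal illustration of the procedure described above (matricize, QR-decompose, reshape back), with arbitrary example dimension sizes:

```python
import numpy as np

# QR splitting of a 5-leg tensor.  Row legs: 0, 3, 2 (in this order);
# column legs: 1, 4.  The sizes are arbitrary example values.
shape = (2, 3, 4, 5, 6)
T = np.random.rand(*shape)

# Matricize: permute legs to (0, 3, 2, 1, 4), then merge the groups.
row_dims = (shape[0], shape[3], shape[2])  # (2, 5, 4)
col_dims = (shape[1], shape[4])            # (3, 6)
M = T.transpose(0, 3, 2, 1, 4).reshape(np.prod(row_dims), np.prod(col_dims))

# Reduced QR decomposition of the matricized tensor.
Q, R = np.linalg.qr(M)

# Restore tensor form; the shared bond becomes the last leg of Q
# and the first leg of R.
bond = Q.shape[1]
Qt = Q.reshape(row_dims + (bond,))
Rt = R.reshape((bond,) + col_dims)

print(Qt.shape, Rt.shape)  # (2, 5, 4, 18) (18, 3, 6)

# Contracting the bond recovers the (permuted) original tensor.
print(np.allclose(np.einsum('abck,kde->abcde', Qt, Rt),
                  T.transpose(0, 3, 2, 1, 4)))  # True
```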