# How to compute this tensor contraction in numpy?

I have a tensor `A` in numpy which is `N1 x ... x Nn x M1 x ... x Mm` and a tensor `B` which is `M1 x ... x Mm`. How do I compute the tensor contraction `C` of `A` and `B`, which should be `N1 x ... x Nn`? I tried various permutations of

```
np.tensordot(A, B, ...)
```

but I'm not really familiar with how its `axes` argument works.

For example, if `A` were an `N x M` matrix and `B` an `M`-vector, I could just do `np.dot(A, B)`, but I'm not sure how to generalize this.

```
In [78]: A=np.arange(2*3*4*5).reshape(2,3,4,5)
In [79]: B=np.arange(4*5).reshape(4,5)

In [81]: np.einsum('...ij,ij',A,B)
Out[81]:
array([[ 2470,  6270, 10070],
       [13870, 17670, 21470]])

In [82]: np.tensordot(A,B,((2,3),(0,1)))
Out[82]:
array([[ 2470,  6270, 10070],
       [13870, 17670, 21470]])
```
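Since the contracted axes of `A` are always its trailing `B.ndim` axes, `tensordot` also accepts a plain integer for `axes`: it contracts the last `axes` dimensions of the first argument against the first `axes` dimensions of the second. A minimal sketch of the fully general call:

```python
import numpy as np

A = np.arange(2*3*4*5).reshape(2, 3, 4, 5)
B = np.arange(4*5).reshape(4, 5)

# axes=B.ndim contracts A's last B.ndim axes with all of B's axes,
# so this works for any n and m without spelling the axes out.
C = np.tensordot(A, B, axes=B.ndim)
```

This gives the same `(2, 3)` result as the explicit `((2,3),(0,1))` form above.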

`tensordot` uses reshape (and axis swapping) to reduce the problem to a 2d one that `dot` can handle:

```
In [83]: A1=A.reshape(2*3,4*5)
In [84]: B1=B.reshape(4*5)
In [85]: C1=np.dot(A1,B1)
In [86]: C1.reshape(2,3)
Out[86]:
array([[ 2470,  6270, 10070],
       [13870, 17670, 21470]])
```
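The reshape trick generalizes to arbitrary `n` and `m`. A sketch, assuming `A`'s trailing `B.ndim` axes match `B`'s shape exactly:

```python
import numpy as np

A = np.arange(2*3*4*5).reshape(2, 3, 4, 5)
B = np.arange(4*5).reshape(4, 5)

n = A.ndim - B.ndim  # number of leading N dimensions
# Flatten the M dimensions on both sides, do one matrix-vector dot,
# then restore the N dimensions.
C = np.dot(A.reshape(-1, B.size), B.ravel()).reshape(A.shape[:n])
```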

If the overall dimensions, and hence the array sizes, are too large, `einsum` can run into memory problems. So can `tensordot`, for that matter.

While `...` can handle the variable number of `N` dimensions, we have to be specific about the `M` dimensions. (We could construct an `ij` string programmatically.)
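For instance, such a string can be built from `B.ndim` (a sketch; using one lowercase letter per `M` dimension limits `B` to 26 dimensions):

```python
import string
import numpy as np

A = np.arange(2*3*4*5).reshape(2, 3, 4, 5)
B = np.arange(4*5).reshape(4, 5)

# One subscript letter per M dimension: 'ab' when B.ndim == 2,
# giving the string '...ab,ab'.
subs = string.ascii_lowercase[:B.ndim]
C = np.einsum('...' + subs + ',' + subs, A, B)
```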