
How do I convert a torch tensor to numpy?


Copied from the PyTorch docs:

a = torch.ones(5)
print(a)

tensor([1., 1., 1., 1., 1.])

b = a.numpy()
print(b)

[1. 1. 1. 1. 1.]


Following on from the discussion with @John in the comments below:

In case the tensor is (or can be) on the GPU, or in case it requires (or can require) grad, one can use

t.detach().cpu().numpy()

I recommend uglifying your code only as much as required.
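As a small sketch of the full chain (guarding on CUDA availability, since the tensor may live on either device):

```python
import torch

# Use the GPU when present; the conversion chain works either way.
device = 'cuda' if torch.cuda.is_available() else 'cpu'
t = torch.ones(3, device=device, requires_grad=True)

# detach() drops the autograd graph, cpu() moves the data to host memory,
# and numpy() exposes the storage as an ndarray.
arr = t.detach().cpu().numpy()
print(arr)  # [1. 1. 1.]
```

Per the comment discussion, cpu() is a no-op when the tensor is already on the CPU, so the chain is safe for tensors of unknown provenance.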

  • In my copy of torch, better make that a.detach().cpu().numpy() – Lars Ericson, Aug 13, 2019
  • @LarsEricson why? – Gulzar, Jan 26, 2020
  • What would the complexity be to convert a tensor to NumPy like this? – Sid, Jan 4, 2021
  • @Sid I believe O(1) in most cases, but not always. See github.com/pytorch/pytorch/blob/master/torch/csrc/utils/… and numpy.org/devdocs/reference/c-api/… – Gulzar, Jan 4, 2021
  • This is true, although I believe both are no-ops if unnecessary, so the overkill is only in the typing, and there's some value if writing a function that accepts a Tensor of unknown provenance. I apologize for misunderstanding your original question to Lars. To summarize: detach and cpu are not necessary in every case, but are necessary in perhaps the most common case (so there's value in mentioning them); numpy is necessary in every case but is often insufficient on its own. Any future readers should reference the question linked above or the PyTorch documentation for more information. – John, Jan 26, 2021

You can try the following ways (assuming t is your tensor):

1. t.numpy()
2. t.cpu().data.numpy()
3. t.cpu().detach().numpy()

Another useful way:

a = torch.tensor(0.1, device='cuda')

a.cpu().data.numpy()

Output:

array(0.1, dtype=float32)

  • What is the benefit of including .data. ?
    – omsrisagar
    Commented Mar 8, 2022 at 1:26

This is a function from fastai core:

def to_np(x):
    "Convert a tensor to a numpy array."
    return apply(lambda o: o.data.cpu().numpy(), x)

Using a ready-made function from an established PyTorch library is a nice choice.

If you look inside PyTorch Transformers you will find this code:

preds = logits.detach().cpu().numpy()

So why is the detach() method needed? It is needed when we would like to detach the tensor from the autodiff (AD) computational graph.
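A minimal illustration of why detach() matters: calling numpy() directly on a tensor that requires grad raises an error.

```python
import torch

t = torch.ones(3, requires_grad=True)

try:
    t.numpy()  # not allowed while the tensor is part of the autograd graph
except RuntimeError as e:
    print('numpy() failed:', e)

# Detaching from the graph first works fine.
arr = t.detach().numpy()
print(arr)  # [1. 1. 1.]
```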

Note, however, that the CPU tensor and the numpy array are connected: they share the same underlying storage:

import torch
tensor = torch.zeros(2)
numpy_array = tensor.numpy()
print('Before edit:')
print(tensor)
print(numpy_array)

tensor[0] = 10

print()
print('After edit:')
print('Tensor:', tensor)
print('Numpy array:', numpy_array)

Output:

Before edit:
tensor([0., 0.])
[0. 0.]

After edit:
Tensor: tensor([10.,  0.])
Numpy array: [10.  0.]

The value of the first element is shared by the tensor and the numpy array. Changing it to 10 in the tensor changed it in the numpy array as well.

This is why we need to be careful, since altering the numpy array may alter the CPU tensor as well.
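If you need an independent array, make an explicit copy so later edits don't propagate (a small sketch):

```python
import torch

tensor = torch.zeros(2)
independent = tensor.numpy().copy()  # copy() breaks the shared storage

tensor[0] = 10
print(independent)  # [0. 0.] -- unaffected by the tensor edit
```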


You may find the following two functions useful.

  1. torch.Tensor.numpy()
  2. torch.from_numpy()
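A quick round trip between the two (note that torch.from_numpy shares memory with the source array, just as numpy() does with the tensor):

```python
import numpy as np
import torch

a = np.arange(3, dtype=np.float32)
t = torch.from_numpy(a)  # ndarray -> tensor (shares memory)
b = t.numpy()            # tensor -> ndarray (also shares memory)

a[0] = 7
print(t)  # tensor([7., 1., 2.])
print(b)  # [7. 1. 2.]
```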

Sometimes, if a gradient is attached to the tensor, you'll first have to call .detach() before calling .numpy().

loss = loss_fn(preds, labels)
print(loss.detach().numpy())
x = torch.tensor([0.1,0.32], device='cuda:0')

x.detach().cpu().data.numpy()
  • Code-only answers are not great. Please explain why you consider this code to answer the question. Commented May 10, 2023 at 7:27
