PyTorch tensor dimension transformations

# view()                 reshape dimensions
# reshape()              reshape dimensions
# permute()              reorder axes
# squeeze()/unsqueeze()  remove/add size-1 dimensions
# expand()               expand a tensor
# narrow()               narrow a tensor
# resize_()              resize in place
# repeat(), unfold()     repeat a tensor / extract windows
# cat(), stack()         concatenate tensors

1. tensor.view(n1, n2, ..., ni)

The number of elements must be unchanged before and after the conversion. If one of the dimensions passed to view() is -1, its size is inferred automatically from the total element count and the sizes of the other dimensions. Note that at most one dimension passed to view() may be set to -1.

In convolutional neural networks, view is frequently used to flatten a tensor before the fully connected layers:

dst_t = src_t.view([src_t.size()[0], -1])

Suppose the input feature map is a 4-D tensor of shape B*C*H*W, where B is the batch size, C the number of channels, and H and W the feature height and width. Before the features are fed into the fully connected layer, .view converts them into a 2-D tensor of shape B*(C*H*W): the batch dimension is kept, and each feature map is flattened into a 1-D vector.

# tensor.view
import torch

tensor_01 = (torch.rand([2, 3, 4]) * 10).int()
print('\ntensor_01:\n', tensor_01, '\ntensor size:', tensor_01.size())
# Output:
# tensor_01:
#  tensor([[[1, 5, 2, 7],
#          [2, 0, 7, 1],
#          [0, 6, 7, 9]],
#
#         [[5, 1, 7, 2],
#          [9, 0, 8, 3],
#          [7, 3, 3, 5]]], dtype=torch.int32)
# tensor size: torch.Size([2, 3, 4])

# reshape tensor_01 into a 2*3*2*2 tensor
tensor_02 = tensor_01.view([2, 3, -1, 2])
print('\ntensor_02:\n', tensor_02, '\ntensor size:', tensor_02.size())
# Output:
# tensor_02:
#  tensor([[[[1, 5],
#           [2, 7]],
#
#          [[2, 0],
#           [7, 1]],
#
#          [[0, 6],
#           [7, 9]]],
#
#
#         [[[5, 1],
#           [7, 2]],
#
#          [[9, 0],
#           [8, 3]],
#
#          [[7, 3],
#           [3, 5]]]], dtype=torch.int32)
# tensor size: torch.Size([2, 3, 2, 2])

# reshape tensor_01 into a 2*12 tensor
tensor_03 = tensor_01.view([tensor_01.size()[0], -1])
print('\ntensor_03:\n', tensor_03, '\ntensor size:', tensor_03.size())
# Output:
# tensor_03:
#  tensor([[1, 5, 2, 7, 2, 0, 7, 1, 0, 6, 7, 9],
#         [5, 1, 7, 2, 9, 0, 8, 3, 7, 3, 3, 5]], dtype=torch.int32)
# tensor size: torch.Size([2, 12])
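
For the same flatten-before-the-classifier pattern, newer code often uses torch.flatten instead of view; a minimal sketch (the 8*16*4*4 shape is just an illustrative assumption):

import torch

x = torch.rand([8, 16, 4, 4])         # a hypothetical B*C*H*W feature batch
flat = torch.flatten(x, start_dim=1)  # keep dim 0 (batch), flatten the rest
print(flat.size())                    # torch.Size([8, 256])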

2. tensor.reshape(n1, n2, ..., ni)

Used in the same way as view(). One practical difference: view() requires the tensor to be contiguous in memory, while reshape() also accepts non-contiguous tensors and returns a copy when necessary (see the sketch after the example below).

tensor_04 = tensor_01.reshape([tensor_01.size()[0], -1])
print('\ntensor_04:\n', tensor_04, '\ntensor size:', tensor_04.size())
# Output:
# tensor_04:
#  tensor([[1, 5, 2, 7, 2, 0, 7, 1, 0, 6, 7, 9],
#         [5, 1, 7, 2, 9, 0, 8, 3, 7, 3, 3, 5]], dtype=torch.int32)
# tensor size: torch.Size([2, 12])
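
A minimal sketch of the contiguity difference mentioned above: after a transpose the tensor is no longer contiguous in memory, so view() fails while reshape() still works:

import torch

t = torch.arange(6).reshape(2, 3)
t_t = t.t()                      # transpose -> a non-contiguous view
# t_t.view(6)                    # would raise a RuntimeError here
print(t_t.reshape(6))            # works: reshape copies when it must
print(t_t.contiguous().view(6))  # equivalent: make contiguous, then view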

3. tensor.squeeze() and tensor.unsqueeze()

1. tensor.squeeze(): removing dimensions

(1) If squeeze() is called without an argument, every dimension of size 1 is removed; e.g., a 2*1*3*1 tensor is reduced to 2*3. If no dimension has size 1, the shape is left unchanged; e.g., squeezing a 2*3*4 tensor leaves it at 2*3*4.

(2) If squeeze(idx) is called, only dimension idx is removed, and only if its size is 1; e.g., squeeze(1) on a 2*1*3*1 tensor yields 2*3*1. If dimension idx does not have size 1, the shape is unchanged.

2. tensor.unsqueeze(idx): adding a dimension

Inserts a new dimension of size 1 at position idx, raising the tensor from n to n+1 dimensions. For example, a 2*3 tensor becomes 2*1*3 after unsqueeze(1).

# tensor.squeeze/unsqueeze
tensor_01 = torch.arange(1, 19).reshape(1, 2, 1, 9)
print('\ntensor_01:\n', tensor_01, '\ntensor size:', tensor_01.size())
# Output:
# tensor_01:
#  tensor([[[[ 1,  2,  3,  4,  5,  6,  7,  8,  9]],
#
#          [[10, 11, 12, 13, 14, 15, 16, 17, 18]]]])
# tensor size: torch.Size([1, 2, 1, 9])

tensor_02 = tensor_01.squeeze(0)
print('\ntensor_02:\n', tensor_02, '\ntensor size:', tensor_02.size())
# Output:
# tensor_02:
#  tensor([[[ 1,  2,  3,  4,  5,  6,  7,  8,  9]],
#
#         [[10, 11, 12, 13, 14, 15, 16, 17, 18]]])
# tensor size: torch.Size([2, 1, 9])

tensor_03 = tensor_01.squeeze(1)  # dim 1 has size 2, so nothing changes
print('\ntensor_03:\n', tensor_03, '\ntensor size:', tensor_03.size())
# Output:
# tensor_03:
#  tensor([[[[ 1,  2,  3,  4,  5,  6,  7,  8,  9]],
#
#          [[10, 11, 12, 13, 14, 15, 16, 17, 18]]]])
# tensor size: torch.Size([1, 2, 1, 9])

tensor_04 = tensor_01.squeeze()   # remove every size-1 dimension
print('\ntensor_04:\n', tensor_04, '\ntensor size:', tensor_04.size())
# Output:
# tensor_04:
#  tensor([[ 1,  2,  3,  4,  5,  6,  7,  8,  9],
#         [10, 11, 12, 13, 14, 15, 16, 17, 18]])
# tensor size: torch.Size([2, 9])

tensor_05 = tensor_04.view([2, 3, -1]).unsqueeze(2)
print('\ntensor_05:\n', tensor_05, '\ntensor size:', tensor_05.size())
# Output:
# tensor_05:
#  tensor([[[[ 1,  2,  3]],
#
#          [[ 4,  5,  6]],
#
#          [[ 7,  8,  9]]],
#
#
#         [[[10, 11, 12]],
#
#          [[13, 14, 15]],
#
#          [[16, 17, 18]]]])
# tensor size: torch.Size([2, 3, 1, 3])
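
A common practical use of unsqueeze is adding a batch dimension to a single sample before feeding it to a model; a minimal sketch (the 3*32*32 image shape is an assumption for illustration):

import torch

img = torch.rand(3, 32, 32)  # one C*H*W image
batch = img.unsqueeze(0)     # -> 1*3*32*32, a batch of size 1
print(batch.size())          # torch.Size([1, 3, 32, 32])
img2 = batch.squeeze(0)      # drop the batch dimension again
print(img2.size())           # torch.Size([3, 32, 32])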

4. tensor.permute()

An axis reordering, i.e., a generalized matrix transpose; it is used like numpy's transpose. The numbers passed to permute() are the indices of the dimensions. permute is a frequently used trick in deep learning: a BCHW feature tensor is commonly converted to BHWC, i.e., the channel dimension is moved to the end, by calling tensor.permute(0, 2, 3, 1).

Although permute and view/reshape can all produce a tensor of a given shape, they work in completely different ways, so take care to distinguish them. view and reshape never change the order of the elements, whereas permute rearranges them, because the axes themselves are reordered.

# tensor.permute
tensor_01 = (torch.rand([2, 3, 2, 4]) * 10).int()
print('\ntensor_01:\n', tensor_01, '\ntensor size:', tensor_01.size())
# Output:
# tensor_01:
#  tensor([[[[3, 4, 6, 2],
#           [1, 9, 8, 0]],
#
#          [[4, 9, 4, 2],
#           [4, 2, 1, 0]],
#
#          [[7, 2, 9, 5],
#           [5, 1, 9, 2]]],
#
#
#         [[[6, 0, 8, 8],
#           [3, 4, 8, 1]],
#
#          [[7, 6, 4, 5],
#           [1, 4, 9, 7]],
#
#          [[5, 7, 9, 8],
#           [6, 5, 2, 4]]]], dtype=torch.int32)
# tensor size: torch.Size([2, 3, 2, 4])

tensor_02 = tensor_01.permute([0, 2, 3, 1])  # BCHW -> BHWC, i.e. [2, 2, 4, 3]
print('\ntensor_02:\n', tensor_02, '\ntensor size:', tensor_02.size())
# Output:
# tensor_02:
#  tensor([[[[3, 4, 7],
#           [4, 9, 2],
#           [6, 4, 9],
#           [2, 2, 5]],
#
#          [[1, 4, 5],
#           [9, 2, 1],
#           [8, 1, 9],
#           [0, 0, 2]]],
#
#
#         [[[6, 7, 5],
#           [0, 6, 7],
#           [8, 4, 9],
#           [8, 5, 8]],
#
#          [[3, 1, 6],
#           [4, 4, 5],
#           [8, 9, 2],
#           [1, 7, 4]]]], dtype=torch.int32)
# tensor size: torch.Size([2, 2, 4, 3])
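
A minimal sketch of the element-order point above: flattening after view preserves the original order, while flattening after permute does not. Note that permute returns a non-contiguous view, so .contiguous() is needed before view can be applied to it:

import torch

t = torch.arange(6).reshape(2, 3)             # tensor([[0, 1, 2], [3, 4, 5]])
print(t.view(-1))                             # tensor([0, 1, 2, 3, 4, 5])
print(t.permute(1, 0).contiguous().view(-1))  # tensor([0, 3, 1, 4, 2, 5])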

5. torch.cat((a, b), dim)

Concatenates tensors along dimension dim; note that all other dimensions must match.

Suppose a is an h1*w1 2-D tensor and b is an h2*w2 2-D tensor. torch.cat((a, b), 0) concatenates along the first dimension, i.e., stacks the tensors vertically (adding rows), so w1 must equal w2. torch.cat((a, b), 1) concatenates along the second dimension, i.e., side by side (adding columns), so h1 must equal h2.

Suppose a is a c1*h1*w1 3-D tensor and b is a c2*h2*w2 3-D tensor. torch.cat((a, b), 0) concatenates along the first dimension, i.e., the channel dimension, so the other dimensions must match (w1 = w2, h1 = h2). torch.cat((a, b), 1) concatenates along the second dimension (height), which requires w1 = w2 and c1 = c2; torch.cat((a, b), 2) concatenates along the third dimension (width), which requires h1 = h2 and c1 = c2.

# torch.cat
tensor_01 = (torch.randn(2, 3) * 10).int()
print('\ntensor_01:\n', tensor_01, '\ntensor size:', tensor_01.size())
# Output:
# tensor_01:
#  tensor([[ 1, -8, -2],
#         [ 2, 10,  3]], dtype=torch.int32)
# tensor size: torch.Size([2, 3])

# concatenate along dim 0 (append rows)
tensor_02 = torch.cat((tensor_01, torch.IntTensor([[0, 0, 0], [0, 0, 0], [0, 0, 0]])), 0)
print('\ntensor_02:\n', tensor_02, '\ntensor size:', tensor_02.size())
# Output:
# tensor_02:
#  tensor([[ 1, -8, -2],
#         [ 2, 10,  3],
#         [ 0,  0,  0],
#         [ 0,  0,  0],
#         [ 0,  0,  0]], dtype=torch.int32)
# tensor size: torch.Size([5, 3])

# concatenate along dim 1 (append columns)
tensor_03 = torch.cat((tensor_01, torch.IntTensor([[0, 0], [0, 0]])), 1)
print('\ntensor_03:\n', tensor_03, '\ntensor size:', tensor_03.size())
# Output:
# tensor_03:
#  tensor([[ 1, -8, -2,  0,  0],
#         [ 2, 10,  3,  0,  0]], dtype=torch.int32)
# tensor size: torch.Size([2, 5])
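
stack() is listed in the overview at the top but not demonstrated above. Unlike cat, which joins tensors along an existing dimension, stack creates a new dimension; a minimal sketch:

import torch

a = torch.IntTensor([[1, 2, 3], [4, 5, 6]])  # 2*3
b = torch.zeros(2, 3, dtype=torch.int32)     # 2*3

print(torch.cat((a, b), 0).size())    # torch.Size([4, 3]): existing dim grows
print(torch.stack((a, b), 0).size())  # torch.Size([2, 2, 3]): new leading dim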

6. tensor.expand()

Expands a tensor. Taking a 2-D tensor as an example:

If tensor is 1*n or n*1, calling tensor.expand(s, n) or tensor.expand(n, s) expands it along the row or column direction, respectively. Only dimensions of size 1 can be expanded, and no data is copied: the result is a broadcast view of the original (see the sketch after the example below).

# tensor.expand
tensor_01 = torch.IntTensor([[1, 2, 3]])
print('\ntensor_01:\n', tensor_01, '\ntensor size:', tensor_01.size())
# Output:
# tensor_01:
#  tensor([[1, 2, 3]], dtype=torch.int32)
# tensor size: torch.Size([1, 3])

tensor_02 = tensor_01.expand([2, 3])
print('\ntensor_02:\n', tensor_02, '\ntensor size:', tensor_02.size())
# Output:
# tensor_02:
#  tensor([[1, 2, 3],
#         [1, 2, 3]], dtype=torch.int32)
# tensor size: torch.Size([2, 3])
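
A minimal sketch showing that expand returns a broadcast view rather than a copy, and that only size-1 dimensions can be expanded:

import torch

t = torch.IntTensor([[1, 2, 3]])     # 1*3
e = t.expand(2, 3)                   # a view: no data is copied
print(t.data_ptr() == e.data_ptr())  # True: same underlying storage
# t.expand(2, 4)                     # would raise: a size-3 dim cannot expand to 4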

7. tensor.narrow(dim, start, len)

Narrows a tensor: along dimension dim, takes a slice of length len starting at position start.

# tensor.narrow
tensor_01 = torch.IntTensor([[1, 2, 1, 3], [3, 2, 3, 4]])
print('\ntensor_01:\n', tensor_01, '\ntensor size:', tensor_01.size())
# Output:
# tensor_01:
#  tensor([[1, 2, 1, 3],
#         [3, 2, 3, 4]], dtype=torch.int32)
# tensor size: torch.Size([2, 4])

tensor_02 = tensor_01.narrow(1, 1, 2)
print('\ntensor_02:\n', tensor_02, '\ntensor size:', tensor_02.size())
# Output:
# tensor_02:
#  tensor([[2, 1],
#         [2, 3]], dtype=torch.int32)
# tensor size: torch.Size([2, 2])
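
narrow also returns a view of the original tensor, so writing through the slice modifies the source; a minimal sketch:

import torch

t = torch.IntTensor([[1, 2, 1, 3], [3, 2, 3, 4]])
s = t.narrow(1, 1, 2)  # columns 1..2, a view into t
s[0, 0] = 99           # write through the view
print(t)               # t[0, 1] is now 99: the original changed too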

8. tensor.resize_()

Resizes the tensor in place (note the trailing underscore), truncating it to the new shape.

# tensor.resize_
tensor_01 = torch.IntTensor([[1, 2, 1], [3, 2, 3]])
print('\ntensor_01:\n', tensor_01, '\ntensor size:', tensor_01.size())
# Output:
# tensor_01:
#  tensor([[1, 2, 1],
#         [3, 2, 3]], dtype=torch.int32)
# tensor size: torch.Size([2, 3])

tensor_02 = tensor_01.resize_(2, 2)
print('\ntensor_02:\n', tensor_02, '\ntensor size:', tensor_02.size())
# Output:
# tensor_02:
#  tensor([[1, 2],
#         [1, 3]], dtype=torch.int32)
# tensor size: torch.Size([2, 2])
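
resize_ can also grow a tensor, in which case the new elements are uninitialized memory rather than zeros; a minimal sketch:

import torch

t = torch.IntTensor([[1, 2], [3, 4]])
t.resize_(3, 3)  # in place: the first 4 values are kept in memory order,
print(t)         # the remaining 5 entries hold arbitrary uninitialized values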

9. tensor.repeat()

tensor.repeat(a, b) tiles the whole tensor a times along the row direction and b times along the column direction.

# tensor.repeat
tensor_01 = torch.IntTensor([[1, 2, 1], [3, 2, 3]])
print('\ntensor_01:\n', tensor_01, '\ntensor size:', tensor_01.size())
# Output:
# tensor_01:
#  tensor([[1, 2, 1],
#         [3, 2, 3]], dtype=torch.int32)
# tensor size: torch.Size([2, 3])

tensor_02 = tensor_01.repeat([2, 3])
print('\ntensor_02:\n', tensor_02, '\ntensor size:', tensor_02.size())
# Output:
# tensor_02:
#  tensor([[1, 2, 1, 1, 2, 1, 1, 2, 1],
#         [3, 2, 3, 3, 2, 3, 3, 2, 3],
#         [1, 2, 1, 1, 2, 1, 1, 2, 1],
#         [3, 2, 3, 3, 2, 3, 3, 2, 3]], dtype=torch.int32)
# tensor size: torch.Size([4, 9])
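
Unlike expand, repeat physically copies the data, and it works on dimensions of any size, not only size 1; a minimal comparison sketch:

import torch

t = torch.IntTensor([[1, 2, 3]])     # 1*3
r = t.repeat(2, 1)                   # 2*3, data copied
e = t.expand(2, 3)                   # 2*3, view over the same storage
print(t.data_ptr() == r.data_ptr())  # False: repeat allocated new memory
print(t.data_ptr() == e.data_ptr())  # True: expand did not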

10. tensor.unfold(dim, size, step)

Extracts sliding windows along dimension dim: each window has length size, and consecutive windows start step elements apart. The windows become the last dimension of the result.

# tensor.unfold
tensor_01 = torch.IntTensor([[1, 2, 1], [3, 2, 3]])
print('\ntensor_01:\n', tensor_01, '\ntensor size:', tensor_01.size())
# Output:
# tensor_01:
#  tensor([[1, 2, 1],
#         [3, 2, 3]], dtype=torch.int32)
# tensor size: torch.Size([2, 3])

tensor_02 = tensor_01.unfold(1, 2, 2)  # size-2 windows, step 2, along dim 1
print('\ntensor_02:\n', tensor_02, '\ntensor size:', tensor_02.size())
# Output:
# tensor_02:
#  tensor([[[1, 2]],
#
#         [[3, 2]]], dtype=torch.int32)
# tensor size: torch.Size([2, 1, 2])
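
With step smaller than size the windows overlap, which is the basis of sliding-window operations; a minimal sketch:

import torch

t = torch.arange(5)    # tensor([0, 1, 2, 3, 4])
w = t.unfold(0, 2, 1)  # size-2 windows, step 1 -> overlapping
print(w)
# tensor([[0, 1],
#         [1, 2],
#         [2, 3],
#         [3, 4]])
print(w.size())        # torch.Size([4, 2])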


