How to upsample given multi-channel temporal, spatial, or volumetric data in PyTorch?


Temporal data can be represented as a 1D tensor, spatial data as a 2D tensor, and volumetric data as a 3D tensor. The **Upsample** class provided by the torch.nn module supports **upsampling** these kinds of data. The data, however, must be in the form **N × C × D (optional) × H (optional) × W (optional)**, where **N** is the minibatch size, **C** is the number of channels, and **D, H** and **W** are the depth, height and width of the data, respectively. Hence, to upsample temporal data (1D), it has to be viewed as a 3D tensor of the form **N × C × W**; spatial data (2D) has to be viewed as a 4D tensor of the form **N × C × H × W**; and volumetric data (3D) has to be viewed as a 5D tensor of the form **N × C × D × H × W**.
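For instance, a minimal sketch (the tensors and shapes below are purely illustrative) of viewing raw 1D, 2D and 3D data in the required batched forms could look like this −

import torch

# 1D (temporal) signal of length 8 -> N x C x W
signal = torch.arange(8.)                     # shape (8,)
signal_3d = signal.view(1, 1, 8)              # shape (1, 1, 8)

# 2D (spatial) image of size 4 x 4 -> N x C x H x W
image = torch.rand(4, 4)                      # shape (4, 4)
image_4d = image.unsqueeze(0).unsqueeze(0)    # shape (1, 1, 4, 4)

# 3D (volumetric) data of size 2 x 4 x 4 -> N x C x D x H x W
volume = torch.rand(2, 4, 4)                  # shape (2, 4, 4)
volume_5d = volume.view(1, 1, 2, 4, 4)        # shape (1, 1, 2, 4, 4)

print(signal_3d.size(), image_4d.size(), volume_5d.size())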

It supports different scale factors and modes. On a **3D (temporal)** tensor, we can apply **mode='linear'** and **'nearest'**. On a **4D (spatial)** tensor, we can apply **mode='nearest', 'bilinear'** and **'bicubic'**. On a **5D (volumetric)** tensor, we can apply **mode='nearest'** and **'trilinear'**.
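A minimal sketch (the dummy tensors are illustrative) applying one interpolating mode to each dimensionality −

import torch

temporal   = torch.rand(1, 2, 4)         # N x C x W
spatial    = torch.rand(1, 3, 4, 4)      # N x C x H x W
volumetric = torch.rand(1, 1, 2, 4, 4)   # N x C x D x H x W

# each interpolating mode only accepts inputs of the matching dimensionality;
# 'nearest' works for 3D, 4D and 5D inputs alike
out_1d = torch.nn.Upsample(scale_factor=2, mode='linear')(temporal)
out_2d = torch.nn.Upsample(scale_factor=2, mode='bilinear')(spatial)
out_3d = torch.nn.Upsample(scale_factor=2, mode='trilinear')(volumetric)

print(out_1d.size())   # torch.Size([1, 2, 8])
print(out_2d.size())   # torch.Size([1, 3, 8, 8])
print(out_3d.size())   # torch.Size([1, 1, 4, 8, 8])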

Syntax

torch.nn.Upsample()
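The main constructor parameters in recent PyTorch releases are **size**, **scale_factor**, **mode** and **align_corners**; either **size** or **scale_factor** should be given −

torch.nn.Upsample(
   size=None,           # exact output size (D, H, W as applicable)
   scale_factor=None,   # multiplier for the spatial size
   mode='nearest',      # 'nearest', 'linear', 'bilinear', 'bicubic' or 'trilinear'
   align_corners=None   # only used by the linear/bilinear/bicubic/trilinear modes
)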

Steps

You could use the following steps to upsample temporal, spatial or volumetric data −

  • Import the required library. In all the following examples, the required Python library is **torch**. Make sure you have already installed it.

import torch
  • Define a temporal (3D), spatial (4D) or volumetric (5D) tensor and print it.

input = torch.tensor([[1., 2.],[3., 4.]]).view(1,2,2)
print(input.size())
print("Input Tensor:
", input)
  • Create an instance of **Upsample** with **scale_factor** and **mode** to upsample the given multi-channel data.

upsample = torch.nn.Upsample(scale_factor=3, mode='nearest')
  • Upsample the temporal, spatial or volumetric tensor defined above using the created instance.

output = upsample(input)
  • Print the upsampled tensor.

print("Upsample by a scale_factor=3 with mode='nearest':
",output)

Example 1

In this program, we upsample **temporal** data using different values of **scale_factor** and **mode**.

# Python program to upsample a 3D (Temporal) tensor
# on a 3D (Temporal) tensor we can apply mode='linear' and 'nearest'
import torch

# define a tensor and view as a 3D tensor
input = torch.tensor([[1., 2.],[3., 4.]]).view(1,2,2)
print(input.size())
print("Input Tensor:
", input) # create an instance of Upsample with scale_factor and mode upsample1 = torch.nn.Upsample(scale_factor=2) output1 = upsample1(input) print("Upsample by a scale_factor=2
", output1) # define upsample with scale_factor and mode upsample2 = torch.nn.Upsample(scale_factor=3) output2 = upsample2(input) print("Upsample by a scale_factor=3 with default mode:
", output2) upsample2 = torch.nn.Upsample(scale_factor=3, mode='nearest') output2 = upsample2(input) print("Upsample by a scale_factor=3 mode='nearest':
", output2) upsample_linear = torch.nn.Upsample(scale_factor=3, mode='linear') output_linear = upsample_linear(input) print("Upsample by a scale_factor=3, mode='linear':
", output_linear)

Output

torch.Size([1, 2, 2])
Input Tensor:
   tensor([[[1., 2.],[3., 4.]]])
Upsample by a scale_factor=2
   tensor([[[1., 1., 2., 2.],[3., 3., 4., 4.]]])
Upsample by a scale_factor=3 with default mode:
   tensor([[[1., 1., 1., 2., 2., 2.],[3., 3., 3., 4., 4., 4.]]])
Upsample by a scale_factor=3 mode='nearest':
   tensor([[[1., 1., 1., 2., 2., 2.],[3., 3., 3., 4., 4., 4.]]])
Upsample by a scale_factor=3, mode='linear':
   tensor([[[1.0000, 1.0000, 1.3333, 1.6667, 2.0000, 2.0000],[3.0000, 3.0000, 3.3333, 3.6667, 4.0000, 4.0000]]])

Notice the difference between the output tensors with the different **scale_factor** and **mode** values.
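The interpolating modes ('linear', 'bilinear', 'bicubic', 'trilinear') also accept an **align_corners** argument. With align_corners=True the corner input values are aligned exactly with the corner output values, which changes the result. A small sketch for the same input (the values in the comment are the expected output) −

import torch

input = torch.tensor([[1., 2.],[3., 4.]]).view(1, 2, 2)

# linear upsampling as above, but with align_corners=True
upsample_ac = torch.nn.Upsample(scale_factor=3, mode='linear', align_corners=True)
print(upsample_ac(input))
# tensor([[[1.0000, 1.2000, 1.4000, 1.6000, 1.8000, 2.0000],
#          [3.0000, 3.2000, 3.4000, 3.6000, 3.8000, 4.0000]]])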

Example 2

In the following Python program, we upsample a **4D (spatial)** tensor using different values of **scale_factor** and **mode**.

# Python program to upsample a 4D (Spatial) tensor
# on 4D(Spatial) tensor we can apply mode='nearest', 'bilinear' and 'bicubic'

import torch

# define a tensor and view as a 4D tensor
input = torch.tensor([[1., 2.],[3., 4.]]).view(1,1,2,2)
print(input.size())
print("Input Tensor:
", input) # upsample using mode='nearest' upsample_nearest = torch.nn.Upsample(scale_factor=3, mode='nearest') output_nearest = upsample_nearest(input) # upsample using mode='bilinear' upsample_bilinear = torch.nn.Upsample(scale_factor=3, mode='bilinear') output_bilinear = upsample_bilinear(input) # upsample using mode='bicubic' upsample_bicubic = torch.nn.Upsample(scale_factor=3, mode='bicubic') output_bicubic = upsample_bicubic(input) # display the outputs print("Upsample by a scale_factor=3, mode='nearest':
", output_nearest) print("Upsample by a scale_factor=3, mode='bilinear':
", output_bilinear) print("Upsample by a scale_factor=3, mode='bicubic':
", output_bicubic)

Output

torch.Size([1, 1, 2, 2])
Input Tensor:
   tensor([[[[1., 2.],[3., 4.]]]])
Upsample by a scale_factor=3, mode='nearest':
   tensor([[[[1., 1., 1., 2., 2., 2.],
      [1., 1., 1., 2., 2., 2.],
      [1., 1., 1., 2., 2., 2.],
      [3., 3., 3., 4., 4., 4.],
      [3., 3., 3., 4., 4., 4.],
      [3., 3., 3., 4., 4., 4.]]]])
Upsample by a scale_factor=3, mode='bilinear':
   tensor([[[[1.0000, 1.0000, 1.3333, 1.6667, 2.0000, 2.0000],
      [1.0000, 1.0000, 1.3333, 1.6667, 2.0000, 2.0000],
      [1.6667, 1.6667, 2.0000, 2.3333, 2.6667, 2.6667],
      [2.3333, 2.3333, 2.6667, 3.0000, 3.3333, 3.3333],
      [3.0000, 3.0000, 3.3333, 3.6667, 4.0000, 4.0000],
      [3.0000, 3.0000, 3.3333, 3.6667, 4.0000, 4.0000]]]])
Upsample by a scale_factor=3, mode='bicubic':
   tensor([[[[0.6667, 0.7778, 1.0926, 1.4630, 1.7778, 1.8889],
      [0.8889, 1.0000, 1.3148, 1.6852, 2.0000, 2.1111],
      [1.5185, 1.6296, 1.9444, 2.3148, 2.6296, 2.7407],
      [2.2593, 2.3704, 2.6852, 3.0556, 3.3704, 3.4815],
      [2.8889, 3.0000, 3.3148, 3.6852, 4.0000, 4.1111],
      [3.1111, 3.2222, 3.5370, 3.9074, 4.2222, 4.3333]]]])

Notice the difference between the output tensors with the different **scale_factor** and **mode** values.
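Also notice that the bicubic output contains values outside the input's range (for example 0.6667 and 4.3333), because bicubic interpolation can overshoot. If the upsampled data must stay within a known range, a sketch like the following (the bounds 1. and 4. are simply this input's minimum and maximum) clamps the result −

import torch

input = torch.tensor([[1., 2.],[3., 4.]]).view(1, 1, 2, 2)
upsample_bicubic = torch.nn.Upsample(scale_factor=3, mode='bicubic')
output_bicubic = upsample_bicubic(input)

# clamp back into the input's value range to remove the overshoot
clipped = output_bicubic.clamp(min=1., max=4.)
print(clipped.min().item(), clipped.max().item())   # 1.0 4.0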

Example 3

In this program, we upsample a 5D (volumetric) tensor using different values of **scale_factor** and **mode**.

# Python program to upsample a 5D (Volumetric) tensor
# on 5D (Volumetric) tensor we can apply mode='nearest' and 'trilinear'

import torch

# define a tensor and view as a 5D tensor
input = torch.tensor([[1., 2.],[3., 4.]]).view(1,1,1,2,2)
print(input.size())
print("Input Tensor:
", input) # use mode='nearest', factor=2 upsample_nearest = torch.nn.Upsample(scale_factor=2, mode='nearest') output_nearest = upsample_nearest(input) print("Upsample by a scale_factor=2, mode='nearest'
", output_nearest) # use mode='nearest', factor=3 upsample_nearest = torch.nn.Upsample(scale_factor=3, mode='nearest') output_nearest = upsample_nearest(input) print("Upsample by a scale_factor=3, mode='nearest'
", output_nearest) # use mode='trilinear' upsample_trilinear = torch.nn.Upsample(scale_factor=2, mode='trilinear') output_trilinear = upsample_trilinear(input) print("Upsample by a scale_factor=2, mode='trilinear':
", output_trilinear)

Output

torch.Size([1, 1, 1, 2, 2])
Input Tensor:
   tensor([[[[[1., 2.],[3., 4.]]]]])
Upsample by a scale_factor=2, mode='nearest'
   tensor([[[[[1., 1., 2., 2.],
      [1., 1., 2., 2.],
      [3., 3., 4., 4.],
      [3., 3., 4., 4.]],
      [[1., 1., 2., 2.],
      [1., 1., 2., 2.],
      [3., 3., 4., 4.],
      [3., 3., 4., 4.]]]]])
Upsample by a scale_factor=3, mode='nearest'
   tensor([[[[[1., 1., 1., 2., 2., 2.],
      [1., 1., 1., 2., 2., 2.],
      [1., 1., 1., 2., 2., 2.],
      [3., 3., 3., 4., 4., 4.],
      [3., 3., 3., 4., 4., 4.],
      [3., 3., 3., 4., 4., 4.]],
      [[1., 1., 1., 2., 2., 2.],
      [1., 1., 1., 2., 2., 2.],
      [1., 1., 1., 2., 2., 2.],
      [3., 3., 3., 4., 4., 4.],
      [3., 3., 3., 4., 4., 4.],
      [3., 3., 3., 4., 4., 4.]],
      [[1., 1., 1., 2., 2., 2.],
      [1., 1., 1., 2., 2., 2.],
      [1., 1., 1., 2., 2., 2.],
      [3., 3., 3., 4., 4., 4.],
      [3., 3., 3., 4., 4., 4.],
      [3., 3., 3., 4., 4., 4.]]]]])
Upsample by a scale_factor=2, mode='trilinear':
   tensor([[[[[1.0000, 1.2500, 1.7500, 2.0000],
      [1.5000, 1.7500, 2.2500, 2.5000],
      [2.5000, 2.7500, 3.2500, 3.5000],
      [3.0000, 3.2500, 3.7500, 4.0000]],
      [[1.0000, 1.2500, 1.7500, 2.0000],
      [1.5000, 1.7500, 2.2500, 2.5000],
      [2.5000, 2.7500, 3.2500, 3.5000],
      [3.0000, 3.2500, 3.7500, 4.0000]]]]])
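Finally, instead of a **scale_factor** you can request an exact output size with the **size** argument, as in this small sketch (the target size (5, 7) is arbitrary) −

import torch

input = torch.tensor([[1., 2.],[3., 4.]]).view(1, 1, 2, 2)

# upsample the 2 x 2 input to an exact 5 x 7 output
upsample_to_size = torch.nn.Upsample(size=(5, 7), mode='bilinear')
print(upsample_to_size(input).size())   # torch.Size([1, 1, 5, 7])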
