Module pearl.neural_networks.common.residual_wrapper

Source code
import torch
from torch import nn


class ResidualWrapper(nn.Module):
    """
    A wrapper block for residual networks. It wraps a single layer of the
    network and adds the layer's input to its output (a skip connection).

    Example:
        layers = []
        for layer in layer_generator:
            layers.append(ResidualWrapper(layer))
        model = torch.nn.Sequential(*layers)
    """

    def __init__(self, module: nn.Module) -> None:
        super().__init__()
        self.module = module

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.module(x)

Classes

class ResidualWrapper (module: torch.nn.modules.module.Module)

A wrapper block for residual networks. It wraps a single layer of the network and adds the layer's input to its output (a skip connection).

Example:

    layers = []
    for layer in layer_generator:
        layers.append(ResidualWrapper(layer))
    model = torch.nn.Sequential(*layers)

Initializes the wrapper with the module to wrap; at call time, the module's output is added to its input to form the skip connection.
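For concreteness, a minimal runnable sketch of the docstring example; the hidden size and the Linear/ReLU layers here are illustrative choices, not part of the module:

    import torch
    from torch import nn

    from pearl.neural_networks.common.residual_wrapper import ResidualWrapper

    # Each wrapped layer must map its input to an output of the same shape,
    # because forward() computes x + self.module(x).
    hidden_dim = 16  # illustrative
    layers = []
    for _ in range(3):
        layers.append(
            ResidualWrapper(nn.Sequential(nn.Linear(hidden_dim, hidden_dim), nn.ReLU()))
        )
    model = nn.Sequential(*layers)

    x = torch.randn(4, hidden_dim)
    out = model(x)
    assert out.shape == x.shape  # residual connections preserve shape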


Ancestors

  • torch.nn.modules.module.Module

Methods

def forward(self, x: torch.Tensor) -> torch.Tensor

Applies the wrapped module to x and returns x + self.module(x), i.e. the input plus the wrapped module's output.

Note

Call the ResidualWrapper instance itself rather than forward() directly; calling the instance runs any registered hooks, while calling forward() silently ignores them.
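Because forward returns x + self.module(x), the wrapped module's output must be broadcastable against its input; in practice the shapes should match. A minimal sketch of this constraint (the Linear dimensions are illustrative):

    import torch
    from torch import nn

    good = ResidualWrapper(nn.Linear(8, 8))  # output shape matches input shape
    x = torch.randn(2, 8)
    print(good(x).shape)  # torch.Size([2, 8])

    bad = ResidualWrapper(nn.Linear(8, 4))   # output shape differs from input
    try:
        bad(x)  # x + self.module(x) cannot broadcast (2, 8) with (2, 4)
    except RuntimeError as err:
        print("shape mismatch:", err)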
