Title: Deep Learning with 'mlr3'
Version: 0.3.0
Description: Deep Learning library that extends the mlr3 framework by building upon the 'torch' package. It allows you to conveniently build, train, and evaluate deep learning models without having to worry about low-level details. Custom architectures can be created using the graph language defined in 'mlr3pipelines'.
License: LGPL (≥ 3)
BugReports: https://github.com/mlr-org/mlr3torch/issues
URL: https://mlr3torch.mlr-org.com/, https://github.com/mlr-org/mlr3torch/
Depends: mlr3 (≥ 1.0.1), mlr3pipelines (≥ 0.6.0), torch (≥ 0.15.0), R (≥ 3.5.0)
Imports: backports, checkmate (≥ 2.2.0), data.table, lgr, methods, mlr3misc (≥ 0.14.0), paradox (≥ 1.0.0), R6, withr
Suggests: callr, curl, future, ggplot2, igraph, jsonlite, knitr, mlr3tuning (≥ 1.0.0), progress, rmarkdown, rpart, viridis, visNetwork, testthat (≥ 3.0.0), tfevents, torchvision (≥ 0.6.0), waldo
Config/testthat/edition: 3
NeedsCompilation: no
ByteCompile: yes
Encoding: UTF-8
RoxygenNote: 7.3.2
Collate: 'CallbackSet.R' 'aaa.R' 'TorchCallback.R' 'CallbackSetCheckpoint.R' 'CallbackSetEarlyStopping.R' 'CallbackSetHistory.R' 'CallbackSetLRScheduler.R' 'CallbackSetProgress.R' 'CallbackSetTB.R' 'CallbackSetUnfreeze.R' 'ContextTorch.R' 'DataBackendLazy.R' 'utils.R' 'DataDescriptor.R' 'LearnerFTTransformer.R' 'LearnerTorch.R' 'LearnerTorchFeatureless.R' 'LearnerTorchImage.R' 'LearnerTorchMLP.R' 'task_dataset.R' 'shape.R' 'PipeOpTorchIngress.R' 'LearnerTorchModel.R' 'LearnerTorchModule.R' 'LearnerTorchTabResNet.R' 'LearnerTorchVision.R' 'ModelDescriptor.R' 'PipeOpModule.R' 'PipeOpTorch.R' 'PipeOpTaskPreprocTorch.R' 'PipeOpTorchActivation.R' 'PipeOpTorchAdaptiveAvgPool.R' 'PipeOpTorchAvgPool.R' 'PipeOpTorchBatchNorm.R' 'PipeOpTorchBlock.R' 'PipeOpTorchCallbacks.R' 'PipeOpTorchConv.R' 'PipeOpTorchConvTranspose.R' 'PipeOpTorchDropout.R' 'PipeOpTorchFTCLS.R' 'PipeOpTorchFTTransformerBlock.R' 'PipeOpTorchFn.R' 'PipeOpTorchHead.R' 'PipeOpTorchIdentity.R' 'PipeOpTorchLayerNorm.R' 'PipeOpTorchLinear.R' 'TorchLoss.R' 'PipeOpTorchLoss.R' 'PipeOpTorchMaxPool.R' 'PipeOpTorchMerge.R' 'PipeOpTorchModel.R' 'PipeOpTorchOptimizer.R' 'PipeOpTorchReshape.R' 'PipeOpTorchSoftmax.R' 'PipeOpTorchTokenizer.R' 'Select.R' 'TaskClassif_cifar.R' 'TaskClassif_lazy_iris.R' 'TaskClassif_melanoma.R' 'TaskClassif_mnist.R' 'TaskClassif_tiny_imagenet.R' 'TorchDescriptor.R' 'TorchOptimizer.R' 'bibentries.R' 'cache.R' 'lazy_tensor.R' 'learner_torch_methods.R' 'materialize.R' 'merge_graphs.R' 'multi_tensor_dataset.R' 'nn.R' 'nn_graph.R' 'paramset_torchlearner.R' 'preprocess.R' 'rd_info.R' 'with_torch_settings.R' 'zzz.R'
Packaged: 2025-07-07 12:11:28 UTC; sebi
Author: Sebastian Fischer [cre, aut], Bernd Bischl [ctb], Lukas Burk [ctb], Martin Binder [aut], Florian Pfisterer [ctb], Carson Zhang [ctb]
Maintainer: Sebastian Fischer <sebf.fischer@gmail.com>
Repository: CRAN
Date/Publication: 2025-07-07 12:40:02 UTC

mlr3torch: Deep Learning with 'mlr3'

Description

Deep Learning library that extends the mlr3 framework by building upon the 'torch' package. It allows you to conveniently build, train, and evaluate deep learning models without having to worry about low-level details. Custom architectures can be created using the graph language defined in 'mlr3pipelines'.

Options

Author(s)

Maintainer: Sebastian Fischer sebf.fischer@gmail.com (ORCID)

Authors:

Other contributors:

See Also

Useful links:


Compare lazy tensors

Description

Compares lazy tensors using their indices and the data descriptor's hash. This means that two lazy_tensor elements are considered equal if their data descriptors have the same hash and they refer to the same index.

Usage

## S3 method for class 'lazy_tensor'
x == y

Arguments

x, y

(lazy_tensor)
Values to compare.
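
Examples

A minimal sketch of the resulting semantics, using the vector converter as_lazy_tensor() documented later in this manual:

lt = as_lazy_tensor(1:3)
# same data descriptor and same index: equal
lt[1] == lt[1]
# same data descriptor, different indices: not equal
lt[1] == lt[2]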


Data Descriptor

Description

A data descriptor is a rather internal data structure used in the lazy_tensor data type. In essence it is an annotated torch::dataset and a preprocessing graph (consisting mostly of PipeOpModule operators). The additional meta data (e.g. pointer, shapes) makes it possible to preprocess lazy_tensors in an mlr3pipelines::Graph just like any (non-lazy) data type. The preprocessing is applied when materialize() is called on the lazy_tensor.

To create a data descriptor, you can also use the as_data_descriptor() function.

Details

While it would be more natural to define this as an S3 class, we opted for an R6 class to avoid the usual trouble of serializing S3 objects. If each row contained a DataDescriptor as an S3 class, this would copy the object when serializing.

Public fields

dataset

(torch::dataset)
The dataset.

graph

(Graph)
The preprocessing graph.

dataset_shapes

(named list() of (integer() or NULL))
The shapes of the output.

input_map

(character())
The input map from the dataset to the preprocessing graph.

pointer

(character(2))
The output pointer.

pointer_shape

(integer() | NULL)
The shape of the output indicated by pointer.

dataset_hash

(character(1))
Hash for the wrapped dataset.

hash

(character(1))
Hash for the data descriptor.

graph_input

(character())
The input channels of the preprocessing graph (cached to save time).

pointer_shape_predict

(integer() or NULL)
Internal use only.

Methods

Public methods


Method new()

Creates a new instance of this R6 class.

Usage
DataDescriptor$new(
  dataset,
  dataset_shapes = NULL,
  graph = NULL,
  input_map = NULL,
  pointer = NULL,
  pointer_shape = NULL,
  pointer_shape_predict = NULL,
  clone_graph = TRUE
)
Arguments
dataset

(torch::dataset)
The torch dataset. It should return a named list() of torch_tensor objects.

dataset_shapes

(named list() of (integer() or NULL))
The shapes of the output. Names are the elements of the list returned by the dataset. If the shape is not NULL (unknown, e.g. for images of different sizes) the first dimension must be NA to indicate the batch dimension.

graph

(Graph)
The preprocessing graph. If left NULL, no preprocessing is applied to the data and input_map, pointer, pointer_shape, and pointer_shape_predict are inferred in case the dataset returns only one element.

input_map

(character())
Character vector that must have the same length as the input of the graph. Specifies how the data from the dataset is fed into the preprocessing graph.

pointer

(character(2) | NULL)
Points to an output channel within graph: Element 1 is the PipeOp's id and element 2 is that PipeOp's output channel.

pointer_shape

(integer() | NULL)
Shape of the output indicated by pointer.

pointer_shape_predict

(integer() or NULL)
Internal use only. Used in a Graph to anticipate possible mismatches between train and predict shapes.

clone_graph

(logical(1))
Whether to clone the preprocessing graph.


Method print()

Prints the object

Usage
DataDescriptor$print(...)
Arguments
...

(any)
Unused


Method clone()

The objects of this class are cloneable with this method.

Usage
DataDescriptor$clone(deep = FALSE)
Arguments
deep

Whether to make a deep clone.

See Also

ModelDescriptor, lazy_tensor

Examples


# Create a dataset
ds = dataset(
  initialize = function() self$x = torch_randn(10, 3, 3),
  .getitem = function(i) list(x = self$x[i, ]),
  .length = function() nrow(self$x)
)()
dd = DataDescriptor$new(ds, list(x = c(NA, 3, 3)))
dd
# is the same as using the converter:
as_data_descriptor(ds, list(x = c(NA, 3, 3)))


Represent a Model with Meta-Info

Description

Represents a model; possibly a complete model, possibly one in the process of being built up.

This model takes input tensors of shapes shapes_in and pipes them through graph. Input shapes get mapped to input channels of graph. Output shapes are named by the output channels of graph; it is also possible to represent no-ops on tensors, in which case names of input and output should be identical.

ModelDescriptor objects typically represent partial models being built up, in which case the pointer slot indicates a specific point in the graph that produces a tensor of shape pointer_shape, on which the graph should be extended. It is allowed for the graph in this structure to be modified by-reference in different parts of the code. However, these modifications may never add edges with elements of the Graph as destination. In particular, no element of graph$input may be removed by reference, e.g. by adding an edge to the Graph that has the input channel of a PipeOp that was previously without parent as its destination.

In most cases it is better to create a specific ModelDescriptor by training a Graph consisting (mostly) of operators PipeOpTorchIngress, PipeOpTorch, PipeOpTorchLoss, PipeOpTorchOptimizer, and PipeOpTorchCallbacks.

A ModelDescriptor can be converted to a nn_graph via model_descriptor_to_module.

Usage

ModelDescriptor(
  graph,
  ingress,
  task,
  optimizer = NULL,
  loss = NULL,
  callbacks = NULL,
  pointer = NULL,
  pointer_shape = NULL
)

Arguments

graph

(Graph)
Graph of PipeOpModule and PipeOpNOP operators.

ingress

(uniquely named list of TorchIngressToken)
List of inputs that go into graph. Names of this must be a subset of graph$input$name.

task

(Task)
(Training-)Task for which the model is being built. May be necessary for some aspects, e.g. deciding what loss to use.

optimizer

(TorchOptimizer | NULL)
Additional info: what optimizer to use.

loss

(TorchLoss | NULL)
Additional info: what loss to use.

callbacks

(A list of CallbackSet or NULL)
Additional info: what callbacks to use.

pointer

(character(2) | NULL)
Indicates the output on which the model being built currently ends. Points to an output channel within graph: element 1 is the PipeOp's id and element 2 is that PipeOp's output channel.

pointer_shape

(integer() | NULL)
Shape of the output indicated by pointer.

Value

(ModelDescriptor)

See Also

Other Model Configuration: mlr_pipeops_torch_callbacks, mlr_pipeops_torch_loss, mlr_pipeops_torch_optimizer, model_descriptor_union()

Other Graph Network: TorchIngressToken(), mlr_learners_torch_model, mlr_pipeops_module, mlr_pipeops_torch, mlr_pipeops_torch_ingress, mlr_pipeops_torch_ingress_categ, mlr_pipeops_torch_ingress_ltnsr, mlr_pipeops_torch_ingress_num, model_descriptor_to_learner(), model_descriptor_to_module(), model_descriptor_union(), nn_graph()
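
Examples

A minimal sketch of the recommended route described above, i.e. obtaining a ModelDescriptor by training a small Graph of torch pipeops (the pipeop keys "torch_ingress_num", "nn_linear", and "nn_relu" are assumed here):

graph = po("torch_ingress_num") %>>%
  po("nn_linear", out_features = 10) %>>%
  po("nn_relu")
md = graph$train(tsk("iris"))[[1L]]
md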


Selector Functions for Character Vectors

Description

A Select function subsets a character vector. Select functions are used by the callback CallbackSetUnfreeze to select parameters to freeze or unfreeze during training.

Usage

select_all()

select_none()

select_grep(pattern, ignore.case = FALSE, perl = FALSE, fixed = FALSE)

select_name(param_names, assert_present = TRUE)

select_invert(select)

Arguments

pattern

See grep()

ignore.case

See grep()

perl

See grep()

fixed

See grep()

param_names

(character())
The names of the parameters to select.

assert_present

(logical(1))
Whether to check that param_names is a subset of the full vector of names.

select

(Select)
The Select function whose selection is inverted.

Functions

Examples

select_all()(c("a", "b"))
select_none()(c("a", "b"))
select_grep("b$")(c("ab", "ac"))
select_name("a")(c("a", "b"))
select_invert(select_all())(c("a", "b"))

Torch Callback

Description

This wraps a CallbackSet and annotates it with metadata, most importantly a ParamSet. The callback is created for the given parameter values by calling the ⁠$generate()⁠ method.

This class is usually used to configure the callback of a torch learner, e.g. when constructing a learner or in a ModelDescriptor.

For a list of available callbacks, see mlr3torch_callbacks. To conveniently retrieve a TorchCallback, use t_clbk().

Parameters

Defined by the constructor argument param_set. If no parameter set is provided during construction, the parameter set is constructed by creating a parameter for each argument of the wrapped callback's initialization method, where the parameters are then of type ParamUty.

Super class

mlr3torch::TorchDescriptor -> TorchCallback

Methods

Public methods

Inherited methods

Method new()

Creates a new instance of this R6 class.

Usage
TorchCallback$new(
  callback_generator,
  param_set = NULL,
  id = NULL,
  label = NULL,
  packages = NULL,
  man = NULL,
  additional_args = NULL
)
Arguments
callback_generator

(R6ClassGenerator)
The class generator for the callback that is being wrapped.

param_set

(ParamSet or NULL)
The parameter set. If NULL (default) it is inferred from callback_generator.

id

(character(1))
The id of the new object.

label

(character(1))
Label for the new instance.

packages

(character())
The R packages this object depends on.

man

(character(1))
String in the format ⁠[pkg]::[topic]⁠ pointing to a manual page for this object. The referenced help page can be opened via method ⁠$help()⁠.

additional_args

(any)
Additional arguments if necessary. For learning rate schedulers, this is the torch::LRScheduler.


Method clone()

The objects of this class are cloneable with this method.

Usage
TorchCallback$clone(deep = FALSE)
Arguments
deep

Whether to make a deep clone.

See Also

Other Callback: as_torch_callback(), as_torch_callbacks(), callback_set(), mlr3torch_callbacks, mlr_callback_set, mlr_callback_set.checkpoint, mlr_callback_set.progress, mlr_callback_set.tb, mlr_callback_set.unfreeze, mlr_context_torch, t_clbk(), torch_callback()

Other Torch Descriptor: TorchDescriptor, TorchLoss, TorchOptimizer, as_torch_callbacks(), as_torch_loss(), as_torch_optimizer(), mlr3torch_losses, mlr3torch_optimizers, t_clbk(), t_loss(), t_opt()

Examples


# Create a new torch callback from an existing callback set
torch_callback = TorchCallback$new(CallbackSetCheckpoint)
# The parameters are inferred
torch_callback$param_set

# Retrieve a torch callback from the dictionary
torch_callback = t_clbk("checkpoint",
  path = tempfile(), freq = 1
)
torch_callback
torch_callback$label
torch_callback$id

# open the help page of the wrapped callback set
# torch_callback$help()

# Create the callback set
callback = torch_callback$generate()
callback
# is the same as
CallbackSetCheckpoint$new(
  path = tempfile(), freq = 1
)

# Use in a learner
learner = lrn("regr.mlp", callbacks = t_clbk("checkpoint"))
# the parameters of the callback are added to the learner's parameter set
learner$param_set


Base Class for Torch Descriptors

Description

Abstract Base Class from which TorchLoss, TorchOptimizer, and TorchCallback inherit. This class wraps a generator (R6Class Generator or the torch version of such a generator) and annotates it with metadata such as a ParamSet, a label, an ID, packages, or a manual page.

The parameters are the construction arguments of the wrapped generator, and the parameter values (⁠$values⁠) are passed to the generator when calling the public method ⁠$generate()⁠.

Parameters

Defined by the constructor argument param_set. All parameters are tagged with "train", but this is done automatically during initialize.

Public fields

label

(character(1))
Label for this object. Can be used in tables, plot and text output instead of the ID.

param_set

(ParamSet)
Set of hyperparameters.

packages

(character())
Set of required packages. These packages are loaded, but not attached.

id

(character(1))
Identifier of the object. Used in tables, plot and text output.

generator

The wrapped generator that is described.

man

(character(1))
String in the format ⁠[pkg]::[topic]⁠ pointing to a manual page for this object.

Active bindings

phash

(character(1))
Hash (unique identifier) for this partial object, excluding some components which are varied systematically (e.g. the parameter values).

Methods

Public methods


Method new()

Creates a new instance of this R6 class.

Usage
TorchDescriptor$new(
  generator,
  id = NULL,
  param_set = NULL,
  packages = NULL,
  label = NULL,
  man = NULL,
  additional_args = NULL
)
Arguments
generator

The wrapped generator that is described.

id

(character(1))
The id of the new object.

param_set

(ParamSet)
The parameter set.

packages

(character())
The R packages this object depends on.

label

(character(1))
Label for the new instance.

man

(character(1))
String in the format ⁠[pkg]::[topic]⁠ pointing to a manual page for this object. The referenced help page can be opened via method ⁠$help()⁠.

additional_args

(list())
Additional arguments if necessary. For learning rate schedulers, this is the torch::LRScheduler.


Method print()

Prints the object

Usage
TorchDescriptor$print(...)
Arguments
...

(any)


Method generate()

Calls the generator with the given parameter values.

Usage
TorchDescriptor$generate()

Method help()

Displays the help file of the wrapped object.

Usage
TorchDescriptor$help()

Method clone()

The objects of this class are cloneable with this method.

Usage
TorchDescriptor$clone(deep = FALSE)
Arguments
deep

Whether to make a deep clone.

See Also

Other Torch Descriptor: TorchCallback, TorchLoss, TorchOptimizer, as_torch_callbacks(), as_torch_loss(), as_torch_optimizer(), mlr3torch_losses, mlr3torch_optimizers, t_clbk(), t_loss(), t_opt()


Torch Ingress Token

Description

This function creates an S3 class of class "TorchIngressToken", which is an internal data structure. It contains the (meta-)information of how a batch is generated from a Task and fed into an entry point of the neural network. It is stored as the ingress field in a ModelDescriptor.

Usage

TorchIngressToken(features, batchgetter, shape = NULL)

Arguments

features

(character or mlr3pipelines::Selector)
Features on which the batchgetter will operate or a selector (such as mlr3pipelines::selector_type).

batchgetter

(function)
Function with two arguments: data and device. This function is given the output of Task$data(rows = batch_indices, cols = features) and should produce a tensor of the shape specified by the shape argument.

shape

(integer)
Shape that batchgetter will produce. Batch dimension must be included as NA (but other dimensions can also be NA, i.e., unknown).

Value

TorchIngressToken object.

See Also

Other Graph Network: ModelDescriptor(), mlr_learners_torch_model, mlr_pipeops_module, mlr_pipeops_torch, mlr_pipeops_torch_ingress, mlr_pipeops_torch_ingress_categ, mlr_pipeops_torch_ingress_ltnsr, mlr_pipeops_torch_ingress_num, model_descriptor_to_learner(), model_descriptor_to_module(), model_descriptor_union(), nn_graph()

Examples


# Define a task for which we want to define an ingress token
task = tsk("iris")

# We create an ingress token for the two features Sepal.Length and Petal.Length:
# We have to specify the features, the batchgetter and the shape
features = c("Sepal.Length", "Petal.Length")
# As a batchgetter we use batchgetter_num

batch_dt = task$data(rows = 1:10, cols = features)
batch_dt
batch_tensor = batchgetter_num(batch_dt, "cpu")
batch_tensor

# The shape is unknown in the first dimension (batch dimension)

ingress_token = TorchIngressToken(
  features = features,
  batchgetter = batchgetter_num,
  shape = c(NA, 2)
)
ingress_token


Torch Loss

Description

This wraps a torch::nn_loss and annotates it with metadata, most importantly a ParamSet. The loss function is created for the given parameter values by calling the ⁠$generate()⁠ method.

This class is usually used to configure the loss function of a torch learner, e.g. when constructing a learner or in a ModelDescriptor.

For a list of available losses, see mlr3torch_losses. Items from this dictionary can be retrieved using t_loss().

Parameters

Defined by the constructor argument param_set. If no parameter set is provided during construction, the parameter set is constructed by creating a parameter for each argument of the wrapped loss function, where the parameters are then of type ParamUty.

Super class

mlr3torch::TorchDescriptor -> TorchLoss

Public fields

task_types

(character())
The task types this loss supports.

Methods

Public methods

Inherited methods

Method new()

Creates a new instance of this R6 class.

Usage
TorchLoss$new(
  torch_loss,
  task_types = NULL,
  param_set = NULL,
  id = NULL,
  label = NULL,
  packages = NULL,
  man = NULL
)
Arguments
torch_loss

(nn_loss or function)
The loss module or function that generates the loss module. Can have an argument task that will be provided when the loss is instantiated.

task_types

(character())
The task types supported by this loss.

param_set

(ParamSet or NULL)
The parameter set. If NULL (default) it is inferred from torch_loss.

id

(character(1))
The id of the new object.

label

(character(1))
Label for the new instance.

packages

(character())
The R packages this object depends on.

man

(character(1))
String in the format ⁠[pkg]::[topic]⁠ pointing to a manual page for this object. The referenced help page can be opened via method ⁠$help()⁠.


Method print()

Prints the object

Usage
TorchLoss$print(...)
Arguments
...

(any)


Method generate()

Instantiates the loss function.

Usage
TorchLoss$generate(task = NULL)
Arguments
task

(Task)
The task. Must be provided if the loss function requires a task.

Returns

torch_loss


Method clone()

The objects of this class are cloneable with this method.

Usage
TorchLoss$clone(deep = FALSE)
Arguments
deep

Whether to make a deep clone.

See Also

Other Torch Descriptor: TorchCallback, TorchDescriptor, TorchOptimizer, as_torch_callbacks(), as_torch_loss(), as_torch_optimizer(), mlr3torch_losses, mlr3torch_optimizers, t_clbk(), t_loss(), t_opt()

Examples


# Create a new torch loss
torch_loss = TorchLoss$new(torch_loss = nn_mse_loss, task_types = "regr")
torch_loss
# the parameters are inferred
torch_loss$param_set

# Retrieve a loss from the dictionary:
torch_loss = t_loss("mse", reduction = "mean")
torch_loss
torch_loss$param_set
torch_loss$label
torch_loss$task_types
torch_loss$id

# Create the loss function
loss_fn = torch_loss$generate()
loss_fn
# Is the same as
nn_mse_loss(reduction = "mean")

# open the help page of the wrapped loss function
# torch_loss$help()

# Use in a learner
learner = lrn("regr.mlp", loss = t_loss("mse"))
# The parameters of the loss are added to the learner's parameter set
learner$param_set


Torch Optimizer

Description

This wraps a torch::torch_optimizer_generator and annotates it with metadata, most importantly a ParamSet. The optimizer is created for the given parameter values by calling the ⁠$generate()⁠ method.

This class is usually used to configure the optimizer of a torch learner, e.g. when constructing a learner or in a ModelDescriptor.

For a list of available optimizers, see mlr3torch_optimizers. Items from this dictionary can be retrieved using t_opt().

Parameters

Defined by the constructor argument param_set. If no parameter set is provided during construction, the parameter set is constructed by creating a parameter for each argument of the wrapped optimizer, where the parameters are then of type ParamUty.

Super class

mlr3torch::TorchDescriptor -> TorchOptimizer

Methods

Public methods

Inherited methods

Method new()

Creates a new instance of this R6 class.

Usage
TorchOptimizer$new(
  torch_optimizer,
  param_set = NULL,
  id = NULL,
  label = NULL,
  packages = NULL,
  man = NULL
)
Arguments
torch_optimizer

(torch_optimizer_generator)
The torch optimizer.

param_set

(ParamSet or NULL)
The parameter set. If NULL (default) it is inferred from torch_optimizer.

id

(character(1))
The id of the new object.

label

(character(1))
Label for the new instance.

packages

(character())
The R packages this object depends on.

man

(character(1))
String in the format ⁠[pkg]::[topic]⁠ pointing to a manual page for this object. The referenced help page can be opened via method ⁠$help()⁠.


Method generate()

Instantiates the optimizer.

Usage
TorchOptimizer$generate(params)
Arguments
params

(named list() of torch_tensors)
The parameters of the network.

Returns

torch_optimizer


Method clone()

The objects of this class are cloneable with this method.

Usage
TorchOptimizer$clone(deep = FALSE)
Arguments
deep

Whether to make a deep clone.

See Also

Other Torch Descriptor: TorchCallback, TorchDescriptor, TorchLoss, as_torch_callbacks(), as_torch_loss(), as_torch_optimizer(), mlr3torch_losses, mlr3torch_optimizers, t_clbk(), t_loss(), t_opt()

Examples


# Create a new torch optimizer
torch_opt = TorchOptimizer$new(optim_ignite_adam, label = "adam")
torch_opt
# If the param set is not specified, parameters are inferred but are of class ParamUty
torch_opt$param_set

# open the help page of the wrapped optimizer
# torch_opt$help()

# Retrieve an optimizer from the dictionary
torch_opt = t_opt("sgd", lr = 0.1)
torch_opt
torch_opt$param_set
torch_opt$label
torch_opt$id

# Create the optimizer for a network
net = nn_linear(10, 1)
opt = torch_opt$generate(net$parameters)

# is the same as
optim_sgd(net$parameters, lr = 0.1)

# Use in a learner
learner = lrn("regr.mlp", optimizer = t_opt("sgd"))
# The parameters of the optimizer are added to the learner's parameter set
learner$param_set


Convert to Data Descriptor

Description

Converts the input to a DataDescriptor.

Usage

as_data_descriptor(x, dataset_shapes, ...)

Arguments

x

(any)
Object to convert.

dataset_shapes

(named list() of (integer() or NULL))
The shapes of the output. Names are the elements of the list returned by the dataset. If the shape is not NULL (unknown, e.g. for images of different sizes) the first dimension must be NA to indicate the batch dimension.

...

(any)
Further arguments passed to the DataDescriptor constructor.

Examples


ds = dataset("example",
  initialize = function() self$iris = iris[, -5],
  .getitem = function(i) list(x = torch_tensor(as.numeric(self$iris[i, ]))),
  .length = function() nrow(self$iris)
)()
as_data_descriptor(ds, list(x = c(NA, 4L)))

# if the dataset has a .getbatch method, the shapes are inferred
ds2 = dataset("example",
  initialize = function() self$iris = iris[, -5],
  .getbatch = function(i) list(x = torch_tensor(as.matrix(self$iris[i, ]))),
  .length = function() nrow(self$iris)
)()
as_data_descriptor(ds2)


Convert to Lazy Tensor

Description

Convert an object to a lazy_tensor.

Usage

as_lazy_tensor(x, ...)

## S3 method for class 'dataset'
as_lazy_tensor(x, dataset_shapes = NULL, ids = NULL, ...)

Arguments

x

(any)
Object to convert to a lazy_tensor

...

(any)
Additional arguments passed to the method.

dataset_shapes

(named list() of (integer() or NULL))
The shapes of the output. Names are the elements of the list returned by the dataset. If the shape is not NULL (unknown, e.g. for images of different sizes) the first dimension must be NA to indicate the batch dimension.

ids

(integer())
Which ids to include in the lazy tensor.

Examples


iris_ds = dataset("iris",
  initialize = function() {
    self$iris = iris[, -5]
  },
  .getbatch = function(i) {
    list(x = torch_tensor(as.matrix(self$iris[i, ])))
  },
  .length = function() nrow(self$iris)
)()
# no need to specify the dataset shapes as they can be inferred from the .getbatch method
# only first 5 observations
as_lazy_tensor(iris_ds, ids = 1:5)
# all observations
head(as_lazy_tensor(iris_ds))

iris_ds2 = dataset("iris",
  initialize = function() self$iris = iris[, -5],
  .getitem = function(i) list(x = torch_tensor(as.numeric(self$iris[i, ]))),
  .length = function() nrow(self$iris)
)()
# if .getitem is implemented we cannot infer the shapes as they might vary,
# so we have to annotate them explicitly
as_lazy_tensor(iris_ds2, dataset_shapes = list(x = c(NA, 4L)))[1:5]

# Convert a matrix
lt = as_lazy_tensor(matrix(rnorm(100), nrow = 20))
materialize(lt[1:5], rbind = TRUE)


Convert to CallbackSetLRScheduler

Description

Convert a torch scheduler generator to a CallbackSetLRScheduler.

Usage

as_lr_scheduler(x, step_on_epoch)

Arguments

x

(function)
The torch scheduler generator defined using torch::lr_scheduler().

step_on_epoch

(logical(1))
Whether the scheduler steps after every epoch (otherwise after every batch).
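
Examples

A minimal sketch, assuming torch::lr_step (a generator created via torch::lr_scheduler()) is a valid input:

# wrap torch's step-decay scheduler so it can be used as a torch callback
lr_callback = as_lr_scheduler(torch::lr_step, step_on_epoch = TRUE)
lr_callback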


Convert to a TorchCallback

Description

Converts an object to a TorchCallback.

Usage

as_torch_callback(x, clone = FALSE, ...)

Arguments

x

(any)
Object to be converted.

clone

(logical(1))
Whether to make a deep clone.

...

(any)
Additional arguments

Value

TorchCallback.

See Also

Other Callback: TorchCallback, as_torch_callbacks(), callback_set(), mlr3torch_callbacks, mlr_callback_set, mlr_callback_set.checkpoint, mlr_callback_set.progress, mlr_callback_set.tb, mlr_callback_set.unfreeze, mlr_context_torch, t_clbk(), torch_callback()
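
Examples

Two small sketches; the character method (look-up by callback id, analogous to t_clbk()) is an assumption:

# a TorchCallback is returned as is (optionally cloned)
as_torch_callback(t_clbk("checkpoint"))
# assumed: look up a callback by its id
as_torch_callback("progress")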


Convert to a list of Torch Callbacks

Description

Converts an object to a list of TorchCallback.

Usage

as_torch_callbacks(x, clone, ...)

Arguments

x

(any)
Object to convert.

clone

(logical(1))
Whether to create a deep clone.

...

(any)
Additional arguments.

Value

list() of TorchCallbacks

See Also

Other Callback: TorchCallback, as_torch_callback(), callback_set(), mlr3torch_callbacks, mlr_callback_set, mlr_callback_set.checkpoint, mlr_callback_set.progress, mlr_callback_set.tb, mlr_callback_set.unfreeze, mlr_context_torch, t_clbk(), torch_callback()

Other Torch Descriptor: TorchCallback, TorchDescriptor, TorchLoss, TorchOptimizer, as_torch_loss(), as_torch_optimizer(), mlr3torch_losses, mlr3torch_optimizers, t_clbk(), t_loss(), t_opt()


Convert to TorchLoss

Description

Converts an object to a TorchLoss.

Usage

as_torch_loss(x, clone = FALSE, ...)

Arguments

x

(any)
Object to convert to a TorchLoss.

clone

(logical(1))
Whether to make a deep clone.

...

(any)
Additional arguments. Currently used to pass additional constructor arguments to TorchLoss for objects of type nn_loss.

Value

TorchLoss.

See Also

Other Torch Descriptor: TorchCallback, TorchDescriptor, TorchLoss, TorchOptimizer, as_torch_callbacks(), as_torch_optimizer(), mlr3torch_losses, mlr3torch_optimizers, t_clbk(), t_loss(), t_opt()
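
Examples

A short sketch: converting a torch loss module generator (here nn_l1_loss from the torch package) into a TorchLoss:

loss = as_torch_loss(nn_l1_loss)
loss
# parameters are inferred from the generator's arguments
loss$param_set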


Convert to TorchOptimizer

Description

Converts an object to a TorchOptimizer.

Usage

as_torch_optimizer(x, clone = FALSE, ...)

Arguments

x

(any)
Object to convert to a TorchOptimizer.

clone

(logical(1))
Whether to make a deep clone. Default is FALSE.

...

(any)
Additional arguments. Currently used to pass additional constructor arguments to TorchOptimizer for objects of type torch_optimizer_generator.

Value

TorchOptimizer

See Also

Other Torch Descriptor: TorchCallback, TorchDescriptor, TorchLoss, TorchOptimizer, as_torch_callbacks(), as_torch_loss(), mlr3torch_losses, mlr3torch_optimizers, t_clbk(), t_loss(), t_opt()
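
Examples

A short sketch, converting the torch optimizer generator optim_adam into a TorchOptimizer:

opt = as_torch_optimizer(optim_adam)
opt
# parameters are inferred from the generator's arguments
opt$param_set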


Assert Lazy Tensor

Description

Asserts whether something is a lazy tensor.

Usage

assert_lazy_tensor(x)

Arguments

x

(any)
Object to check.
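
Examples

A minimal illustration; the call errors for anything that is not a lazy tensor:

lt = as_lazy_tensor(1:3)
assert_lazy_tensor(lt)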


Auto Device

Description

First tries cuda, then cpu.

Usage

auto_device(device = NULL)

Arguments

device

(character(1))
The device. If not NULL, it is returned as is.
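
Examples

A small sketch of the behaviour:

# an explicitly given device is returned as is
auto_device("cpu")
# otherwise "cuda" is chosen if available, else "cpu"
auto_device()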


Batchgetter for Categorical data

Description

Converts a data frame of categorical data into a long tensor by converting the data to integers. No input checks are performed.

Usage

batchgetter_categ(data, ...)

Arguments

data

(data.table)
data.table to be converted to a tensor.

...

(any)
Unused.
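
Examples

A minimal sketch of what the batchgetter produces (a long tensor of integer codes):

dt = data.table::data.table(
  f1 = factor(c("a", "b", "a")),
  f2 = factor(c("x", "x", "y"))
)
batchgetter_categ(dt)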


Batchgetter for Numeric Data

Description

Converts a data frame of numeric data into a float tensor by calling as.matrix(). No input checks are performed.

Usage

batchgetter_num(data, ...)

Arguments

data

(data.table())
data.table to be converted to a tensor.

...

(any)
Unused.


Create a Set of Callbacks for Torch

Description

Creates an R6ClassGenerator inheriting from CallbackSet. Additionally performs checks such as that the stages are not accidentally misspelled. To create a TorchCallback use torch_callback().

In order for the resulting class to be cloneable, the private method ⁠$deep_clone()⁠ must be provided.

Usage

callback_set(
  classname,
  on_begin = NULL,
  on_end = NULL,
  on_exit = NULL,
  on_epoch_begin = NULL,
  on_before_valid = NULL,
  on_epoch_end = NULL,
  on_batch_begin = NULL,
  on_batch_end = NULL,
  on_after_backward = NULL,
  on_batch_valid_begin = NULL,
  on_batch_valid_end = NULL,
  on_valid_end = NULL,
  state_dict = NULL,
  load_state_dict = NULL,
  initialize = NULL,
  public = NULL,
  private = NULL,
  active = NULL,
  parent_env = parent.frame(),
  inherit = CallbackSet,
  lock_objects = FALSE
)

Arguments

classname

(character(1))
The class name.

on_begin, on_end, on_epoch_begin, on_before_valid, on_epoch_end, on_batch_begin, on_batch_end, on_after_backward, on_batch_valid_begin, on_batch_valid_end, on_valid_end, on_exit

(function)
Function to execute at the given stage, see section Stages.

state_dict

(⁠function()⁠)
The function that retrieves the state dict from the callback. This is what will be available in the learner after training.

load_state_dict

(⁠function(state_dict)⁠)
Function that loads a callback state.

initialize

(⁠function()⁠)
The initialization method of the callback.

public, private, active

(list())
Additional public, private, and active fields to add to the callback.

parent_env

(environment())
The parent environment for the R6Class.

inherit

(R6ClassGenerator)
From which class to inherit. This class must either be CallbackSet (default) or inherit from it.

lock_objects

(logical(1))
Whether to lock the objects of the resulting R6Class. If FALSE (default), values can be freely assigned to self without declaring them in the class definition.

Value

CallbackSet

See Also

Other Callback: TorchCallback, as_torch_callback(), as_torch_callbacks(), mlr3torch_callbacks, mlr_callback_set, mlr_callback_set.checkpoint, mlr_callback_set.progress, mlr_callback_set.tb, mlr_callback_set.unfreeze, mlr_context_torch, t_clbk(), torch_callback()
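
Examples

A minimal sketch of a custom callback set that counts completed epochs and exposes the count via the state API (class and field names are illustrative):

CallbackSetEpochCounter = callback_set("CallbackSetEpochCounter",
  initialize = function() {
    self$n = 0
  },
  on_epoch_end = function() {
    self$n = self$n + 1
  },
  state_dict = function() {
    self$n
  },
  load_state_dict = function(state_dict) {
    self$n = state_dict
  }
)
CallbackSetEpochCounter

To use it in a learner, the generator would still be wrapped in a TorchCallback (see TorchCallback$new()); torch_callback() combines both steps.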


Cross Entropy Loss

Description

The cross_entropy loss function selects the multi-class (nn_cross_entropy_loss) or binary (nn_bce_with_logits_loss) cross entropy loss based on the number of classes. Because of this, there is a slight reparameterization of the loss arguments, see Parameters.

Parameters

Examples


loss = t_loss("cross_entropy")
# multi-class
multi_ce = loss$generate(tsk("iris"))
multi_ce

# binary
binary_ce = loss$generate(tsk("sonar"))
binary_ce


Infer Shapes

Description

Infer the shapes of the output of a function based on the shapes of the input. This is done as follows:

  1. All NAs are replaced with values 1, 2, 3.

  2. Three tensors are generated for the three shapes of step 1.

  3. The function is called on these three tensors and the shapes are calculated.

  4. If:

    • the number of dimensions varies, an error is thrown.

    • the number of dimensions is the same, values are set to NA if the dimension is varying between the three tensors and otherwise set to the unique value.

Usage

infer_shapes(shapes_in, param_vals, output_names, fn, rowwise, id)

Arguments

shapes_in

(list())
A list of shapes of the input tensors.

param_vals

(list())
A list of named parameters for the function.

output_names

(character())
The names of the output tensors.

fn

(⁠function()⁠)
The function to infer the shapes for.

rowwise

(logical(1))
Whether the function is rowwise.

id

(character(1))
The id of the PipeOp (for error messages).

Value

(list())
A list of shapes of the output tensors.
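
Examples

A minimal sketch (assuming param_vals is passed on to fn as additional arguments); the flatten operation below is only for illustration:

infer_shapes(
  shapes_in = list(input = c(NA, 3, 4)),
  param_vals = list(),
  output_names = "output",
  fn = function(x) torch_flatten(x, start_dim = 2L),
  rowwise = FALSE,
  id = "flatten"
)
# the batch dimension stays NA, the remaining dimensions collapse to 12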


Ingress Token for Categorical Features

Description

Creates a TorchIngressToken for an entry point that receives a tensor containing all categorical (factor(), ordered(), logical()) features of a task.

Usage

ingress_categ(shape = NULL)

Arguments

shape

(integer() or NULL)
Shape that batchgetter will produce. Batch-dimension should be included as NA.

Value

TorchIngressToken


Ingress Token for Lazy Tensor Feature

Description

Creates a TorchIngressToken for an entry point that receives a tensor containing a single lazy tensor feature.

Usage

ingress_ltnsr(feature_name = NULL, shape = NULL)

Arguments

feature_name

(character(1))
Which lazy tensor feature to select if there is more than one.

shape

(integer() or NULL)
Shape that batchgetter will produce. Batch-dimension should be included as NA.

Value

TorchIngressToken


Ingress Token for Numeric Features

Description

Creates a TorchIngressToken for an entry point that receives a tensor containing all numeric (integer() and double()) features of a task.

Usage

ingress_num(shape = NULL)

Arguments

shape

(integer() or NULL)
Shape that batchgetter will produce. Batch-dimension should be included as NA.

Value

TorchIngressToken
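
Examples

Two small sketches of creating such a token:

# leave the shape to be inferred from the task
ingress_num()
# or give it explicitly, with the batch dimension as NA
ingress_num(shape = c(NA, 4))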


Check for lazy tensor

Description

Checks whether an object is a lazy tensor.

Usage

is_lazy_tensor(x)

Arguments

x

(any)
Object to check.
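
Examples

A minimal illustration:

is_lazy_tensor(as_lazy_tensor(1:3))
is_lazy_tensor(1:3)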


Shape of Lazy Tensor

Description

Shape of a lazy tensor. Might be NULL if the shape is not known or varies between rows. The batch dimension is always NA.

Usage

lazy_shape(x)

Arguments

x

(lazy_tensor)
Lazy tensor.

Value

(integer() or NULL)

Examples


lt = as_lazy_tensor(1:10)
lazy_shape(lt)
lt = as_lazy_tensor(matrix(1:10, nrow = 2))
lazy_shape(lt)


Create a lazy tensor

Description

Create a lazy tensor.

Usage

lazy_tensor(data_descriptor = NULL, ids = NULL)

Arguments

data_descriptor

(DataDescriptor or NULL)
The data descriptor or NULL for a lazy tensor of length 0.

ids

(integer())
The elements of the data_descriptor to be included in the lazy tensor.

Examples


ds = dataset("example",
  initialize = function() self$iris = iris[, -5],
  .getitem = function(i) list(x = torch_tensor(as.numeric(self$iris[i, ]))),
  .length = function() nrow(self$iris)
)()
dd = as_data_descriptor(ds, list(x = c(NA, 4L)))
lt = as_lazy_tensor(dd)


Materialize Lazy Tensor Columns

Description

This will materialize a lazy_tensor() or a data.frame() / list() containing – among other things – lazy_tensor() columns. I.e. the data described in the underlying DataDescriptors is loaded for the indices in the lazy_tensor(), is preprocessed, and is then put onto the specified device. Because not all elements in a lazy tensor must have the same shape, a list of tensors is returned by default. If all elements have the same shape, these tensors can also be rbinded into a single tensor (parameter rbind).

Usage

materialize(x, device = "cpu", rbind = FALSE, ...)

## S3 method for class 'list'
materialize(x, device = "cpu", rbind = FALSE, cache = "auto", ...)

Arguments

x

(any)
The object to materialize. Either a lazy_tensor or a list() / data.frame() containing lazy_tensor columns.

device

(character(1))
The torch device.

rbind

(logical(1))
Whether to rbind the lazy tensor columns (TRUE) or return them as a list of tensors (FALSE). In the second case, there is no batch dimension.

...

(any)
Additional arguments.

cache

(character(1) or environment() or NULL)
Optional cache for (intermediate) materialization results. By default, caching is enabled when the same dataset or data descriptor (with a different output pointer) is used for more than one lazy tensor column.

Details

Materializing a lazy tensor consists of:

  1. Loading the data from the internal dataset of the DataDescriptor.

  2. Processing these batches in the preprocessing Graphs.

  3. Returning the result of the PipeOp pointed to by the DataDescriptor (pointer).

With multiple lazy_tensor columns we can benefit from caching because: a) Output(s) from the dataset might be input to multiple graphs. b) Different lazy tensors might be outputs from the same graph.

For this reason it is possible to provide a cache environment. The hash key for a) is the hash of the indices and the dataset. The hash key for b) is the hash of the indices, dataset and preprocessing graph.

Value

(list() of torch_tensors or a torch_tensor)

Examples


lt1 = as_lazy_tensor(torch_randn(10, 3))
materialize(lt1, rbind = TRUE)
materialize(lt1, rbind = FALSE)
lt2 = as_lazy_tensor(torch_randn(10, 4))
d = data.table::data.table(lt1 = lt1, lt2 = lt2)
materialize(d, rbind = TRUE)
materialize(d, rbind = FALSE)


Materialize a Lazy Tensor

Description

Convert a lazy_tensor to a torch_tensor.

Usage

materialize_internal(x, device = "cpu", cache = NULL, rbind)

Arguments

x

(lazy_tensor())
The lazy tensor to materialize.

device

(character(1L))
The device to put the materialized tensor on (after running the preprocessing graph).

cache

(NULL or environment())
Whether to cache the (intermediate) results of the materialization. This can make data loading faster when multiple lazy_tensors reference the same dataset or graph.

rbind

(logical(1))
Whether to rbind the resulting tensors (TRUE) or return them as a list of tensors (FALSE).

Details

Materializing a lazy tensor consists of:

  1. Loading the data from the internal dataset of the DataDescriptor.

  2. Processing these batches in the preprocessing Graphs.

  3. Returning the result of the PipeOp pointed to by the DataDescriptor (pointer).

When materializing multiple lazy_tensor columns, caching can be useful because: a) Output(s) from the dataset might be input to multiple graphs (in task_dataset this should rarely be the case because we try to merge them). b) Different lazy tensors might be outputs from the same graph.

For this reason it is possible to provide a cache environment. The hash key for a) is the hash of the indices and the dataset. The hash key for b) is the hash of the indices, dataset, and preprocessing graph.

Value

(torch_tensor or list() of torch_tensors)


Dictionary of Torch Callbacks

Description

A mlr3misc::Dictionary of torch callbacks. Use t_clbk() to conveniently retrieve callbacks. Can be converted to a data.table using as.data.table.

Usage

mlr3torch_callbacks

Format

An object of class DictionaryMlr3torchCallbacks (inherits from Dictionary, R6) of length 12.

See Also

Other Callback: TorchCallback, as_torch_callback(), as_torch_callbacks(), callback_set(), mlr_callback_set, mlr_callback_set.checkpoint, mlr_callback_set.progress, mlr_callback_set.tb, mlr_callback_set.unfreeze, mlr_context_torch, t_clbk(), torch_callback()

Other Dictionary: mlr3torch_losses, mlr3torch_optimizers, t_opt()

Examples


mlr3torch_callbacks$get("checkpoint")
# is the same as
t_clbk("checkpoint")
# convert to a data.table
as.data.table(mlr3torch_callbacks)


Loss Functions

Description

Dictionary of torch loss descriptors. See t_loss() for conveniently retrieving a loss function. Can be converted to a data.table using as.data.table.

Usage

mlr3torch_losses

Format

An object of class DictionaryMlr3torchLosses (inherits from Dictionary, R6) of length 12.

Available Loss Functions

cross_entropy, l1, mse

See Also

Other Torch Descriptor: TorchCallback, TorchDescriptor, TorchLoss, TorchOptimizer, as_torch_callbacks(), as_torch_loss(), as_torch_optimizer(), mlr3torch_optimizers, t_clbk(), t_loss(), t_opt()

Other Dictionary: mlr3torch_callbacks, mlr3torch_optimizers, t_opt()

Examples


mlr3torch_losses$get("mse")
# is equivalent to
t_loss("mse")
# convert to a data.table
as.data.table(mlr3torch_losses)


Optimizers

Description

Dictionary of torch optimizers. Use t_opt() to conveniently retrieve optimizers. Can be converted to a data.table using as.data.table.

Usage

mlr3torch_optimizers

Format

An object of class DictionaryMlr3torchOptimizers (inherits from Dictionary, R6) of length 12.

Available Optimizers

adagrad, adam, adamw, rmsprop, sgd

See Also

Other Torch Descriptor: TorchCallback, TorchDescriptor, TorchLoss, TorchOptimizer, as_torch_callbacks(), as_torch_loss(), as_torch_optimizer(), mlr3torch_losses, t_clbk(), t_loss(), t_opt()

Other Dictionary: mlr3torch_callbacks, mlr3torch_losses, t_opt()

Examples


mlr3torch_optimizers$get("adam")
# is equivalent to
t_opt("adam")
# convert to a data.table
as.data.table(mlr3torch_optimizers)


Lazy Data Backend

Description

This lazy data backend wraps a constructor that lazily creates another backend, e.g. by downloading (and caching) some data from the internet. This backend should be used when some metadata of the backend is known in advance and should be accessible before downloading the actual data. When the backend is first constructed, it is verified that the provided metadata was correct, otherwise an informative error message is thrown. After the lazily constructed backend has been created, calls like ⁠$data()⁠, ⁠$missings()⁠, ⁠$distinct()⁠, or ⁠$hash()⁠ are redirected to it.

Information that is available before the backend is constructed: the number of rows ($nrow), the number of columns ($ncol), the row names ($rownames), the column names ($colnames), and the column information provided as col_info.

Beware that accessing the backend's hash also constructs the backend.

Note that while in most cases the data contains lazy_tensor columns, this is not necessary and the naming of this class has nothing to do with the lazy_tensor data type.

Important

When the constructor generates factor() variables it is important that the ordering of the levels in data corresponds to the ordering of the levels in the col_info argument.

Super class

mlr3::DataBackend -> DataBackendLazy

Active bindings

backend

(DataBackend)
The wrapped backend that is lazily constructed when first accessed.

nrow

(integer(1))
Number of rows (observations).

ncol

(integer(1))
Number of columns (variables), including the primary key column.

rownames

(integer())
Returns vector of all distinct row identifiers, i.e. the contents of the primary key column.

colnames

(character())
Returns vector of all column names, including the primary key column.

is_constructed

(logical(1))
Whether the backend has already been constructed.

Methods

Public methods

Inherited methods

Method new()

Creates a new instance of this R6 class.

Usage
DataBackendLazy$new(constructor, rownames, col_info, primary_key)
Arguments
constructor

(function)
A function with argument backend (the lazy backend), whose return value must be the actual backend. This function is called the first time the field ⁠$backend⁠ is accessed.

rownames

(integer())
The row names. Must be a permutation of the rownames of the lazily constructed backend.

col_info

(data.table::data.table())
A data.table with columns id, type and levels containing the column id, type and levels. Note that the levels must be provided in the correct order.

primary_key

(character(1))
Name of the primary key column.


Method data()

Returns a slice of the data in the specified format. The rows must be addressed as vector of primary key values, columns must be referred to via column names. Queries for rows with no matching row id and queries for columns with no matching column name are silently ignored. Rows are guaranteed to be returned in the same order as rows, columns may be returned in an arbitrary order. Duplicated row ids result in duplicated rows, duplicated column names lead to an exception.

Accessing the data triggers the construction of the backend.

Usage
DataBackendLazy$data(rows, cols)
Arguments
rows

(integer())
Row indices.

cols

(character())
Column names.


Method head()

Retrieve the first n rows. This triggers the construction of the backend.

Usage
DataBackendLazy$head(n = 6L)
Arguments
n

(integer(1))
Number of rows.

Returns

data.table::data.table() of the first n rows.


Method distinct()

Returns a named list of vectors of distinct values for each column specified. If na_rm is TRUE, missing values are removed from the returned vectors of distinct values. Non-existing rows and columns are silently ignored.

This triggers the construction of the backend.

Usage
DataBackendLazy$distinct(rows, cols, na_rm = TRUE)
Arguments
rows

(integer())
Row indices.

cols

(character())
Column names.

na_rm

(logical(1))
Whether to remove NAs or not.

Returns

Named list() of distinct values.


Method missings()

Returns the number of missing values per column in the specified slice of data. Non-existing rows and columns are silently ignored.

This triggers the construction of the backend.

Usage
DataBackendLazy$missings(rows, cols)
Arguments
rows

(integer())
Row indices.

cols

(character())
Column names.

Returns

Total of missing values per column (named numeric()).


Method print()

Printer.

Usage
DataBackendLazy$print()

Examples


# We first define a backend constructor
constructor = function(backend) {
  cat("Data is constructed!\n")
  DataBackendDataTable$new(
    data.table(x = rnorm(10), y = rnorm(10), row_id = 1:10),
    primary_key = "row_id"
  )
}

# to wrap this backend constructor in a lazy backend, we need to provide the correct metadata for it
column_info = data.table(
  id = c("x", "y", "row_id"),
  type = c("numeric", "numeric", "integer"),
  levels = list(NULL, NULL, NULL)
)
backend_lazy = DataBackendLazy$new(
  constructor = constructor,
  rownames = 1:10,
  col_info = column_info,
  primary_key = "row_id"
)

# Note that the constructor is not called for the calls below
# as they can be read from the metadata
backend_lazy$nrow
backend_lazy$rownames
backend_lazy$ncol
backend_lazy$colnames
col_info(backend_lazy)

# Only now the backend is constructed
backend_lazy$data(1, "x")
# Is the same as:
backend_lazy$backend$data(1, "x")


Base Class for Callbacks

Description

Base class from which callbacks should inherit (see section Inheriting). A callback set is a collection of functions that are executed at different stages of the training loop. They can be used to gain more control over the training process of a neural network without having to write everything from scratch.

When used in a torch learner, the CallbackSet is wrapped in a TorchCallback. The latter's parameter set represents the arguments of the CallbackSet's ⁠$initialize()⁠ method.

Inheriting

For each available stage (see section Stages) a public method ⁠$on_<stage>()⁠ can be defined. The evaluation context (a ContextTorch) can be accessed via self$ctx, which contains the current state of the training loop. This context is assigned at the beginning of the training loop and removed afterwards. Different stages of a callback can communicate with each other by assigning values to self.

State: To be able to store information in the ⁠$model⁠ slot of a LearnerTorch, callbacks support a state API. You can overload the ⁠$state_dict()⁠ public method to define what will be stored in ⁠learner$model$callbacks$<id>⁠ after training finishes. This then also requires implementing a ⁠$load_state_dict(state_dict)⁠ method that defines how to load a previously saved callback state into a different callback. Note that the ⁠$state_dict()⁠ should not include the parameter values that were used to initialize the callback.

For creating custom callbacks, the function torch_callback() is recommended, which creates a CallbackSet and then wraps it in a TorchCallback. To create a CallbackSet the convenience function callback_set() can be used. These functions perform checks such as that the stages are not accidentally misspelled.

Stages

Terminate Training

If training is to be stopped, it is possible to set the field ⁠$terminate⁠ of ContextTorch. At the end of every epoch this field is checked and if it is TRUE, training stops. This can for example be used to implement custom early stopping.

Public fields

ctx

(ContextTorch or NULL)
The evaluation context for the callback. This field should always be NULL except during the ⁠$train()⁠ call of the torch learner.

Active bindings

stages

(character())
The active stages of this callback set.

Methods

Public methods


Method print()

Prints the object.

Usage
CallbackSet$print(...)
Arguments
...

(any)
Currently unused.


Method state_dict()

Returns information that is kept in the LearnerTorch's state after training. This information should be loadable into the callback using ⁠$load_state_dict()⁠ to be able to continue training. This returns NULL by default.

Usage
CallbackSet$state_dict()

Method load_state_dict()

Loads the state dict into the callback to continue training.

Usage
CallbackSet$load_state_dict(state_dict)
Arguments
state_dict

(any)
The state dict as retrieved via ⁠$state_dict()⁠.


Method clone()

The objects of this class are cloneable with this method.

Usage
CallbackSet$clone(deep = FALSE)
Arguments
deep

Whether to make a deep clone.

See Also

Other Callback: TorchCallback, as_torch_callback(), as_torch_callbacks(), callback_set(), mlr3torch_callbacks, mlr_callback_set.checkpoint, mlr_callback_set.progress, mlr_callback_set.tb, mlr_callback_set.unfreeze, mlr_context_torch, t_clbk(), torch_callback()


Checkpoint Callback

Description

Saves the optimizer and network states during training. The final network and optimizer are always stored.

Details

Saving the learner itself in the callback with a trained model is impossible, as the model slot is set after the last callback step is executed.

Super class

mlr3torch::CallbackSet -> CallbackSetCheckpoint

Methods

Public methods

Inherited methods

Method new()

Creates a new instance of this R6 class.

Usage
CallbackSetCheckpoint$new(path, freq, freq_type = "epoch")
Arguments
path

(character(1))
The path to a folder where the models are saved.

freq

(integer(1))
How often the model is saved. The frequency is counted either per step or per epoch, which can be configured through the freq_type parameter.

freq_type

(character(1))
Can be either "epoch" (default) or "step".


Method on_epoch_end()

Saves the network and optimizer state dict. Does nothing if freq_type or freq are not met.

Usage
CallbackSetCheckpoint$on_epoch_end()

Method on_batch_end()

Saves the selected objects defined in save. Does nothing if freq_type or freq are not met.

Usage
CallbackSetCheckpoint$on_batch_end()

Method on_exit()

Saves the learner.

Usage
CallbackSetCheckpoint$on_exit()

Method clone()

The objects of this class are cloneable with this method.

Usage
CallbackSetCheckpoint$clone(deep = FALSE)
Arguments
deep

Whether to make a deep clone.

See Also

Other Callback: TorchCallback, as_torch_callback(), as_torch_callbacks(), callback_set(), mlr3torch_callbacks, mlr_callback_set, mlr_callback_set.progress, mlr_callback_set.tb, mlr_callback_set.unfreeze, mlr_context_torch, t_clbk(), torch_callback()

Examples


cb = t_clbk("checkpoint", freq = 1)
task = tsk("iris")

pth = tempfile()
learner = lrn("classif.mlp", epochs = 3, batch_size = 1, callbacks = cb)
learner$param_set$set_values(cb.checkpoint.path = pth)

learner$train(task)

list.files(pth)


History Callback

Description

Saves the training and validation history during training. The history is saved as a data.table where the validation measures are prefixed with "valid." and the training measures are prefixed with "train.".

Super class

mlr3torch::CallbackSet -> CallbackSetHistory

Methods

Public methods

Inherited methods

Method on_begin()

Initializes lists where the train and validation metrics are stored.

Usage
CallbackSetHistory$on_begin()

Method state_dict()

Converts the lists to data.tables.

Usage
CallbackSetHistory$state_dict()

Method load_state_dict()

Sets the fields ⁠$train⁠ and ⁠$valid⁠ to those contained in the state dict.

Usage
CallbackSetHistory$load_state_dict(state_dict)
Arguments
state_dict

(callback_state_history)
The state dict as retrieved via ⁠$state_dict()⁠.


Method on_before_valid()

Add the latest training scores to the history.

Usage
CallbackSetHistory$on_before_valid()

Method on_epoch_end()

Add the latest validation scores to the history.

Usage
CallbackSetHistory$on_epoch_end()

Method clone()

The objects of this class are cloneable with this method.

Usage
CallbackSetHistory$clone(deep = FALSE)
Arguments
deep

Whether to make a deep clone.

Examples



cb = t_clbk("history")
task = tsk("iris")

learner = lrn("classif.mlp", epochs = 3, batch_size = 1,
  callbacks = t_clbk("history"), validate = 0.3)
learner$param_set$set_values(
  measures_train = msrs(c("classif.acc", "classif.ce")),
  measures_valid = msr("classif.ce")
)
learner$train(task)

print(learner$model$callbacks$history)


Learning Rate Scheduling Callback

Description

Changes the learning rate based on the schedule specified by a torch::lr_scheduler.

As of this writing, the following are available:

Super class

mlr3torch::CallbackSet -> CallbackSetLRScheduler

Public fields

scheduler_fn

(lr_scheduler_generator)
The torch function that creates a learning rate scheduler

scheduler

(LRScheduler)
The learning rate scheduler wrapped by this callback

Methods

Public methods

Inherited methods

Method new()

Creates a new instance of this R6 class.

Usage
CallbackSetLRScheduler$new(.scheduler, step_on_epoch, ...)
Arguments
.scheduler

(lr_scheduler_generator)
The torch scheduler generator (e.g. torch::lr_step).

step_on_epoch

(logical(1))
Whether the scheduler steps after every epoch (otherwise every batch).

...

(any)
The scheduler-specific initialization arguments.


Method on_begin()

Creates the scheduler using the optimizer from the context

Usage
CallbackSetLRScheduler$on_begin()

Method clone()

The objects of this class are cloneable with this method.

Usage
CallbackSetLRScheduler$clone(deep = FALSE)
Arguments
deep

Whether to make a deep clone.
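
Examples

A sketch of using a scheduler callback with a learner; the callback key "lr_step" and the parameters step_size and gamma are assumptions based on torch::lr_step():

cb = t_clbk("lr_step", step_size = 2, gamma = 0.5)
cb
learner = lrn("classif.mlp", epochs = 4, batch_size = 16, callbacks = cb)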


OneCycle Learning Rate Scheduling Callback

Description

Changes the learning rate based on the 1cycle learning rate policy.

Wraps torch::lr_one_cycle(), where the default values for epochs and steps_per_epoch are the number of training epochs and the number of batches per epoch.

Super classes

mlr3torch::CallbackSet -> mlr3torch::CallbackSetLRScheduler -> CallbackSetLRSchedulerOneCycle

Methods

Public methods

Inherited methods

Method new()

Creates a new instance of this R6 class.

Usage
CallbackSetLRSchedulerOneCycle$new(...)
Arguments
...

(any)
The scheduler-specific initialization arguments.


Method on_begin()

Creates the scheduler using the optimizer from the context.

Usage
CallbackSetLRSchedulerOneCycle$on_begin()

Method clone()

The objects of this class are cloneable with this method.

Usage
CallbackSetLRSchedulerOneCycle$clone(deep = FALSE)
Arguments
deep

Whether to make a deep clone.
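
Examples

A minimal sketch, assuming the scheduler is available under the callback id "lr_one_cycle"; max_lr is passed on to torch::lr_one_cycle(), while epochs and steps_per_epoch use the defaults described above:

cb = t_clbk("lr_one_cycle", max_lr = 0.1)

learner = lrn("classif.mlp", epochs = 3, batch_size = 16, callbacks = cb)
learner$train(tsk("iris"))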


Reduce On Plateau Learning Rate Scheduler

Description

Reduces the learning rate when the first validation metric stops improving for patience epochs. Wraps torch::lr_reduce_on_plateau().

Super classes

mlr3torch::CallbackSet -> mlr3torch::CallbackSetLRScheduler -> CallbackSetLRSchedulerReduceOnPlateau

Methods

Public methods

Inherited methods

Method new()

Creates a new instance of this R6 class.

Usage
CallbackSetLRSchedulerReduceOnPlateau$new(...)
Arguments
...

(any)
The scheduler-specific initialization arguments.


Method clone()

The objects of this class are cloneable with this method.

Usage
CallbackSetLRSchedulerReduceOnPlateau$clone(deep = FALSE)
Arguments
deep

Whether to make a deep clone.
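
Examples

A minimal sketch, assuming the scheduler is available under the callback id "lr_reduce_on_plateau" (patience and factor are passed on to torch::lr_reduce_on_plateau()); a validation measure is required so that there is a metric to monitor:

cb = t_clbk("lr_reduce_on_plateau", patience = 2, factor = 0.5)

learner = lrn("classif.mlp", epochs = 5, batch_size = 16,
  callbacks = cb, validate = 0.3)
learner$param_set$set_values(measures_valid = msr("classif.ce"))
learner$train(tsk("iris"))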


Progress Callback

Description

Prints a progress bar and the metrics for training and validation.

Super class

mlr3torch::CallbackSet -> CallbackSetProgress

Methods

Public methods

Inherited methods

Method new()

Creates a new instance of this R6 class.

Usage
CallbackSetProgress$new(digits = 2)
Arguments
digits

(integer(1))
The number of digits to print for the measures.


Method on_epoch_begin()

Initializes the progress bar for training.

Usage
CallbackSetProgress$on_epoch_begin()

Method on_batch_end()

Increments the training progress bar.

Usage
CallbackSetProgress$on_batch_end()

Method on_before_valid()

Creates the progress bar for validation.

Usage
CallbackSetProgress$on_before_valid()

Method on_batch_valid_end()

Increments the validation progress bar.

Usage
CallbackSetProgress$on_batch_valid_end()

Method on_epoch_end()

Prints a summary of the training and validation process.

Usage
CallbackSetProgress$on_epoch_end()

Method on_end()

Prints the time at the end of training.

Usage
CallbackSetProgress$on_end()

Method clone()

The objects of this class are cloneable with this method.

Usage
CallbackSetProgress$clone(deep = FALSE)
Arguments
deep

Whether to make a deep clone.

See Also

Other Callback: TorchCallback, as_torch_callback(), as_torch_callbacks(), callback_set(), mlr3torch_callbacks, mlr_callback_set, mlr_callback_set.checkpoint, mlr_callback_set.tb, mlr_callback_set.unfreeze, mlr_context_torch, t_clbk(), torch_callback()

Examples


task = tsk("iris")

learner = lrn("classif.mlp", epochs = 5, batch_size = 1,
  callbacks = t_clbk("progress"), validate = 0.3)
learner$param_set$set_values(
  measures_train = msrs(c("classif.acc", "classif.ce")),
  measures_valid = msr("classif.ce")
)

learner$train(task)


TensorBoard Logging Callback

Description

Logs training loss, training measures, and validation measures as events. To view them, use TensorBoard with tensorflow::tensorboard() (requires tensorflow) or the CLI.

Details

Events are logged at most once per epoch.

Super class

mlr3torch::CallbackSet -> CallbackSetTB

Methods

Public methods

Inherited methods

Method new()

Creates a new instance of this R6 class.

Usage
CallbackSetTB$new(path, log_train_loss)
Arguments
path

(character(1))
The path to a folder where the events are logged. Point TensorBoard to this folder to view them.

log_train_loss

(logical(1))
Whether to log the training loss.


Method on_epoch_end()

Logs the training loss, training measures, and validation measures as TensorBoard events.

Usage
CallbackSetTB$on_epoch_end()

Method clone()

The objects of this class are cloneable with this method.

Usage
CallbackSetTB$clone(deep = FALSE)
Arguments
deep

Whether to make a deep clone.

See Also

Other Callback: TorchCallback, as_torch_callback(), as_torch_callbacks(), callback_set(), mlr3torch_callbacks, mlr_callback_set, mlr_callback_set.checkpoint, mlr_callback_set.progress, mlr_callback_set.unfreeze, mlr_context_torch, t_clbk(), torch_callback()
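
Examples

A minimal sketch, assuming the callback is available under the id "tb" (requires the tfevents package); the path is set via the "cb.tb." prefix, analogously to the checkpoint callback:

cb = t_clbk("tb", log_train_loss = TRUE)
task = tsk("iris")

pth = tempfile()
learner = lrn("classif.mlp", epochs = 3, batch_size = 16,
  callbacks = cb, validate = 0.3)
learner$param_set$set_values(
  cb.tb.path = pth,
  measures_valid = msr("classif.ce")
)
learner$train(task)

list.files(pth)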


Unfreezing Weights Callback

Description

Unfreezes some weights (parameters of the network) after a given number of epochs or batches.

Super class

mlr3torch::CallbackSet -> CallbackSetUnfreeze

Methods

Public methods

Inherited methods

Method new()

Creates a new instance of this R6 class.

Usage
CallbackSetUnfreeze$new(starting_weights, unfreeze)
Arguments
starting_weights

(Select)
A Select denoting the weights that are trainable from the start.

unfreeze

(data.table)
A data.table with a column weights (a list column of Selects) and a column epoch or batch. The selector indicates which parameters to unfreeze, while the epoch or batch column indicates when to do so.


Method on_begin()

Sets the starting weights.

Usage
CallbackSetUnfreeze$on_begin()

Method on_epoch_begin()

Unfreezes weights if the training is at the correct epoch.

Usage
CallbackSetUnfreeze$on_epoch_begin()

Method on_batch_begin()

Unfreezes weights if the training is at the correct batch.

Usage
CallbackSetUnfreeze$on_batch_begin()

Method clone()

The objects of this class are cloneable with this method.

Usage
CallbackSetUnfreeze$clone(deep = FALSE)
Arguments
deep

Whether to make a deep clone.

See Also

Other Callback: TorchCallback, as_torch_callback(), as_torch_callbacks(), callback_set(), mlr3torch_callbacks, mlr_callback_set, mlr_callback_set.checkpoint, mlr_callback_set.progress, mlr_callback_set.tb, mlr_context_torch, t_clbk(), torch_callback()

Examples


task = tsk("iris")
cb = t_clbk("unfreeze")
mlp = lrn("classif.mlp", callbacks = cb,
 cb.unfreeze.starting_weights = select_invert(
   select_name(c("0.weight", "3.weight", "6.weight", "6.bias"))
 ),
 cb.unfreeze.unfreeze = data.table(
   epoch = c(2, 5),
   weights = list(select_name("0.weight"), select_name(c("3.weight", "6.weight")))
 ),
 epochs = 6, batch_size = 150, neurons = c(1, 1, 1)
)

mlp$train(task)


Context for Torch Learner

Description

Context for training a torch learner. This is the (mostly read-only) information that callbacks can access through the argument ctx. For more information on callbacks, see CallbackSet.

Public fields

learner

(Learner)
The torch learner.

task_train

(Task)
The training task.

task_valid

(Task or NULL)
The validation task.

loader_train

(torch::dataloader)
The data loader for training.

loader_valid

(torch::dataloader)
The data loader for validation.

measures_train

(list() of Measures)
Measures used for training.

measures_valid

(list() of Measures)
Measures used for validation.

network

(torch::nn_module)
The torch network.

optimizer

(torch::optimizer)
The optimizer.

loss_fn

(torch::nn_module)
The loss function.

total_epochs

(integer(1))
The total number of epochs the learner is trained for.

last_scores_train

(named list() or NULL)
The scores from the last training batch. Names are the ids of the training measures. If the eval_freq of the LearnerTorch differs from 1, this is NULL in all epochs in which the model is not evaluated.

last_scores_valid

(list())
The scores from the last validation batch. Names are the ids of the validation measures. If the eval_freq of the LearnerTorch differs from 1, this is NULL in all epochs in which the model is not evaluated.

last_loss

(numeric(1))
The loss from the last training batch.

y_hat

(torch_tensor)
The model's prediction for the current batch.

epoch

(integer(1))
The current epoch.

step

(integer(1))
The current iteration.

prediction_encoder

(⁠function()⁠)
The learner's prediction encoder.

batch

(named list() of torch_tensors)
The current batch.

terminate

(logical(1))
If this field is set to TRUE at the end of an epoch, training stops.

device

(torch::torch_device)
The device.

Methods

Public methods


Method new()

Creates a new instance of this R6 class.

Usage
ContextTorch$new(
  learner,
  task_train,
  task_valid = NULL,
  loader_train,
  loader_valid = NULL,
  measures_train = NULL,
  measures_valid = NULL,
  network,
  optimizer,
  loss_fn,
  total_epochs,
  prediction_encoder,
  eval_freq = 1L,
  device
)
Arguments
learner

(Learner)
The torch learner.

task_train

(Task)
The training task.

task_valid

(Task or NULL)
The validation task.

loader_train

(torch::dataloader)
The data loader for training.

loader_valid

(torch::dataloader or NULL)
The data loader for validation.

measures_train

(list() of Measures or NULL)
Measures used for training. Default is NULL.

measures_valid

(list() of Measures or NULL)
Measures used for validation.

network

(torch::nn_module)
The torch network.

optimizer

(torch::optimizer)
The optimizer.

loss_fn

(torch::nn_module)
The loss function.

total_epochs

(integer(1))
The total number of epochs the learner is trained for.

prediction_encoder

(⁠function()⁠)
The learner's prediction encoder. See section Inheriting of LearnerTorch.

eval_freq

(integer(1))
The evaluation frequency.

device

(character(1))
The device.


Method clone()

The objects of this class are cloneable with this method.

Usage
ContextTorch$clone(deep = FALSE)
Arguments
deep

Whether to make a deep clone.

See Also

Other Callback: TorchCallback, as_torch_callback(), as_torch_callbacks(), callback_set(), mlr3torch_callbacks, mlr_callback_set, mlr_callback_set.checkpoint, mlr_callback_set.progress, mlr_callback_set.tb, mlr_callback_set.unfreeze, t_clbk(), torch_callback()
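
Examples

A minimal sketch of a custom callback that reads the context; this assumes that the stage methods of a callback created with torch_callback() access the context via self$ctx:

# print the epoch and the loss of the last training batch after each epoch
printer = torch_callback("printer",
  on_epoch_end = function() {
    cat(sprintf("epoch %s: last training loss = %.4f\n",
      self$ctx$epoch, self$ctx$last_loss))
  }
)

learner = lrn("classif.mlp", epochs = 2, batch_size = 16, callbacks = printer)
learner$train(tsk("iris"))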


FT-Transformer

Description

Feature-Tokenizer Transformer for tabular data that can either work on lazy_tensor inputs or on standard tabular features.

Differences from the paper's implementation: no attention compression and no option to use pre-normalization in the first layer.

If training is unstable, consider a combination of standardizing features (e.g. using po("scale")), using an adaptive optimizer (e.g. Adam), reducing the learning rate, and using a learning rate scheduler (see CallbackSetLRScheduler for options).

Dictionary

This Learner can be instantiated using the sugar function lrn():

lrn("classif.ft_transformer", ...)
lrn("regr.ft_transformer", ...)

Properties

Parameters

Parameters from LearnerTorch and PipeOpTorchFTTransformerBlock, as well as:

Super classes

mlr3::Learner -> mlr3torch::LearnerTorch -> LearnerTorchFTTransformer

Methods

Public methods

Inherited methods

Method new()

Creates a new instance of this R6 class.

Usage
LearnerTorchFTTransformer$new(
  task_type,
  optimizer = NULL,
  loss = NULL,
  callbacks = list()
)
Arguments
task_type

(character(1))
The task type, either "classif" or "regr".

optimizer

(TorchOptimizer)
The optimizer to use for training. Per default, adam is used.

loss

(TorchLoss)
The loss used to train the network. Per default, mse is used for regression and cross_entropy for classification.

callbacks

(list() of TorchCallbacks)
The callbacks. Must have unique ids.


Method clone()

The objects of this class are cloneable with this method.

Usage
LearnerTorchFTTransformer$clone(deep = FALSE)
Arguments
deep

Whether to make a deep clone.

References

Gorishniy Y, Rubachev I, Khrulkov V, Babenko A (2021). “Revisiting Deep Learning for Tabular Data.” arXiv, 2106.11959.

See Also

Other Learner: mlr_learners.mlp, mlr_learners.module, mlr_learners.tab_resnet, mlr_learners.torch_featureless, mlr_learners_torch, mlr_learners_torch_image, mlr_learners_torch_model

Examples


# Define the Learner and set parameter values
learner = lrn("classif.ft_transformer")
learner$param_set$set_values(
  epochs = 1, batch_size = 16, device = "cpu",
  n_blocks = 2, d_token = 32, ffn_d_hidden_multiplier = 4/3
)

# Define a Task
task = tsk("iris")

# Create train and test set
ids = partition(task)

# Train the learner on the training ids
learner$train(task, row_ids = ids$train)

# Make predictions for the test rows
predictions = learner$predict(task, row_ids = ids$test)

# Score the predictions
predictions$score()


Multi Layer Perceptron

Description

Fully connected feed forward network with dropout after each activation function. The features can either be a single lazy_tensor or one or more numeric columns (but not both).

Dictionary

This Learner can be instantiated using the sugar function lrn():

lrn("classif.mlp", ...)
lrn("regr.mlp", ...)

Properties

Parameters

Parameters from LearnerTorch, as well as:

Super classes

mlr3::Learner -> mlr3torch::LearnerTorch -> LearnerTorchMLP

Methods

Public methods

Inherited methods

Method new()

Creates a new instance of this R6 class.

Usage
LearnerTorchMLP$new(
  task_type,
  optimizer = NULL,
  loss = NULL,
  callbacks = list()
)
Arguments
task_type

(character(1))
The task type, either "classif" or "regr".

optimizer

(TorchOptimizer)
The optimizer to use for training. Per default, adam is used.

loss

(TorchLoss)
The loss used to train the network. Per default, mse is used for regression and cross_entropy for classification.

callbacks

(list() of TorchCallbacks)
The callbacks. Must have unique ids.


Method clone()

The objects of this class are cloneable with this method.

Usage
LearnerTorchMLP$clone(deep = FALSE)
Arguments
deep

Whether to make a deep clone.

References

Gorishniy Y, Rubachev I, Khrulkov V, Babenko A (2021). “Revisiting Deep Learning for Tabular Data.” arXiv, 2106.11959.

See Also

Other Learner: mlr_learners.ft_transformer, mlr_learners.module, mlr_learners.tab_resnet, mlr_learners.torch_featureless, mlr_learners_torch, mlr_learners_torch_image, mlr_learners_torch_model

Examples


# Define the Learner and set parameter values
learner = lrn("classif.mlp")
learner$param_set$set_values(
  epochs = 1, batch_size = 16, device = "cpu",
  neurons = 10
)

# Define a Task
task = tsk("iris")

# Create train and test set
ids = partition(task)

# Train the learner on the training ids
learner$train(task, row_ids = ids$train)

# Make predictions for the test rows
predictions = learner$predict(task, row_ids = ids$test)

# Score the predictions
predictions$score()


Learner Torch Module

Description

Create a torch learner from a torch module.

Dictionary

This Learner can be instantiated using the sugar function lrn():

lrn("classif.module", ...)
lrn("regr.module", ...)

Properties

Super classes

mlr3::Learner -> mlr3torch::LearnerTorch -> LearnerTorchModule

Methods

Public methods

Inherited methods

Method new()

Creates a new instance of this R6 class.

Usage
LearnerTorchModule$new(
  module_generator = NULL,
  param_set = NULL,
  ingress_tokens = NULL,
  task_type,
  properties = NULL,
  optimizer = NULL,
  loss = NULL,
  callbacks = list(),
  packages = character(0),
  feature_types = NULL,
  predict_types = NULL
)
Arguments
module_generator

(function or nn_module_generator)
An nn_module_generator or a function returning an nn_module. Both must take the task for which to construct the network as an argument. Other arguments to its initialize method can be provided as parameters.

param_set

(NULL or ParamSet)
If provided, contains the parameters for the module_generator. If NULL, parameters will be inferred from the module_generator.

ingress_tokens

(list of TorchIngressToken())
A list of ingress tokens that defines how the dataset is constructed. The names must correspond to the arguments of the network's forward method. For numeric, categorical, and lazy tensor features, you can use ingress_num(), ingress_categ(), and ingress_ltnsr() to create them.

task_type

(character(1))
The task type, either "classif" or "regr".

properties

(NULL or character())
The properties of the learner. Defaults to all available properties for the given task type.

optimizer

(TorchOptimizer)
The optimizer to use for training. Per default, adam is used.

loss

(TorchLoss)
The loss used to train the network. Per default, mse is used for regression and cross_entropy for classification.

callbacks

(list() of TorchCallbacks)
The callbacks. Must have unique ids.

packages

(character())
The R packages this object depends on.

feature_types

(NULL or character())
The feature types. Defaults to all available feature types.

predict_types

(character())
The predict types. See mlr_reflections$learner_predict_types for available values.


Method clone()

The objects of this class are cloneable with this method.

Usage
LearnerTorchModule$clone(deep = FALSE)
Arguments
deep

Whether to make a deep clone.

See Also

Other Learner: mlr_learners.ft_transformer, mlr_learners.mlp, mlr_learners.tab_resnet, mlr_learners.torch_featureless, mlr_learners_torch, mlr_learners_torch_image, mlr_learners_torch_model


Examples


nn_one_layer = nn_module("nn_one_layer",
  initialize = function(task, size_hidden) {
    self$first = nn_linear(task$n_features, size_hidden)
    self$second = nn_linear(size_hidden, output_dim_for(task))
  },
  # argument x corresponds to the ingress token x
  forward = function(x) {
    x = self$first(x)
    x = nnf_relu(x)
    self$second(x)
  }
)
learner = lrn("classif.module",
  module_generator = nn_one_layer,
  ingress_tokens = list(x = ingress_num()),
  epochs = 10,
  size_hidden = 20,
  batch_size = 16
)
task = tsk("iris")
learner$train(task)
learner$network


Tabular ResNet

Description

Tabular ResNet as described in Gorishniy et al. (2021).

Dictionary

This Learner can be instantiated using the sugar function lrn():

lrn("classif.tab_resnet", ...)
lrn("regr.tab_resnet", ...)

Properties

Parameters

Parameters from LearnerTorch, as well as:

Super classes

mlr3::Learner -> mlr3torch::LearnerTorch -> LearnerTorchTabResNet

Methods

Public methods

Inherited methods

Method new()

Creates a new instance of this R6 class.

Usage
LearnerTorchTabResNet$new(
  task_type,
  optimizer = NULL,
  loss = NULL,
  callbacks = list()
)
Arguments
task_type

(character(1))
The task type, either "classif" or "regr".

optimizer

(TorchOptimizer)
The optimizer to use for training. Per default, adam is used.

loss

(TorchLoss)
The loss used to train the network. Per default, mse is used for regression and cross_entropy for classification.

callbacks

(list() of TorchCallbacks)
The callbacks. Must have unique ids.


Method clone()

The objects of this class are cloneable with this method.

Usage
LearnerTorchTabResNet$clone(deep = FALSE)
Arguments
deep

Whether to make a deep clone.

References

Gorishniy Y, Rubachev I, Khrulkov V, Babenko A (2021). “Revisiting Deep Learning for Tabular Data.” arXiv, 2106.11959.

See Also

Other Learner: mlr_learners.ft_transformer, mlr_learners.mlp, mlr_learners.module, mlr_learners.torch_featureless, mlr_learners_torch, mlr_learners_torch_image, mlr_learners_torch_model

Examples


# Define the Learner and set parameter values
learner = lrn("classif.tab_resnet")
learner$param_set$set_values(
  epochs = 1, batch_size = 16, device = "cpu",
  n_blocks = 2, d_block = 10, d_hidden = 20, dropout1 = 0.3, dropout2 = 0.3
)

# Define a Task
task = tsk("iris")

# Create train and test set
ids = partition(task)

# Train the learner on the training ids
learner$train(task, row_ids = ids$train)

# Make predictions for the test rows
predictions = learner$predict(task, row_ids = ids$test)

# Score the predictions
predictions$score()


Featureless Torch Learner

Description

Featureless torch learner. Output is a constant weight that is learned during training. For classification, this should (asymptotically) result in a majority class prediction when using the standard cross-entropy loss. For regression, this should result in the median for L1 loss and in the mean for L2 loss.

Dictionary

This Learner can be instantiated using the sugar function lrn():

lrn("classif.torch_featureless", ...)
lrn("regr.torch_featureless", ...)

Properties

Parameters

Only those from LearnerTorch.

Super classes

mlr3::Learner -> mlr3torch::LearnerTorch -> LearnerTorchFeatureless

Methods

Public methods

Inherited methods

Method new()

Creates a new instance of this R6 class.

Usage
LearnerTorchFeatureless$new(
  task_type,
  optimizer = NULL,
  loss = NULL,
  callbacks = list()
)
Arguments
task_type

(character(1))
The task type, either "classif" or "regr".

optimizer

(TorchOptimizer)
The optimizer to use for training. Per default, adam is used.

loss

(TorchLoss)
The loss used to train the network. Per default, mse is used for regression and cross_entropy for classification.

callbacks

(list() of TorchCallbacks)
The callbacks. Must have unique ids.


Method clone()

The objects of this class are cloneable with this method.

Usage
LearnerTorchFeatureless$clone(deep = FALSE)
Arguments
deep

Whether to make a deep clone.

See Also

Other Learner: mlr_learners.ft_transformer, mlr_learners.mlp, mlr_learners.module, mlr_learners.tab_resnet, mlr_learners_torch, mlr_learners_torch_image, mlr_learners_torch_model

Examples


# Define the Learner and set parameter values
learner = lrn("classif.torch_featureless")
learner$param_set$set_values(
  epochs = 1, batch_size = 16, device = "cpu"
)

# Define a Task
task = tsk("iris")

# Create train and test set
ids = partition(task)

# Train the learner on the training ids
learner$train(task, row_ids = ids$train)

# Make predictions for the test rows
predictions = learner$predict(task, row_ids = ids$test)

# Score the predictions
predictions$score()


AlexNet Image Classifier

Description

Classic image classification networks from torchvision.

Parameters

Parameters from LearnerTorchImage, as well as:

Properties

Super classes

mlr3::Learner -> mlr3torch::LearnerTorch -> mlr3torch::LearnerTorchImage -> LearnerTorchVision

Methods

Public methods

Inherited methods

Method new()

Creates a new instance of this R6 class.

Usage
LearnerTorchVision$new(
  name,
  module_generator,
  label,
  optimizer = NULL,
  loss = NULL,
  callbacks = list(),
  jittable = FALSE
)
Arguments
name

(character(1))
The name of the network.

module_generator

(⁠function(pretrained, num_classes)⁠)
Function that generates the network.

label

(character(1))
The label of the network.

optimizer

(TorchOptimizer)
The optimizer to use for training. Per default, adam is used.

loss

(TorchLoss)
The loss used to train the network. Per default, mse is used for regression and cross_entropy for classification.

callbacks

(list() of TorchCallbacks)
The callbacks. Must have unique ids.

jittable

(logical(1))
Whether to use jitting.


Method clone()

The objects of this class are cloneable with this method.

Usage
LearnerTorchVision$clone(deep = FALSE)
Arguments
deep

Whether to make a deep clone.

References

Krizhevsky, Alex, Sutskever, Ilya, Hinton, E. G (2017). “Imagenet classification with deep convolutional neural networks.” Communications of the ACM, 60(6), 84–90.

Sandler, Mark, Howard, Andrew, Zhu, Menglong, Zhmoginov, Andrey, Chen, Liang-Chieh (2018). “Mobilenetv2: Inverted residuals and linear bottlenecks.” In Proceedings of the IEEE conference on computer vision and pattern recognition, 4510–4520.

He, Kaiming, Zhang, Xiangyu, Ren, Shaoqing, Sun, Jian (2016). “Deep residual learning for image recognition.” In Proceedings of the IEEE conference on computer vision and pattern recognition, 770–778.

Simonyan, Karen, Zisserman, Andrew (2014). “Very deep convolutional networks for large-scale image recognition.” arXiv preprint arXiv:1409.1556.


Base Class for Torch Learners

Description

This base class provides the basic functionality for training and prediction of a neural network. All torch learners should inherit from this class.

Validation

To specify the validation data, you can set the ⁠$validate⁠ field of the Learner, which can be set to:

This validation data can also be used for early stopping, see the description of the Learner's parameters.
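
For example, a minimal sketch that holds out 30% of the training data and scores it with the classification error (the concrete values are illustrative):

learner = lrn("classif.mlp", epochs = 5, batch_size = 32, validate = 0.3)
learner$param_set$set_values(measures_valid = msr("classif.ce"))
learner$train(tsk("iris"))
learner$internal_valid_scores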

Saving a Learner

In order to save a LearnerTorch for later use, it is necessary to call the ⁠$marshal()⁠ method on the Learner before writing it to disk, as the object will otherwise not be saved correctly. After loading a marshaled LearnerTorch into R again, you then need to call ⁠$unmarshal()⁠ to transform it back into a usable state.
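
A minimal sketch of this workflow:

learner = lrn("classif.mlp", epochs = 1, batch_size = 16)
learner$train(tsk("iris"))

path = tempfile(fileext = ".rds")
learner$marshal()            # make the model serializable
saveRDS(learner, path)

learner2 = readRDS(path)
learner2$unmarshal()         # restore the torch objects
learner2$predict(tsk("iris"))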

Early Stopping and Internal Tuning

To prevent overfitting, the LearnerTorch class supports early stopping via the patience and min_delta parameters, see the Learner's parameters. When tuning a LearnerTorch, it is also possible to combine explicit tuning via mlr3tuning with the internal tuning of the epochs via early stopping. To do so, include ⁠epochs = to_tune(upper = <upper>, internal = TRUE)⁠ in the search space, where ⁠<upper>⁠ is the maximum allowed number of epochs, and configure the early stopping, as sketched below.
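
A minimal sketch of such a configuration (the concrete values are illustrative; the learner would then be tuned with mlr3tuning as usual):

learner = lrn("classif.mlp",
  epochs = to_tune(upper = 100L, internal = TRUE),
  batch_size = 32,
  validate = 0.3,
  patience = 5
)
learner$param_set$set_values(measures_valid = msr("classif.ce"))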

Network Head and Target Encoding

Torch learners are expected to have the following output:

Furthermore, the target encoding is expected to be as follows:

Model

The Model is a list of class "learner_torch_model" with the following elements:

Parameters

General:

The parameters of the optimizer, loss and callbacks, prefixed with "opt.", "loss." and "cb.<callback id>." respectively, as well as:

Evaluation:

Early Stopping:

Dataloader:

Also see torch::dataloader for more information.

Inheriting

There are no separate classes for classification and regression to inherit from. Instead, the task_type must be specified as a construction argument. Currently, only classification and regression are supported.

When inheriting from this class, one should overload the following methods:

It is also possible to overwrite the private .dataloader() method. This must respect the dataloader parameters from the ParamSet.

To change the predict types, it is possible to overwrite the method below:

While it is possible to add parameters by specifying the param_set construction argument, it is currently not possible to remove existing parameters, i.e. those listed in section Parameters. None of the parameters provided in param_set can have an id that starts with "loss.", "opt.", or "cb.", as these prefixes are reserved for the dynamically constructed parameters of the optimizer, the loss function, and the callbacks.

To perform additional input checks on the task, the private methods .check_train_task(task, param_vals) and .check_predict_task(task, param_vals) can be overwritten. These should return TRUE if the input task is valid and a string with an error message otherwise.

For learners that have other construction arguments that should change the hash of a learner, it is required to implement the private ⁠$.additional_phash_input()⁠.

Super class

mlr3::Learner -> LearnerTorch

Active bindings

validate

How to construct the internal validation data. This parameter can be either NULL, a ratio in (0, 1), "test", or "predefined".

loss

(TorchLoss)
The torch loss.

optimizer

(TorchOptimizer)
The torch optimizer.

callbacks

(list() of TorchCallbacks)
List of torch callbacks. The ids will be set as the names.

internal_valid_scores

Retrieves the internal validation scores as a named list(). Specify the ⁠$validate⁠ field and the measures_valid parameter to configure this. Returns NULL if the learner has not been trained yet.

internal_tuned_values

When early stopping is active, this returns a named list with the early-stopped epochs; otherwise an empty list is returned. Returns NULL if the learner has not been trained yet.

marshaled

(logical(1))
Whether the learner is marshaled.

network

(nn_module())
Shortcut for learner$model$network.

param_set

(ParamSet)
The parameter set.

hash

(character(1))
Hash (unique identifier) for this object.

phash

(character(1))
Hash (unique identifier) for this partial object, excluding some components which are varied systematically during tuning (parameter values).

Methods

Public methods

Inherited methods

Method new()

Creates a new instance of this R6 class.

Usage
LearnerTorch$new(
  id,
  task_type,
  param_set,
  properties = character(),
  man,
  label,
  feature_types,
  optimizer = NULL,
  loss = NULL,
  packages = character(),
  predict_types = NULL,
  callbacks = list(),
  jittable = FALSE
)
Arguments
id

(character(1))
The id of the new object.

task_type

(character(1))
The task type.

param_set

(ParamSet or alist())
Either a parameter set, or an alist() containing different values of self, e.g. alist(private$.param_set1, private$.param_set2), from which a ParamSet collection should be created.

properties

(character())
The properties of the object. See mlr_reflections$learner_properties for available values.

man

(character(1))
String in the format ⁠[pkg]::[topic]⁠ pointing to a manual page for this object. The referenced help page can be opened via the method ⁠$help()⁠.

label

(character(1))
Label for the new instance.

feature_types

(character())
The feature types. See mlr_reflections$task_feature_types for available values. Additionally, "lazy_tensor" is supported.

optimizer

(NULL or TorchOptimizer)
The optimizer to use for training. Defaults to adam.

loss

(NULL or TorchLoss)
The loss to use for training. Defaults to MSE for regression and cross entropy for classification.

packages

(character())
The R packages this object depends on.

predict_types

(character())
The predict types. See mlr_reflections$learner_predict_types for available values. For regression, the default is "response". For classification, this defaults to "response" and "prob". To deviate from the defaults, it is necessary to overwrite the private ⁠$.encode_prediction()⁠ method, see section Inheriting.

callbacks

(list() of TorchCallbacks)
The callbacks to use for training. Defaults to an empty list(), i.e. no callbacks.

jittable

(logical(1))
Whether the model can be jit-traced. Default is FALSE.


Method format()

Helper for print outputs.

Usage
LearnerTorch$format(...)
Arguments
...

(ignored).


Method print()

Prints the object.

Usage
LearnerTorch$print(...)
Arguments
...

(any)
Currently unused.


Method marshal()

Marshal the learner.

Usage
LearnerTorch$marshal(...)
Arguments
...

(any)
Additional parameters.

Returns

self


Method unmarshal()

Unmarshal the learner.

Usage
LearnerTorch$unmarshal(...)
Arguments
...

(any)
Additional parameters.

Returns

self


Method dataset()

Create the dataset for a task.

Usage
LearnerTorch$dataset(task)
Arguments
task

(Task)
The task.

Returns

dataset


Method clone()

The objects of this class are cloneable with this method.

Usage
LearnerTorch$clone(deep = FALSE)
Arguments
deep

Whether to make a deep clone.

See Also

Other Learner: mlr_learners.ft_transformer, mlr_learners.mlp, mlr_learners.module, mlr_learners.tab_resnet, mlr_learners.torch_featureless, mlr_learners_torch_image, mlr_learners_torch_model


Image Learner

Description

Base Class for Image Learners. The features are assumed to be a single lazy_tensor column in RGB format.

Parameters

Parameters include those inherited from LearnerTorch and the param_set construction argument.

Super classes

mlr3::Learner -> mlr3torch::LearnerTorch -> LearnerTorchImage

Methods

Public methods

Inherited methods

Method new()

Creates a new instance of this R6 class.

Usage
LearnerTorchImage$new(
  id,
  task_type,
  param_set = ps(),
  label,
  optimizer = NULL,
  loss = NULL,
  callbacks = list(),
  packages,
  man,
  properties = NULL,
  predict_types = NULL,
  jittable = FALSE
)
Arguments
id

(character(1))
The id of the new object.

task_type

(character(1))
The task type.

param_set

(ParamSet)
The parameter set.

label

(character(1))
Label for the new instance.

optimizer

(TorchOptimizer)
The torch optimizer.

loss

(TorchLoss)
The loss to use for training.

callbacks

(list() of TorchCallbacks)
The callbacks used during training. Must have unique ids. They are executed in the order in which they are provided.

packages

(character())
The R packages this object depends on.

man

(character(1))
String in the format ⁠[pkg]::[topic]⁠ pointing to a manual page for this object. The referenced help page can be opened via the method ⁠$help()⁠.

properties

(character())
The properties of the object. See mlr_reflections$learner_properties for available values.

predict_types

(character())
The predict types. See mlr_reflections$learner_predict_types for available values.

jittable

(logical(1))
Whether the model can be jit-traced.


Method clone()

The objects of this class are cloneable with this method.

Usage
LearnerTorchImage$clone(deep = FALSE)
Arguments
deep

Whether to make a deep clone.

See Also

Other Learner: mlr_learners.ft_transformer, mlr_learners.mlp, mlr_learners.module, mlr_learners.tab_resnet, mlr_learners.torch_featureless, mlr_learners_torch, mlr_learners_torch_model


Learner Torch Model

Description

Create a torch learner from an instantiated nn_module(). For classification, the output of the network must be the scores (before the softmax).

Parameters

See LearnerTorch

Super classes

mlr3::Learner -> mlr3torch::LearnerTorch -> LearnerTorchModel

Active bindings

ingress_tokens

(named list() with TorchIngressToken or NULL)
The ingress tokens. Must be non-NULL when calling ⁠$train()⁠.

Methods

Public methods

Inherited methods

Method new()

Creates a new instance of this R6 class.

Usage
LearnerTorchModel$new(
  network = NULL,
  ingress_tokens = NULL,
  task_type,
  properties = NULL,
  optimizer = NULL,
  loss = NULL,
  callbacks = list(),
  packages = character(0),
  feature_types = NULL
)
Arguments
network

(nn_module)
An instantiated nn_module. Is not cloned during construction. For classification, outputs must be the scores (before the softmax).

ingress_tokens

(list of TorchIngressToken())
A list of ingress tokens that defines how the dataloader is constructed.

task_type

(character(1))
The task type.

properties

(NULL or character())
The properties of the learner. Defaults to all available properties for the given task type.

optimizer

(TorchOptimizer)
The torch optimizer.

loss

(TorchLoss)
The loss to use for training.

callbacks

(list() of TorchCallbacks)
The callbacks used during training. Must have unique ids. They are executed in the order in which they are provided.

packages

(character())
The R packages this object depends on.

feature_types

(NULL or character())
The feature types. Defaults to all available feature types.


Method clone()

The objects of this class are cloneable with this method.

Usage
LearnerTorchModel$clone(deep = FALSE)
Arguments
deep

Whether to make a deep clone.

See Also

Other Learner: mlr_learners.ft_transformer, mlr_learners.mlp, mlr_learners.module, mlr_learners.tab_resnet, mlr_learners.torch_featureless, mlr_learners_torch, mlr_learners_torch_image

Other Graph Network: ModelDescriptor(), TorchIngressToken(), mlr_pipeops_module, mlr_pipeops_torch, mlr_pipeops_torch_ingress, mlr_pipeops_torch_ingress_categ, mlr_pipeops_torch_ingress_ltnsr, mlr_pipeops_torch_ingress_num, model_descriptor_to_learner(), model_descriptor_to_module(), model_descriptor_union(), nn_graph()

Examples


# We show the learner using a classification task

# The iris task has 4 features and 3 classes
network = nn_linear(4, 3)
task = tsk("iris")

# This defines the dataloader.
# It loads all 4 features, which are also numeric.
# The shape is (NA, 4) because the batch dimension is generally NA
ingress_tokens = list(
  input = TorchIngressToken(task$feature_names, batchgetter_num, c(NA, 4))
)

# Creating the learner and setting required parameters
learner = lrn("classif.torch_model",
  network = network,
  ingress_tokens = ingress_tokens,
  batch_size = 16,
  epochs = 1,
  device = "cpu"
)

# A simple train-predict
ids = partition(task)
learner$train(task, ids$train)
learner$predict(task, ids$test)


Center Crop Augmentation

Description

Calls torchvision::transform_center_crop, see there for more information on the parameters. The preprocessing is applied to each element of a batch individually.

Format

R6Class inheriting from PipeOpTaskPreprocTorch.

Construction

po("augment_center_crop")

Parameters

Id Type Default Levels
size untyped -
stages character - train, predict, both
affect_columns untyped selector_all()
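
Examples

A minimal construction sketch (the size value is illustrative):

# Construct the PipeOp with a target size of 64 x 64 pixels
pipeop = po("augment_center_crop", size = c(64, 64))
pipeop
# The available parameters
pipeop$param_set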

Color Jitter Augmentation

Description

Calls torchvision::transform_color_jitter, see there for more information on the parameters. The preprocessing is applied to each element of a batch individually.

Format

R6Class inheriting from PipeOpTaskPreprocTorch.

Construction

po("augment_color_jitter")

Parameters

Id Type Default Levels Range
brightness numeric 0 [0, \infty)
contrast numeric 0 [0, \infty)
saturation numeric 0 [0, \infty)
hue numeric 0 [0, \infty)
stages character - train, predict, both -
affect_columns untyped selector_all() -

Crop Augmentation

Description

Calls torchvision::transform_crop, see there for more information on the parameters. The preprocessing is applied to each element of a batch individually.

Format

R6Class inheriting from PipeOpTaskPreprocTorch.

Construction

po("augment_crop")

Parameters

Id Type Default Levels Range
top integer - (-\infty, \infty)
left integer - (-\infty, \infty)
height integer - (-\infty, \infty)
width integer - (-\infty, \infty)
stages character - train, predict, both -
affect_columns untyped selector_all() -

Horizontal Flip Augmentation

Description

Calls torchvision::transform_hflip, see there for more information on the parameters. The preprocessing is applied to each element of a batch individually.

Format

R6Class inheriting from PipeOpTaskPreprocTorch.

Construction

po("augment_hflip")

Parameters

Id Type Default Levels
stages character - train, predict, both
affect_columns untyped selector_all()

Random Affine Augmentation

Description

Calls torchvision::transform_random_affine, see there for more information on the parameters. The preprocessing is applied to each element of a batch individually.

Format

R6Class inheriting from PipeOpTaskPreprocTorch.

Construction

po("augment_random_affine")

Parameters

Id Type Default Levels Range
degrees untyped - -
translate untyped NULL -
scale untyped NULL -
resample integer 0 (-\infty, \infty)
fillcolor untyped 0 -
stages character - train, predict, both -
affect_columns untyped selector_all() -

Random Choice Augmentation

Description

Calls torchvision::transform_random_choice, see there for more information on the parameters. The preprocessing is applied to each element of a batch individually.

Format

R6Class inheriting from PipeOpTaskPreprocTorch.

Construction

po("augment_random_choice")

Parameters

Id Type Default Levels
transforms untyped -
stages character - train, predict, both
affect_columns untyped selector_all()

Random Crop Augmentation

Description

Calls torchvision::transform_random_crop, see there for more information on the parameters. The preprocessing is applied to each element of a batch individually.

Format

R6Class inheriting from PipeOpTaskPreprocTorch.

Construction

po("augment_random_crop")

Parameters

Id Type Default Levels
size untyped -
padding untyped NULL
pad_if_needed logical FALSE TRUE, FALSE
fill untyped 0L
padding_mode character constant constant, edge, reflect, symmetric
stages character - train, predict, both
affect_columns untyped selector_all()

Random Horizontal Flip Augmentation

Description

Calls torchvision::transform_random_horizontal_flip, see there for more information on the parameters. The preprocessing is applied to each element of a batch individually.

Format

R6Class inheriting from PipeOpTaskPreprocTorch.

Construction

po("augment_random_horizontal_flip")

Parameters

Id Type Default Levels Range
p numeric 0.5 [0, 1]
stages character - train, predict, both -
affect_columns untyped selector_all() -

Random Order Augmentation

Description

Calls torchvision::transform_random_order, see there for more information on the parameters. The preprocessing is applied to each element of a batch individually.

Format

R6Class inheriting from PipeOpTaskPreprocTorch.

Construction

po("augment_random_order")

Parameters

Id Type Default Levels
transforms untyped -
stages character - train, predict, both
affect_columns untyped selector_all()

Random Resized Crop Augmentation

Description

Calls torchvision::transform_random_resized_crop, see there for more information on the parameters. The preprocessing is applied to each element of a batch individually.

Format

R6Class inheriting from PipeOpTaskPreprocTorch.

Construction

po("augment_random_resized_crop")

Parameters

Id Type Default Levels Range
size untyped - -
scale untyped c(0.08, 1) -
ratio untyped c(3/4, 4/3) -
interpolation integer 2 [0, 3]
stages character - train, predict, both -
affect_columns untyped selector_all() -

Random Vertical Flip Augmentation

Description

Calls torchvision::transform_random_vertical_flip, see there for more information on the parameters. The preprocessing is applied to each element of a batch individually.

Format

R6Class inheriting from PipeOpTaskPreprocTorch.

Construction

po("augment_random_vertical_flip")

Parameters

Id Type Default Levels Range
p numeric 0.5 [0, 1]
stages character - train, predict, both -
affect_columns untyped selector_all() -

Resized Crop Augmentation

Description

Calls torchvision::transform_resized_crop, see there for more information on the parameters. The preprocessing is applied to each element of a batch individually.

Format

R6Class inheriting from PipeOpTaskPreprocTorch.

Construction

po("augment_resized_crop")

Parameters

Id Type Default Levels Range
top integer - (-\infty, \infty)
left integer - (-\infty, \infty)
height integer - (-\infty, \infty)
width integer - (-\infty, \infty)
size untyped - -
interpolation integer 2 [0, 3]
stages character - train, predict, both -
affect_columns untyped selector_all() -

Rotate Augmentation

Description

Calls torchvision::transform_rotate, see there for more information on the parameters. The preprocessing is applied to each element of a batch individually.

Format

R6Class inheriting from PipeOpTaskPreprocTorch.

Construction

po("augment_rotate")

Parameters

Id Type Default Levels Range
angle untyped - -
resample integer 0 (-\infty, \infty)
expand logical FALSE TRUE, FALSE -
center untyped NULL -
fill untyped NULL -
stages character - train, predict, both -
affect_columns untyped selector_all() -

Vertical Flip Augmentation

Description

Calls torchvision::transform_vflip, see there for more information on the parameters. The preprocessing is applied to each element of a batch individually.

Format

R6Class inheriting from PipeOpTaskPreprocTorch.

Construction

po("augment_vflip")

Parameters

Id Type Default Levels
stages character - train, predict, both
affect_columns untyped selector_all()

Class for Torch Module Wrappers

Description

PipeOpModule wraps an nn_module or function that is called during the train phase of this mlr3pipelines::PipeOp. By doing so, it allows assembling PipeOpModules in a computational mlr3pipelines::Graph that represents either a neural network or a preprocessing graph of a lazy_tensor.

In most cases it is easier to create such a network by creating a structurally related graph consisting of nodes of class PipeOpTorchIngress and PipeOpTorch. This graph will then generate the graph consisting of PipeOpModules as part of the ModelDescriptor.

Input and Output Channels

The number and names of the input and output channels can be set during construction. The channels input and output "torch_tensor" objects during training and NULL during prediction, as the prediction phase currently serves no meaningful purpose.

State

The state is the value calculated by the public method shapes_out().

Parameters

No parameters.

Internals

During training, the wrapped nn_module / function is called with the provided inputs in the order in which the channels are defined. Arguments are not matched by name.

Super class

mlr3pipelines::PipeOp -> PipeOpModule

Public fields

module

(nn_module)
The torch module that is called during the training phase.

Methods

Public methods

Inherited methods

Method new()

Creates a new instance of this R6 class.

Usage
PipeOpModule$new(
  id = "module",
  module = nn_identity(),
  inname = "input",
  outname = "output",
  param_vals = list(),
  packages = character(0)
)
Arguments
id

(character(1))
The id of the new object.

module

(nn_module or ⁠function()⁠)
The torch module or function that is being wrapped.

inname

(character())
The names of the input channels.

outname

(character())
The names of the output channels. If this parameter has length 1, the parameter module must return a tensor. Otherwise it must return a list() of tensors of corresponding length.

param_vals

(named list())
Parameter values to be set after construction.

packages

(character())
The R packages this object depends on.


Method clone()

The objects of this class are cloneable with this method.

Usage
PipeOpModule$clone(deep = FALSE)
Arguments
deep

Whether to make a deep clone.

See Also

Other Graph Network: ModelDescriptor(), TorchIngressToken(), mlr_learners_torch_model, mlr_pipeops_torch, mlr_pipeops_torch_ingress, mlr_pipeops_torch_ingress_categ, mlr_pipeops_torch_ingress_ltnsr, mlr_pipeops_torch_ingress_num, model_descriptor_to_learner(), model_descriptor_to_module(), model_descriptor_union(), nn_graph()

Other PipeOp: mlr_pipeops_torch_callbacks, mlr_pipeops_torch_optimizer

Examples


## creating a PipeOpModule manually

# one input and output channel
po_module = po("module",
  id = "linear",
  module = torch::nn_linear(10, 20),
  inname = "input",
  outname = "output"
)
x = torch::torch_randn(16, 10)
# This calls the forward function of the wrapped module.
y = po_module$train(list(input = x))
str(y)

# multiple input and output channels
nn_custom = torch::nn_module("nn_custom",
  initialize = function(in_features, out_features) {
    self$lin1 = torch::nn_linear(in_features, out_features)
    self$lin2 = torch::nn_linear(in_features, out_features)
  },
  forward = function(x, z) {
    list(out1 = self$lin1(x), out2 = torch::nnf_relu(self$lin2(z)))
  }
)

module = nn_custom(3, 2)
po_module = po("module",
  id = "custom",
  module = module,
  inname = c("x", "z"),
  outname = c("out1", "out2")
)
x = torch::torch_randn(1, 3)
z = torch::torch_randn(1, 3)
out = po_module$train(list(x = x, z = z))
str(out)

# How such a PipeOpModule is usually generated
graph = po("torch_ingress_num") %>>% po("nn_linear", out_features = 10L)
result = graph$train(tsk("iris"))
# The PipeOpTorchLinear generates a PipeOpModule and adds it to a new (module) graph
result[[1]]$graph
linear_module = result[[1L]]$graph$pipeops$nn_linear
linear_module
formalArgs(linear_module$module)
linear_module$input$name

# Constructing a PipeOpModule using a simple function
po_add1 = po("module",
  id = "add_one",
  module = function(x) x + 1
)
input = list(torch_tensor(1))
po_add1$train(input)$output


1D Adaptive Average Pooling

Description

Applies a 1D adaptive average pooling over an input signal composed of several input planes.

nn_module

Calls nn_adaptive_avg_pool1d() during training.

Parameters

Input and Output Channels

One input channel called "input" and one output channel called "output". For an explanation see PipeOpTorch.

State

The state is the value calculated by the public method ⁠$shapes_out()⁠.

Super classes

mlr3pipelines::PipeOp -> mlr3torch::PipeOpTorch -> mlr3torch::PipeOpTorchAdaptiveAvgPool -> PipeOpTorchAdaptiveAvgPool1D

Methods

Public methods

Inherited methods

Method new()

Creates a new instance of this R6 class.

Usage
PipeOpTorchAdaptiveAvgPool1D$new(
  id = "nn_adaptive_avg_pool1d",
  param_vals = list()
)
Arguments
id

(character(1))
Identifier of the resulting object.

param_vals

(list())
List of hyperparameter settings, overwriting the hyperparameter settings that would otherwise be set during construction.


Method clone()

The objects of this class are cloneable with this method.

Usage
PipeOpTorchAdaptiveAvgPool1D$clone(deep = FALSE)
Arguments
deep

Whether to make a deep clone.

See Also

Other PipeOps: mlr_pipeops_nn_adaptive_avg_pool2d, mlr_pipeops_nn_adaptive_avg_pool3d, mlr_pipeops_nn_avg_pool1d, mlr_pipeops_nn_avg_pool2d, mlr_pipeops_nn_avg_pool3d, mlr_pipeops_nn_batch_norm1d, mlr_pipeops_nn_batch_norm2d, mlr_pipeops_nn_batch_norm3d, mlr_pipeops_nn_block, mlr_pipeops_nn_celu, mlr_pipeops_nn_conv1d, mlr_pipeops_nn_conv2d, mlr_pipeops_nn_conv3d, mlr_pipeops_nn_conv_transpose1d, mlr_pipeops_nn_conv_transpose2d, mlr_pipeops_nn_conv_transpose3d, mlr_pipeops_nn_dropout, mlr_pipeops_nn_elu, mlr_pipeops_nn_flatten, mlr_pipeops_nn_ft_cls, mlr_pipeops_nn_ft_transformer_block, mlr_pipeops_nn_geglu, mlr_pipeops_nn_gelu, mlr_pipeops_nn_glu, mlr_pipeops_nn_hardshrink, mlr_pipeops_nn_hardsigmoid, mlr_pipeops_nn_hardtanh, mlr_pipeops_nn_head, mlr_pipeops_nn_identity, mlr_pipeops_nn_layer_norm, mlr_pipeops_nn_leaky_relu, mlr_pipeops_nn_linear, mlr_pipeops_nn_log_sigmoid, mlr_pipeops_nn_max_pool1d, mlr_pipeops_nn_max_pool2d, mlr_pipeops_nn_max_pool3d, mlr_pipeops_nn_merge, mlr_pipeops_nn_merge_cat, mlr_pipeops_nn_merge_prod, mlr_pipeops_nn_merge_sum, mlr_pipeops_nn_prelu, mlr_pipeops_nn_reglu, mlr_pipeops_nn_relu, mlr_pipeops_nn_relu6, mlr_pipeops_nn_reshape, mlr_pipeops_nn_rrelu, mlr_pipeops_nn_selu, mlr_pipeops_nn_sigmoid, mlr_pipeops_nn_softmax, mlr_pipeops_nn_softplus, mlr_pipeops_nn_softshrink, mlr_pipeops_nn_softsign, mlr_pipeops_nn_squeeze, mlr_pipeops_nn_tanh, mlr_pipeops_nn_tanhshrink, mlr_pipeops_nn_threshold, mlr_pipeops_nn_tokenizer_categ, mlr_pipeops_nn_tokenizer_num, mlr_pipeops_nn_unsqueeze, mlr_pipeops_torch_ingress, mlr_pipeops_torch_ingress_categ, mlr_pipeops_torch_ingress_ltnsr, mlr_pipeops_torch_ingress_num, mlr_pipeops_torch_loss, mlr_pipeops_torch_model, mlr_pipeops_torch_model_classif, mlr_pipeops_torch_model_regr

Examples


# Construct the PipeOp
pipeop = po("nn_adaptive_avg_pool1d")
pipeop
# The available parameters
pipeop$param_set


2D Adaptive Average Pooling

Description

Applies a 2D adaptive average pooling over an input signal composed of several input planes.

nn_module

Calls nn_adaptive_avg_pool2d() during training.

Parameters

Input and Output Channels

One input channel called "input" and one output channel called "output". For an explanation see PipeOpTorch.

State

The state is the value calculated by the public method ⁠$shapes_out()⁠.

Super classes

mlr3pipelines::PipeOp -> mlr3torch::PipeOpTorch -> mlr3torch::PipeOpTorchAdaptiveAvgPool -> PipeOpTorchAdaptiveAvgPool2D

Methods

Public methods

Inherited methods

Method new()

Creates a new instance of this R6 class.

Usage
PipeOpTorchAdaptiveAvgPool2D$new(
  id = "nn_adaptive_avg_pool2d",
  param_vals = list()
)
Arguments
id

(character(1))
Identifier of the resulting object.

param_vals

(list())
List of hyperparameter settings, overwriting the hyperparameter settings that would otherwise be set during construction.


Method clone()

The objects of this class are cloneable with this method.

Usage
PipeOpTorchAdaptiveAvgPool2D$clone(deep = FALSE)
Arguments
deep

Whether to make a deep clone.

See Also

Other PipeOps: mlr_pipeops_nn_adaptive_avg_pool1d, mlr_pipeops_nn_adaptive_avg_pool3d, mlr_pipeops_nn_avg_pool1d, mlr_pipeops_nn_avg_pool2d, mlr_pipeops_nn_avg_pool3d, mlr_pipeops_nn_batch_norm1d, mlr_pipeops_nn_batch_norm2d, mlr_pipeops_nn_batch_norm3d, mlr_pipeops_nn_block, mlr_pipeops_nn_celu, mlr_pipeops_nn_conv1d, mlr_pipeops_nn_conv2d, mlr_pipeops_nn_conv3d, mlr_pipeops_nn_conv_transpose1d, mlr_pipeops_nn_conv_transpose2d, mlr_pipeops_nn_conv_transpose3d, mlr_pipeops_nn_dropout, mlr_pipeops_nn_elu, mlr_pipeops_nn_flatten, mlr_pipeops_nn_ft_cls, mlr_pipeops_nn_ft_transformer_block, mlr_pipeops_nn_geglu, mlr_pipeops_nn_gelu, mlr_pipeops_nn_glu, mlr_pipeops_nn_hardshrink, mlr_pipeops_nn_hardsigmoid, mlr_pipeops_nn_hardtanh, mlr_pipeops_nn_head, mlr_pipeops_nn_identity, mlr_pipeops_nn_layer_norm, mlr_pipeops_nn_leaky_relu, mlr_pipeops_nn_linear, mlr_pipeops_nn_log_sigmoid, mlr_pipeops_nn_max_pool1d, mlr_pipeops_nn_max_pool2d, mlr_pipeops_nn_max_pool3d, mlr_pipeops_nn_merge, mlr_pipeops_nn_merge_cat, mlr_pipeops_nn_merge_prod, mlr_pipeops_nn_merge_sum, mlr_pipeops_nn_prelu, mlr_pipeops_nn_reglu, mlr_pipeops_nn_relu, mlr_pipeops_nn_relu6, mlr_pipeops_nn_reshape, mlr_pipeops_nn_rrelu, mlr_pipeops_nn_selu, mlr_pipeops_nn_sigmoid, mlr_pipeops_nn_softmax, mlr_pipeops_nn_softplus, mlr_pipeops_nn_softshrink, mlr_pipeops_nn_softsign, mlr_pipeops_nn_squeeze, mlr_pipeops_nn_tanh, mlr_pipeops_nn_tanhshrink, mlr_pipeops_nn_threshold, mlr_pipeops_nn_tokenizer_categ, mlr_pipeops_nn_tokenizer_num, mlr_pipeops_nn_unsqueeze, mlr_pipeops_torch_ingress, mlr_pipeops_torch_ingress_categ, mlr_pipeops_torch_ingress_ltnsr, mlr_pipeops_torch_ingress_num, mlr_pipeops_torch_loss, mlr_pipeops_torch_model, mlr_pipeops_torch_model_classif, mlr_pipeops_torch_model_regr

Examples


# Construct the PipeOp
pipeop = po("nn_adaptive_avg_pool2d")
pipeop
# The available parameters
pipeop$param_set


3D Adaptive Average Pooling

Description

Applies a 3D adaptive average pooling over an input signal composed of several input planes.

nn_module

Calls nn_adaptive_avg_pool3d() during training.

Parameters

Input and Output Channels

One input channel called "input" and one output channel called "output". For an explanation see PipeOpTorch.

State

The state is the value calculated by the public method ⁠$shapes_out()⁠.

Super classes

mlr3pipelines::PipeOp -> mlr3torch::PipeOpTorch -> mlr3torch::PipeOpTorchAdaptiveAvgPool -> PipeOpTorchAdaptiveAvgPool3D

Methods

Public methods

Inherited methods

Method new()

Creates a new instance of this R6 class.

Usage
PipeOpTorchAdaptiveAvgPool3D$new(
  id = "nn_adaptive_avg_pool3d",
  param_vals = list()
)
Arguments
id

(character(1))
Identifier of the resulting object.

param_vals

(list())
List of hyperparameter settings, overwriting the hyperparameter settings that would otherwise be set during construction.


Method clone()

The objects of this class are cloneable with this method.

Usage
PipeOpTorchAdaptiveAvgPool3D$clone(deep = FALSE)
Arguments
deep

Whether to make a deep clone.

See Also

Other PipeOps: mlr_pipeops_nn_adaptive_avg_pool1d, mlr_pipeops_nn_adaptive_avg_pool2d, mlr_pipeops_nn_avg_pool1d, mlr_pipeops_nn_avg_pool2d, mlr_pipeops_nn_avg_pool3d, mlr_pipeops_nn_batch_norm1d, mlr_pipeops_nn_batch_norm2d, mlr_pipeops_nn_batch_norm3d, mlr_pipeops_nn_block, mlr_pipeops_nn_celu, mlr_pipeops_nn_conv1d, mlr_pipeops_nn_conv2d, mlr_pipeops_nn_conv3d, mlr_pipeops_nn_conv_transpose1d, mlr_pipeops_nn_conv_transpose2d, mlr_pipeops_nn_conv_transpose3d, mlr_pipeops_nn_dropout, mlr_pipeops_nn_elu, mlr_pipeops_nn_flatten, mlr_pipeops_nn_ft_cls, mlr_pipeops_nn_ft_transformer_block, mlr_pipeops_nn_geglu, mlr_pipeops_nn_gelu, mlr_pipeops_nn_glu, mlr_pipeops_nn_hardshrink, mlr_pipeops_nn_hardsigmoid, mlr_pipeops_nn_hardtanh, mlr_pipeops_nn_head, mlr_pipeops_nn_identity, mlr_pipeops_nn_layer_norm, mlr_pipeops_nn_leaky_relu, mlr_pipeops_nn_linear, mlr_pipeops_nn_log_sigmoid, mlr_pipeops_nn_max_pool1d, mlr_pipeops_nn_max_pool2d, mlr_pipeops_nn_max_pool3d, mlr_pipeops_nn_merge, mlr_pipeops_nn_merge_cat, mlr_pipeops_nn_merge_prod, mlr_pipeops_nn_merge_sum, mlr_pipeops_nn_prelu, mlr_pipeops_nn_reglu, mlr_pipeops_nn_relu, mlr_pipeops_nn_relu6, mlr_pipeops_nn_reshape, mlr_pipeops_nn_rrelu, mlr_pipeops_nn_selu, mlr_pipeops_nn_sigmoid, mlr_pipeops_nn_softmax, mlr_pipeops_nn_softplus, mlr_pipeops_nn_softshrink, mlr_pipeops_nn_softsign, mlr_pipeops_nn_squeeze, mlr_pipeops_nn_tanh, mlr_pipeops_nn_tanhshrink, mlr_pipeops_nn_threshold, mlr_pipeops_nn_tokenizer_categ, mlr_pipeops_nn_tokenizer_num, mlr_pipeops_nn_unsqueeze, mlr_pipeops_torch_ingress, mlr_pipeops_torch_ingress_categ, mlr_pipeops_torch_ingress_ltnsr, mlr_pipeops_torch_ingress_num, mlr_pipeops_torch_loss, mlr_pipeops_torch_model, mlr_pipeops_torch_model_classif, mlr_pipeops_torch_model_regr

Examples


# Construct the PipeOp
pipeop = po("nn_adaptive_avg_pool3d")
pipeop
# The available parameters
pipeop$param_set


1D Average Pooling

Description

Applies a 1D average pooling over an input signal composed of several input planes.

nn_module

Calls nn_avg_pool1d() during training.

Parameters

Input and Output Channels

One input channel called "input" and one output channel called "output". For an explanation see PipeOpTorch.

State

The state is the value calculated by the public method ⁠$shapes_out()⁠.

Super classes

mlr3pipelines::PipeOp -> mlr3torch::PipeOpTorch -> mlr3torch::PipeOpTorchAvgPool -> PipeOpTorchAvgPool1D

Methods

Public methods

Inherited methods

Method new()

Creates a new instance of this R6 class.

Usage
PipeOpTorchAvgPool1D$new(id = "nn_avg_pool1d", param_vals = list())
Arguments
id

(character(1))
Identifier of the resulting object.

param_vals

(list())
List of hyperparameter settings, overwriting the hyperparameter settings that would otherwise be set during construction.


Method clone()

The objects of this class are cloneable with this method.

Usage
PipeOpTorchAvgPool1D$clone(deep = FALSE)
Arguments
deep

Whether to make a deep clone.

See Also

Other PipeOps: mlr_pipeops_nn_adaptive_avg_pool1d, mlr_pipeops_nn_adaptive_avg_pool2d, mlr_pipeops_nn_adaptive_avg_pool3d, mlr_pipeops_nn_avg_pool2d, mlr_pipeops_nn_avg_pool3d, mlr_pipeops_nn_batch_norm1d, mlr_pipeops_nn_batch_norm2d, mlr_pipeops_nn_batch_norm3d, mlr_pipeops_nn_block, mlr_pipeops_nn_celu, mlr_pipeops_nn_conv1d, mlr_pipeops_nn_conv2d, mlr_pipeops_nn_conv3d, mlr_pipeops_nn_conv_transpose1d, mlr_pipeops_nn_conv_transpose2d, mlr_pipeops_nn_conv_transpose3d, mlr_pipeops_nn_dropout, mlr_pipeops_nn_elu, mlr_pipeops_nn_flatten, mlr_pipeops_nn_ft_cls, mlr_pipeops_nn_ft_transformer_block, mlr_pipeops_nn_geglu, mlr_pipeops_nn_gelu, mlr_pipeops_nn_glu, mlr_pipeops_nn_hardshrink, mlr_pipeops_nn_hardsigmoid, mlr_pipeops_nn_hardtanh, mlr_pipeops_nn_head, mlr_pipeops_nn_identity, mlr_pipeops_nn_layer_norm, mlr_pipeops_nn_leaky_relu, mlr_pipeops_nn_linear, mlr_pipeops_nn_log_sigmoid, mlr_pipeops_nn_max_pool1d, mlr_pipeops_nn_max_pool2d, mlr_pipeops_nn_max_pool3d, mlr_pipeops_nn_merge, mlr_pipeops_nn_merge_cat, mlr_pipeops_nn_merge_prod, mlr_pipeops_nn_merge_sum, mlr_pipeops_nn_prelu, mlr_pipeops_nn_reglu, mlr_pipeops_nn_relu, mlr_pipeops_nn_relu6, mlr_pipeops_nn_reshape, mlr_pipeops_nn_rrelu, mlr_pipeops_nn_selu, mlr_pipeops_nn_sigmoid, mlr_pipeops_nn_softmax, mlr_pipeops_nn_softplus, mlr_pipeops_nn_softshrink, mlr_pipeops_nn_softsign, mlr_pipeops_nn_squeeze, mlr_pipeops_nn_tanh, mlr_pipeops_nn_tanhshrink, mlr_pipeops_nn_threshold, mlr_pipeops_nn_tokenizer_categ, mlr_pipeops_nn_tokenizer_num, mlr_pipeops_nn_unsqueeze, mlr_pipeops_torch_ingress, mlr_pipeops_torch_ingress_categ, mlr_pipeops_torch_ingress_ltnsr, mlr_pipeops_torch_ingress_num, mlr_pipeops_torch_loss, mlr_pipeops_torch_model, mlr_pipeops_torch_model_classif, mlr_pipeops_torch_model_regr

Examples


# Construct the PipeOp
pipeop = po("nn_avg_pool1d")
pipeop
# The available parameters
pipeop$param_set
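# Additional sketch (not part of the original examples): the wrapped torch module
# halves the length dimension for kernel_size = 2 (default stride = kernel_size).
x = torch::torch_randn(2, 4, 8)
torch::nn_avg_pool1d(kernel_size = 2)(x)$shape  # (2, 4, 4)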


2D Average Pooling

Description

Applies a 2D average-pooling operation in kH * kW regions, stepping by sH * sW. The number of output features equals the number of input planes.

nn_module

Calls nn_avg_pool2d() during training.

Input and Output Channels

One input channel called "input" and one output channel called "output". For an explanation see PipeOpTorch.

State

The state is the value calculated by the public method ⁠$shapes_out()⁠.

Parameters

Super classes

mlr3pipelines::PipeOp -> mlr3torch::PipeOpTorch -> mlr3torch::PipeOpTorchAvgPool -> PipeOpTorchAvgPool2D

Methods

Public methods

Inherited methods

Method new()

Creates a new instance of this R6 class.

Usage
PipeOpTorchAvgPool2D$new(id = "nn_avg_pool2d", param_vals = list())
Arguments
id

(character(1))
Identifier of the resulting object.

param_vals

(list())
List of hyperparameter settings, overwriting the hyperparameter settings that would otherwise be set during construction.


Method clone()

The objects of this class are cloneable with this method.

Usage
PipeOpTorchAvgPool2D$clone(deep = FALSE)
Arguments
deep

Whether to make a deep clone.

See Also

Other PipeOps: mlr_pipeops_nn_adaptive_avg_pool1d, mlr_pipeops_nn_adaptive_avg_pool2d, mlr_pipeops_nn_adaptive_avg_pool3d, mlr_pipeops_nn_avg_pool1d, mlr_pipeops_nn_avg_pool3d, mlr_pipeops_nn_batch_norm1d, mlr_pipeops_nn_batch_norm2d, mlr_pipeops_nn_batch_norm3d, mlr_pipeops_nn_block, mlr_pipeops_nn_celu, mlr_pipeops_nn_conv1d, mlr_pipeops_nn_conv2d, mlr_pipeops_nn_conv3d, mlr_pipeops_nn_conv_transpose1d, mlr_pipeops_nn_conv_transpose2d, mlr_pipeops_nn_conv_transpose3d, mlr_pipeops_nn_dropout, mlr_pipeops_nn_elu, mlr_pipeops_nn_flatten, mlr_pipeops_nn_ft_cls, mlr_pipeops_nn_ft_transformer_block, mlr_pipeops_nn_geglu, mlr_pipeops_nn_gelu, mlr_pipeops_nn_glu, mlr_pipeops_nn_hardshrink, mlr_pipeops_nn_hardsigmoid, mlr_pipeops_nn_hardtanh, mlr_pipeops_nn_head, mlr_pipeops_nn_identity, mlr_pipeops_nn_layer_norm, mlr_pipeops_nn_leaky_relu, mlr_pipeops_nn_linear, mlr_pipeops_nn_log_sigmoid, mlr_pipeops_nn_max_pool1d, mlr_pipeops_nn_max_pool2d, mlr_pipeops_nn_max_pool3d, mlr_pipeops_nn_merge, mlr_pipeops_nn_merge_cat, mlr_pipeops_nn_merge_prod, mlr_pipeops_nn_merge_sum, mlr_pipeops_nn_prelu, mlr_pipeops_nn_reglu, mlr_pipeops_nn_relu, mlr_pipeops_nn_relu6, mlr_pipeops_nn_reshape, mlr_pipeops_nn_rrelu, mlr_pipeops_nn_selu, mlr_pipeops_nn_sigmoid, mlr_pipeops_nn_softmax, mlr_pipeops_nn_softplus, mlr_pipeops_nn_softshrink, mlr_pipeops_nn_softsign, mlr_pipeops_nn_squeeze, mlr_pipeops_nn_tanh, mlr_pipeops_nn_tanhshrink, mlr_pipeops_nn_threshold, mlr_pipeops_nn_tokenizer_categ, mlr_pipeops_nn_tokenizer_num, mlr_pipeops_nn_unsqueeze, mlr_pipeops_torch_ingress, mlr_pipeops_torch_ingress_categ, mlr_pipeops_torch_ingress_ltnsr, mlr_pipeops_torch_ingress_num, mlr_pipeops_torch_loss, mlr_pipeops_torch_model, mlr_pipeops_torch_model_classif, mlr_pipeops_torch_model_regr

Examples


# Construct the PipeOp
pipeop = po("nn_avg_pool2d")
pipeop
# The available parameters
pipeop$param_set
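# Additional sketch (not part of the original examples): on image-shaped input.
x = torch::torch_randn(2, 3, 8, 8)
torch::nn_avg_pool2d(kernel_size = 2)(x)$shape  # (2, 3, 4, 4)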


3D Average Pooling

Description

Applies a 3D average-pooling operation in kT * kH * kW regions, stepping by sT * sH * sW. The number of output features is equal to \lfloor \frac{\mbox{input planes}}{sT} \rfloor.

Internals

Calls nn_avg_pool3d() during training.

Input and Output Channels

One input channel called "input" and one output channel called "output". For an explanation see PipeOpTorch.

State

The state is the value calculated by the public method ⁠$shapes_out()⁠.

Parameters

Super classes

mlr3pipelines::PipeOp -> mlr3torch::PipeOpTorch -> mlr3torch::PipeOpTorchAvgPool -> PipeOpTorchAvgPool3D

Methods

Public methods

Inherited methods

Method new()

Creates a new instance of this R6 class.

Usage
PipeOpTorchAvgPool3D$new(id = "nn_avg_pool3d", param_vals = list())
Arguments
id

(character(1))
Identifier of the resulting object.

param_vals

(list())
List of hyperparameter settings, overwriting the hyperparameter settings that would otherwise be set during construction.


Method clone()

The objects of this class are cloneable with this method.

Usage
PipeOpTorchAvgPool3D$clone(deep = FALSE)
Arguments
deep

Whether to make a deep clone.

See Also

Other PipeOps: mlr_pipeops_nn_adaptive_avg_pool1d, mlr_pipeops_nn_adaptive_avg_pool2d, mlr_pipeops_nn_adaptive_avg_pool3d, mlr_pipeops_nn_avg_pool1d, mlr_pipeops_nn_avg_pool2d, mlr_pipeops_nn_batch_norm1d, mlr_pipeops_nn_batch_norm2d, mlr_pipeops_nn_batch_norm3d, mlr_pipeops_nn_block, mlr_pipeops_nn_celu, mlr_pipeops_nn_conv1d, mlr_pipeops_nn_conv2d, mlr_pipeops_nn_conv3d, mlr_pipeops_nn_conv_transpose1d, mlr_pipeops_nn_conv_transpose2d, mlr_pipeops_nn_conv_transpose3d, mlr_pipeops_nn_dropout, mlr_pipeops_nn_elu, mlr_pipeops_nn_flatten, mlr_pipeops_nn_ft_cls, mlr_pipeops_nn_ft_transformer_block, mlr_pipeops_nn_geglu, mlr_pipeops_nn_gelu, mlr_pipeops_nn_glu, mlr_pipeops_nn_hardshrink, mlr_pipeops_nn_hardsigmoid, mlr_pipeops_nn_hardtanh, mlr_pipeops_nn_head, mlr_pipeops_nn_identity, mlr_pipeops_nn_layer_norm, mlr_pipeops_nn_leaky_relu, mlr_pipeops_nn_linear, mlr_pipeops_nn_log_sigmoid, mlr_pipeops_nn_max_pool1d, mlr_pipeops_nn_max_pool2d, mlr_pipeops_nn_max_pool3d, mlr_pipeops_nn_merge, mlr_pipeops_nn_merge_cat, mlr_pipeops_nn_merge_prod, mlr_pipeops_nn_merge_sum, mlr_pipeops_nn_prelu, mlr_pipeops_nn_reglu, mlr_pipeops_nn_relu, mlr_pipeops_nn_relu6, mlr_pipeops_nn_reshape, mlr_pipeops_nn_rrelu, mlr_pipeops_nn_selu, mlr_pipeops_nn_sigmoid, mlr_pipeops_nn_softmax, mlr_pipeops_nn_softplus, mlr_pipeops_nn_softshrink, mlr_pipeops_nn_softsign, mlr_pipeops_nn_squeeze, mlr_pipeops_nn_tanh, mlr_pipeops_nn_tanhshrink, mlr_pipeops_nn_threshold, mlr_pipeops_nn_tokenizer_categ, mlr_pipeops_nn_tokenizer_num, mlr_pipeops_nn_unsqueeze, mlr_pipeops_torch_ingress, mlr_pipeops_torch_ingress_categ, mlr_pipeops_torch_ingress_ltnsr, mlr_pipeops_torch_ingress_num, mlr_pipeops_torch_loss, mlr_pipeops_torch_model, mlr_pipeops_torch_model_classif, mlr_pipeops_torch_model_regr

Examples


# Construct the PipeOp
pipeop = po("nn_avg_pool3d")
pipeop
# The available parameters
pipeop$param_set
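# Additional sketch (not part of the original examples): on volumetric input.
x = torch::torch_randn(2, 3, 8, 8, 8)
torch::nn_avg_pool3d(kernel_size = 2)(x)$shape  # (2, 3, 4, 4, 4)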


1D Batch Normalization

Description

Applies Batch Normalization for each channel across a batch of data.

nn_module

Calls torch::nn_batch_norm1d(). The parameter num_features is inferred as the second dimension of the input shape.

Parameters

Input and Output Channels

One input channel called "input" and one output channel called "output". For an explanation see PipeOpTorch.

State

The state is the value calculated by the public method ⁠$shapes_out()⁠.

Super classes

mlr3pipelines::PipeOp -> mlr3torch::PipeOpTorch -> mlr3torch::PipeOpTorchBatchNorm -> PipeOpTorchBatchNorm1D

Methods

Public methods

Inherited methods

Method new()

Creates a new instance of this R6 class.

Usage
PipeOpTorchBatchNorm1D$new(id = "nn_batch_norm1d", param_vals = list())
Arguments
id

(character(1))
Identifier of the resulting object.

param_vals

(list())
List of hyperparameter settings, overwriting the hyperparameter settings that would otherwise be set during construction.


Method clone()

The objects of this class are cloneable with this method.

Usage
PipeOpTorchBatchNorm1D$clone(deep = FALSE)
Arguments
deep

Whether to make a deep clone.

See Also

Other PipeOps: mlr_pipeops_nn_adaptive_avg_pool1d, mlr_pipeops_nn_adaptive_avg_pool2d, mlr_pipeops_nn_adaptive_avg_pool3d, mlr_pipeops_nn_avg_pool1d, mlr_pipeops_nn_avg_pool2d, mlr_pipeops_nn_avg_pool3d, mlr_pipeops_nn_batch_norm2d, mlr_pipeops_nn_batch_norm3d, mlr_pipeops_nn_block, mlr_pipeops_nn_celu, mlr_pipeops_nn_conv1d, mlr_pipeops_nn_conv2d, mlr_pipeops_nn_conv3d, mlr_pipeops_nn_conv_transpose1d, mlr_pipeops_nn_conv_transpose2d, mlr_pipeops_nn_conv_transpose3d, mlr_pipeops_nn_dropout, mlr_pipeops_nn_elu, mlr_pipeops_nn_flatten, mlr_pipeops_nn_ft_cls, mlr_pipeops_nn_ft_transformer_block, mlr_pipeops_nn_geglu, mlr_pipeops_nn_gelu, mlr_pipeops_nn_glu, mlr_pipeops_nn_hardshrink, mlr_pipeops_nn_hardsigmoid, mlr_pipeops_nn_hardtanh, mlr_pipeops_nn_head, mlr_pipeops_nn_identity, mlr_pipeops_nn_layer_norm, mlr_pipeops_nn_leaky_relu, mlr_pipeops_nn_linear, mlr_pipeops_nn_log_sigmoid, mlr_pipeops_nn_max_pool1d, mlr_pipeops_nn_max_pool2d, mlr_pipeops_nn_max_pool3d, mlr_pipeops_nn_merge, mlr_pipeops_nn_merge_cat, mlr_pipeops_nn_merge_prod, mlr_pipeops_nn_merge_sum, mlr_pipeops_nn_prelu, mlr_pipeops_nn_reglu, mlr_pipeops_nn_relu, mlr_pipeops_nn_relu6, mlr_pipeops_nn_reshape, mlr_pipeops_nn_rrelu, mlr_pipeops_nn_selu, mlr_pipeops_nn_sigmoid, mlr_pipeops_nn_softmax, mlr_pipeops_nn_softplus, mlr_pipeops_nn_softshrink, mlr_pipeops_nn_softsign, mlr_pipeops_nn_squeeze, mlr_pipeops_nn_tanh, mlr_pipeops_nn_tanhshrink, mlr_pipeops_nn_threshold, mlr_pipeops_nn_tokenizer_categ, mlr_pipeops_nn_tokenizer_num, mlr_pipeops_nn_unsqueeze, mlr_pipeops_torch_ingress, mlr_pipeops_torch_ingress_categ, mlr_pipeops_torch_ingress_ltnsr, mlr_pipeops_torch_ingress_num, mlr_pipeops_torch_loss, mlr_pipeops_torch_model, mlr_pipeops_torch_model_classif, mlr_pipeops_torch_model_regr

Examples


# Construct the PipeOp
pipeop = po("nn_batch_norm1d")
pipeop
# The available parameters
pipeop$param_set
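# Additional sketch (not part of the original examples): the wrapped torch module,
# with num_features given explicitly (the PipeOp infers it from the input shape).
x = torch::torch_randn(8, 4)                  # (batch, features)
bn = torch::nn_batch_norm1d(num_features = 4)
bn(x)$mean(dim = 1)                           # approximately zero per feature (training mode)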


2D Batch Normalization

Description

Applies Batch Normalization for each channel across a batch of data.

nn_module

Calls torch::nn_batch_norm2d(). The parameter num_features is inferred as the second dimension of the input shape.

Input and Output Channels

One input channel called "input" and one output channel called "output". For an explanation see PipeOpTorch.

State

The state is the value calculated by the public method ⁠$shapes_out()⁠.

Parameters

Super classes

mlr3pipelines::PipeOp -> mlr3torch::PipeOpTorch -> mlr3torch::PipeOpTorchBatchNorm -> PipeOpTorchBatchNorm2D

Methods

Public methods

Inherited methods

Method new()

Creates a new instance of this R6 class.

Usage
PipeOpTorchBatchNorm2D$new(id = "nn_batch_norm2d", param_vals = list())
Arguments
id

(character(1))
Identifier of the resulting object.

param_vals

(list())
List of hyperparameter settings, overwriting the hyperparameter settings that would otherwise be set during construction.


Method clone()

The objects of this class are cloneable with this method.

Usage
PipeOpTorchBatchNorm2D$clone(deep = FALSE)
Arguments
deep

Whether to make a deep clone.

See Also

Other PipeOps: mlr_pipeops_nn_adaptive_avg_pool1d, mlr_pipeops_nn_adaptive_avg_pool2d, mlr_pipeops_nn_adaptive_avg_pool3d, mlr_pipeops_nn_avg_pool1d, mlr_pipeops_nn_avg_pool2d, mlr_pipeops_nn_avg_pool3d, mlr_pipeops_nn_batch_norm1d, mlr_pipeops_nn_batch_norm3d, mlr_pipeops_nn_block, mlr_pipeops_nn_celu, mlr_pipeops_nn_conv1d, mlr_pipeops_nn_conv2d, mlr_pipeops_nn_conv3d, mlr_pipeops_nn_conv_transpose1d, mlr_pipeops_nn_conv_transpose2d, mlr_pipeops_nn_conv_transpose3d, mlr_pipeops_nn_dropout, mlr_pipeops_nn_elu, mlr_pipeops_nn_flatten, mlr_pipeops_nn_ft_cls, mlr_pipeops_nn_ft_transformer_block, mlr_pipeops_nn_geglu, mlr_pipeops_nn_gelu, mlr_pipeops_nn_glu, mlr_pipeops_nn_hardshrink, mlr_pipeops_nn_hardsigmoid, mlr_pipeops_nn_hardtanh, mlr_pipeops_nn_head, mlr_pipeops_nn_identity, mlr_pipeops_nn_layer_norm, mlr_pipeops_nn_leaky_relu, mlr_pipeops_nn_linear, mlr_pipeops_nn_log_sigmoid, mlr_pipeops_nn_max_pool1d, mlr_pipeops_nn_max_pool2d, mlr_pipeops_nn_max_pool3d, mlr_pipeops_nn_merge, mlr_pipeops_nn_merge_cat, mlr_pipeops_nn_merge_prod, mlr_pipeops_nn_merge_sum, mlr_pipeops_nn_prelu, mlr_pipeops_nn_reglu, mlr_pipeops_nn_relu, mlr_pipeops_nn_relu6, mlr_pipeops_nn_reshape, mlr_pipeops_nn_rrelu, mlr_pipeops_nn_selu, mlr_pipeops_nn_sigmoid, mlr_pipeops_nn_softmax, mlr_pipeops_nn_softplus, mlr_pipeops_nn_softshrink, mlr_pipeops_nn_softsign, mlr_pipeops_nn_squeeze, mlr_pipeops_nn_tanh, mlr_pipeops_nn_tanhshrink, mlr_pipeops_nn_threshold, mlr_pipeops_nn_tokenizer_categ, mlr_pipeops_nn_tokenizer_num, mlr_pipeops_nn_unsqueeze, mlr_pipeops_torch_ingress, mlr_pipeops_torch_ingress_categ, mlr_pipeops_torch_ingress_ltnsr, mlr_pipeops_torch_ingress_num, mlr_pipeops_torch_loss, mlr_pipeops_torch_model, mlr_pipeops_torch_model_classif, mlr_pipeops_torch_model_regr

Examples


# Construct the PipeOp
pipeop = po("nn_batch_norm2d")
pipeop
# The available parameters
pipeop$param_set
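# Additional sketch (not part of the original examples): for image-shaped input (batch, C, H, W).
x = torch::torch_randn(8, 3, 16, 16)
torch::nn_batch_norm2d(num_features = 3)(x)$shape  # shape is unchanged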


3D Batch Normalization

Description

Applies Batch Normalization for each channel across a batch of data.

nn_module

Calls torch::nn_batch_norm3d(). The parameter num_features is inferred as the second dimension of the input shape.

Input and Output Channels

One input channel called "input" and one output channel called "output". For an explanation see PipeOpTorch.

State

The state is the value calculated by the public method ⁠$shapes_out()⁠.

Parameters

Super classes

mlr3pipelines::PipeOp -> mlr3torch::PipeOpTorch -> mlr3torch::PipeOpTorchBatchNorm -> PipeOpTorchBatchNorm3D

Methods

Public methods

Inherited methods

Method new()

Creates a new instance of this R6 class.

Usage
PipeOpTorchBatchNorm3D$new(id = "nn_batch_norm3d", param_vals = list())
Arguments
id

(character(1))
Identifier of the resulting object.

param_vals

(list())
List of hyperparameter settings, overwriting the hyperparameter settings that would otherwise be set during construction.


Method clone()

The objects of this class are cloneable with this method.

Usage
PipeOpTorchBatchNorm3D$clone(deep = FALSE)
Arguments
deep

Whether to make a deep clone.

See Also

Other PipeOps: mlr_pipeops_nn_adaptive_avg_pool1d, mlr_pipeops_nn_adaptive_avg_pool2d, mlr_pipeops_nn_adaptive_avg_pool3d, mlr_pipeops_nn_avg_pool1d, mlr_pipeops_nn_avg_pool2d, mlr_pipeops_nn_avg_pool3d, mlr_pipeops_nn_batch_norm1d, mlr_pipeops_nn_batch_norm2d, mlr_pipeops_nn_block, mlr_pipeops_nn_celu, mlr_pipeops_nn_conv1d, mlr_pipeops_nn_conv2d, mlr_pipeops_nn_conv3d, mlr_pipeops_nn_conv_transpose1d, mlr_pipeops_nn_conv_transpose2d, mlr_pipeops_nn_conv_transpose3d, mlr_pipeops_nn_dropout, mlr_pipeops_nn_elu, mlr_pipeops_nn_flatten, mlr_pipeops_nn_ft_cls, mlr_pipeops_nn_ft_transformer_block, mlr_pipeops_nn_geglu, mlr_pipeops_nn_gelu, mlr_pipeops_nn_glu, mlr_pipeops_nn_hardshrink, mlr_pipeops_nn_hardsigmoid, mlr_pipeops_nn_hardtanh, mlr_pipeops_nn_head, mlr_pipeops_nn_identity, mlr_pipeops_nn_layer_norm, mlr_pipeops_nn_leaky_relu, mlr_pipeops_nn_linear, mlr_pipeops_nn_log_sigmoid, mlr_pipeops_nn_max_pool1d, mlr_pipeops_nn_max_pool2d, mlr_pipeops_nn_max_pool3d, mlr_pipeops_nn_merge, mlr_pipeops_nn_merge_cat, mlr_pipeops_nn_merge_prod, mlr_pipeops_nn_merge_sum, mlr_pipeops_nn_prelu, mlr_pipeops_nn_reglu, mlr_pipeops_nn_relu, mlr_pipeops_nn_relu6, mlr_pipeops_nn_reshape, mlr_pipeops_nn_rrelu, mlr_pipeops_nn_selu, mlr_pipeops_nn_sigmoid, mlr_pipeops_nn_softmax, mlr_pipeops_nn_softplus, mlr_pipeops_nn_softshrink, mlr_pipeops_nn_softsign, mlr_pipeops_nn_squeeze, mlr_pipeops_nn_tanh, mlr_pipeops_nn_tanhshrink, mlr_pipeops_nn_threshold, mlr_pipeops_nn_tokenizer_categ, mlr_pipeops_nn_tokenizer_num, mlr_pipeops_nn_unsqueeze, mlr_pipeops_torch_ingress, mlr_pipeops_torch_ingress_categ, mlr_pipeops_torch_ingress_ltnsr, mlr_pipeops_torch_ingress_num, mlr_pipeops_torch_loss, mlr_pipeops_torch_model, mlr_pipeops_torch_model_classif, mlr_pipeops_torch_model_regr

Examples


# Construct the PipeOp
pipeop = po("nn_batch_norm3d")
pipeop
# The available parameters
pipeop$param_set
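# Additional sketch (not part of the original examples): for volumetric input (batch, C, D, H, W).
x = torch::torch_randn(8, 2, 4, 4, 4)
torch::nn_batch_norm3d(num_features = 2)(x)$shape  # shape is unchanged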


Block Repetition

Description

Repeats a block n_blocks times by concatenating it with itself (via %>>%).

Naming

For the generated module graph, the IDs of the modules are generated by prefixing the IDs of the n_blocks layers with the ID of the PipeOpTorchBlock and postfixing them with ⁠__<layer>⁠.

Parameters

The parameters available for the provided block, as well as n_blocks and trafo: n_blocks controls how often the block is repeated, and trafo allows modifying the parameter values of the individual layers (see the example below).

Input and Output Channels

The PipeOp sets its input and output channels to those from the block (Graph) it received during construction.

State

The state is the value calculated by the public method ⁠$shapes_out()⁠.

Super classes

mlr3pipelines::PipeOp -> mlr3torch::PipeOpTorch -> PipeOpTorchBlock

Active bindings

block

(Graph)
The neural network segment that is repeated by this PipeOp.

Methods

Public methods

Inherited methods

Method new()

Creates a new instance of this R6 class.

Usage
PipeOpTorchBlock$new(block, id = "nn_block", param_vals = list())
Arguments
block

(Graph)
A graph consisting primarily of PipeOpTorch objects that is to be repeated.

id

(character(1))
The id of the new object.

param_vals

(named list())
Parameter values to be set after construction.


Method clone()

The objects of this class are cloneable with this method.

Usage
PipeOpTorchBlock$clone(deep = FALSE)
Arguments
deep

Whether to make a deep clone.

See Also

Other PipeOps: mlr_pipeops_nn_adaptive_avg_pool1d, mlr_pipeops_nn_adaptive_avg_pool2d, mlr_pipeops_nn_adaptive_avg_pool3d, mlr_pipeops_nn_avg_pool1d, mlr_pipeops_nn_avg_pool2d, mlr_pipeops_nn_avg_pool3d, mlr_pipeops_nn_batch_norm1d, mlr_pipeops_nn_batch_norm2d, mlr_pipeops_nn_batch_norm3d, mlr_pipeops_nn_celu, mlr_pipeops_nn_conv1d, mlr_pipeops_nn_conv2d, mlr_pipeops_nn_conv3d, mlr_pipeops_nn_conv_transpose1d, mlr_pipeops_nn_conv_transpose2d, mlr_pipeops_nn_conv_transpose3d, mlr_pipeops_nn_dropout, mlr_pipeops_nn_elu, mlr_pipeops_nn_flatten, mlr_pipeops_nn_ft_cls, mlr_pipeops_nn_ft_transformer_block, mlr_pipeops_nn_geglu, mlr_pipeops_nn_gelu, mlr_pipeops_nn_glu, mlr_pipeops_nn_hardshrink, mlr_pipeops_nn_hardsigmoid, mlr_pipeops_nn_hardtanh, mlr_pipeops_nn_head, mlr_pipeops_nn_identity, mlr_pipeops_nn_layer_norm, mlr_pipeops_nn_leaky_relu, mlr_pipeops_nn_linear, mlr_pipeops_nn_log_sigmoid, mlr_pipeops_nn_max_pool1d, mlr_pipeops_nn_max_pool2d, mlr_pipeops_nn_max_pool3d, mlr_pipeops_nn_merge, mlr_pipeops_nn_merge_cat, mlr_pipeops_nn_merge_prod, mlr_pipeops_nn_merge_sum, mlr_pipeops_nn_prelu, mlr_pipeops_nn_reglu, mlr_pipeops_nn_relu, mlr_pipeops_nn_relu6, mlr_pipeops_nn_reshape, mlr_pipeops_nn_rrelu, mlr_pipeops_nn_selu, mlr_pipeops_nn_sigmoid, mlr_pipeops_nn_softmax, mlr_pipeops_nn_softplus, mlr_pipeops_nn_softshrink, mlr_pipeops_nn_softsign, mlr_pipeops_nn_squeeze, mlr_pipeops_nn_tanh, mlr_pipeops_nn_tanhshrink, mlr_pipeops_nn_threshold, mlr_pipeops_nn_tokenizer_categ, mlr_pipeops_nn_tokenizer_num, mlr_pipeops_nn_unsqueeze, mlr_pipeops_torch_ingress, mlr_pipeops_torch_ingress_categ, mlr_pipeops_torch_ingress_ltnsr, mlr_pipeops_torch_ingress_num, mlr_pipeops_torch_loss, mlr_pipeops_torch_model, mlr_pipeops_torch_model_classif, mlr_pipeops_torch_model_regr

Examples


# repeat a simple linear layer with ReLU activation 3 times, but set the bias for the last
# layer to `FALSE`
block = nn("linear") %>>% nn("relu")

blocks = nn("block", block,
  linear.out_features = 10L, linear.bias = TRUE, n_blocks = 3,
  trafo = function(i, param_vals, param_set) {
    if (i  == param_set$get_values()$n_blocks) {
      param_vals$linear.bias = FALSE
    }
    param_vals
  })
graph = po("torch_ingress_num") %>>%
  blocks %>>%
  nn("head")
md = graph$train(tsk("iris"))[[1L]]
network = model_descriptor_to_module(md)
network
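# Additional sketch (not part of the original examples): the repeated block exposes
# the parameters of its layers (prefixed with the layer ids, e.g. linear.out_features)
# together with n_blocks and trafo.
blocks$param_set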


CELU Activation Function

Description

Applies element-wise the function CELU(x) = max(0, x) + min(0, \alpha * (exp(x / \alpha) - 1)).

nn_module

Calls torch::nn_celu() when trained.

Parameters

Input and Output Channels

One input channel called "input" and one output channel called "output". For an explanation see PipeOpTorch.

State

The state is the value calculated by the public method ⁠$shapes_out()⁠.

Super classes

mlr3pipelines::PipeOp -> mlr3torch::PipeOpTorch -> PipeOpTorchCELU

Methods

Public methods

Inherited methods

Method new()

Creates a new instance of this R6 class.

Usage
PipeOpTorchCELU$new(id = "nn_celu", param_vals = list())
Arguments
id

(character(1))
Identifier of the resulting object.

param_vals

(list())
List of hyperparameter settings, overwriting the hyperparameter settings that would otherwise be set during construction.


Method clone()

The objects of this class are cloneable with this method.

Usage
PipeOpTorchCELU$clone(deep = FALSE)
Arguments
deep

Whether to make a deep clone.

See Also

Other PipeOps: mlr_pipeops_nn_adaptive_avg_pool1d, mlr_pipeops_nn_adaptive_avg_pool2d, mlr_pipeops_nn_adaptive_avg_pool3d, mlr_pipeops_nn_avg_pool1d, mlr_pipeops_nn_avg_pool2d, mlr_pipeops_nn_avg_pool3d, mlr_pipeops_nn_batch_norm1d, mlr_pipeops_nn_batch_norm2d, mlr_pipeops_nn_batch_norm3d, mlr_pipeops_nn_block, mlr_pipeops_nn_conv1d, mlr_pipeops_nn_conv2d, mlr_pipeops_nn_conv3d, mlr_pipeops_nn_conv_transpose1d, mlr_pipeops_nn_conv_transpose2d, mlr_pipeops_nn_conv_transpose3d, mlr_pipeops_nn_dropout, mlr_pipeops_nn_elu, mlr_pipeops_nn_flatten, mlr_pipeops_nn_ft_cls, mlr_pipeops_nn_ft_transformer_block, mlr_pipeops_nn_geglu, mlr_pipeops_nn_gelu, mlr_pipeops_nn_glu, mlr_pipeops_nn_hardshrink, mlr_pipeops_nn_hardsigmoid, mlr_pipeops_nn_hardtanh, mlr_pipeops_nn_head, mlr_pipeops_nn_identity, mlr_pipeops_nn_layer_norm, mlr_pipeops_nn_leaky_relu, mlr_pipeops_nn_linear, mlr_pipeops_nn_log_sigmoid, mlr_pipeops_nn_max_pool1d, mlr_pipeops_nn_max_pool2d, mlr_pipeops_nn_max_pool3d, mlr_pipeops_nn_merge, mlr_pipeops_nn_merge_cat, mlr_pipeops_nn_merge_prod, mlr_pipeops_nn_merge_sum, mlr_pipeops_nn_prelu, mlr_pipeops_nn_reglu, mlr_pipeops_nn_relu, mlr_pipeops_nn_relu6, mlr_pipeops_nn_reshape, mlr_pipeops_nn_rrelu, mlr_pipeops_nn_selu, mlr_pipeops_nn_sigmoid, mlr_pipeops_nn_softmax, mlr_pipeops_nn_softplus, mlr_pipeops_nn_softshrink, mlr_pipeops_nn_softsign, mlr_pipeops_nn_squeeze, mlr_pipeops_nn_tanh, mlr_pipeops_nn_tanhshrink, mlr_pipeops_nn_threshold, mlr_pipeops_nn_tokenizer_categ, mlr_pipeops_nn_tokenizer_num, mlr_pipeops_nn_unsqueeze, mlr_pipeops_torch_ingress, mlr_pipeops_torch_ingress_categ, mlr_pipeops_torch_ingress_ltnsr, mlr_pipeops_torch_ingress_num, mlr_pipeops_torch_loss, mlr_pipeops_torch_model, mlr_pipeops_torch_model_classif, mlr_pipeops_torch_model_regr

Examples


# Construct the PipeOp
pipeop = po("nn_celu")
pipeop
# The available parameters
pipeop$param_set
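# Additional sketch (not part of the original examples): the wrapped activation
# applied element-wise; for alpha = 1, CELU coincides with ELU.
x = torch::torch_tensor(c(-2, -1, 0, 1, 2))
torch::nn_celu(alpha = 1)(x)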


1D Convolution

Description

Applies a 1D convolution over an input signal composed of several input planes.

nn_module

Calls torch::nn_conv1d() when trained. The parameter in_channels is inferred from the second dimension of the input tensor.

Parameters

Input and Output Channels

One input channel called "input" and one output channel called "output". For an explanation see PipeOpTorch.

State

The state is the value calculated by the public method ⁠$shapes_out()⁠.

Super classes

mlr3pipelines::PipeOp -> mlr3torch::PipeOpTorch -> mlr3torch::PipeOpTorchConv -> PipeOpTorchConv1D

Methods

Public methods

Inherited methods

Method new()

Creates a new instance of this R6 class.

Usage
PipeOpTorchConv1D$new(id = "nn_conv1d", param_vals = list())
Arguments
id

(character(1))
Identifier of the resulting object.

param_vals

(list())
List of hyperparameter settings, overwriting the hyperparameter settings that would otherwise be set during construction.


Method clone()

The objects of this class are cloneable with this method.

Usage
PipeOpTorchConv1D$clone(deep = FALSE)
Arguments
deep

Whether to make a deep clone.

See Also

Other PipeOps: mlr_pipeops_nn_adaptive_avg_pool1d, mlr_pipeops_nn_adaptive_avg_pool2d, mlr_pipeops_nn_adaptive_avg_pool3d, mlr_pipeops_nn_avg_pool1d, mlr_pipeops_nn_avg_pool2d, mlr_pipeops_nn_avg_pool3d, mlr_pipeops_nn_batch_norm1d, mlr_pipeops_nn_batch_norm2d, mlr_pipeops_nn_batch_norm3d, mlr_pipeops_nn_block, mlr_pipeops_nn_celu, mlr_pipeops_nn_conv2d, mlr_pipeops_nn_conv3d, mlr_pipeops_nn_conv_transpose1d, mlr_pipeops_nn_conv_transpose2d, mlr_pipeops_nn_conv_transpose3d, mlr_pipeops_nn_dropout, mlr_pipeops_nn_elu, mlr_pipeops_nn_flatten, mlr_pipeops_nn_ft_cls, mlr_pipeops_nn_ft_transformer_block, mlr_pipeops_nn_geglu, mlr_pipeops_nn_gelu, mlr_pipeops_nn_glu, mlr_pipeops_nn_hardshrink, mlr_pipeops_nn_hardsigmoid, mlr_pipeops_nn_hardtanh, mlr_pipeops_nn_head, mlr_pipeops_nn_identity, mlr_pipeops_nn_layer_norm, mlr_pipeops_nn_leaky_relu, mlr_pipeops_nn_linear, mlr_pipeops_nn_log_sigmoid, mlr_pipeops_nn_max_pool1d, mlr_pipeops_nn_max_pool2d, mlr_pipeops_nn_max_pool3d, mlr_pipeops_nn_merge, mlr_pipeops_nn_merge_cat, mlr_pipeops_nn_merge_prod, mlr_pipeops_nn_merge_sum, mlr_pipeops_nn_prelu, mlr_pipeops_nn_reglu, mlr_pipeops_nn_relu, mlr_pipeops_nn_relu6, mlr_pipeops_nn_reshape, mlr_pipeops_nn_rrelu, mlr_pipeops_nn_selu, mlr_pipeops_nn_sigmoid, mlr_pipeops_nn_softmax, mlr_pipeops_nn_softplus, mlr_pipeops_nn_softshrink, mlr_pipeops_nn_softsign, mlr_pipeops_nn_squeeze, mlr_pipeops_nn_tanh, mlr_pipeops_nn_tanhshrink, mlr_pipeops_nn_threshold, mlr_pipeops_nn_tokenizer_categ, mlr_pipeops_nn_tokenizer_num, mlr_pipeops_nn_unsqueeze, mlr_pipeops_torch_ingress, mlr_pipeops_torch_ingress_categ, mlr_pipeops_torch_ingress_ltnsr, mlr_pipeops_torch_ingress_num, mlr_pipeops_torch_loss, mlr_pipeops_torch_model, mlr_pipeops_torch_model_classif, mlr_pipeops_torch_model_regr

Examples


# Construct the PipeOp
pipeop = po("nn_conv1d", kernel_size = 10, out_channels = 1)
pipeop
# The available parameters
pipeop$param_set
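# Additional sketch (not part of the original examples), mirroring the settings above
# but using torch directly, with in_channels given explicitly (the PipeOp infers it).
x = torch::torch_randn(2, 4, 32)              # (batch, channels, length)
conv = torch::nn_conv1d(in_channels = 4, out_channels = 1, kernel_size = 10)
conv(x)$shape                                 # (2, 1, 23)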


2D Convolution

Description

Applies a 2D convolution over an input image composed of several input planes.

nn_module

Calls torch::nn_conv2d() when trained. The parameter in_channels is inferred from the second dimension of the input tensor.

Input and Output Channels

One input channel called "input" and one output channel called "output". For an explanation see PipeOpTorch.

State

The state is the value calculated by the public method ⁠$shapes_out()⁠.

Parameters

Super classes

mlr3pipelines::PipeOp -> mlr3torch::PipeOpTorch -> mlr3torch::PipeOpTorchConv -> PipeOpTorchConv2D

Methods

Public methods

Inherited methods

Method new()

Creates a new instance of this R6 class.

Usage
PipeOpTorchConv2D$new(id = "nn_conv2d", param_vals = list())
Arguments
id

(character(1))
Identifier of the resulting object.

param_vals

(list())
List of hyperparameter settings, overwriting the hyperparameter settings that would otherwise be set during construction.


Method clone()

The objects of this class are cloneable with this method.

Usage
PipeOpTorchConv2D$clone(deep = FALSE)
Arguments
deep

Whether to make a deep clone.

See Also

Other PipeOps: mlr_pipeops_nn_adaptive_avg_pool1d, mlr_pipeops_nn_adaptive_avg_pool2d, mlr_pipeops_nn_adaptive_avg_pool3d, mlr_pipeops_nn_avg_pool1d, mlr_pipeops_nn_avg_pool2d, mlr_pipeops_nn_avg_pool3d, mlr_pipeops_nn_batch_norm1d, mlr_pipeops_nn_batch_norm2d, mlr_pipeops_nn_batch_norm3d, mlr_pipeops_nn_block, mlr_pipeops_nn_celu, mlr_pipeops_nn_conv1d, mlr_pipeops_nn_conv3d, mlr_pipeops_nn_conv_transpose1d, mlr_pipeops_nn_conv_transpose2d, mlr_pipeops_nn_conv_transpose3d, mlr_pipeops_nn_dropout, mlr_pipeops_nn_elu, mlr_pipeops_nn_flatten, mlr_pipeops_nn_ft_cls, mlr_pipeops_nn_ft_transformer_block, mlr_pipeops_nn_geglu, mlr_pipeops_nn_gelu, mlr_pipeops_nn_glu, mlr_pipeops_nn_hardshrink, mlr_pipeops_nn_hardsigmoid, mlr_pipeops_nn_hardtanh, mlr_pipeops_nn_head, mlr_pipeops_nn_identity, mlr_pipeops_nn_layer_norm, mlr_pipeops_nn_leaky_relu, mlr_pipeops_nn_linear, mlr_pipeops_nn_log_sigmoid, mlr_pipeops_nn_max_pool1d, mlr_pipeops_nn_max_pool2d, mlr_pipeops_nn_max_pool3d, mlr_pipeops_nn_merge, mlr_pipeops_nn_merge_cat, mlr_pipeops_nn_merge_prod, mlr_pipeops_nn_merge_sum, mlr_pipeops_nn_prelu, mlr_pipeops_nn_reglu, mlr_pipeops_nn_relu, mlr_pipeops_nn_relu6, mlr_pipeops_nn_reshape, mlr_pipeops_nn_rrelu, mlr_pipeops_nn_selu, mlr_pipeops_nn_sigmoid, mlr_pipeops_nn_softmax, mlr_pipeops_nn_softplus, mlr_pipeops_nn_softshrink, mlr_pipeops_nn_softsign, mlr_pipeops_nn_squeeze, mlr_pipeops_nn_tanh, mlr_pipeops_nn_tanhshrink, mlr_pipeops_nn_threshold, mlr_pipeops_nn_tokenizer_categ, mlr_pipeops_nn_tokenizer_num, mlr_pipeops_nn_unsqueeze, mlr_pipeops_torch_ingress, mlr_pipeops_torch_ingress_categ, mlr_pipeops_torch_ingress_ltnsr, mlr_pipeops_torch_ingress_num, mlr_pipeops_torch_loss, mlr_pipeops_torch_model, mlr_pipeops_torch_model_classif, mlr_pipeops_torch_model_regr

Examples


# Construct the PipeOp
pipeop = po("nn_conv2d", kernel_size = 10, out_channels = 1)
pipeop
# The available parameters
pipeop$param_set
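# Additional sketch (not part of the original examples), mirroring the settings above
# but using torch directly, with in_channels given explicitly (the PipeOp infers it).
x = torch::torch_randn(2, 3, 32, 32)          # (batch, channels, H, W)
conv = torch::nn_conv2d(in_channels = 3, out_channels = 1, kernel_size = 10)
conv(x)$shape                                 # (2, 1, 23, 23)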


3D Convolution

Description

Applies a 3D convolution over an input image composed of several input planes.

nn_module

Calls torch::nn_conv3d() when trained. The parameter in_channels is inferred from the second dimension of the input tensor.

Input and Output Channels

One input channel called "input" and one output channel called "output". For an explanation see PipeOpTorch.

State

The state is the value calculated by the public method ⁠$shapes_out()⁠.

Parameters

Super classes

mlr3pipelines::PipeOp -> mlr3torch::PipeOpTorch -> mlr3torch::PipeOpTorchConv -> PipeOpTorchConv3D

Methods

Public methods

Inherited methods

Method new()

Creates a new instance of this R6 class.

Usage
PipeOpTorchConv3D$new(id = "nn_conv3d", param_vals = list())
Arguments
id

(character(1))
Identifier of the resulting object.

param_vals

(list())
List of hyperparameter settings, overwriting the hyperparameter settings that would otherwise be set during construction.


Method clone()

The objects of this class are cloneable with this method.

Usage
PipeOpTorchConv3D$clone(deep = FALSE)
Arguments
deep

Whether to make a deep clone.

See Also

Other PipeOps: mlr_pipeops_nn_adaptive_avg_pool1d, mlr_pipeops_nn_adaptive_avg_pool2d, mlr_pipeops_nn_adaptive_avg_pool3d, mlr_pipeops_nn_avg_pool1d, mlr_pipeops_nn_avg_pool2d, mlr_pipeops_nn_avg_pool3d, mlr_pipeops_nn_batch_norm1d, mlr_pipeops_nn_batch_norm2d, mlr_pipeops_nn_batch_norm3d, mlr_pipeops_nn_block, mlr_pipeops_nn_celu, mlr_pipeops_nn_conv1d, mlr_pipeops_nn_conv2d, mlr_pipeops_nn_conv_transpose1d, mlr_pipeops_nn_conv_transpose2d, mlr_pipeops_nn_conv_transpose3d, mlr_pipeops_nn_dropout, mlr_pipeops_nn_elu, mlr_pipeops_nn_flatten, mlr_pipeops_nn_ft_cls, mlr_pipeops_nn_ft_transformer_block, mlr_pipeops_nn_geglu, mlr_pipeops_nn_gelu, mlr_pipeops_nn_glu, mlr_pipeops_nn_hardshrink, mlr_pipeops_nn_hardsigmoid, mlr_pipeops_nn_hardtanh, mlr_pipeops_nn_head, mlr_pipeops_nn_identity, mlr_pipeops_nn_layer_norm, mlr_pipeops_nn_leaky_relu, mlr_pipeops_nn_linear, mlr_pipeops_nn_log_sigmoid, mlr_pipeops_nn_max_pool1d, mlr_pipeops_nn_max_pool2d, mlr_pipeops_nn_max_pool3d, mlr_pipeops_nn_merge, mlr_pipeops_nn_merge_cat, mlr_pipeops_nn_merge_prod, mlr_pipeops_nn_merge_sum, mlr_pipeops_nn_prelu, mlr_pipeops_nn_reglu, mlr_pipeops_nn_relu, mlr_pipeops_nn_relu6, mlr_pipeops_nn_reshape, mlr_pipeops_nn_rrelu, mlr_pipeops_nn_selu, mlr_pipeops_nn_sigmoid, mlr_pipeops_nn_softmax, mlr_pipeops_nn_softplus, mlr_pipeops_nn_softshrink, mlr_pipeops_nn_softsign, mlr_pipeops_nn_squeeze, mlr_pipeops_nn_tanh, mlr_pipeops_nn_tanhshrink, mlr_pipeops_nn_threshold, mlr_pipeops_nn_tokenizer_categ, mlr_pipeops_nn_tokenizer_num, mlr_pipeops_nn_unsqueeze, mlr_pipeops_torch_ingress, mlr_pipeops_torch_ingress_categ, mlr_pipeops_torch_ingress_ltnsr, mlr_pipeops_torch_ingress_num, mlr_pipeops_torch_loss, mlr_pipeops_torch_model, mlr_pipeops_torch_model_classif, mlr_pipeops_torch_model_regr

Examples


# Construct the PipeOp
pipeop = po("nn_conv3d", kernel_size = 10, out_channels = 1)
pipeop
# The available parameters
pipeop$param_set
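# Additional sketch (not part of the original examples), mirroring the settings above
# but using torch directly, with in_channels given explicitly (the PipeOp infers it).
x = torch::torch_randn(2, 1, 16, 16, 16)      # (batch, channels, D, H, W)
conv = torch::nn_conv3d(in_channels = 1, out_channels = 1, kernel_size = 10)
conv(x)$shape                                 # (2, 1, 7, 7, 7)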


Transpose 1D Convolution

Description

Applies a 1D transposed convolution operator over an input signal composed of several input planes, sometimes also called "deconvolution".

nn_module

Calls nn_conv_transpose1d. The parameter in_channels is inferred as the second dimension of the input tensor.

Parameters

State

The state is the value calculated by the public method ⁠$shapes_out()⁠.

Input and Output Channels

One input channel called "input" and one output channel called "output". For an explanation see PipeOpTorch.

Super classes

mlr3pipelines::PipeOp -> mlr3torch::PipeOpTorch -> mlr3torch::PipeOpTorchConvTranspose -> PipeOpTorchConvTranspose1D

Methods

Public methods

Inherited methods

Method new()

Creates a new instance of this R6 class.

Usage
PipeOpTorchConvTranspose1D$new(id = "nn_conv_transpose1d", param_vals = list())
Arguments
id

(character(1))
Identifier of the resulting object.

param_vals

(list())
List of hyperparameter settings, overwriting the hyperparameter settings that would otherwise be set during construction.


Method clone()

The objects of this class are cloneable with this method.

Usage
PipeOpTorchConvTranspose1D$clone(deep = FALSE)
Arguments
deep

Whether to make a deep clone.

See Also

Other PipeOps: mlr_pipeops_nn_adaptive_avg_pool1d, mlr_pipeops_nn_adaptive_avg_pool2d, mlr_pipeops_nn_adaptive_avg_pool3d, mlr_pipeops_nn_avg_pool1d, mlr_pipeops_nn_avg_pool2d, mlr_pipeops_nn_avg_pool3d, mlr_pipeops_nn_batch_norm1d, mlr_pipeops_nn_batch_norm2d, mlr_pipeops_nn_batch_norm3d, mlr_pipeops_nn_block, mlr_pipeops_nn_celu, mlr_pipeops_nn_conv1d, mlr_pipeops_nn_conv2d, mlr_pipeops_nn_conv3d, mlr_pipeops_nn_conv_transpose2d, mlr_pipeops_nn_conv_transpose3d, mlr_pipeops_nn_dropout, mlr_pipeops_nn_elu, mlr_pipeops_nn_flatten, mlr_pipeops_nn_ft_cls, mlr_pipeops_nn_ft_transformer_block, mlr_pipeops_nn_geglu, mlr_pipeops_nn_gelu, mlr_pipeops_nn_glu, mlr_pipeops_nn_hardshrink, mlr_pipeops_nn_hardsigmoid, mlr_pipeops_nn_hardtanh, mlr_pipeops_nn_head, mlr_pipeops_nn_identity, mlr_pipeops_nn_layer_norm, mlr_pipeops_nn_leaky_relu, mlr_pipeops_nn_linear, mlr_pipeops_nn_log_sigmoid, mlr_pipeops_nn_max_pool1d, mlr_pipeops_nn_max_pool2d, mlr_pipeops_nn_max_pool3d, mlr_pipeops_nn_merge, mlr_pipeops_nn_merge_cat, mlr_pipeops_nn_merge_prod, mlr_pipeops_nn_merge_sum, mlr_pipeops_nn_prelu, mlr_pipeops_nn_reglu, mlr_pipeops_nn_relu, mlr_pipeops_nn_relu6, mlr_pipeops_nn_reshape, mlr_pipeops_nn_rrelu, mlr_pipeops_nn_selu, mlr_pipeops_nn_sigmoid, mlr_pipeops_nn_softmax, mlr_pipeops_nn_softplus, mlr_pipeops_nn_softshrink, mlr_pipeops_nn_softsign, mlr_pipeops_nn_squeeze, mlr_pipeops_nn_tanh, mlr_pipeops_nn_tanhshrink, mlr_pipeops_nn_threshold, mlr_pipeops_nn_tokenizer_categ, mlr_pipeops_nn_tokenizer_num, mlr_pipeops_nn_unsqueeze, mlr_pipeops_torch_ingress, mlr_pipeops_torch_ingress_categ, mlr_pipeops_torch_ingress_ltnsr, mlr_pipeops_torch_ingress_num, mlr_pipeops_torch_loss, mlr_pipeops_torch_model, mlr_pipeops_torch_model_classif, mlr_pipeops_torch_model_regr

Examples


# Construct the PipeOp
pipeop = po("nn_conv_transpose1d", kernel_size = 3, out_channels = 2)
pipeop
# The available parameters
pipeop$param_set
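# Additional sketch (not part of the original examples): a transposed convolution
# upsamples the length dimension, here from 8 to 8 + kernel_size - 1 = 10.
x = torch::torch_randn(2, 4, 8)
deconv = torch::nn_conv_transpose1d(in_channels = 4, out_channels = 2, kernel_size = 3)
deconv(x)$shape  # (2, 2, 10)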


Transpose 2D Convolution

Description

Applies a 2D transposed convolution operator over an input image composed of several input planes, sometimes also called "deconvolution".

nn_module

Calls nn_conv_transpose2d. The parameter in_channels is inferred as the second dimension of the input tensor.

Input and Output Channels

One input channel called "input" and one output channel called "output". For an explanation see PipeOpTorch.

State

The state is the value calculated by the public method ⁠$shapes_out()⁠.

Parameters

Super classes

mlr3pipelines::PipeOp -> mlr3torch::PipeOpTorch -> mlr3torch::PipeOpTorchConvTranspose -> PipeOpTorchConvTranspose2D

Methods

Public methods

Inherited methods

Method new()

Creates a new instance of this R6 class.

Usage
PipeOpTorchConvTranspose2D$new(id = "nn_conv_transpose2d", param_vals = list())
Arguments
id

(character(1))
Identifier of the resulting object.

param_vals

(list())
List of hyperparameter settings, overwriting the hyperparameter settings that would otherwise be set during construction.


Method clone()

The objects of this class are cloneable with this method.

Usage
PipeOpTorchConvTranspose2D$clone(deep = FALSE)
Arguments
deep

Whether to make a deep clone.

See Also

Other PipeOps: mlr_pipeops_nn_adaptive_avg_pool1d, mlr_pipeops_nn_adaptive_avg_pool2d, mlr_pipeops_nn_adaptive_avg_pool3d, mlr_pipeops_nn_avg_pool1d, mlr_pipeops_nn_avg_pool2d, mlr_pipeops_nn_avg_pool3d, mlr_pipeops_nn_batch_norm1d, mlr_pipeops_nn_batch_norm2d, mlr_pipeops_nn_batch_norm3d, mlr_pipeops_nn_block, mlr_pipeops_nn_celu, mlr_pipeops_nn_conv1d, mlr_pipeops_nn_conv2d, mlr_pipeops_nn_conv3d, mlr_pipeops_nn_conv_transpose1d, mlr_pipeops_nn_conv_transpose3d, mlr_pipeops_nn_dropout, mlr_pipeops_nn_elu, mlr_pipeops_nn_flatten, mlr_pipeops_nn_ft_cls, mlr_pipeops_nn_ft_transformer_block, mlr_pipeops_nn_geglu, mlr_pipeops_nn_gelu, mlr_pipeops_nn_glu, mlr_pipeops_nn_hardshrink, mlr_pipeops_nn_hardsigmoid, mlr_pipeops_nn_hardtanh, mlr_pipeops_nn_head, mlr_pipeops_nn_identity, mlr_pipeops_nn_layer_norm, mlr_pipeops_nn_leaky_relu, mlr_pipeops_nn_linear, mlr_pipeops_nn_log_sigmoid, mlr_pipeops_nn_max_pool1d, mlr_pipeops_nn_max_pool2d, mlr_pipeops_nn_max_pool3d, mlr_pipeops_nn_merge, mlr_pipeops_nn_merge_cat, mlr_pipeops_nn_merge_prod, mlr_pipeops_nn_merge_sum, mlr_pipeops_nn_prelu, mlr_pipeops_nn_reglu, mlr_pipeops_nn_relu, mlr_pipeops_nn_relu6, mlr_pipeops_nn_reshape, mlr_pipeops_nn_rrelu, mlr_pipeops_nn_selu, mlr_pipeops_nn_sigmoid, mlr_pipeops_nn_softmax, mlr_pipeops_nn_softplus, mlr_pipeops_nn_softshrink, mlr_pipeops_nn_softsign, mlr_pipeops_nn_squeeze, mlr_pipeops_nn_tanh, mlr_pipeops_nn_tanhshrink, mlr_pipeops_nn_threshold, mlr_pipeops_nn_tokenizer_categ, mlr_pipeops_nn_tokenizer_num, mlr_pipeops_nn_unsqueeze, mlr_pipeops_torch_ingress, mlr_pipeops_torch_ingress_categ, mlr_pipeops_torch_ingress_ltnsr, mlr_pipeops_torch_ingress_num, mlr_pipeops_torch_loss, mlr_pipeops_torch_model, mlr_pipeops_torch_model_classif, mlr_pipeops_torch_model_regr

Examples


# Construct the PipeOp
pipeop = po("nn_conv_transpose2d", kernel_size = 3, out_channels = 2)
pipeop
# The available parameters
pipeop$param_set
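# Additional sketch (not part of the original examples), mirroring the settings above
# but using torch directly, with in_channels given explicitly (the PipeOp infers it).
x = torch::torch_randn(2, 4, 8, 8)
deconv = torch::nn_conv_transpose2d(in_channels = 4, out_channels = 2, kernel_size = 3)
deconv(x)$shape  # (2, 2, 10, 10)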


Transpose 3D Convolution

Description

Applies a 3D transposed convolution operator over an input image composed of several input planes, sometimes also called "deconvolution".

nn_module

Calls nn_conv_transpose3d. The parameter in_channels is inferred as the second dimension of the input tensor.

Input and Output Channels

One input channel called "input" and one output channel called "output". For an explanation see PipeOpTorch.

State

The state is the value calculated by the public method ⁠$shapes_out()⁠.

Parameters

Super classes

mlr3pipelines::PipeOp -> mlr3torch::PipeOpTorch -> mlr3torch::PipeOpTorchConvTranspose -> PipeOpTorchConvTranspose3D

Methods

Public methods

Inherited methods

Method new()

Creates a new instance of this R6 class.

Usage
PipeOpTorchConvTranspose3D$new(id = "nn_conv_transpose3d", param_vals = list())
Arguments
id

(character(1))
Identifier of the resulting object.

param_vals

(list())
List of hyperparameter settings, overwriting the hyperparameter settings that would otherwise be set during construction.


Method clone()

The objects of this class are cloneable with this method.

Usage
PipeOpTorchConvTranspose3D$clone(deep = FALSE)
Arguments
deep

Whether to make a deep clone.

See Also

Other PipeOps: mlr_pipeops_nn_adaptive_avg_pool1d, mlr_pipeops_nn_adaptive_avg_pool2d, mlr_pipeops_nn_adaptive_avg_pool3d, mlr_pipeops_nn_avg_pool1d, mlr_pipeops_nn_avg_pool2d, mlr_pipeops_nn_avg_pool3d, mlr_pipeops_nn_batch_norm1d, mlr_pipeops_nn_batch_norm2d, mlr_pipeops_nn_batch_norm3d, mlr_pipeops_nn_block, mlr_pipeops_nn_celu, mlr_pipeops_nn_conv1d, mlr_pipeops_nn_conv2d, mlr_pipeops_nn_conv3d, mlr_pipeops_nn_conv_transpose1d, mlr_pipeops_nn_conv_transpose2d, mlr_pipeops_nn_dropout, mlr_pipeops_nn_elu, mlr_pipeops_nn_flatten, mlr_pipeops_nn_ft_cls, mlr_pipeops_nn_ft_transformer_block, mlr_pipeops_nn_geglu, mlr_pipeops_nn_gelu, mlr_pipeops_nn_glu, mlr_pipeops_nn_hardshrink, mlr_pipeops_nn_hardsigmoid, mlr_pipeops_nn_hardtanh, mlr_pipeops_nn_head, mlr_pipeops_nn_identity, mlr_pipeops_nn_layer_norm, mlr_pipeops_nn_leaky_relu, mlr_pipeops_nn_linear, mlr_pipeops_nn_log_sigmoid, mlr_pipeops_nn_max_pool1d, mlr_pipeops_nn_max_pool2d, mlr_pipeops_nn_max_pool3d, mlr_pipeops_nn_merge, mlr_pipeops_nn_merge_cat, mlr_pipeops_nn_merge_prod, mlr_pipeops_nn_merge_sum, mlr_pipeops_nn_prelu, mlr_pipeops_nn_reglu, mlr_pipeops_nn_relu, mlr_pipeops_nn_relu6, mlr_pipeops_nn_reshape, mlr_pipeops_nn_rrelu, mlr_pipeops_nn_selu, mlr_pipeops_nn_sigmoid, mlr_pipeops_nn_softmax, mlr_pipeops_nn_softplus, mlr_pipeops_nn_softshrink, mlr_pipeops_nn_softsign, mlr_pipeops_nn_squeeze, mlr_pipeops_nn_tanh, mlr_pipeops_nn_tanhshrink, mlr_pipeops_nn_threshold, mlr_pipeops_nn_tokenizer_categ, mlr_pipeops_nn_tokenizer_num, mlr_pipeops_nn_unsqueeze, mlr_pipeops_torch_ingress, mlr_pipeops_torch_ingress_categ, mlr_pipeops_torch_ingress_ltnsr, mlr_pipeops_torch_ingress_num, mlr_pipeops_torch_loss, mlr_pipeops_torch_model, mlr_pipeops_torch_model_classif, mlr_pipeops_torch_model_regr

Examples


# Construct the PipeOp
pipeop = po("nn_conv_transpose3d", kernel_size = 3, out_channels = 2)
pipeop
# The available parameters
pipeop$param_set
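# Additional sketch (not part of the original examples), mirroring the settings above
# but using torch directly, with in_channels given explicitly (the PipeOp infers it).
x = torch::torch_randn(2, 4, 8, 8, 8)
deconv = torch::nn_conv_transpose3d(in_channels = 4, out_channels = 2, kernel_size = 3)
deconv(x)$shape  # (2, 2, 10, 10, 10)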


Dropout

Description

During training, randomly zeroes some of the elements of the input tensor with probability p using samples from a Bernoulli distribution.

nn_module

Calls torch::nn_dropout() when trained.

Parameters

Input and Output Channels

One input channel called "input" and one output channel called "output". For an explanation see PipeOpTorch.

State

The state is the value calculated by the public method ⁠$shapes_out()⁠.

Super classes

mlr3pipelines::PipeOp -> mlr3torch::PipeOpTorch -> PipeOpTorchDropout

Methods

Public methods

Inherited methods

Method new()

Creates a new instance of this R6 class.

Usage
PipeOpTorchDropout$new(id = "nn_dropout", param_vals = list())
Arguments
id

(character(1))
Identifier of the resulting object.

param_vals

(list())
List of hyperparameter settings, overwriting the hyperparameter settings that would otherwise be set during construction.


Method clone()

The objects of this class are cloneable with this method.

Usage
PipeOpTorchDropout$clone(deep = FALSE)
Arguments
deep

Whether to make a deep clone.

See Also

Other PipeOps: mlr_pipeops_nn_adaptive_avg_pool1d, mlr_pipeops_nn_adaptive_avg_pool2d, mlr_pipeops_nn_adaptive_avg_pool3d, mlr_pipeops_nn_avg_pool1d, mlr_pipeops_nn_avg_pool2d, mlr_pipeops_nn_avg_pool3d, mlr_pipeops_nn_batch_norm1d, mlr_pipeops_nn_batch_norm2d, mlr_pipeops_nn_batch_norm3d, mlr_pipeops_nn_block, mlr_pipeops_nn_celu, mlr_pipeops_nn_conv1d, mlr_pipeops_nn_conv2d, mlr_pipeops_nn_conv3d, mlr_pipeops_nn_conv_transpose1d, mlr_pipeops_nn_conv_transpose2d, mlr_pipeops_nn_conv_transpose3d, mlr_pipeops_nn_elu, mlr_pipeops_nn_flatten, mlr_pipeops_nn_ft_cls, mlr_pipeops_nn_ft_transformer_block, mlr_pipeops_nn_geglu, mlr_pipeops_nn_gelu, mlr_pipeops_nn_glu, mlr_pipeops_nn_hardshrink, mlr_pipeops_nn_hardsigmoid, mlr_pipeops_nn_hardtanh, mlr_pipeops_nn_head, mlr_pipeops_nn_identity, mlr_pipeops_nn_layer_norm, mlr_pipeops_nn_leaky_relu, mlr_pipeops_nn_linear, mlr_pipeops_nn_log_sigmoid, mlr_pipeops_nn_max_pool1d, mlr_pipeops_nn_max_pool2d, mlr_pipeops_nn_max_pool3d, mlr_pipeops_nn_merge, mlr_pipeops_nn_merge_cat, mlr_pipeops_nn_merge_prod, mlr_pipeops_nn_merge_sum, mlr_pipeops_nn_prelu, mlr_pipeops_nn_reglu, mlr_pipeops_nn_relu, mlr_pipeops_nn_relu6, mlr_pipeops_nn_reshape, mlr_pipeops_nn_rrelu, mlr_pipeops_nn_selu, mlr_pipeops_nn_sigmoid, mlr_pipeops_nn_softmax, mlr_pipeops_nn_softplus, mlr_pipeops_nn_softshrink, mlr_pipeops_nn_softsign, mlr_pipeops_nn_squeeze, mlr_pipeops_nn_tanh, mlr_pipeops_nn_tanhshrink, mlr_pipeops_nn_threshold, mlr_pipeops_nn_tokenizer_categ, mlr_pipeops_nn_tokenizer_num, mlr_pipeops_nn_unsqueeze, mlr_pipeops_torch_ingress, mlr_pipeops_torch_ingress_categ, mlr_pipeops_torch_ingress_ltnsr, mlr_pipeops_torch_ingress_num, mlr_pipeops_torch_loss, mlr_pipeops_torch_model, mlr_pipeops_torch_model_classif, mlr_pipeops_torch_model_regr

Examples


# Construct the PipeOp
pipeop = po("nn_dropout")
pipeop
# The available parameters
pipeop$param_set
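# Additional sketch (not part of the original examples): in training mode the wrapped
# torch module zeroes elements with probability p and rescales the rest by 1 / (1 - p).
x = torch::torch_ones(2, 4)
torch::nn_dropout(p = 0.5)(x)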


ELU Activation Function

Description

Applies element-wise the function ELU(x) = max(0, x) + min(0, \alpha * (exp(x) - 1)).

nn_module

Calls torch::nn_elu() when trained.

Parameters

Input and Output Channels

One input channel called "input" and one output channel called "output". For an explanation see PipeOpTorch.

State

The state is the value calculated by the public method ⁠$shapes_out()⁠.

Super classes

mlr3pipelines::PipeOp -> mlr3torch::PipeOpTorch -> PipeOpTorchELU

Methods

Public methods

Inherited methods

Method new()

Creates a new instance of this R6 class.

Usage
PipeOpTorchELU$new(id = "nn_elu", param_vals = list())
Arguments
id

(character(1))
Identifier of the resulting object.

param_vals

(list())
List of hyperparameter settings, overwriting the hyperparameter settings that would otherwise be set during construction.


Method clone()

The objects of this class are cloneable with this method.

Usage
PipeOpTorchELU$clone(deep = FALSE)
Arguments
deep

Whether to make a deep clone.

See Also

Other PipeOps: mlr_pipeops_nn_adaptive_avg_pool1d, mlr_pipeops_nn_adaptive_avg_pool2d, mlr_pipeops_nn_adaptive_avg_pool3d, mlr_pipeops_nn_avg_pool1d, mlr_pipeops_nn_avg_pool2d, mlr_pipeops_nn_avg_pool3d, mlr_pipeops_nn_batch_norm1d, mlr_pipeops_nn_batch_norm2d, mlr_pipeops_nn_batch_norm3d, mlr_pipeops_nn_block, mlr_pipeops_nn_celu, mlr_pipeops_nn_conv1d, mlr_pipeops_nn_conv2d, mlr_pipeops_nn_conv3d, mlr_pipeops_nn_conv_transpose1d, mlr_pipeops_nn_conv_transpose2d, mlr_pipeops_nn_conv_transpose3d, mlr_pipeops_nn_dropout, mlr_pipeops_nn_flatten, mlr_pipeops_nn_ft_cls, mlr_pipeops_nn_ft_transformer_block, mlr_pipeops_nn_geglu, mlr_pipeops_nn_gelu, mlr_pipeops_nn_glu, mlr_pipeops_nn_hardshrink, mlr_pipeops_nn_hardsigmoid, mlr_pipeops_nn_hardtanh, mlr_pipeops_nn_head, mlr_pipeops_nn_identity, mlr_pipeops_nn_layer_norm, mlr_pipeops_nn_leaky_relu, mlr_pipeops_nn_linear, mlr_pipeops_nn_log_sigmoid, mlr_pipeops_nn_max_pool1d, mlr_pipeops_nn_max_pool2d, mlr_pipeops_nn_max_pool3d, mlr_pipeops_nn_merge, mlr_pipeops_nn_merge_cat, mlr_pipeops_nn_merge_prod, mlr_pipeops_nn_merge_sum, mlr_pipeops_nn_prelu, mlr_pipeops_nn_reglu, mlr_pipeops_nn_relu, mlr_pipeops_nn_relu6, mlr_pipeops_nn_reshape, mlr_pipeops_nn_rrelu, mlr_pipeops_nn_selu, mlr_pipeops_nn_sigmoid, mlr_pipeops_nn_softmax, mlr_pipeops_nn_softplus, mlr_pipeops_nn_softshrink, mlr_pipeops_nn_softsign, mlr_pipeops_nn_squeeze, mlr_pipeops_nn_tanh, mlr_pipeops_nn_tanhshrink, mlr_pipeops_nn_threshold, mlr_pipeops_nn_tokenizer_categ, mlr_pipeops_nn_tokenizer_num, mlr_pipeops_nn_unsqueeze, mlr_pipeops_torch_ingress, mlr_pipeops_torch_ingress_categ, mlr_pipeops_torch_ingress_ltnsr, mlr_pipeops_torch_ingress_num, mlr_pipeops_torch_loss, mlr_pipeops_torch_model, mlr_pipeops_torch_model_classif, mlr_pipeops_torch_model_regr

Examples


# Construct the PipeOp
pipeop = po("nn_elu")
pipeop
# The available parameters
pipeop$param_set
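# Additional sketch (not part of the original examples): the wrapped activation applied
# element-wise; negative inputs are mapped to alpha * (exp(x) - 1).
x = torch::torch_tensor(c(-2, -1, 0, 1, 2))
torch::nn_elu()(x)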


Flattens a Tensor

Description

Flattens a tensor. For use with nn_sequential.

nn_module

Calls torch::nn_flatten() when trained.

Parameters

start_dim :: integer(1)
At which dimension to start flattening. Default is 2.

end_dim :: integer(1)
At which dimension to stop flattening. Default is -1.

Input and Output Channels

One input channel called "input" and one output channel called "output". For an explanation see PipeOpTorch.

State

The state is the value calculated by the public method ⁠$shapes_out()⁠.

Super classes

mlr3pipelines::PipeOp -> mlr3torch::PipeOpTorch -> PipeOpTorchFlatten

Methods

Public methods

Inherited methods

Method new()

Creates a new instance of this R6 class.

Usage
PipeOpTorchFlatten$new(id = "nn_flatten", param_vals = list())
Arguments
id

(character(1))
Identifier of the resulting object.

param_vals

(list())
List of hyperparameter settings, overwriting the hyperparameter settings that would otherwise be set during construction.


Method clone()

The objects of this class are cloneable with this method.

Usage
PipeOpTorchFlatten$clone(deep = FALSE)
Arguments
deep

Whether to make a deep clone.

See Also

Other PipeOps: mlr_pipeops_nn_adaptive_avg_pool1d, mlr_pipeops_nn_adaptive_avg_pool2d, mlr_pipeops_nn_adaptive_avg_pool3d, mlr_pipeops_nn_avg_pool1d, mlr_pipeops_nn_avg_pool2d, mlr_pipeops_nn_avg_pool3d, mlr_pipeops_nn_batch_norm1d, mlr_pipeops_nn_batch_norm2d, mlr_pipeops_nn_batch_norm3d, mlr_pipeops_nn_block, mlr_pipeops_nn_celu, mlr_pipeops_nn_conv1d, mlr_pipeops_nn_conv2d, mlr_pipeops_nn_conv3d, mlr_pipeops_nn_conv_transpose1d, mlr_pipeops_nn_conv_transpose2d, mlr_pipeops_nn_conv_transpose3d, mlr_pipeops_nn_dropout, mlr_pipeops_nn_elu, mlr_pipeops_nn_ft_cls, mlr_pipeops_nn_ft_transformer_block, mlr_pipeops_nn_geglu, mlr_pipeops_nn_gelu, mlr_pipeops_nn_glu, mlr_pipeops_nn_hardshrink, mlr_pipeops_nn_hardsigmoid, mlr_pipeops_nn_hardtanh, mlr_pipeops_nn_head, mlr_pipeops_nn_identity, mlr_pipeops_nn_layer_norm, mlr_pipeops_nn_leaky_relu, mlr_pipeops_nn_linear, mlr_pipeops_nn_log_sigmoid, mlr_pipeops_nn_max_pool1d, mlr_pipeops_nn_max_pool2d, mlr_pipeops_nn_max_pool3d, mlr_pipeops_nn_merge, mlr_pipeops_nn_merge_cat, mlr_pipeops_nn_merge_prod, mlr_pipeops_nn_merge_sum, mlr_pipeops_nn_prelu, mlr_pipeops_nn_reglu, mlr_pipeops_nn_relu, mlr_pipeops_nn_relu6, mlr_pipeops_nn_reshape, mlr_pipeops_nn_rrelu, mlr_pipeops_nn_selu, mlr_pipeops_nn_sigmoid, mlr_pipeops_nn_softmax, mlr_pipeops_nn_softplus, mlr_pipeops_nn_softshrink, mlr_pipeops_nn_softsign, mlr_pipeops_nn_squeeze, mlr_pipeops_nn_tanh, mlr_pipeops_nn_tanhshrink, mlr_pipeops_nn_threshold, mlr_pipeops_nn_tokenizer_categ, mlr_pipeops_nn_tokenizer_num, mlr_pipeops_nn_unsqueeze, mlr_pipeops_torch_ingress, mlr_pipeops_torch_ingress_categ, mlr_pipeops_torch_ingress_ltnsr, mlr_pipeops_torch_ingress_num, mlr_pipeops_torch_loss, mlr_pipeops_torch_model, mlr_pipeops_torch_model_classif, mlr_pipeops_torch_model_regr

Examples


# Construct the PipeOp
pipeop = po("nn_flatten")
pipeop
# The available parameters
pipeop$param_set
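# Additional sketch (not part of the original examples): the wrapped torch module
# flattens everything from start_dim on, here (2, 3, 4, 5) -> (2, 60).
x = torch::torch_randn(2, 3, 4, 5)
torch::nn_flatten(start_dim = 2)(x)$shape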


Custom Function

Description

Applies a user-supplied function to a tensor.

Parameters

By default, the parameters are inferred as all arguments of the function fn except the first one. It is also possible to specify them explicitly via the param_set constructor argument.

Input and Output Channels

One input channel called "input" and one output channel called "output". For an explanation see PipeOpTorch.

Super classes

mlr3pipelines::PipeOp -> mlr3torch::PipeOpTorch -> PipeOpTorchFn

Methods

Public methods

Inherited methods

Method new()

Creates a new instance of this R6 class.

Usage
PipeOpTorchFn$new(
  fn,
  id = "nn_fn",
  param_vals = list(),
  param_set = NULL,
  shapes_out = NULL
)
Arguments
fn

(function)
The function to be applied. Takes a torch tensor as first argument and returns a torch tensor.

id

(character(1))
Identifier of the resulting object.

param_vals

(list())
List of hyperparameter settings, overwriting the hyperparameter settings that would otherwise be set during construction.

param_set

(ParamSet or NULL)
A ParamSet wrapping the arguments to fn. If omitted, then the ParamSet for this PipeOp will be inferred from the function signature.

shapes_out

(function or NULL)
A function that computes the output shapes of fn. See PipeOpTorch's .shapes_out() method for details on the parameters, and PipeOpTaskPreprocTorch for details on how the shapes are inferred when this argument is NULL.


Method clone()

The objects of this class are cloneable with this method.

Usage
PipeOpTorchFn$clone(deep = FALSE)
Arguments
deep

Whether to make a deep clone.

Examples


custom_fn =  function(x, a) x / a
obj = po("nn_fn", fn = custom_fn, a = 2)
obj$param_set

graph = po("torch_ingress_ltnsr") %>>% obj

task = tsk("lazy_iris")$filter(1)
tnsr = materialize(task$data()$x)[[1]]

md_trained = graph$train(task)
trained = md_trained[[1]]$graph$train(tnsr)

trained[[1]]

custom_fn(tnsr, a = 2)
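# Additional sketch (not part of the original examples): the inferred parameter `a`
# can be changed like any other hyperparameter after construction.
obj$param_set$set_values(a = 3)
obj$param_set$values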


CLS Token for FT-Transformer

Description

Concatenates a CLS token to the input as the last feature. The input shape is expected to be ⁠(batch, n_features, d_token)⁠ and the output shape is ⁠(batch, n_features + 1, d_token)⁠.

This is used in the LearnerTorchFTTransformer.

nn_module

Calls nn_ft_cls() when trained.

State

The state is the value calculated by the public method ⁠$shapes_out()⁠.

Super classes

mlr3pipelines::PipeOp -> mlr3torch::PipeOpTorch -> PipeOpTorchFTCLS

Methods

Public methods

Inherited methods

Method new()

Creates a new instance of this R6 class.

Usage
PipeOpTorchFTCLS$new(id = "nn_ft_cls", param_vals = list())
Arguments
id

(character(1))
Identifier of the resulting object.

param_vals

(list())
List of hyperparameter settings, overwriting the hyperparameter settings that would otherwise be set during construction.


Method clone()

The objects of this class are cloneable with this method.

Usage
PipeOpTorchFTCLS$clone(deep = FALSE)
Arguments
deep

Whether to make a deep clone.

See Also

Other PipeOps: mlr_pipeops_nn_adaptive_avg_pool1d, mlr_pipeops_nn_adaptive_avg_pool2d, mlr_pipeops_nn_adaptive_avg_pool3d, mlr_pipeops_nn_avg_pool1d, mlr_pipeops_nn_avg_pool2d, mlr_pipeops_nn_avg_pool3d, mlr_pipeops_nn_batch_norm1d, mlr_pipeops_nn_batch_norm2d, mlr_pipeops_nn_batch_norm3d, mlr_pipeops_nn_block, mlr_pipeops_nn_celu, mlr_pipeops_nn_conv1d, mlr_pipeops_nn_conv2d, mlr_pipeops_nn_conv3d, mlr_pipeops_nn_conv_transpose1d, mlr_pipeops_nn_conv_transpose2d, mlr_pipeops_nn_conv_transpose3d, mlr_pipeops_nn_dropout, mlr_pipeops_nn_elu, mlr_pipeops_nn_flatten, mlr_pipeops_nn_ft_transformer_block, mlr_pipeops_nn_geglu, mlr_pipeops_nn_gelu, mlr_pipeops_nn_glu, mlr_pipeops_nn_hardshrink, mlr_pipeops_nn_hardsigmoid, mlr_pipeops_nn_hardtanh, mlr_pipeops_nn_head, mlr_pipeops_nn_identity, mlr_pipeops_nn_layer_norm, mlr_pipeops_nn_leaky_relu, mlr_pipeops_nn_linear, mlr_pipeops_nn_log_sigmoid, mlr_pipeops_nn_max_pool1d, mlr_pipeops_nn_max_pool2d, mlr_pipeops_nn_max_pool3d, mlr_pipeops_nn_merge, mlr_pipeops_nn_merge_cat, mlr_pipeops_nn_merge_prod, mlr_pipeops_nn_merge_sum, mlr_pipeops_nn_prelu, mlr_pipeops_nn_reglu, mlr_pipeops_nn_relu, mlr_pipeops_nn_relu6, mlr_pipeops_nn_reshape, mlr_pipeops_nn_rrelu, mlr_pipeops_nn_selu, mlr_pipeops_nn_sigmoid, mlr_pipeops_nn_softmax, mlr_pipeops_nn_softplus, mlr_pipeops_nn_softshrink, mlr_pipeops_nn_softsign, mlr_pipeops_nn_squeeze, mlr_pipeops_nn_tanh, mlr_pipeops_nn_tanhshrink, mlr_pipeops_nn_threshold, mlr_pipeops_nn_tokenizer_categ, mlr_pipeops_nn_tokenizer_num, mlr_pipeops_nn_unsqueeze, mlr_pipeops_torch_ingress, mlr_pipeops_torch_ingress_categ, mlr_pipeops_torch_ingress_ltnsr, mlr_pipeops_torch_ingress_num, mlr_pipeops_torch_loss, mlr_pipeops_torch_model, mlr_pipeops_torch_model_classif, mlr_pipeops_torch_model_regr

Examples


# Construct the PipeOp
pipeop = po("nn_ft_cls")
pipeop
# The available parameters
pipeop$param_set


Single Transformer Block for the FT-Transformer

Description

A transformer block consisting of a multi-head self-attention mechanism followed by a feed-forward network.

This is used in LearnerTorchFTTransformer.

nn_module

Calls nn_ft_transformer_block() when trained.

State

The state is the value calculated by the public method ⁠$shapes_out()⁠.

Super classes

mlr3pipelines::PipeOp -> mlr3torch::PipeOpTorch -> PipeOpTorchFTTransformerBlock

Methods

Public methods

Inherited methods

Method new()

Create a new instance of this R6 class.

Usage
PipeOpTorchFTTransformerBlock$new(
  id = "nn_ft_transformer_block",
  param_vals = list()
)
Arguments
id

(character(1))
Identifier of the resulting object.

param_vals

(list())
List of hyperparameter settings, overwriting the hyperparameter settings that would otherwise be set during construction.


Method clone()

The objects of this class are cloneable with this method.

Usage
PipeOpTorchFTTransformerBlock$clone(deep = FALSE)
Arguments
deep

Whether to make a deep clone.

See Also

Other PipeOps: mlr_pipeops_nn_adaptive_avg_pool1d, mlr_pipeops_nn_adaptive_avg_pool2d, mlr_pipeops_nn_adaptive_avg_pool3d, mlr_pipeops_nn_avg_pool1d, mlr_pipeops_nn_avg_pool2d, mlr_pipeops_nn_avg_pool3d, mlr_pipeops_nn_batch_norm1d, mlr_pipeops_nn_batch_norm2d, mlr_pipeops_nn_batch_norm3d, mlr_pipeops_nn_block, mlr_pipeops_nn_celu, mlr_pipeops_nn_conv1d, mlr_pipeops_nn_conv2d, mlr_pipeops_nn_conv3d, mlr_pipeops_nn_conv_transpose1d, mlr_pipeops_nn_conv_transpose2d, mlr_pipeops_nn_conv_transpose3d, mlr_pipeops_nn_dropout, mlr_pipeops_nn_elu, mlr_pipeops_nn_flatten, mlr_pipeops_nn_ft_cls, mlr_pipeops_nn_geglu, mlr_pipeops_nn_gelu, mlr_pipeops_nn_glu, mlr_pipeops_nn_hardshrink, mlr_pipeops_nn_hardsigmoid, mlr_pipeops_nn_hardtanh, mlr_pipeops_nn_head, mlr_pipeops_nn_identity, mlr_pipeops_nn_layer_norm, mlr_pipeops_nn_leaky_relu, mlr_pipeops_nn_linear, mlr_pipeops_nn_log_sigmoid, mlr_pipeops_nn_max_pool1d, mlr_pipeops_nn_max_pool2d, mlr_pipeops_nn_max_pool3d, mlr_pipeops_nn_merge, mlr_pipeops_nn_merge_cat, mlr_pipeops_nn_merge_prod, mlr_pipeops_nn_merge_sum, mlr_pipeops_nn_prelu, mlr_pipeops_nn_reglu, mlr_pipeops_nn_relu, mlr_pipeops_nn_relu6, mlr_pipeops_nn_reshape, mlr_pipeops_nn_rrelu, mlr_pipeops_nn_selu, mlr_pipeops_nn_sigmoid, mlr_pipeops_nn_softmax, mlr_pipeops_nn_softplus, mlr_pipeops_nn_softshrink, mlr_pipeops_nn_softsign, mlr_pipeops_nn_squeeze, mlr_pipeops_nn_tanh, mlr_pipeops_nn_tanhshrink, mlr_pipeops_nn_threshold, mlr_pipeops_nn_tokenizer_categ, mlr_pipeops_nn_tokenizer_num, mlr_pipeops_nn_unsqueeze, mlr_pipeops_torch_ingress, mlr_pipeops_torch_ingress_categ, mlr_pipeops_torch_ingress_ltnsr, mlr_pipeops_torch_ingress_num, mlr_pipeops_torch_loss, mlr_pipeops_torch_model, mlr_pipeops_torch_model_classif, mlr_pipeops_torch_model_regr

Examples


# Construct the PipeOp
pipeop = po("nn_ft_transformer_block")
pipeop
# The available parameters
pipeop$param_set
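# A hedged sketch: the block is usually used indirectly through the
# FT-Transformer learner rather than assembled by hand. The learner key
# "classif.ft_transformer" and the illustrative epochs/batch_size values
# are assumptions, not recommendations.
learner = lrn("classif.ft_transformer", epochs = 1, batch_size = 16)
learner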


GeGLU Activation Function

Description

The GELU-gated Gated Linear Unit (GeGLU) activation function; see nn_geglu for details.

Parameters

No parameters.

Input and Output Channels

One input channel called "input" and one output channel called "output". For an explanation see PipeOpTorch.

State

The state is the value calculated by the public method ⁠$shapes_out()⁠.

Super classes

mlr3pipelines::PipeOp -> mlr3torch::PipeOpTorch -> PipeOpTorchGeGLU

Methods

Public methods

Inherited methods

Method new()

Creates a new instance of this R6 class.

Usage
PipeOpTorchGeGLU$new(id = "nn_geglu", param_vals = list())
Arguments
id

(character(1))
Identifier of the resulting object.

param_vals

(list())
List of hyperparameter settings, overwriting the hyperparameter settings that would otherwise be set during construction.


Method clone()

The objects of this class are cloneable with this method.

Usage
PipeOpTorchGeGLU$clone(deep = FALSE)
Arguments
deep

Whether to make a deep clone.

See Also

Other PipeOps: mlr_pipeops_nn_adaptive_avg_pool1d, mlr_pipeops_nn_adaptive_avg_pool2d, mlr_pipeops_nn_adaptive_avg_pool3d, mlr_pipeops_nn_avg_pool1d, mlr_pipeops_nn_avg_pool2d, mlr_pipeops_nn_avg_pool3d, mlr_pipeops_nn_batch_norm1d, mlr_pipeops_nn_batch_norm2d, mlr_pipeops_nn_batch_norm3d, mlr_pipeops_nn_block, mlr_pipeops_nn_celu, mlr_pipeops_nn_conv1d, mlr_pipeops_nn_conv2d, mlr_pipeops_nn_conv3d, mlr_pipeops_nn_conv_transpose1d, mlr_pipeops_nn_conv_transpose2d, mlr_pipeops_nn_conv_transpose3d, mlr_pipeops_nn_dropout, mlr_pipeops_nn_elu, mlr_pipeops_nn_flatten, mlr_pipeops_nn_ft_cls, mlr_pipeops_nn_ft_transformer_block, mlr_pipeops_nn_gelu, mlr_pipeops_nn_glu, mlr_pipeops_nn_hardshrink, mlr_pipeops_nn_hardsigmoid, mlr_pipeops_nn_hardtanh, mlr_pipeops_nn_head, mlr_pipeops_nn_identity, mlr_pipeops_nn_layer_norm, mlr_pipeops_nn_leaky_relu, mlr_pipeops_nn_linear, mlr_pipeops_nn_log_sigmoid, mlr_pipeops_nn_max_pool1d, mlr_pipeops_nn_max_pool2d, mlr_pipeops_nn_max_pool3d, mlr_pipeops_nn_merge, mlr_pipeops_nn_merge_cat, mlr_pipeops_nn_merge_prod, mlr_pipeops_nn_merge_sum, mlr_pipeops_nn_prelu, mlr_pipeops_nn_reglu, mlr_pipeops_nn_relu, mlr_pipeops_nn_relu6, mlr_pipeops_nn_reshape, mlr_pipeops_nn_rrelu, mlr_pipeops_nn_selu, mlr_pipeops_nn_sigmoid, mlr_pipeops_nn_softmax, mlr_pipeops_nn_softplus, mlr_pipeops_nn_softshrink, mlr_pipeops_nn_softsign, mlr_pipeops_nn_squeeze, mlr_pipeops_nn_tanh, mlr_pipeops_nn_tanhshrink, mlr_pipeops_nn_threshold, mlr_pipeops_nn_tokenizer_categ, mlr_pipeops_nn_tokenizer_num, mlr_pipeops_nn_unsqueeze, mlr_pipeops_torch_ingress, mlr_pipeops_torch_ingress_categ, mlr_pipeops_torch_ingress_ltnsr, mlr_pipeops_torch_ingress_num, mlr_pipeops_torch_loss, mlr_pipeops_torch_model, mlr_pipeops_torch_model_classif, mlr_pipeops_torch_model_regr

Examples


# Construct the PipeOp
pipeop = po("nn_geglu")
pipeop
# The available parameters
pipeop$param_set


GELU Activation Function

Description

Applies the Gaussian Error Linear Unit (GELU) function element-wise: GELU(x) = x * Φ(x), where Φ(x) is the cumulative distribution function of the standard normal distribution.

nn_module

Calls torch::nn_gelu() when trained.

Parameters

Input and Output Channels

One input channel called "input" and one output channel called "output". For an explanation see PipeOpTorch.

State

The state is the value calculated by the public method ⁠$shapes_out()⁠.

Super classes

mlr3pipelines::PipeOp -> mlr3torch::PipeOpTorch -> PipeOpTorchGELU

Methods

Public methods

Inherited methods

Method new()

Creates a new instance of this R6 class.

Usage
PipeOpTorchGELU$new(id = "nn_gelu", param_vals = list())
Arguments
id

(character(1))
Identifier of the resulting object.

param_vals

(list())
List of hyperparameter settings, overwriting the hyperparameter settings that would otherwise be set during construction.


Method clone()

The objects of this class are cloneable with this method.

Usage
PipeOpTorchGELU$clone(deep = FALSE)
Arguments
deep

Whether to make a deep clone.

See Also

Other PipeOps: mlr_pipeops_nn_adaptive_avg_pool1d, mlr_pipeops_nn_adaptive_avg_pool2d, mlr_pipeops_nn_adaptive_avg_pool3d, mlr_pipeops_nn_avg_pool1d, mlr_pipeops_nn_avg_pool2d, mlr_pipeops_nn_avg_pool3d, mlr_pipeops_nn_batch_norm1d, mlr_pipeops_nn_batch_norm2d, mlr_pipeops_nn_batch_norm3d, mlr_pipeops_nn_block, mlr_pipeops_nn_celu, mlr_pipeops_nn_conv1d, mlr_pipeops_nn_conv2d, mlr_pipeops_nn_conv3d, mlr_pipeops_nn_conv_transpose1d, mlr_pipeops_nn_conv_transpose2d, mlr_pipeops_nn_conv_transpose3d, mlr_pipeops_nn_dropout, mlr_pipeops_nn_elu, mlr_pipeops_nn_flatten, mlr_pipeops_nn_ft_cls, mlr_pipeops_nn_ft_transformer_block, mlr_pipeops_nn_geglu, mlr_pipeops_nn_glu, mlr_pipeops_nn_hardshrink, mlr_pipeops_nn_hardsigmoid, mlr_pipeops_nn_hardtanh, mlr_pipeops_nn_head, mlr_pipeops_nn_identity, mlr_pipeops_nn_layer_norm, mlr_pipeops_nn_leaky_relu, mlr_pipeops_nn_linear, mlr_pipeops_nn_log_sigmoid, mlr_pipeops_nn_max_pool1d, mlr_pipeops_nn_max_pool2d, mlr_pipeops_nn_max_pool3d, mlr_pipeops_nn_merge, mlr_pipeops_nn_merge_cat, mlr_pipeops_nn_merge_prod, mlr_pipeops_nn_merge_sum, mlr_pipeops_nn_prelu, mlr_pipeops_nn_reglu, mlr_pipeops_nn_relu, mlr_pipeops_nn_relu6, mlr_pipeops_nn_reshape, mlr_pipeops_nn_rrelu, mlr_pipeops_nn_selu, mlr_pipeops_nn_sigmoid, mlr_pipeops_nn_softmax, mlr_pipeops_nn_softplus, mlr_pipeops_nn_softshrink, mlr_pipeops_nn_softsign, mlr_pipeops_nn_squeeze, mlr_pipeops_nn_tanh, mlr_pipeops_nn_tanhshrink, mlr_pipeops_nn_threshold, mlr_pipeops_nn_tokenizer_categ, mlr_pipeops_nn_tokenizer_num, mlr_pipeops_nn_unsqueeze, mlr_pipeops_torch_ingress, mlr_pipeops_torch_ingress_categ, mlr_pipeops_torch_ingress_ltnsr, mlr_pipeops_torch_ingress_num, mlr_pipeops_torch_loss, mlr_pipeops_torch_model, mlr_pipeops_torch_model_classif, mlr_pipeops_torch_model_regr

Examples


# Construct the PipeOp
pipeop = po("nn_gelu")
pipeop
# The available parameters
pipeop$param_set


GLU Activation Function

Description

The gated linear unit. Computes GLU(a, b) = a * σ(b) (element-wise product), where the input is split in half along the gating dimension into a and b, and σ is the sigmoid function.

nn_module

Calls torch::nn_glu() when trained.

Parameters

Input and Output Channels

One input channel called "input" and one output channel called "output". For an explanation see PipeOpTorch.

State

The state is the value calculated by the public method ⁠$shapes_out()⁠.

Super classes

mlr3pipelines::PipeOp -> mlr3torch::PipeOpTorch -> PipeOpTorchGLU

Methods

Public methods

Inherited methods

Method new()

Creates a new instance of this R6 class.

Usage
PipeOpTorchGLU$new(id = "nn_glu", param_vals = list())
Arguments
id

(character(1))
Identifier of the resulting object.

param_vals

(list())
List of hyperparameter settings, overwriting the hyperparameter settings that would otherwise be set during construction.


Method clone()

The objects of this class are cloneable with this method.

Usage
PipeOpTorchGLU$clone(deep = FALSE)
Arguments
deep

Whether to make a deep clone.

See Also

Other PipeOps: mlr_pipeops_nn_adaptive_avg_pool1d, mlr_pipeops_nn_adaptive_avg_pool2d, mlr_pipeops_nn_adaptive_avg_pool3d, mlr_pipeops_nn_avg_pool1d, mlr_pipeops_nn_avg_pool2d, mlr_pipeops_nn_avg_pool3d, mlr_pipeops_nn_batch_norm1d, mlr_pipeops_nn_batch_norm2d, mlr_pipeops_nn_batch_norm3d, mlr_pipeops_nn_block, mlr_pipeops_nn_celu, mlr_pipeops_nn_conv1d, mlr_pipeops_nn_conv2d, mlr_pipeops_nn_conv3d, mlr_pipeops_nn_conv_transpose1d, mlr_pipeops_nn_conv_transpose2d, mlr_pipeops_nn_conv_transpose3d, mlr_pipeops_nn_dropout, mlr_pipeops_nn_elu, mlr_pipeops_nn_flatten, mlr_pipeops_nn_ft_cls, mlr_pipeops_nn_ft_transformer_block, mlr_pipeops_nn_geglu, mlr_pipeops_nn_gelu, mlr_pipeops_nn_hardshrink, mlr_pipeops_nn_hardsigmoid, mlr_pipeops_nn_hardtanh, mlr_pipeops_nn_head, mlr_pipeops_nn_identity, mlr_pipeops_nn_layer_norm, mlr_pipeops_nn_leaky_relu, mlr_pipeops_nn_linear, mlr_pipeops_nn_log_sigmoid, mlr_pipeops_nn_max_pool1d, mlr_pipeops_nn_max_pool2d, mlr_pipeops_nn_max_pool3d, mlr_pipeops_nn_merge, mlr_pipeops_nn_merge_cat, mlr_pipeops_nn_merge_prod, mlr_pipeops_nn_merge_sum, mlr_pipeops_nn_prelu, mlr_pipeops_nn_reglu, mlr_pipeops_nn_relu, mlr_pipeops_nn_relu6, mlr_pipeops_nn_reshape, mlr_pipeops_nn_rrelu, mlr_pipeops_nn_selu, mlr_pipeops_nn_sigmoid, mlr_pipeops_nn_softmax, mlr_pipeops_nn_softplus, mlr_pipeops_nn_softshrink, mlr_pipeops_nn_softsign, mlr_pipeops_nn_squeeze, mlr_pipeops_nn_tanh, mlr_pipeops_nn_tanhshrink, mlr_pipeops_nn_threshold, mlr_pipeops_nn_tokenizer_categ, mlr_pipeops_nn_tokenizer_num, mlr_pipeops_nn_unsqueeze, mlr_pipeops_torch_ingress, mlr_pipeops_torch_ingress_categ, mlr_pipeops_torch_ingress_ltnsr, mlr_pipeops_torch_ingress_num, mlr_pipeops_torch_loss, mlr_pipeops_torch_model, mlr_pipeops_torch_model_classif, mlr_pipeops_torch_model_regr

Examples


# Construct the PipeOp
pipeop = po("nn_glu")
pipeop
# The available parameters
pipeop$param_set
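# A hedged sketch: the gating halves the dimension it is applied to
# (by default the last one), so an input of shape (NA, 10) should yield
# (NA, 5). This assumes the default of the "dim" parameter does not need
# to be set explicitly for shape inference.
pipeop$shapes_out(list(c(NA, 10)))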


Hard Shrink Activation Function

Description

Applies the hard shrinkage function element-wise.

nn_module

Calls torch::nn_hardshrink() when trained.

Parameters

Input and Output Channels

One input channel called "input" and one output channel called "output". For an explanation see PipeOpTorch.

State

The state is the value calculated by the public method ⁠$shapes_out()⁠.

Super classes

mlr3pipelines::PipeOp -> mlr3torch::PipeOpTorch -> PipeOpTorchHardShrink

Methods

Public methods

Inherited methods

Method new()

Creates a new instance of this R6 class.

Usage
PipeOpTorchHardShrink$new(id = "nn_hardshrink", param_vals = list())
Arguments
id

(character(1))
Identifier of the resulting object.

param_vals

(list())
List of hyperparameter settings, overwriting the hyperparameter settings that would otherwise be set during construction.


Method clone()

The objects of this class are cloneable with this method.

Usage
PipeOpTorchHardShrink$clone(deep = FALSE)
Arguments
deep

Whether to make a deep clone.

See Also

Other PipeOps: mlr_pipeops_nn_adaptive_avg_pool1d, mlr_pipeops_nn_adaptive_avg_pool2d, mlr_pipeops_nn_adaptive_avg_pool3d, mlr_pipeops_nn_avg_pool1d, mlr_pipeops_nn_avg_pool2d, mlr_pipeops_nn_avg_pool3d, mlr_pipeops_nn_batch_norm1d, mlr_pipeops_nn_batch_norm2d, mlr_pipeops_nn_batch_norm3d, mlr_pipeops_nn_block, mlr_pipeops_nn_celu, mlr_pipeops_nn_conv1d, mlr_pipeops_nn_conv2d, mlr_pipeops_nn_conv3d, mlr_pipeops_nn_conv_transpose1d, mlr_pipeops_nn_conv_transpose2d, mlr_pipeops_nn_conv_transpose3d, mlr_pipeops_nn_dropout, mlr_pipeops_nn_elu, mlr_pipeops_nn_flatten, mlr_pipeops_nn_ft_cls, mlr_pipeops_nn_ft_transformer_block, mlr_pipeops_nn_geglu, mlr_pipeops_nn_gelu, mlr_pipeops_nn_glu, mlr_pipeops_nn_hardsigmoid, mlr_pipeops_nn_hardtanh, mlr_pipeops_nn_head, mlr_pipeops_nn_identity, mlr_pipeops_nn_layer_norm, mlr_pipeops_nn_leaky_relu, mlr_pipeops_nn_linear, mlr_pipeops_nn_log_sigmoid, mlr_pipeops_nn_max_pool1d, mlr_pipeops_nn_max_pool2d, mlr_pipeops_nn_max_pool3d, mlr_pipeops_nn_merge, mlr_pipeops_nn_merge_cat, mlr_pipeops_nn_merge_prod, mlr_pipeops_nn_merge_sum, mlr_pipeops_nn_prelu, mlr_pipeops_nn_reglu, mlr_pipeops_nn_relu, mlr_pipeops_nn_relu6, mlr_pipeops_nn_reshape, mlr_pipeops_nn_rrelu, mlr_pipeops_nn_selu, mlr_pipeops_nn_sigmoid, mlr_pipeops_nn_softmax, mlr_pipeops_nn_softplus, mlr_pipeops_nn_softshrink, mlr_pipeops_nn_softsign, mlr_pipeops_nn_squeeze, mlr_pipeops_nn_tanh, mlr_pipeops_nn_tanhshrink, mlr_pipeops_nn_threshold, mlr_pipeops_nn_tokenizer_categ, mlr_pipeops_nn_tokenizer_num, mlr_pipeops_nn_unsqueeze, mlr_pipeops_torch_ingress, mlr_pipeops_torch_ingress_categ, mlr_pipeops_torch_ingress_ltnsr, mlr_pipeops_torch_ingress_num, mlr_pipeops_torch_loss, mlr_pipeops_torch_model, mlr_pipeops_torch_model_classif, mlr_pipeops_torch_model_regr

Examples


# Construct the PipeOp
pipeop = po("nn_hardshrink")
pipeop
# The available parameters
pipeop$param_set


Hard Sigmoid Activation Function

Description

Applies the element-wise function Hardsigmoid(x) = ReLU6(x + 3) / 6.

nn_module

Calls torch::nn_hardsigmoid() when trained.

Parameters

No parameters.

Input and Output Channels

One input channel called "input" and one output channel called "output". For an explanation see PipeOpTorch.

State

The state is the value calculated by the public method ⁠$shapes_out()⁠.

Super classes

mlr3pipelines::PipeOp -> mlr3torch::PipeOpTorch -> PipeOpTorchHardSigmoid

Methods

Public methods

Inherited methods

Method new()

Creates a new instance of this R6 class.

Usage
PipeOpTorchHardSigmoid$new(id = "nn_hardsigmoid", param_vals = list())
Arguments
id

(character(1))
Identifier of the resulting object.

param_vals

(list())
List of hyperparameter settings, overwriting the hyperparameter settings that would otherwise be set during construction.


Method clone()

The objects of this class are cloneable with this method.

Usage
PipeOpTorchHardSigmoid$clone(deep = FALSE)
Arguments
deep

Whether to make a deep clone.

See Also

Other PipeOps: mlr_pipeops_nn_adaptive_avg_pool1d, mlr_pipeops_nn_adaptive_avg_pool2d, mlr_pipeops_nn_adaptive_avg_pool3d, mlr_pipeops_nn_avg_pool1d, mlr_pipeops_nn_avg_pool2d, mlr_pipeops_nn_avg_pool3d, mlr_pipeops_nn_batch_norm1d, mlr_pipeops_nn_batch_norm2d, mlr_pipeops_nn_batch_norm3d, mlr_pipeops_nn_block, mlr_pipeops_nn_celu, mlr_pipeops_nn_conv1d, mlr_pipeops_nn_conv2d, mlr_pipeops_nn_conv3d, mlr_pipeops_nn_conv_transpose1d, mlr_pipeops_nn_conv_transpose2d, mlr_pipeops_nn_conv_transpose3d, mlr_pipeops_nn_dropout, mlr_pipeops_nn_elu, mlr_pipeops_nn_flatten, mlr_pipeops_nn_ft_cls, mlr_pipeops_nn_ft_transformer_block, mlr_pipeops_nn_geglu, mlr_pipeops_nn_gelu, mlr_pipeops_nn_glu, mlr_pipeops_nn_hardshrink, mlr_pipeops_nn_hardtanh, mlr_pipeops_nn_head, mlr_pipeops_nn_identity, mlr_pipeops_nn_layer_norm, mlr_pipeops_nn_leaky_relu, mlr_pipeops_nn_linear, mlr_pipeops_nn_log_sigmoid, mlr_pipeops_nn_max_pool1d, mlr_pipeops_nn_max_pool2d, mlr_pipeops_nn_max_pool3d, mlr_pipeops_nn_merge, mlr_pipeops_nn_merge_cat, mlr_pipeops_nn_merge_prod, mlr_pipeops_nn_merge_sum, mlr_pipeops_nn_prelu, mlr_pipeops_nn_reglu, mlr_pipeops_nn_relu, mlr_pipeops_nn_relu6, mlr_pipeops_nn_reshape, mlr_pipeops_nn_rrelu, mlr_pipeops_nn_selu, mlr_pipeops_nn_sigmoid, mlr_pipeops_nn_softmax, mlr_pipeops_nn_softplus, mlr_pipeops_nn_softshrink, mlr_pipeops_nn_softsign, mlr_pipeops_nn_squeeze, mlr_pipeops_nn_tanh, mlr_pipeops_nn_tanhshrink, mlr_pipeops_nn_threshold, mlr_pipeops_nn_tokenizer_categ, mlr_pipeops_nn_tokenizer_num, mlr_pipeops_nn_unsqueeze, mlr_pipeops_torch_ingress, mlr_pipeops_torch_ingress_categ, mlr_pipeops_torch_ingress_ltnsr, mlr_pipeops_torch_ingress_num, mlr_pipeops_torch_loss, mlr_pipeops_torch_model, mlr_pipeops_torch_model_classif, mlr_pipeops_torch_model_regr

Examples


# Construct the PipeOp
pipeop = po("nn_hardsigmoid")
pipeop
# The available parameters
pipeop$param_set


Hard Tanh Activation Function

Description

Applies the HardTanh function element-wise.

nn_module

Calls torch::nn_hardtanh() when trained.

Parameters

State

The state is the value calculated by the public method ⁠$shapes_out()⁠.

Super classes

mlr3pipelines::PipeOp -> mlr3torch::PipeOpTorch -> PipeOpTorchHardTanh

Methods

Public methods

Inherited methods

Method new()

Creates a new instance of this R6 class.

Usage
PipeOpTorchHardTanh$new(id = "nn_hardtanh", param_vals = list())
Arguments
id

(character(1))
Identifier of the resulting object.

param_vals

(list())
List of hyperparameter settings, overwriting the hyperparameter settings that would otherwise be set during construction.


Method clone()

The objects of this class are cloneable with this method.

Usage
PipeOpTorchHardTanh$clone(deep = FALSE)
Arguments
deep

Whether to make a deep clone.

See Also

Other PipeOps: mlr_pipeops_nn_adaptive_avg_pool1d, mlr_pipeops_nn_adaptive_avg_pool2d, mlr_pipeops_nn_adaptive_avg_pool3d, mlr_pipeops_nn_avg_pool1d, mlr_pipeops_nn_avg_pool2d, mlr_pipeops_nn_avg_pool3d, mlr_pipeops_nn_batch_norm1d, mlr_pipeops_nn_batch_norm2d, mlr_pipeops_nn_batch_norm3d, mlr_pipeops_nn_block, mlr_pipeops_nn_celu, mlr_pipeops_nn_conv1d, mlr_pipeops_nn_conv2d, mlr_pipeops_nn_conv3d, mlr_pipeops_nn_conv_transpose1d, mlr_pipeops_nn_conv_transpose2d, mlr_pipeops_nn_conv_transpose3d, mlr_pipeops_nn_dropout, mlr_pipeops_nn_elu, mlr_pipeops_nn_flatten, mlr_pipeops_nn_ft_cls, mlr_pipeops_nn_ft_transformer_block, mlr_pipeops_nn_geglu, mlr_pipeops_nn_gelu, mlr_pipeops_nn_glu, mlr_pipeops_nn_hardshrink, mlr_pipeops_nn_hardsigmoid, mlr_pipeops_nn_head, mlr_pipeops_nn_identity, mlr_pipeops_nn_layer_norm, mlr_pipeops_nn_leaky_relu, mlr_pipeops_nn_linear, mlr_pipeops_nn_log_sigmoid, mlr_pipeops_nn_max_pool1d, mlr_pipeops_nn_max_pool2d, mlr_pipeops_nn_max_pool3d, mlr_pipeops_nn_merge, mlr_pipeops_nn_merge_cat, mlr_pipeops_nn_merge_prod, mlr_pipeops_nn_merge_sum, mlr_pipeops_nn_prelu, mlr_pipeops_nn_reglu, mlr_pipeops_nn_relu, mlr_pipeops_nn_relu6, mlr_pipeops_nn_reshape, mlr_pipeops_nn_rrelu, mlr_pipeops_nn_selu, mlr_pipeops_nn_sigmoid, mlr_pipeops_nn_softmax, mlr_pipeops_nn_softplus, mlr_pipeops_nn_softshrink, mlr_pipeops_nn_softsign, mlr_pipeops_nn_squeeze, mlr_pipeops_nn_tanh, mlr_pipeops_nn_tanhshrink, mlr_pipeops_nn_threshold, mlr_pipeops_nn_tokenizer_categ, mlr_pipeops_nn_tokenizer_num, mlr_pipeops_nn_unsqueeze, mlr_pipeops_torch_ingress, mlr_pipeops_torch_ingress_categ, mlr_pipeops_torch_ingress_ltnsr, mlr_pipeops_torch_ingress_num, mlr_pipeops_torch_loss, mlr_pipeops_torch_model, mlr_pipeops_torch_model_classif, mlr_pipeops_torch_model_regr

Examples


# Construct the PipeOp
pipeop = po("nn_hardtanh")
pipeop
# The available parameters
pipeop$param_set


Output Head

Description

Output head for classification and regression.

Details

When the method ⁠$shapes_out()⁠ does not have access to the task, it returns c(NA, NA). When this PipeOp is trained however, the model descriptor has the correct output shape.

nn_module

Calls torch::nn_linear() with the input and output features inferred from the input shape / task.

Parameters

Input and Output Channels

One input channel called "input" and one output channel called "output". For an explanation see PipeOpTorch.

State

The state is the value calculated by the public method ⁠$shapes_out()⁠.

Super classes

mlr3pipelines::PipeOp -> mlr3torch::PipeOpTorch -> PipeOpTorchHead

Methods

Public methods

Inherited methods

Method new()

Creates a new instance of this R6 class.

Usage
PipeOpTorchHead$new(id = "nn_head", param_vals = list())
Arguments
id

(character(1))
Identifier of the resulting object.

param_vals

(list())
List of hyperparameter settings, overwriting the hyperparameter settings that would otherwise be set during construction.


Method clone()

The objects of this class are cloneable with this method.

Usage
PipeOpTorchHead$clone(deep = FALSE)
Arguments
deep

Whether to make a deep clone.

See Also

Other PipeOps: mlr_pipeops_nn_adaptive_avg_pool1d, mlr_pipeops_nn_adaptive_avg_pool2d, mlr_pipeops_nn_adaptive_avg_pool3d, mlr_pipeops_nn_avg_pool1d, mlr_pipeops_nn_avg_pool2d, mlr_pipeops_nn_avg_pool3d, mlr_pipeops_nn_batch_norm1d, mlr_pipeops_nn_batch_norm2d, mlr_pipeops_nn_batch_norm3d, mlr_pipeops_nn_block, mlr_pipeops_nn_celu, mlr_pipeops_nn_conv1d, mlr_pipeops_nn_conv2d, mlr_pipeops_nn_conv3d, mlr_pipeops_nn_conv_transpose1d, mlr_pipeops_nn_conv_transpose2d, mlr_pipeops_nn_conv_transpose3d, mlr_pipeops_nn_dropout, mlr_pipeops_nn_elu, mlr_pipeops_nn_flatten, mlr_pipeops_nn_ft_cls, mlr_pipeops_nn_ft_transformer_block, mlr_pipeops_nn_geglu, mlr_pipeops_nn_gelu, mlr_pipeops_nn_glu, mlr_pipeops_nn_hardshrink, mlr_pipeops_nn_hardsigmoid, mlr_pipeops_nn_hardtanh, mlr_pipeops_nn_identity, mlr_pipeops_nn_layer_norm, mlr_pipeops_nn_leaky_relu, mlr_pipeops_nn_linear, mlr_pipeops_nn_log_sigmoid, mlr_pipeops_nn_max_pool1d, mlr_pipeops_nn_max_pool2d, mlr_pipeops_nn_max_pool3d, mlr_pipeops_nn_merge, mlr_pipeops_nn_merge_cat, mlr_pipeops_nn_merge_prod, mlr_pipeops_nn_merge_sum, mlr_pipeops_nn_prelu, mlr_pipeops_nn_reglu, mlr_pipeops_nn_relu, mlr_pipeops_nn_relu6, mlr_pipeops_nn_reshape, mlr_pipeops_nn_rrelu, mlr_pipeops_nn_selu, mlr_pipeops_nn_sigmoid, mlr_pipeops_nn_softmax, mlr_pipeops_nn_softplus, mlr_pipeops_nn_softshrink, mlr_pipeops_nn_softsign, mlr_pipeops_nn_squeeze, mlr_pipeops_nn_tanh, mlr_pipeops_nn_tanhshrink, mlr_pipeops_nn_threshold, mlr_pipeops_nn_tokenizer_categ, mlr_pipeops_nn_tokenizer_num, mlr_pipeops_nn_unsqueeze, mlr_pipeops_torch_ingress, mlr_pipeops_torch_ingress_categ, mlr_pipeops_torch_ingress_ltnsr, mlr_pipeops_torch_ingress_num, mlr_pipeops_torch_loss, mlr_pipeops_torch_model, mlr_pipeops_torch_model_classif, mlr_pipeops_torch_model_regr

Examples


# Construct the PipeOp
pipeop = po("nn_head")
pipeop
# The available parameters
pipeop$param_set
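# A hedged sketch: without a task the output shape is unknown, with a
# classification task the number of output features is inferred from the
# number of classes (3 for iris). Assumes no further hyperparameters must
# be set for shape inference.
pipeop$shapes_out(list(c(NA, 4)), task = tsk("iris"))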


Identity Layer

Description

A placeholder identity operator that is argument-insensitive.

nn_module

Calls torch::nn_identity() when trained, which passes the input unchanged to the output.

Input and Output Channels

One input channel called "input" and one output channel called "output". For an explanation see PipeOpTorch.

State

The state is the value calculated by the public method ⁠$shapes_out()⁠.

Super classes

mlr3pipelines::PipeOp -> mlr3torch::PipeOpTorch -> PipeOpTorchIdentity

Methods

Public methods

Inherited methods

Method new()

Creates a new instance of this R6 class.

Usage
PipeOpTorchIdentity$new(id = "nn_identity", param_vals = list())
Arguments
id

(character(1))
Identifier of the resulting object.

param_vals

(list())
List of hyperparameter settings, overwriting the hyperparameter settings that would otherwise be set during construction.


Method clone()

The objects of this class are cloneable with this method.

Usage
PipeOpTorchIdentity$clone(deep = FALSE)
Arguments
deep

Whether to make a deep clone.

See Also

Other PipeOps: mlr_pipeops_nn_adaptive_avg_pool1d, mlr_pipeops_nn_adaptive_avg_pool2d, mlr_pipeops_nn_adaptive_avg_pool3d, mlr_pipeops_nn_avg_pool1d, mlr_pipeops_nn_avg_pool2d, mlr_pipeops_nn_avg_pool3d, mlr_pipeops_nn_batch_norm1d, mlr_pipeops_nn_batch_norm2d, mlr_pipeops_nn_batch_norm3d, mlr_pipeops_nn_block, mlr_pipeops_nn_celu, mlr_pipeops_nn_conv1d, mlr_pipeops_nn_conv2d, mlr_pipeops_nn_conv3d, mlr_pipeops_nn_conv_transpose1d, mlr_pipeops_nn_conv_transpose2d, mlr_pipeops_nn_conv_transpose3d, mlr_pipeops_nn_dropout, mlr_pipeops_nn_elu, mlr_pipeops_nn_flatten, mlr_pipeops_nn_ft_cls, mlr_pipeops_nn_ft_transformer_block, mlr_pipeops_nn_geglu, mlr_pipeops_nn_gelu, mlr_pipeops_nn_glu, mlr_pipeops_nn_hardshrink, mlr_pipeops_nn_hardsigmoid, mlr_pipeops_nn_hardtanh, mlr_pipeops_nn_head, mlr_pipeops_nn_layer_norm, mlr_pipeops_nn_leaky_relu, mlr_pipeops_nn_linear, mlr_pipeops_nn_log_sigmoid, mlr_pipeops_nn_max_pool1d, mlr_pipeops_nn_max_pool2d, mlr_pipeops_nn_max_pool3d, mlr_pipeops_nn_merge, mlr_pipeops_nn_merge_cat, mlr_pipeops_nn_merge_prod, mlr_pipeops_nn_merge_sum, mlr_pipeops_nn_prelu, mlr_pipeops_nn_reglu, mlr_pipeops_nn_relu, mlr_pipeops_nn_relu6, mlr_pipeops_nn_reshape, mlr_pipeops_nn_rrelu, mlr_pipeops_nn_selu, mlr_pipeops_nn_sigmoid, mlr_pipeops_nn_softmax, mlr_pipeops_nn_softplus, mlr_pipeops_nn_softshrink, mlr_pipeops_nn_softsign, mlr_pipeops_nn_squeeze, mlr_pipeops_nn_tanh, mlr_pipeops_nn_tanhshrink, mlr_pipeops_nn_threshold, mlr_pipeops_nn_tokenizer_categ, mlr_pipeops_nn_tokenizer_num, mlr_pipeops_nn_unsqueeze, mlr_pipeops_torch_ingress, mlr_pipeops_torch_ingress_categ, mlr_pipeops_torch_ingress_ltnsr, mlr_pipeops_torch_ingress_num, mlr_pipeops_torch_loss, mlr_pipeops_torch_model, mlr_pipeops_torch_model_classif, mlr_pipeops_torch_model_regr

Examples


# Construct the PipeOp
pipeop = po("nn_identity")
pipeop
# The available parameters
pipeop$param_set


Layer Normalization

Description

Applies Layer Normalization over the last dimensions of the input, as specified by the dims parameter.

nn_module

Calls torch::nn_layer_norm() when trained. The parameter normalized_shape is inferred as the dimensions of the last dims dimensions of the input shape.

Parameters

State

The state is the value calculated by the public method ⁠$shapes_out()⁠.

Input and Output Channels

One input channel called "input" and one output channel called "output". For an explanation see PipeOpTorch.

Super classes

mlr3pipelines::PipeOp -> mlr3torch::PipeOpTorch -> PipeOpTorchLayerNorm

Methods

Public methods

Inherited methods

Method new()

Creates a new instance of this R6 class.

Usage
PipeOpTorchLayerNorm$new(id = "nn_layer_norm", param_vals = list())
Arguments
id

(character(1))
Identifier of the resulting object.

param_vals

(list())
List of hyperparameter settings, overwriting the hyperparameter settings that would otherwise be set during construction.


Method clone()

The objects of this class are cloneable with this method.

Usage
PipeOpTorchLayerNorm$clone(deep = FALSE)
Arguments
deep

Whether to make a deep clone.

See Also

Other PipeOps: mlr_pipeops_nn_adaptive_avg_pool1d, mlr_pipeops_nn_adaptive_avg_pool2d, mlr_pipeops_nn_adaptive_avg_pool3d, mlr_pipeops_nn_avg_pool1d, mlr_pipeops_nn_avg_pool2d, mlr_pipeops_nn_avg_pool3d, mlr_pipeops_nn_batch_norm1d, mlr_pipeops_nn_batch_norm2d, mlr_pipeops_nn_batch_norm3d, mlr_pipeops_nn_block, mlr_pipeops_nn_celu, mlr_pipeops_nn_conv1d, mlr_pipeops_nn_conv2d, mlr_pipeops_nn_conv3d, mlr_pipeops_nn_conv_transpose1d, mlr_pipeops_nn_conv_transpose2d, mlr_pipeops_nn_conv_transpose3d, mlr_pipeops_nn_dropout, mlr_pipeops_nn_elu, mlr_pipeops_nn_flatten, mlr_pipeops_nn_ft_cls, mlr_pipeops_nn_ft_transformer_block, mlr_pipeops_nn_geglu, mlr_pipeops_nn_gelu, mlr_pipeops_nn_glu, mlr_pipeops_nn_hardshrink, mlr_pipeops_nn_hardsigmoid, mlr_pipeops_nn_hardtanh, mlr_pipeops_nn_head, mlr_pipeops_nn_identity, mlr_pipeops_nn_leaky_relu, mlr_pipeops_nn_linear, mlr_pipeops_nn_log_sigmoid, mlr_pipeops_nn_max_pool1d, mlr_pipeops_nn_max_pool2d, mlr_pipeops_nn_max_pool3d, mlr_pipeops_nn_merge, mlr_pipeops_nn_merge_cat, mlr_pipeops_nn_merge_prod, mlr_pipeops_nn_merge_sum, mlr_pipeops_nn_prelu, mlr_pipeops_nn_reglu, mlr_pipeops_nn_relu, mlr_pipeops_nn_relu6, mlr_pipeops_nn_reshape, mlr_pipeops_nn_rrelu, mlr_pipeops_nn_selu, mlr_pipeops_nn_sigmoid, mlr_pipeops_nn_softmax, mlr_pipeops_nn_softplus, mlr_pipeops_nn_softshrink, mlr_pipeops_nn_softsign, mlr_pipeops_nn_squeeze, mlr_pipeops_nn_tanh, mlr_pipeops_nn_tanhshrink, mlr_pipeops_nn_threshold, mlr_pipeops_nn_tokenizer_categ, mlr_pipeops_nn_tokenizer_num, mlr_pipeops_nn_unsqueeze, mlr_pipeops_torch_ingress, mlr_pipeops_torch_ingress_categ, mlr_pipeops_torch_ingress_ltnsr, mlr_pipeops_torch_ingress_num, mlr_pipeops_torch_loss, mlr_pipeops_torch_model, mlr_pipeops_torch_model_classif, mlr_pipeops_torch_model_regr

Examples


# Construct the PipeOp
pipeop = po("nn_layer_norm", dims = 1)
pipeop
# The available parameters
pipeop$param_set
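# A hedged sketch: layer normalization leaves the shape unchanged, so
# shape inference returns the input shape (NA is the batch dimension).
pipeop$shapes_out(list(c(NA, 10)))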


Leaky ReLU Activation Function

Description

Applies the element-wise function LeakyReLU(x) = max(0, x) + negative_slope * min(0, x).

nn_module

Calls torch::nn_leaky_relu() when trained.

Parameters

Input and Output Channels

One input channel called "input" and one output channel called "output". For an explanation see PipeOpTorch.

State

The state is the value calculated by the public method ⁠$shapes_out()⁠.

Super classes

mlr3pipelines::PipeOp -> mlr3torch::PipeOpTorch -> PipeOpTorchLeakyReLU

Methods

Public methods

Inherited methods

Method new()

Creates a new instance of this R6 class.

Usage
PipeOpTorchLeakyReLU$new(id = "nn_leaky_relu", param_vals = list())
Arguments
id

(character(1))
Identifier of the resulting object.

param_vals

(list())
List of hyperparameter settings, overwriting the hyperparameter settings that would otherwise be set during construction.


Method clone()

The objects of this class are cloneable with this method.

Usage
PipeOpTorchLeakyReLU$clone(deep = FALSE)
Arguments
deep

Whether to make a deep clone.

See Also

Other PipeOps: mlr_pipeops_nn_adaptive_avg_pool1d, mlr_pipeops_nn_adaptive_avg_pool2d, mlr_pipeops_nn_adaptive_avg_pool3d, mlr_pipeops_nn_avg_pool1d, mlr_pipeops_nn_avg_pool2d, mlr_pipeops_nn_avg_pool3d, mlr_pipeops_nn_batch_norm1d, mlr_pipeops_nn_batch_norm2d, mlr_pipeops_nn_batch_norm3d, mlr_pipeops_nn_block, mlr_pipeops_nn_celu, mlr_pipeops_nn_conv1d, mlr_pipeops_nn_conv2d, mlr_pipeops_nn_conv3d, mlr_pipeops_nn_conv_transpose1d, mlr_pipeops_nn_conv_transpose2d, mlr_pipeops_nn_conv_transpose3d, mlr_pipeops_nn_dropout, mlr_pipeops_nn_elu, mlr_pipeops_nn_flatten, mlr_pipeops_nn_ft_cls, mlr_pipeops_nn_ft_transformer_block, mlr_pipeops_nn_geglu, mlr_pipeops_nn_gelu, mlr_pipeops_nn_glu, mlr_pipeops_nn_hardshrink, mlr_pipeops_nn_hardsigmoid, mlr_pipeops_nn_hardtanh, mlr_pipeops_nn_head, mlr_pipeops_nn_identity, mlr_pipeops_nn_layer_norm, mlr_pipeops_nn_linear, mlr_pipeops_nn_log_sigmoid, mlr_pipeops_nn_max_pool1d, mlr_pipeops_nn_max_pool2d, mlr_pipeops_nn_max_pool3d, mlr_pipeops_nn_merge, mlr_pipeops_nn_merge_cat, mlr_pipeops_nn_merge_prod, mlr_pipeops_nn_merge_sum, mlr_pipeops_nn_prelu, mlr_pipeops_nn_reglu, mlr_pipeops_nn_relu, mlr_pipeops_nn_relu6, mlr_pipeops_nn_reshape, mlr_pipeops_nn_rrelu, mlr_pipeops_nn_selu, mlr_pipeops_nn_sigmoid, mlr_pipeops_nn_softmax, mlr_pipeops_nn_softplus, mlr_pipeops_nn_softshrink, mlr_pipeops_nn_softsign, mlr_pipeops_nn_squeeze, mlr_pipeops_nn_tanh, mlr_pipeops_nn_tanhshrink, mlr_pipeops_nn_threshold, mlr_pipeops_nn_tokenizer_categ, mlr_pipeops_nn_tokenizer_num, mlr_pipeops_nn_unsqueeze, mlr_pipeops_torch_ingress, mlr_pipeops_torch_ingress_categ, mlr_pipeops_torch_ingress_ltnsr, mlr_pipeops_torch_ingress_num, mlr_pipeops_torch_loss, mlr_pipeops_torch_model, mlr_pipeops_torch_model_classif, mlr_pipeops_torch_model_regr

Examples


# Construct the PipeOp
pipeop = po("nn_leaky_relu")
pipeop
# The available parameters
pipeop$param_set
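# A hedged sketch: the slope for negative inputs is a hyperparameter;
# the name "negative_slope" is assumed to match torch::nn_leaky_relu().
po("nn_leaky_relu", negative_slope = 0.1)$param_set$values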


Linear Layer

Description

Applies a linear transformation to the incoming data: y = xA^T + b.

nn_module

Calls torch::nn_linear() when trained, where the parameter in_features is inferred as the last dimension of the input tensor.

Parameters

Input and Output Channels

One input channel called "input" and one output channel called "output". For an explanation see PipeOpTorch.

State

The state is the value calculated by the public method ⁠$shapes_out()⁠.

Super classes

mlr3pipelines::PipeOp -> mlr3torch::PipeOpTorch -> PipeOpTorchLinear

Methods

Public methods

Inherited methods

Method new()

Creates a new instance of this R6 class.

Usage
PipeOpTorchLinear$new(id = "nn_linear", param_vals = list())
Arguments
id

(character(1))
Identifier of the resulting object.

param_vals

(list())
List of hyperparameter settings, overwriting the hyperparameter settings that would otherwise be set during construction.


Method clone()

The objects of this class are cloneable with this method.

Usage
PipeOpTorchLinear$clone(deep = FALSE)
Arguments
deep

Whether to make a deep clone.

See Also

Other PipeOps: mlr_pipeops_nn_adaptive_avg_pool1d, mlr_pipeops_nn_adaptive_avg_pool2d, mlr_pipeops_nn_adaptive_avg_pool3d, mlr_pipeops_nn_avg_pool1d, mlr_pipeops_nn_avg_pool2d, mlr_pipeops_nn_avg_pool3d, mlr_pipeops_nn_batch_norm1d, mlr_pipeops_nn_batch_norm2d, mlr_pipeops_nn_batch_norm3d, mlr_pipeops_nn_block, mlr_pipeops_nn_celu, mlr_pipeops_nn_conv1d, mlr_pipeops_nn_conv2d, mlr_pipeops_nn_conv3d, mlr_pipeops_nn_conv_transpose1d, mlr_pipeops_nn_conv_transpose2d, mlr_pipeops_nn_conv_transpose3d, mlr_pipeops_nn_dropout, mlr_pipeops_nn_elu, mlr_pipeops_nn_flatten, mlr_pipeops_nn_ft_cls, mlr_pipeops_nn_ft_transformer_block, mlr_pipeops_nn_geglu, mlr_pipeops_nn_gelu, mlr_pipeops_nn_glu, mlr_pipeops_nn_hardshrink, mlr_pipeops_nn_hardsigmoid, mlr_pipeops_nn_hardtanh, mlr_pipeops_nn_head, mlr_pipeops_nn_identity, mlr_pipeops_nn_layer_norm, mlr_pipeops_nn_leaky_relu, mlr_pipeops_nn_log_sigmoid, mlr_pipeops_nn_max_pool1d, mlr_pipeops_nn_max_pool2d, mlr_pipeops_nn_max_pool3d, mlr_pipeops_nn_merge, mlr_pipeops_nn_merge_cat, mlr_pipeops_nn_merge_prod, mlr_pipeops_nn_merge_sum, mlr_pipeops_nn_prelu, mlr_pipeops_nn_reglu, mlr_pipeops_nn_relu, mlr_pipeops_nn_relu6, mlr_pipeops_nn_reshape, mlr_pipeops_nn_rrelu, mlr_pipeops_nn_selu, mlr_pipeops_nn_sigmoid, mlr_pipeops_nn_softmax, mlr_pipeops_nn_softplus, mlr_pipeops_nn_softshrink, mlr_pipeops_nn_softsign, mlr_pipeops_nn_squeeze, mlr_pipeops_nn_tanh, mlr_pipeops_nn_tanhshrink, mlr_pipeops_nn_threshold, mlr_pipeops_nn_tokenizer_categ, mlr_pipeops_nn_tokenizer_num, mlr_pipeops_nn_unsqueeze, mlr_pipeops_torch_ingress, mlr_pipeops_torch_ingress_categ, mlr_pipeops_torch_ingress_ltnsr, mlr_pipeops_torch_ingress_num, mlr_pipeops_torch_loss, mlr_pipeops_torch_model, mlr_pipeops_torch_model_classif, mlr_pipeops_torch_model_regr

Examples


# Construct the PipeOp
pipeop = po("nn_linear", out_features = 10)
pipeop
# The available parameters
pipeop$param_set
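library(mlr3torch)
# A hedged sketch of a complete sequential network built around nn_linear
# and trained as an mlr3 learner. The loss, optimizer, epochs and batch
# size are illustrative choices, not recommendations.
graph = po("torch_ingress_num") %>>%
  po("nn_linear", out_features = 16) %>>%
  po("nn_relu") %>>%
  po("nn_head") %>>%
  po("torch_loss", loss = t_loss("cross_entropy")) %>>%
  po("torch_optimizer", optimizer = t_opt("adam")) %>>%
  po("torch_model_classif", epochs = 1, batch_size = 16)
glrn = as_learner(graph)
glrn$train(tsk("iris"))
glrn$predict(tsk("iris"))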


Log Sigmoid Activation Function

Description

Applies the element-wise function LogSigmoid(x_i) = log(1 / (1 + exp(-x_i))).

nn_module

Calls torch::nn_log_sigmoid() when trained.

Parameters

No parameters.

Input and Output Channels

One input channel called "input" and one output channel called "output". For an explanation see PipeOpTorch.

State

The state is the value calculated by the public method ⁠$shapes_out()⁠.

Super classes

mlr3pipelines::PipeOp -> mlr3torch::PipeOpTorch -> PipeOpTorchLogSigmoid

Methods

Public methods

Inherited methods

Method new()

Creates a new instance of this R6 class.

Usage
PipeOpTorchLogSigmoid$new(id = "nn_log_sigmoid", param_vals = list())
Arguments
id

(character(1))
Identifier of the resulting object.

param_vals

(list())
List of hyperparameter settings, overwriting the hyperparameter settings that would otherwise be set during construction.


Method clone()

The objects of this class are cloneable with this method.

Usage
PipeOpTorchLogSigmoid$clone(deep = FALSE)
Arguments
deep

Whether to make a deep clone.

See Also

Other PipeOps: mlr_pipeops_nn_adaptive_avg_pool1d, mlr_pipeops_nn_adaptive_avg_pool2d, mlr_pipeops_nn_adaptive_avg_pool3d, mlr_pipeops_nn_avg_pool1d, mlr_pipeops_nn_avg_pool2d, mlr_pipeops_nn_avg_pool3d, mlr_pipeops_nn_batch_norm1d, mlr_pipeops_nn_batch_norm2d, mlr_pipeops_nn_batch_norm3d, mlr_pipeops_nn_block, mlr_pipeops_nn_celu, mlr_pipeops_nn_conv1d, mlr_pipeops_nn_conv2d, mlr_pipeops_nn_conv3d, mlr_pipeops_nn_conv_transpose1d, mlr_pipeops_nn_conv_transpose2d, mlr_pipeops_nn_conv_transpose3d, mlr_pipeops_nn_dropout, mlr_pipeops_nn_elu, mlr_pipeops_nn_flatten, mlr_pipeops_nn_ft_cls, mlr_pipeops_nn_ft_transformer_block, mlr_pipeops_nn_geglu, mlr_pipeops_nn_gelu, mlr_pipeops_nn_glu, mlr_pipeops_nn_hardshrink, mlr_pipeops_nn_hardsigmoid, mlr_pipeops_nn_hardtanh, mlr_pipeops_nn_head, mlr_pipeops_nn_identity, mlr_pipeops_nn_layer_norm, mlr_pipeops_nn_leaky_relu, mlr_pipeops_nn_linear, mlr_pipeops_nn_max_pool1d, mlr_pipeops_nn_max_pool2d, mlr_pipeops_nn_max_pool3d, mlr_pipeops_nn_merge, mlr_pipeops_nn_merge_cat, mlr_pipeops_nn_merge_prod, mlr_pipeops_nn_merge_sum, mlr_pipeops_nn_prelu, mlr_pipeops_nn_reglu, mlr_pipeops_nn_relu, mlr_pipeops_nn_relu6, mlr_pipeops_nn_reshape, mlr_pipeops_nn_rrelu, mlr_pipeops_nn_selu, mlr_pipeops_nn_sigmoid, mlr_pipeops_nn_softmax, mlr_pipeops_nn_softplus, mlr_pipeops_nn_softshrink, mlr_pipeops_nn_softsign, mlr_pipeops_nn_squeeze, mlr_pipeops_nn_tanh, mlr_pipeops_nn_tanhshrink, mlr_pipeops_nn_threshold, mlr_pipeops_nn_tokenizer_categ, mlr_pipeops_nn_tokenizer_num, mlr_pipeops_nn_unsqueeze, mlr_pipeops_torch_ingress, mlr_pipeops_torch_ingress_categ, mlr_pipeops_torch_ingress_ltnsr, mlr_pipeops_torch_ingress_num, mlr_pipeops_torch_loss, mlr_pipeops_torch_model, mlr_pipeops_torch_model_classif, mlr_pipeops_torch_model_regr

Examples


# Construct the PipeOp
pipeop = po("nn_log_sigmoid")
pipeop
# The available parameters
pipeop$param_set


1D Max Pooling

Description

Applies a 1D max pooling over an input signal composed of several input planes.

nn_module

Calls torch::nn_max_pool1d() during training.

Parameters

Input and Output Channels

If return_indices is FALSE during construction, there is one input channel 'input' and one output channel 'output'. If return_indices is TRUE, there are two output channels 'output' and 'indices'. For an explanation see PipeOpTorch.

State

The state is the value calculated by the public method ⁠$shapes_out()⁠.

Super classes

mlr3pipelines::PipeOp -> mlr3torch::PipeOpTorch -> mlr3torch::PipeOpTorchMaxPool -> PipeOpTorchMaxPool1D

Methods

Public methods

Inherited methods

Method new()

Creates a new instance of this R6 class.

Usage
PipeOpTorchMaxPool1D$new(
  id = "nn_max_pool1d",
  return_indices = FALSE,
  param_vals = list()
)
Arguments
id

(character(1))
Identifier of the resulting object.

return_indices

(logical(1))
Whether to return the indices. If this is TRUE, there are two output channels "output" and "indices".

param_vals

(list())
List of hyperparameter settings, overwriting the hyperparameter settings that would otherwise be set during construction.


Method clone()

The objects of this class are cloneable with this method.

Usage
PipeOpTorchMaxPool1D$clone(deep = FALSE)
Arguments
deep

Whether to make a deep clone.

See Also

Other PipeOps: mlr_pipeops_nn_adaptive_avg_pool1d, mlr_pipeops_nn_adaptive_avg_pool2d, mlr_pipeops_nn_adaptive_avg_pool3d, mlr_pipeops_nn_avg_pool1d, mlr_pipeops_nn_avg_pool2d, mlr_pipeops_nn_avg_pool3d, mlr_pipeops_nn_batch_norm1d, mlr_pipeops_nn_batch_norm2d, mlr_pipeops_nn_batch_norm3d, mlr_pipeops_nn_block, mlr_pipeops_nn_celu, mlr_pipeops_nn_conv1d, mlr_pipeops_nn_conv2d, mlr_pipeops_nn_conv3d, mlr_pipeops_nn_conv_transpose1d, mlr_pipeops_nn_conv_transpose2d, mlr_pipeops_nn_conv_transpose3d, mlr_pipeops_nn_dropout, mlr_pipeops_nn_elu, mlr_pipeops_nn_flatten, mlr_pipeops_nn_ft_cls, mlr_pipeops_nn_ft_transformer_block, mlr_pipeops_nn_geglu, mlr_pipeops_nn_gelu, mlr_pipeops_nn_glu, mlr_pipeops_nn_hardshrink, mlr_pipeops_nn_hardsigmoid, mlr_pipeops_nn_hardtanh, mlr_pipeops_nn_head, mlr_pipeops_nn_identity, mlr_pipeops_nn_layer_norm, mlr_pipeops_nn_leaky_relu, mlr_pipeops_nn_linear, mlr_pipeops_nn_log_sigmoid, mlr_pipeops_nn_max_pool2d, mlr_pipeops_nn_max_pool3d, mlr_pipeops_nn_merge, mlr_pipeops_nn_merge_cat, mlr_pipeops_nn_merge_prod, mlr_pipeops_nn_merge_sum, mlr_pipeops_nn_prelu, mlr_pipeops_nn_reglu, mlr_pipeops_nn_relu, mlr_pipeops_nn_relu6, mlr_pipeops_nn_reshape, mlr_pipeops_nn_rrelu, mlr_pipeops_nn_selu, mlr_pipeops_nn_sigmoid, mlr_pipeops_nn_softmax, mlr_pipeops_nn_softplus, mlr_pipeops_nn_softshrink, mlr_pipeops_nn_softsign, mlr_pipeops_nn_squeeze, mlr_pipeops_nn_tanh, mlr_pipeops_nn_tanhshrink, mlr_pipeops_nn_threshold, mlr_pipeops_nn_tokenizer_categ, mlr_pipeops_nn_tokenizer_num, mlr_pipeops_nn_unsqueeze, mlr_pipeops_torch_ingress, mlr_pipeops_torch_ingress_categ, mlr_pipeops_torch_ingress_ltnsr, mlr_pipeops_torch_ingress_num, mlr_pipeops_torch_loss, mlr_pipeops_torch_model, mlr_pipeops_torch_model_classif, mlr_pipeops_torch_model_regr

Examples


# Construct the PipeOp
pipeop = po("nn_max_pool1d")
pipeop
# The available parameters
pipeop$param_set
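# A hedged sketch: with return_indices = TRUE the PipeOp exposes the two
# output channels "output" and "indices".
po("nn_max_pool1d", return_indices = TRUE)$output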


2D Max Pooling

Description

Applies a 2D max pooling over an input signal composed of several input planes.

nn_module

Calls torch::nn_max_pool2d() during training.

State

The state is the value calculated by the public method ⁠$shapes_out()⁠.

Parameters

Input and Output Channels

If return_indices is FALSE during construction, there is one input channel 'input' and one output channel 'output'. If return_indices is TRUE, there are two output channels 'output' and 'indices'. For an explanation see PipeOpTorch.

Super classes

mlr3pipelines::PipeOp -> mlr3torch::PipeOpTorch -> mlr3torch::PipeOpTorchMaxPool -> PipeOpTorchMaxPool2D

Methods

Public methods

Inherited methods

Method new()

Creates a new instance of this R6 class.

Usage
PipeOpTorchMaxPool2D$new(
  id = "nn_max_pool2d",
  return_indices = FALSE,
  param_vals = list()
)
Arguments
id

(character(1))
Identifier of the resulting object.

return_indices

(logical(1))
Whether to return the indices. If this is TRUE, there are two output channels "output" and "indices".

param_vals

(list())
List of hyperparameter settings, overwriting the hyperparameter settings that would otherwise be set during construction.


Method clone()

The objects of this class are cloneable with this method.

Usage
PipeOpTorchMaxPool2D$clone(deep = FALSE)
Arguments
deep

Whether to make a deep clone.

See Also

Other PipeOps: mlr_pipeops_nn_adaptive_avg_pool1d, mlr_pipeops_nn_adaptive_avg_pool2d, mlr_pipeops_nn_adaptive_avg_pool3d, mlr_pipeops_nn_avg_pool1d, mlr_pipeops_nn_avg_pool2d, mlr_pipeops_nn_avg_pool3d, mlr_pipeops_nn_batch_norm1d, mlr_pipeops_nn_batch_norm2d, mlr_pipeops_nn_batch_norm3d, mlr_pipeops_nn_block, mlr_pipeops_nn_celu, mlr_pipeops_nn_conv1d, mlr_pipeops_nn_conv2d, mlr_pipeops_nn_conv3d, mlr_pipeops_nn_conv_transpose1d, mlr_pipeops_nn_conv_transpose2d, mlr_pipeops_nn_conv_transpose3d, mlr_pipeops_nn_dropout, mlr_pipeops_nn_elu, mlr_pipeops_nn_flatten, mlr_pipeops_nn_ft_cls, mlr_pipeops_nn_ft_transformer_block, mlr_pipeops_nn_geglu, mlr_pipeops_nn_gelu, mlr_pipeops_nn_glu, mlr_pipeops_nn_hardshrink, mlr_pipeops_nn_hardsigmoid, mlr_pipeops_nn_hardtanh, mlr_pipeops_nn_head, mlr_pipeops_nn_identity, mlr_pipeops_nn_layer_norm, mlr_pipeops_nn_leaky_relu, mlr_pipeops_nn_linear, mlr_pipeops_nn_log_sigmoid, mlr_pipeops_nn_max_pool1d, mlr_pipeops_nn_max_pool3d, mlr_pipeops_nn_merge, mlr_pipeops_nn_merge_cat, mlr_pipeops_nn_merge_prod, mlr_pipeops_nn_merge_sum, mlr_pipeops_nn_prelu, mlr_pipeops_nn_reglu, mlr_pipeops_nn_relu, mlr_pipeops_nn_relu6, mlr_pipeops_nn_reshape, mlr_pipeops_nn_rrelu, mlr_pipeops_nn_selu, mlr_pipeops_nn_sigmoid, mlr_pipeops_nn_softmax, mlr_pipeops_nn_softplus, mlr_pipeops_nn_softshrink, mlr_pipeops_nn_softsign, mlr_pipeops_nn_squeeze, mlr_pipeops_nn_tanh, mlr_pipeops_nn_tanhshrink, mlr_pipeops_nn_threshold, mlr_pipeops_nn_tokenizer_categ, mlr_pipeops_nn_tokenizer_num, mlr_pipeops_nn_unsqueeze, mlr_pipeops_torch_ingress, mlr_pipeops_torch_ingress_categ, mlr_pipeops_torch_ingress_ltnsr, mlr_pipeops_torch_ingress_num, mlr_pipeops_torch_loss, mlr_pipeops_torch_model, mlr_pipeops_torch_model_classif, mlr_pipeops_torch_model_regr

Examples


# Construct the PipeOp
pipeop = po("nn_max_pool2d")
pipeop
# The available parameters
pipeop$param_set


3D Max Pooling

Description

Applies a 3D max pooling over an input signal composed of several input planes.

nn_module

Calls torch::nn_max_pool3d() during training.

State

The state is the value calculated by the public method ⁠$shapes_out()⁠.

Parameters

Input and Output Channels

If return_indices is FALSE during construction, there is one input channel 'input' and one output channel 'output'. If return_indices is TRUE, there are two output channels 'output' and 'indices'. For an explanation see PipeOpTorch.

Super classes

mlr3pipelines::PipeOp -> mlr3torch::PipeOpTorch -> mlr3torch::PipeOpTorchMaxPool -> PipeOpTorchMaxPool3D

Methods

Public methods

Inherited methods

Method new()

Creates a new instance of this R6 class.

Usage
PipeOpTorchMaxPool3D$new(
  id = "nn_max_pool3d",
  return_indices = FALSE,
  param_vals = list()
)
Arguments
id

(character(1))
Identifier of the resulting object.

return_indices

(logical(1))
Whether to return the indices. If this is TRUE, there are two output channels "output" and "indices".

param_vals

(list())
List of hyperparameter settings, overwriting the hyperparameter settings that would otherwise be set during construction.


Method clone()

The objects of this class are cloneable with this method.

Usage
PipeOpTorchMaxPool3D$clone(deep = FALSE)
Arguments
deep

Whether to make a deep clone.

See Also

Other PipeOps: mlr_pipeops_nn_adaptive_avg_pool1d, mlr_pipeops_nn_adaptive_avg_pool2d, mlr_pipeops_nn_adaptive_avg_pool3d, mlr_pipeops_nn_avg_pool1d, mlr_pipeops_nn_avg_pool2d, mlr_pipeops_nn_avg_pool3d, mlr_pipeops_nn_batch_norm1d, mlr_pipeops_nn_batch_norm2d, mlr_pipeops_nn_batch_norm3d, mlr_pipeops_nn_block, mlr_pipeops_nn_celu, mlr_pipeops_nn_conv1d, mlr_pipeops_nn_conv2d, mlr_pipeops_nn_conv3d, mlr_pipeops_nn_conv_transpose1d, mlr_pipeops_nn_conv_transpose2d, mlr_pipeops_nn_conv_transpose3d, mlr_pipeops_nn_dropout, mlr_pipeops_nn_elu, mlr_pipeops_nn_flatten, mlr_pipeops_nn_ft_cls, mlr_pipeops_nn_ft_transformer_block, mlr_pipeops_nn_geglu, mlr_pipeops_nn_gelu, mlr_pipeops_nn_glu, mlr_pipeops_nn_hardshrink, mlr_pipeops_nn_hardsigmoid, mlr_pipeops_nn_hardtanh, mlr_pipeops_nn_head, mlr_pipeops_nn_identity, mlr_pipeops_nn_layer_norm, mlr_pipeops_nn_leaky_relu, mlr_pipeops_nn_linear, mlr_pipeops_nn_log_sigmoid, mlr_pipeops_nn_max_pool1d, mlr_pipeops_nn_max_pool2d, mlr_pipeops_nn_merge, mlr_pipeops_nn_merge_cat, mlr_pipeops_nn_merge_prod, mlr_pipeops_nn_merge_sum, mlr_pipeops_nn_prelu, mlr_pipeops_nn_reglu, mlr_pipeops_nn_relu, mlr_pipeops_nn_relu6, mlr_pipeops_nn_reshape, mlr_pipeops_nn_rrelu, mlr_pipeops_nn_selu, mlr_pipeops_nn_sigmoid, mlr_pipeops_nn_softmax, mlr_pipeops_nn_softplus, mlr_pipeops_nn_softshrink, mlr_pipeops_nn_softsign, mlr_pipeops_nn_squeeze, mlr_pipeops_nn_tanh, mlr_pipeops_nn_tanhshrink, mlr_pipeops_nn_threshold, mlr_pipeops_nn_tokenizer_categ, mlr_pipeops_nn_tokenizer_num, mlr_pipeops_nn_unsqueeze, mlr_pipeops_torch_ingress, mlr_pipeops_torch_ingress_categ, mlr_pipeops_torch_ingress_ltnsr, mlr_pipeops_torch_ingress_num, mlr_pipeops_torch_loss, mlr_pipeops_torch_model, mlr_pipeops_torch_model_classif, mlr_pipeops_torch_model_regr

Examples


# Construct the PipeOp
pipeop = po("nn_max_pool3d")
pipeop
# The available parameters
pipeop$param_set


Merge Operation

Description

Base class for merge operations such as addition (PipeOpTorchMergeSum), multiplication (PipeOpTorchMergeProd), or concatenation (PipeOpTorchMergeCat). A small branch-and-merge example is sketched at the end of this entry.

Parameters

See the respective child class.

State

The state is the value calculated by the public method shapes_out().

Input and Output Channels

PipeOpTorchMerge has either a vararg input channel if the constructor argument innum is not set, or input channels "input1", ..., "input<innum>". There is one output channel "output". For an explanation see PipeOpTorch.

Internals

By default, the private$.shapes_out() method outputs the broadcast of the input shapes. There are two things to be aware of:

  1. NAs are assumed to represent the batch dimension (this should almost always be the batch size in the first dimension).

  2. Tensors are expected to have the same number of dimensions, i.e. missing dimensions are not filled with 1s. The reason is again that the first dimension should be the batch dimension. This private method can be overwritten by PipeOpTorchs inheriting from this class.

Super classes

mlr3pipelines::PipeOp -> mlr3torch::PipeOpTorch -> PipeOpTorchMerge

Methods

Public methods

Inherited methods

Method new()

Creates a new instance of this R6 class.

Usage
PipeOpTorchMerge$new(
  id,
  module_generator,
  param_set = ps(),
  innum = 0,
  param_vals = list()
)
Arguments
id

(character(1))
Identifier of the resulting object.

module_generator

(nn_module_generator)
The torch module generator.

param_set

(ParamSet)
The parameter set.

innum

(integer(1))
The number of inputs. Default is 0 which means there is one vararg input channel.

param_vals

(list())
List of hyperparameter settings, overwriting the hyperparameter settings that would otherwise be set during construction.


Method clone()

The objects of this class are cloneable with this method.

Usage
PipeOpTorchMerge$clone(deep = FALSE)
Arguments
deep

Whether to make a deep clone.

See Also

Other PipeOps: mlr_pipeops_nn_adaptive_avg_pool1d, mlr_pipeops_nn_adaptive_avg_pool2d, mlr_pipeops_nn_adaptive_avg_pool3d, mlr_pipeops_nn_avg_pool1d, mlr_pipeops_nn_avg_pool2d, mlr_pipeops_nn_avg_pool3d, mlr_pipeops_nn_batch_norm1d, mlr_pipeops_nn_batch_norm2d, mlr_pipeops_nn_batch_norm3d, mlr_pipeops_nn_block, mlr_pipeops_nn_celu, mlr_pipeops_nn_conv1d, mlr_pipeops_nn_conv2d, mlr_pipeops_nn_conv3d, mlr_pipeops_nn_conv_transpose1d, mlr_pipeops_nn_conv_transpose2d, mlr_pipeops_nn_conv_transpose3d, mlr_pipeops_nn_dropout, mlr_pipeops_nn_elu, mlr_pipeops_nn_flatten, mlr_pipeops_nn_ft_cls, mlr_pipeops_nn_ft_transformer_block, mlr_pipeops_nn_geglu, mlr_pipeops_nn_gelu, mlr_pipeops_nn_glu, mlr_pipeops_nn_hardshrink, mlr_pipeops_nn_hardsigmoid, mlr_pipeops_nn_hardtanh, mlr_pipeops_nn_head, mlr_pipeops_nn_identity, mlr_pipeops_nn_layer_norm, mlr_pipeops_nn_leaky_relu, mlr_pipeops_nn_linear, mlr_pipeops_nn_log_sigmoid, mlr_pipeops_nn_max_pool1d, mlr_pipeops_nn_max_pool2d, mlr_pipeops_nn_max_pool3d, mlr_pipeops_nn_merge_cat, mlr_pipeops_nn_merge_prod, mlr_pipeops_nn_merge_sum, mlr_pipeops_nn_prelu, mlr_pipeops_nn_reglu, mlr_pipeops_nn_relu, mlr_pipeops_nn_relu6, mlr_pipeops_nn_reshape, mlr_pipeops_nn_rrelu, mlr_pipeops_nn_selu, mlr_pipeops_nn_sigmoid, mlr_pipeops_nn_softmax, mlr_pipeops_nn_softplus, mlr_pipeops_nn_softshrink, mlr_pipeops_nn_softsign, mlr_pipeops_nn_squeeze, mlr_pipeops_nn_tanh, mlr_pipeops_nn_tanhshrink, mlr_pipeops_nn_threshold, mlr_pipeops_nn_tokenizer_categ, mlr_pipeops_nn_tokenizer_num, mlr_pipeops_nn_unsqueeze, mlr_pipeops_torch_ingress, mlr_pipeops_torch_ingress_categ, mlr_pipeops_torch_ingress_ltnsr, mlr_pipeops_torch_ingress_num, mlr_pipeops_torch_loss, mlr_pipeops_torch_model, mlr_pipeops_torch_model_classif, mlr_pipeops_torch_model_regr
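
Examples


# A hedged sketch of a branch-and-merge architecture: the activation's
# output is copied into two branches whose results are added by
# nn_merge_sum. The branch ids and layer sizes are illustrative.
graph = po("torch_ingress_num") %>>%
  po("nn_linear", out_features = 8) %>>%
  po("nn_relu") %>>%
  po("copy", outnum = 2) %>>%
  gunion(list(
    po("nn_linear", id = "branch1", out_features = 8),
    po("nn_linear", id = "branch2", out_features = 8)
  )) %>>%
  po("nn_merge_sum")
graph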


Merge by Concatenation

Description

Concatenates multiple tensors along a given dimension. No broadcasting rules are applied here; you must reshape the tensors beforehand so that their shapes match (except along the concatenation dimension).

nn_module

Calls nn_merge_cat() when trained.

Parameters

Input and Output Channels

PipeOpTorchMergeCat has either a vararg input channel if the constructor argument innum is not set, or input channels "input1", ..., "input<innum>". There is one output channel "output". For an explanation see PipeOpTorch.

State

The state is the value calculated by the public method ⁠$shapes_out()⁠.

Super classes

mlr3pipelines::PipeOp -> mlr3torch::PipeOpTorch -> mlr3torch::PipeOpTorchMerge -> PipeOpTorchMergeCat

Methods

Public methods

Inherited methods

Method new()

Creates a new instance of this R6 class.

Usage
PipeOpTorchMergeCat$new(id = "nn_merge_cat", innum = 0, param_vals = list())
Arguments
id

(character(1))
Identifier of the resulting object.

innum

(integer(1))
The number of inputs. Default is 0 which means there is one vararg input channel.

param_vals

(list())
List of hyperparameter settings, overwriting the hyperparameter settings that would otherwise be set during construction.


Method speak()

What does the cat say?

Usage
PipeOpTorchMergeCat$speak()

Method clone()

The objects of this class are cloneable with this method.

Usage
PipeOpTorchMergeCat$clone(deep = FALSE)
Arguments
deep

Whether to make a deep clone.

See Also

Other PipeOps: mlr_pipeops_nn_adaptive_avg_pool1d, mlr_pipeops_nn_adaptive_avg_pool2d, mlr_pipeops_nn_adaptive_avg_pool3d, mlr_pipeops_nn_avg_pool1d, mlr_pipeops_nn_avg_pool2d, mlr_pipeops_nn_avg_pool3d, mlr_pipeops_nn_batch_norm1d, mlr_pipeops_nn_batch_norm2d, mlr_pipeops_nn_batch_norm3d, mlr_pipeops_nn_block, mlr_pipeops_nn_celu, mlr_pipeops_nn_conv1d, mlr_pipeops_nn_conv2d, mlr_pipeops_nn_conv3d, mlr_pipeops_nn_conv_transpose1d, mlr_pipeops_nn_conv_transpose2d, mlr_pipeops_nn_conv_transpose3d, mlr_pipeops_nn_dropout, mlr_pipeops_nn_elu, mlr_pipeops_nn_flatten, mlr_pipeops_nn_ft_cls, mlr_pipeops_nn_ft_transformer_block, mlr_pipeops_nn_geglu, mlr_pipeops_nn_gelu, mlr_pipeops_nn_glu, mlr_pipeops_nn_hardshrink, mlr_pipeops_nn_hardsigmoid, mlr_pipeops_nn_hardtanh, mlr_pipeops_nn_head, mlr_pipeops_nn_identity, mlr_pipeops_nn_layer_norm, mlr_pipeops_nn_leaky_relu, mlr_pipeops_nn_linear, mlr_pipeops_nn_log_sigmoid, mlr_pipeops_nn_max_pool1d, mlr_pipeops_nn_max_pool2d, mlr_pipeops_nn_max_pool3d, mlr_pipeops_nn_merge, mlr_pipeops_nn_merge_prod, mlr_pipeops_nn_merge_sum, mlr_pipeops_nn_prelu, mlr_pipeops_nn_reglu, mlr_pipeops_nn_relu, mlr_pipeops_nn_relu6, mlr_pipeops_nn_reshape, mlr_pipeops_nn_rrelu, mlr_pipeops_nn_selu, mlr_pipeops_nn_sigmoid, mlr_pipeops_nn_softmax, mlr_pipeops_nn_softplus, mlr_pipeops_nn_softshrink, mlr_pipeops_nn_softsign, mlr_pipeops_nn_squeeze, mlr_pipeops_nn_tanh, mlr_pipeops_nn_tanhshrink, mlr_pipeops_nn_threshold, mlr_pipeops_nn_tokenizer_categ, mlr_pipeops_nn_tokenizer_num, mlr_pipeops_nn_unsqueeze, mlr_pipeops_torch_ingress, mlr_pipeops_torch_ingress_categ, mlr_pipeops_torch_ingress_ltnsr, mlr_pipeops_torch_ingress_num, mlr_pipeops_torch_loss, mlr_pipeops_torch_model, mlr_pipeops_torch_model_classif, mlr_pipeops_torch_model_regr

Examples


# Construct the PipeOp
pipeop = po("nn_merge_cat")
pipeop
# The available parameters
pipeop$param_set
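# A hedged sketch: setting innum at construction replaces the vararg
# channel by the named input channels "input1" and "input2".
po("nn_merge_cat", innum = 2)$input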


Merge by Product

Description

Calculates the product of all input tensors.

nn_module

Calls nn_merge_prod() when trained.

Parameters

No parameters.

Input and Output Channels

PipeOpTorchMergeProd has either a vararg input channel if the constructor argument innum is not set, or input channels "input1", ..., "input<innum>". There is one output channel "output". For an explanation see PipeOpTorch.

State

The state is the value calculated by the public method ⁠$shapes_out()⁠.

Super classes

mlr3pipelines::PipeOp -> mlr3torch::PipeOpTorch -> mlr3torch::PipeOpTorchMerge -> PipeOpTorchMergeProd

Methods

Public methods

Inherited methods

Method new()

Creates a new instance of this R6 class.

Usage
PipeOpTorchMergeProd$new(id = "nn_merge_prod", innum = 0, param_vals = list())
Arguments
id

(character(1))
Identifier of the resulting object.

innum

(integer(1))
The number of inputs. Default is 0 which means there is one vararg input channel.

param_vals

(list())
List of hyperparameter settings, overwriting the hyperparameter settings that would otherwise be set during construction.


Method clone()

The objects of this class are cloneable with this method.

Usage
PipeOpTorchMergeProd$clone(deep = FALSE)
Arguments
deep

Whether to make a deep clone.

See Also

Other PipeOps: mlr_pipeops_nn_adaptive_avg_pool1d, mlr_pipeops_nn_adaptive_avg_pool2d, mlr_pipeops_nn_adaptive_avg_pool3d, mlr_pipeops_nn_avg_pool1d, mlr_pipeops_nn_avg_pool2d, mlr_pipeops_nn_avg_pool3d, mlr_pipeops_nn_batch_norm1d, mlr_pipeops_nn_batch_norm2d, mlr_pipeops_nn_batch_norm3d, mlr_pipeops_nn_block, mlr_pipeops_nn_celu, mlr_pipeops_nn_conv1d, mlr_pipeops_nn_conv2d, mlr_pipeops_nn_conv3d, mlr_pipeops_nn_conv_transpose1d, mlr_pipeops_nn_conv_transpose2d, mlr_pipeops_nn_conv_transpose3d, mlr_pipeops_nn_dropout, mlr_pipeops_nn_elu, mlr_pipeops_nn_flatten, mlr_pipeops_nn_ft_cls, mlr_pipeops_nn_ft_transformer_block, mlr_pipeops_nn_geglu, mlr_pipeops_nn_gelu, mlr_pipeops_nn_glu, mlr_pipeops_nn_hardshrink, mlr_pipeops_nn_hardsigmoid, mlr_pipeops_nn_hardtanh, mlr_pipeops_nn_head, mlr_pipeops_nn_identity, mlr_pipeops_nn_layer_norm, mlr_pipeops_nn_leaky_relu, mlr_pipeops_nn_linear, mlr_pipeops_nn_log_sigmoid, mlr_pipeops_nn_max_pool1d, mlr_pipeops_nn_max_pool2d, mlr_pipeops_nn_max_pool3d, mlr_pipeops_nn_merge, mlr_pipeops_nn_merge_cat, mlr_pipeops_nn_merge_sum, mlr_pipeops_nn_prelu, mlr_pipeops_nn_reglu, mlr_pipeops_nn_relu, mlr_pipeops_nn_relu6, mlr_pipeops_nn_reshape, mlr_pipeops_nn_rrelu, mlr_pipeops_nn_selu, mlr_pipeops_nn_sigmoid, mlr_pipeops_nn_softmax, mlr_pipeops_nn_softplus, mlr_pipeops_nn_softshrink, mlr_pipeops_nn_softsign, mlr_pipeops_nn_squeeze, mlr_pipeops_nn_tanh, mlr_pipeops_nn_tanhshrink, mlr_pipeops_nn_threshold, mlr_pipeops_nn_tokenizer_categ, mlr_pipeops_nn_tokenizer_num, mlr_pipeops_nn_unsqueeze, mlr_pipeops_torch_ingress, mlr_pipeops_torch_ingress_categ, mlr_pipeops_torch_ingress_ltnsr, mlr_pipeops_torch_ingress_num, mlr_pipeops_torch_loss, mlr_pipeops_torch_model, mlr_pipeops_torch_model_classif, mlr_pipeops_torch_model_regr

Examples


# Construct the PipeOp
pipeop = po("nn_merge_prod")
pipeop
# The available parameters
pipeop$param_set


Merge by Summation

Description

Calculates the sum of all input tensors.

nn_module

Calls nn_merge_sum() when trained.

Parameters

No parameters.

Input and Output Channels

PipeOpTorchMerge PipeOps have either a vararg input channel (if the constructor argument innum is not set) or input channels "input1", ..., "input<innum>". There is one output channel "output". For an explanation see PipeOpTorch.
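
As a rough sketch of the shape behaviour (the exact call convention of the $shapes_out() method is assumed here), summation requires inputs of a common shape and preserves it:

# two inputs of shape (batch, 10) are summed into one tensor of shape (batch, 10)
po("nn_merge_sum")$shapes_out(list(c(NA, 10), c(NA, 10)))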

State

The state is the value calculated by the public method ⁠$shapes_out()⁠.

Super classes

mlr3pipelines::PipeOp -> mlr3torch::PipeOpTorch -> mlr3torch::PipeOpTorchMerge -> PipeOpTorchMergeSum

Methods

Public methods

Inherited methods

Method new()

Creates a new instance of this R6 class.

Usage
PipeOpTorchMergeSum$new(id = "nn_merge_sum", innum = 0, param_vals = list())
Arguments
id

(character(1))
Identifier of the resulting object.

innum

(integer(1))
The number of inputs. Default is 0 which means there is one vararg input channel.

param_vals

(list())
List of hyperparameter settings, overwriting the hyperparameter settings that would otherwise be set during construction.


Method clone()

The objects of this class are cloneable with this method.

Usage
PipeOpTorchMergeSum$clone(deep = FALSE)
Arguments
deep

Whether to make a deep clone.

See Also

Other PipeOps: mlr_pipeops_nn_adaptive_avg_pool1d, mlr_pipeops_nn_adaptive_avg_pool2d, mlr_pipeops_nn_adaptive_avg_pool3d, mlr_pipeops_nn_avg_pool1d, mlr_pipeops_nn_avg_pool2d, mlr_pipeops_nn_avg_pool3d, mlr_pipeops_nn_batch_norm1d, mlr_pipeops_nn_batch_norm2d, mlr_pipeops_nn_batch_norm3d, mlr_pipeops_nn_block, mlr_pipeops_nn_celu, mlr_pipeops_nn_conv1d, mlr_pipeops_nn_conv2d, mlr_pipeops_nn_conv3d, mlr_pipeops_nn_conv_transpose1d, mlr_pipeops_nn_conv_transpose2d, mlr_pipeops_nn_conv_transpose3d, mlr_pipeops_nn_dropout, mlr_pipeops_nn_elu, mlr_pipeops_nn_flatten, mlr_pipeops_nn_ft_cls, mlr_pipeops_nn_ft_transformer_block, mlr_pipeops_nn_geglu, mlr_pipeops_nn_gelu, mlr_pipeops_nn_glu, mlr_pipeops_nn_hardshrink, mlr_pipeops_nn_hardsigmoid, mlr_pipeops_nn_hardtanh, mlr_pipeops_nn_head, mlr_pipeops_nn_identity, mlr_pipeops_nn_layer_norm, mlr_pipeops_nn_leaky_relu, mlr_pipeops_nn_linear, mlr_pipeops_nn_log_sigmoid, mlr_pipeops_nn_max_pool1d, mlr_pipeops_nn_max_pool2d, mlr_pipeops_nn_max_pool3d, mlr_pipeops_nn_merge, mlr_pipeops_nn_merge_cat, mlr_pipeops_nn_merge_prod, mlr_pipeops_nn_prelu, mlr_pipeops_nn_reglu, mlr_pipeops_nn_relu, mlr_pipeops_nn_relu6, mlr_pipeops_nn_reshape, mlr_pipeops_nn_rrelu, mlr_pipeops_nn_selu, mlr_pipeops_nn_sigmoid, mlr_pipeops_nn_softmax, mlr_pipeops_nn_softplus, mlr_pipeops_nn_softshrink, mlr_pipeops_nn_softsign, mlr_pipeops_nn_squeeze, mlr_pipeops_nn_tanh, mlr_pipeops_nn_tanhshrink, mlr_pipeops_nn_threshold, mlr_pipeops_nn_tokenizer_categ, mlr_pipeops_nn_tokenizer_num, mlr_pipeops_nn_unsqueeze, mlr_pipeops_torch_ingress, mlr_pipeops_torch_ingress_categ, mlr_pipeops_torch_ingress_ltnsr, mlr_pipeops_torch_ingress_num, mlr_pipeops_torch_loss, mlr_pipeops_torch_model, mlr_pipeops_torch_model_classif, mlr_pipeops_torch_model_regr

Examples


# Construct the PipeOp
pipeop = po("nn_merge_sum")
pipeop
# The available parameters
pipeop$param_set


PReLU Activation Function

Description

Applies element-wise the function PReLU(x) = max(0, x) + weight * min(0, x), where weight is a learnable parameter.
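
To illustrate the formula, a minimal sketch using torch directly (the default initial weight of 0.25 is an assumption taken from torch::nn_prelu()):

# negative values are scaled by the learnable weight, positive values pass through
act = torch::nn_prelu()
act(torch::torch_tensor(c(-2, 0, 3)))  # approx. -0.5, 0.0, 3.0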

nn_module

Calls torch::nn_prelu() when trained.

Parameters

Input and Output Channels

One input channel called "input" and one output channel called "output". For an explanation see PipeOpTorch.

State

The state is the value calculated by the public method ⁠$shapes_out()⁠.

Super classes

mlr3pipelines::PipeOp -> mlr3torch::PipeOpTorch -> PipeOpTorchPReLU

Methods

Public methods

Inherited methods

Method new()

Creates a new instance of this R6 class.

Usage
PipeOpTorchPReLU$new(id = "nn_prelu", param_vals = list())
Arguments
id

(character(1))
Identifier of the resulting object.

param_vals

(list())
List of hyperparameter settings, overwriting the hyperparameter settings that would otherwise be set during construction.


Method clone()

The objects of this class are cloneable with this method.

Usage
PipeOpTorchPReLU$clone(deep = FALSE)
Arguments
deep

Whether to make a deep clone.

See Also

Other PipeOps: mlr_pipeops_nn_adaptive_avg_pool1d, mlr_pipeops_nn_adaptive_avg_pool2d, mlr_pipeops_nn_adaptive_avg_pool3d, mlr_pipeops_nn_avg_pool1d, mlr_pipeops_nn_avg_pool2d, mlr_pipeops_nn_avg_pool3d, mlr_pipeops_nn_batch_norm1d, mlr_pipeops_nn_batch_norm2d, mlr_pipeops_nn_batch_norm3d, mlr_pipeops_nn_block, mlr_pipeops_nn_celu, mlr_pipeops_nn_conv1d, mlr_pipeops_nn_conv2d, mlr_pipeops_nn_conv3d, mlr_pipeops_nn_conv_transpose1d, mlr_pipeops_nn_conv_transpose2d, mlr_pipeops_nn_conv_transpose3d, mlr_pipeops_nn_dropout, mlr_pipeops_nn_elu, mlr_pipeops_nn_flatten, mlr_pipeops_nn_ft_cls, mlr_pipeops_nn_ft_transformer_block, mlr_pipeops_nn_geglu, mlr_pipeops_nn_gelu, mlr_pipeops_nn_glu, mlr_pipeops_nn_hardshrink, mlr_pipeops_nn_hardsigmoid, mlr_pipeops_nn_hardtanh, mlr_pipeops_nn_head, mlr_pipeops_nn_identity, mlr_pipeops_nn_layer_norm, mlr_pipeops_nn_leaky_relu, mlr_pipeops_nn_linear, mlr_pipeops_nn_log_sigmoid, mlr_pipeops_nn_max_pool1d, mlr_pipeops_nn_max_pool2d, mlr_pipeops_nn_max_pool3d, mlr_pipeops_nn_merge, mlr_pipeops_nn_merge_cat, mlr_pipeops_nn_merge_prod, mlr_pipeops_nn_merge_sum, mlr_pipeops_nn_reglu, mlr_pipeops_nn_relu, mlr_pipeops_nn_relu6, mlr_pipeops_nn_reshape, mlr_pipeops_nn_rrelu, mlr_pipeops_nn_selu, mlr_pipeops_nn_sigmoid, mlr_pipeops_nn_softmax, mlr_pipeops_nn_softplus, mlr_pipeops_nn_softshrink, mlr_pipeops_nn_softsign, mlr_pipeops_nn_squeeze, mlr_pipeops_nn_tanh, mlr_pipeops_nn_tanhshrink, mlr_pipeops_nn_threshold, mlr_pipeops_nn_tokenizer_categ, mlr_pipeops_nn_tokenizer_num, mlr_pipeops_nn_unsqueeze, mlr_pipeops_torch_ingress, mlr_pipeops_torch_ingress_categ, mlr_pipeops_torch_ingress_ltnsr, mlr_pipeops_torch_ingress_num, mlr_pipeops_torch_loss, mlr_pipeops_torch_model, mlr_pipeops_torch_model_classif, mlr_pipeops_torch_model_regr

Examples


# Construct the PipeOp
pipeop = po("nn_prelu")
pipeop
# The available parameters
pipeop$param_set


ReGLU Activation Function

Description

Rectified Gated Linear Unit (ReGLU) activation function. See nn_reglu for details.

Parameters

No parameters.

Input and Output Channels

One input channel called "input" and one output channel called "output". For an explanation see PipeOpTorch.

State

The state is the value calculated by the public method ⁠$shapes_out()⁠.

Super classes

mlr3pipelines::PipeOp -> mlr3torch::PipeOpTorch -> PipeOpTorchReGLU

Methods

Public methods

Inherited methods

Method new()

Creates a new instance of this R6 class.

Usage
PipeOpTorchReGLU$new(id = "nn_reglu", param_vals = list())
Arguments
id

(character(1))
Identifier of the resulting object.

param_vals

(list())
List of hyperparameter settings, overwriting the hyperparameter settings that would otherwise be set during construction.


Method clone()

The objects of this class are cloneable with this method.

Usage
PipeOpTorchReGLU$clone(deep = FALSE)
Arguments
deep

Whether to make a deep clone.

See Also

Other PipeOps: mlr_pipeops_nn_adaptive_avg_pool1d, mlr_pipeops_nn_adaptive_avg_pool2d, mlr_pipeops_nn_adaptive_avg_pool3d, mlr_pipeops_nn_avg_pool1d, mlr_pipeops_nn_avg_pool2d, mlr_pipeops_nn_avg_pool3d, mlr_pipeops_nn_batch_norm1d, mlr_pipeops_nn_batch_norm2d, mlr_pipeops_nn_batch_norm3d, mlr_pipeops_nn_block, mlr_pipeops_nn_celu, mlr_pipeops_nn_conv1d, mlr_pipeops_nn_conv2d, mlr_pipeops_nn_conv3d, mlr_pipeops_nn_conv_transpose1d, mlr_pipeops_nn_conv_transpose2d, mlr_pipeops_nn_conv_transpose3d, mlr_pipeops_nn_dropout, mlr_pipeops_nn_elu, mlr_pipeops_nn_flatten, mlr_pipeops_nn_ft_cls, mlr_pipeops_nn_ft_transformer_block, mlr_pipeops_nn_geglu, mlr_pipeops_nn_gelu, mlr_pipeops_nn_glu, mlr_pipeops_nn_hardshrink, mlr_pipeops_nn_hardsigmoid, mlr_pipeops_nn_hardtanh, mlr_pipeops_nn_head, mlr_pipeops_nn_identity, mlr_pipeops_nn_layer_norm, mlr_pipeops_nn_leaky_relu, mlr_pipeops_nn_linear, mlr_pipeops_nn_log_sigmoid, mlr_pipeops_nn_max_pool1d, mlr_pipeops_nn_max_pool2d, mlr_pipeops_nn_max_pool3d, mlr_pipeops_nn_merge, mlr_pipeops_nn_merge_cat, mlr_pipeops_nn_merge_prod, mlr_pipeops_nn_merge_sum, mlr_pipeops_nn_prelu, mlr_pipeops_nn_relu, mlr_pipeops_nn_relu6, mlr_pipeops_nn_reshape, mlr_pipeops_nn_rrelu, mlr_pipeops_nn_selu, mlr_pipeops_nn_sigmoid, mlr_pipeops_nn_softmax, mlr_pipeops_nn_softplus, mlr_pipeops_nn_softshrink, mlr_pipeops_nn_softsign, mlr_pipeops_nn_squeeze, mlr_pipeops_nn_tanh, mlr_pipeops_nn_tanhshrink, mlr_pipeops_nn_threshold, mlr_pipeops_nn_tokenizer_categ, mlr_pipeops_nn_tokenizer_num, mlr_pipeops_nn_unsqueeze, mlr_pipeops_torch_ingress, mlr_pipeops_torch_ingress_categ, mlr_pipeops_torch_ingress_ltnsr, mlr_pipeops_torch_ingress_num, mlr_pipeops_torch_loss, mlr_pipeops_torch_model, mlr_pipeops_torch_model_classif, mlr_pipeops_torch_model_regr

Examples


# Construct the PipeOp
pipeop = po("nn_reglu")
pipeop
# The available parameters
pipeop$param_set


ReLU Activation Function

Description

Applies the rectified linear unit function element-wise.

nn_module

Calls torch::nn_relu() when trained.

Parameters

Input and Output Channels

One input channel called "input" and one output channel called "output". For an explanation see PipeOpTorch.

State

The state is the value calculated by the public method ⁠$shapes_out()⁠.

Super classes

mlr3pipelines::PipeOp -> mlr3torch::PipeOpTorch -> PipeOpTorchReLU

Methods

Public methods

Inherited methods

Method new()

Creates a new instance of this R6 class.

Usage
PipeOpTorchReLU$new(id = "nn_relu", param_vals = list())
Arguments
id

(character(1))
Identifier of the resulting object.

param_vals

(list())
List of hyperparameter settings, overwriting the hyperparameter settings that would otherwise be set during construction.


Method clone()

The objects of this class are cloneable with this method.

Usage
PipeOpTorchReLU$clone(deep = FALSE)
Arguments
deep

Whether to make a deep clone.

See Also

Other PipeOps: mlr_pipeops_nn_adaptive_avg_pool1d, mlr_pipeops_nn_adaptive_avg_pool2d, mlr_pipeops_nn_adaptive_avg_pool3d, mlr_pipeops_nn_avg_pool1d, mlr_pipeops_nn_avg_pool2d, mlr_pipeops_nn_avg_pool3d, mlr_pipeops_nn_batch_norm1d, mlr_pipeops_nn_batch_norm2d, mlr_pipeops_nn_batch_norm3d, mlr_pipeops_nn_block, mlr_pipeops_nn_celu, mlr_pipeops_nn_conv1d, mlr_pipeops_nn_conv2d, mlr_pipeops_nn_conv3d, mlr_pipeops_nn_conv_transpose1d, mlr_pipeops_nn_conv_transpose2d, mlr_pipeops_nn_conv_transpose3d, mlr_pipeops_nn_dropout, mlr_pipeops_nn_elu, mlr_pipeops_nn_flatten, mlr_pipeops_nn_ft_cls, mlr_pipeops_nn_ft_transformer_block, mlr_pipeops_nn_geglu, mlr_pipeops_nn_gelu, mlr_pipeops_nn_glu, mlr_pipeops_nn_hardshrink, mlr_pipeops_nn_hardsigmoid, mlr_pipeops_nn_hardtanh, mlr_pipeops_nn_head, mlr_pipeops_nn_identity, mlr_pipeops_nn_layer_norm, mlr_pipeops_nn_leaky_relu, mlr_pipeops_nn_linear, mlr_pipeops_nn_log_sigmoid, mlr_pipeops_nn_max_pool1d, mlr_pipeops_nn_max_pool2d, mlr_pipeops_nn_max_pool3d, mlr_pipeops_nn_merge, mlr_pipeops_nn_merge_cat, mlr_pipeops_nn_merge_prod, mlr_pipeops_nn_merge_sum, mlr_pipeops_nn_prelu, mlr_pipeops_nn_reglu, mlr_pipeops_nn_relu6, mlr_pipeops_nn_reshape, mlr_pipeops_nn_rrelu, mlr_pipeops_nn_selu, mlr_pipeops_nn_sigmoid, mlr_pipeops_nn_softmax, mlr_pipeops_nn_softplus, mlr_pipeops_nn_softshrink, mlr_pipeops_nn_softsign, mlr_pipeops_nn_squeeze, mlr_pipeops_nn_tanh, mlr_pipeops_nn_tanhshrink, mlr_pipeops_nn_threshold, mlr_pipeops_nn_tokenizer_categ, mlr_pipeops_nn_tokenizer_num, mlr_pipeops_nn_unsqueeze, mlr_pipeops_torch_ingress, mlr_pipeops_torch_ingress_categ, mlr_pipeops_torch_ingress_ltnsr, mlr_pipeops_torch_ingress_num, mlr_pipeops_torch_loss, mlr_pipeops_torch_model, mlr_pipeops_torch_model_classif, mlr_pipeops_torch_model_regr

Examples


# Construct the PipeOp
pipeop = po("nn_relu")
pipeop
# The available parameters
pipeop$param_set


ReLU6 Activation Function

Description

Applies the element-wise function ReLU6(x) = min(max(0,x), 6).
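
A quick numeric sketch of the clamping behaviour, using torch directly rather than the PipeOp:

# values are clamped to the interval [0, 6]
act = torch::nn_relu6()
act(torch::torch_tensor(c(-1, 3, 8)))  # 0, 3, 6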

nn_module

Calls torch::nn_relu6() when trained.

Parameters

Input and Output Channels

One input channel called "input" and one output channel called "output". For an explanation see PipeOpTorch.

State

The state is the value calculated by the public method ⁠$shapes_out()⁠.

Super classes

mlr3pipelines::PipeOp -> mlr3torch::PipeOpTorch -> PipeOpTorchReLU6

Methods

Public methods

Inherited methods

Method new()

Creates a new instance of this R6 class.

Usage
PipeOpTorchReLU6$new(id = "nn_relu6", param_vals = list())
Arguments
id

(character(1))
Identifier of the resulting object.

param_vals

(list())
List of hyperparameter settings, overwriting the hyperparameter settings that would otherwise be set during construction.


Method clone()

The objects of this class are cloneable with this method.

Usage
PipeOpTorchReLU6$clone(deep = FALSE)
Arguments
deep

Whether to make a deep clone.

See Also

Other PipeOps: mlr_pipeops_nn_adaptive_avg_pool1d, mlr_pipeops_nn_adaptive_avg_pool2d, mlr_pipeops_nn_adaptive_avg_pool3d, mlr_pipeops_nn_avg_pool1d, mlr_pipeops_nn_avg_pool2d, mlr_pipeops_nn_avg_pool3d, mlr_pipeops_nn_batch_norm1d, mlr_pipeops_nn_batch_norm2d, mlr_pipeops_nn_batch_norm3d, mlr_pipeops_nn_block, mlr_pipeops_nn_celu, mlr_pipeops_nn_conv1d, mlr_pipeops_nn_conv2d, mlr_pipeops_nn_conv3d, mlr_pipeops_nn_conv_transpose1d, mlr_pipeops_nn_conv_transpose2d, mlr_pipeops_nn_conv_transpose3d, mlr_pipeops_nn_dropout, mlr_pipeops_nn_elu, mlr_pipeops_nn_flatten, mlr_pipeops_nn_ft_cls, mlr_pipeops_nn_ft_transformer_block, mlr_pipeops_nn_geglu, mlr_pipeops_nn_gelu, mlr_pipeops_nn_glu, mlr_pipeops_nn_hardshrink, mlr_pipeops_nn_hardsigmoid, mlr_pipeops_nn_hardtanh, mlr_pipeops_nn_head, mlr_pipeops_nn_identity, mlr_pipeops_nn_layer_norm, mlr_pipeops_nn_leaky_relu, mlr_pipeops_nn_linear, mlr_pipeops_nn_log_sigmoid, mlr_pipeops_nn_max_pool1d, mlr_pipeops_nn_max_pool2d, mlr_pipeops_nn_max_pool3d, mlr_pipeops_nn_merge, mlr_pipeops_nn_merge_cat, mlr_pipeops_nn_merge_prod, mlr_pipeops_nn_merge_sum, mlr_pipeops_nn_prelu, mlr_pipeops_nn_reglu, mlr_pipeops_nn_relu, mlr_pipeops_nn_reshape, mlr_pipeops_nn_rrelu, mlr_pipeops_nn_selu, mlr_pipeops_nn_sigmoid, mlr_pipeops_nn_softmax, mlr_pipeops_nn_softplus, mlr_pipeops_nn_softshrink, mlr_pipeops_nn_softsign, mlr_pipeops_nn_squeeze, mlr_pipeops_nn_tanh, mlr_pipeops_nn_tanhshrink, mlr_pipeops_nn_threshold, mlr_pipeops_nn_tokenizer_categ, mlr_pipeops_nn_tokenizer_num, mlr_pipeops_nn_unsqueeze, mlr_pipeops_torch_ingress, mlr_pipeops_torch_ingress_categ, mlr_pipeops_torch_ingress_ltnsr, mlr_pipeops_torch_ingress_num, mlr_pipeops_torch_loss, mlr_pipeops_torch_model, mlr_pipeops_torch_model_classif, mlr_pipeops_torch_model_regr

Examples


# Construct the PipeOp
pipeop = po("nn_relu6")
pipeop
# The available parameters
pipeop$param_set


Reshape a Tensor

Description

Reshape a tensor to the given shape.

nn_module

Calls nn_reshape() when trained. This internally calls torch::torch_reshape() with the given shape.
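
For reference, a minimal sketch of the underlying torch operation (not using the PipeOp itself):

x = torch::torch_randn(2, 2, 3)
# reshape to (2, 6); the total number of elements must stay the same
torch::torch_reshape(x, c(2, 6))$shape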

Parameters

Input and Output Channels

One input channel called "input" and one output channel called "output". For an explanation see PipeOpTorch.

State

The state is the value calculated by the public method ⁠$shapes_out()⁠.

Super classes

mlr3pipelines::PipeOp -> mlr3torch::PipeOpTorch -> PipeOpTorchReshape

Methods

Public methods

Inherited methods

Method new()

Creates a new instance of this R6 class.

Usage
PipeOpTorchReshape$new(id = "nn_reshape", param_vals = list())
Arguments
id

(character(1))
Identifier of the resulting object.

param_vals

(list())
List of hyperparameter settings, overwriting the hyperparameter settings that would otherwise be set during construction.


Method clone()

The objects of this class are cloneable with this method.

Usage
PipeOpTorchReshape$clone(deep = FALSE)
Arguments
deep

Whether to make a deep clone.

See Also

Other PipeOps: mlr_pipeops_nn_adaptive_avg_pool1d, mlr_pipeops_nn_adaptive_avg_pool2d, mlr_pipeops_nn_adaptive_avg_pool3d, mlr_pipeops_nn_avg_pool1d, mlr_pipeops_nn_avg_pool2d, mlr_pipeops_nn_avg_pool3d, mlr_pipeops_nn_batch_norm1d, mlr_pipeops_nn_batch_norm2d, mlr_pipeops_nn_batch_norm3d, mlr_pipeops_nn_block, mlr_pipeops_nn_celu, mlr_pipeops_nn_conv1d, mlr_pipeops_nn_conv2d, mlr_pipeops_nn_conv3d, mlr_pipeops_nn_conv_transpose1d, mlr_pipeops_nn_conv_transpose2d, mlr_pipeops_nn_conv_transpose3d, mlr_pipeops_nn_dropout, mlr_pipeops_nn_elu, mlr_pipeops_nn_flatten, mlr_pipeops_nn_ft_cls, mlr_pipeops_nn_ft_transformer_block, mlr_pipeops_nn_geglu, mlr_pipeops_nn_gelu, mlr_pipeops_nn_glu, mlr_pipeops_nn_hardshrink, mlr_pipeops_nn_hardsigmoid, mlr_pipeops_nn_hardtanh, mlr_pipeops_nn_head, mlr_pipeops_nn_identity, mlr_pipeops_nn_layer_norm, mlr_pipeops_nn_leaky_relu, mlr_pipeops_nn_linear, mlr_pipeops_nn_log_sigmoid, mlr_pipeops_nn_max_pool1d, mlr_pipeops_nn_max_pool2d, mlr_pipeops_nn_max_pool3d, mlr_pipeops_nn_merge, mlr_pipeops_nn_merge_cat, mlr_pipeops_nn_merge_prod, mlr_pipeops_nn_merge_sum, mlr_pipeops_nn_prelu, mlr_pipeops_nn_reglu, mlr_pipeops_nn_relu, mlr_pipeops_nn_relu6, mlr_pipeops_nn_rrelu, mlr_pipeops_nn_selu, mlr_pipeops_nn_sigmoid, mlr_pipeops_nn_softmax, mlr_pipeops_nn_softplus, mlr_pipeops_nn_softshrink, mlr_pipeops_nn_softsign, mlr_pipeops_nn_squeeze, mlr_pipeops_nn_tanh, mlr_pipeops_nn_tanhshrink, mlr_pipeops_nn_threshold, mlr_pipeops_nn_tokenizer_categ, mlr_pipeops_nn_tokenizer_num, mlr_pipeops_nn_unsqueeze, mlr_pipeops_torch_ingress, mlr_pipeops_torch_ingress_categ, mlr_pipeops_torch_ingress_ltnsr, mlr_pipeops_torch_ingress_num, mlr_pipeops_torch_loss, mlr_pipeops_torch_model, mlr_pipeops_torch_model_classif, mlr_pipeops_torch_model_regr

Examples


# Construct the PipeOp
pipeop = po("nn_reshape")
pipeop
# The available parameters
pipeop$param_set


RReLU Activation Function

Description

Randomized leaky ReLU.

nn_module

Calls torch::nn_rrelu() when trained.

Parameters

Input and Output Channels

One input channel called "input" and one output channel called "output". For an explanation see PipeOpTorch.

State

The state is the value calculated by the public method ⁠$shapes_out()⁠.

Super classes

mlr3pipelines::PipeOp -> mlr3torch::PipeOpTorch -> PipeOpTorchRReLU

Methods

Public methods

Inherited methods

Method new()

Creates a new instance of this R6 class.

Usage
PipeOpTorchRReLU$new(id = "nn_rrelu", param_vals = list())
Arguments
id

(character(1))
Identifier of the resulting object.

param_vals

(list())
List of hyperparameter settings, overwriting the hyperparameter settings that would otherwise be set during construction.


Method clone()

The objects of this class are cloneable with this method.

Usage
PipeOpTorchRReLU$clone(deep = FALSE)
Arguments
deep

Whether to make a deep clone.

See Also

Other PipeOps: mlr_pipeops_nn_adaptive_avg_pool1d, mlr_pipeops_nn_adaptive_avg_pool2d, mlr_pipeops_nn_adaptive_avg_pool3d, mlr_pipeops_nn_avg_pool1d, mlr_pipeops_nn_avg_pool2d, mlr_pipeops_nn_avg_pool3d, mlr_pipeops_nn_batch_norm1d, mlr_pipeops_nn_batch_norm2d, mlr_pipeops_nn_batch_norm3d, mlr_pipeops_nn_block, mlr_pipeops_nn_celu, mlr_pipeops_nn_conv1d, mlr_pipeops_nn_conv2d, mlr_pipeops_nn_conv3d, mlr_pipeops_nn_conv_transpose1d, mlr_pipeops_nn_conv_transpose2d, mlr_pipeops_nn_conv_transpose3d, mlr_pipeops_nn_dropout, mlr_pipeops_nn_elu, mlr_pipeops_nn_flatten, mlr_pipeops_nn_ft_cls, mlr_pipeops_nn_ft_transformer_block, mlr_pipeops_nn_geglu, mlr_pipeops_nn_gelu, mlr_pipeops_nn_glu, mlr_pipeops_nn_hardshrink, mlr_pipeops_nn_hardsigmoid, mlr_pipeops_nn_hardtanh, mlr_pipeops_nn_head, mlr_pipeops_nn_identity, mlr_pipeops_nn_layer_norm, mlr_pipeops_nn_leaky_relu, mlr_pipeops_nn_linear, mlr_pipeops_nn_log_sigmoid, mlr_pipeops_nn_max_pool1d, mlr_pipeops_nn_max_pool2d, mlr_pipeops_nn_max_pool3d, mlr_pipeops_nn_merge, mlr_pipeops_nn_merge_cat, mlr_pipeops_nn_merge_prod, mlr_pipeops_nn_merge_sum, mlr_pipeops_nn_prelu, mlr_pipeops_nn_reglu, mlr_pipeops_nn_relu, mlr_pipeops_nn_relu6, mlr_pipeops_nn_reshape, mlr_pipeops_nn_selu, mlr_pipeops_nn_sigmoid, mlr_pipeops_nn_softmax, mlr_pipeops_nn_softplus, mlr_pipeops_nn_softshrink, mlr_pipeops_nn_softsign, mlr_pipeops_nn_squeeze, mlr_pipeops_nn_tanh, mlr_pipeops_nn_tanhshrink, mlr_pipeops_nn_threshold, mlr_pipeops_nn_tokenizer_categ, mlr_pipeops_nn_tokenizer_num, mlr_pipeops_nn_unsqueeze, mlr_pipeops_torch_ingress, mlr_pipeops_torch_ingress_categ, mlr_pipeops_torch_ingress_ltnsr, mlr_pipeops_torch_ingress_num, mlr_pipeops_torch_loss, mlr_pipeops_torch_model, mlr_pipeops_torch_model_classif, mlr_pipeops_torch_model_regr

Examples


# Construct the PipeOp
pipeop = po("nn_rrelu")
pipeop
# The available parameters
pipeop$param_set


SELU Activation Function

Description

Applies element-wise

SELU(x) = scale * (max(0, x) + min(0, alpha * (exp(x) - 1)))

with alpha = 1.6732632423543772848170429916717 and scale = 1.0507009873554804934193349852946.
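
A small numeric sketch with torch: positive inputs are scaled by scale, large negative inputs saturate towards -scale * alpha:

act = torch::nn_selu()
act(torch::torch_tensor(c(-10, 0, 1)))  # approx. -1.758, 0.0, 1.0507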

nn_module

Calls torch::nn_selu() when trained.

Parameters

Input and Output Channels

One input channel called "input" and one output channel called "output". For an explanation see PipeOpTorch.

State

The state is the value calculated by the public method ⁠$shapes_out()⁠.

Super classes

mlr3pipelines::PipeOp -> mlr3torch::PipeOpTorch -> PipeOpTorchSELU

Methods

Public methods

Inherited methods

Method new()

Creates a new instance of this R6 class.

Usage
PipeOpTorchSELU$new(id = "nn_selu", param_vals = list())
Arguments
id

(character(1))
Identifier of the resulting object.

param_vals

(list())
List of hyperparameter settings, overwriting the hyperparameter settings that would otherwise be set during construction.


Method clone()

The objects of this class are cloneable with this method.

Usage
PipeOpTorchSELU$clone(deep = FALSE)
Arguments
deep

Whether to make a deep clone.

See Also

Other PipeOps: mlr_pipeops_nn_adaptive_avg_pool1d, mlr_pipeops_nn_adaptive_avg_pool2d, mlr_pipeops_nn_adaptive_avg_pool3d, mlr_pipeops_nn_avg_pool1d, mlr_pipeops_nn_avg_pool2d, mlr_pipeops_nn_avg_pool3d, mlr_pipeops_nn_batch_norm1d, mlr_pipeops_nn_batch_norm2d, mlr_pipeops_nn_batch_norm3d, mlr_pipeops_nn_block, mlr_pipeops_nn_celu, mlr_pipeops_nn_conv1d, mlr_pipeops_nn_conv2d, mlr_pipeops_nn_conv3d, mlr_pipeops_nn_conv_transpose1d, mlr_pipeops_nn_conv_transpose2d, mlr_pipeops_nn_conv_transpose3d, mlr_pipeops_nn_dropout, mlr_pipeops_nn_elu, mlr_pipeops_nn_flatten, mlr_pipeops_nn_ft_cls, mlr_pipeops_nn_ft_transformer_block, mlr_pipeops_nn_geglu, mlr_pipeops_nn_gelu, mlr_pipeops_nn_glu, mlr_pipeops_nn_hardshrink, mlr_pipeops_nn_hardsigmoid, mlr_pipeops_nn_hardtanh, mlr_pipeops_nn_head, mlr_pipeops_nn_identity, mlr_pipeops_nn_layer_norm, mlr_pipeops_nn_leaky_relu, mlr_pipeops_nn_linear, mlr_pipeops_nn_log_sigmoid, mlr_pipeops_nn_max_pool1d, mlr_pipeops_nn_max_pool2d, mlr_pipeops_nn_max_pool3d, mlr_pipeops_nn_merge, mlr_pipeops_nn_merge_cat, mlr_pipeops_nn_merge_prod, mlr_pipeops_nn_merge_sum, mlr_pipeops_nn_prelu, mlr_pipeops_nn_reglu, mlr_pipeops_nn_relu, mlr_pipeops_nn_relu6, mlr_pipeops_nn_reshape, mlr_pipeops_nn_rrelu, mlr_pipeops_nn_sigmoid, mlr_pipeops_nn_softmax, mlr_pipeops_nn_softplus, mlr_pipeops_nn_softshrink, mlr_pipeops_nn_softsign, mlr_pipeops_nn_squeeze, mlr_pipeops_nn_tanh, mlr_pipeops_nn_tanhshrink, mlr_pipeops_nn_threshold, mlr_pipeops_nn_tokenizer_categ, mlr_pipeops_nn_tokenizer_num, mlr_pipeops_nn_unsqueeze, mlr_pipeops_torch_ingress, mlr_pipeops_torch_ingress_categ, mlr_pipeops_torch_ingress_ltnsr, mlr_pipeops_torch_ingress_num, mlr_pipeops_torch_loss, mlr_pipeops_torch_model, mlr_pipeops_torch_model_classif, mlr_pipeops_torch_model_regr

Examples


# Construct the PipeOp
pipeop = po("nn_selu")
pipeop
# The available parameters
pipeop$param_set


Sigmoid Activation Function

Description

Applies element-wise the function Sigmoid(x_i) = 1 / (1 + exp(-x_i)).

nn_module

Calls torch::nn_sigmoid() when trained.

Parameters

No parameters.

Input and Output Channels

One input channel called "input" and one output channel called "output". For an explanation see PipeOpTorch.

State

The state is the value calculated by the public method ⁠$shapes_out()⁠.

Super classes

mlr3pipelines::PipeOp -> mlr3torch::PipeOpTorch -> PipeOpTorchSigmoid

Methods

Public methods

Inherited methods

Method new()

Creates a new instance of this R6 class.

Usage
PipeOpTorchSigmoid$new(id = "nn_sigmoid", param_vals = list())
Arguments
id

(character(1))
Identifier of the resulting object.

param_vals

(list())
List of hyperparameter settings, overwriting the hyperparameter settings that would otherwise be set during construction.


Method clone()

The objects of this class are cloneable with this method.

Usage
PipeOpTorchSigmoid$clone(deep = FALSE)
Arguments
deep

Whether to make a deep clone.

See Also

Other PipeOps: mlr_pipeops_nn_adaptive_avg_pool1d, mlr_pipeops_nn_adaptive_avg_pool2d, mlr_pipeops_nn_adaptive_avg_pool3d, mlr_pipeops_nn_avg_pool1d, mlr_pipeops_nn_avg_pool2d, mlr_pipeops_nn_avg_pool3d, mlr_pipeops_nn_batch_norm1d, mlr_pipeops_nn_batch_norm2d, mlr_pipeops_nn_batch_norm3d, mlr_pipeops_nn_block, mlr_pipeops_nn_celu, mlr_pipeops_nn_conv1d, mlr_pipeops_nn_conv2d, mlr_pipeops_nn_conv3d, mlr_pipeops_nn_conv_transpose1d, mlr_pipeops_nn_conv_transpose2d, mlr_pipeops_nn_conv_transpose3d, mlr_pipeops_nn_dropout, mlr_pipeops_nn_elu, mlr_pipeops_nn_flatten, mlr_pipeops_nn_ft_cls, mlr_pipeops_nn_ft_transformer_block, mlr_pipeops_nn_geglu, mlr_pipeops_nn_gelu, mlr_pipeops_nn_glu, mlr_pipeops_nn_hardshrink, mlr_pipeops_nn_hardsigmoid, mlr_pipeops_nn_hardtanh, mlr_pipeops_nn_head, mlr_pipeops_nn_identity, mlr_pipeops_nn_layer_norm, mlr_pipeops_nn_leaky_relu, mlr_pipeops_nn_linear, mlr_pipeops_nn_log_sigmoid, mlr_pipeops_nn_max_pool1d, mlr_pipeops_nn_max_pool2d, mlr_pipeops_nn_max_pool3d, mlr_pipeops_nn_merge, mlr_pipeops_nn_merge_cat, mlr_pipeops_nn_merge_prod, mlr_pipeops_nn_merge_sum, mlr_pipeops_nn_prelu, mlr_pipeops_nn_reglu, mlr_pipeops_nn_relu, mlr_pipeops_nn_relu6, mlr_pipeops_nn_reshape, mlr_pipeops_nn_rrelu, mlr_pipeops_nn_selu, mlr_pipeops_nn_softmax, mlr_pipeops_nn_softplus, mlr_pipeops_nn_softshrink, mlr_pipeops_nn_softsign, mlr_pipeops_nn_squeeze, mlr_pipeops_nn_tanh, mlr_pipeops_nn_tanhshrink, mlr_pipeops_nn_threshold, mlr_pipeops_nn_tokenizer_categ, mlr_pipeops_nn_tokenizer_num, mlr_pipeops_nn_unsqueeze, mlr_pipeops_torch_ingress, mlr_pipeops_torch_ingress_categ, mlr_pipeops_torch_ingress_ltnsr, mlr_pipeops_torch_ingress_num, mlr_pipeops_torch_loss, mlr_pipeops_torch_model, mlr_pipeops_torch_model_classif, mlr_pipeops_torch_model_regr

Examples


# Construct the PipeOp
pipeop = po("nn_sigmoid")
pipeop
# The available parameters
pipeop$param_set


Softmax

Description

Applies a softmax function.

nn_module

Calls torch::nn_softmax() when trained.

Parameters

Input and Output Channels

One input channel called "input" and one output channel called "output". For an explanation see PipeOpTorch.

State

The state is the value calculated by the public method ⁠$shapes_out()⁠.

Super classes

mlr3pipelines::PipeOp -> mlr3torch::PipeOpTorch -> PipeOpTorchSoftmax

Methods

Public methods

Inherited methods

Method new()

Creates a new instance of this R6 class.

Usage
PipeOpTorchSoftmax$new(id = "nn_softmax", param_vals = list())
Arguments
id

(character(1))
Identifier of the resulting object.

param_vals

(list())
List of hyperparameter settings, overwriting the hyperparameter settings that would otherwise be set during construction.


Method clone()

The objects of this class are cloneable with this method.

Usage
PipeOpTorchSoftmax$clone(deep = FALSE)
Arguments
deep

Whether to make a deep clone.

See Also

Other PipeOps: mlr_pipeops_nn_adaptive_avg_pool1d, mlr_pipeops_nn_adaptive_avg_pool2d, mlr_pipeops_nn_adaptive_avg_pool3d, mlr_pipeops_nn_avg_pool1d, mlr_pipeops_nn_avg_pool2d, mlr_pipeops_nn_avg_pool3d, mlr_pipeops_nn_batch_norm1d, mlr_pipeops_nn_batch_norm2d, mlr_pipeops_nn_batch_norm3d, mlr_pipeops_nn_block, mlr_pipeops_nn_celu, mlr_pipeops_nn_conv1d, mlr_pipeops_nn_conv2d, mlr_pipeops_nn_conv3d, mlr_pipeops_nn_conv_transpose1d, mlr_pipeops_nn_conv_transpose2d, mlr_pipeops_nn_conv_transpose3d, mlr_pipeops_nn_dropout, mlr_pipeops_nn_elu, mlr_pipeops_nn_flatten, mlr_pipeops_nn_ft_cls, mlr_pipeops_nn_ft_transformer_block, mlr_pipeops_nn_geglu, mlr_pipeops_nn_gelu, mlr_pipeops_nn_glu, mlr_pipeops_nn_hardshrink, mlr_pipeops_nn_hardsigmoid, mlr_pipeops_nn_hardtanh, mlr_pipeops_nn_head, mlr_pipeops_nn_identity, mlr_pipeops_nn_layer_norm, mlr_pipeops_nn_leaky_relu, mlr_pipeops_nn_linear, mlr_pipeops_nn_log_sigmoid, mlr_pipeops_nn_max_pool1d, mlr_pipeops_nn_max_pool2d, mlr_pipeops_nn_max_pool3d, mlr_pipeops_nn_merge, mlr_pipeops_nn_merge_cat, mlr_pipeops_nn_merge_prod, mlr_pipeops_nn_merge_sum, mlr_pipeops_nn_prelu, mlr_pipeops_nn_reglu, mlr_pipeops_nn_relu, mlr_pipeops_nn_relu6, mlr_pipeops_nn_reshape, mlr_pipeops_nn_rrelu, mlr_pipeops_nn_selu, mlr_pipeops_nn_sigmoid, mlr_pipeops_nn_softplus, mlr_pipeops_nn_softshrink, mlr_pipeops_nn_softsign, mlr_pipeops_nn_squeeze, mlr_pipeops_nn_tanh, mlr_pipeops_nn_tanhshrink, mlr_pipeops_nn_threshold, mlr_pipeops_nn_tokenizer_categ, mlr_pipeops_nn_tokenizer_num, mlr_pipeops_nn_unsqueeze, mlr_pipeops_torch_ingress, mlr_pipeops_torch_ingress_categ, mlr_pipeops_torch_ingress_ltnsr, mlr_pipeops_torch_ingress_num, mlr_pipeops_torch_loss, mlr_pipeops_torch_model, mlr_pipeops_torch_model_classif, mlr_pipeops_torch_model_regr

Examples


# Construct the PipeOp
pipeop = po("nn_softmax")
pipeop
# The available parameters
pipeop$param_set


SoftPlus Activation Function

Description

Applies element-wise the function Softplus(x) = (1 / beta) * log(1 + exp(beta * x)).
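
A quick sketch using torch directly (beta = 1 is the torch default and is assumed here):

act = torch::nn_softplus(beta = 1)
act(torch::torch_tensor(c(0, 2)))  # approx. log(2) = 0.6931 and 2.1269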

nn_module

Calls torch::nn_softplus() when trained.

Parameters

Input and Output Channels

One input channel called "input" and one output channel called "output". For an explanation see PipeOpTorch.

State

The state is the value calculated by the public method ⁠$shapes_out()⁠.

Super classes

mlr3pipelines::PipeOp -> mlr3torch::PipeOpTorch -> PipeOpTorchSoftPlus

Methods

Public methods

Inherited methods

Method new()

Creates a new instance of this R6 class.

Usage
PipeOpTorchSoftPlus$new(id = "nn_softplus", param_vals = list())
Arguments
id

(character(1))
Identifier of the resulting object.

param_vals

(list())
List of hyperparameter settings, overwriting the hyperparameter settings that would otherwise be set during construction.


Method clone()

The objects of this class are cloneable with this method.

Usage
PipeOpTorchSoftPlus$clone(deep = FALSE)
Arguments
deep

Whether to make a deep clone.

See Also

Other PipeOps: mlr_pipeops_nn_adaptive_avg_pool1d, mlr_pipeops_nn_adaptive_avg_pool2d, mlr_pipeops_nn_adaptive_avg_pool3d, mlr_pipeops_nn_avg_pool1d, mlr_pipeops_nn_avg_pool2d, mlr_pipeops_nn_avg_pool3d, mlr_pipeops_nn_batch_norm1d, mlr_pipeops_nn_batch_norm2d, mlr_pipeops_nn_batch_norm3d, mlr_pipeops_nn_block, mlr_pipeops_nn_celu, mlr_pipeops_nn_conv1d, mlr_pipeops_nn_conv2d, mlr_pipeops_nn_conv3d, mlr_pipeops_nn_conv_transpose1d, mlr_pipeops_nn_conv_transpose2d, mlr_pipeops_nn_conv_transpose3d, mlr_pipeops_nn_dropout, mlr_pipeops_nn_elu, mlr_pipeops_nn_flatten, mlr_pipeops_nn_ft_cls, mlr_pipeops_nn_ft_transformer_block, mlr_pipeops_nn_geglu, mlr_pipeops_nn_gelu, mlr_pipeops_nn_glu, mlr_pipeops_nn_hardshrink, mlr_pipeops_nn_hardsigmoid, mlr_pipeops_nn_hardtanh, mlr_pipeops_nn_head, mlr_pipeops_nn_identity, mlr_pipeops_nn_layer_norm, mlr_pipeops_nn_leaky_relu, mlr_pipeops_nn_linear, mlr_pipeops_nn_log_sigmoid, mlr_pipeops_nn_max_pool1d, mlr_pipeops_nn_max_pool2d, mlr_pipeops_nn_max_pool3d, mlr_pipeops_nn_merge, mlr_pipeops_nn_merge_cat, mlr_pipeops_nn_merge_prod, mlr_pipeops_nn_merge_sum, mlr_pipeops_nn_prelu, mlr_pipeops_nn_reglu, mlr_pipeops_nn_relu, mlr_pipeops_nn_relu6, mlr_pipeops_nn_reshape, mlr_pipeops_nn_rrelu, mlr_pipeops_nn_selu, mlr_pipeops_nn_sigmoid, mlr_pipeops_nn_softmax, mlr_pipeops_nn_softshrink, mlr_pipeops_nn_softsign, mlr_pipeops_nn_squeeze, mlr_pipeops_nn_tanh, mlr_pipeops_nn_tanhshrink, mlr_pipeops_nn_threshold, mlr_pipeops_nn_tokenizer_categ, mlr_pipeops_nn_tokenizer_num, mlr_pipeops_nn_unsqueeze, mlr_pipeops_torch_ingress, mlr_pipeops_torch_ingress_categ, mlr_pipeops_torch_ingress_ltnsr, mlr_pipeops_torch_ingress_num, mlr_pipeops_torch_loss, mlr_pipeops_torch_model, mlr_pipeops_torch_model_classif, mlr_pipeops_torch_model_regr

Examples


# Construct the PipeOp
pipeop = po("nn_softplus")
pipeop
# The available parameters
pipeop$param_set


Soft Shrink Activation Function

Description

Applies the soft shrinkage function element-wise.

nn_module

Calls torch::nn_softshrink() when trained.
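
A sketch of the shrinkage behaviour using torch directly (the parameter name lambd and its default of 0.5 are taken from torch::nn_softshrink()):

act = torch::nn_softshrink(lambd = 0.5)
# values within [-lambd, lambd] become 0, others are shrunk towards 0 by lambd
act(torch::torch_tensor(c(-1, 0.3, 2)))  # -0.5, 0.0, 1.5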

Parameters

Input and Output Channels

One input channel called "input" and one output channel called "output". For an explanation see PipeOpTorch.

State

The state is the value calculated by the public method ⁠$shapes_out()⁠.

Super classes

mlr3pipelines::PipeOp -> mlr3torch::PipeOpTorch -> PipeOpTorchSoftShrink

Methods

Public methods

Inherited methods

Method new()

Creates a new instance of this R6 class.

Usage
PipeOpTorchSoftShrink$new(id = "nn_softshrink", param_vals = list())
Arguments
id

(character(1))
Identifier of the resulting object.

param_vals

(list())
List of hyperparameter settings, overwriting the hyperparameter settings that would otherwise be set during construction.


Method clone()

The objects of this class are cloneable with this method.

Usage
PipeOpTorchSoftShrink$clone(deep = FALSE)
Arguments
deep

Whether to make a deep clone.

See Also

Other PipeOps: mlr_pipeops_nn_adaptive_avg_pool1d, mlr_pipeops_nn_adaptive_avg_pool2d, mlr_pipeops_nn_adaptive_avg_pool3d, mlr_pipeops_nn_avg_pool1d, mlr_pipeops_nn_avg_pool2d, mlr_pipeops_nn_avg_pool3d, mlr_pipeops_nn_batch_norm1d, mlr_pipeops_nn_batch_norm2d, mlr_pipeops_nn_batch_norm3d, mlr_pipeops_nn_block, mlr_pipeops_nn_celu, mlr_pipeops_nn_conv1d, mlr_pipeops_nn_conv2d, mlr_pipeops_nn_conv3d, mlr_pipeops_nn_conv_transpose1d, mlr_pipeops_nn_conv_transpose2d, mlr_pipeops_nn_conv_transpose3d, mlr_pipeops_nn_dropout, mlr_pipeops_nn_elu, mlr_pipeops_nn_flatten, mlr_pipeops_nn_ft_cls, mlr_pipeops_nn_ft_transformer_block, mlr_pipeops_nn_geglu, mlr_pipeops_nn_gelu, mlr_pipeops_nn_glu, mlr_pipeops_nn_hardshrink, mlr_pipeops_nn_hardsigmoid, mlr_pipeops_nn_hardtanh, mlr_pipeops_nn_head, mlr_pipeops_nn_identity, mlr_pipeops_nn_layer_norm, mlr_pipeops_nn_leaky_relu, mlr_pipeops_nn_linear, mlr_pipeops_nn_log_sigmoid, mlr_pipeops_nn_max_pool1d, mlr_pipeops_nn_max_pool2d, mlr_pipeops_nn_max_pool3d, mlr_pipeops_nn_merge, mlr_pipeops_nn_merge_cat, mlr_pipeops_nn_merge_prod, mlr_pipeops_nn_merge_sum, mlr_pipeops_nn_prelu, mlr_pipeops_nn_reglu, mlr_pipeops_nn_relu, mlr_pipeops_nn_relu6, mlr_pipeops_nn_reshape, mlr_pipeops_nn_rrelu, mlr_pipeops_nn_selu, mlr_pipeops_nn_sigmoid, mlr_pipeops_nn_softmax, mlr_pipeops_nn_softplus, mlr_pipeops_nn_softsign, mlr_pipeops_nn_squeeze, mlr_pipeops_nn_tanh, mlr_pipeops_nn_tanhshrink, mlr_pipeops_nn_threshold, mlr_pipeops_nn_tokenizer_categ, mlr_pipeops_nn_tokenizer_num, mlr_pipeops_nn_unsqueeze, mlr_pipeops_torch_ingress, mlr_pipeops_torch_ingress_categ, mlr_pipeops_torch_ingress_ltnsr, mlr_pipeops_torch_ingress_num, mlr_pipeops_torch_loss, mlr_pipeops_torch_model, mlr_pipeops_torch_model_classif, mlr_pipeops_torch_model_regr

Examples


# Construct the PipeOp
pipeop = po("nn_softshrink")
pipeop
# The available parameters
pipeop$param_set


SoftSign Activation Function

Description

Applies element-wise the function SoftSign(x) = x / (1 + |x|).

nn_module

Calls torch::nn_softsign() when trained.

Parameters

No parameters.

Input and Output Channels

One input channel called "input" and one output channel called "output". For an explanation see PipeOpTorch.

State

The state is the value calculated by the public method ⁠$shapes_out()⁠.

Super classes

mlr3pipelines::PipeOp -> mlr3torch::PipeOpTorch -> PipeOpTorchSoftSign

Methods

Public methods

Inherited methods

Method new()

Creates a new instance of this R6 class.

Usage
PipeOpTorchSoftSign$new(id = "nn_softsign", param_vals = list())
Arguments
id

(character(1))
Identifier of the resulting object.

param_vals

(list())
List of hyperparameter settings, overwriting the hyperparameter settings that would otherwise be set during construction.


Method clone()

The objects of this class are cloneable with this method.

Usage
PipeOpTorchSoftSign$clone(deep = FALSE)
Arguments
deep

Whether to make a deep clone.

See Also

Other PipeOps: mlr_pipeops_nn_adaptive_avg_pool1d, mlr_pipeops_nn_adaptive_avg_pool2d, mlr_pipeops_nn_adaptive_avg_pool3d, mlr_pipeops_nn_avg_pool1d, mlr_pipeops_nn_avg_pool2d, mlr_pipeops_nn_avg_pool3d, mlr_pipeops_nn_batch_norm1d, mlr_pipeops_nn_batch_norm2d, mlr_pipeops_nn_batch_norm3d, mlr_pipeops_nn_block, mlr_pipeops_nn_celu, mlr_pipeops_nn_conv1d, mlr_pipeops_nn_conv2d, mlr_pipeops_nn_conv3d, mlr_pipeops_nn_conv_transpose1d, mlr_pipeops_nn_conv_transpose2d, mlr_pipeops_nn_conv_transpose3d, mlr_pipeops_nn_dropout, mlr_pipeops_nn_elu, mlr_pipeops_nn_flatten, mlr_pipeops_nn_ft_cls, mlr_pipeops_nn_ft_transformer_block, mlr_pipeops_nn_geglu, mlr_pipeops_nn_gelu, mlr_pipeops_nn_glu, mlr_pipeops_nn_hardshrink, mlr_pipeops_nn_hardsigmoid, mlr_pipeops_nn_hardtanh, mlr_pipeops_nn_head, mlr_pipeops_nn_identity, mlr_pipeops_nn_layer_norm, mlr_pipeops_nn_leaky_relu, mlr_pipeops_nn_linear, mlr_pipeops_nn_log_sigmoid, mlr_pipeops_nn_max_pool1d, mlr_pipeops_nn_max_pool2d, mlr_pipeops_nn_max_pool3d, mlr_pipeops_nn_merge, mlr_pipeops_nn_merge_cat, mlr_pipeops_nn_merge_prod, mlr_pipeops_nn_merge_sum, mlr_pipeops_nn_prelu, mlr_pipeops_nn_reglu, mlr_pipeops_nn_relu, mlr_pipeops_nn_relu6, mlr_pipeops_nn_reshape, mlr_pipeops_nn_rrelu, mlr_pipeops_nn_selu, mlr_pipeops_nn_sigmoid, mlr_pipeops_nn_softmax, mlr_pipeops_nn_softplus, mlr_pipeops_nn_softshrink, mlr_pipeops_nn_squeeze, mlr_pipeops_nn_tanh, mlr_pipeops_nn_tanhshrink, mlr_pipeops_nn_threshold, mlr_pipeops_nn_tokenizer_categ, mlr_pipeops_nn_tokenizer_num, mlr_pipeops_nn_unsqueeze, mlr_pipeops_torch_ingress, mlr_pipeops_torch_ingress_categ, mlr_pipeops_torch_ingress_ltnsr, mlr_pipeops_torch_ingress_num, mlr_pipeops_torch_loss, mlr_pipeops_torch_model, mlr_pipeops_torch_model_classif, mlr_pipeops_torch_model_regr

Examples


# Construct the PipeOp
pipeop = po("nn_softsign")
pipeop
# The available parameters
pipeop$param_set


Squeeze a Tensor

Description

Squeezes a tensor by calling torch::torch_squeeze() with the given dimension dim.
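
For reference, a minimal sketch of the underlying torch operation:

x = torch::torch_randn(2, 1, 3)
# dropping the size-1 dimension at position 2 yields shape (2, 3)
torch::torch_squeeze(x, dim = 2)$shape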

nn_module

Calls nn_squeeze() when trained.

Parameters

Input and Output Channels

One input channel called "input" and one output channel called "output". For an explanation see PipeOpTorch.

State

The state is the value calculated by the public method ⁠$shapes_out()⁠.

Super classes

mlr3pipelines::PipeOp -> mlr3torch::PipeOpTorch -> PipeOpTorchSqueeze

Methods

Public methods

Inherited methods

Method new()

Creates a new instance of this R6 class.

Usage
PipeOpTorchSqueeze$new(id = "nn_squeeze", param_vals = list())
Arguments
id

(character(1))
Identifier of the resulting object.

param_vals

(list())
List of hyperparameter settings, overwriting the hyperparameter settings that would otherwise be set during construction.


Method clone()

The objects of this class are cloneable with this method.

Usage
PipeOpTorchSqueeze$clone(deep = FALSE)
Arguments
deep

Whether to make a deep clone.

See Also

Other PipeOps: mlr_pipeops_nn_adaptive_avg_pool1d, mlr_pipeops_nn_adaptive_avg_pool2d, mlr_pipeops_nn_adaptive_avg_pool3d, mlr_pipeops_nn_avg_pool1d, mlr_pipeops_nn_avg_pool2d, mlr_pipeops_nn_avg_pool3d, mlr_pipeops_nn_batch_norm1d, mlr_pipeops_nn_batch_norm2d, mlr_pipeops_nn_batch_norm3d, mlr_pipeops_nn_block, mlr_pipeops_nn_celu, mlr_pipeops_nn_conv1d, mlr_pipeops_nn_conv2d, mlr_pipeops_nn_conv3d, mlr_pipeops_nn_conv_transpose1d, mlr_pipeops_nn_conv_transpose2d, mlr_pipeops_nn_conv_transpose3d, mlr_pipeops_nn_dropout, mlr_pipeops_nn_elu, mlr_pipeops_nn_flatten, mlr_pipeops_nn_ft_cls, mlr_pipeops_nn_ft_transformer_block, mlr_pipeops_nn_geglu, mlr_pipeops_nn_gelu, mlr_pipeops_nn_glu, mlr_pipeops_nn_hardshrink, mlr_pipeops_nn_hardsigmoid, mlr_pipeops_nn_hardtanh, mlr_pipeops_nn_head, mlr_pipeops_nn_identity, mlr_pipeops_nn_layer_norm, mlr_pipeops_nn_leaky_relu, mlr_pipeops_nn_linear, mlr_pipeops_nn_log_sigmoid, mlr_pipeops_nn_max_pool1d, mlr_pipeops_nn_max_pool2d, mlr_pipeops_nn_max_pool3d, mlr_pipeops_nn_merge, mlr_pipeops_nn_merge_cat, mlr_pipeops_nn_merge_prod, mlr_pipeops_nn_merge_sum, mlr_pipeops_nn_prelu, mlr_pipeops_nn_reglu, mlr_pipeops_nn_relu, mlr_pipeops_nn_relu6, mlr_pipeops_nn_reshape, mlr_pipeops_nn_rrelu, mlr_pipeops_nn_selu, mlr_pipeops_nn_sigmoid, mlr_pipeops_nn_softmax, mlr_pipeops_nn_softplus, mlr_pipeops_nn_softshrink, mlr_pipeops_nn_softsign, mlr_pipeops_nn_tanh, mlr_pipeops_nn_tanhshrink, mlr_pipeops_nn_threshold, mlr_pipeops_nn_tokenizer_categ, mlr_pipeops_nn_tokenizer_num, mlr_pipeops_nn_unsqueeze, mlr_pipeops_torch_ingress, mlr_pipeops_torch_ingress_categ, mlr_pipeops_torch_ingress_ltnsr, mlr_pipeops_torch_ingress_num, mlr_pipeops_torch_loss, mlr_pipeops_torch_model, mlr_pipeops_torch_model_classif, mlr_pipeops_torch_model_regr

Examples


# Construct the PipeOp
pipeop = po("nn_squeeze")
pipeop
# The available parameters
pipeop$param_set


Tanh Activation Function

Description

Applies the element-wise function Tanh(x) = (exp(x) - exp(-x)) / (exp(x) + exp(-x)).

nn_module

Calls torch::nn_tanh() when trained.

Parameters

No parameters.

Input and Output Channels

One input channel called "input" and one output channel called "output". For an explanation see PipeOpTorch.

State

The state is the value calculated by the public method ⁠$shapes_out()⁠.

Super classes

mlr3pipelines::PipeOp -> mlr3torch::PipeOpTorch -> PipeOpTorchTanh

Methods

Public methods

Inherited methods

Method new()

Creates a new instance of this R6 class.

Usage
PipeOpTorchTanh$new(id = "nn_tanh", param_vals = list())
Arguments
id

(character(1))
Identifier of the resulting object.

param_vals

(list())
List of hyperparameter settings, overwriting the hyperparameter settings that would otherwise be set during construction.


Method clone()

The objects of this class are cloneable with this method.

Usage
PipeOpTorchTanh$clone(deep = FALSE)
Arguments
deep

Whether to make a deep clone.

See Also

Other PipeOps: mlr_pipeops_nn_adaptive_avg_pool1d, mlr_pipeops_nn_adaptive_avg_pool2d, mlr_pipeops_nn_adaptive_avg_pool3d, mlr_pipeops_nn_avg_pool1d, mlr_pipeops_nn_avg_pool2d, mlr_pipeops_nn_avg_pool3d, mlr_pipeops_nn_batch_norm1d, mlr_pipeops_nn_batch_norm2d, mlr_pipeops_nn_batch_norm3d, mlr_pipeops_nn_block, mlr_pipeops_nn_celu, mlr_pipeops_nn_conv1d, mlr_pipeops_nn_conv2d, mlr_pipeops_nn_conv3d, mlr_pipeops_nn_conv_transpose1d, mlr_pipeops_nn_conv_transpose2d, mlr_pipeops_nn_conv_transpose3d, mlr_pipeops_nn_dropout, mlr_pipeops_nn_elu, mlr_pipeops_nn_flatten, mlr_pipeops_nn_ft_cls, mlr_pipeops_nn_ft_transformer_block, mlr_pipeops_nn_geglu, mlr_pipeops_nn_gelu, mlr_pipeops_nn_glu, mlr_pipeops_nn_hardshrink, mlr_pipeops_nn_hardsigmoid, mlr_pipeops_nn_hardtanh, mlr_pipeops_nn_head, mlr_pipeops_nn_identity, mlr_pipeops_nn_layer_norm, mlr_pipeops_nn_leaky_relu, mlr_pipeops_nn_linear, mlr_pipeops_nn_log_sigmoid, mlr_pipeops_nn_max_pool1d, mlr_pipeops_nn_max_pool2d, mlr_pipeops_nn_max_pool3d, mlr_pipeops_nn_merge, mlr_pipeops_nn_merge_cat, mlr_pipeops_nn_merge_prod, mlr_pipeops_nn_merge_sum, mlr_pipeops_nn_prelu, mlr_pipeops_nn_reglu, mlr_pipeops_nn_relu, mlr_pipeops_nn_relu6, mlr_pipeops_nn_reshape, mlr_pipeops_nn_rrelu, mlr_pipeops_nn_selu, mlr_pipeops_nn_sigmoid, mlr_pipeops_nn_softmax, mlr_pipeops_nn_softplus, mlr_pipeops_nn_softshrink, mlr_pipeops_nn_softsign, mlr_pipeops_nn_squeeze, mlr_pipeops_nn_tanhshrink, mlr_pipeops_nn_threshold, mlr_pipeops_nn_tokenizer_categ, mlr_pipeops_nn_tokenizer_num, mlr_pipeops_nn_unsqueeze, mlr_pipeops_torch_ingress, mlr_pipeops_torch_ingress_categ, mlr_pipeops_torch_ingress_ltnsr, mlr_pipeops_torch_ingress_num, mlr_pipeops_torch_loss, mlr_pipeops_torch_model, mlr_pipeops_torch_model_classif, mlr_pipeops_torch_model_regr

Examples


# Construct the PipeOp
pipeop = po("nn_tanh")
pipeop
# The available parameters
pipeop$param_set


Tanh Shrink Activation Function

Description

Applies element-wise the function Tanhshrink(x) = x - Tanh(x).
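
A tiny numeric sketch with torch:

act = torch::nn_tanhshrink()
act(torch::torch_tensor(c(0, 1)))  # 0.0 and 1 - tanh(1), approx. 0.2384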

nn_module

Calls torch::nn_tanhshrink() when trained.

Parameters

No parameters.

Input and Output Channels

One input channel called "input" and one output channel called "output". For an explanation see PipeOpTorch.

State

The state is the value calculated by the public method ⁠$shapes_out()⁠.

Super classes

mlr3pipelines::PipeOp -> mlr3torch::PipeOpTorch -> PipeOpTorchTanhShrink

Methods

Public methods

Inherited methods

Method new()

Creates a new instance of this R6 class.

Usage
PipeOpTorchTanhShrink$new(id = "nn_tanhshrink", param_vals = list())
Arguments
id

(character(1))
Identifier of the resulting object.

param_vals

(list())
List of hyperparameter settings, overwriting the hyperparameter settings that would otherwise be set during construction.


Method clone()

The objects of this class are cloneable with this method.

Usage
PipeOpTorchTanhShrink$clone(deep = FALSE)
Arguments
deep

Whether to make a deep clone.

See Also

Other PipeOps: mlr_pipeops_nn_adaptive_avg_pool1d, mlr_pipeops_nn_adaptive_avg_pool2d, mlr_pipeops_nn_adaptive_avg_pool3d, mlr_pipeops_nn_avg_pool1d, mlr_pipeops_nn_avg_pool2d, mlr_pipeops_nn_avg_pool3d, mlr_pipeops_nn_batch_norm1d, mlr_pipeops_nn_batch_norm2d, mlr_pipeops_nn_batch_norm3d, mlr_pipeops_nn_block, mlr_pipeops_nn_celu, mlr_pipeops_nn_conv1d, mlr_pipeops_nn_conv2d, mlr_pipeops_nn_conv3d, mlr_pipeops_nn_conv_transpose1d, mlr_pipeops_nn_conv_transpose2d, mlr_pipeops_nn_conv_transpose3d, mlr_pipeops_nn_dropout, mlr_pipeops_nn_elu, mlr_pipeops_nn_flatten, mlr_pipeops_nn_ft_cls, mlr_pipeops_nn_ft_transformer_block, mlr_pipeops_nn_geglu, mlr_pipeops_nn_gelu, mlr_pipeops_nn_glu, mlr_pipeops_nn_hardshrink, mlr_pipeops_nn_hardsigmoid, mlr_pipeops_nn_hardtanh, mlr_pipeops_nn_head, mlr_pipeops_nn_identity, mlr_pipeops_nn_layer_norm, mlr_pipeops_nn_leaky_relu, mlr_pipeops_nn_linear, mlr_pipeops_nn_log_sigmoid, mlr_pipeops_nn_max_pool1d, mlr_pipeops_nn_max_pool2d, mlr_pipeops_nn_max_pool3d, mlr_pipeops_nn_merge, mlr_pipeops_nn_merge_cat, mlr_pipeops_nn_merge_prod, mlr_pipeops_nn_merge_sum, mlr_pipeops_nn_prelu, mlr_pipeops_nn_reglu, mlr_pipeops_nn_relu, mlr_pipeops_nn_relu6, mlr_pipeops_nn_reshape, mlr_pipeops_nn_rrelu, mlr_pipeops_nn_selu, mlr_pipeops_nn_sigmoid, mlr_pipeops_nn_softmax, mlr_pipeops_nn_softplus, mlr_pipeops_nn_softshrink, mlr_pipeops_nn_softsign, mlr_pipeops_nn_squeeze, mlr_pipeops_nn_tanh, mlr_pipeops_nn_threshold, mlr_pipeops_nn_tokenizer_categ, mlr_pipeops_nn_tokenizer_num, mlr_pipeops_nn_unsqueeze, mlr_pipeops_torch_ingress, mlr_pipeops_torch_ingress_categ, mlr_pipeops_torch_ingress_ltnsr, mlr_pipeops_torch_ingress_num, mlr_pipeops_torch_loss, mlr_pipeops_torch_model, mlr_pipeops_torch_model_classif, mlr_pipeops_torch_model_regr

Examples


# Construct the PipeOp
pipeop = po("nn_tanhshrink")
pipeop
# The available parameters
pipeop$param_set


Threshold Activation Function

Description

Thresholds each element of the input Tensor.

nn_module

Calls torch::nn_threshold() when trained.
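
A sketch of the thresholding behaviour using torch directly (the arguments threshold and value are taken from torch::nn_threshold(); the example section below uses the same settings for the PipeOp):

# elements not exceeding the threshold are replaced by value, others pass through
act = torch::nn_threshold(threshold = 1, value = 2)
act(torch::torch_tensor(c(0.5, 3)))  # 2.0, 3.0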

Parameters

Input and Output Channels

One input channel called "input" and one output channel called "output". For an explanation see PipeOpTorch.

State

The state is the value calculated by the public method ⁠$shapes_out()⁠.

Super classes

mlr3pipelines::PipeOp -> mlr3torch::PipeOpTorch -> PipeOpTorchThreshold

Methods

Public methods

Inherited methods

Method new()

Creates a new instance of this R6 class.

Usage
PipeOpTorchThreshold$new(id = "nn_threshold", param_vals = list())
Arguments
id

(character(1))
Identifier of the resulting object.

param_vals

(list())
List of hyperparameter settings, overwriting the hyperparameter settings that would otherwise be set during construction.


Method clone()

The objects of this class are cloneable with this method.

Usage
PipeOpTorchThreshold$clone(deep = FALSE)
Arguments
deep

Whether to make a deep clone.

See Also

Other PipeOps: mlr_pipeops_nn_adaptive_avg_pool1d, mlr_pipeops_nn_adaptive_avg_pool2d, mlr_pipeops_nn_adaptive_avg_pool3d, mlr_pipeops_nn_avg_pool1d, mlr_pipeops_nn_avg_pool2d, mlr_pipeops_nn_avg_pool3d, mlr_pipeops_nn_batch_norm1d, mlr_pipeops_nn_batch_norm2d, mlr_pipeops_nn_batch_norm3d, mlr_pipeops_nn_block, mlr_pipeops_nn_celu, mlr_pipeops_nn_conv1d, mlr_pipeops_nn_conv2d, mlr_pipeops_nn_conv3d, mlr_pipeops_nn_conv_transpose1d, mlr_pipeops_nn_conv_transpose2d, mlr_pipeops_nn_conv_transpose3d, mlr_pipeops_nn_dropout, mlr_pipeops_nn_elu, mlr_pipeops_nn_flatten, mlr_pipeops_nn_ft_cls, mlr_pipeops_nn_ft_transformer_block, mlr_pipeops_nn_geglu, mlr_pipeops_nn_gelu, mlr_pipeops_nn_glu, mlr_pipeops_nn_hardshrink, mlr_pipeops_nn_hardsigmoid, mlr_pipeops_nn_hardtanh, mlr_pipeops_nn_head, mlr_pipeops_nn_identity, mlr_pipeops_nn_layer_norm, mlr_pipeops_nn_leaky_relu, mlr_pipeops_nn_linear, mlr_pipeops_nn_log_sigmoid, mlr_pipeops_nn_max_pool1d, mlr_pipeops_nn_max_pool2d, mlr_pipeops_nn_max_pool3d, mlr_pipeops_nn_merge, mlr_pipeops_nn_merge_cat, mlr_pipeops_nn_merge_prod, mlr_pipeops_nn_merge_sum, mlr_pipeops_nn_prelu, mlr_pipeops_nn_reglu, mlr_pipeops_nn_relu, mlr_pipeops_nn_relu6, mlr_pipeops_nn_reshape, mlr_pipeops_nn_rrelu, mlr_pipeops_nn_selu, mlr_pipeops_nn_sigmoid, mlr_pipeops_nn_softmax, mlr_pipeops_nn_softplus, mlr_pipeops_nn_softshrink, mlr_pipeops_nn_softsign, mlr_pipeops_nn_squeeze, mlr_pipeops_nn_tanh, mlr_pipeops_nn_tanhshrink, mlr_pipeops_nn_tokenizer_categ, mlr_pipeops_nn_tokenizer_num, mlr_pipeops_nn_unsqueeze, mlr_pipeops_torch_ingress, mlr_pipeops_torch_ingress_categ, mlr_pipeops_torch_ingress_ltnsr, mlr_pipeops_torch_ingress_num, mlr_pipeops_torch_loss, mlr_pipeops_torch_model, mlr_pipeops_torch_model_classif, mlr_pipeops_torch_model_regr

Examples


# Construct the PipeOp
pipeop = po("nn_threshold", threshold = 1, value = 2)
pipeop
# The available parameters
pipeop$param_set


Categorical Tokenizer

Description

Tokenizes categorical features into a dense embedding. For an input of shape ⁠(batch, n_features)⁠ the output shape is ⁠(batch, n_features, d_token)⁠.

nn_module

Calls nn_tokenizer_categ() when trained where the parameter cardinalities is inferred. The output shape is ⁠(batch, n_features, d_token)⁠.

Parameters

Input and Output Channels

One input channel called "input" and one output channel called "output". For an explanation see PipeOpTorch.

State

The state is the value calculated by the public method ⁠$shapes_out()⁠.

Super classes

mlr3pipelines::PipeOp -> mlr3torch::PipeOpTorch -> PipeOpTorchTokenizerCateg

Methods

Public methods

Inherited methods

Method new()

Creates a new instance of this R6 class.

Usage
PipeOpTorchTokenizerCateg$new(id = "nn_tokenizer_categ", param_vals = list())
Arguments
id

(character(1))
Identifier of the resulting object.

param_vals

(list())
List of hyperparameter settings, overwriting the hyperparameter settings that would otherwise be set during construction.


Method clone()

The objects of this class are cloneable with this method.

Usage
PipeOpTorchTokenizerCateg$clone(deep = FALSE)
Arguments
deep

Whether to make a deep clone.

See Also

Other PipeOps: mlr_pipeops_nn_adaptive_avg_pool1d, mlr_pipeops_nn_adaptive_avg_pool2d, mlr_pipeops_nn_adaptive_avg_pool3d, mlr_pipeops_nn_avg_pool1d, mlr_pipeops_nn_avg_pool2d, mlr_pipeops_nn_avg_pool3d, mlr_pipeops_nn_batch_norm1d, mlr_pipeops_nn_batch_norm2d, mlr_pipeops_nn_batch_norm3d, mlr_pipeops_nn_block, mlr_pipeops_nn_celu, mlr_pipeops_nn_conv1d, mlr_pipeops_nn_conv2d, mlr_pipeops_nn_conv3d, mlr_pipeops_nn_conv_transpose1d, mlr_pipeops_nn_conv_transpose2d, mlr_pipeops_nn_conv_transpose3d, mlr_pipeops_nn_dropout, mlr_pipeops_nn_elu, mlr_pipeops_nn_flatten, mlr_pipeops_nn_ft_cls, mlr_pipeops_nn_ft_transformer_block, mlr_pipeops_nn_geglu, mlr_pipeops_nn_gelu, mlr_pipeops_nn_glu, mlr_pipeops_nn_hardshrink, mlr_pipeops_nn_hardsigmoid, mlr_pipeops_nn_hardtanh, mlr_pipeops_nn_head, mlr_pipeops_nn_identity, mlr_pipeops_nn_layer_norm, mlr_pipeops_nn_leaky_relu, mlr_pipeops_nn_linear, mlr_pipeops_nn_log_sigmoid, mlr_pipeops_nn_max_pool1d, mlr_pipeops_nn_max_pool2d, mlr_pipeops_nn_max_pool3d, mlr_pipeops_nn_merge, mlr_pipeops_nn_merge_cat, mlr_pipeops_nn_merge_prod, mlr_pipeops_nn_merge_sum, mlr_pipeops_nn_prelu, mlr_pipeops_nn_reglu, mlr_pipeops_nn_relu, mlr_pipeops_nn_relu6, mlr_pipeops_nn_reshape, mlr_pipeops_nn_rrelu, mlr_pipeops_nn_selu, mlr_pipeops_nn_sigmoid, mlr_pipeops_nn_softmax, mlr_pipeops_nn_softplus, mlr_pipeops_nn_softshrink, mlr_pipeops_nn_softsign, mlr_pipeops_nn_squeeze, mlr_pipeops_nn_tanh, mlr_pipeops_nn_tanhshrink, mlr_pipeops_nn_threshold, mlr_pipeops_nn_tokenizer_num, mlr_pipeops_nn_unsqueeze, mlr_pipeops_torch_ingress, mlr_pipeops_torch_ingress_categ, mlr_pipeops_torch_ingress_ltnsr, mlr_pipeops_torch_ingress_num, mlr_pipeops_torch_loss, mlr_pipeops_torch_model, mlr_pipeops_torch_model_classif, mlr_pipeops_torch_model_regr

Examples


# Construct the PipeOp
pipeop = po("nn_tokenizer_categ", d_token = 10)
pipeop
# The available parameters
pipeop$param_set


Numeric Tokenizer

Description

Tokenizes numeric features into a dense embedding. For an input of shape ⁠(batch, n_features)⁠ the output shape is ⁠(batch, n_features, d_token)⁠.

nn_module

Calls nn_tokenizer_num() when trained where the parameter n_features is inferred. The output shape is ⁠(batch, n_features, d_token)⁠.
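
As a rough sketch of the shape behaviour (the exact call convention of the $shapes_out() method is assumed here):

# three numeric features, each embedded into a vector of length 4:
# input shape (batch, 3) becomes output shape (batch, 3, 4)
po("nn_tokenizer_num", d_token = 4)$shapes_out(list(c(NA, 3)))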

Parameters

Input and Output Channels

One input channel called "input" and one output channel called "output". For an explanation see PipeOpTorch.

State

The state is the value calculated by the public method ⁠$shapes_out()⁠.

Super classes

mlr3pipelines::PipeOp -> mlr3torch::PipeOpTorch -> PipeOpTorchTokenizerNum

Methods

Public methods

Inherited methods

Method new()

Creates a new instance of this R6 class.

Usage
PipeOpTorchTokenizerNum$new(id = "nn_tokenizer_num", param_vals = list())
Arguments
id

(character(1))
Identifier of the resulting object.

param_vals

(list())
List of hyperparameter settings, overwriting the hyperparameter settings that would otherwise be set during construction.


Method clone()

The objects of this class are cloneable with this method.

Usage
PipeOpTorchTokenizerNum$clone(deep = FALSE)
Arguments
deep

Whether to make a deep clone.

See Also

Other PipeOps: mlr_pipeops_nn_adaptive_avg_pool1d, mlr_pipeops_nn_adaptive_avg_pool2d, mlr_pipeops_nn_adaptive_avg_pool3d, mlr_pipeops_nn_avg_pool1d, mlr_pipeops_nn_avg_pool2d, mlr_pipeops_nn_avg_pool3d, mlr_pipeops_nn_batch_norm1d, mlr_pipeops_nn_batch_norm2d, mlr_pipeops_nn_batch_norm3d, mlr_pipeops_nn_block, mlr_pipeops_nn_celu, mlr_pipeops_nn_conv1d, mlr_pipeops_nn_conv2d, mlr_pipeops_nn_conv3d, mlr_pipeops_nn_conv_transpose1d, mlr_pipeops_nn_conv_transpose2d, mlr_pipeops_nn_conv_transpose3d, mlr_pipeops_nn_dropout, mlr_pipeops_nn_elu, mlr_pipeops_nn_flatten, mlr_pipeops_nn_ft_cls, mlr_pipeops_nn_ft_transformer_block, mlr_pipeops_nn_geglu, mlr_pipeops_nn_gelu, mlr_pipeops_nn_glu, mlr_pipeops_nn_hardshrink, mlr_pipeops_nn_hardsigmoid, mlr_pipeops_nn_hardtanh, mlr_pipeops_nn_head, mlr_pipeops_nn_identity, mlr_pipeops_nn_layer_norm, mlr_pipeops_nn_leaky_relu, mlr_pipeops_nn_linear, mlr_pipeops_nn_log_sigmoid, mlr_pipeops_nn_max_pool1d, mlr_pipeops_nn_max_pool2d, mlr_pipeops_nn_max_pool3d, mlr_pipeops_nn_merge, mlr_pipeops_nn_merge_cat, mlr_pipeops_nn_merge_prod, mlr_pipeops_nn_merge_sum, mlr_pipeops_nn_prelu, mlr_pipeops_nn_reglu, mlr_pipeops_nn_relu, mlr_pipeops_nn_relu6, mlr_pipeops_nn_reshape, mlr_pipeops_nn_rrelu, mlr_pipeops_nn_selu, mlr_pipeops_nn_sigmoid, mlr_pipeops_nn_softmax, mlr_pipeops_nn_softplus, mlr_pipeops_nn_softshrink, mlr_pipeops_nn_softsign, mlr_pipeops_nn_squeeze, mlr_pipeops_nn_tanh, mlr_pipeops_nn_tanhshrink, mlr_pipeops_nn_threshold, mlr_pipeops_nn_tokenizer_categ, mlr_pipeops_nn_unsqueeze, mlr_pipeops_torch_ingress, mlr_pipeops_torch_ingress_categ, mlr_pipeops_torch_ingress_ltnsr, mlr_pipeops_torch_ingress_num, mlr_pipeops_torch_loss, mlr_pipeops_torch_model, mlr_pipeops_torch_model_classif, mlr_pipeops_torch_model_regr

Examples


# Construct the PipeOp
pipeop = po("nn_tokenizer_num", d_token = 10)
pipeop
# The available parameters
pipeop$param_set
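# A sketch of the shape inference described above (not part of the original
# documentation): for an input of shape (batch, n_features) the output gains
# a d_token dimension, i.e. this should return list(output = c(NA, 4, 10))
pipeop$shapes_out(list(c(NA, 4)))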


Unsqueeze a Tensor

Description

Unsqueezes a tensor by calling torch::torch_unsqueeze() with the given dimension dim.

nn_module

Calls nn_unsqueeze() when trained. This internally calls torch::torch_unsqueeze().

Parameters

Input and Output Channels

One input channel called "input" and one output channel called "output". For an explanation see PipeOpTorch.

State

The state is the value calculated by the public method ⁠$shapes_out()⁠.

Super classes

mlr3pipelines::PipeOp -> mlr3torch::PipeOpTorch -> PipeOpTorchUnsqueeze

Methods

Public methods

Inherited methods

Method new()

Creates a new instance of this R6 class.

Usage
PipeOpTorchUnsqueeze$new(id = "nn_unsqueeze", param_vals = list())
Arguments
id

(character(1))
Identifier of the resulting object.

param_vals

(list())
List of hyperparameter settings, overwriting the hyperparameter settings that would otherwise be set during construction.


Method clone()

The objects of this class are cloneable with this method.

Usage
PipeOpTorchUnsqueeze$clone(deep = FALSE)
Arguments
deep

Whether to make a deep clone.

See Also

Other PipeOps: mlr_pipeops_nn_adaptive_avg_pool1d, mlr_pipeops_nn_adaptive_avg_pool2d, mlr_pipeops_nn_adaptive_avg_pool3d, mlr_pipeops_nn_avg_pool1d, mlr_pipeops_nn_avg_pool2d, mlr_pipeops_nn_avg_pool3d, mlr_pipeops_nn_batch_norm1d, mlr_pipeops_nn_batch_norm2d, mlr_pipeops_nn_batch_norm3d, mlr_pipeops_nn_block, mlr_pipeops_nn_celu, mlr_pipeops_nn_conv1d, mlr_pipeops_nn_conv2d, mlr_pipeops_nn_conv3d, mlr_pipeops_nn_conv_transpose1d, mlr_pipeops_nn_conv_transpose2d, mlr_pipeops_nn_conv_transpose3d, mlr_pipeops_nn_dropout, mlr_pipeops_nn_elu, mlr_pipeops_nn_flatten, mlr_pipeops_nn_ft_cls, mlr_pipeops_nn_ft_transformer_block, mlr_pipeops_nn_geglu, mlr_pipeops_nn_gelu, mlr_pipeops_nn_glu, mlr_pipeops_nn_hardshrink, mlr_pipeops_nn_hardsigmoid, mlr_pipeops_nn_hardtanh, mlr_pipeops_nn_head, mlr_pipeops_nn_identity, mlr_pipeops_nn_layer_norm, mlr_pipeops_nn_leaky_relu, mlr_pipeops_nn_linear, mlr_pipeops_nn_log_sigmoid, mlr_pipeops_nn_max_pool1d, mlr_pipeops_nn_max_pool2d, mlr_pipeops_nn_max_pool3d, mlr_pipeops_nn_merge, mlr_pipeops_nn_merge_cat, mlr_pipeops_nn_merge_prod, mlr_pipeops_nn_merge_sum, mlr_pipeops_nn_prelu, mlr_pipeops_nn_reglu, mlr_pipeops_nn_relu, mlr_pipeops_nn_relu6, mlr_pipeops_nn_reshape, mlr_pipeops_nn_rrelu, mlr_pipeops_nn_selu, mlr_pipeops_nn_sigmoid, mlr_pipeops_nn_softmax, mlr_pipeops_nn_softplus, mlr_pipeops_nn_softshrink, mlr_pipeops_nn_softsign, mlr_pipeops_nn_squeeze, mlr_pipeops_nn_tanh, mlr_pipeops_nn_tanhshrink, mlr_pipeops_nn_threshold, mlr_pipeops_nn_tokenizer_categ, mlr_pipeops_nn_tokenizer_num, mlr_pipeops_torch_ingress, mlr_pipeops_torch_ingress_categ, mlr_pipeops_torch_ingress_ltnsr, mlr_pipeops_torch_ingress_num, mlr_pipeops_torch_loss, mlr_pipeops_torch_model, mlr_pipeops_torch_model_classif, mlr_pipeops_torch_model_regr

Examples


# Construct the PipeOp
pipeop = po("nn_unsqueeze")
pipeop
# The available parameters
pipeop$param_set
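# A sketch (assumption, not part of the original documentation): after setting
# the "dim" parameter mentioned in the description, shapes_out() reflects the
# inserted dimension, e.g. this should return list(output = c(NA, 1, 5))
pipeop$param_set$set_values(dim = 2)
pipeop$shapes_out(list(c(NA, 5)))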


Base Class for Lazy Tensor Preprocessing

Description

This PipeOp can be used to preprocess (one or more) lazy_tensor columns contained in an mlr3::Task. The preprocessing function is specified as construction argument fn and additional arguments to this function can be defined through the PipeOp's parameter set. The preprocessing is done per column, i.e. the number of lazy tensor output columns is equal to the number of lazy tensor input columns.

To create custom preprocessing PipeOps you can use pipeop_preproc_torch.

Inheriting

In addition to specifying the construction arguments, you can overwrite the private .shapes_out() method. If you don't overwrite it, the output shapes are assumed to be unknown (NULL).

Input and Output Channels

See PipeOpTaskPreproc.

State

In addition to state elements from PipeOpTaskPreprocSimple, the state also contains the ⁠$param_vals⁠ that were set during training.

Parameters

In addition to the parameters inherited from PipeOpTaskPreproc, as well as those specified during construction as the argument param_set, there are the following parameters:

Internals

During $train() / $predict(), a PipeOpModule with one input and one output channel is created. The pipeop applies the function fn to the input tensor while additionally passing the parameter values (minus stages and affect_columns) to fn. The preprocessing graph of the lazy tensor columns is shallowly cloned and the PipeOpModule is added. This is done to avoid modifying user input and means that identical PipeOpModules can be part of different preprocessing graphs. This is only possible because the created PipeOpModule is stateless.

At a later point in the graph, preprocessing graphs will be merged if possible to avoid unnecessary computation. This is best illustrated by example: One lazy tensor column's preprocessing graph is A -> B. Then, two branches are created B -> C and B -> D, creating two preprocessing graphs A -> B -> C and A -> B -> D. When loading the data, we want to run the preprocessing only once, i.e. we don't want to run the A -> B part twice. For this reason, task_dataset() will try to merge graphs and cache results from graphs. However, only graphs using the same dataset can currently be merged.

Also, the shapes created during ⁠$train()⁠ and ⁠$predict()⁠ might differ. To avoid the creation of graphs where the predict shapes are incompatible with the train shapes, the hypothetical predict shapes are already calculated during ⁠$train()⁠ (this is why the parameters that are set during train are also used during predict) and the PipeOpTorchModel will check the train and predict shapes for compatibility before starting the training.

Otherwise, this mechanism is very similar to the ModelDescriptor construct.

Super classes

mlr3pipelines::PipeOp -> mlr3pipelines::PipeOpTaskPreproc -> PipeOpTaskPreprocTorch

Active bindings

fn

The preprocessing function.

rowwise

Whether the preprocessing is applied rowwise.

Methods

Public methods

Inherited methods

Method new()

Creates a new instance of this R6 class.

Usage
PipeOpTaskPreprocTorch$new(
  fn,
  id = "preproc_torch",
  param_vals = list(),
  param_set = ps(),
  packages = character(0),
  rowwise = FALSE,
  stages_init = NULL,
  tags = NULL
)
Arguments
fn

(function or character(2))
The preprocessing function. Must not modify its input in-place. If it is a character(2), the first element should be the namespace and the second element the name. When the preprocessing function is applied to the tensor, the tensor will be passed by position as the first argument. If the param_set is inferred (left as NULL) it is assumed that the first argument is the torch_tensor.

id

(character(1))
The id of the new object.

param_vals

(named list())
Parameter values to be set after construction.

param_set

(ParamSet)
In case the function fn takes additional parameters besides a torch_tensor, they can be specified here. None of the parameters can have the "predict" tag and all of them should have the "train" tag.

packages

(character())
The packages the preprocessing function depends on.

rowwise

(logical(1))
Whether the preprocessing function is applied rowwise (and then concatenated by row) or directly to the whole tensor. In the first case there is no batch dimension.

stages_init

(character(1))
Initial value for the stages parameter.

tags

(character())
Tags for the pipeop.


Method shapes_out()

Calculates the output shapes that would result in applying the preprocessing to one or more lazy tensor columns with the provided shape. Names are ignored and only order matters. It uses the parameter values that are currently set.

Usage
PipeOpTaskPreprocTorch$shapes_out(shapes_in, stage = NULL, task = NULL)
Arguments
shapes_in

(list() of (integer() or NULL))
The input shapes of the lazy tensors. NULL indicates that the shape is unknown. First dimension must be NA (if it is not NULL).

stage

(character(1))
The stage: either "train" or "predict".

task

(Task or NULL)
The task, which is very rarely needed.

Returns

list() of (integer() or NULL)


Method clone()

The objects of this class are cloneable with this method.

Usage
PipeOpTaskPreprocTorch$clone(deep = FALSE)
Arguments
deep

Whether to make a deep clone.

Examples


# Creating a simple task
d = data.table(
  x1 = as_lazy_tensor(rnorm(10)),
  x2 = as_lazy_tensor(rnorm(10)),
  x3 = as_lazy_tensor(as.double(1:10)),
  y = rnorm(10)
)

taskin = as_task_regr(d, target = "y")

# Creating a simple preprocessing pipeop
po_simple = po("preproc_torch",
  # get rid of environment baggage
  fn = mlr3misc::crate(function(x, a) x + a),
  param_set = paradox::ps(a = paradox::p_int(tags = c("train", "required")))
)

po_simple$param_set$set_values(
  a = 100,
  affect_columns = selector_name(c("x1", "x2")),
  stages = "both" # use during train and predict
)

taskout_train = po_simple$train(list(taskin))[[1L]]
materialize(taskout_train$data(cols = c("x1", "x2")), rbind = TRUE)

taskout_predict_noaug = po_simple$predict(list(taskin))[[1L]]
materialize(taskout_predict_noaug$data(cols = c("x1", "x2")), rbind = TRUE)

po_simple$param_set$set_values(
  stages = "train"
)

# transformation is not applied
taskout_predict_aug = po_simple$predict(list(taskin))[[1L]]
materialize(taskout_predict_aug$data(cols = c("x1", "x2")), rbind = TRUE)

# Creating a more complex preprocessing PipeOp
PipeOpPreprocTorchPoly = R6::R6Class("PipeOpPreprocTorchPoly",
 inherit = PipeOpTaskPreprocTorch,
 public = list(
   initialize = function(id = "preproc_poly", param_vals = list()) {
     param_set = paradox::ps(
       n_degree = paradox::p_int(lower = 1L, tags = c("train", "required"))
     )
     param_set$set_values(
       n_degree = 1L
     )
     fn = mlr3misc::crate(function(x, n_degree) {
       torch::torch_cat(
         lapply(seq_len(n_degree), function(d) torch::torch_pow(x, d)),
         dim = 2L
       )
     })

     super$initialize(
       fn = fn,
       id = id,
       packages = character(0),
       param_vals = param_vals,
       param_set = param_set,
       stages_init = "both"
     )
   }
 ),
 private = list(
   .shapes_out = function(shapes_in, param_vals, task) {
     # shapes_in is a list of length 1 containing the shapes
     checkmate::assert_true(length(shapes_in[[1L]]) == 2L)
     if (shapes_in[[1L]][2L] != 1L) {
       stop("Input shape must be (NA, 1)")
     }
     list(c(NA, param_vals$n_degree))
   }
 )
)

po_poly = PipeOpPreprocTorchPoly$new(
  param_vals = list(n_degree = 3L, affect_columns = selector_name("x3"))
)

po_poly$shapes_out(list(c(NA, 1L)), stage = "train")

taskout = po_poly$train(list(taskin))[[1L]]
materialize(taskout$data(cols = "x3"), rbind = TRUE)
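
# A short addition (not part of the original example): as described in the
# 'Internals' section, the parameter values set during training are also used
# to compute the hypothetical predict shapes
po_poly$shapes_out(list(c(NA, 1L)), stage = "predict")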


Base Class for Torch Module Constructor Wrappers

Description

PipeOpTorch is the base class for all PipeOps that represent neural network layers in a Graph. During training, it generates a PipeOpModule that wraps an nn_module and attaches it to the architecture, which is also represented as a Graph consisting mostly of PipeOpModules and PipeOpNOPs.

While the former Graph operates on ModelDescriptors, the latter operates on tensors.

The relationship between a PipeOpTorch and a PipeOpModule is similar to the relationship between a nn_module_generator (like nn_linear) and a nn_module (like the output of nn_linear(...)). A crucial difference is that the PipeOpTorch infers auxiliary parameters (like in_features for nn_linear) automatically from the intermediate tensor shapes that are being communicated through the ModelDescriptor.

During prediction, PipeOpTorch takes in a Task in each channel and outputs to each channel the same new Task resulting from their feature union. If there is only one input and output channel, the task is simply piped through.

Parameters

The ParamSet is specified by the child class inheriting from PipeOpTorch. Usually the parameters are the arguments of the wrapped nn_module minus the auxiliary parameters that can be automatically inferred from the shapes of the input tensors.

Inheriting

When inheriting from this class, one should either overload both the private$.shapes_out() and the private$.shape_dependent_params() methods, or overload private$.make_module().

Input and Output Channels

During training, all inputs and outputs are of class ModelDescriptor. During prediction, all input and output channels are of class Task.

State

The state is the value calculated by the public method shapes_out().

Internals

During training, the PipeOpTorch creates a PipeOpModule for the given parameter specification and the input shapes from the incoming ModelDescriptors using the private method .make_module(). The input shapes are provided by the slot pointer_shape of the incoming ModelDescriptors. The channel names of this PipeOpModule are identical to the channel names of the generating PipeOpTorch.

A model descriptor union of all incoming ModelDescriptors is then created. Note that this modifies the graph of the first ModelDescriptor in place for efficiency. The PipeOpModule is added to the graph slot of this union and the edges that connect the sending PipeOpModules to the input channel of this PipeOpModule are added to the graph. This is possible because every incoming ModelDescriptor contains the information about the id and the channel name of the sending PipeOp in the slot pointer.

The new graph in the model_descriptor_union represents the current state of the neural network architecture. It is structurally similar to the subgraph that consists of all pipeops of class PipeOpTorch and PipeOpTorchIngress that are ancestors of this PipeOpTorch.

For the output, a shallow copy of the ModelDescriptor is created and the pointer and pointer_shape are updated accordingly. The shallow copy means that all ModelDescriptors point to the same Graph which allows the graph to be modified by-reference in different parts of the code.

Super class

mlr3pipelines::PipeOp -> PipeOpTorch

Public fields

module_generator

(nn_module_generator or NULL)
The module generator wrapped by this PipeOpTorch. If NULL, the private method private$.make_module(shapes_in, param_vals) must be overwritten; see the section 'Inheriting'. Do not change this after construction.

Methods

Public methods

Inherited methods

Method new()

Creates a new instance of this R6 class.

Usage
PipeOpTorch$new(
  id,
  module_generator,
  param_set = ps(),
  param_vals = list(),
  inname = "input",
  outname = "output",
  packages = "torch",
  tags = NULL,
  only_batch_unknown = TRUE
)
Arguments
id

(character(1))
Identifier of the resulting object.

module_generator

(nn_module_generator)
The torch module generator.

param_set

(ParamSet)
The parameter set.

param_vals

(list())
List of hyperparameter settings, overwriting the hyperparameter settings that would otherwise be set during construction.

inname

(character())
The names of the PipeOp's input channels. These will be the input channels of the generated PipeOpModule. Unless the wrapped module_generator's forward method (if present) has the argument ..., inname must be identical to those argument names in order to avoid any ambiguity.
If the forward method has the argument ..., the order of the input channels determines how the tensors will be passed to the wrapped nn_module.
If left as NULL (default), the argument module_generator must be given and the argument names of the module_generator's forward function are set as inname.

outname

(character())
The names of the output channels. These will be the output channels of the generated PipeOpModule and therefore also the names of the list returned by its $train(). In case there is more than one output channel, the nn_module that is constructed by this PipeOp during training must return a named list(), where the names of the list are the names of the output channels. The default is "output".

packages

(character())
The R packages this object depends on.

tags

(character())
The tags of the PipeOp. The tag "torch" is always added.

only_batch_unknown

(logical(1))
Whether only the batch dimension can be missing in the input shapes or whether other dimensions can also be unknown. Default is TRUE.


Method shapes_out()

Calculates the output shapes for the given input shapes, parameters and task.

Usage
PipeOpTorch$shapes_out(shapes_in, task = NULL)
Arguments
shapes_in

(list() of integer())
The input shapes, which must be in the same order as the input channel names of the PipeOp.

task

(Task or NULL)
The task, which is very rarely used (default is NULL). An exception is PipeOpTorchHead.

Returns

A named list() containing the output shapes. The names are the names of the output channels of the PipeOp.
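
For example (a short sketch, not part of the original documentation), a linear layer replaces the last dimension by out_features:

po("nn_linear", out_features = 16)$shapes_out(list(c(NA, 10)))
# should return list(output = c(NA, 16))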


Method clone()

The objects of this class are cloneable with this method.

Usage
PipeOpTorch$clone(deep = FALSE)
Arguments
deep

Whether to make a deep clone.

See Also

Other Graph Network: ModelDescriptor(), TorchIngressToken(), mlr_learners_torch_model, mlr_pipeops_module, mlr_pipeops_torch_ingress, mlr_pipeops_torch_ingress_categ, mlr_pipeops_torch_ingress_ltnsr, mlr_pipeops_torch_ingress_num, model_descriptor_to_learner(), model_descriptor_to_module(), model_descriptor_union(), nn_graph()

Examples


## Creating a neural network
# In torch

task = tsk("iris")

network_generator = torch::nn_module(
  initialize = function(task, d_hidden) {
    d_in = length(task$feature_names)
    self$linear = torch::nn_linear(d_in, d_hidden)
    self$output = if (task$task_type == "regr") {
      torch::nn_linear(d_hidden, 1)
    } else if (task$task_type == "classif") {
      torch::nn_linear(d_hidden, output_dim_for(task))
    }
  },
  forward = function(x) {
    x = self$linear(x)
    x = torch::nnf_relu(x)
    self$output(x)
  }
)

network = network_generator(task, d_hidden = 50)
x = torch::torch_tensor(as.matrix(task$data(1, task$feature_names)))
y = torch::with_no_grad(network(x))


# In mlr3torch
network_generator = po("torch_ingress_num") %>>%
  po("nn_linear", out_features = 50) %>>%
  po("nn_head")
md = network_generator$train(task)[[1L]]
network = model_descriptor_to_module(md)
y = torch::with_no_grad(network(torch_ingress_num.input = x))



## Implementing a custom PipeOpTorch

# defining a custom module
nn_custom = nn_module("nn_custom",
  initialize = function(d_in1, d_in2, d_out1, d_out2, bias = TRUE) {
    self$linear1 = nn_linear(d_in1, d_out1, bias)
    self$linear2 = nn_linear(d_in2, d_out2, bias)
  },
  forward = function(input1, input2) {
    output1 = self$linear1(input1)
    output2 = self$linear2(input2)

    list(output1 = output1, output2 = output2)
  }
)

# wrapping the module into a custom PipeOpTorch

library(paradox)

PipeOpTorchCustom = R6::R6Class("PipeOpTorchCustom",
  inherit = PipeOpTorch,
  public = list(
    initialize = function(id = "nn_custom", param_vals = list()) {
      param_set = ps(
        d_out1 = p_int(lower = 1, tags = c("required", "train")),
        d_out2 = p_int(lower = 1, tags = c("required", "train")),
        bias = p_lgl(default = TRUE, tags = "train")
      )
      super$initialize(
        id = id,
        param_vals = param_vals,
        param_set = param_set,
        inname = c("input1", "input2"),
        outname = c("output1", "output2"),
        module_generator = nn_custom
      )
    }
  ),
  private = list(
    .shape_dependent_params = function(shapes_in, param_vals, task) {
      c(param_vals,
        list(d_in1 = tail(shapes_in[["input1"]], 1), d_in2 = tail(shapes_in[["input2"]], 1))
      )
    },
    .shapes_out = function(shapes_in, param_vals, task) {
      list(
        input1 = c(head(shapes_in[["input1"]], -1), param_vals$d_out1),
        input2 = c(head(shapes_in[["input2"]], -1), param_vals$d_out2)
      )
    }
  )
)

## Training

# generate input
task = tsk("iris")
task1 = task$clone()$select(paste0("Sepal.", c("Length", "Width")))
task2 = task$clone()$select(paste0("Petal.", c("Length", "Width")))
graph = gunion(list(po("torch_ingress_num_1"), po("torch_ingress_num_2")))
mds_in = graph$train(list(task1, task2), single_input = FALSE)

mds_in[[1L]][c("graph", "task", "ingress", "pointer", "pointer_shape")]
mds_in[[2L]][c("graph", "task", "ingress", "pointer", "pointer_shape")]

# creating the PipeOpTorch and training it
po_torch = PipeOpTorchCustom$new()
po_torch$param_set$values = list(d_out1 = 10, d_out2 = 20)
train_input = list(input1 = mds_in[[1L]], input2 = mds_in[[2L]])
mds_out = do.call(po_torch$train, args = list(input = train_input))
po_torch$state

# the new model descriptors

# the resulting graphs are identical
identical(mds_out[[1L]]$graph, mds_out[[2L]]$graph)
# note that as a side-effect, also one of the input graphs is modified in-place for efficiency
mds_in[[1L]]$graph$edges

# The new task has both Sepal and Petal features
identical(mds_out[[1L]]$task, mds_out[[2L]]$task)
mds_out[[2L]]$task

# The new ingress slot contains all ingressors
identical(mds_out[[1L]]$ingress, mds_out[[2L]]$ingress)
mds_out[[1L]]$ingress

# The pointer and pointer_shape slots are different
mds_out[[1L]]$pointer
mds_out[[2L]]$pointer

mds_out[[1L]]$pointer_shape
mds_out[[2L]]$pointer_shape

## Prediction
predict_input = list(input1 = task1, input2 = task2)
tasks_out = do.call(po_torch$predict, args = list(input = predict_input))
identical(tasks_out[[1L]], tasks_out[[2L]])
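
## Shape inference
# A short addition (not part of the original example): shapes_out() computes
# the output shapes from the input shapes using the currently set parameter
# values (d_out1 = 10, d_out2 = 20)
po_torch$shapes_out(list(input1 = c(NA, 2), input2 = c(NA, 2)))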


Callback Configuration

Description

Configures the callbacks of a deep learning model.

Parameters

The parameters are defined dynamically from the callbacks, where the id of the respective callbacks is the respective set id.

Input and Output Channels

There is one input channel "input" and one output channel "output". During training, the channels are of class ModelDescriptor. During prediction, the channels are of class Task.

State

The state is the value calculated by the public method shapes_out().

Internals

During training the callbacks are cloned and added to the ModelDescriptor.

Super class

mlr3pipelines::PipeOp -> PipeOpTorchCallbacks

Methods

Public methods

Inherited methods

Method new()

Creates a new instance of this R6 class.

Usage
PipeOpTorchCallbacks$new(
  callbacks = list(),
  id = "torch_callbacks",
  param_vals = list()
)
Arguments
callbacks

(list of TorchCallbacks)
The callbacks (or something convertible via as_torch_callbacks()). Must have unique ids. All callbacks are cloned during construction.

id

(character(1))
Identifier of the resulting object.

param_vals

(list())
List of hyperparameter settings, overwriting the hyperparameter settings that would otherwise be set during construction.


Method clone()

The objects of this class are cloneable with this method.

Usage
PipeOpTorchCallbacks$clone(deep = FALSE)
Arguments
deep

Whether to make a deep clone.

See Also

Other Model Configuration: ModelDescriptor(), mlr_pipeops_torch_loss, mlr_pipeops_torch_optimizer, model_descriptor_union()

Other PipeOp: mlr_pipeops_module, mlr_pipeops_torch_optimizer

Examples


po_cb = po("torch_callbacks", "checkpoint")
po_cb$param_set
mdin = po("torch_ingress_num")$train(list(tsk("iris")))
mdin[[1L]]$callbacks
mdout = po_cb$train(mdin)[[1L]]
mdout$callbacks
# Can be called again
po_cb1 = po("torch_callbacks", t_clbk("progress"))
mdout1 = po_cb1$train(list(mdout))[[1L]]
mdout1$callbacks


Entrypoint to Torch Network

Description

Use this as entry-point to mlr3torch-networks. Unless you are an advanced user, you should not need to use this directly but rather PipeOpTorchIngressNumeric, PipeOpTorchIngressCategorical or PipeOpTorchIngressLazyTensor.

Parameters

Defined by the construction argument param_set.

Input and Output Channels

One input channel called "input" and one output channel called "output". For an explanation see PipeOpTorch.

State

The state is set to the input shape.

Internals

Creates an object of class TorchIngressToken for the given task. The purpose of this is to store the information on how to construct the torch dataloader from the task for this entry point of the network.
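
A minimal sketch of such a token (not part of the original documentation; the hand-written batchgetter below is purely illustrative):

# hypothetical batchgetter turning a data.table of numerics into a float tensor
batchgetter = function(data, ...) {
  torch::torch_tensor(as.matrix(data), dtype = torch::torch_float())
}
token = TorchIngressToken(
  features = c("Sepal.Length", "Petal.Length"),
  batchgetter = batchgetter,
  shape = c(NA, 2)
)
token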

Super class

mlr3pipelines::PipeOp -> PipeOpTorchIngress

Active bindings

feature_types

(character(1))
The feature types that can be consumed by this PipeOpTorchIngress.

Methods

Public methods

Inherited methods

Method new()

Creates a new instance of this R6 class.

Usage
PipeOpTorchIngress$new(
  id,
  param_set = ps(),
  param_vals = list(),
  packages = character(0),
  feature_types
)
Arguments
id

(character(1))
Identifier of the resulting object.

param_set

(ParamSet)
The parameter set.

param_vals

(list())
List of hyperparameter settings, overwriting the hyperparameter settings that would otherwise be set during construction.

packages

(character())
The R packages this object depends on.

feature_types

(character())
The feature types. See mlr_reflections$task_feature_types for available values. Additionally, "lazy_tensor" is supported.


Method clone()

The objects of this class are cloneable with this method.

Usage
PipeOpTorchIngress$clone(deep = FALSE)
Arguments
deep

Whether to make a deep clone.

See Also

Other PipeOps: mlr_pipeops_nn_adaptive_avg_pool1d, mlr_pipeops_nn_adaptive_avg_pool2d, mlr_pipeops_nn_adaptive_avg_pool3d, mlr_pipeops_nn_avg_pool1d, mlr_pipeops_nn_avg_pool2d, mlr_pipeops_nn_avg_pool3d, mlr_pipeops_nn_batch_norm1d, mlr_pipeops_nn_batch_norm2d, mlr_pipeops_nn_batch_norm3d, mlr_pipeops_nn_block, mlr_pipeops_nn_celu, mlr_pipeops_nn_conv1d, mlr_pipeops_nn_conv2d, mlr_pipeops_nn_conv3d, mlr_pipeops_nn_conv_transpose1d, mlr_pipeops_nn_conv_transpose2d, mlr_pipeops_nn_conv_transpose3d, mlr_pipeops_nn_dropout, mlr_pipeops_nn_elu, mlr_pipeops_nn_flatten, mlr_pipeops_nn_ft_cls, mlr_pipeops_nn_ft_transformer_block, mlr_pipeops_nn_geglu, mlr_pipeops_nn_gelu, mlr_pipeops_nn_glu, mlr_pipeops_nn_hardshrink, mlr_pipeops_nn_hardsigmoid, mlr_pipeops_nn_hardtanh, mlr_pipeops_nn_head, mlr_pipeops_nn_identity, mlr_pipeops_nn_layer_norm, mlr_pipeops_nn_leaky_relu, mlr_pipeops_nn_linear, mlr_pipeops_nn_log_sigmoid, mlr_pipeops_nn_max_pool1d, mlr_pipeops_nn_max_pool2d, mlr_pipeops_nn_max_pool3d, mlr_pipeops_nn_merge, mlr_pipeops_nn_merge_cat, mlr_pipeops_nn_merge_prod, mlr_pipeops_nn_merge_sum, mlr_pipeops_nn_prelu, mlr_pipeops_nn_reglu, mlr_pipeops_nn_relu, mlr_pipeops_nn_relu6, mlr_pipeops_nn_reshape, mlr_pipeops_nn_rrelu, mlr_pipeops_nn_selu, mlr_pipeops_nn_sigmoid, mlr_pipeops_nn_softmax, mlr_pipeops_nn_softplus, mlr_pipeops_nn_softshrink, mlr_pipeops_nn_softsign, mlr_pipeops_nn_squeeze, mlr_pipeops_nn_tanh, mlr_pipeops_nn_tanhshrink, mlr_pipeops_nn_threshold, mlr_pipeops_nn_tokenizer_categ, mlr_pipeops_nn_tokenizer_num, mlr_pipeops_nn_unsqueeze, mlr_pipeops_torch_ingress_categ, mlr_pipeops_torch_ingress_ltnsr, mlr_pipeops_torch_ingress_num, mlr_pipeops_torch_loss, mlr_pipeops_torch_model, mlr_pipeops_torch_model_classif, mlr_pipeops_torch_model_regr

Other Graph Network: ModelDescriptor(), TorchIngressToken(), mlr_learners_torch_model, mlr_pipeops_module, mlr_pipeops_torch, mlr_pipeops_torch_ingress_categ, mlr_pipeops_torch_ingress_ltnsr, mlr_pipeops_torch_ingress_num, model_descriptor_to_learner(), model_descriptor_to_module(), model_descriptor_union(), nn_graph()


Torch Entry Point for Categorical Features

Description

Ingress PipeOp that represents a categorical (factor(), ordered() and logical()) entry point to a torch network.

Parameters

Internals

Uses batchgetter_categ().

Input and Output Channels

One input channel called "input" and one output channel called "output". For an explanation see PipeOpTorch.

State

The state is set to the input shape.

Super classes

mlr3pipelines::PipeOp -> mlr3torch::PipeOpTorchIngress -> PipeOpTorchIngressCategorical

Methods

Public methods

Inherited methods

Method new()

Creates a new instance of this R6 class.

Usage
PipeOpTorchIngressCategorical$new(
  id = "torch_ingress_categ",
  param_vals = list()
)
Arguments
id

(character(1))
Identifier of the resulting object.

param_vals

(list())
List of hyperparameter settings, overwriting the hyperparameter settings that would otherwise be set during construction.


Method clone()

The objects of this class are cloneable with this method.

Usage
PipeOpTorchIngressCategorical$clone(deep = FALSE)
Arguments
deep

Whether to make a deep clone.

See Also

Other PipeOps: mlr_pipeops_nn_adaptive_avg_pool1d, mlr_pipeops_nn_adaptive_avg_pool2d, mlr_pipeops_nn_adaptive_avg_pool3d, mlr_pipeops_nn_avg_pool1d, mlr_pipeops_nn_avg_pool2d, mlr_pipeops_nn_avg_pool3d, mlr_pipeops_nn_batch_norm1d, mlr_pipeops_nn_batch_norm2d, mlr_pipeops_nn_batch_norm3d, mlr_pipeops_nn_block, mlr_pipeops_nn_celu, mlr_pipeops_nn_conv1d, mlr_pipeops_nn_conv2d, mlr_pipeops_nn_conv3d, mlr_pipeops_nn_conv_transpose1d, mlr_pipeops_nn_conv_transpose2d, mlr_pipeops_nn_conv_transpose3d, mlr_pipeops_nn_dropout, mlr_pipeops_nn_elu, mlr_pipeops_nn_flatten, mlr_pipeops_nn_ft_cls, mlr_pipeops_nn_ft_transformer_block, mlr_pipeops_nn_geglu, mlr_pipeops_nn_gelu, mlr_pipeops_nn_glu, mlr_pipeops_nn_hardshrink, mlr_pipeops_nn_hardsigmoid, mlr_pipeops_nn_hardtanh, mlr_pipeops_nn_head, mlr_pipeops_nn_identity, mlr_pipeops_nn_layer_norm, mlr_pipeops_nn_leaky_relu, mlr_pipeops_nn_linear, mlr_pipeops_nn_log_sigmoid, mlr_pipeops_nn_max_pool1d, mlr_pipeops_nn_max_pool2d, mlr_pipeops_nn_max_pool3d, mlr_pipeops_nn_merge, mlr_pipeops_nn_merge_cat, mlr_pipeops_nn_merge_prod, mlr_pipeops_nn_merge_sum, mlr_pipeops_nn_prelu, mlr_pipeops_nn_reglu, mlr_pipeops_nn_relu, mlr_pipeops_nn_relu6, mlr_pipeops_nn_reshape, mlr_pipeops_nn_rrelu, mlr_pipeops_nn_selu, mlr_pipeops_nn_sigmoid, mlr_pipeops_nn_softmax, mlr_pipeops_nn_softplus, mlr_pipeops_nn_softshrink, mlr_pipeops_nn_softsign, mlr_pipeops_nn_squeeze, mlr_pipeops_nn_tanh, mlr_pipeops_nn_tanhshrink, mlr_pipeops_nn_threshold, mlr_pipeops_nn_tokenizer_categ, mlr_pipeops_nn_tokenizer_num, mlr_pipeops_nn_unsqueeze, mlr_pipeops_torch_ingress, mlr_pipeops_torch_ingress_ltnsr, mlr_pipeops_torch_ingress_num, mlr_pipeops_torch_loss, mlr_pipeops_torch_model, mlr_pipeops_torch_model_classif, mlr_pipeops_torch_model_regr

Other Graph Network: ModelDescriptor(), TorchIngressToken(), mlr_learners_torch_model, mlr_pipeops_module, mlr_pipeops_torch, mlr_pipeops_torch_ingress, mlr_pipeops_torch_ingress_ltnsr, mlr_pipeops_torch_ingress_num, model_descriptor_to_learner(), model_descriptor_to_module(), model_descriptor_union(), nn_graph()

Examples


graph = po("select", selector = selector_type("factor")) %>>%
  po("torch_ingress_categ")
task = tsk("german_credit")
# The output is a model descriptor
md = graph$train(task)[[1L]]
ingress = md$ingress[[1L]]
ingress$batchgetter(task$data(1, ingress$features(task)), "cpu")


Ingress for Lazy Tensor

Description

Ingress for a single lazy_tensor column.

Parameters

Internals

The returned batchgetter materializes the lazy tensor column to a tensor.

Input and Output Channels

One input channel called "input" and one output channel called "output". For an explanation see PipeOpTorch.

State

The state is set to the input shape.

Super classes

mlr3pipelines::PipeOp -> mlr3torch::PipeOpTorchIngress -> PipeOpTorchIngressLazyTensor

Methods

Public methods

Inherited methods

Method new()

Creates a new instance of this R6 class.

Usage
PipeOpTorchIngressLazyTensor$new(
  id = "torch_ingress_ltnsr",
  param_vals = list()
)
Arguments
id

(character(1))
Identifier of the resulting object.

param_vals

(list())
List of hyperparameter settings, overwriting the hyperparameter settings that would otherwise be set during construction.


Method clone()

The objects of this class are cloneable with this method.

Usage
PipeOpTorchIngressLazyTensor$clone(deep = FALSE)
Arguments
deep

Whether to make a deep clone.

See Also

Other PipeOps: mlr_pipeops_nn_adaptive_avg_pool1d, mlr_pipeops_nn_adaptive_avg_pool2d, mlr_pipeops_nn_adaptive_avg_pool3d, mlr_pipeops_nn_avg_pool1d, mlr_pipeops_nn_avg_pool2d, mlr_pipeops_nn_avg_pool3d, mlr_pipeops_nn_batch_norm1d, mlr_pipeops_nn_batch_norm2d, mlr_pipeops_nn_batch_norm3d, mlr_pipeops_nn_block, mlr_pipeops_nn_celu, mlr_pipeops_nn_conv1d, mlr_pipeops_nn_conv2d, mlr_pipeops_nn_conv3d, mlr_pipeops_nn_conv_transpose1d, mlr_pipeops_nn_conv_transpose2d, mlr_pipeops_nn_conv_transpose3d, mlr_pipeops_nn_dropout, mlr_pipeops_nn_elu, mlr_pipeops_nn_flatten, mlr_pipeops_nn_ft_cls, mlr_pipeops_nn_ft_transformer_block, mlr_pipeops_nn_geglu, mlr_pipeops_nn_gelu, mlr_pipeops_nn_glu, mlr_pipeops_nn_hardshrink, mlr_pipeops_nn_hardsigmoid, mlr_pipeops_nn_hardtanh, mlr_pipeops_nn_head, mlr_pipeops_nn_identity, mlr_pipeops_nn_layer_norm, mlr_pipeops_nn_leaky_relu, mlr_pipeops_nn_linear, mlr_pipeops_nn_log_sigmoid, mlr_pipeops_nn_max_pool1d, mlr_pipeops_nn_max_pool2d, mlr_pipeops_nn_max_pool3d, mlr_pipeops_nn_merge, mlr_pipeops_nn_merge_cat, mlr_pipeops_nn_merge_prod, mlr_pipeops_nn_merge_sum, mlr_pipeops_nn_prelu, mlr_pipeops_nn_reglu, mlr_pipeops_nn_relu, mlr_pipeops_nn_relu6, mlr_pipeops_nn_reshape, mlr_pipeops_nn_rrelu, mlr_pipeops_nn_selu, mlr_pipeops_nn_sigmoid, mlr_pipeops_nn_softmax, mlr_pipeops_nn_softplus, mlr_pipeops_nn_softshrink, mlr_pipeops_nn_softsign, mlr_pipeops_nn_squeeze, mlr_pipeops_nn_tanh, mlr_pipeops_nn_tanhshrink, mlr_pipeops_nn_threshold, mlr_pipeops_nn_tokenizer_categ, mlr_pipeops_nn_tokenizer_num, mlr_pipeops_nn_unsqueeze, mlr_pipeops_torch_ingress, mlr_pipeops_torch_ingress_categ, mlr_pipeops_torch_ingress_num, mlr_pipeops_torch_loss, mlr_pipeops_torch_model, mlr_pipeops_torch_model_classif, mlr_pipeops_torch_model_regr

Other Graph Network: ModelDescriptor(), TorchIngressToken(), mlr_learners_torch_model, mlr_pipeops_module, mlr_pipeops_torch, mlr_pipeops_torch_ingress, mlr_pipeops_torch_ingress_categ, mlr_pipeops_torch_ingress_num, model_descriptor_to_learner(), model_descriptor_to_module(), model_descriptor_union(), nn_graph()

Examples


po_ingress = po("torch_ingress_ltnsr")
task = tsk("lazy_iris")

md = po_ingress$train(list(task))[[1L]]

ingress = md$ingress
x_batch = ingress[[1L]]$batchgetter(data = task$data(1, "x"), cache = NULL)
x_batch

# Now we try a lazy tensor with unknown shape, i.e. the shapes between the rows can differ

ds = dataset(
  initialize = function() self$x = list(torch_randn(3, 10, 10), torch_randn(3, 8, 8)),
  .getitem = function(i) list(x = self$x[[i]]),
  .length = function() 2)()

task_unknown = as_task_regr(data.table(
  x = as_lazy_tensor(ds, dataset_shapes = list(x = NULL)),
  y = rnorm(2)
), target = "y", id = "example2")

# this task (as it is) can NOT be processed by PipeOpTorchIngressLazyTensor
# It therefore needs to be preprocessed
po_resize = po("trafo_resize", size = c(6, 6))
task_unknown_resize = po_resize$train(list(task_unknown))[[1L]]

# printing the transformed column still shows unknown shapes,
# because the preprocessing pipeop cannot infer them,
# however we know that the shape is now (3, 6, 6) for all rows
task_unknown_resize$data(1:2, "x")
po_ingress$param_set$set_values(shape = c(NA, 3, 6, 6))

md2 = po_ingress$train(list(task_unknown_resize))[[1L]]

ingress2 = md2$ingress
x_batch2 = ingress2[[1L]]$batchgetter(
  data = task_unknown_resize$data(1:2, "x"),
  cache = NULL
)

x_batch2


Torch Entry Point for Numeric Features

Description

Ingress PipeOp that represents a numeric (integer() and numeric()) entry point to a torch network.

Internals

Uses batchgetter_num().

Input and Output Channels

One input channel called "input" and one output channel called "output". For an explanation see PipeOpTorch.

State

The state is set to the input shape.

Super classes

mlr3pipelines::PipeOp -> mlr3torch::PipeOpTorchIngress -> PipeOpTorchIngressNumeric

Methods

Public methods

Inherited methods

Method new()

Creates a new instance of this R6 class.

Usage
PipeOpTorchIngressNumeric$new(id = "torch_ingress_num", param_vals = list())
Arguments
id

(character(1))
Identifier of the resulting object.

param_vals

(list())
List of hyperparameter settings, overwriting the hyperparameter settings that would otherwise be set during construction.


Method clone()

The objects of this class are cloneable with this method.

Usage
PipeOpTorchIngressNumeric$clone(deep = FALSE)
Arguments
deep

Whether to make a deep clone.

See Also

Other Graph Network: ModelDescriptor(), TorchIngressToken(), mlr_learners_torch_model, mlr_pipeops_module, mlr_pipeops_torch, mlr_pipeops_torch_ingress, mlr_pipeops_torch_ingress_categ, mlr_pipeops_torch_ingress_ltnsr, model_descriptor_to_learner(), model_descriptor_to_module(), model_descriptor_union(), nn_graph()

Other PipeOps: mlr_pipeops_nn_adaptive_avg_pool1d, mlr_pipeops_nn_adaptive_avg_pool2d, mlr_pipeops_nn_adaptive_avg_pool3d, mlr_pipeops_nn_avg_pool1d, mlr_pipeops_nn_avg_pool2d, mlr_pipeops_nn_avg_pool3d, mlr_pipeops_nn_batch_norm1d, mlr_pipeops_nn_batch_norm2d, mlr_pipeops_nn_batch_norm3d, mlr_pipeops_nn_block, mlr_pipeops_nn_celu, mlr_pipeops_nn_conv1d, mlr_pipeops_nn_conv2d, mlr_pipeops_nn_conv3d, mlr_pipeops_nn_conv_transpose1d, mlr_pipeops_nn_conv_transpose2d, mlr_pipeops_nn_conv_transpose3d, mlr_pipeops_nn_dropout, mlr_pipeops_nn_elu, mlr_pipeops_nn_flatten, mlr_pipeops_nn_ft_cls, mlr_pipeops_nn_ft_transformer_block, mlr_pipeops_nn_geglu, mlr_pipeops_nn_gelu, mlr_pipeops_nn_glu, mlr_pipeops_nn_hardshrink, mlr_pipeops_nn_hardsigmoid, mlr_pipeops_nn_hardtanh, mlr_pipeops_nn_head, mlr_pipeops_nn_identity, mlr_pipeops_nn_layer_norm, mlr_pipeops_nn_leaky_relu, mlr_pipeops_nn_linear, mlr_pipeops_nn_log_sigmoid, mlr_pipeops_nn_max_pool1d, mlr_pipeops_nn_max_pool2d, mlr_pipeops_nn_max_pool3d, mlr_pipeops_nn_merge, mlr_pipeops_nn_merge_cat, mlr_pipeops_nn_merge_prod, mlr_pipeops_nn_merge_sum, mlr_pipeops_nn_prelu, mlr_pipeops_nn_reglu, mlr_pipeops_nn_relu, mlr_pipeops_nn_relu6, mlr_pipeops_nn_reshape, mlr_pipeops_nn_rrelu, mlr_pipeops_nn_selu, mlr_pipeops_nn_sigmoid, mlr_pipeops_nn_softmax, mlr_pipeops_nn_softplus, mlr_pipeops_nn_softshrink, mlr_pipeops_nn_softsign, mlr_pipeops_nn_squeeze, mlr_pipeops_nn_tanh, mlr_pipeops_nn_tanhshrink, mlr_pipeops_nn_threshold, mlr_pipeops_nn_tokenizer_categ, mlr_pipeops_nn_tokenizer_num, mlr_pipeops_nn_unsqueeze, mlr_pipeops_torch_ingress, mlr_pipeops_torch_ingress_categ, mlr_pipeops_torch_ingress_ltnsr, mlr_pipeops_torch_loss, mlr_pipeops_torch_model, mlr_pipeops_torch_model_classif, mlr_pipeops_torch_model_regr

Examples


graph = po("select", selector = selector_type(c("numeric", "integer"))) %>>%
  po("torch_ingress_num")
task = tsk("german_credit")
# The output is a model descriptor
md = graph$train(task)[[1L]]
ingress = md$ingress[[1L]]
ingress$batchgetter(task$data(1:5, ingress$features(task)), "cpu")


Loss Configuration

Description

Configures the loss of a deep learning model.

Input and Output Channels

One input channel called "input" and one output channel called "output". For an explanation see PipeOpTorch.

State

The state is the value calculated by the public method shapes_out().

Parameters

The parameters are defined dynamically from the loss set during construction.

Internals

During training the loss is cloned and added to the ModelDescriptor.

Super class

mlr3pipelines::PipeOp -> PipeOpTorchLoss

Methods

Public methods

Inherited methods

Method new()

Creates a new instance of this R6 class.

Usage
PipeOpTorchLoss$new(loss, id = "torch_loss", param_vals = list())
Arguments
loss

(TorchLoss or character(1) or nn_loss)
The loss (or something convertible via as_torch_loss()).

id

(character(1))
Identifier of the resulting object.

param_vals

(list())
List of hyperparameter settings, overwriting the hyperparameter settings that would otherwise be set during construction.


Method clone()

The objects of this class are cloneable with this method.

Usage
PipeOpTorchLoss$clone(deep = FALSE)
Arguments
deep

Whether to make a deep clone.

See Also

Other PipeOps: mlr_pipeops_nn_adaptive_avg_pool1d, mlr_pipeops_nn_adaptive_avg_pool2d, mlr_pipeops_nn_adaptive_avg_pool3d, mlr_pipeops_nn_avg_pool1d, mlr_pipeops_nn_avg_pool2d, mlr_pipeops_nn_avg_pool3d, mlr_pipeops_nn_batch_norm1d, mlr_pipeops_nn_batch_norm2d, mlr_pipeops_nn_batch_norm3d, mlr_pipeops_nn_block, mlr_pipeops_nn_celu, mlr_pipeops_nn_conv1d, mlr_pipeops_nn_conv2d, mlr_pipeops_nn_conv3d, mlr_pipeops_nn_conv_transpose1d, mlr_pipeops_nn_conv_transpose2d, mlr_pipeops_nn_conv_transpose3d, mlr_pipeops_nn_dropout, mlr_pipeops_nn_elu, mlr_pipeops_nn_flatten, mlr_pipeops_nn_ft_cls, mlr_pipeops_nn_ft_transformer_block, mlr_pipeops_nn_geglu, mlr_pipeops_nn_gelu, mlr_pipeops_nn_glu, mlr_pipeops_nn_hardshrink, mlr_pipeops_nn_hardsigmoid, mlr_pipeops_nn_hardtanh, mlr_pipeops_nn_head, mlr_pipeops_nn_identity, mlr_pipeops_nn_layer_norm, mlr_pipeops_nn_leaky_relu, mlr_pipeops_nn_linear, mlr_pipeops_nn_log_sigmoid, mlr_pipeops_nn_max_pool1d, mlr_pipeops_nn_max_pool2d, mlr_pipeops_nn_max_pool3d, mlr_pipeops_nn_merge, mlr_pipeops_nn_merge_cat, mlr_pipeops_nn_merge_prod, mlr_pipeops_nn_merge_sum, mlr_pipeops_nn_prelu, mlr_pipeops_nn_reglu, mlr_pipeops_nn_relu, mlr_pipeops_nn_relu6, mlr_pipeops_nn_reshape, mlr_pipeops_nn_rrelu, mlr_pipeops_nn_selu, mlr_pipeops_nn_sigmoid, mlr_pipeops_nn_softmax, mlr_pipeops_nn_softplus, mlr_pipeops_nn_softshrink, mlr_pipeops_nn_softsign, mlr_pipeops_nn_squeeze, mlr_pipeops_nn_tanh, mlr_pipeops_nn_tanhshrink, mlr_pipeops_nn_threshold, mlr_pipeops_nn_tokenizer_categ, mlr_pipeops_nn_tokenizer_num, mlr_pipeops_nn_unsqueeze, mlr_pipeops_torch_ingress, mlr_pipeops_torch_ingress_categ, mlr_pipeops_torch_ingress_ltnsr, mlr_pipeops_torch_ingress_num, mlr_pipeops_torch_model, mlr_pipeops_torch_model_classif, mlr_pipeops_torch_model_regr

Other Model Configuration: ModelDescriptor(), mlr_pipeops_torch_callbacks, mlr_pipeops_torch_optimizer, model_descriptor_union()

Examples


po_loss = po("torch_loss", loss = t_loss("cross_entropy"))
po_loss$param_set
mdin = po("torch_ingress_num")$train(list(tsk("iris")))
mdin[[1L]]$loss
mdout = po_loss$train(mdin)[[1L]]
mdout$loss


PipeOp Torch Model

Description

Builds a Torch Learner from a ModelDescriptor and trains it with the given parameter specification. The task type must be specified during construction.

Parameters

General:

The parameters of the optimizer, loss and callbacks, prefixed with "opt.", "loss." and "cb.<callback id>." respectively, as well as:

Evaluation:

Early Stopping:

Dataloader:

Also see torch::dataloader for more information.

Input and Output Channels

There is one input channel "input" that takes in a ModelDescriptor during training and a Task of the specified task_type during prediction. The output is NULL during training and a Prediction of the given task_type during prediction.

State

A trained LearnerTorchModel.

Internals

A LearnerTorchModel is created by calling model_descriptor_to_learner() on the provided ModelDescriptor that is received through the input channel. Then the parameters are set according to the parameters specified in PipeOpTorchModel and its $train() method is called on the Task stored in the ModelDescriptor.

Super classes

mlr3pipelines::PipeOp -> mlr3pipelines::PipeOpLearner -> PipeOpTorchModel

Methods

Public methods

Inherited methods

Method new()

Creates a new instance of this R6 class.

Usage
PipeOpTorchModel$new(task_type, id = "torch_model", param_vals = list())
Arguments
task_type

(character(1))
The task type of the model.

id

(character(1))
Identifier of the resulting object.

param_vals

(list())
List of hyperparameter settings, overwriting the hyperparameter settings that would otherwise be set during construction.


Method clone()

The objects of this class are cloneable with this method.

Usage
PipeOpTorchModel$clone(deep = FALSE)
Arguments
deep

Whether to make a deep clone.

See Also

Other PipeOps: mlr_pipeops_nn_adaptive_avg_pool1d, mlr_pipeops_nn_adaptive_avg_pool2d, mlr_pipeops_nn_adaptive_avg_pool3d, mlr_pipeops_nn_avg_pool1d, mlr_pipeops_nn_avg_pool2d, mlr_pipeops_nn_avg_pool3d, mlr_pipeops_nn_batch_norm1d, mlr_pipeops_nn_batch_norm2d, mlr_pipeops_nn_batch_norm3d, mlr_pipeops_nn_block, mlr_pipeops_nn_celu, mlr_pipeops_nn_conv1d, mlr_pipeops_nn_conv2d, mlr_pipeops_nn_conv3d, mlr_pipeops_nn_conv_transpose1d, mlr_pipeops_nn_conv_transpose2d, mlr_pipeops_nn_conv_transpose3d, mlr_pipeops_nn_dropout, mlr_pipeops_nn_elu, mlr_pipeops_nn_flatten, mlr_pipeops_nn_ft_cls, mlr_pipeops_nn_ft_transformer_block, mlr_pipeops_nn_geglu, mlr_pipeops_nn_gelu, mlr_pipeops_nn_glu, mlr_pipeops_nn_hardshrink, mlr_pipeops_nn_hardsigmoid, mlr_pipeops_nn_hardtanh, mlr_pipeops_nn_head, mlr_pipeops_nn_identity, mlr_pipeops_nn_layer_norm, mlr_pipeops_nn_leaky_relu, mlr_pipeops_nn_linear, mlr_pipeops_nn_log_sigmoid, mlr_pipeops_nn_max_pool1d, mlr_pipeops_nn_max_pool2d, mlr_pipeops_nn_max_pool3d, mlr_pipeops_nn_merge, mlr_pipeops_nn_merge_cat, mlr_pipeops_nn_merge_prod, mlr_pipeops_nn_merge_sum, mlr_pipeops_nn_prelu, mlr_pipeops_nn_reglu, mlr_pipeops_nn_relu, mlr_pipeops_nn_relu6, mlr_pipeops_nn_reshape, mlr_pipeops_nn_rrelu, mlr_pipeops_nn_selu, mlr_pipeops_nn_sigmoid, mlr_pipeops_nn_softmax, mlr_pipeops_nn_softplus, mlr_pipeops_nn_softshrink, mlr_pipeops_nn_softsign, mlr_pipeops_nn_squeeze, mlr_pipeops_nn_tanh, mlr_pipeops_nn_tanhshrink, mlr_pipeops_nn_threshold, mlr_pipeops_nn_tokenizer_categ, mlr_pipeops_nn_tokenizer_num, mlr_pipeops_nn_unsqueeze, mlr_pipeops_torch_ingress, mlr_pipeops_torch_ingress_categ, mlr_pipeops_torch_ingress_ltnsr, mlr_pipeops_torch_ingress_num, mlr_pipeops_torch_loss, mlr_pipeops_torch_model_classif, mlr_pipeops_torch_model_regr
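
Examples


# A minimal sketch (not part of the original documentation), analogous to the
# classification example further below; "torch_model" requires an explicit
# task_type
md = as_graph(po("torch_ingress_num") %>>%
  po("nn_head") %>>%
  po("torch_loss", "cross_entropy") %>>%
  po("torch_optimizer", "adam"))$train(tsk("iris"))[[1L]]

po_model = po("torch_model", task_type = "classif", batch_size = 50, epochs = 1)
po_model$train(list(md))
po_model$state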


PipeOp Torch Classifier

Description

Builds a torch classifier and trains it.

Parameters

See LearnerTorch

Input and Output Channels

There is one input channel "input" that takes in a ModelDescriptor during training and a Task of the specified task_type during prediction. The output is NULL during training and a Prediction of the given task_type during prediction.

State

A trained LearnerTorchModel.

Internals

A LearnerTorchModel is created by calling model_descriptor_to_learner() on the provided ModelDescriptor that is received through the input channel. Then the parameters are set according to the parameters specified in PipeOpTorchModel and its $train() method is called on the Task stored in the ModelDescriptor.

Super classes

mlr3pipelines::PipeOp -> mlr3pipelines::PipeOpLearner -> mlr3torch::PipeOpTorchModel -> PipeOpTorchModelClassif

Methods

Public methods

Inherited methods

Method new()

Creates a new instance of this R6 class.

Usage
PipeOpTorchModelClassif$new(id = "torch_model_classif", param_vals = list())
Arguments
id

(character(1))
Identifier of the resulting object.

param_vals

(list())
List of hyperparameter settings, overwriting the hyperparameter settings that would otherwise be set during construction.


Method clone()

The objects of this class are cloneable with this method.

Usage
PipeOpTorchModelClassif$clone(deep = FALSE)
Arguments
deep

Whether to make a deep clone.

See Also

Other PipeOps: mlr_pipeops_nn_adaptive_avg_pool1d, mlr_pipeops_nn_adaptive_avg_pool2d, mlr_pipeops_nn_adaptive_avg_pool3d, mlr_pipeops_nn_avg_pool1d, mlr_pipeops_nn_avg_pool2d, mlr_pipeops_nn_avg_pool3d, mlr_pipeops_nn_batch_norm1d, mlr_pipeops_nn_batch_norm2d, mlr_pipeops_nn_batch_norm3d, mlr_pipeops_nn_block, mlr_pipeops_nn_celu, mlr_pipeops_nn_conv1d, mlr_pipeops_nn_conv2d, mlr_pipeops_nn_conv3d, mlr_pipeops_nn_conv_transpose1d, mlr_pipeops_nn_conv_transpose2d, mlr_pipeops_nn_conv_transpose3d, mlr_pipeops_nn_dropout, mlr_pipeops_nn_elu, mlr_pipeops_nn_flatten, mlr_pipeops_nn_ft_cls, mlr_pipeops_nn_ft_transformer_block, mlr_pipeops_nn_geglu, mlr_pipeops_nn_gelu, mlr_pipeops_nn_glu, mlr_pipeops_nn_hardshrink, mlr_pipeops_nn_hardsigmoid, mlr_pipeops_nn_hardtanh, mlr_pipeops_nn_head, mlr_pipeops_nn_identity, mlr_pipeops_nn_layer_norm, mlr_pipeops_nn_leaky_relu, mlr_pipeops_nn_linear, mlr_pipeops_nn_log_sigmoid, mlr_pipeops_nn_max_pool1d, mlr_pipeops_nn_max_pool2d, mlr_pipeops_nn_max_pool3d, mlr_pipeops_nn_merge, mlr_pipeops_nn_merge_cat, mlr_pipeops_nn_merge_prod, mlr_pipeops_nn_merge_sum, mlr_pipeops_nn_prelu, mlr_pipeops_nn_reglu, mlr_pipeops_nn_relu, mlr_pipeops_nn_relu6, mlr_pipeops_nn_reshape, mlr_pipeops_nn_rrelu, mlr_pipeops_nn_selu, mlr_pipeops_nn_sigmoid, mlr_pipeops_nn_softmax, mlr_pipeops_nn_softplus, mlr_pipeops_nn_softshrink, mlr_pipeops_nn_softsign, mlr_pipeops_nn_squeeze, mlr_pipeops_nn_tanh, mlr_pipeops_nn_tanhshrink, mlr_pipeops_nn_threshold, mlr_pipeops_nn_tokenizer_categ, mlr_pipeops_nn_tokenizer_num, mlr_pipeops_nn_unsqueeze, mlr_pipeops_torch_ingress, mlr_pipeops_torch_ingress_categ, mlr_pipeops_torch_ingress_ltnsr, mlr_pipeops_torch_ingress_num, mlr_pipeops_torch_loss, mlr_pipeops_torch_model, mlr_pipeops_torch_model_regr

Examples


# simple logistic regression

# configure the model descriptor
md = as_graph(po("torch_ingress_num") %>>%
  po("nn_head") %>>%
  po("torch_loss", "cross_entropy") %>>%
  po("torch_optimizer", "adam"))$train(tsk("iris"))[[1L]]

print(md)

# build the learner from the model descriptor and train it
po_model = po("torch_model_classif", batch_size = 50, epochs = 1)
po_model$train(list(md))
po_model$state


Torch Regression Model

Description

Builds a torch regression model and trains it.

Parameters

See LearnerTorch

Input and Output Channels

There is one input channel "input" that takes in a ModelDescriptor during training and a Task of the specified task_type during prediction. The output is NULL during training and a Prediction of the given task_type during prediction.

State

A trained LearnerTorchModel.

Internals

A LearnerTorchModel is created by calling model_descriptor_to_learner() on the provided ModelDescriptor that is received through the input channel. Then the parameters are set according to the parameters specified in PipeOpTorchModel and its $train() method is called on the Task stored in the ModelDescriptor.

Super classes

mlr3pipelines::PipeOp -> mlr3pipelines::PipeOpLearner -> mlr3torch::PipeOpTorchModel -> PipeOpTorchModelRegr

Methods

Public methods

Inherited methods

Method new()

Creates a new instance of this R6 class.

Usage
PipeOpTorchModelRegr$new(id = "torch_model_regr", param_vals = list())
Arguments
id

(character(1))
Identifier of the resulting object.

param_vals

(list())
List of hyperparameter settings, overwriting the hyperparameter settings that would otherwise be set during construction.


Method clone()

The objects of this class are cloneable with this method.

Usage
PipeOpTorchModelRegr$clone(deep = FALSE)
Arguments
deep

Whether to make a deep clone.

See Also

Other PipeOps: mlr_pipeops_nn_adaptive_avg_pool1d, mlr_pipeops_nn_adaptive_avg_pool2d, mlr_pipeops_nn_adaptive_avg_pool3d, mlr_pipeops_nn_avg_pool1d, mlr_pipeops_nn_avg_pool2d, mlr_pipeops_nn_avg_pool3d, mlr_pipeops_nn_batch_norm1d, mlr_pipeops_nn_batch_norm2d, mlr_pipeops_nn_batch_norm3d, mlr_pipeops_nn_block, mlr_pipeops_nn_celu, mlr_pipeops_nn_conv1d, mlr_pipeops_nn_conv2d, mlr_pipeops_nn_conv3d, mlr_pipeops_nn_conv_transpose1d, mlr_pipeops_nn_conv_transpose2d, mlr_pipeops_nn_conv_transpose3d, mlr_pipeops_nn_dropout, mlr_pipeops_nn_elu, mlr_pipeops_nn_flatten, mlr_pipeops_nn_ft_cls, mlr_pipeops_nn_ft_transformer_block, mlr_pipeops_nn_geglu, mlr_pipeops_nn_gelu, mlr_pipeops_nn_glu, mlr_pipeops_nn_hardshrink, mlr_pipeops_nn_hardsigmoid, mlr_pipeops_nn_hardtanh, mlr_pipeops_nn_head, mlr_pipeops_nn_identity, mlr_pipeops_nn_layer_norm, mlr_pipeops_nn_leaky_relu, mlr_pipeops_nn_linear, mlr_pipeops_nn_log_sigmoid, mlr_pipeops_nn_max_pool1d, mlr_pipeops_nn_max_pool2d, mlr_pipeops_nn_max_pool3d, mlr_pipeops_nn_merge, mlr_pipeops_nn_merge_cat, mlr_pipeops_nn_merge_prod, mlr_pipeops_nn_merge_sum, mlr_pipeops_nn_prelu, mlr_pipeops_nn_reglu, mlr_pipeops_nn_relu, mlr_pipeops_nn_relu6, mlr_pipeops_nn_reshape, mlr_pipeops_nn_rrelu, mlr_pipeops_nn_selu, mlr_pipeops_nn_sigmoid, mlr_pipeops_nn_softmax, mlr_pipeops_nn_softplus, mlr_pipeops_nn_softshrink, mlr_pipeops_nn_softsign, mlr_pipeops_nn_squeeze, mlr_pipeops_nn_tanh, mlr_pipeops_nn_tanhshrink, mlr_pipeops_nn_threshold, mlr_pipeops_nn_tokenizer_categ, mlr_pipeops_nn_tokenizer_num, mlr_pipeops_nn_unsqueeze, mlr_pipeops_torch_ingress, mlr_pipeops_torch_ingress_categ, mlr_pipeops_torch_ingress_ltnsr, mlr_pipeops_torch_ingress_num, mlr_pipeops_torch_loss, mlr_pipeops_torch_model, mlr_pipeops_torch_model_classif

Examples


# simple linear regression

# build the model descriptor
md = as_graph(po("torch_ingress_num") %>>%
  po("nn_head") %>>%
  po("torch_loss", "mse") %>>%
  po("torch_optimizer", "adam"))$train(tsk("mtcars"))[[1L]]

print(md)

# build the learner from the model descriptor and train it
po_model = po("torch_model_regr", batch_size = 20, epochs = 1)
po_model$train(list(md))
po_model$state


Optimizer Configuration

Description

Configures the optimizer of a deep learning model.

Parameters

The parameters are defined dynamically from the optimizer that is set during construction.

Input and Output Channels

There is one input channel "input" and one output channel "output". During training, the channels are of class ModelDescriptor. During prediction, the channels are of class Task.

State

The state is the value calculated by the public method shapes_out().

Internals

During training, the optimizer is cloned and added to the ModelDescriptor. Note that the parameter set of the stored TorchOptimizer is reference-identical to the parameter set of the pipeop itself.

Super class

mlr3pipelines::PipeOp -> PipeOpTorchOptimizer

Methods

Public methods

Inherited methods

Method new()

Creates a new instance of this R6 class.

Usage
PipeOpTorchOptimizer$new(
  optimizer = t_opt("adam"),
  id = "torch_optimizer",
  param_vals = list()
)
Arguments
optimizer

(TorchOptimizer or character(1) or torch_optimizer_generator)
The optimizer (or something convertible via as_torch_optimizer()).

id

(character(1))
Identifier of the resulting object.

param_vals

(list())
List of hyperparameter settings, overwriting the hyperparameter settings that would otherwise be set during construction.


Method clone()

The objects of this class are cloneable with this method.

Usage
PipeOpTorchOptimizer$clone(deep = FALSE)
Arguments
deep

Whether to make a deep clone.

See Also

Other PipeOp: mlr_pipeops_module, mlr_pipeops_torch_callbacks

Other Model Configuration: ModelDescriptor(), mlr_pipeops_torch_callbacks, mlr_pipeops_torch_loss, model_descriptor_union()

Examples


po_opt = po("torch_optimizer", "sgd", lr = 0.01)
po_opt$param_set
mdin = po("torch_ingress_num")$train(list(tsk("iris")))
mdin[[1L]]$optimizer
mdout = po_opt$train(mdin)
mdout[[1L]]$optimizer


Adjust Brightness Transformation

Description

Calls torchvision::transform_adjust_brightness, see there for more information on the parameters. The preprocessing is applied to each element of a batch individually.

Format

R6Class inheriting from PipeOpTaskPreprocTorch.

Construction

po("trafo_adjust_brightness"")

Parameters

Id Type Default Levels Range
brightness_factor numeric - [0, Inf)
stages character - train, predict, both -
affect_columns untyped selector_all() -
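
Examples


# A usage sketch (not part of the original documentation): the transformed
# column is a small image-shaped lazy_tensor constructed here purely for
# illustration; the dataset_shapes convention used below is an assumption
ds = dataset(
  initialize = function() self$x = torch_rand(4, 3, 8, 8),
  .getitem = function(i) list(x = self$x[i, ..]),
  .length = function() 4)()

task = as_task_regr(data.table(
  x = as_lazy_tensor(ds, dataset_shapes = list(x = c(NA, 3, 8, 8))),
  y = rnorm(4)
), target = "y", id = "images")

po_bright = po("trafo_adjust_brightness", brightness_factor = 2)
task_bright = po_bright$train(list(task))[[1L]]
materialize(task_bright$data(1:2, "x"), rbind = TRUE)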

Adjust Gamma Transformation

Description

Calls torchvision::transform_adjust_gamma, see there for more information on the parameters. The preprocessing is applied to each element of a batch individually.

Format

R6Class inheriting from PipeOpTaskPreprocTorch.

Construction

po("trafo_adjust_gamma"")

Parameters

Id Type Default Levels Range
gamma numeric - [0, Inf)
gain numeric 1 (-Inf, Inf)
stages character - train, predict, both -
affect_columns untyped selector_all() -

Adjust Hue Transformation

Description

Calls torchvision::transform_adjust_hue, see there for more information on the parameters. The preprocessing is applied to each element of a batch individually.

Format

R6Class inheriting from PipeOpTaskPreprocTorch.

Construction

po("trafo_adjust_hue"")

Parameters

Id Type Default Levels Range
hue_factor numeric - [-0.5, 0.5]
stages character - train, predict, both -
affect_columns untyped selector_all() -

Adjust Saturation Transformation

Description

Calls torchvision::transform_adjust_saturation, see there for more information on the parameters. The preprocessing is applied to each element of a batch individually.

Format

R6Class inheriting from PipeOpTaskPreprocTorch.

Construction

po("trafo_adjust_saturation"")

Parameters

Id Type Default Levels Range
saturation_factor numeric - (-Inf, Inf)
stages character - train, predict, both -
affect_columns untyped selector_all() -

Grayscale Transformation

Description

Calls torchvision::transform_grayscale, see there for more information on the parameters. The preprocessing is applied to each element of a batch individually.

Format

R6Class inheriting from PipeOpTaskPreprocTorch.

Construction

po("trafo_grayscale"")

Parameters

Id Type Default Levels Range
num_output_channels integer - [1, 3]
stages character - train, predict, both -
affect_columns untyped selector_all() -

No Transformation

Description

Does nothing.

Format

R6Class inheriting from PipeOpTaskPreprocTorch.


Normalization Transformation

Description

Calls torchvision::transform_normalize, see there for more information on the parameters. The preprocessing is applied to each element of a batch individually.

Format

R6Class inheriting from PipeOpTaskPreprocTorch.

Construction

po("trafo_normalize"")

Parameters

Id Type Default Levels
mean untyped -
std untyped -
stages character - train, predict, both
affect_columns untyped selector_all()
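
A minimal sketch (not from the package documentation; the per-channel mean and std values are arbitrary): construct the pipeop for three-channel images and inspect the set values.

po_norm = po("trafo_normalize", mean = c(0.5, 0.5, 0.5), std = c(0.5, 0.5, 0.5))
po_norm$param_set$values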

Padding Transformation

Description

Calls torchvision::transform_pad, see there for more information on the parameters. The preprocessing is applied to each element of a batch individually.

Format

R6Class inheriting from PipeOpTaskPreprocTorch.

Construction

po("trafo_pad"")

Parameters

Id Type Default Levels
padding untyped -
fill untyped 0
padding_mode character constant constant, edge, reflect, symmetric
stages character - train, predict, both
affect_columns untyped selector_all()

Reshaping Transformation

Description

Reshapes the tensor according to the parameter shape, by calling torch_reshape(). This preprocessing function is applied batch-wise.

Format

R6Class inheriting from PipeOpTaskPreprocTorch.

Parameters
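
The transformation is configured via its shape parameter, described above. A minimal sketch (not from the package documentation; the target shape is hypothetical): flatten, e.g., (batch, 3, 8, 8) images into (batch, 192) vectors, where -1 keeps the batch dimension flexible.

po_reshape = po("trafo_reshape", shape = c(-1, 192))
po_reshape$param_set$values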


Resizing Transformation

Description

Calls torchvision::transform_resize, see there for more information on the parameters. The preprocessing is applied to the whole batch.

Format

R6Class inheriting from PipeOpTaskPreprocTorch.

Construction

po("trafo_resize"")

Parameters

Id Type Default Levels
size untyped -
interpolation character 2 Undefined, Bartlett, Blackman, Bohman, Box, Catrom, Cosine, Cubic, Gaussian, Hamming, ...
stages character - train, predict, both
affect_columns untyped selector_all()
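
A minimal sketch (not from the package documentation; the target size is arbitrary): resize images to 64x64 pixels.

po_resize = po("trafo_resize", size = c(64, 64))
po_resize$param_set$values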

RGB to Grayscale Transformation

Description

Calls torchvision::transform_rgb_to_grayscale, see there for more information on the parameters. The preprocessing is applied to each element of a batch individually.

Format

R6Class inheriting from PipeOpTaskPreprocTorch.

Construction

po("trafo_rgb_to_grayscale"")

Parameters

Id Type Default Levels
stages character - train, predict, both
affect_columns untyped selector_all()

CIFAR Classification Tasks

Description

The CIFAR-10 and CIFAR-100 datasets. A subset of the 80 million tiny images dataset with noisy labels was supplied to student labelers, who were asked to filter out incorrectly labeled images. The images have datatype torch_long().

CIFAR-10 contains 10 classes. CIFAR-100 contains 100 classes, which may be partitioned into 20 superclasses of 5 classes each. The CIFAR-10 and CIFAR-100 classes are mutually exclusive. See Chapter 3.1 of the technical report for more details.

The data is obtained from torchvision::cifar10_dataset() (or torchvision::cifar100_dataset()).

Format

R6::R6Class inheriting from mlr3::TaskClassif.

Construction

tsk("cifar10")
tsk("cifar100")

Download

The task's backend is a DataBackendLazy which will download the data once it is requested. Other meta-data is already available before that. You can cache these datasets by setting the mlr3torch.cache option to TRUE or to a specific path to be used as the cache directory.

Properties

References

Krizhevsky, Alex (2009). “Learning Multiple Layers of Features from Tiny Images.” Master's thesis, Department of Computer Science, University of Toronto.

Examples

task_cifar10 = tsk("cifar10")
task_cifar100 = tsk("cifar100")

Iris Classification Task

Description

A classification task for the popular datasets::iris data set. Just like the iris task, but the features are represented as one lazy tensor column.

Format

R6::R6Class inheriting from mlr3::TaskClassif.

Construction

tsk("lazy_iris")

Properties

Source

https://en.wikipedia.org/wiki/Iris_flower_data_set

References

Anderson E (1936). “The Species Problem in Iris.” Annals of the Missouri Botanical Garden, 23(3), 457. doi:10.2307/2394164.

Examples


task = tsk("lazy_iris")
task
df = task$data()
materialize(df$x[1:6], rbind = TRUE)


Melanoma Image classification

Description

Classification of melanoma tumor images. The data is a preprocessed version of the 2020 SIIM-ISIC challenge, where the images have been reshaped to size (3, 128, 128).

By default only the training rows are active in the task, but the test data (that has no targets) is also included. Whether an observation is part of the train or test set is indicated by the column "test".

There are no labels for the test rows, so by default, these observations are inactive, which means that the task uses only 32701 of the 43683 observations that are defined in the underlying data backend.

The data backend also contains a more detailed diagnosis of the specific type of tumor.

Columns:

Construction

tsk("melanoma")

Download

The task's backend is a DataBackendLazy which will download the data once it is requested. Other meta-data is already available before that. You can cache these datasets by setting the mlr3torch.cache option to TRUE or to a specific path to be used as the cache directory.

Properties

Source

https://huggingface.co/datasets/carsonzhang/ISIC_2020_small

References

Rotemberg, V., Kurtansky, N., Betz-Stablein, B., Caffery, L., Chousakos, E., Codella, N., Combalia, M., Dusza, S., Guitera, P., Gutman, D., Halpern, A., Helba, B., Kittler, H., Kose, K., Langer, S., Lioprys, K., Malvehy, J., Musthaq, S., Nanda, J., Reiter, O., Shih, G., Stratigos, A., Tschandl, P., Weber, J., Soyer, P. (2021). “A patient-centric dataset of images and metadata for identifying melanomas using clinical context.” Scientific Data, 8, 34. doi:10.1038/s41597-021-00815-z.

Examples

task = tsk("melanoma")

MNIST Image classification

Description

Classic MNIST image classification.

The underlying DataBackend contains columns "label", "image", "row_id", "split", where the last column indicates whether the row belongs to the train or test set.

The first 60000 rows belong to the training set, the last 10000 rows to the test set.

Construction

tsk("mnist")

Download

The task's backend is a DataBackendLazy which will download the data once it is requested. Other meta-data is already available before that. You can cache these datasets by setting the mlr3torch.cache option to TRUE or to a specific path to be used as the cache directory.

Properties

Source

https://torchvision.mlverse.org/reference/mnist_dataset.html

References

Lecun, Y., Bottou, L., Bengio, Y., Haffner, P. (1998). “Gradient-based learning applied to document recognition.” Proceedings of the IEEE, 86(11), 2278-2324. doi:10.1109/5.726791.

Examples

task = tsk("mnist")

Tiny ImageNet Classification Task

Description

Subset of the famous ImageNet dataset. The data is obtained from torchvision::tiny_imagenet_dataset().

The underlying DataBackend contains columns "class", "image", "..row_id", "split", where the last column indicates whether the row belongs to the train, validation, or test set as provided in torchvision.

There are no labels for the test rows, so by default, these observations are inactive, which means that the task uses only 110000 of the 120000 observations that are defined in the underlying data backend.

Construction

tsk("tiny_imagenet")

Download

The task's backend is a DataBackendLazy which will download the data once it is requested. Other meta-data is already available before that. You can cache these datasets by setting the mlr3torch.cache option to TRUE or to a specific path to be used as the cache directory.

Properties

References

Deng, Jia, Dong, Wei, Socher, Richard, Li, Li-Jia, Li, Kai, Fei-Fei, Li (2009). “Imagenet: A large-scale hierarchical image database.” In 2009 IEEE conference on computer vision and pattern recognition, 248–255. IEEE.

Examples

task = tsk("tiny_imagenet")

Create a Torch Learner from a ModelDescriptor

Description

First a nn_graph is created using model_descriptor_to_module and then a learner is created from this module and the remaining information from the model descriptor, which must include the optimizer and loss function and optionally callbacks.

Usage

model_descriptor_to_learner(model_descriptor)

Arguments

model_descriptor

(ModelDescriptor)
The model descriptor.

Value

Learner

See Also

Other Graph Network: ModelDescriptor(), TorchIngressToken(), mlr_learners_torch_model, mlr_pipeops_module, mlr_pipeops_torch, mlr_pipeops_torch_ingress, mlr_pipeops_torch_ingress_categ, mlr_pipeops_torch_ingress_ltnsr, mlr_pipeops_torch_ingress_num, model_descriptor_to_module(), model_descriptor_union(), nn_graph()
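
As a rough sketch (not taken from the package documentation; it assumes the pipeop keys "torch_ingress_num", "nn_head", "torch_loss", and "torch_optimizer" as used elsewhere in this manual), a ModelDescriptor can be assembled with a small pipeline and then turned into a learner:

graph = po("torch_ingress_num") %>>%
  po("nn_head") %>>%
  po("torch_loss", t_loss("cross_entropy")) %>>%
  po("torch_optimizer", t_opt("adam"))
md = graph$train(tsk("iris"))[[1L]]
learner = model_descriptor_to_learner(md)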


Create a nn_graph from ModelDescriptor

Description

Creates the nn_graph from a ModelDescriptor. Mostly for internal use, since the ModelDescriptor is in most circumstances harder to use than just creating nn_graph directly.

Usage

model_descriptor_to_module(
  model_descriptor,
  output_pointers = NULL,
  list_output = FALSE
)

Arguments

model_descriptor

(ModelDescriptor)
Model Descriptor. The pointer is ignored; instead, the output_pointers values are used. The $graph member is modified by reference.

output_pointers

(list of character)
Collection of pointers that indicate what part of the model_descriptor$graph is being used for output. Entries have the format of ModelDescriptor$pointer.

list_output

(logical(1))
Whether output should be a list of tensors. If FALSE, then length(output_pointers) must be 1.

Value

nn_graph

See Also

Other Graph Network: ModelDescriptor(), TorchIngressToken(), mlr_learners_torch_model, mlr_pipeops_module, mlr_pipeops_torch, mlr_pipeops_torch_ingress, mlr_pipeops_torch_ingress_categ, mlr_pipeops_torch_ingress_ltnsr, mlr_pipeops_torch_ingress_num, model_descriptor_to_learner(), model_descriptor_union(), nn_graph()


Union of ModelDescriptors

Description

This is a mostly internal function that is used in PipeOpTorchs with multiple input channels.

It creates the union of multiple ModelDescriptors:

Usage

model_descriptor_union(md1, md2)

Arguments

md1

(ModelDescriptor) The first ModelDescriptor.

md2

(ModelDescriptor) The second ModelDescriptor.

Details

The requirement that no new input edges may be added to PipeOps is not theoretically necessary, but since we assume that the ModelDescriptor is built from beginning to end (i.e. PipeOps never get new ancestors), we can make this assumption and simplify things. Otherwise we would need to treat "..."-inputs specially.

Value

ModelDescriptor

See Also

Other Graph Network: ModelDescriptor(), TorchIngressToken(), mlr_learners_torch_model, mlr_pipeops_module, mlr_pipeops_torch, mlr_pipeops_torch_ingress, mlr_pipeops_torch_ingress_categ, mlr_pipeops_torch_ingress_ltnsr, mlr_pipeops_torch_ingress_num, model_descriptor_to_learner(), model_descriptor_to_module(), nn_graph()

Other Model Configuration: ModelDescriptor(), mlr_pipeops_torch_callbacks, mlr_pipeops_torch_loss, mlr_pipeops_torch_optimizer


Create a Neural Network Layer

Description

Retrieve a neural network layer from the mlr_pipeops dictionary.

Usage

nn(.key, ...)

Arguments

.key

(character(1))

...

(any)
Additional parameters, constructor arguments or fields.

Examples

po1 = po("nn_linear", id = "linear")
# is the same as:
po2 = nn("linear")

CLS Token for FT-Transformer

Description

Concatenates a CLS token to the input as the last feature. The input shape is expected to be ⁠(batch, n_features, d_token)⁠ and the output shape is ⁠(batch, n_features + 1, d_token)⁠.

This is used in the LearnerTorchFTTransformer.

Usage

nn_ft_cls(d_token, initialization)

Arguments

d_token

(integer(1))
The dimension of the embedding.

initialization

(character(1))
The initialization method for the embedding weights. Possible values are "uniform" and "normal".

References

Devlin, Jacob, Chang, Ming-Wei, Lee, Kenton, Toutanova, Kristina (2018). “Bert: Pre-training of deep bidirectional transformers for language understanding.” arXiv preprint arXiv:1810.04805.
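
A minimal sketch (not from the package documentation; it assumes the module's forward method takes a single tensor of shape (batch, n_features, d_token)):

cls = nn_ft_cls(d_token = 8, initialization = "uniform")
x = torch::torch_randn(16, 5, 8)  # (batch, n_features, d_token)
cls(x)$shape                      # expected: (16, 6, 8)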


Single Transformer Block for FT-Transformer

Description

A transformer block consisting of a multi-head self-attention mechanism followed by a feed-forward network.

This is used in LearnerTorchFTTransformer.

Usage

nn_ft_transformer_block(
  d_token,
  attention_n_heads,
  attention_dropout,
  attention_initialization,
  ffn_d_hidden = NULL,
  ffn_d_hidden_multiplier = NULL,
  ffn_dropout,
  ffn_activation,
  residual_dropout,
  prenormalization,
  is_first_layer,
  attention_normalization,
  ffn_normalization,
  query_idx = NULL,
  attention_bias,
  ffn_bias_first,
  ffn_bias_second
)

Arguments

d_token

(integer(1))
The dimension of the embedding.

attention_n_heads

(integer(1))
Number of attention heads.

attention_dropout

(numeric(1))
Dropout probability in the attention mechanism.

attention_initialization

(character(1))
Initialization method for attention weights. Either "kaiming" or "xavier".

ffn_d_hidden

(integer(1))
Hidden dimension of the feed-forward network. Multiplied by 2 if using ReGLU or GeGLU activation.

ffn_d_hidden_multiplier

(numeric(1))
Alternative way to specify the hidden dimension of the feed-forward network as d_token * ffn_d_hidden_multiplier. Also multiplied by 2 if using ReGLU or GeGLU activation.

ffn_dropout

(numeric(1))
Dropout probability in the feed-forward network.

ffn_activation

(nn_module)
Activation function for the feed-forward network. Default value is nn_reglu.

residual_dropout

(numeric(1))
Dropout probability for residual connections.

prenormalization

(logical(1))
Whether to apply normalization before attention and FFN (TRUE) or after (FALSE).

is_first_layer

(logical(1))
Whether this is the first layer in the transformer stack. Default value is FALSE.

attention_normalization

(nn_module)
Normalization module to use for attention. Default value is nn_layer_norm.

ffn_normalization

(nn_module)
Normalization module to use for the feed-forward network. Default value is nn_layer_norm.

query_idx

(integer() or NULL)
Indices of the tensor to apply attention to. Should not be set manually. If NULL, then attention is applied to the entire tensor. In the last block in a stack of transformers, this is set to -1 so that attention is applied only to the embedding of the CLS token.

attention_bias

(logical(1))
Whether attention has a bias. Default is TRUE.

ffn_bias_first

(logical(1))
Whether the first layer in the FFN has a bias. Default is TRUE.

ffn_bias_second

(logical(1))
Whether the second layer in the FFN has a bias. Default is TRUE.

References

Devlin, Jacob, Chang, Ming-Wei, Lee, Kenton, Toutanova, Kristina (2018). “Bert: Pre-training of deep bidirectional transformers for language understanding.” arXiv preprint arXiv:1810.04805. Gorishniy Y, Rubachev I, Khrulkov V, Babenko A (2021). “Revisiting Deep Learning for Tabular Data.” arXiv, 2106.11959.


GeGLU Module

Description

This module implements the Gaussian Error Linear Unit Gated Linear Unit (GeGLU) activation function. It computes GeGLU(x, g) = x * GELU(g), where x and g are created by splitting the input tensor in half along the last dimension.

Usage

nn_geglu()

References

Shazeer N (2020). “GLU Variants Improve Transformer.” 2002.05202, https://arxiv.org/abs/2002.05202.

Examples


x = torch::torch_randn(10, 10)
glu = nn_geglu()
glu(x)


Graph Network

Description

Represents a neural network using a Graph that contains mostly PipeOpModules.

Usage

nn_graph(graph, shapes_in, output_map = graph$output$name, list_output = FALSE)

Arguments

graph

(Graph)
The Graph to wrap. Is not cloned.

shapes_in

(named integer)
Shape info of tensors that go into graph. Names must be graph$input$name, possibly in different order.

output_map

(character)
Which of graph's outputs to use. Must be a subset of graph$output$name.

list_output

(logical(1))
Whether output should be a list of tensors. If FALSE (default), then length(output_map) must be 1.

Value

nn_graph

Fields

See Also

Other Graph Network: ModelDescriptor(), TorchIngressToken(), mlr_learners_torch_model, mlr_pipeops_module, mlr_pipeops_torch, mlr_pipeops_torch_ingress, mlr_pipeops_torch_ingress_categ, mlr_pipeops_torch_ingress_ltnsr, mlr_pipeops_torch_ingress_num, model_descriptor_to_learner(), model_descriptor_to_module(), model_descriptor_union()

Examples


graph = mlr3pipelines::Graph$new()
graph$add_pipeop(po("module_1", module = nn_linear(10, 20)), clone = FALSE)
graph$add_pipeop(po("module_2", module = nn_relu()), clone = FALSE)
graph$add_pipeop(po("module_3", module = nn_linear(20, 1)), clone = FALSE)
graph$add_edge("module_1", "module_2")
graph$add_edge("module_2", "module_3")

network = nn_graph(graph, shapes_in = list(module_1.input = c(NA, 10)))

x = torch_randn(16, 10)

network(module_1.input = x)


Concatenates multiple tensors

Description

Concatenates multiple tensors along a given dimension. No broadcasting rules are applied here; you must reshape the tensors beforehand so that they have the same shape.

Usage

nn_merge_cat(dim = -1)

Arguments

dim

(integer(1))
The dimension for the concatenation.
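
A minimal sketch (not from the package documentation; it assumes the forward method accepts the tensors to concatenate as separate arguments):

m = nn_merge_cat(dim = -1)
a = torch::torch_randn(4, 3)
b = torch::torch_randn(4, 3)
m(a, b)$shape  # expected: (4, 6)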


Product of multiple tensors

Description

Calculates the product of all input tensors.

Usage

nn_merge_prod()

Sum of multiple tensors

Description

Calculates the sum of all input tensors.

Usage

nn_merge_sum()

ReGLU Module

Description

Rectified Gated Linear Unit (ReGLU) module. Computes the output as ReGLU(x, g) = x * ReLU(g), where x and g are created by splitting the input tensor in half along the last dimension.

Usage

nn_reglu()

References

Shazeer N (2020). “GLU Variants Improve Transformer.” 2002.05202, https://arxiv.org/abs/2002.05202.

Examples


x = torch::torch_randn(10, 10)
reglu = nn_reglu()
reglu(x)


Reshape

Description

Reshape a tensor to the given shape.

Usage

nn_reshape(shape)

Arguments

shape

(integer())
The desired output shape.
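
A minimal sketch (not from the package documentation): reshape a (2, 3, 4) tensor into a (6, 4) tensor.

r = nn_reshape(shape = c(6, 4))
r(torch::torch_randn(2, 3, 4))$shape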


Squeeze

Description

Squeezes a tensor by calling torch::torch_squeeze() with the given dimension dim.

Usage

nn_squeeze(dim)

Arguments

dim

(integer())
The dimension to squeeze.
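
A minimal sketch (not from the package documentation): drop the singleton third dimension of a (4, 2, 1) tensor.

sq = nn_squeeze(dim = 3)
sq(torch::torch_randn(4, 2, 1))$shape  # expected: (4, 2)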


Categorical Tokenizer

Description

Tokenizes categorical features into a dense embedding. For an input of shape ⁠(batch, n_features)⁠ the output shape is ⁠(batch, n_features, d_token)⁠.

Usage

nn_tokenizer_categ(cardinalities, d_token, bias, initialization)

Arguments

cardinalities

(integer())
The number of categories for each feature.

d_token

(integer(1))
The dimension of the embedding.

bias

(logical(1))
Whether to use a bias.

initialization

(character(1))
The initialization method for the embedding weights. Possible values are "uniform" and "normal".

References

Gorishniy Y, Rubachev I, Khrulkov V, Babenko A (2021). “Revisiting Deep Learning for Tabular Data.” arXiv, 2106.11959.
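
A minimal sketch (not from the package documentation; it assumes the input is a long tensor of 1-based category indices with shape (batch, n_features)):

tok = nn_tokenizer_categ(cardinalities = c(3, 5), d_token = 4, bias = TRUE,
  initialization = "uniform")
x = torch::torch_tensor(rbind(c(1, 2), c(3, 5)), dtype = torch::torch_long())
tok(x)$shape  # expected: (2, 2, 4)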


Numeric Tokenizer

Description

Tokenizes numeric features into a dense embedding. For an input of shape ⁠(batch, n_features)⁠ the output shape is ⁠(batch, n_features, d_token)⁠.

Usage

nn_tokenizer_num(n_features, d_token, bias, initialization)

Arguments

n_features

(integer(1))
The number of features.

d_token

(integer(1))
The dimension of the embedding.

bias

(logical(1))
Whether to use a bias.

initialization

(character(1))
The initialization method for the embedding weights. Possible values are "uniform" and "normal".

References

Gorishniy Y, Rubachev I, Khrulkov V, Babenko A (2021). “Revisiting Deep Learning for Tabular Data.” arXiv, 2106.11959.
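
A minimal sketch (not from the package documentation; it assumes the input is a float tensor of shape (batch, n_features)):

tok = nn_tokenizer_num(n_features = 3, d_token = 4, bias = TRUE,
  initialization = "uniform")
x = torch::torch_randn(16, 3)
tok(x)$shape  # expected: (16, 3, 4)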


Unsqueeze

Description

Unsqueezes a tensor by calling torch::torch_unsqueeze() with the given dimension dim.

Usage

nn_unsqueeze(dim)

Arguments

dim

(integer(1))
The dimension to unsqueeze.
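
A minimal sketch (not from the package documentation): insert a singleton second dimension into a (4, 3) tensor.

usq = nn_unsqueeze(dim = 2)
usq(torch::torch_randn(4, 3))$shape  # expected: (4, 1, 3)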


Network Output Dimension

Description

Calculates the output dimension of a neural network for a given task that is expected by mlr3torch. For classification, this is the number of classes (unless it is a binary classification task, where it is 1). For regression, it is 1.

Usage

output_dim_for(x, ...)

Arguments

x

(any)
The task.

...

(any)
Additional arguments. Not used yet.
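
A minimal sketch (not from the package documentation; the task keys are standard mlr3 example tasks): the output dimension expected for a multiclass, a binary, and a regression task.

output_dim_for(tsk("iris"))    # 3 classes -> 3
output_dim_for(tsk("sonar"))   # binary classification -> 1
output_dim_for(tsk("mtcars"))  # regression -> 1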


Create Torch Preprocessing PipeOps

Description

Function to create objects of class PipeOpTaskPreprocTorch in a more convenient way. Start by reading the documentation of PipeOpTaskPreprocTorch.

Usage

pipeop_preproc_torch(
  id,
  fn,
  shapes_out = NULL,
  param_set = NULL,
  packages = character(0),
  rowwise = FALSE,
  parent_env = parent.frame(),
  stages_init = NULL,
  tags = NULL
)

Arguments

id

(character(1))
The id of the new object.

fn

(function)
The preprocessing function.

shapes_out

(function or NULL or "infer")
The private .shapes_out(shapes_in, param_vals, task) method of PipeOpTaskPreprocTorch (see section Inheriting). Special values are NULL and "infer": if NULL, the output shapes are unknown; if "infer", they are inferred via infer_shapes, which should be correct in most cases but might fail in some edge cases.

param_set

(ParamSet or NULL)
The parameter set. If this is left as NULL (default), the parameter set is inferred as follows: all parameters of fn except the first one and ... become untyped parameters with tag "train"; parameters without a default value are additionally tagged as "required". Default values are not annotated.

packages

(character())
The R packages this object depends on.

rowwise

(logical(1))
Whether the preprocessing is applied row-wise.

parent_env

(environment)
The parent environment for the R6 class.

stages_init

(character(1))
Initial value for the stages parameter. If NULL (default), it is set to "both" if the id starts with "trafo" and to "train" if it starts with "augment". Otherwise it must be specified.

tags

(character())
Tags for the pipeop.

Value

An R6Class instance inheriting from PipeOpTaskPreprocTorch

Examples


PipeOpPreprocExample = pipeop_preproc_torch("preproc_example", function(x, a) x + a)
po_example = PipeOpPreprocExample$new()
po_example$param_set


Replace the Head of a Network

Description

Replaces the head of the network with a linear layer with d_out output classes.

Usage

replace_head(network, d_out)

Arguments

network

(torch::nn_module)
The network

d_out

(integer(1))
The number of output classes.


Sugar Function for Torch Callback

Description

Retrieves one or more TorchCallbacks from mlr3torch_callbacks. Works like mlr3::lrn() and mlr3::lrns().

Usage

t_clbk(.key, ...)

t_clbks(.keys)

Arguments

.key

(character(1))
The key of the torch callback.

...

(any)
See description of dictionary_sugar_get().

.keys

(character())
The keys of the callbacks.

Value

TorchCallback

list() of TorchCallbacks

See Also

Other Callback: TorchCallback, as_torch_callback(), as_torch_callbacks(), callback_set(), mlr3torch_callbacks, mlr_callback_set, mlr_callback_set.checkpoint, mlr_callback_set.progress, mlr_callback_set.tb, mlr_callback_set.unfreeze, mlr_context_torch, torch_callback()

Other Torch Descriptor: TorchCallback, TorchDescriptor, TorchLoss, TorchOptimizer, as_torch_callbacks(), as_torch_loss(), as_torch_optimizer(), mlr3torch_losses, mlr3torch_optimizers, t_loss(), t_opt()

Examples


t_clbk("progress")


Loss Function Quick Access

Description

Retrieve one or more TorchLosses from mlr3torch_losses. Works like mlr3::lrn() and mlr3::lrns().

Usage

t_loss(.key, ...)

t_losses(.keys, ...)

Arguments

.key

(character(1))
Key of the object to retrieve.

...

(any)
See description of dictionary_sugar_get.

.keys

(character())
The keys of the losses.

Value

A TorchLoss

See Also

Other Torch Descriptor: TorchCallback, TorchDescriptor, TorchLoss, TorchOptimizer, as_torch_callbacks(), as_torch_loss(), as_torch_optimizer(), mlr3torch_losses, mlr3torch_optimizers, t_clbk(), t_opt()

Examples


t_loss("mse", reduction = "mean")
# get the dictionary
t_loss()


t_losses(c("mse", "l1"))
# get the dictionary
t_losses()


Optimizers Quick Access

Description

Retrieves one or more TorchOptimizers from mlr3torch_optimizers. Works like mlr3::lrn() and mlr3::lrns().

Usage

t_opt(.key, ...)

t_opts(.keys, ...)

Arguments

.key

(character(1))
Key of the object to retrieve.

...

(any)
See description of dictionary_sugar_get.

.keys

(character())
The keys of the optimizers.

Value

A TorchOptimizer

See Also

Other Torch Descriptor: TorchCallback, TorchDescriptor, TorchLoss, TorchOptimizer, as_torch_callbacks(), as_torch_loss(), as_torch_optimizer(), mlr3torch_losses, mlr3torch_optimizers, t_clbk(), t_loss()

Other Dictionary: mlr3torch_callbacks, mlr3torch_losses, mlr3torch_optimizers

Examples


t_opt("adam", lr = 0.1)
# get the dictionary
t_opt()


t_opts(c("adam", "sgd"))
# get the dictionary
t_opts()


Create a Dataset from a Task

Description

Creates a torch dataset from an mlr3 Task. The resulting dataset's $.getbatch() method returns a list with elements x, y and index:

The data is returned on the device specified by the parameter device.

Usage

task_dataset(task, feature_ingress_tokens, target_batchgetter = NULL)

Arguments

task

(Task)
The task for which to build the dataset.

feature_ingress_tokens

(named list() of TorchIngressToken)
Each ingress token defines one item in the ⁠$x⁠ value of a batch with corresponding names.

target_batchgetter

(⁠function(data, device)⁠)
A function taking in arguments data, which is a data.table containing only the target variable, and device. It must return the target as a torch tensor on the selected device.

Value

torch::dataset

Examples


task = tsk("iris")
sepal_ingress = TorchIngressToken(
  features = c("Sepal.Length", "Sepal.Width"),
  batchgetter = batchgetter_num,
  shape = c(NA, 2)
)
petal_ingress = TorchIngressToken(
  features = c("Petal.Length", "Petal.Width"),
  batchgetter = batchgetter_num,
  shape = c(NA, 2)
)
ingress_tokens = list(sepal = sepal_ingress, petal = petal_ingress)

target_batchgetter = function(data) {
  torch_tensor(data = data[[1L]], dtype = torch_float32())$unsqueeze(2)
}
dataset = task_dataset(task, ingress_tokens, target_batchgetter)
batch = dataset$.getbatch(1:10)
batch


Create a Callback Descriptor

Description

Convenience function to create a custom TorchCallback. All arguments that are available in callback_set() are also available here. For more information on how to correctly implement a new callback, see CallbackSet.

Usage

torch_callback(
  id,
  classname = paste0("CallbackSet", capitalize(id)),
  param_set = NULL,
  packages = NULL,
  label = capitalize(id),
  man = NULL,
  on_begin = NULL,
  on_end = NULL,
  on_exit = NULL,
  on_epoch_begin = NULL,
  on_before_valid = NULL,
  on_epoch_end = NULL,
  on_batch_begin = NULL,
  on_batch_end = NULL,
  on_after_backward = NULL,
  on_batch_valid_begin = NULL,
  on_batch_valid_end = NULL,
  on_valid_end = NULL,
  state_dict = NULL,
  load_state_dict = NULL,
  initialize = NULL,
  public = NULL,
  private = NULL,
  active = NULL,
  parent_env = parent.frame(),
  inherit = CallbackSet,
  lock_objects = FALSE
)

Arguments

id

(character(1))
The id for the torch callback.

classname

(character(1))
The class name.

param_set

(ParamSet)
The parameter set. If not present, it is inferred from the $initialize() method.

packages

(character())
The packages the callback depends on. Default is NULL.

label

(character(1))
The label for the torch callback. Defaults to the capitalized id.

man

(character(1))
String in the format [pkg]::[topic] pointing to a manual page for this object. The referenced help page can be opened via method $help(). The default is NULL.

on_begin, on_end, on_epoch_begin, on_before_valid, on_epoch_end, on_batch_begin, on_batch_end, on_after_backward, on_batch_valid_begin, on_batch_valid_end, on_valid_end, on_exit

(function)
Function to execute at the given stage, see section Stages.

state_dict

(⁠function()⁠)
The function that retrieves the state dict from the callback. This is what will be available in the learner after training.

load_state_dict

(⁠function(state_dict)⁠)
Function that loads a callback state.

initialize

(⁠function()⁠)
The initialization method of the callback.

public, private, active

(list())
Additional public, private, and active fields to add to the callback.

parent_env

(environment())
The parent environment for the R6Class.

inherit

(R6ClassGenerator)
From which class to inherit. This class must either be CallbackSet (default) or inherit from it.

lock_objects

(logical(1))
Whether to lock the objects of the resulting R6Class. If FALSE (default), values can be freely assigned to self without declaring them in the class definition.

Value

TorchCallback

Internals

It first creates an R6 class inheriting from CallbackSet (using callback_set()) and then wraps this generator in a TorchCallback that can be passed to a torch learner.

Stages

See Also

Other Callback: TorchCallback, as_torch_callback(), as_torch_callbacks(), callback_set(), mlr3torch_callbacks, mlr_callback_set, mlr_callback_set.checkpoint, mlr_callback_set.progress, mlr_callback_set.tb, mlr_callback_set.unfreeze, mlr_context_torch, t_clbk()

Examples


custom_tcb = torch_callback("custom",
  initialize = function(name) {
    self$name = name
  },
  on_begin = function() {
    cat("Hello", self$name, ", we will train for ", self$ctx$total_epochs, "epochs.\n")
  },
  on_end = function() {
    cat("Training is done.")
  }
)

learner = lrn("classif.torch_featureless",
  batch_size = 16,
  epochs = 1,
  callbacks = custom_tcb,
  cb.custom.name = "Marie",
  device = "cpu"
)
task = tsk("iris")
learner$train(task)