Mojo function
max_pool
max_pool(input: Symbol, filter_shape: Tuple[Int, Int], stride: Tuple[Int, Int] = (1, 1), dilation: Tuple[Int, Int] = (1, 1), padding: Tuple[Int, Int, Int, Int] = (0, 0, 0, 0)) -> Symbol
Computes max pooling with the given filter shape, strides, and dilations.
For now, the op only supports 2D max pooling, so the input must be 4D, with the following layout assumption:
- input has layout NHWC, i.e., (batch_size, height, width, in_channels)
All hyperparameters (i.e. strides, dilations, padding) must be of rank 1 or unranked. If the input has static rank, all hyperparameters with static shape must have size input_rank - 2, except padding, which must have size 2 * (input_rank - 2). Individual elements of the hyperparameters apply to the corresponding dimensions of the input (after ignoring the batch and channel dimensions), with padding representing a before/after pair for each axis. The padding values are expected to take the form (pad_dim1_before, pad_dim1_after, pad_dim2_before, pad_dim2_after, ...). In 2D pooling, dim1 represents H and dim2 represents W.
This op currently only supports strides and dilations on the filter.
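For instance, to pad H by one element and W by two elements on each side before pooling, the padding tuple is ordered as (pad_H_before, pad_H_after, pad_W_before, pad_W_after). A minimal sketch follows; the graph construction around the call is assumed for illustration and is not part of this signature:

```mojo
from max.graph import Graph, TensorType, ops

def main():
    # Hypothetical NHWC input: batch=1, height=32, width=32, channels=8.
    var graph = Graph(TensorType(DType.float32, 1, 32, 32, 8))

    # 3x3 max pooling; H is padded by 1 before/after and W by 2 before/after,
    # while stride and dilation keep their defaults of (1, 1).
    var pooled = ops.max_pool(
        graph[0], filter_shape=(3, 3), padding=(1, 1, 2, 2)
    )
    graph.output(pooled)
```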
Args:
- input (Symbol): The input tensor to perform the pooling upon.
- filter_shape (Tuple[Int, Int]): The shape of the pooling filter.
- stride (Tuple[Int, Int]): The stride of the pooling operation.
- dilation (Tuple[Int, Int]): The spacing between the kernel points.
- padding (Tuple[Int, Int, Int, Int]): The amount of padding applied to the input.
Returns:
A symbolic tensor value with the pooling applied.
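The following is a minimal usage sketch with the default dilation and padding. It assumes the Graph and TensorType constructors from max.graph used in the MAX Graph examples; exact construction details may differ between releases.

```mojo
from max.graph import Graph, TensorType, ops

def main():
    # Example NHWC input: batch=1, height=28, width=28, channels=16.
    var graph = Graph(TensorType(DType.float32, 1, 28, 28, 16))

    # 2x2 max pooling with stride 2; dilation and padding keep their defaults
    # of (1, 1) and (0, 0, 0, 0), so the 28x28 spatial dims become 14x14.
    var pooled = ops.max_pool(graph[0], filter_shape=(2, 2), stride=(2, 2))

    graph.output(pooled)
```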