Mojo function
min
min(src: Buffer[type, size, address_space=address_space, origin=origin]) -> SIMD[$0, 1]
Computes the min element in a buffer.
Args:

- src (Buffer[type, size, address_space=address_space, origin=origin]): The buffer.
Returns:
The minimum of the buffer elements.
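A minimal sketch of the buffer overload. The module paths (`algorithm.reduction`, `buffer`, `memory`) and the exact `Buffer` constructor are assumptions based on recent Mojo standard library layouts and may differ in your release:

```mojo
from algorithm.reduction import min
from buffer import Buffer
from memory import stack_allocation

fn buffer_min_example():
    # Stack-allocate storage for 4 float32 values and wrap it in a Buffer.
    var data = stack_allocation[4, DType.float32]()
    var buf = Buffer[DType.float32, 4](data)
    buf[0] = 3.0
    buf[1] = -1.0
    buf[2] = 7.0
    buf[3] = 0.5
    # Reduce the whole buffer to its smallest element (-1.0 here).
    var smallest = min(buf)
```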
min[reduce_axis: Int](src: NDBuffer[type, rank, shape, strides, alignment=alignment, address_space=address_space, exclusive=exclusive], dst: NDBuffer[type, rank, shape])
Computes the min across reduce_axis of an NDBuffer.
Parameters:

- reduce_axis (Int): The axis to reduce across.

Args:

- src (NDBuffer[type, rank, shape, strides, alignment=alignment, address_space=address_space, exclusive=exclusive]): The input buffer.
- dst (NDBuffer[type, rank, shape]): The output buffer.
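A sketch of the axis-reduction overload, reducing a 2x3 buffer across axis 1 into a length-2 output. The import paths, the `DimList` shape parameter, and element-wise `NDBuffer` indexing are assumptions about the current standard library and may need adjusting:

```mojo
from algorithm.reduction import min
from buffer import NDBuffer
from buffer.dimlist import DimList
from memory import stack_allocation

fn axis_min_example():
    # 2x3 input filled with 0..5 row by row.
    var in_data = stack_allocation[6, DType.float32]()
    var src = NDBuffer[DType.float32, 2, DimList(2, 3)](in_data)
    for i in range(2):
        for j in range(3):
            src[i, j] = Float32(i * 3 + j)

    # Output holds one min per row after reducing across axis 1.
    var out_data = stack_allocation[2, DType.float32]()
    var dst = NDBuffer[DType.float32, 1, DimList(2)](out_data)

    # Row-wise min: for this fill pattern, dst holds [0.0, 3.0].
    min[reduce_axis=1](src, dst)
```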
min[: origin.set, : origin.set, //, type: DType, input_fn: fn[Int, Int](IndexList[$1]) capturing -> SIMD[$1|2, $0], output_fn: fn[Int, Int](IndexList[$1], SIMD[$1|2, $0]) capturing -> None, /, single_thread_blocking_override: Bool = False, target: StringLiteral = "cpu"](input_shape: IndexList[size], reduce_dim: Int, context: MojoCallContextPtr = MojoCallContextPtr())
Computes the min across the input and output shape.
This performs the min computation on the domain specified by input_shape, loading the inputs using the input_fn. The results are stored using the output_fn.
Parameters:

- type (DType): The type of the input and output.
- input_fn (fn[Int, Int](IndexList[$1]) capturing -> SIMD[$1|2, $0]): The function to load the input.
- output_fn (fn[Int, Int](IndexList[$1], SIMD[$1|2, $0]) capturing -> None): The function to store the output.
- single_thread_blocking_override (Bool): If True, the operation runs synchronously on a single thread.
- target (StringLiteral): The target to run on.
Args:

- input_shape (IndexList[size]): The input shape.
- reduce_dim (Int): The axis to perform the min on.
- context (MojoCallContextPtr): The pointer to DeviceContext.
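A sketch of the functional overload, which reduces over a logical domain without a materialized input buffer: `input_fn` synthesizes each value and `output_fn` receives the result. The closure signatures (width and rank as `Int` parameters), the `@parameter` capture style, and the import paths are assumptions inferred from the rendered signature above, not a verified API:

```mojo
from algorithm.reduction import min
from utils.index import IndexList

fn functional_min_example():
    # A 1-D domain of 8 elements; values come from input_fn rather
    # than from memory.
    alias size = 8
    var result = Float32(0)

    @parameter
    fn input_fn[width: Int, rank: Int](idx: IndexList[rank]) -> SIMD[DType.float32, width]:
        # Synthesize the value at idx instead of loading from a buffer.
        return SIMD[DType.float32, width](Float32(idx[0]))

    @parameter
    fn output_fn[width: Int, rank: Int](idx: IndexList[rank], val: SIMD[DType.float32, width]):
        # Capture the reduced scalar into the enclosing scope.
        result = val[0]

    # Reduce across dimension 0 of the 1-D input shape.
    min[DType.float32, input_fn, output_fn](IndexList[1](size), 0)
```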