Mojo struct

LayoutTensorIter

@register_passable(trivial) struct LayoutTensorIter[type: DType, layout: Layout, /, *, address_space: AddressSpace = 0, alignment: Int = Int(alignof[type]() if is_nvidia_gpu() else 1), circular: Bool = False, axis: OptionalReg[Int] = OptionalReg(None), layout_bitwidth: Int = Int(bitwidthof[type]()), masked: Bool = False]

Iterates through a memory buffer, constructing a layout tensor at each position.

The returned layout tensor is NOT vectorized. The caller must vectorize it explicitly when vector access is needed.
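For orientation, here is a minimal sketch of constructing and stepping an iterator. The import paths, the implicit conversion of integer arguments to the iterator's unsigned index type, and the `demo` wrapper are assumptions for illustration, not part of this page:

```mojo
from layout import Layout
from layout.layout_tensor import LayoutTensorIter
from memory import UnsafePointer

fn demo():
    # A buffer holding 8 consecutive 4x4 row-major tiles.
    alias tile_layout = Layout.row_major(4, 4)
    var storage = UnsafePointer[Float32].alloc(4 * 4 * 8)

    # Iterate tile by tile; `bound` is the total element count.
    var it = LayoutTensorIter[DType.float32, tile_layout](storage, 4 * 4 * 8)
    var tile = it[]  # LayoutTensor view of the current 4x4 tile
    _ = tile[0, 0]   # element access; vectorize explicitly if needed
    it += 1          # advance to the next tile (no bounds check)

    storage.free()
```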

Aliases

  • uint_type = SIMD[_get_unsigned_type(layout, address_space), 1]: The unsigned integer type used for offsets, strides, and bounds, sized to fit the layout and address space.

Fields

  • ptr (UnsafePointer[SIMD[type, 1], address_space=address_space, alignment=alignment]): Pointer to the underlying memory buffer.
  • offset (SIMD[_get_unsigned_type(layout, address_space), 1]): Current offset into the buffer, in elements.
  • stride (SIMD[_get_unsigned_type(layout, address_space), 1]): Step between consecutive layout tensors, in elements.
  • bound (SIMD[_get_unsigned_type(layout, address_space), 1]): Upper bound of the buffer, in elements.
  • runtime_layout (RuntimeLayout[layout, bitwidth=layout_bitwidth]): Runtime layout used to construct the returned tensors.
  • dimension_bound (SIMD[_get_unsigned_type(layout, address_space), 1]): Bound along the iterated axis, used for masked iteration.
  • idx (SIMD[_get_unsigned_type(layout, address_space), 1]): Current iteration index.

Implemented traits

AnyType, UnknownDestructibility

Methods

__init__

__init__() -> Self

Creates an empty iterator, used as the default value.

__init__(ptr: UnsafePointer[SIMD[type, 1], address_space=address_space, alignment=alignment], bound: SIMD[_get_unsigned_type(layout, address_space), 1], stride: SIMD[_get_unsigned_type(layout, address_space), 1] = SIMD(layout.size()), offset: SIMD[_get_unsigned_type(layout, address_space), 1] = SIMD(0)) -> Self

Creates an iterator over a buffer with the given element bound; the stride between consecutive tensors defaults to the layout's size.

__init__(ptr: UnsafePointer[SIMD[type, 1], address_space=address_space, alignment=alignment], bound: SIMD[_get_unsigned_type(layout, address_space), 1], runtime_layout: RuntimeLayout[layout, bitwidth=bitwidth], stride: SIMD[_get_unsigned_type(layout, address_space), 1] = SIMD(layout.size() if layout.all_dims_known() else -1), offset: SIMD[_get_unsigned_type(layout, address_space), 1] = SIMD(0), dimension_bound: SIMD[_get_unsigned_type(layout, address_space), 1] = SIMD(0), idx: SIMD[_get_unsigned_type(layout, address_space), 1] = SIMD(0)) -> Self

Creates an iterator with an explicit runtime layout, for layouts whose dimensions are not all known at compile time.

__getitem__

__getitem__(self) -> LayoutTensor[type, layout, layout.rank(), address_space=address_space, masked=masked]

Returns the layout tensor at the iterator's current position.
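Because the returned tensor is not vectorized (see the note at the top of this page), vectorizing is the caller's responsibility. A sketch, assuming LayoutTensor exposes a `vectorize` method parameterized by per-dimension widths; the setup names here are hypothetical:

```mojo
from layout import Layout
from layout.layout_tensor import LayoutTensorIter
from memory import UnsafePointer

fn deref_demo():
    alias tile_layout = Layout.row_major(4, 4)
    var storage = UnsafePointer[Float32].alloc(4 * 4)
    var it = LayoutTensorIter[DType.float32, tile_layout](storage, 4 * 4)

    var tile = it[]                    # scalar-element 4x4 view
    var rows = tile.vectorize[1, 4]()  # 4-wide SIMD view along each row
    _ = rows

    storage.free()
```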

__iadd__

__iadd__[T: Intable](mut self, rhs: T)

Increments the iterator.

This function is unsafe: it omits bounds checking for performance. The caller must ensure the index does not go out of bounds.

__iadd__(mut self, rhs: SIMD[_get_unsigned_type(layout, address_space), 1])

Increments the iterator.

This function is unsafe: it omits bounds checking for performance. The caller must ensure the index does not go out of bounds.
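Since neither overload checks bounds, callers typically guard the increment themselves. A sketch using the public `offset`, `stride`, and `bound` fields; the guard condition is an assumption modeled on the `next_unsafe` constraint below:

```mojo
from layout import Layout
from layout.layout_tensor import LayoutTensorIter
from memory import UnsafePointer

fn advance_checked():
    alias tile_layout = Layout.row_major(4, 4)
    var storage = UnsafePointer[Float32].alloc(4 * 4 * 2)
    var it = LayoutTensorIter[DType.float32, tile_layout](storage, 4 * 4 * 2)

    # Only advance while the next position stays within the bound.
    if it.offset + it.stride < it.bound:
        it += 1

    storage.free()
```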

get

get(self) -> LayoutTensor[type, layout, layout.rank(), address_space=address_space, masked=masked]

Returns the layout tensor at the iterator's current position.

next

next[T: Intable](self, rhs: T) -> Self

Returns an iterator advanced by rhs layout tensors.

next(self, rhs: SIMD[_get_unsigned_type(layout, address_space), 1] = SIMD(1)) -> Self

Returns an iterator advanced by rhs layout tensors (one by default).

next_unsafe

next_unsafe(self, rhs: SIMD[_get_unsigned_type(layout, address_space), 1] = SIMD(1)) -> Self

Returns an iterator advanced by rhs layout tensors. This is the unsafe version; the caller must ensure that rhs < bound / stride.
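A sketch contrasting the two advances; unlike `__iadd__`, both return a new iterator and leave `self` unchanged. The setup follows the same hypothetical pattern as the earlier examples:

```mojo
from layout import Layout
from layout.layout_tensor import LayoutTensorIter
from memory import UnsafePointer

fn advance_copies():
    alias tile_layout = Layout.row_major(4, 4)
    var storage = UnsafePointer[Float32].alloc(4 * 4 * 4)
    var it = LayoutTensorIter[DType.float32, tile_layout](storage, 4 * 4 * 4)

    var it1 = it.next()          # one tile ahead; `it` is unchanged
    var it2 = it.next(2)         # two tiles ahead
    var it3 = it.next_unsafe(3)  # caller guarantees 3 < bound / stride
    _ = it1[]
    _ = it2[]
    _ = it3[]

    storage.free()
```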

reshape

reshape[dst_layout: Layout](self) -> LayoutTensorIter[type, dst_layout, address_space=address_space, alignment=alignment, circular=circular, layout_bitwidth=layout_bitwidth, masked=masked]

Reshape the iterator to a new layout.

This method creates a new iterator with a different layout while preserving the underlying data. The new layout must have the same total size as the original.

Constraints:

  • The destination layout must have the same total size as the original.
  • Both layouts must be contiguous.
  • Both layouts must have compile-time known dimensions.

Parameters:

  • dst_layout (Layout): The target layout to reshape to.

Returns:

A new iterator with the specified layout.
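A sketch reshaping a 4x4 iterator into a 2x8 view over the same 16 elements; both layouts are contiguous with compile-time known dimensions, satisfying the constraints above. The setup names are hypothetical:

```mojo
from layout import Layout
from layout.layout_tensor import LayoutTensorIter
from memory import UnsafePointer

fn reshape_demo():
    alias src = Layout.row_major(4, 4)
    alias dst = Layout.row_major(2, 8)  # same total size: 16 elements
    var storage = UnsafePointer[Float32].alloc(16)

    var it = LayoutTensorIter[DType.float32, src](storage, 16)
    var it2 = it.reshape[dst]()  # same underlying data, new layout
    _ = it2[]

    storage.free()
```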

bitcast

bitcast[new_type: DType, *, address_space: AddressSpace = address_space, alignment: Int = alignment](self) -> LayoutTensorIter[new_type, layout, address_space=address_space, alignment=alignment, circular=circular, layout_bitwidth=layout_bitwidth, masked=masked]

Reinterpret the iterator's underlying pointer as a different data type.

This method performs a bitcast operation, allowing you to view the same memory location as a different data type without copying or converting the data.

Parameters:

  • new_type (DType): The target data type to cast to.
  • address_space (AddressSpace): The memory address space for the new iterator (defaults to current).
  • alignment (Int): Memory alignment requirement for the new iterator (defaults to current).

Returns:

A new LayoutTensorIter with the same layout but different data type.
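A sketch reinterpreting a float32 iterator's memory as uint32, e.g. to inspect raw bit patterns. Pairing types of equal bit width keeps the element count consistent with the unchanged layout; that pairing is this example's choice, not a stated requirement:

```mojo
from layout import Layout
from layout.layout_tensor import LayoutTensorIter
from memory import UnsafePointer

fn bitcast_demo():
    alias tile_layout = Layout.row_major(4, 4)
    var storage = UnsafePointer[Float32].alloc(16)

    var it = LayoutTensorIter[DType.float32, tile_layout](storage, 16)
    var bits = it.bitcast[DType.uint32]()  # same memory, new element type
    _ = bits[]

    storage.free()
```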