Mojo function
stack_allocation_like
```mojo
stack_allocation_like[layout: Layout, dtype: DType, *, address_space: AddressSpace, target_address_space: AddressSpace = AddressSpace(0)](in_tensor: LayoutTensor[dtype, layout, origin, address_space=address_space, element_layout=element_layout, layout_bitwidth=layout_bitwidth, masked=masked, alignment=alignment]) -> LayoutTensor[dtype, layout, MutableAnyOrigin, address_space=target_address_space, masked=masked]
```
Create a stack-allocated tensor with the same layout as an existing tensor.
This function creates a new tensor on the stack with the same layout, data type, and masking properties as the input tensor, but potentially with a different address space. This is useful for creating temporary tensors that match the structure of existing tensors.
Example:
```mojo
from layout import LayoutTensor, Layout
from layout.layout_tensor import stack_allocation_like

var global_tensor = LayoutTensor[
    DType.float32,
    Layout((10, 10)),
    address_space = AddressSpace.GLOBAL,
]()

# The function takes only the input tensor and returns the new
# stack-allocated tensor, so assign the result.
var stack_tensor = stack_allocation_like[
    target_address_space = AddressSpace.GENERIC
](global_tensor)
```
Performance:
- Creates a tensor on the stack, which is typically faster to allocate and
access than heap-allocated memory.
- Stack allocations have automatic lifetime management, reducing memory
management overhead.
- Stack size is limited, so be cautious with large tensor allocations.
Notes:
- The new tensor will have the same layout, data type, and masking properties
as the input tensor.
- The address space can be changed, which is useful for moving data between
different memory regions (e.g., from global to shared memory).
- Stack allocations are automatically freed when they go out of scope.
- The function uses the `stack_allocation()` method of the result tensor type.
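For instance, the address-space change mentioned above can be used inside a GPU kernel to stage data in shared memory. The sketch below assumes a GPU target where `AddressSpace.SHARED` is valid and omits the surrounding launch code:

```mojo
from layout import LayoutTensor, Layout
from layout.layout_tensor import stack_allocation_like

fn kernel(
    input: LayoutTensor[
        DType.float32,
        Layout((16, 16)),
        MutableAnyOrigin,
        address_space = AddressSpace.GLOBAL,
    ]
):
    # Scratch tensor in shared memory with the same layout, dtype,
    # and masking as `input`.
    var scratch = stack_allocation_like[
        target_address_space = AddressSpace.SHARED
    ](input)
    # ... cooperatively copy `input` into `scratch`, then compute ...
```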
Parameters:
- **layout** (`Layout`): The layout of the tensor to allocate.
- **dtype** (`DType`): The data type of the tensor elements.
- **address_space** (`AddressSpace`): The address space of the input tensor.
- **target_address_space** (`AddressSpace`): The address space for the new tensor. Defaults to `GENERIC`.
Args:
- **in_tensor** (`LayoutTensor[dtype, layout, origin, address_space=address_space, element_layout=element_layout, layout_bitwidth=layout_bitwidth, masked=masked, alignment=alignment]`): The input tensor to match the layout of.
Returns:
A new tensor allocated on the stack with the same layout as the input tensor.