
Mojo function

unfused_qkv_matmul_ragged_paged_gguf_quantized

unfused_qkv_matmul_ragged_paged_gguf_quantized[
    dtype: DType,
    params: KVCacheStaticParams,
    page_size: Int, //,
    quantization_encoding_q: StringSlice[StaticConstantOrigin],
    quantization_encoding_k: StringSlice[StaticConstantOrigin],
    quantization_encoding_v: StringSlice[StaticConstantOrigin]
](
    hidden_state: LayoutTensor[DType.float32, layout, origin, element_layout=element_layout, layout_int_type=layout_int_type, linear_idx_type=linear_idx_type, masked=masked, alignment=alignment],
    input_row_offsets: LayoutTensor[DType.uint32, layout, origin, element_layout=element_layout, layout_int_type=layout_int_type, linear_idx_type=linear_idx_type, masked=masked, alignment=alignment],
    q_weight: LayoutTensor[DType.uint8, layout, origin, element_layout=element_layout, layout_int_type=layout_int_type, linear_idx_type=linear_idx_type, masked=masked, alignment=alignment],
    k_weight: LayoutTensor[DType.uint8, layout, origin, element_layout=element_layout, layout_int_type=layout_int_type, linear_idx_type=linear_idx_type, masked=masked, alignment=alignment],
    v_weight: LayoutTensor[DType.uint8, layout, origin, element_layout=element_layout, layout_int_type=layout_int_type, linear_idx_type=linear_idx_type, masked=masked, alignment=alignment],
    kv_collection: PagedKVCacheCollection[dtype, params, page_size],
    layer_idx: UInt32,
    output: LayoutTensor[DType.float32, layout, origin, element_layout=element_layout, layout_int_type=layout_int_type, linear_idx_type=linear_idx_type, masked=masked, alignment=alignment],
    ctx: DeviceContextPtr
)

Performs the Q, K, and V projection matmuls on quantized weights, writing the K and V projections into a mutable PagedKVCacheCollection object and the Q projection into the output buffer.

Unlike the unquantized version (kv_matmul_ragged_continuous_batching), this implementation does not concatenate the q, k, and v weights together. Instead, it performs three separate matmuls, which allows the q, k, and v weights to use different quantization encodings.

This is only supported on CPU.
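To make the data flow concrete, the following is a minimal NumPy sketch of what the kernel computes, not the Mojo API: it assumes the three weights have already been dequantized to float32 and uses plain dicts as stand-ins for the layer's paged KV cache, whereas the real kernel reads the GGUF-encoded uint8 weights directly (one encoding per weight) and writes K and V into the PagedKVCacheCollection entry selected by layer_idx.

    import numpy as np

    def unfused_qkv_matmul_sketch(hidden_state, input_row_offsets,
                                  wq, wk, wv, k_cache, v_cache):
        # hidden_state: (sum(seq_lens), num_heads * head_size), float32.
        # wq, wk, wv: projection matrices, already dequantized here
        # (hypothetical; the real kernel consumes the quantized blocks).
        # k_cache, v_cache: dicts standing in for the paged KV cache.

        # Q is one plain matmul over every token; it fills `output`.
        q_out = hidden_state @ wq

        # K and V are two more independent matmuls whose results are
        # scattered into the KV cache per sequence, not returned.
        batch_size = len(input_row_offsets) - 1
        for i in range(batch_size):
            start, end = input_row_offsets[i], input_row_offsets[i + 1]
            tokens = hidden_state[start:end]  # rows for sequence i
            k_cache[i] = tokens @ wk          # hypothetical cache write
            v_cache[i] = tokens @ wv
        return q_out

    # Tiny usage example: 2 sequences of lengths 3 and 1, hidden size 4.
    hidden = np.ones((4, 4), dtype=np.float32)
    offsets = np.array([0, 3, 4], dtype=np.uint32)
    w = np.eye(4, dtype=np.float32)
    k_cache, v_cache = {}, {}
    q = unfused_qkv_matmul_sketch(hidden, offsets, w, w, w, k_cache, v_cache)

Because the three matmuls are independent, q_weight, k_weight, and v_weight can each carry a different GGUF encoding (for example, one in Q4_K and another in Q6_K), which a single fused matmul over one concatenated QKV weight could not accommodate.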

Args:

  • hidden_state (LayoutTensor): Tensor with shape (sum(seq_lens), num_heads * head_size).
  • input_row_offsets (LayoutTensor): Tensor with shape (batch_size + 1,) denoting the start of each sequence along the seq_len dimension; entry i is the row at which sequence i begins, and the final entry is sum(seq_lens). See the example after this list.
  • q_weight (LayoutTensor): Tensor with shape (num_heads * head_size, num_heads * head_size).
  • k_weight (LayoutTensor): Tensor with shape (num_heads * head_size, num_kv_heads * head_size).
  • v_weight (LayoutTensor): Tensor with shape (num_heads * head_size, num_kv_heads * head_size).
  • kv_collection (PagedKVCacheCollection): The Collection object storing KVCache entries.
  • layer_idx (UInt32): The index of the layer being executed. Used to retrieve the KVCache for the given layer from kv_collection.
  • output (LayoutTensor): Tensor with shape (sum(seq_lens), num_heads * head_size). This is the output buffer for the Q matmul.
  • ctx (DeviceContextPtr): The call context pointer, passed by the graph compiler.
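To illustrate the ragged layout that hidden_state and input_row_offsets share, here is a small NumPy example (the sequence lengths are invented for illustration):

    import numpy as np

    # Three sequences of lengths 2, 5, and 3, packed without padding:
    # hidden_state would have sum(seq_lens) = 10 rows.
    input_row_offsets = np.array([0, 2, 7, 10], dtype=np.uint32)

    # Rows for sequence i are
    # hidden_state[input_row_offsets[i]:input_row_offsets[i + 1]];
    # the per-sequence lengths are the successive differences.
    seq_lens = np.diff(input_row_offsets)   # -> [2, 5, 3]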
