Intro to custom ops
Custom operations (custom ops) extend MAX Graph's Python inference APIs with custom Mojo kernels. Whether you need to optimize the performance of specific functions, implement custom algorithms, or create hardware-specific versions of existing operators, custom ops provide the flexibility you need.
The custom ops API provides complete control over MAX Graph while handling kernel integration and optimization pipelines automatically.
Try it now with our custom ops examples on GitHub, or follow the Build custom ops for GPUs tutorial, and let us know what you think.
How it works
A custom op consists of two main components that work together to integrate your custom implementation into the MAX execution pipeline:
- A custom function implementation written in Mojo that defines your computation
- A registration process that connects your function to the graph execution system
Under the hood, custom ops use high-level abstractions that handle memory management, device placement, and optimization. The graph compiler then integrates your custom implementation into the graph's execution flow.
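As a rough sketch of the graph side, a Mojo kernel registered under a given name can be invoked with ops.custom while building a graph. The kernel name add_one and the tensor shape below are hypothetical, and the exact keyword arguments vary across MAX versions (recent releases may also require device information):

```python
from max.dtype import DType
from max.graph import Graph, TensorType, ops

# Describe the input the hypothetical "add_one" Mojo kernel expects.
input_type = TensorType(DType.float32, shape=[4])

with Graph("add_one_graph", input_types=[input_type]) as graph:
    x = graph.inputs[0]
    # ops.custom dispatches to the Mojo kernel by its registered name;
    # out_types tells the graph compiler what the kernel produces.
    result = ops.custom(
        name="add_one",
        values=[x],
        out_types=[input_type],
    )
    graph.output(result[0])
```

When the graph is loaded for execution, the compiler also needs the path to your Mojo kernel sources (passed as custom extensions in recent MAX releases) so it can find and compile the implementation.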
For more information:
- Follow the Build custom ops for GPUs tutorial
- Learn more about GPU programming with Mojo
- Explore the Custom ops GitHub examples
- Reference the MAX Graph custom ops API
Mojo custom ops in PyTorch
You can also use Mojo to write high-performance kernels for existing PyTorch models without migrating your entire workflow to MAX. This approach allows you to replace specific performance bottlenecks in your PyTorch code with optimized Mojo implementations.
Custom operations in PyTorch can now be written using Mojo, letting you experiment with new GPU algorithms in a familiar PyTorch environment. These custom operations are registered using the CustomOpLibrary class in the max.torch package.
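For instance, loading a directory of Mojo kernels might look like the following sketch (the directory name here is hypothetical):

```python
from pathlib import Path

from max.torch import CustomOpLibrary

# Point CustomOpLibrary at a directory containing your Mojo kernel
# sources. The "./operations" path is hypothetical; use wherever
# your .mojo files live.
mojo_kernels = Path("./operations")
ops = CustomOpLibrary(mojo_kernels)
```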
How it works
- Write your kernel implementation in Mojo.
- Register your custom operation using CustomOpLibrary from max.torch.
- Replace specific operations in your existing PyTorch model with your Mojo implementation.
This allows you to keep your existing PyTorch workflows while gaining access to Mojo's performance capabilities for targeted optimizations.
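Putting the steps together, a sketch of wrapping a Mojo kernel as a drop-in replacement for a PyTorch function might look like this. It assumes the ops library from the previous sketch contains a kernel named grayscale (a hypothetical name), and that max.torch ops take their output tensor as the first argument (destination-passing style), as in Modular's published examples; exact calling conventions may vary by release:

```python
import torch

# Assumes `ops` is the CustomOpLibrary from the earlier sketch and that
# it contains a Mojo kernel registered as `grayscale` (hypothetical).
def grayscale(pic: torch.Tensor) -> torch.Tensor:
    # Allocate the output, then pass it as the first argument:
    # max.torch custom ops write results into a caller-provided
    # destination tensor.
    result = pic.new_empty(pic.shape[:-1])
    ops.grayscale(result, pic)
    return result

# Call the Mojo-backed function wherever the model previously used
# an equivalent PyTorch implementation.
image = torch.rand(64, 64, 3)
gray = grayscale(image)
```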
For more information, see the Extending PyTorch with custom operations in Mojo example.