Image by author
Rust Burn is a new deep learning framework written entirely in the Rust programming language. The motivation for building a new framework, rather than using existing ones like PyTorch or TensorFlow, is to create a versatile framework that serves a wide range of users, including researchers, machine learning engineers, and low-level software engineers.
The key design principles behind Rust Burn are flexibility, performance, and ease of use.
Flexibility comes from the ability to quickly implement cutting-edge research ideas and conduct experiments.
Performance is achieved through optimizations such as exploiting hardware-specific features, for example Tensor Cores on Nvidia GPUs.
Ease of use comes from simplifying the workflow of training, deploying, and running models in production.
Key Features:
- Flexible and dynamic computational graph
- Thread-safe data structures
- Intuitive abstractions for a simplified development process
- Incredibly fast performance during training and inference
- Supports multiple backend implementations for both CPU and GPU
- Full support for logs, metrics and checkpoints during training
- Small but active developer community
Installing Rust
Burn is a powerful deep learning framework based on the Rust programming language. It requires basic knowledge of Rust, but once you have that mastered, you'll be able to take advantage of all the features Burn has to offer.
To install Rust, follow the official installation guide. You can also consult the GeeksforGeeks guide to installing Rust on Windows and Linux, which includes screenshots.
Installing Burn
To use Rust Burn, you must first have Rust installed on your system. Once Rust is set up correctly, you can create a new Rust application using Cargo, Rust's package manager.
Run the following command in your current directory (the project name my_burn_app here is illustrative):
cargo new my_burn_app
Navigate into this new directory:
cd my_burn_app
Next, add Burn as a dependency, along with the WGPU backend function that enables GPU operations:
cargo add burn --features wgpu
Finally, compile the project to install Burn:
cargo build
This will install the Burn framework along with the WGPU backend. WGPU allows Burn to execute low-level GPU operations.
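For reference, after the cargo add step, the dependencies section of Cargo.toml should contain an entry along these lines (the exact version number depends on the latest Burn release at the time you run the command, so it is omitted here):

```toml
[dependencies]
# cargo add pins the current release; the version shown is a placeholder.
burn = { version = "*", features = ["wgpu"] }
```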
Element-Wise Addition
To run the following code, open src/main.rs and replace its contents with:
use burn::tensor::Tensor;
use burn::backend::WgpuBackend;
// Type alias for the backend to use.
type Backend = WgpuBackend;
fn main() {
    // Create two tensors: the first with explicit values, the second
    // filled with ones and with the same shape as the first.
    let tensor_1 = Tensor::<Backend, 2>::from_data([[2., 3.], [4., 5.]]);
    let tensor_2 = Tensor::ones_like(&tensor_1);

    // Print the element-wise addition (done with the WGPU backend) of the two tensors.
    println!("{}", tensor_1 + tensor_2);
}
In the main function, we create two tensors on the WGPU backend and add them together.
To run the code, execute cargo run in the terminal.
Output:
You should now be able to see the result of the addition.
Tensor {
data: ((3.0, 4.0), (5.0, 6.0)),
shape: (2, 2),
device: BestAvailable,
backend: "wgpu",
kind: "Float",
dtype: "f32",
}
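For intuition, the operation Burn performs here can be sketched in plain Rust with no dependencies, modeling each 2×2 tensor as a nested array (this is only a conceptual sketch, not how Burn is implemented):

```rust
/// Adds two 2x2 matrices element-wise: each output element is the sum
/// of the corresponding elements of the two inputs.
fn add_2x2(a: [[f32; 2]; 2], b: [[f32; 2]; 2]) -> [[f32; 2]; 2] {
    let mut out = [[0.0; 2]; 2];
    for i in 0..2 {
        for j in 0..2 {
            out[i][j] = a[i][j] + b[i][j];
        }
    }
    out
}

fn main() {
    let tensor_1 = [[2.0_f32, 3.0], [4.0, 5.0]];
    // ones_like: same shape as tensor_1, filled with 1.0.
    let tensor_2 = [[1.0_f32; 2]; 2];
    println!("{:?}", add_2x2(tensor_1, tensor_2)); // [[3.0, 4.0], [5.0, 6.0]]
}
```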
Note: The code above is an example from the Burn Book's Getting Started guide.
Position-Wise Feed-Forward Module
Below is an example of how easy the framework is to use. We declare a position-wise feed-forward module and its forward pass in this code snippet.
use burn::nn;
use burn::module::Module;
use burn::tensor::{backend::Backend, Tensor};

#[derive(Module, Debug)]
pub struct PositionWiseFeedForward<B: Backend> {
    linear_inner: nn::Linear<B>,
    linear_outer: nn::Linear<B>,
    dropout: nn::Dropout,
    gelu: nn::GELU,
}

impl<B: Backend> PositionWiseFeedForward<B> {
    pub fn forward<const D: usize>(&self, input: Tensor<B, D>) -> Tensor<B, D> {
        let x = self.linear_inner.forward(input);
        let x = self.gelu.forward(x);
        let x = self.dropout.forward(x);
        self.linear_outer.forward(x)
    }
}
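Conceptually, the forward pass chains linear → GELU → dropout → linear. Setting aside Burn's tensor types, this can be sketched in dependency-free Rust with scalar stand-ins for the layers; the weights below are made up for illustration, and dropout is omitted because it is the identity at inference time:

```rust
/// GELU activation, using the common tanh approximation.
fn gelu(x: f64) -> f64 {
    0.5 * x * (1.0 + ((2.0 / std::f64::consts::PI).sqrt() * (x + 0.044715 * x.powi(3))).tanh())
}

/// A scalar stand-in for a linear layer: y = w * x + b.
struct Linear {
    w: f64,
    b: f64,
}

impl Linear {
    fn forward(&self, x: f64) -> f64 {
        self.w * x + self.b
    }
}

struct PositionWiseFeedForward {
    linear_inner: Linear,
    linear_outer: Linear,
}

impl PositionWiseFeedForward {
    fn forward(&self, input: f64) -> f64 {
        let x = self.linear_inner.forward(input);
        let x = gelu(x);
        // Dropout is the identity at inference time, so it is skipped here.
        self.linear_outer.forward(x)
    }
}

fn main() {
    // Illustrative weights, not learned parameters.
    let ff = PositionWiseFeedForward {
        linear_inner: Linear { w: 2.0, b: 0.0 },
        linear_outer: Linear { w: 1.0, b: 0.5 },
    };
    println!("{}", ff.forward(0.0)); // gelu(0.0) = 0.0, so the output is 0.5
}
```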
The code above is from Burn's GitHub repository.
Example projects
To get more examples and run them, clone the https://github.com/burn-rs/burn repository and explore the example projects it contains.
Pre-trained models
To build your AI application, you can use the following pre-trained models and fine-tune them on your own dataset.
Rust Burn represents an interesting new option in the landscape of deep learning frameworks. If you’re already a Rust developer, you can take advantage of the speed, security, and concurrency of Rust to push the boundaries of what’s possible in deep learning research and production. Burn aims to find the right compromises in flexibility, performance, and usability to create an exceptionally versatile framework suitable for diverse use cases.
Although still in its early stages, Burn shows promise in addressing the weaknesses of existing frameworks and meeting the needs of diverse professionals in the field. As the framework matures and the community around it grows, it has the potential to become a production-ready framework on par with established options. Its new design and choice of language offer new possibilities for the deep learning community.
Resources
Abid Ali Awan (@1abidaliawan) is a certified professional data scientist who loves building machine learning models. Currently, he focuses on content creation and writing technical blogs on data science and machine learning technologies. Abid has a Master's degree in Technology Management and a Bachelor's degree in Telecommunications Engineering. His vision is to build an artificial intelligence product using a graph neural network for students struggling with mental illness.