In this tutorial, we explore an innovative approach that combines deep learning with physical laws by leveraging physics-informed neural networks (PINNs) to solve the one-dimensional Burgers' equation. Using PyTorch on Google Colab, we demonstrate how to encode the governing differential equation directly into the neural network's loss function, allowing the model to learn a solution u(x, t) that inherently respects the underlying physics. This technique reduces the dependence on large labeled datasets and offers a fresh perspective on solving complex, nonlinear partial differential equations with modern computational tools.
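For reference, the problem we solve is the 1D viscous Burgers' equation, u_t + u·u_x = ν·u_xx, on x ∈ [-1, 1] and t ∈ [0, 1], with viscosity ν = 0.01/π, initial condition u(x, 0) = -sin(πx), and homogeneous Dirichlet boundary conditions u(-1, t) = u(1, t) = 0. These are exactly the constants set up in the code below.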
!pip install torch matplotlib
First, we install the PyTorch and Matplotlib libraries using pip, ensuring you have the tools needed to build neural networks and visualize the results in your Google Colab environment.
import torch
import torch.nn as nn
import torch.optim as optim
import numpy as np
import matplotlib.pyplot as plt
torch.set_default_dtype(torch.float32)
We import the essential libraries: PyTorch for deep learning, NumPy for numerical operations, and Matplotlib for plotting. We also set the default tensor dtype to float32 for consistent numerical precision throughout the calculations.
x_min, x_max = -1.0, 1.0
t_min, t_max = 0.0, 1.0
nu = 0.01 / np.pi
N_f = 10000
N_0 = 200
N_b = 200
X_f = np.random.rand(N_f, 2)
X_f[:, 0] = X_f[:, 0] * (x_max - x_min) + x_min  # x in [-1, 1]
X_f[:, 1] = X_f[:, 1] * (t_max - t_min) + t_min  # t in [0, 1]
x0 = np.linspace(x_min, x_max, N_0)[:, None]
t0 = np.zeros_like(x0)
u0 = -np.sin(np.pi * x0)
tb = np.linspace(t_min, t_max, N_b)[:, None]
xb_left = np.ones_like(tb) * x_min
xb_right = np.ones_like(tb) * x_max
ub_left = np.zeros_like(tb)
ub_right = np.zeros_like(tb)
X_f = torch.tensor(X_f, dtype=torch.float32, requires_grad=True)
x0 = torch.tensor(x0, dtype=torch.float32)
t0 = torch.tensor(t0, dtype=torch.float32)
u0 = torch.tensor(u0, dtype=torch.float32)
tb = torch.tensor(tb, dtype=torch.float32)
xb_left = torch.tensor(xb_left, dtype=torch.float32)
xb_right = torch.tensor(xb_right, dtype=torch.float32)
ub_left = torch.tensor(ub_left, dtype=torch.float32)
ub_right = torch.tensor(ub_right, dtype=torch.float32)
We set up the simulation domain for Burgers' equation by defining the spatial and temporal bounds, the viscosity, and the number of collocation, initial, and boundary points. We then generate random collocation data and uniformly spaced data for the initial and boundary conditions, and convert everything to PyTorch tensors, enabling gradient computation where needed.
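As a quick sanity check (an optional addition, not part of the original walkthrough), you can print the shapes of the prepared tensors to confirm everything lines up before training:
print(X_f.shape)                 # torch.Size([10000, 2]) -- collocation points (x, t)
print(x0.shape, u0.shape)        # torch.Size([200, 1]) each -- initial condition
print(tb.shape, xb_left.shape)   # torch.Size([200, 1]) each -- boundary points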
class PINN(nn.Module):
    def __init__(self, layers):
        super(PINN, self).__init__()
        self.activation = nn.Tanh()
        layer_list = []
        for i in range(len(layers) - 1):
            layer_list.append(nn.Linear(layers[i], layers[i+1]))
        self.layers = nn.ModuleList(layer_list)

    def forward(self, x):
        # Tanh activation on every layer except the final linear output
        for layer in self.layers[:-1]:
            x = self.activation(layer(x))
        return self.layers[-1](x)

layers = [2, 50, 50, 50, 50, 1]
model = PINN(layers)
print(model)
Here, we define a custom physics-informed neural network (PINN) by extending PyTorch's nn.Module. The network architecture is built dynamically from a list of layer sizes, where each linear layer is followed by a Tanh activation (except for the final output layer). In this example, the network takes a two-dimensional input (x, t), passes it through four hidden layers (each with 50 neurons), and outputs a single value. Finally, the model is instantiated with the specified architecture, and its structure is printed.
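As a quick smoke test (an optional extra, not in the original tutorial), you can pass a dummy batch through the untrained network to confirm the input and output dimensions:
# A batch of 5 random (x, t) pairs should map to 5 scalar outputs
dummy = torch.rand(5, 2)
print(model(dummy).shape)  # expected: torch.Size([5, 1])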
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)
Here, we check whether a CUDA-enabled GPU is available, set the device accordingly, and move the model to that device for accelerated computation during training and inference.
def pde_residual(model, xt):
    # Split the collocation tensor into its x and t columns (slicing keeps the autograd graph)
    x = xt[:, 0:1]
    t = xt[:, 1:2]
    u = model(torch.cat((x, t), dim=1))
    # First and second derivatives via automatic differentiation
    u_x = torch.autograd.grad(u, x, grad_outputs=torch.ones_like(u), create_graph=True, retain_graph=True)[0]
    u_t = torch.autograd.grad(u, t, grad_outputs=torch.ones_like(u), create_graph=True, retain_graph=True)[0]
    u_xx = torch.autograd.grad(u_x, x, grad_outputs=torch.ones_like(u_x), create_graph=True, retain_graph=True)[0]
    # Residual of Burgers' equation: u_t + u * u_x - nu * u_xx should be ~0 for a valid solution
    f = u_t + u * u_x - nu * u_xx
    return f
def loss_func(model):
    # PDE residual loss at the collocation points
    f_pred = pde_residual(model, X_f.to(device))
    loss_f = torch.mean(f_pred**2)
    # Initial condition loss: u(x, 0) should match -sin(pi * x)
    u0_pred = model(torch.cat((x0.to(device), t0.to(device)), dim=1))
    loss_0 = torch.mean((u0_pred - u0.to(device))**2)
    # Boundary condition loss: u(-1, t) = u(1, t) = 0
    u_left_pred = model(torch.cat((xb_left.to(device), tb.to(device)), dim=1))
    u_right_pred = model(torch.cat((xb_right.to(device), tb.to(device)), dim=1))
    loss_b = torch.mean(u_left_pred**2) + torch.mean(u_right_pred**2)
    loss = loss_f + loss_0 + loss_b
    return loss
Next, we compute the Burgers' equation residual at the collocation points by obtaining the required derivatives through automatic differentiation. We then define a loss function that sums the PDE residual loss, the initial condition error, and the boundary condition errors. This combined loss guides the network toward a solution that satisfies both the physical law and the imposed conditions.
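If one loss term dominates during training, a common variation (a sketch under that assumption, not part of this tutorial) is to weight the three terms. The hypothetical lambda coefficients below are illustrative and would need tuning for your problem:
# Hypothetical weighted variant of loss_func; lambda_* values are illustrative, not tuned
def weighted_loss_func(model, lambda_f=1.0, lambda_0=10.0, lambda_b=10.0):
    f_pred = pde_residual(model, X_f.to(device))
    loss_f = torch.mean(f_pred**2)
    u0_pred = model(torch.cat((x0.to(device), t0.to(device)), dim=1))
    loss_0 = torch.mean((u0_pred - u0.to(device))**2)
    u_left_pred = model(torch.cat((xb_left.to(device), tb.to(device)), dim=1))
    u_right_pred = model(torch.cat((xb_right.to(device), tb.to(device)), dim=1))
    loss_b = torch.mean(u_left_pred**2) + torch.mean(u_right_pred**2)
    # Weighting lets you emphasize the initial/boundary fit over the PDE residual (or vice versa)
    return lambda_f * loss_f + lambda_0 * loss_0 + lambda_b * loss_b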
optimizer = optim.Adam(model.parameters(), lr=1e-3)
num_epochs = 5000
for epoch in range(num_epochs):
    optimizer.zero_grad()
    loss = loss_func(model)
    loss.backward()
    optimizer.step()
    if (epoch+1) % 500 == 0:
        print(f'Epoch {epoch+1}/{num_epochs}, Loss: {loss.item():.5e}')
print("Training complete!")
Here, we set up the PINN training loop using the Adam optimizer with a learning rate of 1×10⁻³. Over 5,000 epochs, it repeatedly computes the loss (which includes the PDE residual, initial condition, and boundary condition errors), backpropagates the gradients, and updates the model parameters. Every 500 epochs, it prints the current epoch and loss to monitor progress, and it finally announces when training is complete.
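A common refinement in the PINN literature (not performed in this tutorial) is to follow Adam with a second optimization stage using L-BFGS, which often sharpens the solution. A minimal sketch using PyTorch's built-in optimizer:
# Optional second-stage refinement with L-BFGS (a common PINN practice, not done above)
lbfgs = optim.LBFGS(model.parameters(), lr=1.0, max_iter=500, line_search_fn="strong_wolfe")

def closure():
    lbfgs.zero_grad()
    loss = loss_func(model)
    loss.backward()
    return loss

lbfgs.step(closure)  # runs up to max_iter internal iterations
print(f'Post-LBFGS loss: {loss_func(model).item():.5e}')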
N_x, N_t = 256, 100
x = np.linspace(x_min, x_max, N_x)
t = np.linspace(t_min, t_max, N_t)
X, T = np.meshgrid(x, t)
XT = np.hstack((X.flatten()[:, None], T.flatten()[:, None]))
XT_tensor = torch.tensor(XT, dtype=torch.float32).to(device)
model.eval()
with torch.no_grad():
u_pred = model(XT_tensor).cpu().numpy().reshape(N_t, N_x)
plt.figure(figsize=(8, 5))
plt.contourf(X, T, u_pred, levels=100, cmap='viridis')
plt.colorbar(label="u(x,t)")
plt.xlabel('x')
plt.ylabel('t')
plt.title("Predicted solution u(x,t) via PINN")
plt.show()
Finally, we create a grid of points over the defined spatial (x) and temporal (t) domain, feed these points to the trained model to predict the solution u(x, t), and reshape the output into a 2D array. We then visualize the predicted solution as a contour plot using Matplotlib, complete with a colorbar, axis labels, and a title, letting us observe how the PINN has approximated the dynamics of Burgers' equation.
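To inspect the characteristic steepening of Burgers' solutions more directly (an optional extra plot, not in the original tutorial), you can slice the predicted field at a few fixed times, reusing the 1D x and t arrays from above:
# Optional: plot u(x, t) at a few fixed times to see the solution steepen near x = 0
plt.figure(figsize=(8, 5))
for t_idx in [0, N_t // 2, N_t - 1]:
    plt.plot(x, u_pred[t_idx, :], label=f't = {t[t_idx]:.2f}')
plt.xlabel('x')
plt.ylabel('u(x, t)')
plt.legend()
plt.title('PINN solution at selected times')
plt.show()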
In conclusion, this tutorial has shown how PINNs can be effectively implemented to solve the 1D Burgers' equation by incorporating the physics of the problem into the training process. Through careful construction of the neural network, generation of collocation and boundary data, and automatic differentiation, we obtain a model that learns a solution consistent with both the PDE and the prescribed conditions. This fusion of machine learning and physics paves the way for tackling more challenging problems in computational science and engineering, inviting further exploration of higher-dimensional systems and more sophisticated neural architectures.
Here is the Colab notebook.