The past few years have been remarkable for centralized, server-based AI models such as ChatGPT, Google's Gemini, and Microsoft's Copilot. There is no doubt that these models have transformed many fields, but they also have downsides; you have probably heard about ChatGPT's repeated outages.
Centralized, server-based models carry a significant risk of a single point of failure. Times are changing, though: blockchain technology is now reshaping many fields, including artificial intelligence (AI).
The Internet Computer Protocol (ICP), developed by DFINITY, aims to decentralize AI by enabling AI applications to run on fully decentralized cloud infrastructure. What makes ICP unique is that it allows AI models and services to be deployed across a network of independent data centers. This decentralized approach keeps AI operations transparent, censorship-resistant, and less prone to single points of failure.
In this article, we put a Face Recognition DApp built on ICP to a practical test. We will look at how decentralized AI works in practice, with a clear demonstration of each step. We have tried to break the complex technical concepts into an easy-to-follow tutorial, so let's get started.
The entire development and testing process was carried out on a Windows machine, so we set up the Windows Subsystem for Linux (WSL) here, because many ICP development tools and scripts are optimized for Linux environments.
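If you don't already have WSL installed, a typical way to set it up (assuming Windows 10 version 2004 or later, or Windows 11) is to run the following from an elevated PowerShell prompt and then restart the machine:
wsl --install -d Ubuntu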
📥Prerequisites:
A Windows machine with WSL (Ubuntu) set up as described above, plus the command-line tools we install in the next section: DFX, Rust, Node.js, wasi2ic, and wasm-opt.
📥Set Up the Development Environment:
1. Open your WSL terminal and run the following command to install DFX:
sh -ci "$(curl -fsSL https://smartcontracts.org/install.sh)"
👉To confirm the installation:
dfx --version
2. Install Rust by running the following command:
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
source $HOME/.cargo/env
👉To verify the installation, run:
rustc --version
3. Install Node.js by running the following command:
curl -fsSL https://deb.nodesource.com/setup_16.x | sudo -E bash -
sudo apt install -y nodejs
👉To verify the installation, run:
node -v
npm -v
4. Install wasi2ic: first, clone its GitHub repository and install it with Cargo:
git clone https://github.com/wasm-forge/wasi2ic.git
cd wasi2ic
cargo install --path .
👉To check the installation, confirm that Cargo's bin directory is on your PATH and that wasi2ic is available:
echo $PATH
wasi2ic --help
5. Install wasm-opt:
cargo install wasm-opt
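👉To verify the installation (wasm-opt should now be on your Cargo bin path), run:
wasm-opt --version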
📥Clone the Project:
We are using the project's GitHub repository (https://github.com/dfinity/examples) for our testing.
1. Clone the repository and navigate to the face-recognition project:
git clone https://github.com/dfinity/examples.git
cd examples/rust/face-recognition
Note: You can access your Linux subsystem files from Windows by entering \\wsl$ in the File Explorer address bar or via the Windows search feature.
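Alternatively, you can open the current project folder in Windows File Explorer directly from the WSL terminal:
explorer.exe .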
📥Download Models for Face Recognition
- Download the face detection model:
The face detection model will be used to detect faces in an image. Run:
./download-face-detection-model.sh
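👉To confirm the download, you can check that the detection model file (version-RFB-320.onnx, the file the upload script references later) is present:
ls -lh version-RFB-320.onnx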
- Download the face recognition model:
You need to export the model to ONNX format using PyTorch and facenet-pytorch. Start by installing these Python libraries:
pip install torch facenet-pytorch onnx
- Export the ONNX model:
Type python3 in the terminal to open an interactive Python shell, run the following code, and then type exit() to leave the shell:
import torch
import facenet_pytorch
# Load the pretrained InceptionResnetV1 face recognition model in evaluation mode
resnet = facenet_pytorch.InceptionResnetV1(pretrained='vggface2').eval()
# Dummy input matching the model's expected shape: one 3x160x160 image
input = torch.randn(1, 3, 160, 160)
# Export the model to ONNX format
torch.onnx.export(resnet, input, "face-recognition.onnx", verbose=False, opset_version=11)
This will generate the face-recognition.onnx file. Copy it to the root of the face-recognition project, since the upload script expects to find it there.
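For example, assuming you started python3 from your home directory and cloned the examples repository into ~/examples, the copy might look like this (adjust the paths to match your setup):
cp ~/face-recognition.onnx ~/examples/rust/face-recognition/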
📥Build
Now, it’s time to build the project. Run:
dfx start --background
dfx deploy
If everything goes well, you will see the frontend URL in the terminal.
Note: If you run into an error about a missing webpack client, try installing webpack and webpack-cli globally using npm:
sudo npm install -g webpack webpack-cli
Then install the local project dependencies:
npm install
📥Create the Canister
To create the backend canister, run:
dfx canister create backend
You should see a confirmation message in the terminal.
👉Now, you can check the canister backend status:
dfx canister status backend
You should see the canister's status details in the terminal.
📥Chunk Uploading of Models
AI models are typically large, so they can't be embedded directly into the WebAssembly (Wasm) binary of a smart contract. Instead, the models need to be uploaded separately. To handle this, DecideAI developed ic-file-uploader, a tool for uploading models to a canister incrementally.
👉To install the tool, use the following command:
cargo install ic-file-uploader
Once installed, you can upload the models with the upload-models-to-canister.sh script by running ./upload-models-to-canister.sh in the terminal. The script performs the following steps:
- Clears the existing AI models from the canister:
dfx canister call backend clear_face_detection_model_bytes
dfx canister call backend clear_face_recognition_model_bytes
- Uploads the new models incrementally:
ic-file-uploader backend append_face_detection_model_bytes version-RFB-320.onnx
ic-file-uploader backend append_face_recognition_model_bytes face-recognition.onnx
- Finally, the script sets up the uploaded models:
dfx canister call backend setup_models
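As a rough sanity check that the models were actually stored, you can re-run the status command; the canister's memory size should have grown after the upload:
dfx canister status backend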
Now you can interact with the frontend using the URL you received in the terminal when you ran the dfx deploy command (Figure 1).
Upload an image by clicking on the ICP logo, then add the person's name by clicking the "Add person" button to train the model.
The DApp detects the face and automatically marks it with a rectangle. Once you set the name, it displays a success message like the following:
The AI remembers the name you set for the face, so you can test whether it recognizes the same person in different photos. Here, we used several different photos of Nikola Tesla. To upload another photo for recognition, reload the page and click the "Recognize" option. As you can see, it successfully identified Nikola Tesla's face from a photo that differs slightly from the initial one.
Next, an AI-generated photo of Nikola Tesla was submitted to the model, and it accurately recognized the face as well:
This confirms that the ICP face recognition DApp works as expected, since it correctly recognized Nikola Tesla's face in each case. You can try it yourself.
Since this project is meant for testing, its UI and features are limited. If you plan to build a production-ready face recognition dApp on ICP, you can add extra features and services. Below are a few feature ideas with code examples; note that you should adapt and customize the code to your requirements.
📥Here are some tips for you:
- Add User Authentication with Internet Identity
With this feature, only authenticated users can access the DApp. You can integrate Internet Identity by adding a login button to the homepage and, once the user logs in, displaying a personalized dashboard.
How to Add:
👉Install Internet Identity dependencies:
npm install @dfinity/agent @dfinity/auth-client
👉Add the following code to your frontend to enable authentication:
import { AuthClient } from "@dfinity/auth-client";
async function init() {
  const authClient = await AuthClient.create();
  if (await authClient.isAuthenticated()) {
    // Display dashboard or personalized content
  } else {
    authClient.login({
      identityProvider: "https://identity.ic0.app/#authorize",
      onSuccess: () => {
        // Once authenticated, display personalized features
      },
    });
  }
}
init();
👉After logging in, the user can see their history of recognized faces or other personalized data.
- Image Upload with Drag-and-Drop Functionality
Make the image upload experience smoother by allowing users to drag and drop images for face detection.
How to Add:
👉Use HTML5’s drag-and-drop functionality:
<div id="drop-area">
  <p>Drag and drop an image here or click to select</p>
  <input type="file" id="file-input" hidden />
</div>
👉Add JavaScript to handle the drag-and-drop action:
const dropArea = document.getElementById("drop-area");
dropArea.addEventListener("dragover", (event) => {
  event.preventDefault();
});
dropArea.addEventListener("drop", (event) => {
  event.preventDefault();
  const files = event.dataTransfer.files;
  // Process the uploaded image
});
👉Make sure it integrates smoothly with your existing face detection functionality.
- Progress Bar for Face Recognition
Displaying a progress bar while an uploaded image is being processed is an engaging way to inform users that the system is working.
How to Add:
👉Use a simple HTML progress bar:
<div id="progress-bar">
  <div id="progress-fill" style="width: 0%;"></div>
</div>
👉Dynamically update the progress bar as the image is processed:
const progressFill = document.getElementById("progress-fill");
let progress = 0;
const interval = setInterval(() => {
  progress += 10;
  progressFill.style.width = `${progress}%`;
  if (progress === 100) {
    clearInterval(interval);
  }
}, 100); // Simulate progress every 100ms
- Notifications for Face Recognition Results
You can add a feature to provide real-time notifications once the face recognition is complete, either via a modal or toast notification.
How to Add:
👉You can use a lightweight library like Toastr or custom toast notifications.
<div id="notification" class="hidden">Face Recognition Complete!</div>
👉In your JavaScript, show the notification when the backend returns the result:
function showNotification(message) {
  const notification = document.getElementById("notification");
  notification.innerHTML = message;
  notification.classList.remove("hidden");
  setTimeout(() => {
    notification.classList.add("hidden");
  }, 3000); // Hide notification after 3 seconds
}
// Call this after face recognition is done
showNotification("Face detected and recognized!");
📥Final Steps: Rebuild and Deploy
After implementing these new features:
👉Rebuild the project:
dfx build
👉Deploy to the IC mainnet (make sure everything works locally before deploying):
dfx deploy --network ic
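Deploying to mainnet consumes cycles, so before deploying you may want to confirm your cycles wallet balance (this assumes you have already configured a cycles wallet for the ic network):
dfx wallet balance --network ic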
These are just a few examples and ideas for demonstration purposes. You can try them out yourself and let us know your progress in the comments section. There is much more you can do to decentralize AI on ICP.
Conclusion:
We have successfully tested a face recognition DApp on ICP, a genuine attempt at decentralized AI (DeAI). In our testing, the DApp responded quickly and detected faces accurately. ICP's unique infrastructure allowed us to perform a complex task such as facial recognition without relying on centralized systems.
This doesn't just enhance security and privacy; it also shows the potential for decentralized AI applications to evolve rapidly. As ICP continues to develop, the ability to deploy large models and perform AI inference on-chain can open up new possibilities for innovation. We can expect decentralized AI to become a key player in trustworthy and scalable solutions, and developers have new opportunities to build more products and services on ICP.