Building Privacy-Preserving Neural Networks with LEO and zkSNARKs

Dmytriiev Petro
3 min read · Jul 31, 2023


Introduction

Artificial intelligence (AI) has revolutionized various industries, solving complex tasks that were once exclusive to human intelligence. Neural networks, in particular, have played a pivotal role in AI breakthroughs, enabling the training and inference of sophisticated models. However, the data-intensive nature of AI systems raises privacy concerns, especially during training, where sensitive data must be protected. In this article, we explore running inference of multilayer perceptron neural networks in zkSNARKs using the LEO programming language to ensure data privacy during computations.

AI Workflow: Training vs. Inference

The AI workflow involves two primary phases: training and inference. During training, an AI model learns from a vast amount of training data. The sensitive nature of this data requires robust protection. On the other hand, inference involves using the trained model to make decisions based on input features without revealing sensitive information.

Implementing Inference Computation

Neural networks are at the core of AI systems, transforming input values into output predictions. Each neuron in a neural network has an activation function, a set of weights, and a bias parameter. We use the ReLU (Rectified Linear Unit) activation function; however, since LEO operates on integers only, non-integer weights, biases, and inputs must be encoded as fixed-point integer representations.
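Before worrying about integer encodings, it helps to see what a single neuron computes. A minimal Python sketch (the article's actual implementation is in LEO; function names here are illustrative):

```python
def relu(x):
    """Rectified Linear Unit: pass positive values through, clamp negatives to zero."""
    return max(x, 0)

def neuron(inputs, weights, bias):
    """One neuron: weighted sum of the inputs plus a bias, passed through ReLU."""
    weighted_sum = sum(w * x for w, x in zip(weights, inputs)) + bias
    return relu(weighted_sum)
```

For example, `neuron([1, 2], [3, -1], 0)` computes `relu(3*1 + (-1)*2 + 0) = 1`.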

Leveraging Fixed-Point Numbers

zkSNARK-based languages like LEO do not natively support non-integer numbers, but we can work around this limitation with fixed-point numbers. In fixed-point representation, every fractional value is stored as an integer multiplied by a constant scale factor, which lets us compute with fractions accurately enough to avoid accumulating errors in deeper neural networks.
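The mechanics of fixed-point arithmetic can be sketched in a few lines of Python. The scale factor below (2^16) is an illustrative choice, not one specified by the article; the key subtlety is that multiplying two scaled values doubles the scale, so the product must be divided back down once:

```python
SCALE = 1 << 16  # illustrative scale factor: values are stored as round(x * 2^16)

def to_fixed(x: float) -> int:
    """Encode a real number as a scaled integer."""
    return round(x * SCALE)

def from_fixed(n: int) -> float:
    """Decode a scaled integer back to a real number."""
    return n / SCALE

def fixed_mul(a: int, b: int) -> int:
    """Multiply two fixed-point values.

    The raw product carries the scale factor twice (a*b ~ x*y*SCALE^2),
    so we divide by SCALE once to renormalize. Integer division truncates,
    which is the source of the small rounding errors fixed-point must manage.
    """
    return (a * b) // SCALE
```

Addition and subtraction need no rescaling, since both operands already carry a single factor of `SCALE`.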

Implementing Neural Networks in LEO

To implement a neural network in LEO, we define the network architecture, weights, biases, and input parameters. The program computes and outputs the neural network’s final result using rectified linear activation functions.
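What such a LEO program computes can be mirrored in Python: a forward pass over fully connected layers, with all values held as fixed-point integers. This is a sketch of the computation, not the article's LEO code; the scale factor and helper names are assumptions carried over from the fixed-point discussion above:

```python
SCALE = 1 << 16  # illustrative fixed-point scale factor

def fmul(a: int, b: int) -> int:
    """Fixed-point multiply: renormalize the doubled scale with one division."""
    return (a * b) // SCALE

def relu(x: int) -> int:
    return max(x, 0)

def layer(inputs, weights, biases):
    """One fully connected layer with ReLU, everything in fixed-point integers."""
    return [relu(sum(fmul(w, x) for w, x in zip(row, inputs)) + b)
            for row, b in zip(weights, biases)]

def mlp(inputs, layers):
    """Feed the input vector through each (weights, biases) pair in turn."""
    for weights, biases in layers:
        inputs = layer(inputs, weights, biases)
    return inputs
```

A LEO implementation unrolls exactly this loop into straight-line arithmetic over integers, since the network architecture is fixed at compile time.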

Python Script for Neural Network Code Generation

While manually coding neural networks can be tedious and limiting, we can automate this process using Python. By creating a Python script, we can generate LEO code for arbitrary neural network architectures, making it easier to handle deeper networks.
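A code generator of this kind can be quite small. The sketch below emits a LEO-like transition for a single ReLU neuron; the exact syntax, type choices (`i64`), and function names are illustrative assumptions, not copied from the article's actual script, and should be adapted to the LEO version in use:

```python
def generate_leo_neuron(n_inputs: int) -> str:
    """Emit LEO-like source for one ReLU neuron with n_inputs inputs.

    The generated syntax is a plausible sketch of a LEO transition;
    it is not guaranteed to compile against any particular LEO release.
    """
    params = ", ".join(
        [f"x{i}: i64" for i in range(n_inputs)]    # input features
        + [f"w{i}: i64" for i in range(n_inputs)]  # weights
        + ["b: i64"]                               # bias
    )
    terms = " + ".join(f"x{i} * w{i}" for i in range(n_inputs))
    return (
        f"transition neuron({params}) -> i64 {{\n"
        f"    let s: i64 = {terms} + b;\n"
        f"    return s > 0i64 ? s : 0i64;\n"
        f"}}\n"
    )
```

Extending the generator to whole layers and stacked hidden layers is a matter of looping over the architecture description and emitting one such block per neuron.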

Evaluating Scalability

Using the Python script, we can analyze the scalability of the neural network implementation in LEO. As we add hidden layers, the number of circuit constraints grows linearly with network depth. This linear scaling is promising for building deep neural network-based applications in LEO while preserving privacy throughout the computations.

Conclusion

The integration of zkSNARKs, LEO programming language, and fixed-point numbers empowers developers to build privacy-preserving neural networks. By running inference in zkSNARKs, sensitive data remains protected while achieving AI’s potential in various industries. The automated code generation capabilities through Python further simplify the implementation process, enabling developers to explore deeper neural network architectures. The future of privacy-enhanced AI lies within the realm of zkSNARKs and the LEO programming language.
