The Polytune MPC Engine
Secure Multi-Party Computation (MPC) allows two or more parties to interactively perform a computation on their private inputs, without revealing any additional information about their inputs to each other, other than what the result of the computation reveals.
Our aim at SINE is to make advanced technology, such as MPC, available to as many companies as possible. We are especially keen to apply MPC for the exchange of sustainability data.
Polytune, our MPC engine, aims at fulfilling our vision by providing an easy-to-use framework to apply this technology wherever there is a need to share data privately, without a third-party trustee.
Polytune implements the multi-party [WRK17b] protocol and will include optimizations from [YWZ20]. This protocol achieves the strongest security guarantee an MPC protocol can offer: it is secure against malicious adversaries controlling up to all but one of the parties. Even if the corrupt parties actively deviate from the protocol and collude to learn the honest party's input, that input remains private.
⚠️ Note: Our MPC engine is being actively developed, so expect breaking changes.
Polytune is directly compatible with Garble, our programming language that allows us to describe computations at a high level and compile them to the Boolean circuit format required by the [WRK17b] MPC protocol. Learn more about Garble: Garble Documentation.
You can see examples of computations described in Garble, e.g., `benches/.join.garble.rs` or `examples/sql-integration/.example.garble.rs`.
MPC Protocol
The [WRK17] protocol is an MPC protocol designed to allow collaborative computation between multiple parties while maintaining strong security guarantees. It enables multiple parties to jointly compute a function over their private inputs without revealing those inputs to each other.
The WRK17 protocol is based on garbled circuits and preprocessing-based MPC, allowing efficient execution in dishonest-majority settings. In these settings, the privacy of the inputs is guaranteed even when up to all but one of the parties can be corrupt and collude to learn the honest parties' inputs.
WRK17 achieves high performance through:
- Preprocessing Phase: Correlated randomness can be generated ahead of time to speed up computation (either using an interactive preprocessing protocol or a so-called trusted dealer).
- Efficient Online Phase: The computation can be efficiently executed using the preprocessed data.
- Scalability: Designed to handle a large number of parties.
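To build intuition for why the preprocessing phase pays off, the following sketch shows the classic Beaver-triple trick over two-party XOR shares: given a precomputed shared triple (a, b, c) with c = a & b, evaluating an AND gate in the online phase only requires opening two masked bits. This is a didactic toy over plain XOR shares, not Polytune's actual authenticated-triple implementation:

```rust
// Didactic sketch of Beaver's trick over two-party XOR shares; NOT Polytune's
// authenticated-triple implementation. A secret bit s is split into shares
// (s0, s1) with s = s0 ^ s1.
fn share(secret: bool, rand: bool) -> (bool, bool) {
    (rand, secret ^ rand)
}

// Online phase of an AND gate: given a preprocessed triple (a, b, c) with
// c = a & b, the parties open d = x ^ a and e = y ^ b. Party i then computes
// its share of z = x & y locally, with only party 0 adding the public d & e.
fn and_gate_share(i: usize, d: bool, e: bool, a_i: bool, b_i: bool, c_i: bool) -> bool {
    let mut z = c_i ^ (d & b_i) ^ (e & a_i);
    if i == 0 {
        z ^= d & e;
    }
    z
}
```

Because the triple is generated ahead of time, the online phase needs only cheap XORs, ANDs with public bits, and two openings per AND gate.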
Our implementation of WRK17 provides the `mpc` function for executing MPC computations:

```rust
pub async fn mpc(
    channel: &impl Channel,
    circuit: &Circuit,
    inputs: &[bool],
    p_fpre: Preprocessor,
    p_eval: usize,
    p_own: usize,
    p_out: &[usize],
) -> Result<Vec<bool>, Error>
```
Let's look at the parameters in detail:

- `channel`: The communication channel for sending/receiving messages.
- `circuit`: The Boolean circuit representing the computation to be securely evaluated.
- `inputs`: The party's private input bits.
- `p_fpre`: Whether to run the preprocessing `Untrusted` or use a `TrustedDealer` (more efficient).
- `p_eval`: The index of the party responsible for evaluating the circuit.
- `p_own`: The index of the current party executing the protocol.
- `p_out`: The indices of the parties who receive the output.
Usage Scenario: This is low-level functionality: both inputs and outputs are vectors of bits. The `mpc` function is used when each party participates in an actual MPC execution, but it is usually accompanied by higher-level functions that translate data structures to/from their bit-level representations. We provide numerous example usages in the `examples` directory, and a simulation example, `simulate_mpc`, in the `tests` directory.
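As an illustration of what such a bit-level translation can look like, here is a hypothetical pair of helpers that encode a `u32` as bits and back. The little-endian bit order is an assumption for the sake of the example and does not necessarily match Polytune's actual encoding:

```rust
// Hypothetical helpers illustrating the bit-level representation that
// higher-level wrappers around `mpc` typically provide.
fn u32_to_bits(x: u32) -> Vec<bool> {
    // Least significant bit first (an assumption for this example)
    (0..32).map(|i| (x >> i) & 1 == 1).collect()
}

fn bits_to_u32(bits: &[bool]) -> u32 {
    bits.iter()
        .enumerate()
        .fold(0u32, |acc, (i, &b)| acc | ((b as u32) << i))
}
```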
Customizable Communication Channels
The `Channel` trait in our MPC engine provides a flexible and extensible abstraction for message-passing between parties. It allows communication to be implemented in various ways, enabling users to choose between platform-specific implementations. Polytune is deliberately communication-agnostic and offers the following features:

- Customizable Transport: Implement the `Channel` trait using any transport mechanism, e.g., HTTP, WebSockets, in-memory queues, or custom networking protocols.
- Serialization-Aware: The trait ensures that messages can be efficiently serialized.
We provide example implementations for:
- Rust sync channels using `std::sync::mpsc`
- HTTP channels for distributed server deployments
- WebAssembly-compatible HTTP channels for clients
- Peer-to-peer channels
How to Implement Your Own Channel
- Define a Struct: Implement your own channel struct, ensuring it manages communication between multiple parties.
- Implement the `Channel` Trait: Define the required methods (`send_bytes_to`, `recv_bytes_from`) based on your chosen communication mechanism.
- Handle Errors Gracefully: Ensure robust error handling for message sending and receiving.

That's it! You can create a custom `Channel` implementation that integrates seamlessly with Polytune, adapting it to different transport mechanisms such as network sockets or async channels.
Implementation Requirements
When implementing the `Channel` trait, you need to:

- Define the error types for sending and receiving operations
- Implement the sending mechanism through `send_bytes_to`
- Implement the receiving mechanism through `recv_bytes_from`
```rust
trait Channel {
    type SendError;
    type RecvError;

    async fn send_bytes_to(
        &self,
        p: usize,
        msg: Vec<u8>,
        phase: &str,
    ) -> Result<(), Self::SendError>;

    async fn recv_bytes_from(
        &self,
        p: usize,
        phase: &str,
    ) -> Result<Vec<u8>, Self::RecvError>;
}
```
Tips for Custom Implementations
- Channel Parameters:
  - `p`: Index of the target party for send/receive
  - `phase`: Phase of the protocol in which the message is sent
  - `msg`: The message sent to the target party (only in `send_bytes_to`)
- Connection Management:
  - Consider connection setup/teardown if needed
  - Ensure proper resource cleanup
- Security Considerations:
  - Add encryption if transmitting over insecure channels
  - Implement authentication mechanisms if needed
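Putting the steps above together, here is a minimal sketch of an in-memory two-party channel built on `std::sync::mpsc`. It is deliberately simplified: the real `Channel` trait is async, party-indexed, and carries phase information, while this toy only mirrors the send/recv structure:

```rust
use std::sync::mpsc::{channel, Receiver, Sender};

// Minimal in-memory sketch of a two-party channel, following the steps above.
// The real `Channel` trait is async, party-indexed, and carries phase info;
// this simplified synchronous variant only mirrors the send/recv structure.
struct InMemoryChannel {
    to_peer: Sender<Vec<u8>>,
    from_peer: Receiver<Vec<u8>>,
}

impl InMemoryChannel {
    // Step 1: construct a pair of connected endpoints for two parties.
    fn pair() -> (Self, Self) {
        let (s01, r01) = channel(); // party 0 -> party 1
        let (s10, r10) = channel(); // party 1 -> party 0
        (
            InMemoryChannel { to_peer: s01, from_peer: r10 },
            InMemoryChannel { to_peer: s10, from_peer: r01 },
        )
    }

    // Step 2: implement sending and receiving of raw bytes.
    fn send_bytes(&self, msg: Vec<u8>) -> Result<(), String> {
        // Step 3: map transport errors into the channel's error type.
        self.to_peer.send(msg).map_err(|e| e.to_string())
    }

    fn recv_bytes(&self) -> Result<Vec<u8>, String> {
        self.from_peer.recv().map_err(|e| e.to_string())
    }
}
```

The same structure carries over to any other transport: only the bodies of the send/recv methods and the error mapping change.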
HTTP Multi-Server Channels
The `HttpChannel` enables multi-party computation (MPC) over a network by sending messages between parties using HTTP requests. Each party runs a server to receive messages and a client to send them.
- Suitable for distributed environments where parties communicate over a network.
- Ideal when parties run on separate servers and need a simple, HTTP-based transport layer.
How It Works
- Each party starts an HTTP server using `axum`.
- Messages are sent via HTTP `POST` requests using `reqwest`.
- Messages are received through an HTTP endpoint (`/msg/:from`) and forwarded to an async channel.
Example Implementation: HTTP Channel
The following example shows how to implement the `Channel` trait using HTTP communication:

```rust
struct HttpChannel {
    urls: Vec<Url>,
    party: usize,
    recv: Vec<Mutex<Receiver<Vec<u8>>>>,
}

impl HttpChannel {
    async fn new(urls: Vec<Url>, party: usize) -> Result<Self, Error> {
        let port = urls[party].port().expect("All URLs must specify a port");
        let recv = serve(port, urls.len()).await?;
        Ok(Self { urls, party, recv })
    }
}

impl Channel for HttpChannel {
    type SendError = anyhow::Error;
    type RecvError = anyhow::Error;

    async fn send_bytes_to(
        &self,
        p: usize,
        msg: Vec<u8>,
        _info: SendInfo,
    ) -> Result<(), Self::SendError> {
        let client = reqwest::Client::new();
        let url = format!("{}msg/{}", self.urls[p], self.party);
        loop {
            let Ok(resp) = client.post(&url).body(msg.clone()).send().await else {
                println!("Could not reach party {p} at {url}...");
                sleep(Duration::from_millis(200)).await;
                continue;
            };
            match resp.status() {
                StatusCode::OK => return Ok(()),
                StatusCode::NOT_FOUND => {
                    println!("Could not reach party {p} at {url}...");
                    sleep(Duration::from_millis(200)).await;
                }
                status => anyhow::bail!("Unexpected status code: {status}"),
            }
        }
    }

    async fn recv_bytes_from(&self, p: usize, _info: RecvInfo) -> Result<Vec<u8>, Self::RecvError> {
        let mut r = self.recv[p].lock().await;
        Ok(timeout(Duration::from_secs(1), r.recv())
            .await
            .context("recv_bytes_from({p})")?
            .unwrap_or_default())
    }
}
```
HTTP Single-Server Channels
The `PollingHttpChannel` provides an implementation of the `Channel` trait that uses a centralized HTTP server to relay messages between participants in a Multi-Party Computation (MPC) system.
Key Differences from HTTP Multi-Server Channels
Unlike the previous `HttpChannel` implementation, which establishes direct connections between parties, `PollingHttpChannel`:
- Uses a central relay server - All communication passes through a dedicated server
- Employs session management - Supports multiple concurrent MPC sessions on the same server
- Uses a polling mechanism - Periodically checks for messages rather than maintaining open connections
- Has explicit participation tracking - Parties must join a session before communication begins
Implementation Highlights
```rust
pub(crate) struct PollingHttpChannel {
    pub(crate) url: String,
    pub(crate) session: String,
    pub(crate) party: usize,
    pub(crate) client: reqwest::Client,
}

impl Channel for PollingHttpChannel {
    type SendError = HttpChannelError;
    type RecvError = HttpChannelError;

    async fn send_bytes_to(
        &self,
        p: usize,
        msg: Vec<u8>,
        _info: SendInfo,
    ) -> Result<(), HttpChannelError> {
        let url = format!("{}/send/{}/{}/{}", self.url, self.session, self.party, p);
        let resp: reqwest::Response = self.client.post(url).body(msg).send().await?;
        if resp.status().is_success() {
            Ok(())
        } else {
            Err(HttpChannelError::UnexpectedStatusCode(resp.status()))
        }
    }

    async fn recv_bytes_from(
        &self,
        p: usize,
        _info: RecvInfo,
    ) -> Result<Vec<u8>, HttpChannelError> {
        let url = format!("{}/recv/{}/{}/{}", self.url, self.session, p, self.party);
        let mut attempts = 0;
        loop {
            let resp = self.client.post(&url).send().await?;
            if resp.status() == StatusCode::BAD_REQUEST {
                attempts += 1;
                if attempts >= 10 {
                    return Err(HttpChannelError::Timeout);
                }
                sleep(Duration::from_millis(200)).await;
                continue;
            }
            if !resp.status().is_success() {
                return Err(HttpChannelError::UnexpectedStatusCode(resp.status()));
            }
            let bytes: Vec<u8> = resp.bytes().await?.into();
            return Ok(bytes);
        }
    }
}
```
When to Use PollingHttpChannel
This implementation is ideal for:
- NAT/Firewall Traversal - When direct connections between parties aren't possible due to network limitations
- Multiple Independent Sessions - When you need to run multiple MPC computations concurrently
- Dynamic Participant Management - When participants may join/leave at different times
- Simplified Deployment - When you want to avoid configuring direct connections between all parties
Usage Scenario
The typical usage pattern involves three roles:
- Relay Server - A central server that routes messages between parties
- Trusted Dealer - Optional pre-computation role that generates correlated randomness
- Computing Parties - Participants that contribute inputs and receive results
Basic Usage Example
```shell
# Start the relay server
$ polytune serve

# Initialize as trusted dealer (optional)
$ polytune pre http://server-address --session=my-session --parties=3

# Join as a computing party
$ polytune party http://server-address --session=my-session --program=my-program.garble --party=0 --input="123u32"
```
Implementation Notes
- Session Management - Each computation is identified by a unique session string
- Polling Mechanism - Uses retries with backoff for message retrieval
- Participant Coordination - Waits for all parties to join before computation begins
- Error Handling - Custom error types for timeouts and connection issues
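The retry-with-backoff idea behind the polling mechanism can be sketched as follows. Here `poll` stands in for an HTTP request that may not yet have a message available; the attempt cap and delay values are illustrative, not Polytune's actual parameters:

```rust
use std::time::Duration;

// Sketch of the retry-with-backoff idea behind the polling mechanism. `poll`
// stands in for an HTTP request that may not yet have a message available;
// the attempt cap and delays are illustrative, not Polytune's actual values.
fn poll_with_backoff<T>(
    mut poll: impl FnMut() -> Option<T>,
    max_attempts: u32,
) -> Result<T, &'static str> {
    let mut delay = Duration::from_millis(10);
    for _ in 0..max_attempts {
        if let Some(msg) = poll() {
            return Ok(msg);
        }
        std::thread::sleep(delay);
        delay = delay.saturating_mul(2); // exponential backoff
    }
    Err("timed out waiting for message")
}
```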
Security Considerations
- This implementation sends data in plaintext - secure only for trusted networks
- The relay server can see all communication between parties
- Consider adding TLS for transport security in production environments
WASM HTTP Channels
This example covers implementing the `Channel` trait for WebAssembly (WASM) environments, allowing MPC computation to run in web browsers. The implementation consists of two components:

- Message Broker Server - A lightweight relay server that routes messages between parties
- WASM Client - A browser-compatible implementation of the `Channel` trait
Message Broker Server
The broker server acts as a central relay for messages between parties participating in MPC computations. It supports multiple concurrent sessions.
```rust
// Create a simple relay server with endpoints for sending and receiving messages
let app = Router::new()
    .route("/ping", get(ping))
    .route("/session/:session/send/:from/:to", post(send))
    .route("/session/:session/recv/:from/:to", post(recv))
    .with_state(state)
    // CORS enabled for browser compatibility
    .layer(cors)
    // Support large messages (up to 1000MB)
    .layer(DefaultBodyLimit::max(1000 * 1024 * 1024))
    .layer(TraceLayer::new_for_http());
```
Key features include:
- Session-based message queuing
- CORS support for browser access
- Long polling for message retrieval (with 30-second timeout)
WASM Client Implementation
The WASM client implements the `Channel` trait to enable MPC computation in browsers.
```rust
impl Channel for HttpChannel {
    type SendError = String;
    type RecvError = String;

    async fn send_bytes_to(
        &self,
        p: usize,
        msg: Vec<u8>,
        _info: SendInfo,
    ) -> Result<(), Self::SendError> {
        let client = reqwest::Client::new();
        let url = format!("{}send/{}/{}", self.url, self.party, p);
        for _ in 0..50 {
            let Ok(resp) = client.post(&url).body(msg.clone()).send().await else {
                println!("Could not reach party {p} at {url}...");
                TimeoutFuture::new(100).await;
                continue;
            };
            match resp.status() {
                StatusCode::OK => return Ok(()),
                status => eprintln!("Unexpected status code: {status}"),
            }
            TimeoutFuture::new(100).await;
        }
        Err(format!("Could not reach {url}"))
    }

    async fn recv_bytes_from(
        &self,
        p: usize,
        _info: RecvInfo,
    ) -> Result<Vec<u8>, Self::RecvError> {
        // Implementation with retries and timeouts
        // ...
    }
}
```
When to Use WASM Channel Implementation
This implementation is ideal for:
- Browser-Based MPC - When computation needs to run directly in web browsers
- Interactive Web Applications - For user-facing applications requiring secure computation
- Cross-Platform Deployment - When the same code needs to run on web and native platforms
- Public-Facing Applications - When the MPC protocol needs to be accessed by many users
Usage Pattern
The typical usage flow involves:
- Deploy the message broker server (exposed publicly)
- Compile the WASM client to JavaScript/WASM using `wasm-pack`
- Import and use the WASM module in a web application
JavaScript Integration Example
```javascript
import { compute } from "mpc-wasm";

async function runMpcComputation() {
  try {
    const result = await compute(
      "https://broker-server.example.com/session/demo-session/",
      0, // party ID
      42, // input value
      10 // range parameter
    );
    console.log("MPC result:", result);
  } catch (error) {
    console.error("MPC computation failed:", error);
  }
}
```
Implementation Differences
Compared to the previous `Channel` implementations, the WASM version:

- Uses Simpler Error Types - String-based errors for JS compatibility
- Employs Web-Compatible Timeouts - Uses `gloo_timers` instead of Tokio's sleep
- Has Session Management Built-in - URL patterns include session IDs
- Uses Long Polling - Both client and server implement polling with retry logic
- Has CORS Support - Enabled for cross-origin requests in browsers
Security Considerations
- The broker server should be deployed with HTTPS in production
- No authentication mechanism is included (consider adding one for production)
- Browser security restrictions apply (CORS, etc.)
- All parties must trust the relay server not to drop messages
SQL Integration
This example is more advanced and shows how to load data from different input databases (PostgreSQL + MySQL), convert the rows to Garble language data types, join them together (using the built-in `join` function of the Garble language) and write the output to a third (PostgreSQL) database.
The example uses two parties, which communicate over MPC without the need for a trusted (or semi-trusted) third party. Each party runs an HTTP server to receive incoming messages and sends messages by sending HTTP requests to the other party. The MPC program as well as any configuration necessary to read from / write to databases is specified in a JSON policy file which is read on startup.
How to Run the Example
Make sure that Docker is running (used to spin up the databases), then seed the databases:
```shell
docker compose -f docker-compose.yml up -d
```
The easiest way to run the example is as a test that orchestrates the two parties:
```shell
cargo test --release -- --nocapture
```
How to Deploy the Engine
The following example shows how to deploy the MPC engine for two parties, based on the SQL integration example (but without showing how and where to deploy the databases). If you want to deploy the engine with more parties or a different Garble program, the same principles apply.
Two Dockerfiles are provided as examples of how to run the MPC engine inside a Docker container, `party0.Dockerfile` and `party1.Dockerfile`. They are identical except for the ports that they use; you could of course use a single Dockerfile in case all of your parties listen on the same port. These Dockerfiles do not contain any DB configuration; it is up to you to either bundle a database into the Docker container (similar to how databases are set up using Docker Compose for the tests, see `docker-compose.yml`) or to change the database URLs in the configuration files (`policy0.json` and `policy1.json`) so that DBs hosted somewhere else can be accessed.
Assuming that the databases are hosted somewhere else, most of `party0.Dockerfile` (or `party1.Dockerfile`) can stay as it is. Let's take a look at the last three lines to see what you might want to change:
```dockerfile
EXPOSE 8000
WORKDIR /usr/src/polytune/examples/sql-integration
CMD ["polytune-sql-integration", "--addr=0.0.0.0", "--port=8000", "--config=./policy0.json"]
```
The above Dockerfile exposes the MPC engine on port 8000 and reads its configuration from `policy0.json` (contained here in this repository).
To build and run the container, use the following commands and make sure to run them from the top level directory of the repository:
```shell
docker build -f examples/sql-integration/party0.Dockerfile --tag 'polytune0' .
docker run -t -p 8000:8000 polytune0
```
You will notice that running this Docker container will fail, because party 0 is configured to be the leader (in `policy0.json`) and thus expects all other parties to be listening already:
```text
2024-11-18T21:59:17.244221Z INFO polytune_sql_integration: listening on 0.0.0.0:8000
2024-11-18T21:59:17.244366Z INFO polytune_sql_integration: Acting as leader (party 0)
2024-11-18T21:59:17.270663Z INFO polytune_sql_integration: Waiting for confirmation from party http://localhost:8001/
2024-11-18T21:59:17.274310Z ERROR polytune_sql_integration: Could not reach http://localhost:8001/run: error sending request for url (http://localhost:8001/run): error trying to connect: tcp connect error: Cannot assign requested address (os error 99)
Error: Some participants are missing, aborting...
```
To solve this, make sure to deploy and run the contributors first (in this example only party 1, but you could deploy more than two parties, in which case all contributing parties need to be started before the leader starts running), for example:
```shell
docker build -f examples/sql-integration/party1.Dockerfile --tag 'polytune1' . && docker run -t -p 8001:8001 polytune1
```

```text
[+] Building 279.4s (20/20) FINISHED
2024-11-18T22:52:32.213120Z INFO polytune_sql_integration: listening on 0.0.0.0:8001
2024-11-18T22:52:32.213365Z INFO polytune_sql_integration: Listening for connection attempts from other parties
2024-11-18T22:52:42.214689Z INFO polytune_sql_integration: Listening for connection attempts from other parties
2024-11-18T22:52:52.216829Z INFO polytune_sql_integration: Listening for connection attempts from other parties
```
You can check that the party is running and listening by making a GET request to its `/ping` route (in this example thus `localhost:8001/ping`), which should respond with a `pong` message.
Make sure to change the `"participants"` key in the configuration files (in our example case `policy0.json` and `policy1.json`) to the addresses used by the parties. The first address in the array is always the first party, the second address the second party, and so on. As a result, the configuration files of the different parties must all use the same `"participants"` array if they want to be able to communicate with each other.
Let's assume that party 0 is listening at `http://1.2.3.4:8000` and party 1 at `http://5.6.7.8:9000`. The configuration files `policy0.json` and `policy1.json` would then both need to contain:
```json
{
  "participants": ["http://1.2.3.4:8000", "http://5.6.7.8:9000"],
  ...
}
```
API Integration
This example is more advanced and shows how to provide data using a simple API directly as Garble literals, join them together (using the built-in `join` function of the Garble language) and send the output to an API endpoint that accepts Garble literals.
The example uses two parties, which communicate over MPC without the need for a trusted (or semi-trusted) third party. Each party runs an HTTP server to receive incoming messages and sends messages by sending HTTP requests to the other party. The MPC program as well as any configuration necessary is specified using a JSON configuration that is provided via an API call.
How to Run the Example
The easiest way to run the example is as a test that orchestrates the two parties:
```shell
cargo test --release -- --nocapture
```
How to Deploy the Engine
The following example shows how to deploy the MPC engine for two parties. If you want to deploy the engine with more parties or a different Garble program, the same principles apply.
A simple Dockerfile is provided as an example of how to run the MPC engine inside a docker container. The only thing you might need to change is the port that the MPC engine is listening on:
```dockerfile
EXPOSE 8000
WORKDIR /usr/src/polytune/examples/api-integration
CMD ["polytune-api-integration", "--addr=0.0.0.0", "--port=8000"]
```
To build and run the container, use the following commands and make sure to run them from the top level directory of the repository:
```shell
docker build -f examples/api-integration/Dockerfile --tag 'polytune0' .
docker run -t -p 8000:8000 polytune0
```
Starting the container does not immediately start an MPC execution; this needs to be explicitly triggered with a POST request to `localhost:8000/launch`, providing the necessary configuration (see `policy0.json` and `policy1.json` for example configs) as a JSON body.
The `"input"` part of the JSON needs to use Garble's serialization format, as described in the Garble Serialization How-To. The result of the MPC execution will use the same serialization format and is sent to the endpoint specified as `"output"` in the JSON.
Please note that you must call `/launch` for all contributors (who will then start waiting for incoming MPC requests) before you call `/launch` for the MPC leader (who will immediately start sending requests to all the other participants and fail if one of them is unreachable).
You can check that the party is running and listening by making a GET request to its `/ping` route (in this example thus `localhost:8000/ping`), which should respond with a `pong` message.
Make sure to change the `"participants"` key in the configuration files (in our example case `policy0.json` and `policy1.json`) to the addresses used by the parties. The first address in the array is always the first party, the second address the second party, and so on. As a result, the configuration files of the different parties must all use the same `"participants"` array if they want to be able to communicate with each other.
Let's assume that party 0 is listening at `http://1.2.3.4:8000` and party 1 at `http://5.6.7.8:9000`. The configuration files `policy0.json` and `policy1.json` would then both need to contain:
```json
{
  "participants": ["http://1.2.3.4:8000", "http://5.6.7.8:9000"],
  ...
}
```
Benchmarks
Although performance optimizations have not been the main focus so far, you can benchmark Polytune and get a feel for how it performs. We provide a benchmark for our `join` example, which joins two databases with `n_records` records:
```shell
cargo bench join
```
To run the `join` benchmark once and produce a flamegraph:

```shell
sudo CARGO_PROFILE_BENCH_DEBUG=true cargo flamegraph --bench join
```
To run the `join` benchmark multiple times, benchmark it, then produce a flamegraph:

```shell
sudo CARGO_PROFILE_BENCH_DEBUG=true cargo flamegraph --bench join -- --bench
```
To run all the benchmarks:

```shell
cargo bench
```
Contributing
Polytune was developed by us at the SINE Foundation, but we would love to see how you end up using it and are happy to accept pull requests. Polytune is distributed under the MIT license and hosted on GitHub.
Wondering where to start? The Polytune MPC Engine implements a complex protocol [WRK17] with many steps. The different steps and their modules are as follows:
- `ot.rs` implements a maliciously secure correlated OT receiver and sender based on (a modified version of) the KOS OT implementation of swanky.
- `fpre.rs` implements the preprocessing phase with an additional party, the trusted dealer, who distributes the correlated randomness used in the MPC protocol. Note that this requires a different trust assumption and should only be used with caution.
- `faand.rs` implements the preprocessing phase of the [WRK17] protocol in a distributed manner. This means that the parties interactively generate random authenticated triples in a maliciously secure protocol in `faand::faand`, which is then used in the MPC protocol. For transforming random authenticated triples into concrete authenticated triples, Beaver's method is implemented in `faand::beaver_aand`.
- `protocol.rs` implements the MPC protocol. Its online phase is implemented using the garbling method implemented in `garble.rs`.
You can also reach us at polytune@sine.foundation.