WIP: Hello Shard! Intershard communication example app #429

Draft: sbellem wants to merge 16 commits into dev from sharding/intershard-communication

Conversation

@sbellem (Collaborator) commented Mar 18, 2020

The relevant commit is 5941750

This is a very rudimentary, far-from-elegant first version that can certainly benefit from many improvements.

Some of the current limitations are:

  1. The client sends messages to a gateway shard, while the other shard only receives masked messages from this gateway shard.
  2. The complete message transmission, from client to gateway shard to "receiving" shard, all happens within a single epoch.
  3. The generation of preprocessing elements for the intershard transfers is "fake": it is done at startup time, as though a trusted dealer had distributed field elements (mask shares) to each server.
  4. The contract does not control access to the queue of intershard messages, thus allowing unauthorized reads.

From the above limitations, things to improve are roughly:

  • The message transmission flow can go from client to any shard, and the first receiving shard will forward it to the other.
  • It seems that the "complete" message transmission (client -> shard 1 -> shard 2) does not need to happen all within one epoch. That is, the shard that produces messages and the shard that consumes messages do not need to operate at the same rate: the producer queues up messages and the consumer takes them from the queue, so they can run at different speeds.
  • Implement a randousha-based generate_intershard_masks() function (see the sketch after this list).
  • Tidy up the access control using a SecretCellArray data structure in the contract.
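
A minimal sketch of what a randousha-based generate_intershard_masks() could look like. The randousha callable and its signature are assumptions standing in for the repo's actual random-share-generation primitive, not a reference to its real API:

async def generate_intershard_masks(ctx, k, randousha):
    """Generate shares of k random intershard masks at runtime, replacing
    the fake trusted-dealer setup done at startup.

    `randousha` is assumed to be a coroutine that runs the random share
    generation protocol and returns shares of k uniformly random field
    elements, with no single party (and no dealer) learning the masks.
    """
    mask_shares = await randousha(ctx, k)
    # Index the shares so that both shards can refer to the same mask id
    # when transferring a message.
    return dict(enumerate(mask_shares))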

sbellem added 5 commits March 4, 2020 13:59
* Catch web3 tx not found exception (API changed in versions > 5.0.0); see
  the sketch after this list
* Fix missing one_minus_ones elements
* Refresh the cache of preprocessed elements after the write-to-file step
  is done. There was a line `pp_elements.init_mixins()` which looks like it
  was expected to do something similar, but the method (`init_mixins()`)
  does not exist. Perhaps it can be implemented in the future.
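
A small illustration of the API change behind the first bullet: in web3.py >= 5.0.0, looking up an unknown transaction raises web3.exceptions.TransactionNotFound instead of returning None, so polling code needs a try/except. The helper below is a sketch; its name is illustrative.

from web3.exceptions import TransactionNotFound

def try_get_receipt(w3, tx_hash):
    """Return the transaction receipt, or None if the tx is not (yet) known."""
    try:
        return w3.eth.getTransactionReceipt(tx_hash)
    except TransactionNotFound:
        # web3.py >= 5.0.0 raises instead of returning None
        return None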

Related to initc3#425
This is for the time being: it is sometimes difficult to understand why the
patch coverage is too low, and this prevents moving forward with merging a
pull request that does not affect the overall coverage of the project.
* Add documentation for asynchromix app.
* Remove extra whitespace in asynchromix contract
This was derived from apps/asynchromix/asynchromix.[py,sol].

The "mixing" part of the original asynchromix app was
removed to make it into a simple app in which a client sends
a mask message and the MPC network unmasks it.

Perhaps there is no need for an ethereum-based coordinator for such a
simple case but the idea is to provide a basis for more complex
applications such as message mixing and intershard secure
communications.
@sbellem sbellem changed the title WIP: Hello Shard! example app for intershard communication WIP: Hello Shard! Intershard communication example app Mar 18, 2020
codecov bot commented Mar 18, 2020

Codecov Report

Merging #429 into dev will decrease coverage by 8.72043%.
The diff coverage is 9.58904%.

@@                 Coverage Diff                 @@
##                 dev        #429         +/-   ##
===================================================
- Coverage   77.27842%   68.55799%   -8.72043%     
===================================================
  Files             50          59          +9     
  Lines           5585        6380        +795     
  Branches         856         900         +44     
===================================================
+ Hits            4316        4374         +58     
- Misses          1095        1833        +738     
+ Partials         174         173          -1     

@amiller (Contributor) left a comment:
Looks great so far

produce masked message for other shard
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
1. mask the client message with an intershard mask share
2. (inner-shard communication) open the masked share to get the
amiller (Contributor) commented:
"intra-shard" or "within-shard", let's call it?
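
For concreteness, a minimal sketch of the two steps quoted above, using "intra-shard" as the term and illustrative names (msg_share, intershard_mask_share) rather than the PR's exact variables:

async def produce_masked_message(msg_share, intershard_mask_share):
    # 1. Mask the client message with an intershard mask share: [m] + [r] = [m + r].
    masked_share = msg_share + intershard_mask_share
    # 2. Intra-shard communication: open [m + r] within this shard.
    #    The opened value m + r reveals nothing about m without r.
    masked_value = await masked_share.open()
    return masked_value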

for (uint i = 0; i < n; i++) {
    shard_1[i] = _shard_1[i];
    shard_2[i] = _shard_2[i];
    servermap[_shard_1[i]] = i+1; // servermap is off-by-one
amiller (Contributor) commented:
I see why something like this is necessary. On the other hand, it might not be efficient if we have more than just 2 shards of different lengths. What else could we use here, mapping address => PartyStruct?

sbellem (Collaborator, Author) replied:
Yes definitely, it needs to be generalized/improved.

sbellem (Collaborator, Author) commented:
Would it make sense to use eth addresses as node ids instead of integers (i)?

async def prog(ctx):
    logging.info(f"[{self.global_id}] Running MPC network")
    client_msg_share = ctx.Share(client_msg_field_elem)
    client_msg = await client_msg_share.open()
amiller (Contributor) commented:
This MPC program looks unclear. Which shard runs it, and also isn't the client's secret misnamed?

sbellem (Collaborator, Author) replied:
Yes, this needs to be improved. Ideally it would be run by any shard that has received a client secret. In this code, one shard acts as a gateway shard (hence the condition at line 251 above: if self.is_gateway_shard:) and only the gateway shard is expected to receive inputs from the client. Whether a server is part of a gateway shard or not is set at startup time in the main module.

I guess the _mpc_initiate_loop logic needs to be adapted to shards. That is, each shard needs to have an inputs_ready condition that, when met, can be used to trigger the start of the client's input processing by the relevant shard.
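
A rough sketch of that idea, assuming asyncio and illustrative names (ShardInputQueue, run_epoch); it is not the PR's actual _mpc_initiate_loop:

import asyncio

class ShardInputQueue:
    """Per-shard input queue with an inputs_ready condition."""

    def __init__(self):
        self.pending = []
        self.inputs_ready = asyncio.Event()

    def add_input(self, masked_input):
        self.pending.append(masked_input)
        self.inputs_ready.set()

    async def initiate_loop(self, run_epoch):
        # Each shard runs its own loop and starts processing client inputs
        # as soon as its own inputs_ready condition is met, instead of
        # gating everything on is_gateway_shard.
        while True:
            await self.inputs_ready.wait()
            inputs, self.pending = self.pending, []
            self.inputs_ready.clear()
            await run_epoch(inputs)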

sbellem (Collaborator, Author) replied:
As for the name, yeah ... I now wonder what happened ... I think I was trying to better understand what type each variable is, and tried to pick names that could reflect this. It can definitely be changed!

It is indeed confusing. The method _collect_client_input() is used to read the client secret from the input queue, but rather than just returning the client secret, it subtracts the input mask share ([r]) and returns the result, which is a field element that is then cast to a Share ([m]). The naming was trying to reflect this, I think ... 😄.
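
In code, the step described above looks roughly like the following; the names are illustrative, and ctx.Share is used as in the excerpt quoted earlier:

def collect_client_input(ctx, masked_field_elem, input_mask_share):
    # masked_field_elem is the public value submitted by the client (m + r);
    # input_mask_share is this server's share of r, as a field element.
    client_msg_field_elem = masked_field_elem - input_mask_share
    # Each server now holds a share of m, ready for use in the MPC program.
    return ctx.Share(client_msg_field_elem)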

sbellem added 5 commits March 25, 2020 00:19
The goal is to organize the code such that the pieces belonging to
different "actors" are ideally fully isolated. That is, the client code
is just client code, the MPC server/network code is just MPC server code,
the contract deployment code is just contract deployment code, etc. With
this separation it should be possible to run each component
independently of each other, such as in different processes, or different
containers.
Now running two containers:

1. ganache (local test ethereum)
2. client & MPC servers

Next: Run the client and MPC servers in separate containers, that is:

1. ganache (local test ethereum)
2. client
3. MPC servers

Could also potentially run the deployment of the contract separately.
* contract is deployed in a separate service and address is written to
  a file that can be read by all parties
* servers are instantiated with a contract context (address, name, source
  code)
* Client reads the contract address from public data, and creates a web3
  Contract object to interact with the on-chain contract.
* MPC servers serve a GET /inputmasks/{id} endpoint
* Client queries servers for input mask shares (see the sketch after this list)
* Makefile can be used to launch example into tmux panes for each
  service (ethereum blockchain, setup phase (contract deployment), MPC
  network, client)
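
A rough sketch of the client-side pieces described above (reading the contract address from public data and querying GET /inputmasks/{id}), assuming an aiohttp-style endpoint and a plain file holding the deployed address; URLs, file names, and the JSON layout are assumptions, not the PR's exact interfaces:

import json
import aiohttp
from web3 import Web3

async def fetch_input_mask_shares(server_urls, mask_id):
    """Query each MPC server for its share of input mask `mask_id`."""
    shares = []
    async with aiohttp.ClientSession() as session:
        for url in server_urls:
            async with session.get(f"{url}/inputmasks/{mask_id}") as resp:
                shares.append(json.loads(await resp.text())["share"])
    return shares

def load_contract(w3, address_file, abi):
    """Create a web3 Contract object from the address written at deploy time."""
    with open(address_file) as f:
        address = f.read().strip()
    return w3.eth.contract(address=Web3.toChecksumAddress(address), abi=abi)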

next:
* config for public data including ethereum addresses of client and
  servers
* authorization check for clients when they query a share
* MPC server communication over network sockets
* preprocessing service
* cleanup

note: some of the above next steps may be done at a later stage
@sbellem sbellem force-pushed the sharding/intershard-communication branch from e668548 to af3cc39 Compare April 3, 2020 03:46
sbellem added 4 commits April 2, 2020 22:57
* Add client.toml config file and .from_config() class method to create
  a Client class instance from a configuration dictionary (sketch below).
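
A minimal sketch of what that could look like; the client.toml keys shown here are assumptions, not the file's actual layout:

import toml

class Client:
    def __init__(self, eth_rpc_uri, contract_address_file):
        self.eth_rpc_uri = eth_rpc_uri
        self.contract_address_file = contract_address_file

    @classmethod
    def from_config(cls, config):
        """Create a Client instance from a configuration dictionary."""
        eth = config["eth"]
        return cls(eth["rpc_uri"], eth["contract_address_file"])

# client = Client.from_config(toml.load("client.toml"))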

next:
* Setup Phase - create a toml file with public values, such as contract
  address, name, file location.
The setup phase is, for now, responsible for deploying the contract and
providing the contract address to the MPC servers and clients.

Additionally, the contract deployer, the MPC servers, and the clients need
an eth address, and for now a "dummy" eth address is assigned to each
participant. The addresses can be added to the common config file.
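
For illustration, the setup phase could write something like the following common config; the function name, keys, and file layout are assumptions:

import toml

def write_common_config(path, contract_address, deployer_addr, server_addrs, client_addr):
    """Write the public values shared by all services (contract deployer,
    MPC servers, client) to a common config file."""
    config = {
        "contract": {"address": contract_address},
        "eth_addresses": {
            "deployer": deployer_addr,
            "servers": server_addrs,  # one (dummy, for now) address per MPC server
            "client": client_addr,
        },
    }
    with open(path, "w") as f:
        toml.dump(config, f)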

next:
* make sure all common data is in the config file
* update client config with just client config data
* use config for mpc network
@sbellem sbellem marked this pull request as draft April 10, 2020 02:39
sbellem added 2 commits April 10, 2020 23:45
The goal is to provide a somewhat generic Client class that can be used
as a base for specific clients.
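
A minimal sketch of that structure, with illustrative class and method names (HelloShardClient is hypothetical): the common wiring lives in the base class and each specific client overrides the workflow.

class BaseClient:
    """Generic client: holds the common pieces (contract handle, MPC server
    URLs) and leaves the application-specific workflow to subclasses."""

    def __init__(self, contract, server_urls):
        self.contract = contract
        self.server_urls = server_urls

    async def run(self):
        raise NotImplementedError("subclasses define their own workflow")


class HelloShardClient(BaseClient):
    async def run(self):
        # e.g. fetch an input mask share from each server, mask the message,
        # and submit the masked message to the contract.
        ...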