This document covers setting up a network on your local machine for various development and testing activities. Unless you intend to contribute to the development of the Hyperledger Fabric project, you'll probably want to follow the more commonly used approach below - [leveraging published Docker images](#leveraging-published-docker-images) for the various Hyperledger Fabric components. Otherwise, skip down to the secondary approach below.
This approach simply leverages the Docker images that the Hyperledger Fabric project publishes to DockerHub, along with either Docker commands or Docker Compose descriptions of the network you wish to create.
Note: When running Docker natively on Mac and Windows, there is no IP forwarding support available. Hence, running more than one fabric-peer image is not advised, because you do not want multiple processes binding to the same port. For most application and chaincode development/testing, running with a single fabric peer should not be an issue, unless you are interested in performance and resilience testing of the fabric's capabilities, such as consensus. For more advanced testing, we strongly recommend using the fabric's Vagrant development environment.
With this approach, there are multiple choices as to how to run Docker: using Docker Toolbox or one of the new native Docker runtime environments for Mac OS X or Windows. There are some subtle differences between how Docker runs natively on Mac and Windows versus in a virtualized context on Linux. We'll call those out where appropriate below, when we get to the point of actually running the various components.
Once you have Docker (1.11 or greater) installed and running, and prior to starting any of the fabric components, you will first need to pull the fabric images from DockerHub:
docker pull hyperledger/fabric-peer:latest
docker pull hyperledger/fabric-membersrvc:latest
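If you'd like to confirm the pull succeeded before going further, a quick check with the standard Docker CLI (the grep filter assumes a Unix-like shell, as in the Vagrant or Docker Toolbox environments) is:

```
docker images | grep hyperledger
```

Both images should appear in the output with a latest tag.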
Note: This approach is not necessarily recommended for most users. If you have pulled images from DockerHub as described in the previous section, you may proceed to the next step.
The second approach is to leverage the development environment setup (which we will assume you have already established) to build and deploy your own binaries and/or Docker images from a clone of the hyperledger/fabric GitHub repository. This approach is suitable for developers who might wish to contribute directly to the Hyperledger Fabric project, or who wish to deploy from a fork of the Hyperledger code base.
The following commands should be run from within the Vagrant environment described in Setting Up Development Environment.
To create the Docker image for the `hyperledger/fabric-peer`:
cd $GOPATH/src/github.com/hyperledger/fabric
make peer-image
To create the Docker image for the `hyperledger/fabric-membersrvc`:
make membersrvc-image
Check the available images again with `docker images`. You should see `hyperledger/fabric-peer` and `hyperledger/fabric-membersrvc` images. For example,
$ docker images
REPOSITORY                      TAG       IMAGE ID       CREATED       SIZE
hyperledger/fabric-membersrvc   latest    7d5f6e0bcfac   12 days ago   1.439 GB
hyperledger/fabric-peer         latest    82ef20d7507c   12 days ago   1.445 GB
If you don't see these, go back to the previous step.
With the relevant Docker images in hand, we can start running the peer and membersrvc services.
Next, we need to determine the address of your docker daemon for the `CORE_VM_ENDPOINT`. If you are working within the Vagrant development environment, or a Docker Toolbox environment, you can determine this with the `ip add` command. For example,
$ ip add
<<< detail removed >>>
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 02:42:ad:be:70:cb brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 scope global docker0
valid_lft forever preferred_lft forever
inet6 fe80::42:adff:febe:70cb/64 scope link
valid_lft forever preferred_lft forever
Your output might contain something like `inet 172.17.0.1/16 scope global docker0`. That means the docker0 interface is on IP address 172.17.0.1. Use that IP address for the `CORE_VM_ENDPOINT` option. For more information on the environment variables, see the `core.yaml` configuration file in the fabric repository.
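If you prefer not to scan the full `ip add` output by eye, the docker0 address can be extracted in one step. This is just a convenience sketch, assuming the iproute2, awk and cut tools available in the Vagrant environment:

```
# Print only the IPv4 address assigned to the docker0 bridge, e.g. 172.17.0.1
ip -4 addr show docker0 | awk '/inet / {print $2}' | cut -d/ -f1
```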
If you are using the native Docker for Mac or Windows, the value for `CORE_VM_ENDPOINT` should be set to `unix:///var/run/docker.sock`. (`127.0.0.1:2375` may also work, but this has not been verified.)
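Whichever endpoint you settle on, you can sanity-check that the Docker daemon is actually reachable there by pointing the Docker client at it explicitly with its standard -H flag. The TCP form below assumes the daemon in your environment listens on port 2375, as the Vagrant setup above implies:

```
# Both commands should list running containers if the endpoint is reachable
docker -H unix:///var/run/docker.sock ps
docker -H tcp://172.17.0.1:2375 ps
```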
The ID value of `CORE_PEER_ID` must be unique for each validating peer, and it must be a lowercase string. We often use the convention of naming the validating peers vpN, where N is an integer starting at 0 for the root node and incremented by 1 for each additional peer node started, e.g. vp0, vp1, vp2, ...
By default, we are using a consensus plugin called `NOOPS`, which doesn't really do consensus. If you are running a single peer node, running anything other than `NOOPS` makes little sense. If you want to use some other consensus plugin in the context of multiple peer nodes, please see the Using a Consensus Plugin section, below.
We'll be using Docker Compose to launch our various Fabric component containers, as this is the simplest approach. You should already have it installed from the initial setup steps: installing Docker Toolbox or any of the native Docker runtimes also installs Compose.
Let's launch the first validating peer (the root node). We'll set CORE_PEER_ID to vp0 and CORE_VM_ENDPOINT as above. Here's the docker-compose.yml for launching a single container within the Vagrant development environment:
vp0:
  image: hyperledger/fabric-peer
  environment:
    - CORE_PEER_ID=vp0
    - CORE_PEER_ADDRESSAUTODETECT=true
    - CORE_VM_ENDPOINT=http://172.17.0.1:2375
    - CORE_LOGGING_LEVEL=DEBUG
  command: peer node start
You can launch this Compose file as follows, from the same directory as the docker-compose.yml file:
$ docker-compose up
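If you would rather keep your terminal free, Compose can also start the service in the background, and you can then tail its output on demand:

```
docker-compose up -d
docker-compose logs vp0
```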
Here's the corresponding Docker command:
$ docker run --rm -it -e CORE_VM_ENDPOINT=http://172.17.0.1:2375 -e CORE_LOGGING_LEVEL=DEBUG -e CORE_PEER_ID=vp0 -e CORE_PEER_ADDRESSAUTODETECT=true hyperledger/fabric-peer peer node start
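Either way, you can confirm the peer came up cleanly with standard Docker commands (the container ID will differ on your machine):

```
# The peer should appear in the list of running containers
docker ps
# Follow the peer's log output; replace <container-id> with the ID shown by docker ps
docker logs -f <container-id>
```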
If you are running Docker for Mac or Windows, you'll need to explicitly map the ports and use a different value for `CORE_VM_ENDPOINT`, as discussed above.
Here's the docker-compose.yml for Docker on Mac or Windows:
vp0:
  image: hyperledger/fabric-peer
  ports:
    - "5000:5000"
    - "30303:30303"
    - "30304:30304"
  environment:
    - CORE_PEER_ADDRESSAUTODETECT=true
    - CORE_VM_ENDPOINT=unix:///var/run/docker.sock
    - CORE_LOGGING_LEVEL=DEBUG
  command: peer node start
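Because this configuration publishes port 5000 to localhost, you can also poke the peer's REST interface once it is running. This assumes the REST service is listening on its default port 5000, as the mapping above suggests, and uses the REST API's /chain endpoint:

```
# Returns the current blockchain height as JSON if the peer is healthy
curl http://localhost:5000/chain
```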
This single peer configuration, running the `NOOPS` 'consensus' plugin, should satisfy many development/test scenarios. `NOOPS` is not really providing consensus; it is essentially a no-op that simulates consensus. For instance, if you are simply developing and testing chaincode, this should be adequate unless your chaincode is leveraging membership services for identity, access control, confidentiality and privacy.
If you want to take advantage of security (authentication and authorization), privacy and confidentiality, then you'll need to run the Fabric's certificate authority (CA). Please refer to the CA Setup instructions.
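If you do enable security, the CA itself can be run from the `hyperledger/fabric-membersrvc` image you pulled (or built) earlier. A minimal Compose sketch follows; the service name and the `membersrvc` start command are assumptions based on the image's purpose, and the peer-side security settings required to use it are covered in the CA Setup instructions:

```
membersrvc:
  image: hyperledger/fabric-membersrvc
  command: membersrvc
```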
Following the pattern we established above, we'll use `vp1` as the ID for the second validating peer. If using Docker Compose, we can simply link the two peer nodes.
Here's the docker-compose.yml for a Vagrant environment with two peer nodes - vp0 and vp1:
vp0:
  image: hyperledger/fabric-peer
  environment:
    - CORE_PEER_ID=vp0
    - CORE_PEER_ADDRESSAUTODETECT=true
    - CORE_VM_ENDPOINT=http://172.17.0.1:2375
    - CORE_LOGGING_LEVEL=DEBUG
  command: peer node start
vp1:
  extends:
    service: vp0
  environment:
    - CORE_PEER_ID=vp1
    - CORE_PEER_DISCOVERY_ROOTNODE=vp0:30303
  links:
    - vp0
If you want to use the docker command line to launch another peer, you need to get the IP address of the first validating peer, which will act as the root node to which the new peer(s) will connect. The address is printed on the terminal window of the first peer (e.g. 172.17.0.2) and should be passed in via the `CORE_PEER_DISCOVERY_ROOTNODE` environment variable.
docker run --rm -it -e CORE_VM_ENDPOINT=http://172.17.0.1:2375 -e CORE_PEER_ID=vp1 -e CORE_PEER_ADDRESSAUTODETECT=true -e CORE_PEER_DISCOVERY_ROOTNODE=172.17.0.2:30303 hyperledger/fabric-peer peer node start
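Rather than reading the root node's address off the terminal, you can also capture it programmatically with `docker inspect` and pass it straight through. This is a convenience sketch; substitute the actual container ID or name of your vp0 container:

```
# Look up vp0's IP address on the docker0 bridge
ROOT_IP=$(docker inspect --format '{{ .NetworkSettings.IPAddress }}' <vp0-container-id>)

docker run --rm -it \
  -e CORE_VM_ENDPOINT=http://172.17.0.1:2375 \
  -e CORE_PEER_ID=vp1 \
  -e CORE_PEER_ADDRESSAUTODETECT=true \
  -e CORE_PEER_DISCOVERY_ROOTNODE=${ROOT_IP}:30303 \
  hyperledger/fabric-peer peer node start
```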
A consensus plugin might require some specific configuration that you need to set up. For example, to use the Practical Byzantine Fault Tolerant (PBFT) consensus plugin provided as part of the fabric, perform the following configuration:
- In `core.yaml`, set the `peer.validator.consensus` value to `pbft`.
- In `core.yaml`, make sure the `peer.id` is set sequentially as `vpN`, where `N` is an integer that starts from `0` and goes to `N-1`. For example, with 4 validating peers, set the `peer.id` to `vp0`, `vp1`, `vp2`, `vp3`.
- In `consensus/pbft/config.yaml`, set the `general.mode` value to `batch` and the `general.N` value to the number of validating peers on the network; also set `general.batchsize` to the number of transactions per batch.
- In `consensus/pbft/config.yaml`, optionally set timer values for the batch period (`general.timeout.batch`), the acceptable delay between request and execution (`general.timeout.request`), and for view-change (`general.timeout.viewchange`).
See `core.yaml` and `consensus/pbft/config.yaml` for more detail.

All of these settings may be overridden via command-line environment variables, e.g. `CORE_PEER_VALIDATOR_CONSENSUS_PLUGIN=pbft` or `CORE_PBFT_GENERAL_MODE=batch`.
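Putting the pieces together, here is one way a four-peer PBFT network could look in Compose, using environment variable overrides rather than editing the YAML files. Treat it as a sketch: CORE_PBFT_GENERAL_N is assumed to follow the same naming convention as the overrides shown above, and vp2 and vp3 simply repeat the vp1 pattern:

```
vp0:
  image: hyperledger/fabric-peer
  environment:
    - CORE_PEER_ID=vp0
    - CORE_PEER_ADDRESSAUTODETECT=true
    - CORE_VM_ENDPOINT=http://172.17.0.1:2375
    - CORE_LOGGING_LEVEL=DEBUG
    - CORE_PEER_VALIDATOR_CONSENSUS_PLUGIN=pbft
    - CORE_PBFT_GENERAL_MODE=batch
    # Assumed to map to general.N, per the convention described above
    - CORE_PBFT_GENERAL_N=4
  command: peer node start
vp1:
  extends:
    service: vp0
  environment:
    - CORE_PEER_ID=vp1
    - CORE_PEER_DISCOVERY_ROOTNODE=vp0:30303
  links:
    - vp0
vp2:
  extends:
    service: vp0
  environment:
    - CORE_PEER_ID=vp2
    - CORE_PEER_DISCOVERY_ROOTNODE=vp0:30303
  links:
    - vp0
vp3:
  extends:
    service: vp0
  environment:
    - CORE_PEER_ID=vp3
    - CORE_PEER_DISCOVERY_ROOTNODE=vp0:30303
  links:
    - vp0
```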
See Logging Control for information on controlling logging output from the peer and deployed chaincodes.
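For example, to make the peers in the Compose files above less verbose, you could lower the level from DEBUG in their environment sections (INFO is one of the levels described in Logging Control):

```
- CORE_LOGGING_LEVEL=INFO
```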