And now for the thrilling conclusion to… HLFv1_RPiDS! (<— a what now?)
- Setting up a Hyperledger Fabric development environment on a Raspberry Pi
- Building Hyperledger Fabric on Raspberry Pi
- Setting up a Docker Swarm on Raspberry Pi
- Deploying a Hyperledger Fabric network on the Swarm with Docker Stack and testing with BYFN.
In this section we’ll go over the steps I take to launch the network and talk through some of the configuration sections to watch out for as you set up your own.
But first, a quick proof-of-work demonstration:
First things first, verify your swarm is up and running with docker node ls. You should see all your nodes in the Active state; if you don’t, start troubleshooting (tip: start with a reboot of that node 😉 ).
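For reference, on a healthy four-node swarm the output should look roughly like this (the IDs are placeholders and the hostnames are just the illustrative ones used later in this walkthrough; yours will differ):
$ docker node ls
ID        HOSTNAME                   STATUS   AVAILABILITY   MANAGER STATUS
<id> *    hyperledger-swarm-master   Ready    Active         Leader
<id>      hyperledger-swarm-node1    Ready    Active
<id>      hyperledger-swarm-node2    Ready    Active
<id>      hyperledger-swarm-node3    Ready    Active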
Create an Attachable Overlay Network
In order for our nodes to communicate we’ll need an overlay network; in our case, we want an attachable one.
docker network create -d overlay --attachable hyperledger-fabric
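As a quick sanity check (not part of the original steps), you can confirm from the manager that the network exists and is attachable:
# list overlay networks on the manager
docker network ls --filter driver=overlay
# should print "true" if the network was created with --attachable
docker network inspect hyperledger-fabric --format '{{.Attachable}}'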
Clone the repo to each device
Next we need the repo with all the magic in it. You’ll want to clone or fork it yourself and update the constraint names to the hostnames you used for your RasPis, and possibly other things like your username. See this commit to know what I’m talking about (as I just did it myself for this project). This repo needs to be cloned on each of your workers as well; this is primarily to pass the certificates around easily. I’ll go into a full dissection of the docker compose file I’ve used, and what you should change and do in other situations, later in this article.
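If you’d rather not edit the compose file by hand, a couple of sed one-liners are a quick (hypothetical) way to swap in your own paths and hostname; /home/pi and my-master-hostname below are stand-ins for whatever your devices actually use:
# replace the home-directory prefix used in the volume mounts
sed -i 's|/home/jmotacek|/home/pi|g' docker-compose-cli.yaml
# replace the placement-constraint hostname with your own manager hostname
sed -i 's|hyperledger-swarm-master|my-master-hostname|g' docker-compose-cli.yaml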
If you have suggestions or improvements for the compose configuration please submit a pull request. I’ve been trying to think of a way to make it more universal but I always get sidetracked…
Clone this or your modified repo to your Swarm master and workers: git clone https://github.com/Cleanshooter/hyperledger-pi-composer.git
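If your Pis are reachable over SSH, a small loop saves you from logging into each worker by hand. This is only a sketch: it assumes the default pi user and the illustrative node hostnames from this tutorial, so substitute your own.
# run from the master: clone the repo into each worker's home directory
for host in hyperledger-swarm-node1 hyperledger-swarm-node2 hyperledger-swarm-node3; do
  ssh "pi@${host}" "git clone https://github.com/Cleanshooter/hyperledger-pi-composer.git"
done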
Prep monitoring
I actually like watching the nodes communicate back and forth, so I have some monitoring set up; plus it’s helpful if something isn’t working and you need to debug. If you’d like to watch as well…
on Node 1 run: tail ./hyperledger-pi-composer/logs/peer1org1log.txt -f
on Node 2 run: tail ./hyperledger-pi-composer/logs/peer0org2log.txt -f
on Node 3 run: tail ./hyperledger-pi-composer/logs/peer1org2log.txt -f
Demo Architecture
In my setup I have 4 nodes in my swarm and 6 containers:
- 1 Orderer
- 2 Organizations with 2 peers each.
- 1 CLI image to run the BYFN script
If you change the architecture in your version of the docker compose file, you’ll need to update crypto-config.yaml and generate new certificates and a new genesis block as well.
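For context, regenerating those artifacts normally goes through Fabric’s cryptogen and configtxgen tools. The commands below are a sketch using the conventional BYFN profile names and paths; they are assumptions on my part, not something taken verbatim from this repo, and the profile names must match your configtx.yaml:
# regenerate certificates from your edited crypto-config.yaml
cryptogen generate --config=./crypto-config.yaml
# regenerate the genesis block (profile name comes from configtx.yaml)
configtxgen -profile TwoOrgsOrdererGenesis -outputBlock ./channel-artifacts/genesis.block
# regenerate the channel creation transaction
configtxgen -profile TwoOrgsChannel -outputCreateChannelTx ./channel-artifacts/channel.tx -channelID mychannel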
Start it up
The docker compose file we’ll use will not only start up your nodes on specific workers, but it will also automatically mount the needed volumes for the certs and keys from the repo. In another tutorial I’ll go over how to generate your own certs (leave comments if interested), but for the sake of this introductory getting-started tutorial I’m going to skip that part.
Not only will the docker compose file start up your containers and set the proper configuration but it will also automatically run our Build Your First Network [BYFN] test.
On your Master node cd into the repo you cloned and run: docker stack deploy --compose-file docker-compose-cli.yaml HLFv1_RPiDS && docker ps
IMMEDIATELY after it’s finished starting up, look at the docker ps list for the CLI container ID (the one running jmotacek/fabric-tools:armv7l-1.0.7).
Find the ID and run docker logs -f [container ID]
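If you don’t want to eyeball the docker ps output, a one-liner using docker’s ancestor filter should grab the CLI container directly (assuming you’re running the image tag above):
docker logs -f $(docker ps -q --filter ancestor=jmotacek/fabric-tools:armv7l-1.0.7)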
This will show you the progress of the BYFN test as it goes through the steps to work with the various peers and nodes. The first time you run this it will take a while to complete as the various nodes download the necessary docker images to launch the containers they need to fulfill the BYFN actions. The second run should complete in under 5 minutes.
Shut it down
Once you’re happy, you can shut your test network down like so: docker stack rm HLFv1_RPiDS
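Note that removing the stack won’t touch the chaincode containers and images Fabric spawned on the workers during BYFN. If you want a truly clean slate, something along these lines on each node should do it; this is a sketch that assumes the usual dev- naming prefix Fabric gives chaincode containers:
# remove leftover chaincode containers
docker rm -f $(docker ps -aq --filter "name=dev-")
# remove the chaincode images they were built from
docker rmi -f $(docker images -q --filter "reference=dev-*")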
If you have any issues with your tests let me know and I’ll try to help if I can. I’ve run into a multitude of issues in my own tests, so I might have some ideas. If you are having issues please post the actual outputs and errors… otherwise it can be hard to follow.
Dissecting the Docker Compose file
As promised, I’m going to provide some more details on the docker compose file that runs it all. I’ve added some comments to it below to provide more background and context than the version out on GitHub.
# Copyright Joe Motacek All Rights Reserved.
#
# SPDX-License-Identifier: Apache-2.0
#
version: '3'

services:
  orderer:
    image: jmotacek/fabric-orderer:armv7l-1.0.7
    environment:
      # The Orderer General settings are fairly common in the setup for most HL configs.
      # I used most of the settings found in other repos.
      # You can configure HL to work without TLS, but I feel like that would be pointless
      # since you really need it in a private blockchain.
      - ORDERER_GENERAL_LOGLEVEL=debug
      - ORDERER_GENERAL_LISTENADDRESS=0.0.0.0
      - ORDERER_GENERAL_GENESISMETHOD=file
      - ORDERER_GENERAL_GENESISFILE=/var/hyperledger/orderer/orderer.genesis.block
      - ORDERER_GENERAL_LOCALMSPID=OrdererMSP
      - ORDERER_GENERAL_LOCALMSPDIR=/var/hyperledger/orderer/msp
      - ORDERER_GENERAL_TLS_ENABLED=true
      - ORDERER_GENERAL_TLS_PRIVATEKEY=/var/hyperledger/orderer/tls/server.key
      - ORDERER_GENERAL_TLS_CERTIFICATE=/var/hyperledger/orderer/tls/server.crt
      - ORDERER_GENERAL_TLS_ROOTCAS=[/var/hyperledger/orderer/tls/ca.crt]
      # This setting is CRUCIAL!
      # I spent hours digging into the repo's Go code and debugging to figure this one out...
      # Basically, out of the box HL expects to run on a much larger system than a RasPi, and the
      # default memory allocation for a container exceeds what a RasPi actually has.
      # This setting changes the maximum memory the system will try to take.
      - CORE_VM_DOCKER_HOSTCONFIG_MEMORY=536870912
      # Here you can designate your own images (if you want to try a 1.1 build, for example).
      # NOTE: if you did your own builds you'll need to push them to Docker Hub or somewhere your Swarm workers can find them.
      # Workers won't see images that only exist on your master node (or wherever you built your images).
      - CORE_CHAINCODE_BUILDER=jmotacek/fabric-ccenv:armv7l-1.0.7
      - CORE_CHAINCODE_GOLANG_RUNTIME=jmotacek/fabric-baseos:armv7l-0.3.2
      - CORE_CHAINCODE_CAR_RUNTIME=jmotacek/fabric-baseos:armv7l-0.3.2
      - CORE_CHAINCODE_JAVA=jmotacek/fabric-javaenv:armv7l-1.0.7
    working_dir: /opt/gopath/src/github.com/hyperledger/fabric
    # I can't remember which of these two is more important, but if they aren't set the other nodes
    # won't be able to communicate properly with each other. I think it's both, honestly... I know HL
    # needs the hostname for something, and the network alias is needed so the nodes can communicate.
    hostname: orderer.example.com
    networks:
      hyperledger-fabric:
        aliases:
          - orderer.example.com
    volumes:
      # Genesis blocks are created with the generate script, which I can cover in a separate tutorial if anyone is interested.
      - /home/jmotacek/hyperledger-pi-composer/channel-artifacts/genesis.block:/var/hyperledger/orderer/orderer.genesis.block
      - /home/jmotacek/hyperledger-pi-composer/crypto-config/ordererOrganizations/example.com/orderers/orderer.example.com/msp:/var/hyperledger/orderer/msp
      - /home/jmotacek/hyperledger-pi-composer/crypto-config/ordererOrganizations/example.com/orderers/orderer.example.com/tls/:/var/hyperledger/orderer/tls
    ports:
      - 7050:7050
    # You don't necessarily need to constrain where Swarm places your nodes; I wanted to here so I
    # could do some fun stuff with blinky lights. (See the video demo.)
    deploy:
      placement:
        constraints:
          - node.hostname == hyperledger-swarm-master
    command: orderer

  peer0_org1:
    image: jmotacek/fabric-peer:armv7l-1.0.7
    environment:
      - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
      # The network mode tells the generated containers to use the external network we defined.
      # This way generated chaincode containers can attach to it.
      - CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=hyperledger-fabric
      - CORE_LOGGING_LEVEL=DEBUG
      - CORE_PEER_TLS_ENABLED=true
      - CORE_PEER_GOSSIP_USELEADERELECTION=true
      - CORE_PEER_GOSSIP_ORGLEADER=false
      - CORE_PEER_PROFILE_ENABLED=true
      - CORE_PEER_TLS_CERT_FILE=/etc/hyperledger/fabric/tls/server.crt
      - CORE_PEER_TLS_KEY_FILE=/etc/hyperledger/fabric/tls/server.key
      - CORE_PEER_TLS_ROOTCERT_FILE=/etc/hyperledger/fabric/tls/ca.crt
      - CORE_PEER_ID=peer0.org1.example.com
      - CORE_PEER_ADDRESS=peer0.org1.example.com:7051
      - CORE_PEER_GOSSIP_EXTERNALENDPOINT=peer0.org1.example.com:7051
      - CORE_PEER_LOCALMSPID=Org1MSP
      - CORE_VM_DOCKER_HOSTCONFIG_MEMORY=536870912
      - CORE_CHAINCODE_BUILDER=jmotacek/fabric-ccenv:armv7l-1.0.7
      - CORE_CHAINCODE_GOLANG_RUNTIME=jmotacek/fabric-baseos:armv7l-0.3.2
      - CORE_CHAINCODE_CAR_RUNTIME=jmotacek/fabric-baseos:armv7l-0.3.2
      - CORE_CHAINCODE_JAVA=jmotacek/fabric-javaenv:armv7l-1.0.7
    working_dir: /opt/gopath/src/github.com/hyperledger/fabric/peer
    hostname: peer0.org1.example.com
    networks:
      hyperledger-fabric:
        aliases:
          - peer0.org1.example.com
    volumes:
      - /var/run/:/host/var/run/
      - /home/jmotacek/hyperledger-pi-composer/logs:/home/logs
      - /home/jmotacek/hyperledger-pi-composer/crypto-config/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/msp:/etc/hyperledger/fabric/msp
      - /home/jmotacek/hyperledger-pi-composer/crypto-config/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls:/etc/hyperledger/fabric/tls
    ports:
      - 7051:7051
      - 7053:7053
    deploy:
      placement:
        constraints:
          - node.hostname == hyperledger-swarm-master
    # You really only need the "peer node start" portion of the command below.
    # I added the external logging for convenience and demos.
    command: bash -c "peer node start > /home/logs/peer0org1log.txt 2>&1"

  # The other peers are just slightly modified permutations of the original.
  # You can see them on GitHub; there isn't anything unique to mention about each one besides the numbers changing...
  peer1_org1:
    # ...
  peer0_org2:
    # ...
  peer1_org2:
    # ...

  cli:
    image: jmotacek/fabric-tools:armv7l-1.0.7
    tty: true
    environment:
      - GOPATH=/opt/gopath
      - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
      - CORE_LOGGING_LEVEL=DEBUG
      - CORE_PEER_ID=cli
      - CORE_PEER_ADDRESS=peer0.org1.example.com:7051
      - CORE_PEER_LOCALMSPID=Org1MSP
      - CORE_PEER_TLS_ENABLED=true
      - CORE_PEER_TLS_CERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/server.crt
      - CORE_PEER_TLS_KEY_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/server.key
      - CORE_PEER_TLS_ROOTCERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/ca.crt
      - CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/users/Admin@org1.example.com/msp
      - CORE_VM_DOCKER_HOSTCONFIG_MEMORY=536870912
      - CORE_CHAINCODE_BUILDER=jmotacek/fabric-ccenv:armv7l-1.0.7
      - CORE_CHAINCODE_GOLANG=jmotacek/fabric-baseos:armv7l-0.3.2
      - CORE_CHAINCODE_CAR=jmotacek/fabric-baseos:armv7l-0.3.2
      - CORE_CHAINCODE_JAVA=jmotacek/fabric-javaenv:armv7l-1.0.7
    working_dir: /opt/gopath/src/github.com/hyperledger/fabric/peer
    # I give myself 30 seconds to start tailing the CLI container so I can watch the BYFN output.
    command: /bin/bash -c 'sleep 30; ./scripts/script.sh; while true; do sleep 20170504; done'
    volumes:
      - /var/run/:/host/var/run/
      - /home/jmotacek/hyperledger-pi-composer/chaincode:/opt/gopath/src/github.com/hyperledger/fabric/examples/chaincode/go
      - /home/jmotacek/hyperledger-pi-composer/crypto-config:/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/
      - /home/jmotacek/hyperledger-pi-composer/scripts:/opt/gopath/src/github.com/hyperledger/fabric/peer/scripts/
      - /home/jmotacek/hyperledger-pi-composer/channel-artifacts:/opt/gopath/src/github.com/hyperledger/fabric/peer/channel-artifacts
    # This prevents the CLI container from starting until all the other containers are running successfully.
    depends_on:
      - orderer
      - peer0_org1
      - peer1_org1
      - peer0_org2
      - peer1_org2
    deploy:
      placement:
        constraints:
          - node.hostname == hyperledger-swarm-master
    networks:
      hyperledger-fabric:
        aliases:
          - cli.example.com

# This is external by design. We need to use an external attached network for the generated containers to communicate.
# As HL executes chaincode it spawns containers throughout the swarm to process it. These generated containers need access to the network.
# After your BYFN run finishes, run docker ps on node 1 and node 3 and you'll see what I mean.
networks:
  hyperledger-fabric:
    external: true
Hi Joe,
your help last time was very good. I noticed you uploaded a new tutorial (Part 4), but when I clicked on it I ran into an error. I don’t know if this was on purpose, but if you are on the Part 3 link you can replace the 3 with a 4 in the URL to reach it. So I did Part 4 of the tutorial and did not get a runnable container. I ran the command
docker stack deploy --compose-file docker-compose-cli.yaml HLFv1_RPiDS && docker ps
and there is no container running. My docker-engine version is 18.05.0-ce.
Maybe you can help me with this problem.
Kind regards,
Jenna
Hi Joe, it is me again.
I fixed the problem, but ran into the next one. When I execute the command docker logs -f [id], it returns the following:
/bin/bash: ./scripts/script.sh: No such file or directory
I haven’t found the cause, and I don’t know where to look for it.
Best regards,
Jenna
Looks like you’re not the only one who’s come across this: https://stackoverflow.com/questions/45352547/hyperledger-get-bin-bash-scripts-script-sh-no-such-file-or-directory-whe
My guess is that the git repo doesn’t exist on the RasPi your CLI container is running from (or the CLI container can’t find it, in any case…). I’m not sure how many RasPis you have in your swarm, but make sure that the deployment constraint in the compose file is locked to the RasPi you are running the docker stack command from (your manager node).
cli:
…
deploy:
placement:
constraints:
- node.hostname == hyperledger-swarm-master
…
Thanks for letting me know. Looks like my link was bad (stupid ellipses)
Joe, I am having some trouble getting the BYFN to run.
I have 4 pi. Configured as:
hyperledger-swarm-master
hyperledger-swarm-node1
hyperledger-swarm-node2
hyperledger-swarm-node3
I used the images you provided.
For the compose file I only changed the username in the volumes.
When I run the deploy I get some output but no container.
mem@hyperledger-swarm-master:~/hyperledger-pi-composer $ docker stack deploy --compose-file docker-compose-cli.yaml HLFv1_RPiDS && docker ps
Creating service HLFv1_RPiDS_peer1_org1
Creating service HLFv1_RPiDS_peer0_org2
Creating service HLFv1_RPiDS_peer1_org2
Creating service HLFv1_RPiDS_cli
Creating service HLFv1_RPiDS_orderer
Creating service HLFv1_RPiDS_peer0_org1
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
mem@hyperledger-swarm-master:~/hyperledger-pi-composer $
What am I missing here? Do I need to update the ports in the compose file?
Try running
‘docker stack ps HLFv1_RPiDS’
and let me know what the output is. I’m curious if there are some error messages there.
Hi, for this problem you need to update the docker-compose.yaml volumes from /home/jmotacek to /home/pi (or whatever your device username is) for all entries.
Dear Joe,
I’ve been trying this part, and I got the containers running. However, docker logs -f shows me that the initialization encounters a problem: “Error while dialing dial tcp: lookup orderer.example.com on 127.0.0.11:53: no such host”; Reconnecting to {orderer.example.com:7050}.
I guess it’s because it’s missing a mapping between orderer.example.com to my real device IP address. Do you know where should I perform this mapping?
regards
Dear Joe,
I managed to make it work by adding “dns_search: .” to all nodes inside composer_cli.yaml.
However, I encountered an error “Caused by: context deadline exceeded” when the script tries to add a peer (i.e., execute the command peer channel join). It’s quite random (sometimes the first three nodes join the channel successfully, sometimes there is a problem directly with peer0), but I have never managed to successfully complete the “End-2-End Scenario”.
Hope you have an idea,
regards,
Hi,
Did you solve this problem??
regards
Hi Kevin,
Did you solve that issue? I have the same problem.
regards
Hi Joe, I ran your 4 tutorials with success and I’m really thankful.
I am working on a project at the moment where I am sending measured data (via temperature sensors). The data are sent to the smart contract (chaincode) using Hyperledger Fabric. I succeeded in running my own chaincode which validates measured temperature values on 3 Pis. The problem I have is that I want the data sent by the Pis every second. The temperature sensors use local binaries, which are not part of the docker repository (see https://tutorials-raspberrypi.com/raspberry-pi-temperature-sensor-1wire-ds18b20/).
So, I know you know docker pretty well, and maybe you have a solution for this. This question doesn’t really belong to this tutorial, but I didn’t know how else to contact you.
Kind regards,
Jenna
In addition, I have also written scripts which send the data, but I have problems incorporating the values derived from the hardware. The script in the tutorial (see the link) uses the local data in /sys/bus/w1/devices/.
Kind regards 2.0,
Jenna
Brilliant. However, it’s a bit racy still. I’ve managed to run the end-to-end once successfully, which is a blessing, but most of the time it craps out with: “Query result on PEER0 is INVALID.” I’ll keep the tail of the trace, just in case you want it. Ta!
Hey Joe, I am trying to install Fabric 1.4 on the Raspberry Pi, but apparently while building the base image on the Pi it runs out of memory and the process stops. Can you please help me with any alternative solution to this?
What was the error message from the build?
Hi. I ran all of your tutorials and succeeded with all of them. Thanks.
I am trying to make a Node client. In order to make the client, I know I have to create a CA server so it can communicate with the Hyperledger Fabric network. This is possible on amd64 (64-bit); however, I wonder if it is possible on a 32-bit ARMv7 OS (Raspberry Pi). Is there any example of how to construct a Node client module?
I have built standalone Node applications on a RasPi before, so it is possible.
Hi, I am a newbie with Hyperledger, and I have already tried all the steps given, and all worked perfectly, BUT… in the last step when I try to use this:
docker stack deploy --compose-file docker-compose-cli.yaml HLFv1_RPiDS && docker ps
it did not work, as shown here:
Creating service HLFv1_RPiDS_peer1_org2
Creating service HLFv1_RPiDS_cli
Creating service HLFv1_RPiDS_orderer
Creating service HLFv1_RPiDS_peer0_org1
Creating service HLFv1_RPiDS_peer1_org1
Creating service HLFv1_RPiDS_peer0_org2
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
no container ID popped up.
So, can you help me with this issue?
Thanks
They probably crashed for some reason. Try “docker ps -a”.