Multi-machine deployment of an orderer cluster

0. Notes:

There are two ways to deploy orderer nodes: solo and kafka. Solo (single machine) is straightforward, so this note focuses on the kafka (distributed queue) deployment.

First, plan out the machine, node, and container layout.

orderer: 3 nodes
zookeeper: 3, 5, or 7 nodes; keeping it simple here, we start 3
kafka: 4 nodes is best, since with 4 brokers the cluster tolerates one broker failure (in this example only 3 brokers are configured because we are short on machines; treat it as if the fourth one has gone down...)

Be clear about the startup order: the supporting containers come up first.

So the startup order is: zookeeper, kafka, orderer, peer.
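For orientation, the three hosts used throughout this walkthrough are 192.168.3.231, 192.168.3.232 and 192.168.3.233 (as they appear in the extra_hosts entries of the compose files below). A minimal sanity check before starting anything, assuming these IPs; adjust the list to your own environment:

# Hypothetical reachability check: confirm every host in the cluster answers from this machine
for host in 192.168.3.231 192.168.3.232 192.168.3.233; do
  ping -c 1 "$host" > /dev/null && echo "$host reachable" || echo "$host NOT reachable"
done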

1. Generate the network configuration and certificate files

Edit the network configuration file configtx.yaml (only the orderer-related settings in this file need to change):

Orderer: &OrdererDefaults

    # Orderer Type: The orderer implementation to start
    # Available types are "solo" and "kafka"
    OrdererType: kafka

    Addresses:
        - orderer0.example.com:7050
        - orderer1.example.com:7050
        - orderer2.example.com:7050

    # Batch Timeout: The amount of time to wait before creating a batch
    BatchTimeout: 2s

    # Batch Size: Controls the number of messages batched into a block
    BatchSize:

        # Max Message Count: The maximum number of messages to permit in a batch
        MaxMessageCount: 10

        # Absolute Max Bytes: The absolute maximum number of bytes allowed for
        # the serialized messages in a batch.
        AbsoluteMaxBytes: 98 MB

        # Preferred Max Bytes: The preferred maximum number of bytes allowed for
        # the serialized messages in a batch. A message larger than the preferred
        # max bytes will result in a batch larger than preferred max bytes.
        PreferredMaxBytes: 1024 KB

    Kafka:
        # Brokers: A list of Kafka brokers to which the orderer connects. Edit
        # this list to identify the brokers of the ordering service.
        # NOTE: Use IP:port notation.
        Brokers:
            - 192.168.3.231:9092
            - 192.168.3.231:10092
            - 192.168.3.232:9092
            - 192.168.3.233:9092

    # Organizations is the list of orgs which are defined as participants on
    # the orderer side of the network
    Organizations:

Edit crypto-config.yaml to generate the certificates for each orderer (again, only the orderer-related settings need to change):

OrdererOrgs:
  # ---------------------------------------------------------------------------
  # Orderer
  # ---------------------------------------------------------------------------
  - Name: Orderer
    Domain: example.com
    CA:
        Country: US
        Province: California
        Locality: San Francisco
    # ---------------------------------------------------------------------------
    # "Specs" - See PeerOrgs below for complete description
    # ---------------------------------------------------------------------------
    Specs:
      - Hostname: orderer0
      - Hostname: orderer1
      - Hostname: orderer2

Use the generate script to produce the artifacts automatically:

./generateArtifacts.sh mychannel
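If you prefer not to use the wrapper script, the same artifacts can be produced by calling cryptogen and configtxgen directly. The sketch below assumes the default e2e_cli profile names (TwoOrgsOrdererGenesis / TwoOrgsChannel); check your configtx.yaml for the actual profile names before running it:

# configtxgen looks for configtx.yaml in FABRIC_CFG_PATH
export FABRIC_CFG_PATH=$PWD
mkdir -p channel-artifacts

# Generate MSP/TLS material for all orderers and peers defined in crypto-config.yaml
cryptogen generate --config=./crypto-config.yaml

# Genesis block for the kafka-based ordering service (profile name is an assumption)
configtxgen -profile TwoOrgsOrdererGenesis -outputBlock ./channel-artifacts/genesis.block

# Channel creation transaction
configtxgen -profile TwoOrgsChannel -outputCreateChannelTx ./channel-artifacts/channel.tx -channelID mychannel

# Optional: inspect the genesis block and confirm the Kafka broker list was picked up
configtxgen -inspectBlock ./channel-artifacts/genesis.block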

2. Write the configuration files for the zookeeper, kafka, orderer, and peer nodes

Only examples are given here; the details of each setting are not explained.

docker-compose-zookeeper.yaml:

version: '2'

services:
  zookeeper1:
    container_name: zookeeper1
    hostname: zookeeper1
    image: hyperledger/fabric-zookeeper
    restart: always
    environment:
      - ZOO_MY_ID=1
      - ZOO_SERVERS=server.1=zookeeper1:2888:3888 server.2=zookeeper2:2888:3888 server.3=zookeeper3:2888:3888 quorumListenOnAllIPs=true
    ports:
      - "2181:2181"
      - "2888:2888"
      - "3888:3888"
    extra_hosts:
      - "zookeeper1:192.168.3.231"
      - "zookeeper2:192.168.3.232"
      - "zookeeper3:192.168.3.233"
      - "kafka1:192.168.3.231"
      - "kafka2:192.168.3.231"
      - "kafka3:192.168.3.232"
      - "kafka4:192.168.3.233"

docker-compose-kafka.yaml:

version: '2'

services:
  kafka1:
    container_name: kafka1
    hostname: kafka1
    image: hyperledger/fabric-kafka
    restart: always
    environment:
      - KAFKA_BROKER_ID=1
      - KAFKA_MIN_INSYNC_REPLICAS=2
      - KAFKA_DEFAULT_REPLICATION_FACTOR=3
      - KAFKA_ZOOKEEPER_CONNECT=zookeeper1:2181,zookeeper2:2181,zookeeper3:2181
      - KAFKA_UNCLEAN_LEADER_ELECTION_ENABLE=false
    ports:
      - "9092:9092"
    extra_hosts:
      - "zookeeper1:192.168.3.231"
      - "zookeeper2:192.168.3.232"
      - "zookeeper3:192.168.3.233"
      - "kafka1:192.168.3.231"
      - "kafka2:192.168.3.232"
      - "kafka3:192.168.3.233"

docker-compose-orderer.yaml:

version: '2'

services:
  orderer0.example.com:
    container_name: orderer0.example.com
    image: hyperledger/fabric-orderer
    environment:
      - CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=e2e_cli_default
      - ORDERER_GENERAL_LOGLEVEL=debug
      - ORDERER_GENERAL_LISTENADDRESS=0.0.0.0
      - ORDERER_GENERAL_LISTENPORT=7050
      - ORDERER_GENERAL_GENESISMETHOD=file
      - ORDERER_GENERAL_GENESISFILE=/var/hyperledger/orderer/orderer.genesis.block
      - ORDERER_GENERAL_LOCALMSPID=OrdererMSP
      - ORDERER_GENERAL_LOCALMSPDIR=/var/hyperledger/orderer/msp
      # enabled TLS
      - ORDERER_GENERAL_TLS_ENABLED=true
      - ORDERER_GENERAL_TLS_PRIVATEKEY=/var/hyperledger/orderer/tls/server.key
      - ORDERER_GENERAL_TLS_CERTIFICATE=/var/hyperledger/orderer/tls/server.crt
      - ORDERER_GENERAL_TLS_ROOTCAS=[/var/hyperledger/orderer/tls/ca.crt]
      - ORDERER_KAFKA_RETRY_LONGINTERVAL=10s
      - ORDERER_KAFKA_RETRY_LONGTOTAL=100s
      - ORDERER_KAFKA_RETRY_SHORTINTERVAL=1s
      - ORDERER_KAFKA_RETRY_SHORTTOTAL=30s
      - ORDERER_KAFKA_VERBOSE=true
      - ORDERER_KAFKA_BROKERS=[192.168.3.231:9092,192.168.3.231:10092,192.168.3.232:9092,192.168.3.233:9092]
    working_dir: /opt/gopath/src/github.com/hyperledger/fabric
    command: orderer
    volumes:
      - ./channel-artifacts/genesis.block:/var/hyperledger/orderer/orderer.genesis.block
      - ./crypto-config/ordererOrganizations/example.com/orderers/orderer0.example.com/msp:/var/hyperledger/orderer/msp
      - ./crypto-config/ordererOrganizations/example.com/orderers/orderer0.example.com/tls/:/var/hyperledger/orderer/tls
    ports:
      - 7050:7050
    extra_hosts:
      - "kafka1:192.168.3.231"
      - "kafka2:192.168.3.232"
      - "kafka3:192.168.3.233"

docker-compose-peer.yaml:

version: '2'

services:
  ca0:
    image: hyperledger/fabric-ca
    environment:
      - FABRIC_CA_HOME=/etc/hyperledger/fabric-ca-server
      - FABRIC_CA_SERVER_CA_NAME=ca-org1
      - FABRIC_CA_SERVER_TLS_ENABLED=true
      - FABRIC_CA_SERVER_TLS_CERTFILE=/etc/hyperledger/fabric-ca-server-config/ca.org1.example.com-cert.pem
      - FABRIC_CA_SERVER_TLS_KEYFILE=/etc/hyperledger/fabric-ca-server-config/b8db7518d78a9e26d1441fd63268c9d7cac7ccc23cedf9ec78c9676699b85d5c_sk
    ports:
      - "7054:7054"
    command: sh -c 'fabric-ca-server start --ca.certfile /etc/hyperledger/fabric-ca-server-config/ca.org1.example.com-cert.pem --ca.keyfile /etc/hyperledger/fabric-ca-server-config/b8db7518d78a9e26d1441fd63268c9d7cac7ccc23cedf9ec78c9676699b85d5c_sk -b admin:adminpw -d'
    volumes:
      - ./crypto-config/peerOrganizations/org1.example.com/ca/:/etc/hyperledger/fabric-ca-server-config
    container_name: ca_peerOrg1

  peer0.org1.example.com:
    container_name: peer0.org1.example.com
    extends:
      file: base/peer-base.yaml
      service: peer-base
    environment:
      - CORE_PEER_ID=peer0.org1.example.com
      - CORE_PEER_ADDRESS=peer0.org1.example.com:7051
      - CORE_PEER_CHAINCODEADDRESS=peer0.org1.example.com:7052
      - CORE_PEER_CHAINCODELISTENADDRESS=0.0.0.0:7052
      - CORE_PEER_GOSSIP_EXTERNALENDPOINT=peer0.org1.example.com:7051
      - CORE_PEER_LOCALMSPID=Org1MSP
    volumes:
      - /var/run/:/host/var/run/
      - ./crypto-config/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/msp:/etc/hyperledger/fabric/msp
      - ./crypto-config/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls:/etc/hyperledger/fabric/tls
    ports:
      - 7051:7051
      - 7052:7052
      - 7053:7053
    # Add the hostname-to-IP bindings of the other nodes here (apparently only the machine
    # that bootstraps the network needs this; the later ones do not).
    # Without these entries the peer log shows it cannot reach the other nodes (no idea why...).
    # It is said that in kafka cluster mode this property has no effect, so it is advisable
    # to configure the local hosts file as well (which is what I did).
    extra_hosts:
      - "peer1.org1.example.com:IP"
      - "peer0.org2.example.com:IP"
      - "peer1.org2.example.com:IP"

  peer1.org1.example.com:
    container_name: peer1.org1.example.com
    extends:
      file: base/peer-base.yaml
      service: peer-base
    environment:
      - CORE_PEER_ID=peer1.org1.example.com
      - CORE_PEER_ADDRESS=peer1.org1.example.com:7051
      - CORE_PEER_CHAINCODEADDRESS=peer1.org1.example.com:7052
      - CORE_PEER_CHAINCODELISTENADDRESS=0.0.0.0:7052
      - CORE_PEER_GOSSIP_EXTERNALENDPOINT=peer1.org1.example.com:7051
      - CORE_PEER_LOCALMSPID=Org1MSP
    volumes:
      - /var/run/:/host/var/run/
      - ./crypto-config/peerOrganizations/org1.example.com/peers/peer1.org1.example.com/msp:/etc/hyperledger/fabric/msp
      - ./crypto-config/peerOrganizations/org1.example.com/peers/peer1.org1.example.com/tls:/etc/hyperledger/fabric/tls
    ports:
      - 8051:7051
      - 8052:7052
      - 8053:7053

  cli:
    container_name: cli
    image: hyperledger/fabric-tools
    tty: true
    environment:
      - GOPATH=/opt/gopath
      - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
      - CORE_LOGGING_LEVEL=DEBUG
      - CORE_PEER_ID=cli
      - CORE_PEER_ADDRESS=peer0.org1.example.com:7051
      - CORE_PEER_LOCALMSPID=Org1MSP
      - CORE_PEER_TLS_ENABLED=true
      - CORE_PEER_TLS_CERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/server.crt
      - CORE_PEER_TLS_KEY_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/server.key
      - CORE_PEER_TLS_ROOTCERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/ca.crt
      - CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/users/Admin@org1.example.com/msp
    working_dir: /opt/gopath/src/github.com/hyperledger/fabric/peer
    #command: /bin/bash -c './scripts/script.sh ${CHANNEL_NAME}; sleep $TIMEOUT'
    volumes:
      - /var/run/:/host/var/run/
      - ../chaincode/go/:/opt/gopath/src/github.com/hyperledger/fabric/examples/chaincode/go
      - ./crypto-config:/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/
      - ./scripts:/opt/gopath/src/github.com/hyperledger/fabric/peer/scripts/
      - ./channel-artifacts:/opt/gopath/src/github.com/hyperledger/fabric/peer/channel-artifacts
    extra_hosts:
      - "orderer0.example.com:192.168.3.231"
      - "orderer1.example.com:192.168.3.232"
      - "orderer2.example.com:192.168.3.233"
      - "peer0.org1.example.com:192.168.3.231"
      - "peer1.org1.example.com:192.168.3.231"
      - "peer0.org2.example.com:192.168.3.232"
      - "peer1.org2.example.com:192.168.3.233"

Distribute the configuration files to the other servers:

scp -r e2e_cli root@192.168.1.1:/tmp
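Since every machine needs the same e2e_cli directory, a small loop saves some typing (a sketch; the host list and target path are assumptions based on this example):

# Copy the whole e2e_cli directory to every host in the cluster
for host in 192.168.3.231 192.168.3.232 192.168.3.233; do
  scp -r e2e_cli root@"$host":/tmp
done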

3. Fine-tune the configuration files distributed to each node, then start the containers

First, start zookeeper on each node:

docker-compose -f docker-compose-zookeeper.yaml up -d
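Before moving on to kafka, it is worth confirming that the ensemble has actually formed a quorum. One option is the four-letter "srvr" command, a minimal sketch assuming the 2181 port mapping above, that nc is installed, and that the four-letter-word commands are not disabled; each node should report Mode: leader or Mode: follower:

# Check the role of each zookeeper node; fall back to the container logs if anything looks off
for host in 192.168.3.231 192.168.3.232 192.168.3.233; do
  echo "srvr" | nc "$host" 2181 | grep Mode
done
docker logs zookeeper1 2>&1 | tail -n 20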

Then start kafka on each node:

docker-compose -f docker-compose-kafka.yaml up -d
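To verify that the brokers registered themselves, check whether their IDs appear under /brokers/ids in zookeeper. The snippet below is a sketch that assumes zkCli.sh is on the PATH inside the fabric-zookeeper container (it ships with the ZooKeeper distribution); watching the kafka container logs works just as well:

# Broker IDs registered in zookeeper (expect one entry per running broker)
docker exec zookeeper1 zkCli.sh -server localhost:2181 ls /brokers/ids

# Or scan a broker's log for startup errors
docker logs kafka1 2>&1 | tail -n 50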

Start the orderer on each node:

docker-compose -f docker-compose-orderer.yaml up -d
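The orderer only becomes usable once it has connected to the Kafka cluster, so check its log before starting the peers (a rough check; the exact log wording varies between Fabric versions):

# With ORDERER_KAFKA_VERBOSE=true the orderer logs its Kafka client activity;
# look for messages showing the brokers were reached rather than repeated retry errors
docker logs orderer0.example.com 2>&1 | grep -i kafka | tail -n 20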

Start the peers on each machine according to the consortium layout:

docker-compose -f docker-compose-peer.yaml up -d
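At this point every machine should show its expected set of containers:

# Quick overview of what is running on this host
docker ps --format 'table {{.Names}}\t{{.Image}}\t{{.Status}}'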

4. Use the cli to build the network, channel, and chaincode, and run a test

docker exec -it cli bash
./scripts/script.sh
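script.sh is the e2e_cli end-to-end script: roughly, it creates the channel, joins the peers, installs and instantiates the chaincode, and then invokes and queries it. If you would rather drive the steps by hand from inside the cli container, the sketch below shows their general shape; the chaincode name, path, endorsement policy, and the ORDERER_CA location are assumptions based on the default example (the --cafile must point at the orderer's TLS CA cert under the mounted crypto material), and older Fabric versions may expect an explicit value after --tls.

# TLS CA certificate of the orderer org (typical location under the mounted crypto-config)
ORDERER_CA=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer0.example.com/msp/tlscacerts/tlsca.example.com-cert.pem

# Create the channel against one of the orderers, then join the current peer (peer0.org1)
peer channel create -o orderer0.example.com:7050 -c mychannel -f ./channel-artifacts/channel.tx --tls --cafile $ORDERER_CA
peer channel join -b mychannel.block

# Install and instantiate the example chaincode
peer chaincode install -n mycc -v 1.0 -p github.com/hyperledger/fabric/examples/chaincode/go/chaincode_example02
peer chaincode instantiate -o orderer0.example.com:7050 --tls --cafile $ORDERER_CA -C mychannel -n mycc -v 1.0 \
    -c '{"Args":["init","a","100","b","200"]}' -P "OR ('Org1MSP.member','Org2MSP.member')"

# Query and invoke to verify the network end to end
peer chaincode query -C mychannel -n mycc -c '{"Args":["query","a"]}'
peer chaincode invoke -o orderer0.example.com:7050 --tls --cafile $ORDERER_CA -C mychannel -n mycc -c '{"Args":["invoke","a","b","10"]}'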
