[Hyperledger Fabric] FastFabric - 20,000tps Q/A
[하마] 이승현 (wowlsh93@gmail.com) 2019. 11. 26. 12:31
FastFabric: Scaling Hyperledger Fabric to 20,000 Transactions per Second
Below we share our questions and the authors' answers about the paper above.
Questions to U.Waterloo
Improvement 1: Orderer - Separate transaction header from payload
- You stated that we should reduce the amount of data sent from the Orderer to Kafka by keeping the RWSet at the Orderer and sending only the transaction's header to Kafka
- This works well when there is only one Orderer, but when there are multiple Orderers to distribute the load, how can the Orderer that made the block retrieve the transaction bodies from the other Orderers?
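For illustration only, a minimal Go sketch of how we read this split: the orderer keeps the heavy payload (the RWSet) in a local map keyed by transaction ID and forwards only the small header to the ordering backend. All names here (TxHeader, Orderer, sendToOrderingService) are hypothetical, not FastFabric's actual code; the sketch also makes the question above concrete, since with several orderers each payload lives only on the orderer that received it.

```go
package main

import (
	"fmt"
	"sync"
)

// Hypothetical split of a transaction into the part the ordering service
// needs (the header) and the part it does not (the payload with the RWSet).
type TxHeader struct {
	TxID string
}

type TxPayload struct {
	RWSet []byte
}

// The orderer keeps payloads locally, keyed by TxID, and forwards only headers.
type Orderer struct {
	mu       sync.Mutex
	payloads map[string]TxPayload
}

func NewOrderer() *Orderer {
	return &Orderer{payloads: make(map[string]TxPayload)}
}

// Broadcast stores the payload locally and sends just the small header to the
// ordering backend (Kafka in the paper's setup).
func (o *Orderer) Broadcast(h TxHeader, p TxPayload) {
	o.mu.Lock()
	o.payloads[h.TxID] = p
	o.mu.Unlock()
	sendToOrderingService(h) // only the header travels through Kafka
}

// Reassemble joins ordered headers back with their payloads to cut a block.
func (o *Orderer) Reassemble(ordered []TxHeader) [][]byte {
	o.mu.Lock()
	defer o.mu.Unlock()
	block := make([][]byte, 0, len(ordered))
	for _, h := range ordered {
		block = append(block, o.payloads[h.TxID].RWSet)
	}
	return block
}

func sendToOrderingService(h TxHeader) {
	fmt.Println("ordering header for tx", h.TxID)
}

func main() {
	o := NewOrderer()
	o.Broadcast(TxHeader{TxID: "tx1"}, TxPayload{RWSet: []byte("rwset-1")})
	fmt.Println(len(o.Reassemble([]TxHeader{{TxID: "tx1"}})), "tx reassembled")
}
```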
Improvement 2: Orderer - Process transactions in parallel
- We think it's a good idea to distribute transactions across multiple threads inside the Orderer, as this utilizes the hardware more efficiently
- However, the disruptor pattern (https://lmax-exchange.github.io/disruptor/) may be faster for this task. What are your thoughts on applying this pattern?
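As a point of comparison, here is a minimal worker-pool sketch built on a plain Go channel; the disruptor pattern would replace the channel with a pre-allocated ring buffer. The names and the per-transaction work are illustrative only.

```go
package main

import (
	"fmt"
	"sync"
)

// Minimal fan-out: the orderer pushes incoming transactions onto one channel
// and N workers consume it concurrently.
func main() {
	const workers = 4
	txs := make(chan string, 128)
	var wg sync.WaitGroup

	for i := 0; i < workers; i++ {
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			for tx := range txs {
				// per-transaction work (e.g. header checks) would happen here
				fmt.Printf("worker %d processed %s\n", id, tx)
			}
		}(i)
	}

	for i := 0; i < 10; i++ {
		txs <- fmt.Sprintf("tx-%d", i)
	}
	close(txs)
	wg.Wait()
}
```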
Improvement 3: Peer - Replacing the world state database with a hash table
- As we understood, you wanted to replace LevelDB with a completely in-memory lightweight structure like a hash table. We think this is a good improvement, but it comes with its own problems:
- What if the stateDB grows in size to be more than the machine's memory?
  - There must be a mechanism to save memory to disk, change the hardware, and boot up the node again
  - Physically upgrading the node in this way costs a lot of time and resources
- What if the node goes down suddenly? All data stored in memory would be lost, whereas with LevelDB it could be partly recovered from disk.
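To make the trade-off concrete, here is a minimal sketch of the kind of in-memory hash-table state store under discussion, with a RWMutex-guarded map standing in for LevelDB. It is purely illustrative: nothing in it survives a crash unless the state is also persisted elsewhere, which is exactly the concern raised above.

```go
package main

import (
	"fmt"
	"sync"
)

// StateDB is an in-memory world-state store: fast, but volatile.
type StateDB struct {
	mu    sync.RWMutex
	state map[string][]byte
}

func NewStateDB() *StateDB {
	return &StateDB{state: make(map[string][]byte)}
}

func (db *StateDB) Get(key string) ([]byte, bool) {
	db.mu.RLock()
	defer db.mu.RUnlock()
	v, ok := db.state[key]
	return v, ok
}

func (db *StateDB) Put(key string, value []byte) {
	db.mu.Lock()
	defer db.mu.Unlock()
	db.state[key] = value
}

func main() {
	db := NewStateDB()
	db.Put("acct1", []byte("100"))
	if v, ok := db.Get("acct1"); ok {
		fmt.Println("acct1 =", string(v))
	}
}
```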
Improvement 4: Peer - Store blocks using a cluster
- As we understood, you suggest peers should store files in a Hadoop or Spark cluster
- We think this will greatly increase the complexity of the Peer with little benefit, since State DB storage has more impact on TPS than block storage. Do you have any reason for this suggestion other than increasing block write speed?
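A small, hypothetical sketch of how block storage could stay pluggable behind an interface, so a local LevelDB-backed store and an external Hadoop/Spark-backed storage server become interchangeable deployments rather than changes to the peer itself. The interface and type names are our own, not Fabric's.

```go
package main

import "fmt"

type Block struct {
	Number uint64
	Data   []byte
}

// BlockStore hides the storage backend from the peer.
type BlockStore interface {
	Append(b Block) error
}

// localStore stands in for the default LevelDB-backed file store.
type localStore struct{ blocks []Block }

func (s *localStore) Append(b Block) error {
	s.blocks = append(s.blocks, b)
	return nil
}

// remoteStore stands in for an external storage server (e.g. HDFS/Spark) that
// can lag behind and catch up when the peer is not at peak throughput; the
// queue would be drained by a forwarder goroutine in a real deployment.
type remoteStore struct{ queue chan Block }

func (s *remoteStore) Append(b Block) error {
	s.queue <- b
	return nil
}

func main() {
	var store BlockStore = &localStore{}
	_ = store.Append(Block{Number: 1, Data: []byte("block-1")})
	fmt.Println("block appended")
}
```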
Improvement 5: Peer - Separate commitment and endorsement
- We agree with splitting committer and endorser into different servers since this will reduce context switching.
- We will make the endorser's DB a read-only replica of the committer's State DB as designed after Improvement 3.
- This approach comes with committer - endorser synchronization overhead (due to the need to use networking to synchronize states). We think more testing is needed to see if this overhead is worth the benefits.
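A rough sketch, with made-up names, of the replication idea: the committer applies each write locally and streams it to the endorser, which only ever reads its copy. An in-process channel stands in for the network hop that causes the synchronization overhead mentioned above.

```go
package main

import (
	"fmt"
	"sync"
)

type kv struct {
	key   string
	value []byte
}

// replica is a simple keyed state copy; the committer writes to its own,
// the endorser's copy is only read from.
type replica struct {
	mu    sync.RWMutex
	state map[string][]byte
}

func newReplica() *replica { return &replica{state: make(map[string][]byte)} }

func (r *replica) apply(u kv) {
	r.mu.Lock()
	defer r.mu.Unlock()
	r.state[u.key] = u.value
}

func (r *replica) read(key string) []byte {
	r.mu.RLock()
	defer r.mu.RUnlock()
	return r.state[key]
}

func main() {
	committer := newReplica()
	endorser := newReplica()
	updates := make(chan kv, 16) // stands in for the committer->endorser network link
	var wg sync.WaitGroup

	// Endorser side: consume the update stream into its read-only replica.
	wg.Add(1)
	go func() {
		defer wg.Done()
		for u := range updates {
			endorser.apply(u)
		}
	}()

	// Committer side: commit a write locally, then forward it to the endorser.
	committer.apply(kv{key: "acct1", value: []byte("100")})
	updates <- kv{key: "acct1", value: []byte("100")}
	close(updates)
	wg.Wait()

	fmt.Println("endorser sees acct1 =", string(endorser.read("acct1")))
}
```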
Improvement 6: Peer - Parallelize validation
- From what we've read, this means parallelizing the packet consistency check, the signature check, and the RWSet correctness check.
- We see there's a potential for double spending when we process transactions to the same account in parallel. Do you have any solution for this without impacting performance?
- Also, concurrent processing comes with context-switching overhead; will this be a problem when the system is under load?
- We intend to offload signature verification to hardware using FPGA or ASIC, do you think this will result in good improvements in processing speed?
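A minimal sketch of the two-phase idea discussed here (not FastFabric's actual code): signature checks run in parallel because they are independent per transaction, while the RWSet/MVCC check stays sequential in block order so that two transactions writing the same key cannot both commit.

```go
package main

import (
	"fmt"
	"sync"
)

type tx struct {
	id    string
	key   string
	valid bool // result of the (parallel) signature check
}

// verifySignature stands in for the expensive crypto verification.
func verifySignature(t *tx) {
	t.valid = true
}

func main() {
	block := []*tx{
		{id: "tx1", key: "acct1"},
		{id: "tx2", key: "acct1"},
		{id: "tx3", key: "acct2"},
	}

	// Phase 1: crypto verification in parallel.
	var wg sync.WaitGroup
	for _, t := range block {
		wg.Add(1)
		go func(t *tx) {
			defer wg.Done()
			verifySignature(t)
		}(t)
	}
	wg.Wait()

	// Phase 2: RWSet check in block order, sequentially, so a key written by
	// an earlier tx invalidates later txs in the same block (no double spend).
	written := make(map[string]bool)
	for _, t := range block {
		if t.valid && !written[t.key] {
			written[t.key] = true
			fmt.Println(t.id, "committed")
		} else {
			fmt.Println(t.id, "rejected")
		}
	}
}
```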
Improvement 7: Peer - Cache marshaled blocks
- We mostly agree with this, since it makes the cache more versatile and reduces memory allocation.
- We will benchmark this to see how much performance improvement can be gained without increasing the complexity of the system.
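A small illustrative sketch of the caching idea: keep the decoded form next to the raw bytes so repeated consumers (validation, gossip, ledger write) do not each pay the deserialization cost again. JSON stands in for Fabric's protobuf here, and all names are hypothetical.

```go
package main

import (
	"encoding/json"
	"fmt"
	"sync"
)

type payload struct {
	TxID string `json:"tx_id"`
}

// cachedBlock decodes its raw bytes at most once and reuses the result.
type cachedBlock struct {
	raw       []byte
	once      sync.Once
	decoded   payload
	decodeErr error
}

func (b *cachedBlock) Payload() (payload, error) {
	b.once.Do(func() {
		b.decodeErr = json.Unmarshal(b.raw, &b.decoded)
	})
	return b.decoded, b.decodeErr
}

func main() {
	b := &cachedBlock{raw: []byte(`{"tx_id":"tx1"}`)}
	for i := 0; i < 3; i++ { // unmarshalled only on the first call
		p, _ := b.Payload()
		fmt.Println("read", p.TxID)
	}
}
```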
Response from U.Waterloo
- We found that Kafka got exorbitantly slower for bigger message sizes. That is why we split off the body from the tx header. For multiple orderers, each would need to broadcast their tx bodies to other orderers out of band of Kafka, which we found to be faster in preliminary tests, but we didn't fully implement the broadcast. Note that this might be obsolete now that Fabric is using Raft.
- I’m not very familiar with the disruptor, but it seems to me that this is an orthogonal optimization. Currently, all orderer cores but one are idle, so we definitely need to optimize for concurrent execution. It seems to me that the disruptor is simply managing access to the tx queue. Therefore it would need to be tested if the disruptor is faster than using a go channel for this.
- For most reasonable workloads a peer's memory should be able to store billions of keys in memory. If this is not enough, some paging mechanism would need to be implemented, but we did not address this in our paper. Peers are still writing to disk, they just outsource the task to a secondary storage server. Therefore the latest persisted state could be recovered from there. We did not check if Fabric's recovery mechanisms still work with the external storage, but it should be "relatively" easy to reroute recovery to the storage server.
- We suggest Hadoop or Spark as a possibility to be able to easily do data analytics on the peer backend. However, it is not necessary and in our proof of concept implementation we still use LevelDB (the storage server simply has to catch up to the fast peer whenever it is not running at maximum throughput)
- After implementing all our improvements, the throughput was still CPU bound (crypto computations), so we didn’t feel the network overhead yet.
- In FastFabric, we only parallelize the crypto verification, RWset correctness validation is still done sequentially to prevent double spending. In our most current work (https://arxiv.org/abs/1906.11229) we add a tx dependency analyzer to be able to parallelize RWset correctness validation for independent txs as well. I think offloading crypto to specialized hardware is the way to go forward to increase performance further.
- Using an unmarshalling cache as described in the paper is a workaround for better performance so we didn't need to completely rewrite Fabric. In our most current implementation, each block caches its own unmarshalled data, so we don't need a centralized cache anymore (so no potential locking issues with multiple threads reading/writing blocks). Ideally, the unmarshalling would happen directly at the network interfaces and, internally, Fabric would only deal with domain model objects.
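As a footnote to the response on parallel validation: a rough, hypothetical sketch of what a transaction dependency check could look like, treating two transactions as independent (and therefore safe to validate in parallel) when their read/write key sets do not overlap. This is our own illustration, not the analyzer from the linked paper.

```go
package main

import "fmt"

type txRWSet struct {
	id     string
	reads  []string
	writes []string
}

// conflicts reports whether b depends on a: any write of one overlapping a
// read or write of the other means they must be validated in order.
func conflicts(a, b txRWSet) bool {
	touched := make(map[string]bool)
	for _, k := range a.reads {
		touched[k] = true
	}
	for _, k := range a.writes {
		touched[k] = true
	}
	for _, k := range b.writes {
		if touched[k] {
			return true
		}
	}
	for _, k := range b.reads {
		for _, w := range a.writes {
			if k == w {
				return true
			}
		}
	}
	return false
}

func main() {
	t1 := txRWSet{id: "tx1", reads: []string{"acct1"}, writes: []string{"acct1"}}
	t2 := txRWSet{id: "tx2", reads: []string{"acct2"}, writes: []string{"acct2"}}
	fmt.Println("tx1/tx2 conflict:", conflicts(t1, t2)) // false -> parallel OK
}
```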
Facebook: Enterprise Blockchain Group