#5 frontend storage performance test

Closed
l1dev opened this issue 3 years ago · 1 comment
l1dev commented 3 years ago

website for client side performance auditing of storage nodes

For example: X storage objects are selected, then the website tries to fetch each object from every provider, records the latency and download speed of each fetch, and shows the results in a table or charts. Everything runs client side; only the query node is needed to get basic info from the chain.

  • pick some random number of objects (~50, configurable)
  • take more than one measurement per provider to even out random variability from one request to another
  • allow the user to redo the analysis with a chosen number of objects (see the sketch after this list)
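A minimal sketch of what a single client-side measurement could look like, assuming a browser environment; the `/asset/<objectId>` path and the `Measurement` shape are illustrative assumptions, not the actual storage node API:

```
// Hypothetical sketch of one client-side measurement. The asset path and
// Measurement shape are assumptions for illustration only.
interface Measurement {
  provider: string;
  latencyMs: number;     // time until response headers arrive
  downSpeedMbps: number; // payload bits / transfer time
}

async function measureObject(providerUrl: string, objectId: string): Promise<Measurement> {
  const start = performance.now();
  const res = await fetch(`${providerUrl}/asset/${objectId}`); // assumed endpoint layout
  const headersMs = performance.now() - start;                 // fetch resolves once headers arrive
  const body = await res.arrayBuffer();
  const totalMs = performance.now() - start;
  const mbits = (body.byteLength * 8) / 1e6;
  return {
    provider: providerUrl,
    latencyMs: headersMs,
    downSpeedMbps: mbits / Math.max((totalMs - headersMs) / 1000, 1e-3),
  };
}

// Run each measurement several times per provider to smooth out variance.
async function measureRepeated(providerUrl: string, objectId: string, runs = 3): Promise<Measurement[]> {
  const out: Measurement[] = [];
  for (let i = 0; i < runs; i++) out.push(await measureObject(providerUrl, objectId));
  return out;
}
```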

# presentation

visualize results with graphs

# performance

use [hydra](https://hydra.joystream.org/graphql)

```
query {
  workers(where: {metadata_contains: "http", isActive_eq: true, type_eq: STORAGE}){
    metadata
  }
}
```
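A minimal sketch of running this query from the browser against the Hydra endpoint linked above; the response shape is assumed to follow the query:

```
// Sketch: fetch active storage workers' metadata from the Hydra query node.
// Response shape is assumed from the query above.
const QUERY = `query {
  workers(where: {metadata_contains: "http", isActive_eq: true, type_eq: STORAGE}) {
    metadata
  }
}`;

async function fetchStorageProviders(): Promise<string[]> {
  const res = await fetch("https://hydra.joystream.org/graphql", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ query: QUERY }),
  });
  const { data } = await res.json();
  // metadata is expected to contain the provider's HTTP endpoint
  return data.workers.map((w: { metadata: string }) => w.metadata);
}
```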

# future

Files are not split up, so there is no erasure coding (where a file is split into N pieces and any M < N of them suffice to reconstruct it). We would do pure redundant replication: the same file is stored in full x times, but not by all storage providers as is done currently.
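As a rough illustration of the trade-off (all numbers are made up): replication with factor x costs x times the file size, while an (N, M) erasure code costs N/M times the size:

```
// Back-of-the-envelope storage overhead, with illustrative parameters only.
const sizeGb = 1;
const replicas = 5;  // hypothetical replication factor x
const n = 10, m = 7; // hypothetical erasure-coding parameters
console.log(`replication: ${replicas * sizeGb} GB`);                // 5 GB
console.log(`erasure coding: ${((n / m) * sizeGb).toFixed(2)} GB`); // 1.43 GB
```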

l1dev commented 3 years ago
Owner
https://joystreamstats.live/storage
l1dev closed this 3 years ago