#5 frontend storage performance test

Closed
Opened 3 years ago by l1dev · 1 comment
l1dev commented 3 years ago

Website for client-side performance auditing of storage nodes.

For example: X storage objects are selected, the website tries to fetch each of them from every provider, records the latency and download speed of each request, and presents the results in a table or charts. Everything runs client side; only the query node is needed to get basic info from the chain.

  • pick a random set of objects (~50, configurable)
  • take more than one measurement per provider to even out random variability between requests (a minimal measurement sketch follows this list)
  • allow the user to redo the analysis with a chosen number of objects
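A minimal client-side sketch of a single measurement, assuming the provider's base URL and object ID are already known. The asset URL format and names here are hypothetical; the real path would come from the storage node API.

```typescript
// Sketch of one download measurement against one provider (URL scheme is an assumption).
interface Measurement {
  latencyMs: number;        // time until response headers are received
  downloadSpeedBps: number; // bytes per second over the full transfer
}

async function measureObject(providerUrl: string, objectId: string): Promise<Measurement> {
  const start = performance.now();
  const response = await fetch(`${providerUrl}/asset/${objectId}`); // hypothetical path
  const latencyMs = performance.now() - start;   // headers received
  const body = await response.arrayBuffer();     // wait for the full download
  const totalMs = performance.now() - start;
  const downloadSpeedBps = body.byteLength / (totalMs / 1000);
  return { latencyMs, downloadSpeedBps };
}
```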

# presentation

visualize results with graphs

# performance

use [hydra](https://hydra.joystream.org/graphql)

```
query {
  workers(where: {metadata_contains: "http", isActive_eq: true, type_eq: STORAGE}){
    metadata
  }
}
```
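A sketch of how the client might run this query against the hydra endpoint and extract provider endpoints from the worker metadata. It assumes each worker's `metadata` field holds the node's public HTTP URL, as implied by the `metadata_contains: "http"` filter.

```typescript
// Fetch active storage workers from the hydra query node and pull out their HTTP endpoints.
const QUERY_NODE = "https://hydra.joystream.org/graphql";

async function fetchProviderEndpoints(): Promise<string[]> {
  const query = `query {
    workers(where: {metadata_contains: "http", isActive_eq: true, type_eq: STORAGE}) {
      metadata
    }
  }`;
  const res = await fetch(QUERY_NODE, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ query }),
  });
  const { data } = await res.json();
  // Assumption: each worker's metadata field is (or contains) the node's public HTTP URL.
  return data.workers.map((w: { metadata: string }) => w.metadata);
}
```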

# future

Files are not split up, so there is no erasure coding (as it's called), where a file is split into N pieces and only M < N of them are needed to reconstruct it. Instead we do pure redundant replication: the same file is stored in full x times, but, unlike today, not by every storage provider.
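For a rough sense of the trade-off (hypothetical numbers): with erasure coding at N = 6 pieces of which any M = 4 suffice to reconstruct, storing a file costs 6/4 = 1.5× its size in total, whereas replicating it in full x = 4 times costs 4×.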

l1dev commented 3 years ago
Owner
https://joystreamstats.live/storage
l1dev closed this 3 years ago