by maxlevush id 2130
Summary: This is the first installment of the Joystream community update video series in an effort to share more effectively what it is we are working on, what is coming down the pipe, what’s happening in the community, to bring everyone up to speed on where we are going and the road to mainnet basically.
Summary: In this first episode I’m going to be covering first of the three next immediate networks – Antioch, Sumer, and Olympia, in that order.
Then I am going to be covering Hydra which is an infrastructure piece making all these networks possible, and it is really important for delivering the main products we are working on that are consumer-facing – Atlas and Pioneer.
Then I am going to try to go through the community side – what we are doing, what the point of the different initiatives is, what the status is, and, of course, the different new specifications that we have prepared for new exciting improvements that are coming after Olympia.
Summary: If you have been following us for a while, or if you are even inside of JS Genesis trying to build this out, I am sure you must have noticed that the technical and social complexity of what we are trying to deliver has escalated quite significantly in the last six months. And with any growing effort or organization, you are going to have a lot of difficulty trying to synchronize all that information effectively.
So, the point of this video series is to bring people more up to date on what we are doing as up until now we have been making these network releases and announcements that are themselves kind of quite brief on the details of what’s actually being delivered.
Summary: It is not just the scope of what we are trying to do but the number of people involved inside of JS Genesis specifically has grown significantly.
People are organized into sub-teams, each trying to deliver quite complex functionality while still isolated to some smaller part of the system. It is then very easy to get lost and not see how everything fits together.
So, it’s both for the benefit of the community and us, as an organization, to try to get up to speed and organized around what we are trying to deliver.
Video 1 Community Update #1
00:01 Hi everyone!
00:02 And welcome to this first installment of the Joystream community update video series.
00:08 So, this is, as I said, the first installment of many to come
00:12 in an effort to try to share a little bit more effectively what it is we are working on, what is coming down the pipe, what’s happening in the community.
00:18 Just to try to bring everyone up to speed on where we are going and the road to main net basically.
00:25 So, in this first episode I’m going to be covering first of the three next immediate networks – Antioch, Sumer, and Olympia, in that order.
00:34 Then I am going to be covering Hydra which is sort of an infrastructure piece which is part of making all these networks possible and really important for delivering the main products we are working on that are consumer-facing – Atlas and Pioneer.
00:49 Then I am going to try to go through the community side – what are we doing, what is the point of the different initiatives, what is the status, and, of course, last but not least, the different new specifications that we have prepared for new exciting improvements that are coming after Olympia.
01:12 So, I guess I should say a little bit more about what the point is of these update series.
01:19 If you have been following us for a while, if you are even inside of JS Genesis trying to build this out, I am sure you must have noticed that the technical and social complexity of what we are trying to deliver has escalated quite significantly in the last six months or so.
01:35 And with any growing effort or organization you are going to have a lot of difficulty trying to synchronize all that information effectively.
01:44 So, the point of this video series is we’ll just try to open up the floodgates informationally speaking so that people are more up to date on our sort of finer time scale in terms of what we are doing because up until now we have been making these network releases and announcements that are themselves kind of quite brief on the details of what’s actually being delivered.
02:08 You are going to have to dig into a lot of documentation and try a lot of stuff in order to learn to understand.
02:14 So, it is not the best format for conveying where we are and what we are doing and what the point is.
02:18 So, hopefully, these video series will go somewhere towards getting people synchronized on what we are trying to do, and it is not just the scope of what we are trying to do but the number of people involved inside of JS Genesis specifically has grown significantly.
02:37 People are organized into sub-teams, trying to deliver quite complex functionality but still isolated to some smaller part of the system.
02:45 Then it is very easy to get lost and not see how everything fits together.
02:51 So, it’s both for the benefit of the community and us, as an organization, to try to get up to speed and organized around what we are trying to deliver.
03:01 So that is the goal and, hopefully, it is informative.
03:04 Please, give me feedback on what you think I should cover, how the format can be improved, and I will definitely try to take that on board.
03:16 So, this is going to be a six-part series.
03:20 This first update is just to try to break it down, to reduce the chance that I blow up one of these.
03:25 Well, if I did it all at once, I think I would have blown up the recording so I think just breaking it up is good for everyone.
03:31 So, the next episode, I think, will be about Antioch.
03:36 So, see you in that video.
03:38 Thank you for joining me and enjoy!
Summary: What is the Antioch network?
Summary: About two weeks ago we were trying to make a small, not very invasive upgrade to the Babylon network, which had been humming along for about three months or so, mainly to tweak a few tokenomics parameters. We wanted to increase the number of simultaneous proposals there could be - just a few minor things like that to improve the effectiveness of the testnet.
Summary: Just for context, for people that may not know, the blockchain system or variety that the Joystream platform is built on allows you to upgrade the rules of the chain itself in flight using a special kind of transaction.
And we tried to use this on-chain upgradability feature at that time, and that was supposed to be fine, but what happened was that within a matter of 20 blocks or so after the upgrade, there was a split in the network which ended up partitioning the validators into two separate pools. One group thought that the new runtime was in play, and one group thought that the old runtime was in play.
That is obviously very undesirable. The whole point of your consensus system is to have agreement upon what the history and, therefore, the state of your blockchain is.
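The on-chain upgradability idea can be sketched as a toy model: the chain's rules are themselves part of consensus state, and a special transaction swaps them while the chain keeps running. This is only an illustration of the concept in plain Rust with made-up types, not Substrate's actual machinery (which stores the runtime as a Wasm blob in state and replaces it via a privileged `set_code` call).

```rust
// Toy model of forkless runtime upgrades: the chain's own rules live in
// state, and a special transaction can swap them while the chain runs.
// Illustrative only; not Substrate's real API.

#[derive(Clone, PartialEq, Debug)]
struct RuntimeCode(Vec<u8>); // stands in for the on-chain Wasm blob

struct ChainState {
    block_height: u64,
    runtime: RuntimeCode,
}

enum Transaction {
    // Ordinary business logic handled by the current runtime.
    Transfer { from: String, to: String, amount: u64 },
    // The special upgrade transaction: replaces the runtime in flight.
    SetCode(RuntimeCode),
}

impl ChainState {
    fn apply(&mut self, tx: Transaction) {
        match tx {
            Transaction::Transfer { .. } => { /* ordinary rules apply */ }
            Transaction::SetCode(new_code) => {
                // From here on, every validator must execute the new rules,
                // because the code itself is part of consensus state. The
                // Antioch incident was a failure of exactly this agreement.
                self.runtime = new_code;
            }
        }
        self.block_height += 1;
    }
}
```

Because the code is part of the state every validator agrees on, all honest nodes should switch rules at the same block; the fork described above happened when they did not.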
And we’ve gone through a lot of effort trying to get to the bottom of what happened.
Trying to figure out the root cause of live failures in distributed systems is notoriously difficult, in particular if you haven’t actually prepared yourself for trying to debug those sorts of failures to begin with, which we hadn’t.
Summary: We’ve gone through lots of different iterations of possible hypotheses for what the cause could be. The best hypothesis that we have at the current time is that there is a specific bug in the version of Substrate we are using.
The Joystream blockchain is built on the Substrate blockchain framework, which is the framework that the Polkadot blockchain is built on and, in general, the framework that’s used to build parachains – blockchains that connect to Polkadot, which Joystream itself may or may not end up doing.
It’s a great framework because it means you don’t have to focus on peer-to-peer networking or consensus or any of these very low-level things, similarly as if you were deploying on Ethereum, and it really allows you to focus on building exactly the business logic that’s specific to your blockchain.
We are using a specific version of Substrate, and it isn’t particularly new. The best hypothesis we could really come up with, for which there is limited evidence, was that there was a specific kind of bug in that version of Substrate, and that’s the best candidate for what’s causing the failure. What we’ve been working on for the past two weeks or so has been to figure that out and then to migrate to a newer version of Substrate.
Summary: That’s what we have done. We used to be on version 2.0 release candidate 4; now we are on 2.0.1. We are going to be launching a new chain, namely the Antioch network, probably two or three days from now, which will be based on a new version of Substrate. That has benefits of its own, but we are mainly doing it to hopefully resolve this problem.
We would then get the runtime that we were trying to get initially with these improvements of the parameters for the proposal system.
There have also been some other changes to the way the council works: it is a bigger council, and the council period is now shorter.
A few things have happened that have independent benefits, but the main goal in Antioch is to get back to the core use case that Babylon already had, with these small improvements.
And then we are trying to get to Sumer as soon as possible. It’s a big, inconvenient departure from the focus that we had but we had to do it, and now Sumer is hopefully next within a short while.
Video 2 Antioch Network
00:01 Ok, so what is the Antioch network?
00:06 Now, about a week ago or so…I think it’s two weeks ago now. The time flies. We were trying to make a small, not very invasive upgrade to the Babylon network which had been humming along for about, well, I want to say, three months or so.
00:26 Mainly to tweak a little bit of tokenomics parameters.
00:29 We wanted to increase the number of simultaneous proposals there could be. Just a few minor things like that to improve the effectiveness of the testnet.
00:38 And it wasn’t expected to be a big deal but what happened was not that long after the upgrade happened, so just for context, for people that may not know, the blockchain system or variety that the Joystream platform is built on allows you to upgrade the rules of the chain itself in flight using a special kind of transaction.
01:07 And that’s great for lots of reasons that we’ll probably cover in the future.
01:12 And we tried to use this on-chain upgradability feature at this time, at that time, and that was supposed to be fine but what happened was in a matter of a few, I want to say 20 blocks or so after the upgrade, there was a split in the network which ended up partitioning the validators into two separate pools.
01:34 One group thought that the new runtime was in play, and one group thought that the old runtime was in play.
01:39 That is obviously very undesirable.
01:42 The whole point of your consensus system is to have agreement upon what the history and, therefore, the state of your blockchain is.
01:50 So that’s obviously a serious problem.
01:53 And, you know, we’ve gone through a lot of effort trying to get to the bottom of what happened.
02:00 Trying to figure out the root cause of live failures in distributed systems is notoriously difficult, in particular if you haven’t actually prepared yourself for trying to debug those sorts of failures to begin with, which we hadn’t.
02:15 And so, we’ve gone through lots of different iterations of, or, I should say, possible hypotheses for what the cause could be.
02:28 The best hypothesis that we have at the current time is that there is a specific bug in the version of substrate.
02:35 So, taking a step back here as well in case you don’t know, the Joystream blockchain is built on the Substrate blockchain framework which is the framework that the Polkadot blockchain is built on.
02:48 And, in general, the framework that’s used to build parachains, which are blockchains that connect to Polkadot, which Joystream itself may or may not end up doing.
02:55 It’s a great framework because it means you don’t have to focus on peer-to-peer networking or consensus or any of these very low-level things similarly as if you were deploying on Ethereum, let’s say.
03:09 And it really allows you to focus on building exactly the business logic that’s specific to your blockchain.
03:14 So just mentioning where does this substrate thing come from.
03:20 So, we are using Substrate, we are using a specific version of Substrate, it isn’t particularly new, and the best hypothesis we could really come up with, for which there is limited evidence I should say, was that there was a specific kind of bug in the version of Substrate that we are relying on, and that’s the best candidate for what’s causing the failure.
03:43 So, what we’ve been working on for the past two weeks or so has been to obviously figure that out, and then to migrate to a newer version of Substrate.
03:54 So, that’s what we have done.
03:56 We used to be on version 2.0 release candidate 4; now we are on 2.0.1.
04:02 And we are going to be launching a new chain, namely the Antioch network, that’s probably going to be in two or three days from now, so that’s actually wrong on the slides because I just made them a while back.
04:15 And that would be based on a new version of substrate which has benefits of its own, I should say, but we are mainly doing it to hopefully resolve this problem.
04:28 Of course, we would then get the runtime that we were trying to get initially with these improvements of the parameters for the proposal system and so on; there have also been some other changes to the way the council works.
04:42 I think we expanded from…Actually I don’t remember now, to be honest.
04:47 There are so many things going on but it is a bigger council.
04:50 The council period is now shorter.
04:53 So there are a few things that have happened that have independent benefits but the main issue here in Antioch is really to get back to the core use case that Babylon already had with these small improvements.
05:07 And then we are trying to get to Sumer as soon as possible.
05:09 So, that’s the story on Antioch.
05:11 It’s a big, you know, inconvenient departure from the focus that we had but we had to do it, and now Sumer is hopefully next within a short while.
05:24 So that’s it on Antioch.
05:25 Join me again for Sumer.
Summary: Welcome to this second installment of the first Joystream community update. This segment is about the Sumer network which is a network we’ve been working on for about three months now. It is going to be building on Antioch which either is going to be released or has just been released depending on when this video comes out!
Summary: The goal in the Sumer network is to do three separate things.
First of all, we want to introduce the next and final iteration of our on-chain content directory.
Then we are going to introduce Atlas Studio which is a new part of the Atlas product.
And then we are going to introduce a new working group which we are calling the operations working group.
Summary: The new content directory is an enhancement over the existing one in a few pretty important ways.
The first one is that it is radically simplified. The existing content directory that we had was actually very complex because we were trying to achieve the goal of having the community be able to update what is in the content directory, like videos, channels, and playlists, without having to do runtime upgrades.
So, runtime upgrades, as I have probably mentioned in this community update, are the way Substrate chains can change the rules of the system. So, for example, at one point in time a video has a title, and then at some later point in time a video has a title and also what language the content of the video is recorded in, or what language the people in the video speak.
That is a relatively small thing to change, but you want to make it easier for the community to change stuff like that, and if changing every little thing like that requires a runtime upgrade, it's going to be really hard for the community to iterate quickly on this part of the platform, which really needs to be very flexible.
If you wanted to introduce other things, not just videos – for example, eBooks or some other mild variation of what we already have – it would also be very inhibiting if you had to do a runtime upgrade, because you have to dive into the Rust code, change it, figure out how to take all the old stuff in your state and turn it into the new stuff through a migration step that runs inside the consensus of your blockchain, update all sorts of dependencies, libraries, and infrastructure to reflect how the new system works, and test a lot in advance.
If the change is significantly big, you should probably also do an integration test where you run through a simulated upgrade with some representative state in your system, you see how it works after the runtime upgrade, if your account still works, if your voting system still works.
So, it's a lot of work. And if you make a mistake, you can permanently destroy your chain. So, it’s risky, it's hard, and it requires a lot of care.
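To make the migration step concrete, here is a minimal sketch in plain Rust of what a runtime upgrade forces you to write when a field is added: every old record in state has to be rewritten into the new shape, inside consensus. All names here are illustrative, not Joystream's actual types.

```rust
// Sketch of a state migration for a runtime upgrade that adds a field.
// Hypothetical types; the real runtime uses its own storage primitives.

use std::collections::BTreeMap;

#[derive(Debug, Clone)]
struct OldVideo {
    title: String,
}

#[derive(Debug, Clone, PartialEq)]
struct NewVideo {
    title: String,
    // The field added by the upgrade; existing records need a default.
    language: Option<String>,
}

/// Runs once during the upgrade and converts all existing state.
/// If this panics or produces bad data, the chain can be permanently
/// damaged, which is why so much testing has to happen in advance.
fn migrate(old: BTreeMap<u64, OldVideo>) -> BTreeMap<u64, NewVideo> {
    old.into_iter()
        .map(|(id, v)| (id, NewVideo { title: v.title, language: None }))
        .collect()
}
```

Even in this toy form you can see the cost: the conversion logic, the default value, and the testing burden all exist per change, which is exactly why small metadata tweaks should not require it.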
This is a very long-winded way of explaining why we ended up having the old content directory that we had. The point of that content directory was that it was very abstract, almost to the extent that it was like a relational database where it allowed the community to define schemas and concepts on chain so that you didn't have to do runtime upgrades to define new things or change the way things were represented.
The problem was that it was extremely complicated. It became really hard both to have it work properly on chain and for people to understand how it worked. And it turned out that you couldn't actually get all the flexibility that you wanted anyway.
What we're going to do in this release is put the heart of what it means to be in the content directory on chain, and then make the metadata associated with all the different things on the chain, such as videos and channels, very easy to change. You don't need to change the low-level business logic of the chain itself in order to make the smaller tweaks I described, such as the fact that a video may have a language. So, you just lift it out of the chain.
We just decided that this is the way our content directory is supposed to work. That’s a pretty big decision, and that's what's landing in Sumer.
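The "lift the metadata out of the chain" idea can be sketched like this: the runtime only stores ownership plus an opaque byte blob, and what those bytes mean is a convention among clients, so adding a field like `language` needs no runtime upgrade. The key=value encoding below is a toy chosen purely for illustration (a real system would use a proper serialization scheme), and the types are hypothetical, not Joystream's.

```rust
// Sketch of on-chain core + off-chain metadata schema. The chain never
// interprets `metadata`; applications (de)serialize it themselves, so the
// schema can evolve without touching runtime code. Illustrative types.

#[derive(Debug)]
struct OnChainVideo {
    owner_channel: u64,
    // Opaque to the runtime; meaning is a client-side convention.
    metadata: Vec<u8>,
}

// Toy "key=value per line" encoding standing in for a real scheme.
fn encode_metadata(pairs: &[(&str, &str)]) -> Vec<u8> {
    pairs
        .iter()
        .map(|(k, v)| format!("{}={}", k, v))
        .collect::<Vec<_>>()
        .join("\n")
        .into_bytes()
}

fn decode_metadata(bytes: &[u8]) -> Vec<(String, String)> {
    String::from_utf8_lossy(bytes)
        .lines()
        .filter_map(|l| {
            let (k, v) = l.split_once('=')?;
            Some((k.to_string(), v.to_string()))
        })
        .collect()
}
```

Adding a new metadata field under this design is just a client-side change to the encoder and decoder; the `OnChainVideo` type, and therefore the runtime, stays untouched.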
Summary: Let me go through this very quickly.
The video of myself which is not that useful is covering up a part of the diagram which is useful. What's supposed to be there is a square which shows the unchanged storage system.
In this representation, the on-chain content directory has memberships. Members own channels. Channels have within them stuff like videos, playlists, and series. All of those actually exist in the chain, but they haven't been fully implemented, and they will not yet be implemented in the consumer product, like in Atlas itself.
It has the idea of curators and curator groups. These are people who are employed in the content working group to manage and make sure that everything in the content directory is going according to plan, and they can also own channels themselves on behalf of the platform to feature official platform content.
Now the interesting part here is that on chain you have this sort of index of what videos exist and who owns them. You also have an index of what data exists, like the images, the cover photos, and the actual video media files. There's basically a map which holds a representation of who owns everything, how much space member number X has used out of all the space available to them to publish to their channel, and, of course, what part of the data the storage infrastructure is supposed to be replicating. Right now, of course, that's fully replicated in the current storage system, but that will change in a future version, which I’m going to get to in one of the later videos. But that index also lives on chain in the data directory.
The actual storage is on separate off-chain infrastructure – storage nodes that are also responsible for shipping the data to users. One of the things that actually becomes possible in this release is for things outside the content directory to also store data. For example, we are aiming to have your membership avatars stored in the same storage system.
Before, for your avatar you really had to reference some URL somewhere. The first step beyond that in this Sumer release is that you can also store assets like that in the storage system itself, just like the videos for the content directory. Likewise, that could be used in other parts of the system, for example, as attachments in proposals or in forum posts.
It’s going to be a general infrastructure piece for the rest of the runtime.
That's the first part of what we're doing in Sumer on the content directory.
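A rough sketch of the on-chain data directory index described above: it holds no media itself (that lives on the off-chain storage nodes), only a record of which objects exist, who owns them, and how much of each member's quota is used. The types and the single global quota value are simplifying assumptions, not the actual runtime structures.

```rust
// Toy model of the data directory index: ownership and quota accounting
// on chain, bytes off chain. Illustrative only.

use std::collections::BTreeMap;

#[derive(Debug)]
struct DataObject {
    owner_member: u64,
    size_bytes: u64, // the actual bytes live on storage nodes, not here
}

struct DataDirectory {
    objects: BTreeMap<u64, DataObject>,
    quota_bytes: u64, // per-member allowance (one global value in this toy)
    next_id: u64,
}

impl DataDirectory {
    /// Total space a member has consumed so far.
    fn used_by(&self, member: u64) -> u64 {
        self.objects
            .values()
            .filter(|o| o.owner_member == member)
            .map(|o| o.size_bytes)
            .sum()
    }

    /// Register a new object (video media, cover photo, avatar, a forum
    /// attachment...) if the member still has quota; returns its id.
    fn add(&mut self, member: u64, size_bytes: u64) -> Result<u64, String> {
        if self.used_by(member) + size_bytes > self.quota_bytes {
            return Err("quota exceeded".into());
        }
        let id = self.next_id;
        self.next_id += 1;
        self.objects
            .insert(id, DataObject { owner_member: member, size_bytes });
        Ok(id)
    }
}
```

Because the index is generic over what the bytes are, the same accounting serves videos, avatars, and attachments alike, which is the "general infrastructure piece" point above.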
Summary: The next step is that we're launching Atlas Studio.
Atlas is a viewer product where you can see videos and channels.
And Atlas Studio is sort of the flip side of that experience where you can actually see all your channels, make channels, upload stuff to your channel, manage it, delete stuff - basically like the channel publisher owner experience.
That really is a very big step in the direction of making it easier for people to publish content to the system, which at the current time has to be done through a command-line interface – a very rough experience.
Summary: I think I can show a few outtakes of what that experience looks like.
You'll have a nice experience for filling in the basic metadata and setting up your channel and editing it.
Summary: You will have a way to view all of your videos, and change and edit the metadata associated with them.
You have drafts for stuff that you haven't committed to chain locally stored.
This all runs in the browser, just as Atlas itself does.
Summary: There'll be a smooth upload flow for providing the media files and the basic metadata for videos in a step-by-step way, which ends with you signing a transaction that, interestingly, uses the Polkadot JS signer extension rather than the native local-storage wallet in the normal Pioneer product that we're currently using.
That's also a step in the right direction of having people use an external key manager.
Summary: As I mentioned, we can store assets now like images on the storage infrastructure, so that means we're going to be helping you set and provide the right assets, manage how they're going to be displayed as part of those upload flows.
I think, it's going to be a very big improvement.
Atlas studio is the second major goal to launch for this release.
Summary: If you have a look at the experience here for uploading and editing videos, you can see there's a tab system here, because we want to make it easier for people to manage multiple things at the same time.
With that of course comes the need to manage a lot of different uploads at the same time as well, so there would be a separate area to manage all the different assets that are uploading at any given time. Uploads can fail, you could lose your connection, so we'll have a graceful way for you to retry anything that hasn't worked in the past. I don't think we could have had anything reasonable even in the CLI to make this possible.
This is a very big step in the right direction, and it's a huge effort from a lot of people, designers and developers, and infrastructure pieces that are needed to get this to work.
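The upload bookkeeping described above boils down to a small state machine: track every asset's upload status, list the failed ones, and move them back to in-progress on retry. The sketch below is a plain Rust illustration of that idea; the real Atlas Studio is a browser application, and these types are invented for the example.

```rust
// Toy state machine for managing many simultaneous uploads, so failed or
// interrupted ones can be listed and retried gracefully. Illustrative only.

use std::collections::BTreeMap;

#[derive(Debug, Clone, Copy, PartialEq)]
enum UploadStatus {
    InProgress,
    Failed, // e.g. lost connection
    Done,
}

struct UploadManager {
    uploads: BTreeMap<u64, UploadStatus>,
}

impl UploadManager {
    fn set(&mut self, asset_id: u64, status: UploadStatus) {
        self.uploads.insert(asset_id, status);
    }

    /// Everything the user should be offered a "retry" action for.
    fn retryable(&self) -> Vec<u64> {
        self.uploads
            .iter()
            .filter(|(_, s)| **s == UploadStatus::Failed)
            .map(|(id, _)| *id)
            .collect()
    }

    /// Retrying moves a failed asset back to in-progress.
    fn retry(&mut self, asset_id: u64) {
        if self.uploads.get(&asset_id) == Some(&UploadStatus::Failed) {
            self.uploads.insert(asset_id, UploadStatus::InProgress);
        }
    }
}
```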
Summary: Then the last piece of the puzzle is the Operations working group.
I am going to get to what a working group is in a little bit more detail later.
If you're a little bit familiar with Joystream, you’ve probably noticed that there's the council and there are these groups that are responsible for specific things. The operations working group is a new group like that, and what's special about it is that it's meant for any kind of activity that doesn't, at least yet, have an on-chain footprint or role.
If you're a forum moderator, that implies that you can do certain things in the forum that other people can’t do. There's an on-chain forum in Joystream, as most people probably noticed. Likewise for the storage system.
The operations group is meant for all of those activities we're currently doing and which will be part of the system in the future which don't really have any direct privilege on chain.
We just want to provide the basics of what a working group allows you to model: what the roles are, so everyone can see; how people got into the roles, how they applied, and what the merits were for admitting them – it's all transparent. People have predictable reward schedules for what they will be paid and predictable stake at risk, so they can be given a little bit more responsibility in terms of what they can do and what they can be tasked with on behalf of the group and of the system overall.
So, for example, we have at least one of the founding members, I believe, who is looking to be one of the first developers in the operations working group. In general: managers, marketers, anyone who would like a role or a job that doesn't require doing a lot on chain. I’m hoping that this will be sort of a sandbox for discovering lots of roles that we haven't explicitly modeled into the system. Maybe we will as a result of what we find out, but I think it's high time for something like this.
Summary: Again, my little preview thing is covering part of the image. I can’t move it, so I’ll just try to explain.
The goal of this is just to show how the working group fits into the overall system of Joystream.
There is some general information in this community update series so I'm sort of straddling the line between very general stuff and stuff very specific to the releases. I think in the future we'll do some deep dives where we try to go systematically through each one of these, and give you a more fine-grained and a thorough introduction.
The governance system in Joystream is actually deeper than what you find in a lot of other crypto systems. In a lot of other crypto systems, you just have a flat coin-voting pool which has proposals. Typically, they're actually limited to things like signaling, spending, and upgrading the protocol. You don't really have that rich a portfolio of proposals to choose from.
In Joystream that set of proposals is very broad. The root of trust for the whole system is a coin vote, which happens not on individual proposals but in election cycles where you elect a council. The council is one actor, one vote: council members vote on proposals. I think the current setting is that a new council is elected every two weeks. That's mostly informed by what's practical for having new people in the community learn what's going on. It will be interesting to figure out what that number should be on mainnet, but anyway, there's a council which lives for a council period, and the same members can stand for council and be re-elected for future councils.
The main responsibility of the council is to vote on proposals, and the proposals do the things that I've just described, including hiring leads for individual working groups. There's one working group per subsystem.
There's a membership subsystem, at least in the Olympia runtime, which I actually haven't mentioned, but that's the third community update, I think, so it’s coming. It is mostly preoccupied with invitations to grow the membership pool. You have the storage working group, which is primarily about operating the storage system and storage infrastructure. You have the forum working group for operating and curating the communication on the forum. And you have the operations working group that we are talking about here. It's these different subsystems that run some part of what the overall platform needs to work.
Inside of each working group you basically have a leader which is someone who applies to occupy that role through a proposal to the council. That leader is basically responsible for spending money out of budget that is allocated to that group from the council for all sorts of things. For example, if you're a storage working group leader then you need to figure out how much money we need for the next month, and then you have to go to the council to have them give you that much for your budget.
The leader is able to pay the rewards for himself and everyone else – all the other workers, as they're called, in the working group – for providing the service to the system. The leader is also able to change what someone has as their reward, and can slash them if they do something they're not supposed to do. The same applies to the leader with respect to the council: the council can update the leader's reward, slash them, and fire them.
So, the working group is sort of the lowest bureaucratic organ in the overall governance hierarchy of the Joystream system. And we're getting a new working group in Sumer.
That hopefully was a useful introduction to working groups and the operations working group.
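The working-group mechanics just described can be sketched as follows: the council funds a budget, the lead pays periodic rewards out of it to the workers, and staked funds can be slashed for misbehavior. This is a simplified toy model with invented types, not the actual Joystream runtime logic.

```rust
// Toy model of a working group: council-funded budget, lead-paid rewards,
// slashable stake. Illustrative only.

use std::collections::BTreeMap;

#[derive(Debug)]
struct Worker {
    reward_per_period: u64,
    stake: u64, // at risk; can be slashed
}

struct WorkingGroup {
    budget: u64, // topped up by council proposal
    lead: u64,   // worker id of the lead (the lead is itself a worker)
    workers: BTreeMap<u64, Worker>,
}

impl WorkingGroup {
    /// A council proposal tops up the group's budget.
    fn fund(&mut self, amount: u64) {
        self.budget += amount;
    }

    /// The lead pays out one period of rewards to every worker (including
    /// itself), bounded by whatever budget the council has granted.
    fn pay_rewards(&mut self) -> u64 {
        let due: u64 = self.workers.values().map(|w| w.reward_per_period).sum();
        let paid = due.min(self.budget);
        self.budget -= paid;
        paid
    }

    /// The lead penalizes a misbehaving worker by slashing staked funds.
    fn slash(&mut self, worker_id: u64, amount: u64) {
        if let Some(w) = self.workers.get_mut(&worker_id) {
            w.stake = w.stake.saturating_sub(amount);
        }
    }
}
```

The same shape applies one level up: the council plays the lead's role with respect to the lead itself, which is why the group sits at the bottom of the governance hierarchy.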
Video 3 Sumer Network
00:01 Hi and welcome to this second installment of the first Joystream community update.
00:07 So, this segment is about the Sumer network which is a network we’ve been working on for about, I want to say, three months now.
00:16 It is going to be building on Antioch which either is going to be released or has just been released depending on when this video comes out.
00:25 So, the goal in the Sumer network is to do three separate things.
00:32 First of all, we want to introduce the next and I want to say final iteration of our on-chain content directory.
00:39 I am going to explain this in further detail but I am just going over the overview.
00:43 Then we are going to introduce Atlas Studio which is a new part of the Atlas product.
00:47 And then we are going to introduce a new working group which we are calling the operations working group.
00:51 So, let’s go through this.
00:53 So, the new content directory.
00:56 The new content directory is an enhancement over the existing one in pretty important ways.
01:03 I am going to go through what the content directory actually is in the next slide but just let’s dwell on this for a moment.
01:10 The first one is that it is radically simplified.
01:12 The existing content directory that we had was actually very, very complex because we were trying to achieve the goal of having the community be able to update what is in the content directory, so stuff like videos and channels, and playlists without having to do runtime upgrades.
01:35 So, runtime upgrades, as I probably have mentioned prior to this in this community update, are a way in which, on Substrate chains, you can change the rules of the system.
01:46 So, for example, you can imagine at one point in time a video has a title, and then at some later point in time maybe a video has a title and also what language the content of the video is recorded in or what language the people in the video speak or something like that.
02:08 So, that's a relatively small thing to change but you want to make it easier for the community to change stuff like that, and if changing every little thing like that requires a runtime upgrade, it's going to be really hard for the community to iterate quickly on this part of the platform which really needs to be very flexible.
02:25 If you wanted to introduce other things, not just videos, let's say you wanted to introduce like eBooks or, you know, some other mild variation of what we already have, it would also be very inhibiting if you'd have to do a runtime upgrade, because you have to dive into the Rust code, you have to change it, you have to figure out how to take all the old stuff in your state and turn it into the new stuff through a migration step that runs inside of the consensus of your blockchain, you have to update all sorts of dependencies and libraries and infrastructure to reflect how the new system works, you have to test a lot in advance.
03:08 I mean, if the change is significantly big, you should probably also do an integration test where you run through a simulated upgrade with some representative state in your system and see how it works after the runtime upgrade: does your account still work, does your voting system still work, and so on.
03:29 So, it's a lot of work.
03:31 And if you make a mistake, you can permanently destroy your chain.
03:36 So, it’s risky, it's hard, and it's, you know, requires a lot of care.
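The migration step described above can be sketched in miniature. This is an illustrative model in Python, not the actual Rust runtime code, and the field names are invented:

```python
# Hypothetical sketch of a runtime-upgrade migration step: every record of
# the old shape in state must be rewritten into the new shape, in one shot,
# inside the consensus of the chain. All names here are invented.

def migrate_videos(old_state):
    """Old videos only had a title; the new schema adds a language field."""
    new_state = {}
    for video_id, old_video in old_state.items():
        new_state[video_id] = {
            "title": old_video["title"],
            "language": None,  # no such data existed before, so default to unset
        }
    return new_state

old = {1: {"title": "Intro"}, 2: {"title": "Update"}}
new = migrate_videos(old)
assert new[1] == {"title": "Intro", "language": None}
```

The risk the talk alludes to is that this transformation runs once, on live state, with no retry: a bug here is what can permanently damage a chain.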
03:44 So this is a very long-winded way of explaining why we ended up having the old content directory that we had.
03:53 And the point of that content directory was that it was sort of very abstract, almost to the extent that it was like a relational database where it allowed the community to define schemas and concepts on chain so that you didn't have to do runtime upgrades to define new things or change the way things were represented.
04:15 That's great.
04:16 The problem was that it was extremely complicated.
04:19 It became really hard both to have it work properly on chain and for people to understand how it worked.
04:28 And really what it turned out was that you couldn't even get all the flexibility that you wanted.
04:37 So, what we did in this release is we just said screw it.
04:41 What we're going to do is we're going to put the heart of what it means to be in the content directory on chain, and then we're going to make the metadata associated with all the different things on the chain, such as videos and channels, and so on.
04:58 We're going to make sure that that's actually very easy to change.
05:02 So, you don't need to change the low-level business logic of the chain itself in order to make the sort of smaller tweaks that I described, such as the fact that a video may have a language.
05:15 So, you sort of lift that out of the chain entirely.
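One way to picture lifting metadata out of the chain, as a rough sketch (the real system uses a compact binary metadata encoding; the JSON and function names here are stand-ins), is that the runtime only ever stores an opaque blob per video, so changing the metadata schema never touches chain logic:

```python
import json

# Sketch: the chain stores metadata as an opaque byte blob; the schema for
# those bytes lives in clients and off-chain standards, not in runtime logic.
# (JSON is used here purely for illustration.)

def publish_video(chain_state, video_id, metadata: dict):
    chain_state[video_id] = json.dumps(metadata).encode()  # chain sees only bytes

def read_video(chain_state, video_id) -> dict:
    return json.loads(chain_state[video_id].decode())

chain = {}
publish_video(chain, 1, {"title": "Hello"})
# Adding a "language" field later requires no runtime upgrade at all:
publish_video(chain, 2, {"title": "Hei", "language": "no"})
assert read_video(chain, 2)["language"] == "no"
```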
05:18 We also just decided that this is the way our content directory is supposed to work.
05:23 So, that's a pretty big decision.
05:25 And that's what's landing in Sumer.
05:30 So, let me go through now, just very quickly.
05:34 So, the video of myself which is not that useful is covering up a part of the diagram which is useful.
05:43 What's supposed to be there is a square which shows the unchanged storage system.
05:50 I’m going to figure out later whether I change that or not but let's just go with the flow.
05:53 So, the on-chain content directory has in this representation, as you can see, memberships.
06:02 Members own channels.
06:04 Channels have within them stuff like videos, and playlists, and series.
06:09 All those actually exist in the chain, but they haven't been fully implemented, and they will not be implemented in the consumer product, like in Atlas itself.
06:16 It has the idea of curators and curator groups.
06:19 These are people who are sort of employed in the content working group to manage and make sure that everything in the content directory is going according to plan, and they can also own channels themselves on behalf of the platform to feature official platform content and that kind of stuff.
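The ownership structure just described, members owning channels and curator groups owning channels on behalf of the platform, might be modeled like this (a toy sketch with invented names, not the actual runtime types):

```python
from dataclasses import dataclass, field
from typing import List, Union

# Toy model of the on-chain object graph described above (names invented):
# members own channels, curator groups can also own channels on behalf of
# the platform, and channels contain videos.

@dataclass
class Member:
    member_id: int

@dataclass
class CuratorGroup:
    group_id: int

@dataclass
class Channel:
    owner: Union[Member, CuratorGroup]
    videos: List[int] = field(default_factory=list)

personal = Channel(owner=Member(member_id=7))
official = Channel(owner=CuratorGroup(group_id=1))  # platform-owned channel
personal.videos.append(42)
assert isinstance(official.owner, CuratorGroup)
```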
06:37 Now the interesting part here is that on chain you just have this sort of index of all these things, you know, what videos exist, who owns them and this sort of stuff.
06:47 You also have an index of what data exists, so like the images, the cover photos, the actual video media files.
07:00 You can think of it as a list, or basically a map, which holds a representation of who owns everything, how much space member number X has used out of all the space available to them to publish to their channel, and so on.
07:17 And, of course, which part of the data each part of the storage infrastructure is supposed to be replicating.
07:24 Right now, of course, that's fully replicated in the current storage system but that would be changed in a future version which I’m going to get to in one of the later videos.
07:35 But basically, that index also lives on chain in the data directory.
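A toy model of that on-chain index might look like the following; the names and the quota rule are invented for illustration, but the idea is that the chain tracks ownership and space usage while the bytes themselves live on storage nodes:

```python
# Toy sketch of the on-chain data directory index (names invented): it does
# not hold the media bytes themselves, only who owns each data object, its
# size, and how much of each owner's space allowance is used. The actual
# bytes live on off-chain storage nodes.

class DataDirectory:
    def __init__(self, quota_per_owner: int):
        self.quota = quota_per_owner
        self.used = {}     # owner_id -> bytes used so far
        self.objects = {}  # object_id -> (owner_id, size)

    def register_upload(self, object_id, owner_id, size):
        if self.used.get(owner_id, 0) + size > self.quota:
            raise ValueError("owner is out of storage quota")
        self.objects[object_id] = (owner_id, size)
        self.used[owner_id] = self.used.get(owner_id, 0) + size

d = DataDirectory(quota_per_owner=100)
d.register_upload("cover.png", owner_id=7, size=60)
assert d.used[7] == 60
```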
07:41 And then of course the actual storage is on separate off-chain infrastructure and storage nodes that are also responsible for shipping the data to users.
07:48 And, as you can see, one of the things that actually becomes possible in this release is for things outside the content directory to also store data.
08:00 So, stuff like your membership avatars, we are aiming to have stored in the same storage system.
08:09 So, before, for your avatar you really had to reference some URL somewhere, but the first step of what we're introducing in this Sumer release is that you can also store assets like that in the storage system itself, just like the videos for the content directory.
08:28 Likewise, that could be used in other parts of the system, for example, as attachment in proposals or in forum posts and so on.
08:37 So, it’s going to be a general infrastructure piece for the rest of the runtime.
08:41 So, that's the first part of what we're doing in Sumer on the content directory.
08:46 The next step is that we're launching Atlas Studio.
08:50 So, Atlas is the sort of the viewer product where you can see videos and channels and so on.
08:57 And Atlas Studio is sort of the flip side of that experience where you can actually see all your channels, make channels, upload stuff to your channel, manage it, delete stuff - basically like the channel publisher owner experience.
09:13 That really is a very big step in the direction of making it easier for people to publish content to the system, which at the current time has to be done through a command line interface, which is a very rough experience.
09:28 I think I can show a few outtakes of what that experience looks like.
09:32 You'll have, you know, a nice experience for filling in the basic metadata and setting up your channel and editing it.
09:40 You will have a way to view all of your videos, and change and edit the metadata associated with them.
09:48 You have drafts for stuff that you haven't committed to chain locally stored.
09:53 This all runs in the browser, just as Atlas itself does.
09:57 There'll be a smooth upload flow for providing the media files and the basic metadata for videos in a step-by-step way, which ends with you signing a transaction that, interestingly, uses the Polkadot JS signer extension rather than the native wallet, or I should say local-storage wallet, that is in the normal Pioneer product we're currently using.
10:30 So, that's also a step in the right direction of having people use an external key manager.
10:37 As I mentioned, we can now store assets like images on the storage infrastructure, so we're going to be helping you set and provide the right assets and manage how they're going to be displayed as part of those upload flows.
10:53 I think, it's going to be a very big improvement.
10:56 So, that's Atlas Studio which is the second major goal to launch for this release.
11:02 I also forgot: if you have a look at the experience here for uploading and editing videos, you can see there's sort of a tab system, and that's because we want to make it easier for people to manage multiple things at the same time.
11:18 With that, of course, comes the need to manage a lot of different uploads at the same time as well, so there will be a separate area to manage all the different assets that are uploading at any given time. Uploads can fail, you could lose your connection, and so on.
11:34 So, we'll have a graceful way for you to retry anything that hasn't worked in the past.
11:42 I don't think we could have had anything reasonable even in the CLI to make this possible.
11:47 This is a very big step in the right direction, and it's a huge effort from a lot of people, designers and developers and infrastructure pieces that are needed to get this to work.
11:58 That's fantastic.
12:01 Then the last piece of the puzzle is the Operations working group.
12:07 So, what is this?
12:08 Well, I am going to get to what a working group is in a bit more detail later, but if you're a little bit familiar with Joystream, you've probably noticed that there's the council and then there are these groups that are responsible for specific things. The operations working group is a new group like that, and what's special about it is that it's meant for any kind of activity that doesn't have, at least yet, an on-chain footprint or role.
12:34 So, let's say you're a forum moderator; that implies that you can do certain things in the forum that other people can't do.
12:44 There's an on-chain forum in Joystream, as most people probably noticed.
12:48 Likewise for the storage system and so on.
12:49 The operations group is meant for all of those activities we're currently doing, and which will be part of the system in the future, but which don't really have any direct privilege on chain.
13:00 We just want to provide the basics of what a working group allows you to model: stuff like what the roles are, so everyone can see, transparently, how people got into the roles, how they applied, and what the merits were for admitting them.
13:17 People have predictable reward schedules for what they will be paid and predictable stake at risk, so they can be given a little bit more responsibility in terms of what they can do and what they can be tasked with on behalf of the group and of the system overall.
13:39 So, the examples we're going for at the moment are things like developers; at least one of the founding members, I believe, is looking to be one of the first developers in the operations working group.
13:51 In general: managers, marketers, anyone in what you could think of almost as a role or a job, but one that doesn't require you to do a lot on chain.
14:00 So, that's the operations working group.
14:02 I’m hoping that this will be sort of a sandbox for discovering lots of roles that we haven't explicitly modeled into the system.
14:10 Maybe we will as a result of what we find out but I think it's high time for something like this.
14:17 What is actually… again my little preview thing is covering part of the image. I'm not sure, if I can actually move it now. Can I do that?
14:30 No, I can’t.
14:30 All right. So, I'll just try to explain. The goal of this is just to show how the working group fits into the overall system of Joystream.
14:42 There is some general information in this community update series so I'm sort of straddling the line between very general stuff and stuff very specific to the releases.
14:54 I think in the future we'll do some like deep dives where we try to go systematically through each one of these, and give you a more fine-grained and a thorough introduction.
15:05 I just want to sort of tease you with that here.
15:08 The governance system in Joystream is actually a lot deeper, I would say, than what you find in a lot of other crypto systems.
15:19 In a lot of other crypto systems, you just have a flat coin-voting pool which has proposals.
15:28 Typically, they're actually limited to things like signaling, spending, and maybe upgrading the protocol.
15:34 So, you don't even really have that rich of a portfolio of proposals to choose from.
15:38 In Joystream that set of proposals is very, very broad.
15:42 Of course, at the root, sort of the root of trust for the whole system is a coin vote which happens not on individual proposals but on election cycles where you elect a council.
15:55 The council is a sort of one-actor-one-vote body where you have council members vote on proposals.
16:05 I think the current setting we have for that is every two weeks there is a new council elected.
16:14 I'm actually not at all sure we are confident about what that number should be on mainnet, but that's what we have at the current time.
16:20 That's mostly just informed by what's practical in order to have new people in the community learn what's going on.
16:28 It will be interesting to figure out what that ought to be, but anyway, there's a council which lives for a council period.
16:36 The same members can stand for council, and they can be re-elected for future councils.
16:43 The main responsibility of the council is to vote on proposals, and the proposals do the things that I've just described, including hiring leads for individual working groups.
16:55 There's one working group per subsystem, you could think of it that way.
17:01 There's a membership subsystem which, at least in the Olympia runtime (which I actually haven't mentioned yet, but that's the third community update, I think, so it's coming), is mostly preoccupied with invitations to grow the membership pool.
17:21 You have the storage working group which is primarily about operating the storage system, storage infrastructure.
17:27 You have the forum for operating and curating the communication on the forum.
17:32 You have the operations working group that we are talking about here.
17:35 It's these different subsystems that run some part of what the overall platform needs to work.
17:43 Inside of each working group you basically have a leader which is someone who applies to occupy that role through a proposal to the council.
17:54 And that leader is basically responsible for spending money out of budget that is allocated to that group from the council for all sorts of things.
18:02 So, you can imagine, for example, if you're a storage working group leader then you need to figure out, well, how much money do we need for the next let's say month, and then you have to go to the council to have them give you that much for your budget.
18:20 The leader is able to pay the rewards for himself and everyone else, all the other workers, as they're called, in the working group, for providing the service to the system.
18:32 The leaders are also able to change what someone has as their reward and can slash them if they do something they're not supposed to do.
18:42 And, of course, same applies to the leader with respect to the council.
18:45 The council can update the reward and slash them and fire them and all this sort of stuff.
18:50 So, the working group is sort of the lowest sort of bureaucratic organ in the overall governance hierarchy of the Joystream system.
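The budget and reward mechanics just described might be modeled roughly like this (a toy sketch with invented names and numbers, not the actual runtime logic):

```python
# Toy sketch of working-group mechanics (names invented): the council
# allocates a budget to a group via an approved proposal, the lead pays
# recurring rewards to workers out of that budget, and stake can be
# slashed for misbehavior.

class WorkingGroup:
    def __init__(self):
        self.budget = 0
        self.workers = {}  # worker_id -> {"reward": ..., "stake": ...}

    def fund_from_council(self, amount):
        self.budget += amount          # council tops up the group budget

    def pay_rewards(self):
        for w in self.workers.values():  # lead pays everyone's recurring reward
            assert self.budget >= w["reward"]
            self.budget -= w["reward"]

    def slash(self, worker_id, amount):
        # lead slashes a worker (or the council slashes the lead)
        self.workers[worker_id]["stake"] -= amount

g = WorkingGroup()
g.fund_from_council(1000)
g.workers["lead"] = {"reward": 100, "stake": 500}
g.pay_rewards()
assert g.budget == 900
```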
19:02 And we're getting a new working group in Sumer.
19:05 That hopefully was a useful introduction to working groups and the operations working group.
19:12 I think that's the last of it, so thank you for joining me for this Sumer update, see you in a bit.
Summary: Welcome to part three, glad you're still sticking with me.
This is about the Olympia network. The Olympia network is a mega release we've been working on for a long time asynchronously with everything else and particularly on the runtime side, also on the Pioneer side, obviously, and I’m going to get to it.
It's such a big release that it's not even scheduled to be the release immediately after the Sumer release.
The reason I'm putting it on the table is because it's probably one of those big milestones; it may or may not be the last release before mainnet, we are probably going to have one or two big releases even after that, but it's a very important milestone for where we're trying to go.
It's also something that we were working on for such a long time that I thought it was worth sharing.
Summary: What's going on in this release?
We are doing two things.
One is that we're shipping a new, updated, simplified, benchmarked, and audited runtime which sees major improvements really across the board, and new functionality and features for every subsystem.
And then it's introduction of Pioneer 2.
Pioneer, for those who don't know, is the governance app where you vote and stake and buy memberships and run for the elections in the council and forum, etc. That's all that has to do with participating in the system. Pioneer 2 is the sort of user facing application for doing that through a user interface.
It's a tremendous piece of work in terms of the infrastructure, the design, and the application development itself. There are a lot of pieces coming together, and we really could have released the runtime improvements that we already have, but it just doesn't make sense for us to try to take the version of Pioneer which is currently live, that we're calling Pioneer 1, and upgrade it to work with the new runtime.
It's just going to be a lot of work for very temporary benefits, so our thinking is that we really will go live once Pioneer 2 is ready, and that will simultaneously reveal a system which is quite different in many ways from what we see today.
The overall structure is the same but there will be important improvements everywhere.
Summary: I think the best way to get a flavor for what the Olympia runtime currently looks like, and remember it's a moving target whenever we develop something new that we're not ready to put out right away, it will sort of get go live in the Olympia runtime. We can put it in the context of what we currently expect will be in the main net runtime.
You could see that on the runtime side we're really getting there.
There are basically two major subsystems. Well, it is an open question whether the channel tokens and DAOs is a subsystem, but two big pieces that we haven't started on at all. Everything else is in some reasonable state of development.
In addition, we're working with SR Labs, one of the premier auditing firms that work with Polkadot and that whole ecosystem, and they've already audited a substantial part of our Olympia runtime to help us identify problems. That's gone really well, and we're probably going to do another audit once we're at the finishing line.
We've already done a very meaningful step towards getting production ready, I think, and at the same time we've also done benchmarking, as I mentioned prior. What is benchmarking? This is one of the important or necessary steps involved in deriving the fees that will be used in your blockchain.
If you're used to Ethereum, you will know that the fees associated with doing anything are computed on the fly, because the whole system is dynamic and the set of contracts changes.
In substrate there's sort of a step involved in the development process where you try to compute basically how expensive it is to do all the operations that people can do in the system - that's called benchmarking. That literally boils down to measuring how much time each action or transaction, if you will, takes on certain reference hardware.
We've done that for a big part of the system, we've built that in-house skill, and we will be doing that for all the modules that go into Olympia, which means we will have meaningful transaction fees as well. In the current runtime basically every transaction has the same nominal fee, which is sort of a random number; that won't be the case in Olympia.
There is an extra step from benchmarking to getting fees which is more about figuring out how much you're going to charge per unit of computation and per unit of block space, so to speak, in terms of your native token, but that's a smaller exercise.
Let me try to just briefly talk about some of the things that have changed. It would be way too much to try to cover all of it, but one of the very important things we've changed is what's referred to as the referendum module here, which has to do with electing the council.
You're now able to use stake that you're using for something else. Let's say you're a validator or you're staking as a working group lead or in a proposal or something, you're able to take that stake and redeploy it to vote or stand for the council.
This was a big step in the right direction in terms of making it much cheaper for people to participate in governance. In the current system that's live you have to pick whether you want to participate in governance or you want to stake, and then it's really easy to end up doing the selfish thing of just thinking about your own private returns on your own T-Joy account and stake, rather than thinking about managing the system overall. If everyone does that, it doesn't work out as well as we would like.
That's a very big change in the tokenomics of the system overall. That stake is reusable towards this one specific thing of participating in elections. We are also introducing the new content directory that I've talked about in Sumer.
We're introducing the idea of a constitution, which is a very simple idea. We're not, I think, the first chain to do this, but it's sort of a social commitment to all the conventions, standards, and improvement proposals, to use Bitcoin or Ethereum parlance, that live on the social layer of the system.
There are all sorts of metadata standards, for example, about how you encode an application for a working group. That would be in the constitution, and all sorts of policy things that the chain itself doesn't actually model and capture go into the constitution as well.
There's a council blog where the council can speak in one voice to the system.
We're adding crowdfunded bounties, which are a way for community members to fund the creation of all sorts of goods that can be useful for the platform without depending on the council to contribute. If you want to improve some software, or really anything, you can get people on the system to fund a bounty where someone is tasked with the responsibility of following up on the bounty and distributing the funds according to what people contribute.
I think that’s sufficient for you to just get a flavor for some of the things that are changing.
Summary: Then we have Pioneer itself.
Pioneer is the product where you actually engage in governance and participate in the community, so it's extremely important, given that this is a video platform DAO, and we have for a very long time been using and trying to maintain and evolve a fork of the Polkadot apps application.
That has a lot of limitations and problems, not least of which is that you can really only access information that's conveniently in the current state of the chain, and that really limits your ability to do all sorts of searches and queries and look back into history about who has done what at what time and what happened, which is a critical precondition for people to accumulate reputation and for you to be able to distinguish who is a good or bad fit for various positions and roles.
Pioneer 2 is really focused on this goal of conveniently lifting out all the historical information that exists in the system where you can understand what the history of a person is and also aggregating and summarizing a lot of the complicated state that is in the system into a more digestible form.
A lot of what enables that is, on one hand, of course, a product that's been redesigned from scratch by a team of excellent designers, but also this infrastructure piece called Hydra, which I'm going to talk about in the next update. It allows you to look through all of the transactions, all the events, and all the state in one simple query, and lets you do really cool things like, for example, search for anywhere you're mentioned in the forum or in a proposal, or look at all the times someone was fired, in one easy click.
There are all sorts of ways of lifting out all the information which currently is either not possible to get out or your application has to go and talk to an archival node for five minutes before it could fetch and filter and query and search for whatever you're looking for. Pioneer 2 is really a big piece of making it practically possible for the DAO to actually work.
That’s it - the changed runtime, Pioneer 2 – that’s what is coming up in Olympia.
Video 4 Olympia Network
00:01 All right, welcome to part three, glad you're still sticking with me here.
00:06 So, this is about the Olympia network.
00:10 The Olympia network is sort of like a mega release we've been working on for a long time sort of asynchronously with everything else and particularly on the runtime side, also on the Pioneer side, obviously, and I’m going to get to it.
0:21 And it's such a big release that it's not even scheduled to be the release immediately after the Sumer release.
00:32 The reason I'm sort of putting it on the table is because it's probably one of those big milestones; it may or may not be the last release before main net, we are probably going to have one or two big releases even after that, but it's a very important milestone for where we're trying to go.
00:51 And it's also something that we were working on for such a long time that I thought it was worth sharing.
00:58 So, what's going on in this release?
01:02 We are doing two things.
01:04 One is that we're shipping a new, updated, simplified, benchmarked, and audited runtime which sees major improvements really across the board and new functionality and features for, I would say, every subsystem.
01:24 And then it's the introduction of Pioneer 2.
01:28 Pioneer, for those who don't know, is the governance app where you vote and stake and buy memberships and run for the elections in the council and forum and blah blah.
01:36 So that's all the stuff that actually has to do with participating in the system.
01:43 Pioneer 2 is the sort of user facing application for doing that through a user interface.
01:51 And I want to say that really probably the big bottleneck for going live with Olympia is actually Pioneer itself.
02:01 It's a tremendous piece of work in terms of on the infrastructure, the design, the application development itself.
02:12 There are a lot of pieces that are coming together, and we really could have released the runtime improvements that we already have, but it just doesn't make sense for us to try to take the version of Pioneer which is currently live, that we're calling Pioneer one, and upgrade it to work with the new runtime.
02:31 It's just going to be a lot of work for very temporary benefits, so our thinking is currently that we really will go live once Pioneer 2 is ready, and that will simultaneously reveal a system which is quite different in many ways from what we see today.
02:50 The overall structure is, of course, the same but there will be, you know, important improvements everywhere.
02:55 So, I think the best way to get a flavor for what the Olympia runtime currently looks like, and remember it's a moving target: whenever we develop something new that we're not ready to put out right away, it will sort of go live in the Olympia runtime.
03:15 And we can sort of put it in the context of what we currently expect will be in the main net runtime.
03:20 You could see that on the runtime side we're really getting there.
03:23 There are basically two major subsystems, well, it is an open question whether the channel tokens and DAOs is a subsystem, but two big pieces that really we haven't started on at all.
03:40 Everything else is in some reasonable state of development, to put it that way.
03:45 In addition, again, my image is covering that, but we're working with SR Labs, one of the premier auditing firms that work with Polkadot and that whole ecosystem, and they've already audited a substantial part of our Olympia runtime to help us identify problems. That's gone really well, and we're probably going to do another audit once we're sort of at the finishing line.
04:15 But we've already done a very meaningful step towards getting production ready, I think, and at the same time we've also done benchmarking, as I mentioned prior.
04:25 So, what is benchmarking?
04:25 This is one of the important or necessary steps involved in deriving the fees that will be used in your blockchain.
04:36 If you're used to Ethereum, you will know that the fees associated with doing anything are sort of computed on the fly because the whole system is dynamic, and the set of contracts changes, and so on.
04:50 In substrate there's sort of a step involved in the development process where you try to compute basically how expensive it is to do all the operations that people can do in the system - that's called benchmarking.
05:05 That literally boils down to sort of measuring how much time each action or transaction, if you will, takes on certain reference hardware. I am skipping ahead here.
05:16 And we've done that for a big part of the system - we've sort of built that in-house skill, and we will be doing that for all the modules that go into Olympia which means we will have meaningful transaction fees as well.
05:31 I think in the current runtime basically every transaction has the same nominal fee, which is sort of a random number; that won't be the case in Olympia.
05:40 There is an extra step from benchmarking to getting fees which is more about figuring out how much you're going to charge per unit of computation and per unit of block space, so to speak, in terms of your native token.
05:57 But that's, you know, that's a smaller exercise.
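Putting those two steps together, here is a hedged sketch of how a benchmarked weight plus a pricing choice yields a fee (all constants and names invented for illustration):

```python
# Sketch of how benchmarked weights turn into fees (all numbers invented).
# Benchmarking yields a "weight" per transaction, roughly its measured
# execution time on reference hardware; the remaining, smaller exercise is
# pricing a unit of weight and a unit of block space in the native token.

WEIGHT_TO_FEE = 2   # token units per unit of weight (a pricing choice)
LENGTH_TO_FEE = 1   # token units per byte of transaction (a pricing choice)

def transaction_fee(benchmarked_weight: int, encoded_length: int) -> int:
    return benchmarked_weight * WEIGHT_TO_FEE + encoded_length * LENGTH_TO_FEE

# A cheap transfer and a heavier call now cost meaningfully different fees,
# instead of every transaction paying the same nominal amount.
assert transaction_fee(10, 100) < transaction_fee(500, 100)
```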
06:01 So, let me try to just briefly talk about some of the things that have changed.
06:04 It would be way too much to try to cover all of it, but one of the very, very important things we've changed is what's referred to as the referendum module here, which has to do with electing the council.
06:15 You're now able to use stake that you're using for something else. Let's say you're a validator or let's say you're staking as a working group lead or in a proposal or something, you're able to take that stake and redeploy it to vote or stand for the council.
06:34 This was, I think, a big step in the right direction in terms of making it much cheaper for people to participate in governance. In the current system that's live you really have to pick whether you want to participate in governance or you want to stake, and then it's really easy to get to basically do the, you know, the selfish thing of just thinking about your own private returns on your own T-Joy account and stake rather than thinking about, you know, managing the system overall.
07:07 If everyone does that, it doesn't work out as well as we would like.
07:10 That's a very big change in the tokenomics of the system overall.
07:15 That stake is basically reusable towards this one specific thing of participating in elections.
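The reusable-stake idea is in the spirit of overlapping balance locks, where an account needs enough balance to cover the largest lock rather than the sum of all locks. A toy sketch, with invented names and numbers:

```python
# Sketch of reusable (non-rivalrous) stake via overlapping locks: each
# purpose places a lock on the same balance, and the account only needs
# enough balance to cover the LARGEST lock, not the sum of all locks.

def required_balance(locks: dict) -> int:
    return max(locks.values(), default=0)

locks = {"validator": 1000}   # already staking as a validator
locks["council_vote"] = 800   # reuse that stake to vote in elections

# Under rivalrous staking you would need 1800 tokens; with overlapping
# locks the same 1000 tokens back both activities at once.
assert required_balance(locks) == 1000
```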
07:23 We are introducing obviously the new content directory that I've talked about in Sumer.
07:28 We're introducing the idea of a constitution which is a very simple idea, actually.
07:32 We're not, I think, the first chain to do this but, basically, it's sort of a social commitment to all the conventions and standards and, you know, improvement proposals, to use Bitcoin or Ethereum parlance, that are sort of on the social layer of the system.
07:53 There are all sorts of metadata standards, for example, about how you encode an application for a working group; that would be in the constitution, and all sorts of policy things that the chain itself doesn't actually model and capture go into the constitution.
08:09 There's a council blog where the council can sort of speak in one voice to the system.
08:17 We’re adding crowdfunded bounties which is basically a way for community members to fund the creation of all sorts of goods that can be useful for the platform where they don't depend on the council to contribute.
08:33 So, if you want to improve some software or really anything, you can get people on the system to fund a bounty basically where someone is tasked with the responsibility of following up with the bounty, and distributing the funds according to what people contribute and so on.
08:55 What else should I cover?
08:57 I think maybe that’s sufficient for you to just get a flavor for some of the things that are changing.
09:03 So, that's the Olympia runtime and some of the things that are being changed.
09:09 Then we have Pioneer itself.
09:12 Pioneer is the product where you actually engage in governance and participate in the community, so it's extremely important obviously given that this is a video platform DAO, and we have really for a very long time been using and trying to maintain and evolve a fork of the Polkadot apps application.
09:35 You know, that has a lot of limitations and problems, not least of which is that you can really only conveniently access information that's in the current state of the chain. That really limits your ability to do all sorts of searches and queries, and to look back into history at who has done what at what time and what happened, which is a critical precondition for people to accumulate reputation and for you to be able to distinguish who's a good fit for various positions and roles.
10:17 Pioneer 2 is really focused on this goal of conveniently lifting out all the historical information that exists in the system, so you can understand what the history of a person is, and also, frankly, aggregating and summarizing a lot of the complicated state in the system into a more digestible form.
10:41 And, well, a lot of what enables that is, on one hand, of course, a product that's been redesigned from scratch by a team of excellent designers, but also this infrastructure piece called Hydra which I'm going to talk about in the next update. It allows you to look through all of the transactions and all the events and all the state in one simple query, and to do really cool things like, for example, search for anywhere you're mentioned in the forum or in a proposal, or look at all the times someone was fired, in one easy click.
11:22 There are all sorts of ways of lifting out all the information which currently is either not possible to get out, or your application has to go and talk to an archival node for, you know, five minutes or something before it can fetch and filter and search for whatever you're looking for.
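As a flavor of what this unlocks, a query against such an indexed API might look roughly like the following GraphQL (the entity and field names here are hypothetical, not the actual Pioneer schema):

```graphql
# Hypothetical GraphQL query: all forum posts mentioning a member,
# newest first. Entity and field names are illustrative only.
query MentionsOfMember {
  forumPosts(
    where: { text_contains: "@alice" }
    orderBy: createdAt_DESC
    limit: 20
  ) {
    id
    text
    author { handle }
    createdAt
  }
}
```

A full node alone cannot answer this kind of question without replaying history; an indexed query node can return it in a single round trip.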
11:43 So, Pioneer 2 is really a big piece of making it practically possible for the DAO to actually work.
11:52 So, that is it.
11:54 The changed runtime, Pioneer 2 – that’s what is coming up in Olympia.
11:59 Thank you very much, see you soon for Hydra.
Summary Thank you for joining me on this part four on Hydra v3.
This is largely just going to be about what Hydra is but we are working towards v3 which is a major milestone for us in terms of the functionality that's needed, and we really think that with this release it's really becoming possible for people to build very powerful front-end applications for substrate chains using Hydra, so we're extremely excited about it.
It's actually a pretty astounding achievement to be able to build and manage this on top of everything else we're doing, because you will find many other projects where very large teams are entirely devoted to building something like Hydra. So, it's something we're really proud of, and we want to assist other people in adopting as well.
The best way to understand Hydra is in terms of what problem it is solving.
Summary Imagine a hypothetical blogging blockchain which is a substrate chain with the single purpose of implementing some kind of a social blogging platform. In fact, one well-known substrate project called Subsocial has actually implemented Hydra, so it's not entirely hypothetical, but just for the sake of argument imagine this kind of a blockchain.
You have users submitting extrinsics or transactions where they make threads and posts, etc.
Summary And then you can imagine building some sort of an application that's supposed to display something about this blogging infrastructure like allowing people to post, allowing people to read what's happening across different blogs and so on.
The naive way you would do it, and the way most apps for substrate have been built is that you just build some front-end app, you hook it up to your substrate full node, and it queries it in order to ask some simple questions about what the structure of the blog is and who's doing what.
Summary The problem that you'll pretty quickly run into is that there are a lot of very simple queries that are needed in order to render the user experience that people are used to, certainly in Web 2.0 world that just are not possible if you ask a full node directly.
If you ask for any of these examples, or really any number of other examples you could think of which are totally reasonable, the full node won't have a pre-prepared index over its history and state which would allow you to easily query and ask for those.
Usually either you have a front-end app which takes a very long time to sync up because it downloads everything, or a big part of what's in the state or history, in order to do processing on the client side to show the right thing for the user. That's slow, complicated, and, in general, just doesn't scale.
This is the problem that you'll run into writing any blockchain application: if you're going to make something that has a non-trivial user experience, you're going to have to somehow solve this problem. Specifically for substrate chains, Hydra is the framework approach we've taken to solve this.
Summary What is Hydra? It's a software framework that makes it very easy for someone who's developing a substrate chain, such as Joystream, to focus only on the parts that are relevant to them. They get this whole set of tools and nodes that automatically does everything else they need in order to provide this API which can, for example, respond to the sort of queries that I showed you before.
Summary We've been working on it for a while.
We actually were really proud to see that the Hackusama judges picked it as the first entrant winner in the open hack category, so that was really cool.
Summary From the way I am describing it, you may feel that you've heard of this before, in particular coming from the Ethereum space, and this is basically because this is very similar to what the Graph tries to do for Ethereum. Basically, it tries to give that kind of a service for smart contracts whilst we do it for standalone chains.
There are some big important differences between the Graph and the Hydra framework.
One important difference is that, at least the way the Graph used to work, the Graph company hosted a service where everyone who built an app talking to the Ethereum chain would talk to a server that the Graph company was running.
They were not that happy with that, as it's not really in the spirit of the Web 3 vision, so they always had the goal of building a distributed peer-to-peer type of network that would replace their role in provisioning that service. That's not an easy thing to do, but it's something they've started to roll out, so I think over the coming months or so there's going to be some version of what the hosted Graph does that is decentralized in some way. The way we run Hydra, and generally the way people are expected to run Hydra, is that the person who hosts the front-end application would pretty much run the query node instance. That's the way we're envisioning this being provisioned. Maybe we end up shipping a working group which has people running query nodes (this is what we call Hydra nodes, basically), where people are incentivized by our DAO to run them for the benefit of people using either Atlas or Pioneer or any other major front-end application, but this is one of those decisions where we are still quite early in terms of how it's going to be provisioned at scale.
Summary How is it that this actually works for a developer?
A developer has to define two things.
The whole point of Hydra is to alleviate the burden of having to do everything - talking to the chain, and managing a database, and putting your events in there, and making an API. It's a lot of work to get that to happen every time for a new substrate chain.
So, first, a developer has to define the way the data in their system is organized in a really nice, simple, GraphQL-like markup language. If we take the Subsocial example, you say that you have a blog, and you have posts, and blogs have authors, and posts are part of blogs. You define, in a very convenient way, how the data is organized.
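As an illustration of this first step, a data schema for the hypothetical blogging chain might look roughly like this. The `@entity` and `@derivedFrom` annotations follow Graph-style conventions; treat the exact names here as assumptions rather than the real Hydra syntax:

```graphql
# Hypothetical Hydra input schema for the blogging example.
# Entity and field names are illustrative only.
type Blog @entity {
  id: ID!
  author: String!
  posts: [Post!] @derivedFrom(field: "blog")
}

type Post @entity {
  id: ID!
  blog: Blog!
  text: String!
  createdAt: BigInt!
}
```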
Then you would define what are called mappings which are basically rules which say when I see this kind of an event or this kind of a transaction in the substrate chain, I'm going to put something in the database which will be queryable later - either put something or update something or delete something, basically, update the database that holds the information that the front-end apps are interested in.
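A mapping of the kind just described can be sketched as a plain function. This is a minimal illustration, assuming a hypothetical `PostCreated` event shape and a generic `Store` interface; it is not the actual Hydra API:

```typescript
// Minimal sketch of a Hydra-style mapping. All names here
// (PostCreatedEvent, Store) are illustrative assumptions.
interface PostCreatedEvent {
  postId: string;
  blogId: string;
  author: string;
  text: string;
}

// The query database, abstracted: mappings only ever read and write it.
interface Store {
  save(entity: string, row: Record<string, unknown>): Promise<void>;
}

// Runs whenever the processor encounters a PostCreated event on-chain:
// it materializes a row that the GraphQL server can later serve.
async function onPostCreated(store: Store, event: PostCreatedEvent): Promise<void> {
  await store.save("Post", {
    id: event.postId,
    blog: event.blogId,
    author: event.author,
    text: event.text,
  });
}
```

The processor simply replays events in block order and calls the matching mapping for each one, which is what keeps the query database in sync with the chain.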
If you provide these two, you get everything else for free.
Summary The way Hydra works in production is that your application talks to a GraphQL server which has the API (the API which will allow you to ask those pretty questions that I mentioned in the beginning of the slide deck), and that talks to a specific database which holds the data that I just talked about, which is managed by these mappings.
Then there is a processor which is this long running process that runs these mappings whenever it sees that the underlying full node has produced some new blocks and some new events and some new transactions. This indexer database holds a long-standing index of everything that’s happened in your substrate full node. That is the basic architecture that makes a single Hydra node come together.
You can think of the data schema as defining how the query database looks, and the mappings as the logic that runs in the processor. It's quite a nice abstraction.
We are extremely proud of having been able to have done that on a relatively small team.
A lot of these abstractions have been identified by the Graph, and they've done an amazing job, but it certainly has not been easy to do this with a smaller project with a separate purpose.
We're very happy about having developed this, and we hope more people will continue to adopt it.
That's the story on Hydra of which v3 is the next major release.
Video 5 Hydra v3
00:01 All right. So, thank you for joining me on this part four on Hydra v3.
00:07 Now this is largely just going to be about what Hydra is but we are working towards v3 which is a major milestone for us in terms of the functionality that's needed, and we really think that with this release it's really becoming possible for people to build very powerful applications, front-end applications for substrate chains using Hydra, so we're extremely excited about it.
00:35 As I’ll get to, you know, it's actually a pretty astounding achievement to be able to build and manage this on top of everything else we're doing because you will find many other projects that are very large teams entirely devoted to building something like Hydra, so it's something we're really proud of, and we want to assist other people in adopting as well.
00:55 So, Hydra. I guess the best way to understand it is just in terms of what problem it is solving.
01:03 Imagine a hypothetical blogging blockchain which is sort of a substrate chain which has the single purpose of sort of implementing some kind of a social blogging platform, and, in fact, one well-known substrate project called Subsocial, which is covered under my little video in the bottom right corner, actually has implemented Hydra so it's not entirely hypothetical, but just for the sake of an argument imagine this kind of a blockchain.
01:37 So, you have users submitting extrinsics or transactions where they make threads and posts, this sort of stuff.
01:46 So, that's pretty, you know, simple.
01:49 And then you can imagine building some sort of an application that's supposed to display something about this blogging infrastructure like allowing people to post stuff, allowing people to read what's happening across different blogs and so on.
02:06 So, the naive way you would do it and the way most apps for substrate have been built is that you just build some front-end app, you hook it up to your substrate full node, and it queries it in order to ask some simple questions about what the structure of the blog is and who's doing what and so on.
02:27 The problem that you'll pretty quickly run into is that there are a bunch of very simple queries that are needed in order to render the sort of user experience that people are used to, certainly in Web 2.0 world that just are not possible if you ask a full node directly.
02:45 If you ask for any of these examples and really any number of other examples you could think of as they are totally reasonable, the full node won't have a pre-prepared index over its history and state which would allow you to easily query and ask for those.
03:05 The thing that you see people doing is either you have like a front-end app which takes a very long time to sync up because it downloads everything or a large chunk of either what's in the state or history in order to do a bunch of processing on the client side in order to show the right thing for the user.
03:26 That's slow, complicated, in general just doesn't really scale.
03:29 This is really the problem that you'll run into writing really any blockchain application: if you're going to make something that has, I would say, a non-trivial user experience, you're going to have to somehow solve this problem.
03:44 And specifically for substrate chains, Hydra is the framework approach we've taken to solve this.
03:52 So, what is it? It's a software framework that makes it very easy for someone who's developing a substrate chain, such as Joystream, to focus only on the parts that are relevant to them.
04:06 They get this whole set of tools and nodes that automatically does everything else they need in order to provide this API basically which can, for example, respond to the sort of queries that I showed you before.
04:22 So, that's the Hydra framework.
04:26 We've been working on it for a while.
04:29 We actually were really proud to see that the Hackusama judges picked it as the first entrant winner in, I believe, the open hack category, so that was really cool.
04:46 For some of you, maybe the way I am describing it, you may feel like you've heard of this before, in particular coming from the Ethereum space, and this is basically because this is very very similar to what the Graph tries to do for Ethereum.
05:01 Basically, it tries to give that kind of a service for smart contracts whilst we do it for standalone chains.
05:12 There are some big important differences between the Graph and the Hydra framework.
05:16 One important difference is, well, at least before the way the Graph used to work was that the Graph company sort of hosted a service where everyone who built an app that was talking to the Ethereum chain would sort of just talk to a server that the Graph company was running.
05:37 They were sort of not that happy with that, it's not really sort of in the spirit of the Web 3 vision so they always had the goal of building this distributed peer-to-peer type of network that would replace their role in provisioning that service.
05:57 That's not an easy thing to do but that's something they've started to roll out, so I think over the coming months or so there's going to be some version of what the hosted version of the Graph does that is decentralized in some way.
06:14 I mean I could get into the details of what that is but I think that would be a big distraction here.
06:18 The way we run Hydra and generally people are expected to run Hydra is that the person who hosts the front-end application would pretty much run the query node instance.
06:30 That's sort of the way we're envisioning this being provisioned.
06:35 Maybe we end up shipping a working group which has people running query nodes (this is what we call Hydra nodes basically), where people are incentivized by our DAO to run them for the benefit of people using either Atlas or Pioneer or any other major front-end application, but this is one of those decisions where we are still quite early in terms of how it's going to be provisioned at scale.
07:05 How is it that this actually works for a developer?
07:08 What a developer has to do is they have to define two things.
07:12 The whole point of Hydra is to alleviate the burden of having to do everything like talking to the chain, and managing a database, and putting your events in there, and making an API.
07:25 It's a lot of work to get that to happen every time for a new substrate chain.
07:29 So, what a substrate developer has to do is, first, they have to just define the way the data in their system is organized in a really nice simple sort of GraphQL like markup language.
07:45 There you would say, for example, if we take the Subsocial example that you have, you know, a blog, and you have posts, and blogs have authors, and posts are part of blogs.
07:56 You would sort of define a very convenient way, in a way that developers are very comfortable with, how the data is organized.
08:03 And then you would define what are called mappings which are basically rules which say when I see this kind of an event or this kind of a transaction in the substrate chain, I'm going to put something in the database which then will be queryable later.
08:17 Either put something or update something or delete something, basically, update the database that holds the information that the front-end apps are interested in.
08:26 If you provide these two, you basically get everything else for free.
08:30 So, the way Hydra sort of works in production is that your application talks to a GraphQL server which has the API, that's the API which will allow you to ask those pretty questions that I mentioned in the beginning of the slide deck here that talks to a specific database which holds the data that I just talked about which is managed, sorted by these mappings.
08:58 Then there is a processor which is this long running process that runs these mappings whenever it sees that the underlying full node has produced some new blocks and some new events and some new transactions.
09:13 Basically, this indexer database holds a sort of long-standing index of all the stuff that's happened in your substrate full node.
09:25 That is the basic architecture that makes a single Hydra node sort of come together.
09:31 You can basically think of the schema as defining how the query database looks, and then you can think of the mappings as the logic that runs in the processor.
09:42 It's quite a nice abstraction.
09:44 We are extremely proud of having been able to have done that on a relatively small team.
09:51 A lot of these abstractions have been identified by the Graph, and they've done an amazing job, but it certainly has not been easy to do this with a smaller project with a separate purpose.
10:06 We're very happy about having developed this, and we hope more people will continue to adopt it.
10:10 That's the story on Hydra of which v3 is the next major release.
10:17 So, that's it, see you for the next video.
Summary: Hi, and welcome to part five of the Community Update.
Here I'll be going through some new specifications: some of them are finished, others are in progress, others we haven't started on - just to give people a flavor for some of the things that are coming down the pipe.
Summary: Let's start with the new storage distribution system that we have already started to implement.
The current storage system is the simplest possible thing you could imagine. It has an on-chain index of all the data that exists in the system – the hashes, the sizes of things, who owns them, etc. There is a designated role of storage provider, which means that you're obligated to store all the data that is in the system. That's the first clue that it really wouldn't work at scale. And you also have to distribute all the data that you are storing as a storage provider.
I think we've largely gotten away with it because there hasn't been a ton of load in the system because the publishing and consumer side of things hasn't been the way we have prioritized our development roadmap. We're a DAO and governance-focused project so we've invested a large share of our resources in developing that first.
In the last six months, maybe a little bit more than that, we've shifted our attention or, I should say, broadened our investment scope to also cover more on the content side, which is going to result in the storage system needing to handle a little bit more scale and a little bit more realistic policy space.
What's coming up in the v2 version?
There are a few highlights that are worth mentioning.
First of all, we're going to be separating the role of holding on to data and replicating internally in the infrastructure and distributing data to end users who are, for example, sitting in Atlas.
Those are two very different activities from an infrastructure point of view and from an economic point of view. One is about having very reliable infrastructure that doesn't explode or catch on fire, and that is not so bandwidth sensitive. Then you have this distribution activity which is about very quickly getting a much smaller subset of data to potentially a large number of people simultaneously. That's, for example, a role where it's important where you're located, who you expect will be in touch with you, and the latency involved. Those have been separated into two distinct roles.
Another big improvement is that not everyone has to either store everything, if they're a storage provider, or distribute everything, if they're a distributor. We don't have any erasure coding or other scheme that tries to avoid storing full replicas while keeping the degree of safety and redundancy that you want; that hasn't really been important to us. The first step has been to move away from everyone storing everything. Maybe we will incorporate that in the future but I'm not sure how important it is for the mainnet level of load that we're imagining.
Another big difference is that not only members or, I should say, channels can store data in the storage system and have it distributed. The council and working groups can store assets as well, which is very important because there's going to be an increasing set of assets of all kinds (binaries and source code and documents of different kinds) that you want different parts of the system to be able to persist as different people flow through those roles. That's why we've introduced the capability for those different subsystems to have their own designated storage spaces.
Also, we're taking much more seriously the need to be able to reclaim space or delete content. That's something that really hasn't worked in any sort of scale, even with Sumer.
Lastly, we are allowing the distribution policy, that is how you allocate your bandwidth resources across space and time, to be much more flexible. For a given channel or a given piece of content there's going to be a predictable geographic bias in who will want to access it quickly. If you're some Spanish cooking show, overwhelmingly there are some parts of the world which are going to want to access that content, and you would want to be able to optimize the location of the distribution infrastructure that services those, as opposed to something else like a Finnish knitting show.
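As a rough illustration of what a more flexible distribution policy could look like, here is a hypothetical per-channel allocation; none of these names come from the actual v2 specification:

```typescript
// Hypothetical per-channel distribution policy: replica counts per
// distributor region, biased toward where viewers are expected.
// All names are illustrative, not the real Joystream v2 schema.
interface RegionAllocation {
  region: string;
  replicas: number;
}

interface DistributionPolicy {
  channelId: string;
  allocations: RegionAllocation[];
}

// A Spanish cooking show biased toward Spanish-speaking regions,
// with a thin global fallback for everyone else.
const policy: DistributionPolicy = {
  channelId: "channel-1234",
  allocations: [
    { region: "eu-south", replicas: 3 },
    { region: "latam", replicas: 3 },
    { region: "global", replicas: 1 },
  ],
};

// Total bandwidth commitment implied by the policy.
const totalReplicas = policy.allocations.reduce((sum, a) => sum + a.replicas, 0);
```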
This level of sophistication is more than enough for mainnet purposes.
Summary: Then we have the concept of gateways.
The issue the gateways attempt to address is the fact that it's really important for the tokenomics of the system to work specifically in the sense that if you're a user coming in to view content, you're consuming obviously expensive infrastructure resources, like bandwidth and storage, but you're also enjoying the fact that someone has made a fixed investment of creating the content that you're also viewing.
For the system to work overall there has to be a way to get the viewers to contribute some value back to the platform and everyone else. The obvious way to do that is just requiring all the viewers to hold Joy tokens, create memberships, have a signer extension in their browser, find their way to some front-end application hosted somewhere, and acquire Joy in some way in order to view the content.
I think it goes without saying that would be a huge barrier to entry, and it would really restrict your ability to onboard people who don't know anything about crypto, and don't know how to, or don't want to, deal with how to acquire it, manage it, store it, and spend it. It's still not a great experience if, every time you want to watch something, a big signer dialog pops up and asks you to sign off on spending some Joy. Even if you made it a little bit lumpier, like paying for x number of views or for some period of time, it's still a very steep onboarding experience.
I think, one of the main things we have to unlock is a way for a general audience to, in an economically sustainable way, enjoy and consume the content, and that's what gateways are supposed to do.
Gateways are front-end operators who are free to monetize and own the relationship with the end user in whatever way they see fit. They can monetize through advertising, they can monetize through some in-app purchase in some app store, maybe on a smart TV - they're free to do that however they see fit.
And specifically, this ability to support advertising is pretty important in order to be able to reach scale within a timely manner; you definitely would need to allow that at least in the mix. And that certainly requires you to be able to own the relationship and own the front-end, primarily to avoid abuse and other things that will happen if you don't do that properly.
Gateways have a business model around delivering a front-end user experience, owning the relationship with the end user, and taking on the burden of acquiring Joy and burning it in order to actually give their registered users access to the infrastructure and to the content. They internalize all the small transaction costs of everyone trying to do that on their own. The gateways do that on their behalf, and they have long-standing relationships with infrastructure providers, with the leads, and with the gateway working group.
You should think of them as a new role that makes it much easier to acquire and retain users who are not eager to instantly jump on the Joy bandwagon and acquire tokens in order to use the application.
The gateways are really important. It is not clear exactly when this lands; the work will probably happen in parallel with the v2 storage system work, but it's probably not going to come out for at least two or three networks into the future.
Summary: Then we get to channel tokens and DAOs.
This is something I'm really excited about.
They are called social tokens. It's a way for creators and small communities to issue tokens that give you a claim on the value that's generated by a channel. I suppose we could also tie them to videos, but this specific specification has to do with channels and the revenues that channels generate. It gives you governance rights in how that channel is managed, to the extent that the channel token issuer is interested in doing that, and it really tries to formalize something that's been attempted a good number of times.
For people who have been in the space for a while there was something called Tatiana coin which tried to do a simpler version of this where you would buy it, and that would give you the right to a certain number of songs. This was a musician smart media token.
There was a Steemit initiative which was supposed to give you the ability to create a community or monetize your community by issuing a token tied to it. I'm not entirely sure how the tokenomics was supposed to work. I think it was perhaps a little bit more speculative where it wasn't clear where the value would come from but here the value is really supposed to come from the value generated by the channel itself.
I’m not sure how we're going to explain this but the idea itself is something that's been around for a while. If you're a creator, you can issue one of these tokens for your channel to raise Joy in order to fund various expenditures, and the channel tokens can also be traded.
Summary: Lastly, we have crowdfunded bounties.
This has actually been implemented already.
This is an idea for solving the problem that sometimes community members want to organize among themselves in order to produce some sort of public good that has a platform-wide benefit, or maybe a benefit within some subsection of the community, where it's not worth it, or not clear that it's feasible, to get the council with all of its priorities to actually accept and fund it; or maybe there are budget constraints for the council, so they couldn't even do it if they wanted to.
The idea is to implement something called an assurance contract, which is very similar to, I guess it was called tipping point at one point; I don't remember now. It was this huge startup which was trying to incentivize collective action by saying "I'm going to do something only if a sufficient number of other people, or a sufficient amount of money, has been dedicated to it".
To some extent you could think of the free state project in the United States as a similar type of initiative for political collective action, but basically, it's the same idea where you can make a bounty and you could say “this is going to fund x if y amount of funds is provided within a certain amount of time or at any time”, so it just runs forever.
If the funds are secured, people can come and work on a bounty, and there's going to be a dedicated person for each bounty who's assigned to adjudicate whether someone's contribution is good or bad or worthy, and how the funds should be distributed.
So, basically, a bounty system combined with a crowdfunding system.
There's actually a little bit more sophistication in this because we're also trying to model something called the dominant assurance contract, which tries to make it incentive compatible to contribute to one of these. It allows the bounty to be owned by an entrepreneur who puts up a little bit of money, where, if the people who contribute to the bounty do so and it fails to reach the goal, whatever the goal is, they all get to split the little prize (called the cherry) in the bounty that's provided by the entrepreneur.
Let's say you want to make a smart TV app for Joystream. You could make one of these bounties where only you could work on it, so only you would get the raised funds. You put up, for example, two thousand dollars which will be released to the funders if an insufficient number of people end up contributing to reach whatever goal you need, say 120,000, in order to do this. That actually makes it in the interest of people, who would otherwise sit idle and not contribute, to participate, because they get to speculate on the outcome that it doesn't actually work.
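The payoff logic of that example can be sketched numerically. This is a toy model under stated assumptions (a pro-rata cherry split, and the 2,000 cherry and 120,000 goal from above); it is not the actual runtime logic:

```typescript
// Toy model of a dominant assurance contract's failure path.
// Assumption: the cherry is split pro rata among pledgers when the
// funding goal is missed; numbers match the example in the text.
interface Bounty {
  goal: number;      // funds required for the work to proceed
  cherry: number;    // entrepreneur's forfeit on failure
  pledges: number[]; // individual contributions
}

// What each pledger receives if the bounty fails: their pledge back
// plus a pro-rata share of the cherry, which is why contributing can
// pay off even when you expect the bounty to fail.
function payoutOnFailure(b: Bounty): number[] {
  const raised = b.pledges.reduce((s, p) => s + p, 0);
  if (raised >= b.goal) {
    throw new Error("goal met: funds go to the work, cherry returns to owner");
  }
  return b.pledges.map((p) => p + b.cherry * (p / raised));
}

// Two pledgers of 10,000 and 30,000 against a 120,000 goal with a
// 2,000 cherry: the bounty fails, and they recover 10,500 and 31,500.
const payouts = payoutOnFailure({
  goal: 120_000,
  cherry: 2_000,
  pledges: [10_000, 30_000],
});
```

Each pledger's downside is thus bounded while the cherry gives them a small positive return on failure, which is the incentive-compatibility property the text describes.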
It's going to be some time until it's actually exposed in Pioneer so you can use it but on the runtime side this all has already been implemented.
There are other things but I think these major four new specifications are the most interesting ones to cover at the moment.
Video 6 New Specifications
00:01 Hi, and welcome to part five of the Community Update.
00:06 Here I'll be going through some new specifications for features that are either, actually some of them are finished, others are in progress, others we haven't started on, just to give people a flavor for some of the things that are coming down the pipe.
00:23 So, let's start with the new storage distribution system.
00:26 This we have already started to implement.
00:31 A quick sort of refresher for those who may not know.
00:34 The current storage system is sort of the simplest possible thing you could imagine.
00:37 It has an on-chain index of all the data that exists in the system – the hashes, the sizes of things, who owns them, and so on.
00:48 There is a designated storage provider role, which obligates you to store basically all the data that is in the system.
00:57 So, that's the first clue that really wouldn't work at scale.
01:02 You also have to distribute all the data that you are storing as a storage provider.
01:08 That's sort of the way the system works today.
01:11 I think we've largely gotten away with it because there hasn't been a ton of load in the system – the publishing and consumer side of things hasn't been prioritized in our development roadmap.
01:25 We're a DAO and governance-focused project so we've invested a large share of our resources in developing that first.
01:36 In the last six months, maybe a little bit more than that, we've broadened our investment scope to cover more of the content side, which is also going to mean the storage system needs to handle a bit more scale and a more realistic policy space.
02:01 It couldn't have come soon enough.
02:04 What's coming up in the v2 version?
02:06 There are a few highlights that are worth mentioning.
02:11 First of all, we're going to be separating the role of holding on to data and replicating internally in the infrastructure and distributing data to end users who are, for example, sitting in Atlas.
02:25 Those are really two very different activities from an infrastructure point of view, from an economic point of view.
02:31 One is about having very reliable infrastructure that doesn't explode or, you know, catch on fire or whatnot.
02:38 It's not so bandwidth sensitive.
02:40 Then you have this distribution activity which is about very quickly getting a much smaller subset of data to potentially a large number of people simultaneously.
02:53 That's a role where it matters, for example, where you're located, who you expect to serve, the latency involved, and so on.
03:05 So, those have been separated into two distinct roles.
03:08 Another big improvement is obviously that not everyone has to store everything (if they're a storage provider) or distribute everything (if they're a distributor).
03:19 It's sharded: the data is partitioned across different families of storage providers, so each family stores only its own part.
03:31 We don't have erasure coding or any other scheme that tries to avoid storing full replicas while keeping the degree of safety and redundancy you want; that hasn't really been important to us.
03:50 The first step has really just been to move away from everyone storing everything.
03:56 Maybe we will incorporate that in the future.
04:01 I think Sia, for example, has that, but I'm not sure how important it is for the mainnet level of load that we're imagining.
04:11 So, that's a very big difference.
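The sharding idea described above can be sketched as a toy model (the names here are illustrative, not the actual Joystream runtime API): each data object maps deterministically to one family of storage providers, every object is fully replicated within its family, and no single provider stores everything.

```python
import hashlib

def assign_family(object_id: str, num_families: int) -> int:
    """Deterministically map a data object to one storage-provider family."""
    digest = hashlib.sha256(object_id.encode()).digest()
    return int.from_bytes(digest[:8], "big") % num_families

# Each family fully replicates its own shard, so redundancy comes from
# full replicas within a family rather than from erasure coding.
families: dict = {i: [] for i in range(4)}
for obj in ["video-1", "video-2", "thumbnail-9", "report-3"]:
    families[assign_family(obj, 4)].append(obj)
```

The same content hash always lands in the same family, so replication and lookup need no extra coordination.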
04:13 Another big difference is that not only members or, I should say, channels can store stuff in the storage system and have it distributed.
04:21 The council and working groups can store assets as well, which is very important because there's going to be an increasing set of assets of all kinds – binaries, source code, documents of different kinds – that you want different parts of the system to be able to persist as different people flow through those roles.
04:45 That's why we've introduced the capability of those different subsystems to have their own designated storage spaces, so to speak.
04:53 Also, we're taking much more seriously the need to be able to reclaim space or basically delete content.
05:00 That's something that really hasn't worked in any sort of scale even with Sumer.
05:05 So, that's something we're also introducing.
05:08 Lastly, we are allowing the distribution policy – basically how you allocate your bandwidth resources across space and time – to be much more flexible, because for a given channel or a given piece of content there's going to be a predictable geographic bias in who will want to access it quickly.
05:37 If you're some Spanish cooking show, overwhelmingly there are some parts of the world that are going to want to access that content, and you would want to optimize the location of the distribution infrastructure that serves them, as opposed to something else like a Finnish knitting show.
05:59 That's another very important distinction.
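As a rough illustration of such a policy (the channels, regions, and node names are entirely hypothetical, not the actual specification), a request could be routed to a distributor optimized for the viewer's region, with a global fallback:

```python
# Hypothetical distribution policy: route a viewer to a distributor that is
# optimized for their region, falling back to a global node otherwise.
POLICY = {
    "spanish-cooking-show": ["es", "mx", "ar"],
    "finnish-knitting-show": ["fi"],
}
DISTRIBUTORS = {
    "es": "node-madrid",
    "mx": "node-cdmx",
    "fi": "node-helsinki",
    "*": "node-global",
}

def pick_distributor(channel: str, viewer_region: str) -> str:
    """Return the distributor node that should serve this request."""
    preferred = POLICY.get(channel, [])
    if viewer_region in preferred and viewer_region in DISTRIBUTORS:
        return DISTRIBUTORS[viewer_region]
    return DISTRIBUTORS["*"]
```

The point of the flexibility is exactly this kind of per-channel bias: the Spanish cooking show gets served from nearby nodes, while everyone else hits the fallback.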
06:05 This level of sophistication is more than enough for main net purposes.
06:10 So, that's the v2 storage distribution system, and the work has already started.
06:14 Then we have the concept of gateways.
06:20 The issue gateways attempt to address is that it's really important for the tokenomics of the system to work: if you're a user, a consumer coming in to view content, you're consuming expensive infrastructure resources like bandwidth and storage, but you're also enjoying the fact that someone has made a fixed investment in creating the content you're viewing.
06:53 For the system to work overall there has to be a way to get the viewers to contribute some value back to the platform and everyone else.
07:02 The obvious way to do that is just requiring all the viewers to hold the Joy token, create memberships, have a signer extension in their browser, find their way to some front-end application hosted somewhere, and acquire Joy in some way in order to view the content.
07:25 I think it goes without saying that would be a huge barrier to entry, and it would really restrict your ability to onboard people who don't even know anything about crypto, don't know how to or don't want to deal with how to acquire it, how to manage it, how to store it, how to spend it.
07:42 It's still not a great experience if, every time you want to watch something, a big signer prompt pops up and asks you to sign off on spending some Joy.
08:00 Even if you made it a little lumpier – say you paid for x number of views or for some period of time – it's still a very steep onboarding experience.
08:20 I think, one of the main things we have to unlock is a way for just a general audience to, in an economically sustainable way, enjoy and consume the content, and that's what gateways are supposed to do.
08:32 Gateways are basically front-end operators who are free to monetize and own the relationship with the end user in whatever way they see fit.
08:43 If they monetize through advertising, that's fine, if they monetize through some sort of in-app purchase in some app store, maybe on a smart TV, I don't know, they're free to do that however they see fit.
08:59 And specifically this ability to support advertising, which I think is pretty important for reaching scale in a timely manner – you definitely need to allow that at least in the mix – certainly requires you to own the relationship and the front-end, primarily to avoid the abuse and other problems that will happen if you don't do that properly.
09:27 What gateways do is build a business model around delivering a front-end user experience and owning the relationship with the end user, and they take on the burden of acquiring Joy and burning it in order to give their registered users access to the infrastructure and the content.
09:50 They sort of internalize all the small transaction costs of everyone trying to do that on their own.
09:55 The gateways do that on their behalf, and they have long-standing relationships with infrastructure providers, with the leads and the gateway working group and so on.
10:06 You should think of them as a new role that makes it much easier to acquire and retain users who are not eager to instantly jump on the Joy bandwagon and acquire the token in order to use the application.
10:25 Those are gateways.
10:27 They're really important, and the work on them will probably happen somewhat in parallel with the v2 storage system work, but they're probably not going to come out for at least two or three networks into the future.
10:45 So, those are gateways.
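A minimal sketch of the economic role just described, assuming a simple per-view cost (the class and numbers are illustrative, not the actual gateway design): the gateway meters views from its registered users and periodically burns one aggregate amount of Joy on their behalf, so end users never touch the token or a signer.

```python
class Gateway:
    """Toy gateway: owns the user relationship, settles Joy in bulk."""

    def __init__(self, joy_balance: float, cost_per_view: float):
        self.joy_balance = joy_balance
        self.cost_per_view = cost_per_view
        self.pending_views = 0

    def record_view(self) -> None:
        # The end user never touches Joy or a signer; the gateway meters usage
        # and can monetize however it likes (ads, subscriptions, in-app purchases).
        self.pending_views += 1

    def settle(self) -> float:
        """Burn Joy for all pending views in one aggregate step."""
        burn = self.pending_views * self.cost_per_view
        if burn > self.joy_balance:
            raise RuntimeError("gateway must acquire more Joy before settling")
        self.joy_balance -= burn
        self.pending_views = 0
        return burn
```

This is the sense in which gateways "internalize" the small transaction costs: many per-view charges collapse into one settlement against the gateway's own Joy balance.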
10:45 Then we get to channel tokens and DAOs.
10:48 This is something I'm really excited about.
10:51 This is sort of, I think now it's being called social tokens.
10:56 Basically, it's a way for creators and small communities to issue tokens that give you a claim on the value that's generated by a channel.
11:09 I suppose we could also tie this to videos, but this specific specification has to do with channels and the revenues that channels generate. It gives you governance rights over how that channel is managed, to the extent the channel token issuer is interested in that, and it really tries to formalize something that's been attempted a good number of times.
11:31 For people who have been in the space for a while, there was something called Tatiana Coin – this was a musician, I think – which basically tried to do something like this, well, a simpler version of this, where buying it would give you the right to, I believe, a certain number of songs or something.
11:48 Smart Media Tokens are perhaps a little bit closer to this.
11:54 That was a Steemit initiative which was supposed to give you the ability to create a community, or monetize your community, by issuing a token tied to it; I'm not entirely sure how the tokenomics was supposed to work.
12:05 I think it was perhaps a little bit more speculative where it wasn't clear where the value would come from but here the value is really supposed to come from the value generated by the channel itself.
12:19 So, that’s channel tokens, or, as we're calling them, channel DAOs – social tokens.
12:24 I’m not sure how we're going to explain this, but the idea itself has been around for a while. Of course, if you're a creator, you can issue one of these tokens for your channel to raise Joy in order to fund various expenditures, and you could obviously trade the channel tokens as well.
12:53 So, those are the channel tokens.
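As a toy illustration of the claim-on-revenue idea (the actual accounting in the specification will differ), channel revenue could be split pro-rata among token holders:

```python
def split_revenue(revenue: int, holdings: dict) -> dict:
    """Split channel revenue pro-rata among token holders (integer floor).

    `holdings` maps holder id -> number of channel tokens held.
    """
    supply = sum(holdings.values())
    return {holder: revenue * amount // supply
            for holder, amount in holdings.items()}
```

So holding a quarter of the supply entitles you to roughly a quarter of what the channel earns, which is what gives the token its non-speculative value.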
12:56 And then, lastly, we have crowdfunded bounties.
12:58 This has actually been implemented already.
13:03 So, this is an idea for solving the problem that sometimes community members want to organize among themselves to produce some public good with a platform-wide benefit, or maybe a benefit to some subsection of the community, which it isn't worth it, or isn't clearly feasible, to get the council, with all of its priorities, to accept and fund – or where budget constraints mean the council couldn't do it even if it wanted to.
13:38 The idea is to implement something called an assurance contract, which is basically very similar to – I guess it was called tipping point at one point, I don't remember now, it was this huge startup – trying to incentivize collective action by saying "I'm going to do something only if a sufficient number of other people, or a sufficient amount of money, is dedicated to it".
14:08 To some extent you could think of the Free State Project in the United States as a similar type of initiative for political collective action, but basically it's the same idea: you can make a bounty and say "this is going to fund x if y amount of funds is provided within a certain amount of time", or at any time, so it just runs forever.
14:30 If the funds are secured, people can come and work on the bounty, and there's going to be a dedicated person for each bounty who is assigned to adjudicate whether someone's contribution is worthy and how the funds should be distributed.
14:48 So, basically, a bounty system combined with a crowdfunding system.
14:53 There's actually a little more sophistication to this, because we're also trying to model something called the dominant assurance contract, which tries to make contributing incentive-compatible by allowing the bounty to be owned by an entrepreneur who puts up a little bit of money. If the bounty fails – it doesn't reach its goal, whatever the goal is for whatever purpose – everyone who contributed gets to split that little prize, called the cherry, provided by the entrepreneur.
15:39 To make it concrete: let's say you want to make a smart TV app for Joystream. You could make one of these bounties where only you could work on it, so only you would get the raised funds, and you put up, say, two thousand dollars, which will be released to the funders if an insufficient number of people contribute to reach the goal – say you need 120,000 to do this.
16:13 That actually makes it in the interest of people who would otherwise sit idle and not contribute, because they get to speculate on the outcome that it doesn't actually work.
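The dominant assurance mechanism can be sketched as follows, under the simplifying assumptions that there is a single settlement step at the deadline and the cherry is split pro-rata by contribution size (the real runtime logic is more involved):

```python
def settle_bounty(target: int, cherry: int, contributions: dict):
    """Settle a toy dominant assurance contract at its deadline.

    Funded: the entrepreneur receives the raised funds and does the work.
    Failed: contributors are refunded and split the cherry pro-rata,
    so contributing is attractive even if you expect the goal to fail.
    Returns (payouts, refunds).
    """
    total = sum(contributions.values())
    if total >= target:
        return {"entrepreneur": total}, {}
    refunds = {
        contributor: amount + cherry * amount // total
        for contributor, amount in contributions.items()
    }
    return {}, refunds
```

With the smart TV example above, a failed 120,000 goal refunds each backer plus their share of the 2,000 cherry, which is exactly what turns idle speculators into contributors.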
16:25 So, that is already implemented. It's going to be some time until it's actually exposed in Pioneer so you can use it, but on the runtime side this has all been implemented.
16:35 There are other things but I think these major four new specifications are the most interesting ones to cover at the moment.
16:44 That's it, see you in the next video.
Summary: This is the last section of the first update.
Here we're going to talk about what we're doing on the community side.
We have a few different things going on in the way we're building the Joystream community up until main net but I think the most important initiative by far is the founding members program.
The point of the community program in the Joystream project is to build a DAO which is capable of operating the platform and evolving on its own autonomously based on the technology and the policies and the processes that we've established before main net.
At main net the community will fully run this system, and that requires you to have a lot of people with a lot of different skills, interests, infrastructure, etc.
The founding member program is about identifying the people who can play that role, motivating them to put in the effort to learn and develop their own ideas for what should happen after the main net launch.
For that matter, it is about what we should be doing prior to launch to make the tools and the documentation as effective as possible, and then distributing to these community members the token required to exercise the governance that allows that evolution to happen.
Summary: The founding member program is really the program for trying to find those specific people, identifying them, celebrating them, rewarding them and following their lead towards main net.
So far, we've inducted five members, and that has largely been based on the contributions of these individuals in the past before the program began. Most of these are making an exceptional contribution to the development of the system on an ongoing basis.
Summary: These are names that you will recognize for sure once the main net launches, but they are closely followed by a good number of people who are trying to make it through.
The primary and possibly the only way for the broad community to get access to the Joy token is by becoming a founding member, which means that you need to, first of all, not be a US person, unfortunately. Secondly, you need to contribute by periodically submitting summaries of what you've done and who you've referred to the platform, which allows you to accumulate these scores.
There are two components to the score.
The direct score is basically a representation of your direct contribution - what you do in terms of technical, community, social, whatever it is you do to actually help grow the community and help us go in the right direction.
And there's a referral score, which is a way of measuring how effective you've been at drawing in other people who themselves have high scores. Then based on that you have a total score, and that total score counts towards becoming a founding member.
The policy that we're applying is dynamic, it's evolving so you are going to have to look up the most recent summary of what has happened in one of these scoring periods to understand what's going on, what's being emphasized. You'd have to go to the website to see where you rank on the leaderboard. But I expect we will pretty soon see some new founding members.
It's an interesting question how many we will actually need by main net – how many people it takes, and what the distribution of stake should be for them to be effective.
This is something we will never get the answer to entirely, but also we'll calibrate our policy to aim towards what our best thinking is at any given time. There are different ways of earning these scores. One, as I mentioned, is by referring people.
Summary: You can work on bounties, which is another major way in for people who are perhaps not able to make it into one of the roles on the network – there's a limited number of them, and you may need to be quite technical to do a lot of them.
The bounty program is a way for the community members to contribute in other ways.
If you have ideas for bounties that you would want to do, or that you think other people could do – everything from marketing and translating texts to making tutorials and troll videos – I'm sure you will get points for coming up with them and helping us broaden our portfolio of available bounties; there's a lot that's going to be coming out.
Summary: Obviously, there are the roles in the system itself.
Being a content creator means publishing content under a channel that you own.
Being a curator means that you either own or operate channels or you are responsible for making sure that the content in the content directory follows the rules and policies that apply at any given time.
The details of what that entails are a bit of a work in progress, but being a curator lead means that you manage that group, just as I discussed with working groups in the prior video.
I think the validator role is one that a lot of people try to do because it's a set-it-and-forget-it type of activity and it's straightforward how to do it; that will definitely also grant you points.
Video 7 Community
00:01 Okay, this is the last section of the first update.
00:04 Very impressed that you made it this far.
00:08 Here we're going to talk about what we're doing on the community side.
00:11 We have a few different things going on in the way we're building the Joystream community up until main net but I think the most important initiative by far is the founding members program.
00:27 I guess maybe for a bit of context – what is the point of the community program in the Joystream project? What we're trying to do here is build a DAO which is capable of operating the platform and evolving on its own, autonomously, based on the technology, the policies, and the processes that we've established before main net.
00:51 The goal is really at main net you guys, the community, will fully run this system, and that requires you to have a lot of people with a lot of different skills, interests, infrastructure, all sorts of things, and that's what we're trying to get to.
01:11 The founding member program is about identifying the people who can play that role, motivating them to actually put in the effort to learn and develop their own ideas for what should happen after the main net launch.
01:24 For that matter what we should be doing prior to launch in order to make the tools and the documentation, everything as effective as possible, and, of course, to then distribute to these community members the token which is required to actually exercise the governance that allows that evolution to happen.
01:47 The founding member program is really the program for trying to find those specific people, identifying them, celebrating them, rewarding them and following their lead towards main net.
02:01 So far, we've inducted five members, and that has largely been based on the contributions of these individuals in the past before the program began.
02:14 Most of these are making an exceptional contribution to the development of the system on an ongoing basis.
02:22 These are names that you will recognize for sure once the main net launches, but they are closely followed by a good number of people who are trying to make it through.
02:37 The way you become a founding member – and I should say this is the primary and possibly the only way for the broad community to get access to the Joy token – is that you need to, first of all, not be a US person, unfortunately, and secondly, you need to contribute by periodically submitting summaries of what you've done and who you've referred to the platform, which allows you to accumulate these scores.
03:21 There are two components to the score.
03:21 The direct score is basically some sort of a representation of your direct contribution, what you do in terms of technical, community, social, whatever it is you do to actually help grow the community and help us go in the right direction.
03:39 And there's a referral score, which is basically a way of measuring how effective you've been at drawing in other people who themselves have high scores.
03:53 Then based on that you have a total score, and that total score counts towards becoming a founding member.
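As a purely illustrative sketch of how a total score could combine the two components (the real weighting is policy-defined and changes between scoring periods, so `referral_weight` here is an assumed placeholder):

```python
def total_score(direct: float, referred_scores: list,
                referral_weight: float = 0.1) -> float:
    """Total score = direct contribution score + weighted referral component.

    The referral component rewards drawing in people who themselves
    earn high scores; the weight is a hypothetical parameter.
    """
    return direct + referral_weight * sum(referred_scores)
```

So referring two people who go on to score 100 and 200 would, under this assumed weighting, add 30 points on top of your own direct score.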
04:00 The policy that we're applying is dynamic and evolving, so you are going to have to look up the most recent summary of what has happened in one of these scoring periods to understand what's going on and what's being emphasized; as for your score, you'd have to go to the website to see where you rank on the leaderboard.
04:22 But I expect we will pretty soon see some new founding members.
04:27 It's an interesting question how many we will actually need by main net – how many people it takes, and what the distribution of stake should be for them to be effective.
04:39 This is something, I think, we will never get the answer to entirely, but we'll probably calibrate our policy to aim towards what our best thinking is at any given time.
04:54 So, that's how you get into the founding member program.
04:58 There are different ways of earning these scores.
05:02 Obviously, as I mentioned, you could refer people, that's one way.
05:05 You can work on bounties, which is another major way in for people who are perhaps not able to make it into one of the roles on the network – there's a limited number of them, and you may need to be quite technical to do a lot of them.
05:29 So, the bounty program is a way for the community members to contribute in other ways.
05:37 Definitely, if you have ideas for bounties that you would want to do, or that you think other people could do – everything from marketing and translating texts to making tutorials and troll videos – I'm sure you will get points for coming up with them and helping us broaden our portfolio of available bounties; there's a lot that's going to be coming out.
06:02 So, that's the bounty program.
06:03 Obviously, there are the roles in the system itself.
06:08 Being a content creator means publishing content under a channel that you own.
06:14 Being a curator basically means that you either own or operate channels, or you are responsible for making sure that the content in the content directory follows the rules and policies that apply at any given time.
06:28 I think that's a little bit of work in progress what the details of that actually entail, but being a curator lead means that you manage that group just as I discussed in the prior video with working groups.
06:40 Being a validator, I think, is one of those roles a lot of people try because it's a set-it-and-forget-it type of activity and it's kind of straightforward how to do it; that will definitely also grant you points.
06:55 But I think the way you should be thinking about this is probably the more unique and the more substantial your contributions are, the more points you're likely to earn.
07:09 So, if you see that there are tons of other validators, that's probably not the best place for you to go if you're trying to distinguish yourself.
07:16 Obviously, being on the council is one of the most important roles, it's one of those roles that requires you to understand more of the platform as a whole so it's a great opportunity to learn to develop relationships with other founding members and other non-founding members that are trying to get there.
07:36 We have storage providers and the lead for that group.
07:41 I think that's basically the main roles that are live on the Antioch network, I'm maybe forgetting something but I think that's right.
07:52 So, those are the ways you can earn points. I should say that for all these roles you will of course also earn testnet tokens, which have monetary value, so not only are you getting points towards actually becoming a founding member, you're also getting cash, basically, for whatever it costs you in terms of infrastructure and time and so on.
08:16 So, it should be pretty attractive.
08:19 That's, I think, a good concise summary of what we're doing on the community side.
08:22 Thank you for watching this video and watch out for the next community update. Bye!