
Should you use Redstone for your next onchain game?

· ~12 min read


The Lattice team recently announced Redstone - a new L2 built using their new contribution to the OP Stack (the stack that powers the Optimism L2).

The question on everyone's mind is therefore: "Should onchain games be built on this L2, and how does it compare to the alternatives?" Many have been reaching out to our team at Paima Studios for our opinion, given that we are one of the top builders of onchain games with titles live on multiple chains, so we will do our best to dive into the nuances.

Note

At the time of writing, Redstone has only recently been announced. Web3 is a very fast-moving space, so we encourage you to read this blog post with an open mind toward Redstone as they inevitably announce more about their work.

To understand Redstone and why it exists, you first have to understand how it compares to the alternatives actively being used in the market, along with their trade-offs. Notably, in this blog post we will focus on giving you the correct mental model so that you can understand what Redstone proposes no matter what they announce next.

Where it all began

So you want to build an onchain game? Given Redstone is an Ethereum L2, we will assume you've already set your mind on leveraging Ethereum.

So, why can't you deploy your game to Ethereum directly? You may know it's because it costs too much (over $1 per game move at the time of writing), but do you know why it costs too much? It turns out there are two main costs: the cost of execution and the cost of data storage - both of which are prohibitively expensive for a game. However, just like CPUs are more expensive than hard drives, execution cost is significantly higher than storage cost. So what if we could come up with a way to convert execution cost into storage cost? Good news: rollups do exactly this.
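
To make that execution-versus-storage gap concrete, here is a back-of-the-envelope comparison using classic EVM gas numbers. The gas price, ETH price, move size, and gas totals below are illustrative assumptions, not measurements:

```typescript
// Rough EVM cost comparison: executing a game move onchain vs. merely
// posting its input data. Gas constants are standard EVM figures; the
// prices and move size are illustrative assumptions.

const GAS_PRICE_GWEI = 30;   // assumed L1 gas price
const ETH_PRICE_USD = 2_000; // assumed ETH price

const usd = (gas: number) =>
  (gas * GAS_PRICE_GWEI * 1e-9 * ETH_PRICE_USD).toFixed(2);

// Executing a move: 21,000 base tx cost + two storage writes (~20,000
// gas each) + some assumed game logic.
const executionGas = 21_000 + 2 * 20_000 + 10_000; // = 71,000 gas

// Posting the same move as data only: base cost + calldata at 16 gas
// per non-zero byte. A rollup also amortizes the 21,000 base across a
// whole batch of moves, making this even cheaper in practice.
const moveBytes = 100; // assumed size of one game move
const dataGas = 21_000 + moveBytes * 16; // = 22,600 gas

console.log(`execute onchain: ~$${usd(executionGas)}`); // ~$4.26
console.log(`post data only:  ~$${usd(dataGas)}`);      // ~$1.36
```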

Rollups as a scaling solution

Rollups come in many forms, each converting execution cost into storage cost in its own way:

  1. Optimistic rollups: run the calculation offchain, then store all the data required to execute the function (just data, no execution) along with your locally computed value for the result. Only actually run the execution if somebody believes the result you posted is incorrect (a "fraud proof"); a minimal sketch of this idea follows the list.
    Popular examples: Arbitrum, Optimism
  2. ZK rollups: run the calculation offchain, then store all the data required to execute the function (just data, no execution) along with your locally computed ZK proof of the result.
    Popular examples: ZK Sync, Starknet, Polygon zkEVM
  3. Sovereign rollups: run the calculation offchain, then store all the data required to execute the function (just data, no execution).
    Popular examples: Rollkit, Paima Engine
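
Here is that optimistic idea from item 1 as a minimal sketch, under heavy simplifying assumptions (real rollups post state roots and bond their participants; all names here are hypothetical):

```typescript
// Minimal sketch of an optimistic rollup: post inputs plus a claimed
// result, and only re-execute if someone disputes it. Hypothetical
// names; real rollups work on state roots, bonds, and timeouts.

type Batch = { inputs: number[]; claimedResult: number };

// The (expensive) computation, normally run offchain by the sequencer.
const compute = (inputs: number[]) => inputs.reduce((a, b) => a + b, 0);

const l1: Batch[] = []; // stands in for data posted to the L1

function postBatch(inputs: number[], claimedResult: number) {
  l1.push({ inputs, claimedResult }); // only data is stored, no execution
}

// A "fraud proof": anyone may re-execute a posted batch and compare.
function challenge(batchIndex: number): boolean {
  const batch = l1[batchIndex];
  return compute(batch.inputs) !== batch.claimedResult; // true = fraud
}

postBatch([1, 2, 3], 6); // honest sequencer
postBatch([1, 2, 3], 7); // dishonest sequencer
console.log(challenge(0)); // false: result stands
console.log(challenge(1)); // true: the sequencer would be slashed
```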

Leveraging these solutions brings the cost of a transaction for your game down to approximately $0.05 (see l2fees for up-to-date values), which is definitely a big step in the right direction.

Reducing the cost of L2s

Clearly, reducing the costs of these L2s is key for games to be successful. Although rollups are definitely getting cheaper (computers getting better, ZK technology getting better, etc.), the primary cost is not running the offchain computation, but rather posting the data to the L1.

To tackle this, Ethereum will be introducing a new, much cheaper way to store data (called EIP4844) where the data is only stored temporarily (in practice ~2 weeks: long enough for any fraud proof to be posted and for the data to be replicated by nodes across the world).

EIP4844 comes with some downsides though:

  • Data is only temporary (you will need to find another storage solution to keep it hosted afterwards)
  • Data is limited, capping at around 2 MiB per block (shared between all rollups on Ethereum; see the arithmetic sketch below)
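
To see why that shared cap is tight, here is a quick throughput calculation. The per-move size and a single game's share of blobspace are illustrative assumptions:

```typescript
// Rough throughput math for the EIP4844 blob budget described above.
// The ~2 MiB cap is from the text; 12s is Ethereum's slot time; the
// move size and blobspace share are illustrative assumptions.

const BLOB_BUDGET_BYTES = 2 * 1024 * 1024; // ~2 MiB per block, shared by all rollups
const BLOCK_TIME_S = 12;                   // Ethereum slot time

const MOVE_SIZE_BYTES = 100; // assumed size of one compressed game move
const GAME_SHARE = 0.01;     // assume your game wins 1% of all blobspace

const movesPerSecond =
  (BLOB_BUDGET_BYTES * GAME_SHARE) / MOVE_SIZE_BYTES / BLOCK_TIME_S;

console.log(`~${Math.round(movesPerSecond)} moves/s`); // ~17 moves/s
```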

So as you can see, although efforts are being made to lower costs, they will not be sufficient to make onchain games feasible on L2s given the continued growth of interest in the blockchain space (adoption is moving faster than technical innovation).

Alternative #1: Store the data on a centralized server (or set of servers)

One alternative to keep costs low is to simply store the data on a centralized server that people trust you to operate, and only post a hash of the data onchain. A variant of this idea is to use a group of independently operated machines aggregated as a multisig. Such a scheme is called a "Data Availability Committee" (DAC), and this is what Arbitrum Nova, Arbitrum Orbit and Polygon CDK use.

These schemes are much cheaper ($0.001/tx for Arbitrum Nova if you ignore the fee market) in exchange for the network being more centralized. The main risk is that if the DAC ever stops hosting the data (ex: they post a hash and never share the data for that hash), the network halts.
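
In code, the DAC pattern boils down to "data offchain, hash onchain". A minimal sketch under simplifying assumptions (hypothetical names; a real DAC is a committee whose members sign acknowledgements):

```typescript
// Minimal sketch of the DAC pattern: full data lives on trusted servers,
// while only a 32-byte hash lands on the L1. Hypothetical names.
import { createHash } from "node:crypto";

const sha256 = (data: Buffer) =>
  createHash("sha256").update(data).digest("hex");

const dacStorage = new Map<string, Buffer>(); // the committee's servers
const l1Hashes: string[] = [];                // hashes posted onchain

function submit(data: Buffer) {
  const hash = sha256(data);
  dacStorage.set(hash, data); // cheap offchain storage
  l1Hashes.push(hash);        // only the hash goes to the L1
}

// The risk: if the DAC stops serving the data behind a posted hash,
// nobody can reconstruct the L2 state, and the network halts.
function fetchData(hash: string): Buffer {
  const data = dacStorage.get(hash);
  if (data === undefined) throw new Error("data withheld: network halts");
  return data;
}

submit(Buffer.from("player1 moves knight to f3"));
console.log(fetchData(l1Hashes[0]).toString()); // data still available
```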

A special note on Arbitrum

You may be curious why Arbitrum appears twice on the list. Arbitrum provides 3 main offerings at the moment:

  • Arbitrum One (the main Arbitrum network which is a full rollup with data on Ethereum)
  • Arbitrum Nova (a L2 that uses a DAC)
  • Arbitrum Orbit (a stack to create L3s for Arbitrum One)

As you can see, the problem with Nova is that there is no good way to leverage DeFi for your game (users would have to go Nova -> ETH L1 -> One and spend a lot on gas just to bridge), whereas the new Orbit stack allows you to easily go Orbit -> One. Additionally, since Orbit is a stack for creating L3s, you can use an existing L3 like Xai Games that runs its own DAC, or spin up your own L3 (although if you have a game-specific L3 whose only connection to Ethereum is occasionally posting hashes, you may arguably be better off with web2.5).

Alternative #2: Store the data in another decentralized network

Instead of waiting for EIP4844 to be implemented with limited bandwidth on mainnet, other projects like Celestia, Avail and EigenDA have decided to implement a similar concept as a separate chain (called a Data Availability ("DA") layer). By focusing purely on this use case, they can also offer higher data limits than Ethereum mainnet plans to support. These platforms do not support smart contracts, and are instead meant to be used purely as the data layer for L2s.

Notably, it's possible to create an OP Stack chain with data on Celestia, as well as an Arbitrum Orbit chain with data on Celestia. This does come with some trade-offs:

  1. Trust. Your rollup now depends on the DA layer for security on top of Ethereum (though this is arguably better than trusting a DAC)
  2. Cost. Your rollup now needs to pay the DA network for its security (paid in the native token of the DA layer)
  3. Speed. Celestia block times are 15s, and Avail block times are 20s. For example, data needs to settle on Celestia before it can be bridged to the EVM with Celestia's Blobstream contract. Take this point with a grain of salt though, as all L2s generally emulate faster block times than Ethereum can really provide (Ethereum's block time is only 12s, yet Arbitrum uses a block time faster than this).

This kind of setup is notably used by Mantle (OP Stack + EigenDA) and Manta Pacific (OP Stack + Celestia). The costs for these are still to be seen, but the Celestia team claims approximately $0.001 per transaction, meaning the cost of storage on a DA layer is minimal relative to the execution cost from a fee market on the EVM side.

Alternative #3: Store the data in a DAC that can be challenged

Finally, we can talk about what Redstone is offering. If you don't like the trade-offs of storing data on a DA layer, but also don't like the centralization of a DAC, you can instead build a DAC where you can financially punish the committee if it does not make the data available.

To help understand what this means, let's go over the flow of how the DAC protocol works (a code sketch follows the two flows):

How to write data

  1. Sequencer for Redstone receives your transaction
  2. Sequencer sends the data to the DAC to be stored
  3. DAC returns an acknowledgement that the data is stored
  4. Sequencer posts the hash of the data to the L1

How to read data

  1. Sync the Ethereum chain, looking for hashes that were submitted to the rollup contract
  2. Query the data for the hash from the DAC
  3. Compute the state of the L2 based on this data
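
Put together as code, the two flows look roughly like this (a simplified sketch with hypothetical names; the real protocol batches transactions and verifies DAC signatures):

```typescript
// Sketch of the write and read flows above. Hypothetical names; a real
// implementation batches transactions and checks DAC acknowledgements.
import { createHash } from "node:crypto";

const sha256 = (d: string) => createHash("sha256").update(d).digest("hex");

const dac = new Map<string, string>(); // the DAC's storage
const l1: string[] = [];               // hashes in the rollup contract

// Write path (sequencer): steps 1-4.
function sequencerWrite(tx: string) {
  const hash = sha256(tx); // 1. sequencer receives the transaction
  dac.set(hash, tx);       // 2-3. data sent to the DAC, which acknowledges
  l1.push(hash);           // 4. only the hash is posted to the L1
}

// Read path (anyone syncing the L2): steps 1-3.
function syncL2(): string[] {
  return l1.map((hash) => {     // 1. scan the L1 for posted hashes
    const data = dac.get(hash); // 2. query the DAC for the data
    if (data === undefined) throw new Error(`data for ${hash} unavailable`);
    return data;                // 3. feed into the L2 state computation
  });
}

sequencerWrite("player2 attacks castle");
console.log(syncL2());
```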

So what does Redstone change?

When reading data, if the data is not available, you can challenge the DAC by claiming it did not make the data available (i.e. the data is not downloadable from their servers).

To properly incentivize everybody to be honest, the protocol sets up the following slashing rules (sketched in code below):

  1. If a challenger is dishonest (the data really was available), they are slashed (otherwise, you could financially attack the network by challenging every block)
  2. If the DAC is dishonest (the data is unavailable), they are slashed.
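
Here is the challenge game's decision logic as a sketch, with the hard part made explicit: the function's input (whether the data was available at challenge time) is precisely what is difficult to establish onchain, as the next scenario shows. Names are hypothetical:

```typescript
// Sketch of the slashing rules. The oracle question - "was the data
// available at the moment of the challenge?" - is the hard part, since
// no after-the-fact query can answer it. Hypothetical names.

type Verdict = "slash-challenger" | "slash-dac";

function resolveChallenge(dataAvailableAtChallengeTime: boolean): Verdict {
  // Rule 1: a dishonest challenger is slashed; otherwise challenging
  // every block would be a free way to attack the network.
  if (dataAvailableAtChallengeTime) return "slash-challenger";
  // Rule 2: a DAC that withheld the data is slashed.
  return "slash-dac";
}

// If the DAC publishes the data only *after* being challenged, an
// after-the-fact observer would pass `true` here and wrongly slash an
// honest challenger.
console.log(resolveChallenge(true));  // "slash-challenger"
console.log(resolveChallenge(false)); // "slash-dac"
```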

This seems like a simple solution, but the difficulty is in figuring out who is at fault if a problem occurs. Think of the following scenario:

  1. Sequencer posts a hash without sharing the real data
  2. Somebody challenges the sequencer
  3. The sequencer, seeing the challenge, makes the data available

If you're an outside observer who was not monitoring the chain in real time, the data looks available (querying the DAC after the fact returns the data as expected), so it looks like the challenger was lying even though they were not.

If your solution to this problem is to assume the sequencer would never lie for just a game, then why not use a standard DAC instead? Additionally, assuming the sequencer is honest doesn't compose well with the concept of a shared-sequencer "superchain", meaning assets cannot be transferred between OP Stack chains via the shared sequencer (so you run into the same problem as Arbitrum Nova unless Redstone is deployed as an L3).

How the Lattice team plans to handle this will be the key point to look out for as more documentation and roadmap information is made available.

Alternative #4: Use ZK

Note that the problem of data not being shared (data withholding attacks) that affects Redstone is not exclusive to optimistic rollups. ZK rollups whose data is stored offchain (called "Validiums") suffer from the same issue, which is why people are generally more interested in rollups (which post all data to a chain).

Therefore ZK rollups, in general, will not help you reduce the data cost of your game securely. They can definitely help scale your game in many other ways (move more computation to the user's local machine, use recursive proofs to batch many interactions together either rollup-style or state-channel-style, etc.), but that's a topic for a future post.

How can I make the costs for my game even lower if Solidity itself is the problem?

This entire blog post, we've talked about how to handle storage costs. However, some games may be CPU-limited (even if they run on a centralized EVM chain that they operate themselves). If this is you, you may be interested in using a sovereign rollup built with Paima Engine to scale your game beyond the limits of the EVM.

Paima Engine allows creating app-specific state machines in JavaScript that you can deploy to any EVM chain of your choice (including Redstone!). These sovereign rollups can access EVM information (including MUD engine data), and so can act as a great way to make any part of your game run much faster and cheaper.
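
As a conceptual illustration of the app-specific state machine pattern (this is not Paima Engine's actual API; every name below is hypothetical):

```typescript
// Conceptual sketch of an app-specific state machine: game inputs are
// read from a chain, and plain TypeScript (not EVM bytecode) computes
// the next state. NOT Paima Engine's real API; names are hypothetical.

type GameState = { scores: Record<string, number> };
type GameInput = { player: string; action: "attack" | "defend" };

// The transition function can be as CPU-heavy as the game needs,
// since it runs outside the EVM.
function transition(state: GameState, input: GameInput): GameState {
  const scores = { ...state.scores };
  scores[input.player] =
    (scores[input.player] ?? 0) + (input.action === "attack" ? 10 : 1);
  return { scores };
}

// Deterministically replaying the inputs posted onchain rebuilds the
// exact same state on every node, with no onchain execution required.
const inputs: GameInput[] = [
  { player: "alice", action: "attack" },
  { player: "bob", action: "defend" },
];
const finalState = inputs.reduce(transition, { scores: {} });
console.log(finalState); // { scores: { alice: 10, bob: 1 } }
```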

Conclusion

In conclusion, reducing the cost of data is the most crucial step to lowering onchain gaming costs. Many solutions with different trade-offs exist today, and Redstone pitches itself as safer than the standard DAC, but it remains to be seen whether it is meaningfully safer, and whether the difference is large enough to make it a viable alternative to DA-layer-backed solutions. For projects that need to scale computation on top of data, solutions like Paima Engine exist to tackle the problem.

As a final disclaimer, remember that Redstone's details have yet to be fully announced. This blog post should give you the right mental model to understand their future announcements, so let's keep an open mind and see what they propose going forward.

Paima Studios

Paima Studios, founded in April 2022, are the core developers of the Paima Engine: a Web3 engine built using novel layer 2 technology that allows building onchain games, gamification and autonomous worlds. Paima Engine is a safe and easy way to enter Web3 as it can be used with Web2 skills and doesn't expose users or developers to common Web3 risks and hacks.


Want to work together? Do not hesitate to reach out to us through our contact page: https://paimastudios.com/contact/