Rethinking Apple Vision Pro one month after its release: the future of XR, RNDR, and spatial computing
On June 6th, during the early hours of WWDC (Apple’s Worldwide Developers Conference), and on the fifth day of my second bout of COVID-19, I chatted with a friend over a cup of herbal tea. We waited for an hour, hoping that the One More Thing wouldn’t be delayed again. When Tim Cook appeared at 2 a.m. and announced “One More Thing” with a big wave of his hand, my friend and I cheered on our end of the screen:
“Macintosh introduced personal computing, iPhone introduced mobile computing, and Apple Vision Pro is going to introduce spatial computing.”
As a lover of cutting-edge technology, I cheered for the new toy I could have next year; but as a Web3 investor who cares about gaming, the metaverse, and AI, I saw the sign of a new era, and it made me tremble.
You may be skeptical: “What does an MR hardware upgrade have to do with Web3?” So let’s start with Mint Ventures’ thesis on the metaverse sector.
Our thesis on the Metaverse, or Web3 world
The asset premium in the blockchain world comes from:
- Trusted transactions at the base layer, which reduce transaction costs: ownership and title to physical goods are ultimately enforced by the state’s coercive machinery, while ownership of virtual-world assets rests on “trust in data that cannot (or should not) be tampered with under consensus,” plus recognition of the assets themselves once ownership is established. Even though anyone can copy and paste the image, a BAYC NFT still trades at the price of a house in a third-tier city. That is not because the copied image and the NFT’s metadata differ so much, but because, given the market’s consensus on “non-replicability,” the asset can be securitized.
- The high securitization of assets, which brings a liquidity premium.
- The “permissionless premium” of decentralized consensus mechanisms, which corresponds to permissionless transactions.
Virtual world goods are easier to securitize than physical goods:
- Looking at the history of paying for digital assets, people’s habit of paying for virtual content did not develop overnight, but it is undeniable that payment for virtual assets has penetrated everyday life. When the iTunes Store debuted in April 2003, people discovered that besides downloading songs from a piracy-ridden Internet, they could also purchase legitimate digital music to support their favorite artists. The debut of the App Store in 2008 popularized one-time-purchase apps worldwide, while in-app purchases have continued to contribute to Apple’s digital asset revenue ever since.
- The same pattern runs through the history of payment models in the gaming industry. The industry began with arcade games, where the model was “pay for the experience” (similar to movies). In the console era the model became “pay for the cartridge/disc” (similar to movies and music albums); late in the console era, purely digital versions of games went on sale, Steam’s digital game market emerged, and in-game purchases let some titles achieve mythical revenue. The history of game payment models is the history of falling distribution costs: from arcades to consoles, then to personal computers and mobile digital distribution platforms that anyone can log into, and finally to purchases inside games players are already immersed in. The broad trend is that distribution costs keep falling and the addressable audience keeps widening, while game assets have evolved from “one link in the experience” into “buyable goods.” (The counter-trend of the past decade, rising distribution costs for digital assets, stems mainly from the Internet’s low growth, high competition, and attention monopolies.)
So, what’s next? Tradable virtual-world assets remain the subject we keep looking forward to.
As the virtual-world experience improves, people will spend more and more time immersed in it, and attention will shift. That shift in attention will in turn move the valuation premium away from its strong attachment to physical assets and toward virtual assets. The release of Apple Vision Pro will fundamentally change the way humans interact with the virtual world, leading to significant increases in both immersion time and experience quality.
Source: @FEhrsam
Note: This is our definition of one variation of pricing strategy. Under premium pricing, the brand sets the price far above cost and fills the gap between price and cost with brand stories and experiences. Cost-based pricing, competitive pricing, supply and demand, and other factors are also considered when pricing goods, but only premium pricing is discussed here.
History and present of the MR industry
Modern society’s exploration of XR (Extended Reality, including VR and AR) began more than a decade ago:
- In 2010, Magic Leap was founded. In 2015, Magic Leap’s whale-in-a-gymnasium ad caused a sensation throughout the tech industry, but when the product officially launched in 2018, it was heavily criticized for an extremely poor product experience. The company raised $500 million in 2021 at a post-money valuation of $2.5 billion, leaving it valued at well below its total financing of $3.5 billion. In January 2022, reports emerged that Saudi Arabia’s sovereign wealth fund had acquired a majority stake through a $450 million equity-and-debt deal, implying the company’s actual valuation had fallen below $1 billion.
- In 2010, Microsoft began developing HoloLens and released its first AR device in 2016, followed by a second in 2019. The price was $3,000, but the actual experience was underwhelming.
- In 2011, the Google Glass prototype was unveiled, and the first product launched in 2013. It was hugely hyped at first, but it ended poorly due to camera privacy concerns and a weak product experience, with total sales of only tens of thousands of units. An enterprise version followed in 2019, and a new test version was trialed in the field in 2022 to a mediocre response. In 2014, Google released the Cardboard VR platform and SDK; in 2016 it released Daydream, the most widely adopted VR platform for Android.
- In 2011, Sony began developing its PlayStation VR platform, and PSVR debuted in 2016. Users bought it enthusiastically on the strength of the PlayStation brand, but the follow-on response was poor.
- In 2012, Oculus was founded; Facebook acquired it in 2014. The Oculus Rift launched in 2016, followed by a total of four models focused on portability and lower prices, making the line one of the highest-market-share devices available.
- In 2014, Snap acquired Vergence Labs, an AR glasses company founded in 2011, which became the prototype for Snap Spectacles. The first sale was in 2016, followed by three updated versions. Like most of the products above, Snap Spectacles drew huge attention at first, with people lining up outside stores, but users later dwindled. In 2022, Snap shut its hardware division and refocused on smartphone-based AR.
- Around 2017, Amazon began developing Alexa-based smart glasses. The first Echo Frames were released in 2019, and a second version in 2021.
Looking back at the history of XR, the difficulty of growing this industry has far exceeded the market’s expectations, whether for deep-pocketed technology giants with armies of scientists or for capable XR-focused startups with hundreds of millions in financing. Since the 2016 release of the consumer-grade Oculus Rift, all VR brands combined (Samsung’s Gear, ByteDance’s Pico, Valve’s Index, Sony’s PlayStation VR, HTC’s Vive, and so on) have shipped fewer than 45 million units in total. Games remain the dominant use of VR devices, and before Vision Pro, no AR device had appeared that people were willing to use even occasionally. Based on SteamVR’s data, monthly active users of VR devices can be roughly estimated at only a few million.
Why haven’t XR devices gone mainstream? The failures of countless startups and the post-mortems of investment institutions suggest some answers:
1. Hardware is not ready
Visual: Because of the wide field of view and short distance to the eyes, flaws are hard to ignore even on top-tier VR displays. Full immersion requires 4K per eye, which is 8K across both eyes. Refresh rate is another core element of the visual experience: it is generally believed that to avoid dizziness, XR devices need 120 Hz or even 240 Hz to approximate the real world. Refresh rate also has to be balanced against rendering quality under a fixed compute budget: Fortnite supports 4K resolution at a 60 Hz refresh rate, but only 1440p at 120 Hz.
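To make that compute trade-off concrete, here is a back-of-the-envelope sketch (our own illustration, using the resolutions and refresh rates cited above) of how many pixels per second a GPU must shade at each target:

```python
# Back-of-the-envelope pixel throughput, illustrating why "8K at 120 Hz"
# is so demanding. Values are approximate pixel counts per frame.
RESOLUTIONS = {
    "1440p": 2560 * 1440,
    "4K":    3840 * 2160,
    "8K":    7680 * 4320,
}

def pixels_per_second(resolution: str, refresh_hz: int) -> float:
    """Pixels the GPU must shade every second at a given refresh rate."""
    return RESOLUTIONS[resolution] * refresh_hz

# Fortnite's trade-off cited above: same GPU budget, two modes.
print(f"4K @ 60 Hz:    {pixels_per_second('4K', 60):.2e} px/s")
print(f"1440p @ 120Hz: {pixels_per_second('1440p', 120):.2e} px/s")
# The "full immersion" target for XR is roughly 8x the 4K/60 load:
print(f"8K @ 120 Hz:   {pixels_per_second('8K', 120):.2e} px/s")
```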
Auditory: Compared with the immediacy of vision, audio seems insignificant in the short term, and most VR devices have not invested in this detail. But imagine a space where the voice of a person to your left or right keeps arriving from above your head: immersion collapses. Likewise, if a digital avatar is anchored in the living room of an AR space and the player hears it at the same volume while walking from the bedroom to the living room, the sense of physical presence quietly erodes.
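The distance cue described above can be captured with a one-line attenuation rule. A minimal sketch, assuming simple inverse-distance falloff (real spatial audio engines model rooms and materials far more carefully):

```python
import math

def attenuated_gain(source_pos, listener_pos, ref_distance=1.0):
    """Inverse-distance gain: volume falls off as the listener walks away,
    the detail the paragraph above says flat audio pipelines get wrong."""
    d = math.dist(source_pos, listener_pos)
    return ref_distance / max(d, ref_distance)

avatar = (0.0, 0.0)                           # avatar fixed in the living room
print(attenuated_gain(avatar, (1.0, 0.0)))    # standing next to it -> 1.0
print(attenuated_gain(avatar, (6.0, 0.0)))    # in the bedroom      -> ~0.17
```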
Interactivity: Traditional VR devices ship with handheld controllers, and devices such as the HTC Vive require installing cameras at home to track the player’s movement. Quest Pro has eye tracking, but latency is high and sensitivity mediocre, so it is mainly used to enhance localized rendering; actual interaction still relies on the controllers. Oculus also mounts 4-12 cameras on the headset to understand the user’s surroundings, enabling a degree of gesture interaction (for example, picking up a virtual phone with your left hand in the VR world and tapping your right index finger in the air to confirm).
Weight: A headset that feels comfortable should weigh roughly 400-700g (still a behemoth next to ordinary glasses at about 20g). But achieving the clarity, refresh rate, and interaction described above, along with the matching rendering requirements (chip performance, component size and count) and several hours of basic battery life, makes the weight of an XR device a painful trade-off.
In summary, for XR to succeed the mobile phone as the next generation of mass-market hardware, a device needs more than 8K resolution and a refresh rate above 120 Hz to avoid dizziness, a dozen or more cameras, a battery life of 4 hours or longer (so users only take it off over lunch or dinner), little or no heat, a weight under 500g, and a price as low as $500-1,000. Current technology, though much improved since the 2015-2019 XR boom, still cannot meet these standards.
Even so, users who try existing MR (VR+AR) devices will find that the current experience, while imperfect, offers an immersion that 2D screens cannot match. But there is still considerable room for improvement. Take the Oculus Quest 2: most of the VR videos available are 1440p, not even reaching the Quest 2’s 4K ceiling, and their frame rates fall far short of 90 Hz. Existing VR games are relatively crudely modeled, and there are not many worth trying.
Source: VRChat
2. Killer App has yet to appear
The absence of a “Killer App” is itself a consequence of historical hardware limitations. Even though Meta has done its best to compress margins, an MR headset costing hundreds of dollars with a relatively thin ecosystem still cannot compete with existing game consoles and their rich ecosystems and large user bases. There are roughly 25-30 million VR devices in the world, versus about 350 million AAA gaming terminals (PS5, Xbox, Switch, PC). Most studios have therefore given up on VR, and the few games that do support VR devices are hedging their bets across platforms rather than built only for VR. Combined with the problems above (pixelation, dizziness, poor battery life, excessive weight), the VR experience is not better than traditional AAA terminals. The “immersion” advantage VR advocates emphasize is hard to realize because the installed base is too small, so developers who merely cover VR rarely design experiences and interaction modes specifically for it.
The current situation, then, is that when players choose a VR game over a non-VR game, they are not just “choosing a new game” but also “giving up socializing with most of their friends.” Such games succeed on gameplay and immersion rather than social play. You might raise VRChat as a counterexample, but dig deeper and you will find that 90% of its users are not on VR headsets at all; they are players who want to try on avatars and socialize with new friends in front of an ordinary screen. It is no surprise that the most popular VR title is a rhythm game like Beat Saber.
Therefore, we believe that the emergence of the Killer App requires the following elements:
- A significant improvement in hardware performance and in every detail. As discussed under “Hardware is not ready,” this is not as simple as “better screen, better chip, better speakers”; it requires the coordinated effort of chips, components, interaction design, and operating systems. This is exactly what Apple is good at: as with the iPod and iPhone more than a decade ago, Apple has spent decades building the multi-device, multi-OS collaboration needed to pull it off.
- The eve of an explosion in device ownership. As the developer-and-user mentality analyzed above shows, this is a chicken-and-egg problem: a Killer App is unlikely to appear while XR device MAU sits at a few million. At the peak of The Legend of Zelda: Breath of the Wild, US cartridge sales actually exceeded the number of Switch owners, an excellent case study in how new hardware reaches mass adoption. People who buy a device just to sample XR grow disappointed by the limited content and joke about their dusty headsets; players drawn in by Zelda stay, because they go on to explore the rest of the Switch ecosystem.
Source: The Verge
- Consistent user experience and device compatibility with stable updates. The former is easy to understand: with or without a controller, users form two different interaction habits, and this is what separates Apple Vision Pro from the other VR devices on the market. The latter shows up in Oculus’s hardware iteration: big hardware jumps within the same generation can actually limit the user experience. The Meta Quest Pro, released in 2022, is a major upgrade over the 2020 Oculus Quest 2 (also sold as Meta Quest 2): resolution rises from a 4K display to 5.25K, color contrast improves by 75%, and the refresh rate climbs from 90 Hz to 120 Hz. On top of the Quest 2’s 4 environment-tracking cameras, the Quest Pro adds 8 external cameras, turning the black-and-white passthrough image into color, significantly improving hand tracking, and adding face and eye tracking. The Quest Pro also uses gaze-based rendering to concentrate computing power on the area the eyes are looking at while reducing fidelity elsewhere, saving compute and power (a rough sketch of the savings follows this list).

Yet despite this feature gap, Quest Pro’s user base is probably less than 5% of Quest 2’s. Developers therefore build every game for both devices at once, which squanders the Quest Pro’s advantages and further weakens its appeal to users. History rhymes: the same story has played out repeatedly in game consoles, which is why console makers refresh hardware and software only every 6-8 years. A first-generation Switch owner never worries that new hardware such as the Switch OLED will break compatibility with newly launched software, whereas a Wii owner cannot play Switch-ecosystem games at all. Console developers already target products with a smaller user base (350 million vs. billions) and weaker user dependence (leisure at home vs. carried all day) than phones, so they need stable hardware across several development cycles to avoid fragmenting their users. Otherwise they can only do what VR developers do now: build for the lowest common denominator to secure a sufficient user base.
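How much compute can gaze-based (foveated) rendering actually save? A rough sketch under stated assumptions (a square full-resolution foveal window and a uniformly downscaled periphery; real pipelines use smoother falloffs):

```python
def foveated_pixel_fraction(fovea_deg=20.0, fov_deg=110.0, periphery_scale=0.25):
    """Fraction of full-resolution shading work left after foveation.

    Assumes (hypothetically) a square foveal window of `fovea_deg` shaded
    at full resolution, with everything else shaded at a reduced linear
    resolution scale. The numbers are illustrative, not Quest Pro specs.
    """
    fovea_area = (fovea_deg / fov_deg) ** 2        # share of the view at full res
    periphery_area = 1.0 - fovea_area
    return fovea_area + periphery_area * periphery_scale ** 2

print(f"{foveated_pixel_fraction():.0%} of the non-foveated shading cost")
# ~9% with these assumptions: an order-of-magnitude saving, which is
# why accurate eye tracking pays for itself in compute and power.
```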
So, can Vision Pro solve the problems encountered in hardware and software? What kind of changes will it bring to the industry?
The Turning Point of Vision Pro
At the WWDC keynote, Apple Vision Pro was unveiled. Measured against the framework above, “the challenges MR faces in hardware and software,” it compares as follows:
Hardware:
- Visual: Vision Pro uses two 4K screens, roughly 6K of pixels combined, putting it just behind the top-spec MR devices on the market. The refresh rate supports up to 96 Hz, with HDR video playback. Tech bloggers who tried it describe it as not only strikingly sharp but also almost entirely free of dizziness.
- Auditory: Apple has offered spatial audio on AirPods since 2020, letting users hear sound from different directions for a three-dimensional audio experience. Vision Pro is expected to go further, using audio ray-tracing together with the device’s LiDAR scanning to analyze the acoustic properties of the room (physical materials and so on) and then create spatial audio that matches the room, with direction and depth.
- Interaction: Controller-free gesture and eye tracking make the interaction feel silky smooth. According to tech-media hands-ons, latency is nearly imperceptible, which reflects not just sensor accuracy and compute speed but also prediction of where the eyes are headed (sketched after this list, and discussed further below).
- Battery life: Vision Pro lasts about 2 hours, roughly the same as Meta Quest Pro. This is unimpressive and is currently a common criticism. But because Vision Pro runs on an external power pack (reportedly around 5,000 mAh) rather than building the cell into the headset, it is reasonable to guess there is room to swap packs and extend endurance.
- Weight: According to tech-media hands-ons, it weighs about 1 pound (454g), roughly on par with Pico and the Oculus Quest 2 and likely lighter than the Meta Quest Pro, which is good by MR standards (though this excludes the battery pack worn at the waist). Compared with pure AR glasses at about 80g (Nreal, Rokid, and the like), it is still heavy and stuffy. Then again, most pure AR glasses must tether to another device and serve only as an extended screen; an MR headset with built-in chips and true immersion may be an entirely different experience.
- In addition, on raw hardware, Vision Pro not only runs its system and apps on the latest M2-series chip but adds an R1 chip developed specifically for MR, handling the displays, surrounding-environment monitoring, and eye and gesture tracking, that is, the MR-specific display and interaction functions.
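On the eye-path prediction mentioned in the interaction bullet above: Apple has not published its predictor, but the general idea can be sketched as extrapolating gaze a few tens of milliseconds ahead so the UI can respond before the eye settles. A minimal, hypothetical version:

```python
def predict_gaze(samples, lead_time_s=0.05):
    """Naive constant-velocity extrapolation of gaze position.

    `samples` is a list of (t, x, y) eye-tracker readings; we extrapolate
    `lead_time_s` ahead so the UI can react before the eye lands. This is
    only a sketch of the general idea; Apple's actual predictor is not public.
    """
    (t0, x0, y0), (t1, x1, y1) = samples[-2], samples[-1]
    dt = t1 - t0
    vx, vy = (x1 - x0) / dt, (y1 - y0) / dt
    return x1 + vx * lead_time_s, y1 + vy * lead_time_s

readings = [(0.000, 0.40, 0.50), (0.008, 0.44, 0.50)]  # 125 Hz samples
print(predict_gaze(readings))  # -> (0.69, 0.5): where the eye is headed
```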
On the software side, Apple can not only migrate a good part of its millions-strong developer ecosystem; it actually laid the groundwork years ago with the release of ARKit:
As early as 2017, Apple released ARKit, an augmented reality development framework for iOS devices that lets developers build AR applications on top of iOS hardware and software features. ARKit builds a map of the area using an iOS device’s camera, detects surfaces such as desktops and floors in physical space using CoreMotion data, and lets digital assets interact with the real world through the camera. In Pokémon Go, for example, you can see Pokémon perched in trees or half-buried in the ground rather than sliding around the screen as the camera moves. Users do not need to calibrate anything; the AR experience is seamless.
Source: https://pokemongohub.net/
- In 2017, ARKit was released, automatically detecting position, topology, and user facial expressions for modeling and expression capture.
- In 2018, ARKit 2 was released, bringing a better CoreMotion experience and making multi-user AR games, 2D image tracking, and detection of known 3D objects (such as sculptures, toys, and furniture) possible.
- In 2019, ARKit 3 was released, adding further AR features: People Occlusion displays AR content in front of or behind people, and up to three faces can be tracked. Collaborative sessions are also supported, enabling brand-new shared AR gaming experiences. Motion capture understands body position and movement, tracking joints and bones, enabling AR experiences that involve people rather than just objects.
- In 2020, ARKit 4 was released, using the LiDAR sensor built into the 2020 iPhone and iPad to improve tracking and object detection. ARKit 4 also added Location Anchors, which pin an AR experience to specific geographic coordinates using Apple Maps data.
- In 2021, ARKit 5 was released; developers can build custom shaders, programmatic mesh generation, object capture, and character control. Using built-in APIs plus the LiDAR sensor and cameras on iOS 15 devices, developers can scan an object and immediately convert it to a USDZ file that imports into Xcode for use as a 3D model in an ARKit scene or application, greatly improving the efficiency of 3D model creation.
- In 2022, ARKit 6 was released, including the MotionCapture function, which tracks people in video frames and gives developers a predicted “skeleton” of head and limb positions, supporting apps that overlay AR content on a person or hide it behind them for more realistic integration into the scene.
Looking back at the ARKit groundwork that began in 2017, Apple’s AR technology was not accumulated overnight; rather, the AR experience was quietly folded into devices that were already everywhere. By the time Vision Pro was unveiled, Apple had already accumulated some content and developers. Moreover, because ARKit development is cross-compatible, products built with it serve not only Vision Pro users but also, to a reasonable degree, iPhone and iPad users. Developers need not be capped by a ceiling of 3 million monthly active XR users; they can test and ship products against hundreds of millions of iPhone and iPad users.
In addition, Vision Pro’s 3D video capture partially solves today’s shortage of MR content. Most existing VR videos are 1440p, which looks pixelated on the wraparound screen of an MR headset, whereas Vision Pro records high-resolution spatial video with credible spatial audio, which may greatly enhance the MR content-consumption experience.
Although the configuration above is already stunning, Apple’s ambitions for MR do not stop there. On the day Apple’s MR device was unveiled, @sterlingcrispin, a developer who says he worked on Apple’s neurotechnology research, wrote:
Generally as a whole, a lot of the work I did involved detecting the mental state of users based on data from their body and brain when they were in immersive experiences.
So, a user is in a mixed reality or virtual reality experience, and AI models are trying to predict if you are feeling curious, mind wandering, scared, paying attention, remembering a past experience, or some other cognitive state. And these may be inferred through measurements like eye tracking, electrical activity in the brain, heart beats and rhythms, muscle activity, blood density in the brain, blood pressure, skin conductance etc.
There were a lot of tricks involved to make specific predictions possible, which the handful of patents I’m named on go into detail about. One of the coolest results involved predicting a user was going to click on something before they actually did. That was a ton of work and something I’m proud of. Your pupil reacts before you click in part because you expect something will happen after you click. So you can create biofeedback with a user’s brain by monitoring their eye behavior, and redesigning the UI in real time to create more of this anticipatory pupil response. It’s a crude brain computer interface via the eyes, but very cool. And I’d take that over invasive brain surgery any day.
Other tricks to infer cognitive state involved quickly flashing visuals or sounds to a user in ways they may not perceive, and then measuring their reaction to it.
Another patent goes into details about using machine learning and signals from the body and brain to predict how focused, or relaxed you are, or how well you are learning. And then updating virtual environments to enhance those states. So, imagine an adaptive immersive environment that helps you learn, or work, or relax by changing what you’re seeing and hearing in the background.
These neuroscience-adjacent technologies may mark a new way of synchronizing human intent with machines.
Of course, Vision Pro is not without its flaws, starting with its $3,499 price, more than twice a Meta Quest Pro and more than seven times an Oculus Quest 2. On this point, Runway CEO Siqi Chen noted:
it might be useful to remember that in inflation adjusted dollars, the apple vision pro is priced at less than half the original 1984 macintosh at launch (over $7K in today’s dollars)
By this comparison, Vision Pro’s price doesn’t seem so outrageous… but it is hard to imagine Apple, having poured so much into MR, accepting such an awkward position. Reality may not change much over the next few years: AR does not necessarily require glasses, and Vision Pro is unlikely to reach mass adoption in the short term. It may well serve mainly as a device for developers to test against, a production tool for creators, and an expensive toy for gadget enthusiasts.
Source: Google Trend
Nevertheless, Apple’s MR device has already begun to stir the market, pulling mainstream attention on digital products back toward MR and showing that MR is now a real product rather than a PowerPoint deck or demo video. It tells users that beyond tablets, TVs, and phones there is the option of a head-mounted immersive display; it tells developers that MR may truly become the next generation of hardware; it tells VCs that this may be a field with a very high investment ceiling.
Web3 and related ecosystems
1. 3D rendering + AI conceptual target: RNDR
RNDR Introduction
Over the past six months, RNDR has repeatedly led the market as a token that fuses three narratives: the Metaverse, AI, and MR.
The project behind RNDR is Render Network, a protocol that uses a decentralized network for distributed rendering. The company behind it, OTOY Inc., was founded in 2009, and its renderer OctaneRender is optimized for GPU rendering. For ordinary creators, rendering locally consumes a large share of the machine’s resources, which creates demand for cloud rendering; but renting servers from AWS, Azure, and similar vendors can also be costly. This is where Render Network comes in: by connecting creators with ordinary users who own idle GPUs, rendering is no longer constrained by local hardware. Creators render cheaply, quickly, and efficiently, and node operators earn pocket money from their idle GPUs.
For Render Network, there are two types of participants (a minimal sketch of the flow follows the list):
- Creators: They publish tasks and pay with fiat-purchased Credits or with RNDR. (Octane X, used to publish tasks, runs on Mac and iPad; 0.5-5% of the fee covers network costs.)
- Node providers (idle GPU owners): Owners of idle GPUs can apply to become node providers and are prioritized by the reputation earned on previously completed tasks. After a node finishes rendering, the creator inspects and downloads the rendered file; upon download, the fee locked in the smart contract is released to the node provider’s wallet.
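To make that two-sided flow concrete, here is a toy model in Python. The class and method names are our own invention, not Render Network’s actual contracts; it only mirrors the escrow-then-release and reputation-priority logic described above:

```python
from dataclasses import dataclass

@dataclass
class RenderJob:
    creator: str
    node: str | None = None
    escrow_rndr: float = 0.0
    delivered: bool = False

class MiniRenderNetwork:
    """Toy model of the flow described above: a creator funds escrow,
    a node renders, and payment releases only after the creator downloads."""
    def __init__(self):
        self.jobs: list[RenderJob] = []
        self.reputation: dict[str, int] = {}

    def post_job(self, creator: str, price_rndr: float) -> RenderJob:
        job = RenderJob(creator=creator, escrow_rndr=price_rndr)  # lock funds
        self.jobs.append(job)
        return job

    def assign(self, job: RenderJob, nodes: list[str]) -> None:
        # Higher-reputation nodes are prioritized for new work.
        job.node = max(nodes, key=lambda n: self.reputation.get(n, 0))

    def deliver_and_download(self, job: RenderJob) -> float:
        # Creator inspects and downloads; escrow is released to the node.
        job.delivered = True
        self.reputation[job.node] = self.reputation.get(job.node, 0) + 1
        payout, job.escrow_rndr = job.escrow_rndr, 0.0
        return payout

net = MiniRenderNetwork()
job = net.post_job("alice", price_rndr=120.0)
net.assign(job, nodes=["gpu-node-1", "gpu-node-2"])
print(net.deliver_and_download(job))  # 120.0 released to the winning node
```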
RNDR’s tokenomics were revised in February of this year, which is one reason for the sharp price increase. (However, as of this article’s publication, Render Network has neither applied the new tokenomics to the network nor given a launch date.)
Previously, the purchasing power of $RNDR within the network was pegged to Credits at 1 credit = 1 euro. When $RNDR traded below 1 euro, buying $RNDR was more cost-effective than buying Credits with fiat; but once $RNDR rose above 1 euro, everyone preferred paying in fiat, and $RNDR lost its use case. (Protocol revenue might be used to buy back $RNDR, but no other market participant had any incentive to buy it.)
The revised economic model adopts Helium’s Burn-Mint-Emission (BME) design. When creators purchase rendering services, whether in fiat or in $RNDR, $RNDR worth 95% of the fiat value is burned, and the remaining 5% flows to the foundation as engine-usage revenue. Nodes no longer receive creators’ payments directly; instead they receive newly minted token rewards based not only on task-completion metrics but also on other factors such as customer satisfaction.
It is worth noting that each new epoch (a time period whose duration has not yet been specified) mints new $RNDR in strictly limited amounts that decrease over time, independent of the amount burned (see the emission schedule in the official white paper). This changes the distribution of benefits for the following stakeholders:
- Creators/network service users: Each epoch, a portion of the RNDR spent by creators is rebated, with the proportion gradually decreasing over time.
- Node operators: Node operators receive rewards based on factors such as the amount of work completed and real-time uptime.
- Liquidity providers: DEX liquidity providers also receive rewards, ensuring enough $RNDR is available to be burned.
Source: https://medium.com/render-token/behind-the-network-btn-july-29th-2022-7477064c5cd7
Compared with the previous model of ad hoc revenue buybacks, under the new model miners earn more when rendering demand is insufficient; when demand is high enough that the total value of rendering tasks exceeds the total $RNDR emission (tokens burned > tokens minted), they earn less than under the old model, and $RNDR turns deflationary.
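A small numeric sketch of the BME logic (all figures are hypothetical, not Render Network parameters) shows how the same rules inflate supply when demand is weak and deflate it when burns exceed the fixed emission:

```python
def epoch_supply_change(fiat_demand_eur: float,
                        rndr_price_eur: float,
                        epoch_emission_rndr: float,
                        burn_share: float = 0.95) -> float:
    """Net $RNDR supply change for one epoch under the BME model.

    Creators' spend burns `burn_share` of its value in RNDR (the other 5%
    goes to the foundation); a fixed, scheduled emission mints new RNDR
    for nodes, creators, and LPs. A negative result means net deflation.
    All numbers below are hypothetical, not Render Network figures.
    """
    burned = fiat_demand_eur * burn_share / rndr_price_eur
    return epoch_emission_rndr - burned

# Low demand: emission outpaces burn, supply inflates.
print(epoch_supply_change(fiat_demand_eur=100_000, rndr_price_eur=2.0,
                          epoch_emission_rndr=80_000))   # +32,500 RNDR
# High demand: burn outpaces emission, supply deflates.
print(epoch_supply_change(fiat_demand_eur=400_000, rndr_price_eur=2.0,
                          epoch_emission_rndr=80_000))   # -110,000 RNDR
```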
Although RNDR’s six-month surge has been impressive, Render Network’s business has not grown nearly as fast as the token price: the node count has barely fluctuated over the past two years, and the monthly $RNDR allocated to nodes has not increased significantly, though the number of rendering tasks has indeed grown. It appears creators’ workloads have gradually shifted from a few large jobs to many small ones.
Source: https://dune.com/lviswang/render-network-dollarrndr-mterics
Even if it cannot keep pace with a token price that rose five-fold in a year, Render Network’s GMV (Gross Merchandise Value, total transaction volume) has genuinely grown: 2022 GMV was up 70% over the previous year. Judging from the total $RNDR allocated to nodes on the Dune dashboard, GMV for the first half of 2023 is about $1.19M, roughly flat versus the same period of 2022. That GMV is plainly thin for a market cap of around $700 million.
Source: https://globalcoinresearch.com/2023/04/26/render-network-scaling-rendering-for-the-future/
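For readers who want to reproduce a GMV estimate like the one above, the method is simply to multiply each month’s $RNDR paid to nodes by the prevailing price and sum. A sketch with placeholder numbers (not the actual Dune query output):

```python
# Estimating GMV from on-chain data: monthly RNDR paid out to nodes
# times that month's average price. All values here are hypothetical.
monthly_rndr_to_nodes = [210_000, 180_000, 250_000]   # hypothetical
monthly_avg_price_usd = [1.10, 1.55, 2.40]            # hypothetical

gmv_usd = sum(r * p for r, p in zip(monthly_rndr_to_nodes, monthly_avg_price_usd))
print(f"Estimated GMV: ${gmv_usd:,.0f}")
```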
The potential impact of Vision Pro’s launch on RNDR
In a Medium article published on June 10, Render Network claimed that Octane’s rendering capability on the M1 and M2 chips is unique: because Vision Pro also uses the M2, rendering on Vision Pro is no different from ordinary desktop rendering.
But the question is: why publish rendering tasks from a device with 2 hours of battery life that is mainly used for experiences and play rather than as a productivity tool? If Vision Pro’s price drops, its battery life improves dramatically, its weight falls, and true mass adoption arrives, that may be when Octane gets to play its role…
What can be confirmed is that the migration of digital assets from flat devices to MR devices will indeed increase infrastructure demand. Unity’s announcement that it is working with Apple to adapt its game engine for Vision Pro, and its 17% stock jump the same day, shows the market’s optimism. With Disney and Apple cooperating, 3D rendering of traditional film and TV content may see similar demand growth. Render Network, which specializes in film and TV rendering, launched NeRFs, an AI-based 3D rendering technology, in February of this year, combining AI computation and 3D rendering to create real-time immersive 3D assets viewable on MR devices. With the support of Apple’s ARKit, anyone with a higher-end iPhone can photo-scan objects into 3D assets, and NeRF technology uses AI-assisted rendering to turn a crude photoscan into an immersive 3D asset that refracts light differently from different angles. This kind of spatial rendering will be an important tool for MR content production, providing potential demand for Render Network.
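For intuition on what NeRF-style rendering computes, here is the standard NeRF volume-rendering quadrature for a single camera ray, written out in NumPy. This is the textbook formulation from the original NeRF paper, not Render Network’s implementation:

```python
import numpy as np

def render_ray(sigmas, colors, deltas):
    """Classic NeRF volume-rendering quadrature for one camera ray.

    sigmas: (N,) densities sampled along the ray (from the trained model)
    colors: (N, 3) RGB at those samples
    deltas: (N,) distances between consecutive samples
    Returns the composited RGB seen along the ray.
    """
    alphas = 1.0 - np.exp(-sigmas * deltas)                 # opacity per segment
    trans = np.cumprod(np.append(1.0, 1.0 - alphas))[:-1]   # light surviving so far
    weights = trans * alphas
    return (weights[:, None] * colors).sum(axis=0)

# One ray, three samples: an opaque red blob behind thin white haze.
sigmas = np.array([0.1, 0.2, 5.0])
colors = np.array([[1, 1, 1], [1, 1, 1], [1, 0, 0]], dtype=float)
deltas = np.full(3, 0.5)
print(render_ray(sigmas, colors, deltas))  # mostly red, slightly washed out
```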
This reminds me of another video. In 2011, twelve years ago, Microsoft released Windows Phone 7 (as a Gen Z-er with little memory of that era, it is hard for me to imagine how much effort Microsoft once poured into phones) and ran a satirical ad about smartphones called “Really?”: people clutching their phones while cycling, while sunbathing on the beach, in the shower, falling down the stairs at parties because they were distracted, even dropping them into urinals… Microsoft’s intent was to tell users that “Microsoft’s phone will save us from smartphone addiction.” It was certainly a failed attempt, and the ad’s title “Really?” might as well have been “Reality”: the presence and intuitive interaction of smartphones proved more addictive than an unnatural “Windows computer for mobile,” just as a reality that blends virtual and real will be more addictive than plain reality.
How to grasp this kind of future? We have several directions that we are exploring:
- Creation of immersive experiences and narratives: Start with video. After Vision Pro, shooting “3D depth” film has never been easier, and this will change how people consume digital content: from “remote viewing” to “immersive experience.” Beyond video, “3D spaces with native content” may be another track worth watching. This does not mean churning out template libraries of identical scenes, or carving a few nominally explorable spaces out of games, but building spaces that are interactive, content-native, and 3D-first. Such a space might be a handsome piano coach who sits beside you on the bench, highlights the right keys, and gently encourages you when you are frustrated; a virtual girlfriend who understands you and joins you for a walk; or a little elf hiding a game key in the corner of your room… The creator economy born here is well served by blockchain: trusted, automatically settled, assetized digital content traded with low communication friction. Creators can engage fans without registering companies and wiring up Stripe for payments, without ceding 10% (Substack) to 70% (Roblox) of revenue to platforms, and without fearing that a platform’s bankruptcy will erase their hard work… A wallet, a composable content platform, and decentralized storage can solve these problems. Similar upgrades will come to gaming and social spaces; indeed, the boundary between games, movies, and social spaces will blur. When the experience is no longer a big screen hanging several meters away but a close-range scene with depth, distance, and spatial audio, players are no longer viewers “watching” but participants in the scene whose actions can affect the virtual world (raise your hand in the jungle, and a butterfly lands on your fingertip).
- Infrastructure and community for 3D digital assets: Vision Pro’s 3D capture will greatly lower the difficulty of creating 3D video, giving rise to a new market for content production and consumption. The corresponding upstream and downstream infrastructure, such as asset marketplaces and editing tools, may remain dominated by existing giants, or may be opened up by startups the way AIGC was.
- Hardware/software upgrades that deepen immersion: Whether it is Apple’s research into “observing the human body in finer detail to create adaptive environments” or the addition of touch and taste to the immersive experience, these are tracks with considerable potential.
Of course, entrepreneurs in this field likely have a deeper understanding and more creative exploration than we do. Feel free to DM @0xscarlettw to discuss the possibilities of the spatial computing era.
Acknowledgments and references:
Thanks to Mint Ventures partner @fanyayun and research partner @xuxiaopengmint for their suggestions, review, and proofreading during the writing of this article. The XR analysis framework draws on @ballmatthew’s series of articles, Apple’s WWDC and developer sessions, and the author’s hands-on experience with various XR devices on the market.
- https://www.youtube.com/watch?v=YJg02ivYzSs
- https://www.bilibili.com/video/BV1Ps4y1q7K2/?share_source=copy_web&vd_source=fc6336b5a0337d489d6eaf7ae486e621
- https://www.youtube.com/watch?v=OFvXuyITwBI
- https://twitter.com/DrJimFan/status/1665794601154916352
- https://www.matthewball.vc/all/why-vrar-gets-farther-away-as-it-comes-into-focus
- https://twitter.com/blader/status/1666007944285274113?s=20
- https://twitter.com/FEhrsam/status/1665817199284559873
- https://mirror.xyz/0x30bF18409211FB048b8Abf44c27052c93cF329F2/6xR2nFi-Q5WdXIDZpEga4xS3m3AZ61hXyu6dzIEBb_E
- https://rndr.gitbook.io/render-network-foundation-governance/
- https://docs.google.com/spreadsheets/d/1vgNamfJsJeCOUnFGtrdBw7GJCtN25bXEIFOluJQAO64/edit#gid=365524340