Recently pushed an aggressive LoD and streaming optimization test on the experimental branch. The setup was pretty demanding—100 different scenes loaded with roughly 150 million splats in total. The system managed 16 million splats for local streaming while keeping 2 million splats in the active LoD pool.
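For a sense of what that tiering means in practice, here's a minimal sketch of how a 16M-resident / 2M-active budget split could be managed. Everything here is illustrative only: the chunk granularity, the screen-space-error heuristic, and all names are assumptions, not the actual implementation on the branch.

```cpp
#include <algorithm>
#include <cstdint>
#include <iostream>
#include <vector>

// A chunk of splats streamed in from disk or network (granularity assumed).
struct SplatChunk {
    uint64_t id;
    uint32_t splatCount;   // splats contained in this chunk
    float    screenError;  // projected error if this chunk is not rendered
};

struct StreamingBudgets {
    uint64_t residentBudget = 16'000'000; // splats kept streamed-in locally
    uint64_t activeBudget   =  2'000'000; // splats in the active LoD pool
};

// Greedily fill the active LoD pool with the chunks whose absence would be
// most visible, staying under the active-splat budget.
std::vector<SplatChunk> selectActivePool(std::vector<SplatChunk> resident,
                                         const StreamingBudgets& b) {
    std::sort(resident.begin(), resident.end(),
              [](const SplatChunk& a, const SplatChunk& c) {
                  return a.screenError > c.screenError; // most visible first
              });
    std::vector<SplatChunk> active;
    uint64_t used = 0;
    for (const auto& chunk : resident) {
        if (used + chunk.splatCount > b.activeBudget) continue;
        used += chunk.splatCount;
        active.push_back(chunk);
    }
    return active;
}

int main() {
    StreamingBudgets budgets;
    std::vector<SplatChunk> resident = {
        {1, 1'200'000, 0.9f}, {2, 600'000, 0.4f}, {3, 900'000, 0.7f}};
    for (const auto& chunk : selectActivePool(resident, budgets))
        std::cout << "active chunk " << chunk.id << "\n"; // prints 1, then 2
}
```

Chunk 3 is skipped in this example because adding it (1.2M + 0.9M) would exceed the 2M active budget, while the lower-priority chunk 2 still fits.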
The results are looking solid so far. Once these performance improvements stabilize across different scenarios, we're essentially looking at the ability to build persistent splat-based worlds that scale without practical size limits. This would be a game changer for applications that rely on massive spatial data and real-time rendering.
TestnetScholar
· 2025-12-22 04:44
150 million splats? That number leaves me a bit skeptical, but it does sound impressive.
---
Everyone is pushing hard on LoD optimization. Once this stabilizes I'll have to keep an eye on it.
---
The idea of a persistent world that keeps expanding sounds appealing, but I wonder how it will actually perform.
---
Streaming 16 million splats locally... how much VRAM does that require?
---
If it can really load scenes without limits, many things will have to change.
---
Testing at a scale of 100 scenes is pretty aggressive; let's wait for updates and see the results.
---
Has splat rendering finally made some progress? It should have been optimized long ago.
---
If this is to be put on the mainnet, does that mean it needs to be tested again?
---
Stability aside, the idea itself isn't bad.
---
Feels like this is paving the way for a bigger project?
PortfolioAlert
· 2025-12-19 05:45
Haha, 150 million splats actually running? That number sounds a bit hard to believe.
Wait, once the streaming optimization is stable, can it really scale without limits? Wouldn't that mean rewriting the game engine?
Just asking: why keep 16 million splats local at all? Couldn't everything be done in the cloud?
Wow, these performance metrics look like they could change the entire rendering ecosystem.
I'm really looking forward to the stability test reports... can it actually be used in practice?