this post was submitted on 27 Dec 2024
64 points (91.0% liked)

top 12 comments
[–] [email protected] 3 points 4 months ago (1 children)

Someone mentioned Neural Radiance Caching to me recently, which Nvidia has been working on for a while. They presented it at an event in 2023 (disclaimer: the talk is account-gated and I haven't watched it, but a 6-minute "teaser" is available: YouTube).

I don't really understand how it works after skimming through some material about it, but it sounds like it could be one of several ways to address this specific problem?

[–] [email protected] 3 points 4 months ago* (last edited 4 months ago) (1 children)

The Nvidia site at least mentions it's used in RTX Global Illumination, and I heard rumors about Cyberpunk getting it, but I'm not sure whether it has actually shipped anywhere yet. I think I heard it mentioned in a graphics review of some game.

[–] [email protected] 1 points 4 months ago

Yeah, I'm also confused about its current status in released games. It seems like a significant enough feature that I'd naively assume the devs would boast about it if it were in a currently released game, so I guess it's not there yet?

[–] mindbleach 1 points 4 months ago

Use stochastic / anisotropic "instant radiosity."

Eye-rays go from the camera to the scene. Those hits are onscreen points. The rays there bounce out in all directions and hit another part of the scene. Those hits are offscreen points. Rays continue until they find a light source.

Every first-bounce offscreen point is now a light source aimed directly at visible geometry. It has some anisotropic spread based on the material properties and the angle to the next hit. Metropolis light transport can even jitter those distant bounces to increase how much unbiased energy enters the visible scene, through that point.
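Roughly, as a sketch of that first-bounce step (the camera/scene helpers here are hypothetical stand-ins, not any real engine API):

```python
# Sketch of the first-bounce "instant radiosity" idea, not a real renderer.
# camera.eye_ray, scene.trace, scene.sample_bounce, scene.path_to_light and
# material.lobe_width are hypothetical stand-ins for whatever the engine provides.
from dataclasses import dataclass

@dataclass
class VPL:
    position: tuple   # first-bounce offscreen hit point
    normal: tuple     # surface normal at that hit
    flux: tuple       # RGB energy carried through this point from the light the path found
    spread: float     # anisotropic lobe width from the material + incoming angle

def generate_vpls(camera, scene, width, height):
    """One eye ray per pixel; each first offscreen bounce becomes a virtual
    light aimed back at the visible geometry that spawned it."""
    vpls, origin_pixels = [], []
    for y in range(height):
        for x in range(width):
            eye_ray = camera.eye_ray(x, y)             # camera -> onscreen point
            onscreen = scene.trace(eye_ray)            # visible geometry
            if onscreen is None:
                continue
            bounce = scene.sample_bounce(onscreen)     # random outgoing direction
            offscreen = scene.trace(bounce)            # first offscreen point
            if offscreen is None:
                continue
            flux = scene.path_to_light(offscreen)      # keep bouncing until a light is found
            vpls.append(VPL(offscreen.position, offscreen.normal, flux,
                            offscreen.material.lobe_width(bounce.direction)))
            origin_pixels.append((x, y))               # remember which pixel spawned it
    return vpls, origin_pixels
```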

So consider a one-sample-per-pixel frame at 1080p. It has two million dodgy flashlights pointed straight at some part of visible geometry. Any random sampling of those just-offscreen sources - say a hundred per pixel - will be additional real information, for dirt cheap. And that's if you bother with shadow rays. Given that you know which source hit which pixel, you can assume nearby pixels will also be hit, and spread that energy. This can be done with a geometry buffer and no additional rays. It'll be wrong, but it'll be a soft kind of wrong.