• Cyrus Draegur@lemm.ee · 7 days ago

    I’ve seen an awesome “kludge” method where, instead of simulating billions of photons bouncing in billions of directions off every surface in the world, they take extremely low resolution cube map snapshots from the perspective of surfaces, roughly one per unit of surface area, once every couple of frames, and blend between them over distance to inform the diffuse lighting of the scene, as if it were ambient light mapping rather than direct light. That’s cool because not only can it represent the brightness of emissive textures, it also makes it less necessary to fill scenes with manually placed key lights, fill lights, and backlights.
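
    A rough sketch of that blending idea in C++ (the names here, Probe, sampleAmbientCube, sampleDiffuse, are all made up, and a single colour per cube face stands in for a real low-res cubemap): each shaded point averages the nearby probes by distance, then looks up diffuse light along its surface normal.

    ```cpp
    #include <array>
    #include <cmath>
    #include <cstdio>
    #include <vector>

    struct Vec3 { float x, y, z; };

    static float dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

    static float dist(const Vec3& a, const Vec3& b) {
        Vec3 d{a.x - b.x, a.y - b.y, a.z - b.z};
        return std::sqrt(dot(d, d));
    }

    struct Probe {
        Vec3 position;
        // 1x1 "cubemap": one irradiance colour per face (+X, -X, +Y, -Y, +Z, -Z).
        std::array<Vec3, 6> faces;
    };

    // Look up the probe's tiny cubemap along a direction: weight each face by the
    // squared projection of the direction onto that axis (an ambient cube).
    static Vec3 sampleAmbientCube(const Probe& p, const Vec3& n) {
        Vec3 w{n.x * n.x, n.y * n.y, n.z * n.z};
        const Vec3& fx = n.x >= 0 ? p.faces[0] : p.faces[1];
        const Vec3& fy = n.y >= 0 ? p.faces[2] : p.faces[3];
        const Vec3& fz = n.z >= 0 ? p.faces[4] : p.faces[5];
        return {w.x * fx.x + w.y * fy.x + w.z * fz.x,
                w.x * fx.y + w.y * fy.y + w.z * fz.y,
                w.x * fx.z + w.y * fy.z + w.z * fz.z};
    }

    // Diffuse light at a surface point: blend nearby probes with inverse-distance
    // weights, so moving through the level fades smoothly between snapshots.
    static Vec3 sampleDiffuse(const std::vector<Probe>& probes, const Vec3& pos, const Vec3& normal) {
        Vec3 sum{0, 0, 0};
        float wsum = 0.0f;
        for (const Probe& p : probes) {
            float w = 1.0f / (dist(pos, p.position) + 1e-4f);
            Vec3 c = sampleAmbientCube(p, normal);
            sum = {sum.x + w * c.x, sum.y + w * c.y, sum.z + w * c.z};
            wsum += w;
        }
        return {sum.x / wsum, sum.y / wsum, sum.z / wsum};
    }

    int main() {
        std::vector<Probe> probes(2);
        probes[0].position = {0, 0, 0};
        probes[0].faces.fill({1.0f, 0.6f, 0.3f});   // probe near a warm emissive panel
        probes[1].position = {4, 0, 0};
        probes[1].faces.fill({0.1f, 0.15f, 0.3f});  // probe in a cool, dim corner
        Vec3 c = sampleDiffuse(probes, {1, 0, 0}, {0, 1, 0});
        std::printf("diffuse near the warm probe: %.2f %.2f %.2f\n", c.x, c.y, c.z);
    }
    ```

    A real renderer would do the lookup per pixel on the GPU and interpolate within a regular probe grid rather than looping over every probe, but the blending is the same idea.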

    • Hadriscus@lemm.ee · 7 days ago

      Light probes, but they don’t update well: you have to render the world from their point of view frequently, so they’re not suited for dynamic environments.
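
      Back-of-the-envelope on why keeping them fresh is costly (the numbers here are made up): every probe you refresh is six scene renders, one per cubemap face.

      ```cpp
      #include <cstdio>

      int main() {
          const int probes = 512;            // probes scattered through a level
          const int facesPerProbe = 6;       // a cubemap snapshot is six renders
          const int framesPerSecond = 60;    // target frame rate

          // Keeping every probe fresh every frame:
          std::printf("naive: %d scene renders per second\n",
                      probes * facesPerProbe * framesPerSecond);

          // Versus a small per-frame budget (round-robin, or only probes marked dirty):
          const int probesUpdatedPerFrame = 4;
          std::printf("budgeted: %d scene renders per second\n",
                      probesUpdatedPerFrame * facesPerProbe * framesPerSecond);
      }
      ```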

      • Cyrus Draegur@lemm.ee · 7 days ago

        They don’t need to update well; they’re a compromise to get slightly more reactive lighting than ‘baked’ ambient lights. Perhaps one could describe it as ‘parbaked’. Only the probes directly affected by a change in scene conditions need updating, and some tentative precalculations for “likely” changes can be tackled in advance, while the rest of the pre-established probes add no extra processing load because they aren’t updated unless, as stated above, something acts on them. If direct light changes and “sticks” long enough to affect the probes, any perceived ‘lag’ in the lighting gets glossed over by the player’s brain as “oh, my character’s eyes are adjusting, neat how they accounted for that”, even though it isn’t actually intentional, just a drawback of the technique’s limitations.
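
        Something like this, as a rough sketch (ProbeSystem, markDirtyAround and the rest are made-up names, not from any particular engine): scene changes flag nearby probes dirty, and each frame only a small budget of dirty probes gets re-captured, so untouched probes cost nothing and any backlog just trails the lighting change by a few frames.

        ```cpp
        #include <cstddef>
        #include <cstdio>
        #include <queue>
        #include <utility>
        #include <vector>

        struct Vec3 { float x, y, z; };

        struct Probe {
            Vec3 position;
            bool dirty = false;   // true when the cached snapshot is stale
            // ... the cached low-res cubemap / irradiance data would live here ...
        };

        class ProbeSystem {
        public:
            explicit ProbeSystem(std::vector<Probe> probes) : probes_(std::move(probes)) {}

            // Called when a light changes or an object moves: flag probes within `radius`.
            void markDirtyAround(const Vec3& p, float radius) {
                for (std::size_t i = 0; i < probes_.size(); ++i) {
                    Vec3 d{probes_[i].position.x - p.x, probes_[i].position.y - p.y,
                           probes_[i].position.z - p.z};
                    if (d.x * d.x + d.y * d.y + d.z * d.z <= radius * radius && !probes_[i].dirty) {
                        probes_[i].dirty = true;
                        pending_.push(i);
                    }
                }
            }

            // Called once per frame: re-capture at most `budget` dirty probes. Clean probes
            // cost nothing; a backlog just means the lighting catches up over a few frames,
            // which is the "eyes adjusting" lag described above.
            void update(int budget) {
                while (budget-- > 0 && !pending_.empty()) {
                    std::size_t i = pending_.front();
                    pending_.pop();
                    recaptureCubemap(probes_[i]);
                    probes_[i].dirty = false;
                }
            }

        private:
            void recaptureCubemap(const Probe& p) {
                // Placeholder for the expensive part: rendering six low-res faces from p.position.
                std::printf("re-capturing probe at (%.1f, %.1f, %.1f)\n",
                            p.position.x, p.position.y, p.position.z);
            }

            std::vector<Probe> probes_;
            std::queue<std::size_t> pending_;
        };

        int main() {
            std::vector<Probe> probes;
            for (float x : {0.0f, 2.0f, 4.0f, 20.0f}) {
                Probe p;
                p.position = {x, 0.0f, 0.0f};
                probes.push_back(p);
            }
            ProbeSystem probeSystem(std::move(probes));
            probeSystem.markDirtyAround({1, 0, 0}, 3.0f);  // e.g. a door opened and light spilled in
            probeSystem.update(/*budget=*/2);              // frame 1: two of the three dirty probes refresh
            probeSystem.update(/*budget=*/2);              // frame 2: the backlog drains
        }
        ```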