Screen Space techniques operate directly in the final image space (the screen), rather than in object or world space.
Instead of computing how light interacts with geometry, they reuse what the camera has already rendered and manipulate it to produce the desired visual result.
→ If it is not visible on the screen, it does not exist.
Three main spaces to think in:
→ Object Space: coordinates relative to the model itself
→ World Space: coordinates within the shared scene
→ Screen Space: coordinates of the final rendered image

Screen Space sits at the top of the stack: it is the last space a pixel passes through before reaching the display.
Screen Space techniques emerged in real-time rendering (mid 2000s → early 2010s) as a response to performance limitations.
Full physical simulation (ray tracing, global illumination) was too expensive for real-time systems.
Solution: approximate expensive effects using only what has already been rendered (the frame's color, depth, and normal buffers).
This led to the rise of:
→ SSAO (Screen Space Ambient Occlusion)
→ SSR (Screen Space Reflections)
→ screen space shadows and post-process effects
All Screen Space techniques share the same law:
Use the rendered image as the primary data source → operate in screen coordinates → trade physical accuracy for performance and control
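The "operate in screen coordinates" step boils down to one projection: world position → clip space → perspective divide → pixel coordinates. A minimal sketch (names and the Y-flip convention are illustrative assumptions, not from any specific engine):

```python
import numpy as np

def world_to_screen(point_world, view_proj, width, height):
    """Project a world-space point into screen (pixel) coordinates.

    view_proj is a 4x4 view-projection matrix. Illustrative sketch:
    conventions (Y-down screen origin) vary between APIs.
    """
    p = view_proj @ np.append(point_world, 1.0)   # clip space
    ndc = p[:3] / p[3]                            # perspective divide -> [-1, 1]
    u = (ndc[0] * 0.5 + 0.5) * width              # NDC x -> pixel column
    v = (1.0 - (ndc[1] * 0.5 + 0.5)) * height     # flip Y: origin at top-left
    return u, v, ndc[2]                           # keep depth for comparisons

# Example: with an identity view-projection, the origin lands mid-screen
u, v, depth = world_to_screen(np.array([0.0, 0.0, 0.5]), np.eye(4), 1920, 1080)
print(u, v)  # -> 960.0 540.0
```

Every screen-space technique below starts from this mapping: once a point is expressed as (pixel, depth), the rendered frame itself becomes the lookup table.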
A Screen Space material ignores object-based shading and instead displays content aligned with the camera view.
→ geometry becomes a canvas → the screen becomes the reference
Properties:
→ camera-dependent: the content stays locked to the screen as the camera moves
→ independent of the surface's UVs and geometry
→ evaluated in screen coordinates, not in object or world space
Typical usage:
→ screen-aligned textures and overlays
→ portal or "window into another image" effects
→ stylized materials that must match the camera view exactly
→ color = texture(screenX, screenY)
The material is evaluated per pixel on screen, not per surface in space.
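Per-pixel evaluation of a screen-space material can be sketched on the CPU as a plain texture lookup keyed by the fragment's own screen position (a simplified model of what the fragment shader does; the clamp-to-edge sampling is an assumption):

```python
import numpy as np

def screen_space_material(screen_texture, frag_x, frag_y):
    """Shade a fragment by sampling a screen-aligned texture at the
    fragment's screen position, ignoring the surface's UVs entirely.
    Illustrative CPU-side sketch of a per-pixel shader lookup."""
    h, w, _ = screen_texture.shape
    # Clamp to the texture bounds (like CLAMP_TO_EDGE sampling)
    x = min(max(int(frag_x), 0), w - 1)
    y = min(max(int(frag_y), 0), h - 1)
    return screen_texture[y, x]

# A 4x4 "screen" where each pixel encodes its own (x, y) coordinates,
# so the lookup visibly returns the position it was sampled at.
tex = np.zeros((4, 4, 3))
for y in range(4):
    for x in range(4):
        tex[y, x] = (x, y, 0)

c = screen_space_material(tex, 2, 3)
print(c)  # -> [2. 3. 0.]
```

Note that the surface being shaded never appears in the function: only the fragment's screen position matters, which is exactly why the geometry reads as a canvas.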
SSR uses screen space to approximate reflections.
Instead of tracing rays in 3D:
→ sample the already rendered image → reuse visible pixels as reflection data
→ operate on the final image → avoid expensive world simulation
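The core of SSR is a march across the already-rendered frame: step a ray in screen space and, wherever the stored depth is closer than the ray's depth, reuse that pixel's color as the reflection. A minimal sketch under simplified assumptions (linear depth, constant step, no binary-search refinement; this is not any engine's actual implementation):

```python
import numpy as np

def ssr_march(color_buf, depth_buf, start, step, max_steps=64):
    """Minimal screen-space reflection march (illustrative sketch).

    start and step are (x, y, depth) triples in screen space.
    Returns the reused pixel color on a hit, or None when the ray
    leaves the screen or never intersects the depth buffer.
    """
    h, w = depth_buf.shape
    pos = np.array(start, dtype=float)
    delta = np.array(step, dtype=float)
    for _ in range(max_steps):
        pos += delta
        x, y = int(pos[0]), int(pos[1])
        if not (0 <= x < w and 0 <= y < h):
            return None                      # ray left the screen: no data
        if depth_buf[y, x] <= pos[2]:        # stored surface is in front of ray
            return color_buf[y, x]           # reuse the visible pixel
    return None                              # miss: fall back (e.g. env map)

# Example: a red surface at column x=5; the ray marches right and hits it.
depth = np.ones((8, 8)); depth[:, 5] = 0.2
color = np.zeros((8, 8, 3)); color[:, 5] = (1, 0, 0)
hit = ssr_march(color, depth, start=(0, 4, 0.0), step=(1, 0, 0.05))
print(hit)  # -> [1. 0. 0.]
```

The `return None` paths make the technique's main limitation concrete: anything off-screen or occluded simply cannot be reflected, because it was never rendered.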
Screen Space is part of a larger doctrine:
Reality → Simulation → Approximation → Screen Space → Perception
In immersive environments, Screen Space shifts from optimization to precision.
Used for:
→ projection mapping and calibrated displays
→ LED walls and virtual production volumes
→ installations where the output pixel must match the designed pixel exactly
→ What you see = what you output
→ control > realism
Screen Space techniques do not simulate the world.
They control what the audience perceives, using only what is already visible.
→ Geometry becomes support → The screen becomes truth