this is very old news, but since i was explaining it in an email to somebody and it doesn’t quite qualify for an article on my website, i thought of dropping it here. that’s what boxrooms are for, after all.
so, say you are raymarching or raytracing some objects in a fragment shader, and you want to composite them with some other geometry that you rendered or will render through regular rasterization. the only thing you need to do is to output a depth value in your raytracing/marching shader, and let the depth buffer do the rest. the first thing to do, then, is to understand what “depth” means here.
in a raytracer/marcher, you probably have access to the distance from the ray origin (your camera position) to the closest geometry/intersection point. that distance is NOT what you want to write to the depth buffer, as hardware rasterizers (opengl or directx) don’t store distances to the camera, but the z of the geometry/intersection point. the reason is that this z value is still monotonically increasing with the distance, but has the property of being linear (linear as in “interpolatable across the surface of a planar 3D triangle”). so, in your raymarcher, compute the intersection point, and use its z component for writing to the depth buffer.
well, that will not work just like that. your api of preference will remap your z values to a -1 to 1 range (in opengl; directx uses 0 to 1) based on the near and far clipping planes you decided to set up. furthermore, the remapping will also transform your z values to some other sort of scale that exploits the properties of perspective (like with a curve that compresses values in the far distance). so you will have to implement the same remapping in your shader before you can merge your raytraced/marched objects with the rest of the polygons.
the mapping is simple, though, and is normally configured by the projection matrix. grab your opengl redbook, and have a look at the contents of a standard projection matrix. the third and fourth rows are what we need, since those are the ones that affect the z and w components of your points when transformed from eye to clip space. so, if ze is the z of your intersection point in camera (eye) space, then you can compute the clip space z and w as
zc = -ze*(far+near)/(far-near) - 2*far*near/(far-near)
wc = -ze
the hardware will then do the perspective division and compute the z value in normalized device coordinates before converting it to a 24 bit depth value:
zn = zc/wc = (far+near)/(far-near) + 2*far*near/(far-near)/ze
which, as you can see, is a formula of the form zn = a + b/ze, which produces the desired depth compression. you can check that the boundary conditions are met:
ze = -near -> zn = -1.0;
ze = -far -> zn = 1.0;
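the boundary check above is easy to reproduce numerically; here is a quick sanity test (plain python standing in for the shader math, near/far values arbitrary):

```python
# verify that zn = a + b/ze reaches -1 at the near plane and +1 at the far plane
near, far = 0.1, 100.0                 # arbitrary example clip planes

a = (far + near) / (far - near)
b = 2.0 * far * near / (far - near)

zn_near = a + b / -near                # eye-space z is negative into the screen
zn_far  = a + b / -far

print(zn_near, zn_far)                 # approximately -1.0 and 1.0
```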
yeah, remember that your depths in camera space are negative going into the screen. so, our raytracing/marching shader should end with something like
float a = (far+near)/(far-near);
float b = 2.0*far*near/(far-near);
float zn = a + b/ze;          // ze is the (negative) eye space z of the intersection point
gl_FragDepth = 0.5*zn + 0.5;  // gl_FragDepth expects window coordinates, 0 to 1 with the default depth range
you probably want to upload a and b as uniforms to your shader.
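host side, a and b only need recomputing when the projection changes; a sketch of that (the helper and the uniform names uA/uB are made up, and the commented glUniform1f calls assume a live opengl context, e.g. via PyOpenGL):

```python
# a and b depend only on the clip planes, so there is no need to
# recompute them per fragment inside the shader
def depth_uniforms(near, far):
    a = (far + near) / (far - near)
    b = 2.0 * far * near / (far - near)
    return a, b

a, b = depth_uniforms(0.1, 100.0)
# then, once per projection change (uniform names are hypothetical):
#   glUniform1f(glGetUniformLocation(prog, "uA"), a)
#   glUniform1f(glGetUniformLocation(prog, "uB"), b)
```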
alternatively, if you don’t want to mess with all this, you can directly grab the projection parameters from the projection matrix, and do something like
vec4 pc = ProjectionMatrix * vec4( intersectionPoint, 1.0 );  // use the ModelViewProjection matrix instead if intersectionPoint is not in eye space
gl_FragDepth = 0.5*pc.z/pc.w + 0.5;                           // perspective divide, then remap ndc to the 0 to 1 window range
which is a little bit more expensive, but gives the same results…
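you can convince yourself that both routes agree with a small numeric check (again plain python standing in for the shader math; the clip planes and the sample depth are arbitrary):

```python
# compare the full zc/wc route against the a + b/ze shortcut, and check
# the 0..1 window-space remap that the depth buffer expects
near, far = 0.25, 50.0

def ndc_via_clip(ze):
    # third and fourth rows of the standard opengl projection matrix
    zc = -ze * (far + near) / (far - near) - 2.0 * far * near / (far - near)
    wc = -ze
    return zc / wc            # perspective division done by the hardware

a = (far + near) / (far - near)
b = 2.0 * far * near / (far - near)

ze = -7.3                     # some eye-space depth between the planes
print(abs(ndc_via_clip(ze) - (a + b / ze)) < 1e-9)   # True

depth = 0.5 * ndc_via_clip(-near) + 0.5   # window-space depth at the near plane
print(depth)                              # approximately 0.0
```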