Hey there, I'm new to the forum and this is my first post!
Just a quick line to say how much I appreciate the Wolfire blog: it's very interesting to read about the technical details of development.
As you may see from the subject, this is about a technical detail, another of those "DirectX N can, DirectX N-1 can't" features: hardware tessellation. I'm not really a developer, but it seems to me that this technology is one of those that allow independent developers to compete with high-budget productions: procedurally generated content. Somehow this "tessellation" is about procedurally generated meshes.
Are you considering implementing this technology in your engine somehow? It would be great to have an OpenGL title with it.
I hope this topic is appreciated.
Everybody there, have a hug!
Re: Opengl hardware tesselation
Posted: Tue Jan 19, 2010 3:13 pm
by Endoperez
Welcome to the forums! I'm not a technical guy, so I hope I'm not too far from the truth...
As far as I understand, hardware tessellation doesn't create content, but scales it procedurally. So if you have a high-end machine, you get really cool graphics, and furthermore, you get really cool graphics CLOSE to you. On poorer machines, and farther from the camera, the meshes are automatically made less detailed, so the game runs smoothly, whatever is close to you is still more detailed than in current games, and the transition from less detailed to detailed could even be fairly smooth.
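To illustrate the distance-based scaling idea, here's a toy sketch in Python. This is my own invention, not real GPU code: on actual DX11/OpenGL hardware the tessellation factor is computed per patch in a hull (a.k.a. tessellation control) shader, and the names and falloff below are made up for illustration.

```python
def tess_factor(distance, max_factor=64, near=1.0, far=100.0):
    """Hypothetical helper: pick a tessellation factor that falls off
    with camera distance, so nearby meshes get subdivided the most."""
    if distance <= near:
        return max_factor
    if distance >= far:
        return 1
    # Linear falloff between the near and far thresholds.
    t = (distance - near) / (far - near)
    return max(1, round(max_factor * (1.0 - t)))

print(tess_factor(1.0))    # close to the camera: maximum subdivision
print(tess_factor(100.0))  # far away: the original coarse mesh
```

Because the factor changes gradually with distance, the detailed-to-coarse transition can be fairly smooth rather than the sudden "popping" of discrete LOD swaps.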
I think it's most beneficial if the game's models are fantastic to begin with. Indie developers have fewer artists working on the assets, and even the best artists have to eat, so they're usually working on commercial games as well; I don't think indie developers will have any special edge there.
Re: Opengl hardware tesselation
Posted: Thu Jan 21, 2010 3:43 am
by SamW
I don't know much about the actual, exact details of tessellation implementations (so the rest of this post is just speculation, as I can't seem to find any hard information on the subject right now), but... From what I've seen of DirectX 11 hardware tessellation, it is able to add detail to the mesh of whatever you are rendering.
Well, since you can't simply add detail where there is none, I'd assume that one way to generate more detail is to use some sort of heightmap/displacement map, or maybe just the bump map, to fit the tessellated mesh to.
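A minimal sketch of that heightmap idea, assuming a displacement texture is the detail source. Everything here is invented for illustration (the `sample_height` function stands in for a texture lookup); in a real pipeline this displacement would happen in the domain/evaluation shader, per generated vertex.

```python
import math

def sample_height(u, v):
    """Stand-in for a heightmap texture lookup: a couple of sine
    waves instead of real image data (pure invention here)."""
    return 0.1 * math.sin(6.28 * u) * math.cos(6.28 * v)

def displace(vertex, normal, u, v):
    """Push a tessellated vertex along its surface normal by the
    sampled height, roughly what a domain shader would do."""
    h = sample_height(u, v)
    return tuple(p + h * n for p, n in zip(vertex, normal))

# A vertex on a flat patch, with an up-facing normal:
moved = displace((0.25, 0.0, 0.25), (0.0, 1.0, 0.0), 0.25, 0.0)
print(moved)
```

The flat patch gains bumps it never had as geometry; the "detail" was really sitting in the heightmap all along.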
Perhaps the tessellation implementation in DirectX 11 (and OpenGL) doesn't really care where the detail information comes from. Looking around for information about the DirectX implementation, I saw something about hull shaders and domain shaders. Assuming the tessellation stages are programmable, and one of the programmable things is vertex displacement, then they could be programmed to generate detail procedurally. The question is: how will you program the shader to create detail automatically? You will need some sort of source data!
Of course, even without any additional detail information, tessellation can use a smoothing subdivision scheme (e.g. Catmull–Clark subdivision) to simply make sharp, blocky edges rounder. But to add real detail you will have to generate some sort of detail map to fit the mesh to.
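To show how subdivision alone rounds things off without any extra data: Catmull–Clark proper works on quad meshes and is more involved, so as a 2D stand-in here's one step of Chaikin's corner cutting on a closed polygon (my choice of scheme for the sketch, not anything DX11 mandates).

```python
def chaikin(points):
    """One round of Chaikin corner cutting on a closed polygon.
    Each edge is replaced by two points at 1/4 and 3/4 along it,
    which shaves off the sharp corners."""
    out = []
    n = len(points)
    for i in range(n):
        (x0, y0), (x1, y1) = points[i], points[(i + 1) % n]
        out.append((0.75 * x0 + 0.25 * x1, 0.75 * y0 + 0.25 * y1))
        out.append((0.25 * x0 + 0.75 * x1, 0.25 * y0 + 0.75 * y1))
    return out

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
rounded = chaikin(square)  # 8 points; the square's corners are gone
print(rounded)
```

Repeat the step a few times and the square converges toward a smooth rounded shape: more vertices, rounder silhouette, but no new detail that wasn't implied by the original corners.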
Implemented correctly, it also appears that one could use it for LOD, but that means the original mesh will be the lowest LOD, while tessellation generates the higher-LOD meshes and automatically fits them to whatever detail information is available to the shader.
Re: Opengl hardware tesselation
Posted: Thu Jan 21, 2010 11:44 am
by TheBigCheese
If I had to take a guess at how it works, I would guess that tessellation takes in a very high-resolution mesh and then creates 1-5 different levels of detail by breaking it down. It wouldn't need to create any detail procedurally, because it could just bring up higher and higher quality meshes as you get closer, until you reach the original mesh.
Re: Opengl hardware tesselation
Posted: Thu Jan 21, 2010 3:45 pm
by rudel_ic
Tessellation itself is just a strategic subdivision of a surface. You can of course displace the generated vertices programmatically. To do this, you can use a displacement map (baked from a high-res model, usually), you can do it procedurally (use a noise function to displace rusted metal, for example), you can render the tessellated edges to a buffer to later use in a pixel shader... whatever comes to mind.
DX11 tessellation doesn't magically add detail; you still have to look up displacement in a texture, or whatever it is you want to do. Tessellation really just increases the mesh resolution in a way that's supposed to be convenient for typical applications.
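The procedural route rudel_ic mentions (noise displacing rusted metal) might look like this toy sketch. The hash-based value noise below is a cheap stand-in I made up for illustration, not real Perlin/simplex noise, and real code would do this per vertex in a shader.

```python
import math

def value_noise(x, y):
    """Cheap deterministic pseudo-noise in [0, 1], built by hashing
    lattice points and blending bilinearly between them."""
    def hash2(ix, iy):
        h = (ix * 374761393 + iy * 668265263) & 0xFFFFFFFF
        h = ((h ^ (h >> 13)) * 1274126177) & 0xFFFFFFFF
        return (h & 0xFFFF) / 0xFFFF
    ix, iy = int(math.floor(x)), int(math.floor(y))
    fx, fy = x - ix, y - iy
    # Blend the four surrounding lattice values.
    top = hash2(ix, iy) * (1 - fx) + hash2(ix + 1, iy) * fx
    bot = hash2(ix, iy + 1) * (1 - fx) + hash2(ix + 1, iy + 1) * fx
    return top * (1 - fy) + bot * fy

def displace_along_normal(vertex, normal, amount=0.05):
    """Nudge a tessellated vertex along its normal by a noise amount,
    roughing up the surface like pitted, rusted metal."""
    n = value_noise(vertex[0] * 8.0, vertex[2] * 8.0)
    return tuple(v + amount * n * c for v, c in zip(vertex, normal))

print(displace_along_normal((0.3, 0.0, 0.7), (0.0, 1.0, 0.0)))
```

The point is the same either way: the tessellator supplies extra vertices, and it's your code (texture lookup or noise function) that decides where they go.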
It's not too easy to handle tessellation at a variety of levels, though. For instance, collision detection against tessellated detail is difficult, and the same goes for crazy shit like on-the-fly voxelization of tessellated detail.
That's why it's still generally regarded as eye candy in games. In 'serious' apps, it's a completely different story; subdivision modeling is based on tessellation, for example.
This tool lets you draw onto a 3D model so that you can add detail. It automatically tessellates as you draw, so that extrusion doesn't mess things up, for example.
If it didn't tessellate, the focused face would just get bigger and bigger as you draw (a well-known flaw of Blender's sculpt mode, and actually the reason it's borderline useless for anything but high-detail work on an already subdivided model).