Unofficial, alpha 12 weekly build

A secret forum for people who preorder Overgrowth!
Brad
Posts: 9
Joined: Sun Feb 01, 2009 5:39 pm

Re: Unofficial, alpha 12 weekly build

Post by Brad » Sat Feb 07, 2009 11:37 pm

A flashlight isn't a huge dynamic directional light; it's small. They could even use vertex lighting only.
Wait. Are we talking about lighting or shadows here? There's a BIG difference between the two subjects. It's quite a breeze to handle most non-global-illumination, standard lighting setups (even with multiple sources). Shadows are a whole other story. The flashlight in your example would more than likely be a cone or spot light rather than a directional one. Directional lights have no size or position; they are an infinitely distant light source defined by nothing but a direction vector.
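Just to make that distinction concrete, in fixed-function OpenGL (worth noting since non-shader cards come up later in this thread) it's literally the w component of the light position that separates the two. A minimal sketch, assuming a current GL context; the light indices and values are made up:

    #include <GL/gl.h>

    /* Minimal sketch: directional vs. spot light in fixed-function
       OpenGL. Assumes a current GL context; values are illustrative. */
    void setup_lights(void)
    {
        glEnable(GL_LIGHTING);

        /* Directional light: w = 0 means "infinitely distant", so the
           xyz components are a direction, not a position. */
        const GLfloat sun_dir[4] = { 0.3f, 1.0f, 0.2f, 0.0f };
        glLightfv(GL_LIGHT0, GL_POSITION, sun_dir);
        glEnable(GL_LIGHT0);

        /* Spot light (the "flashlight"): w = 1 gives it a position,
           and the cutoff angle turns the point light into a cone. */
        const GLfloat torch_pos[4] = { 0.0f, 1.7f, 0.0f, 1.0f };
        const GLfloat torch_aim[3] = { 0.0f, 0.0f, -1.0f };
        glLightfv(GL_LIGHT1, GL_POSITION, torch_pos);
        glLightfv(GL_LIGHT1, GL_SPOT_DIRECTION, torch_aim);
        glLightf(GL_LIGHT1, GL_SPOT_CUTOFF, 25.0f);  /* half-angle, degrees */
        glLightf(GL_LIGHT1, GL_SPOT_EXPONENT, 8.0f); /* falloff in the cone */
        glEnable(GL_LIGHT1);
    }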

Additionally, vertex lighting is not going to interpolate well when you have low-poly or even single-quad meshes (such as your paper) representing small detail objects. It's simply impractical overall unless you're willing to tessellate the meshes you use just to get acceptable fidelity out of vertex lighting.
Sticking a normal map on a tiny piece of paper isn't smart.
That depends on how close the camera is going to get to the object. Even half a piece of crumpled loose-leaf could benefit aesthetically from a normal map if it's being held in front of the viewer's face. It's actually not costly at all, considering that you'd switch to a non-normal-mapped shader once you were more than a few feet away and thus unable to discern the details anyhow.
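The swap itself is trivial on the CPU side. A rough sketch; the shader handles are hypothetical and the two-unit threshold is made up for the example:

    #include <cmath>

    // Pick the cheaper of two (hypothetical) shader handles once the
    // object is far enough away that normal-mapped detail can't be
    // discerned anyway. The 2.0-unit threshold is made up.
    unsigned pick_detail_shader(unsigned normal_mapped_shader,
                                unsigned plain_shader,
                                float cam_x, float cam_y, float cam_z,
                                float obj_x, float obj_y, float obj_z)
    {
        const float dx = cam_x - obj_x;
        const float dy = cam_y - obj_y;
        const float dz = cam_z - obj_z;
        const float dist = std::sqrt(dx * dx + dy * dy + dz * dz);
        return dist < 2.0f ? normal_mapped_shader : plain_shader;
    }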
And about that use of static lighting: how would it look, and how fast would it run, without lightmaps?
Static lighting/shadows through lightmaps would look and perform fine; however, they would use additional memory, require an additional texture lookup unless stuffed into a spare texture channel (or baked in), and most importantly (once again) they would be less dynamic in realtime.
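For the record, that extra lookup doesn't even need shaders; with fixed-function multitexture it's just a second texture unit modulating the base map. A sketch, assuming a current context with multitexture support (the texture handles are placeholders, and the mesh needs a second UV set for unit 1):

    #include <GL/gl.h>

    // Sketch: diffuse map in unit 0, baked lightmap in unit 1, combined
    // by modulation. glActiveTexture is core since GL 1.3 but may need
    // extension loading on some platforms.
    void bind_lightmapped_material(GLuint diffuse_tex, GLuint lightmap_tex)
    {
        glActiveTexture(GL_TEXTURE0);
        glEnable(GL_TEXTURE_2D);
        glBindTexture(GL_TEXTURE_2D, diffuse_tex);

        glActiveTexture(GL_TEXTURE1);
        glEnable(GL_TEXTURE_2D);
        glBindTexture(GL_TEXTURE_2D, lightmap_tex);
        glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);
    }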
Nothing looks better than a very nice lightmap, and it runs at a good speed.
PRT can produce similar performance if you're willing to tessellate your meshes and run them through pre-calculation steps (as you seem to be). The end results of doing so would be significantly improved since you'd get wonderful lighting by-products such as color bleeding as well as retaining the ability to move your light sources around dynamically.
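And to be clear about the runtime cost: once the offline projection is done, reconstructing per-vertex diffuse PRT is just a dot product. A toy sketch for one colour channel with 3rd-order (9-coefficient) spherical harmonics; everything here is illustrative:

    // Toy sketch of the runtime side of diffuse PRT: the expensive part
    // (projecting visibility/transfer into spherical harmonics) was done
    // offline, so per vertex we just dot the 9 precomputed transfer
    // coefficients against the light environment's 9 SH coefficients.
    float prt_vertex_radiance(const float transfer[9], const float light_sh[9])
    {
        float radiance = 0.0f;
        for (int i = 0; i < 9; ++i)
            radiance += transfer[i] * light_sh[i];
        return radiance;
    }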
That brute force really increases the batch count too.
Actually, it would REDUCE the batch count if you just pass the terrain in one step rather than attempting to dice it up for on-screen triangle performance (which is what I'm fairly sure swiftcoder is referring to).
I know. It has always been like that. But we can choose what to split and how.
Batching objects together isn't simply a matter of just deciding what triangles to plug where. The objects in the batch have to be related (typically by using the same material/shader) or specially tailored/optimized to be processed together (example above -- atlas mapping).
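In practice that "relatedness" becomes a sort key. A rough sketch of the usual first step; the DrawItem fields are invented for the example:

    #include <algorithm>
    #include <vector>

    // Invented stand-in for whatever a real engine queues per draw call.
    struct DrawItem {
        unsigned shader;   // program / material id
        unsigned texture;  // atlas or texture id
        unsigned mesh;     // geometry id
    };

    // Order draws so items sharing a shader (then a texture) end up
    // adjacent; adjacent same-state draws are the ones that can be
    // merged into a batch or submitted without state changes between.
    static bool by_material(const DrawItem& a, const DrawItem& b)
    {
        if (a.shader != b.shader) return a.shader < b.shader;
        return a.texture < b.texture;
    }

    void sort_for_batching(std::vector<DrawItem>& items)
    {
        std::sort(items.begin(), items.end(), by_material);
    }

Sorting front-to-back within each material bucket is a common refinement to help early-Z, but material order comes first.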

tomascokis
Posts: 431
Joined: Mon Jan 19, 2009 8:34 pm
Location: Australia, Perth
Contact:

Re: Unofficial, alpha 12 weekly build

Post by tomascokis » Sun Feb 08, 2009 4:30 am

You guys are going at it!

mphasis
Posts: 99
Joined: Wed Oct 08, 2008 4:11 am

Re: Unofficial, alpha 12 weekly build

Post by mphasis » Sun Feb 08, 2009 5:12 am

I approve of this argument; it's interesting to read :p.

Johannes
Posts: 1374
Joined: Thu Dec 18, 2008 1:26 am
Contact:

Re: Unofficial, alpha 12 weekly build

Post by Johannes » Sun Feb 08, 2009 7:00 am

mphasis wrote:I approve of this argument, it's interesting to read :p.
And yet it doesn't seem to be getting anywhere.

GDer
Posts: 51
Joined: Sat Jan 31, 2009 6:13 pm

Re: Unofficial, alpha 12 weekly build

Post by GDer » Sun Feb 08, 2009 4:41 pm

Brad wrote: Wait. Are we talking about lighting or shadows here? There's a BIG difference between the two subjects. It's quite a breeze to handle most non-global-illumination, standard lighting setups (even with multiple sources). Shadows are a whole other story. The flashlight in your example would more than likely be a cone or spot light rather than a directional one. Directional lights have no size or position; they are an infinitely distant light source defined by nothing but a direction vector.
Yes, I forgot to say the other part. :D
Lighting increases the draw call count, and shadows do the same. So we should be talking about both of them.
Brad wrote: Additionally, vertex lighting is not going to interpolate well when you have low-poly or even single-quad meshes (such as your paper) representing small detail objects. It's simply impractical overall unless you're willing to tessellate the meshes you use just to get acceptable fidelity out of vertex lighting.
The paper I used as an example should take up no more than 1/200th of the screen, which isn't much. And since it wouldn't look nice as a flat plane, it would have about 30 vertices. In my opinion that's enough to justify turning off per-pixel lighting.
Brad wrote: That depends on how close the camera is going to get to the object. Even half a piece of crumpled loose-leaf could benefit aesthetically from a normal map if it's being held in front of the viewer's face. It's actually not costly at all, considering that you'd switch to a non-normal-mapped shader once you were more than a few feet away and thus unable to discern the details anyhow.
As it gets closer to the screen, more of its pixels get drawn, which is bad for performance, and a small paper won't usually be occlusion-culled. But a normal map doesn't always help; it depends on the art direction. In many cases, a normal map can easily screw up the whole scene.
Brad wrote: Static lighting/shadows through lightmaps would look and perform fine; however, they would use additional memory, require an additional texture lookup unless stuffed into a spare texture channel (or baked in), and most importantly (once again) they would be less dynamic in realtime.
One texture lookup isn't a big problem, and that "less dynamic" doesn't really depend on the shader either. If someone needs dynamic highlights, there's still specular lighting, which can't be baked. Realtime also doesn't mean that we must calculate all the lighting in realtime; it means the game must run much faster than a slideshow.
Brad wrote: PRT can produce similar performance if you're willing to tessellate your meshes and run them through pre-calculation steps (as you seem to be). The end results of doing so would be significantly improved since you'd get wonderful lighting by-products such as color bleeding as well as retaining the ability to move your light sources around dynamically.
Maybe, but the pre-calculation step is too long even for small scenes.
Brad wrote: Actually, it would REDUCE the batch count if you just pass the terrain in one step rather than attempting to dice it up for on-screen triangle performance (which is what I'm fairly sure swiftcoder is referring to).
Oh, I overlooked something. What I meant was that brute force makes the game run slower because of overdraw: the whole terrain seen from one corner would have too many pixels drawn too many times, and not with a fast pixel shader either.
Brad wrote: Batching objects together isn't simply a matter of just deciding what triangles to plug where. The objects in the batch have to be related (typically by using the same material/shader) or specially tailored/optimized to be processed together (example above -- atlas mapping).
That optimization can be done at load time. I think that after we build an octree, we could merge the parts that use the same shaders, are drawn twice, etc. I think it would be even better if people could choose how big the parts should be, to tune the game for their PC: many small parts for a faster CPU, or a few big parts for a faster GPU.
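Something like this at load time is what I have in mind - group by shader, then concatenate the vertex data (the types are invented for the sketch):

    #include <cstddef>
    #include <map>
    #include <vector>

    // Invented types for the sketch: a mesh is a bag of vertex floats
    // plus the shader it needs, with vertices already in world space.
    struct Mesh {
        unsigned shader;
        std::vector<float> vertices;
    };

    // Load-time merge: everything sharing a shader is concatenated into
    // one big vertex array, so it can later be drawn as a single batch.
    std::map<unsigned, std::vector<float> >
    merge_by_shader(const std::vector<Mesh>& meshes)
    {
        std::map<unsigned, std::vector<float> > batches;
        for (std::size_t i = 0; i < meshes.size(); ++i) {
            std::vector<float>& dst = batches[meshes[i].shader];
            dst.insert(dst.end(),
                       meshes[i].vertices.begin(),
                       meshes[i].vertices.end());
        }
        return batches;
    }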

jkreef
Posts: 31
Joined: Fri Nov 07, 2008 11:23 pm

Re: Unofficial, alpha 12 weekly build

Post by jkreef » Sun Feb 08, 2009 7:49 pm

Any word from icculus about the Linux build?

tomascokis
Posts: 431
Joined: Mon Jan 19, 2009 8:34 pm
Location: Australia, Perth
Contact:

Re: Unofficial, alpha 12 weekly build

Post by tomascokis » Mon Feb 09, 2009 5:53 am

GDer wrote: I think it would be even better if people could choose how big the parts should be, to tune the game for their PC: many small parts for a faster CPU, or a few big parts for a faster GPU.
That makes me think of something: maybe when the user loads an object file there could be a checkbox marked "scenery", and if the user hasn't enabled scenery in his/her settings, objects that are purely for aesthetic use, and are rather unnecessary anyway, can be "left out". I'm thinking that things like pots and bottles, excess furniture, maybe 30% of the bushes, and possibly even half of a forest could be left out to gain extra RAM.

swiftcoder
Posts: 12
Joined: Mon Nov 17, 2008 5:00 pm

Re: Unofficial, alpha 12 weekly build

Post by swiftcoder » Mon Feb 09, 2009 9:17 am

GDer wrote: Lighting increases the draw call count, and shadows do the same. So we should be talking about both of them.
Lighting increases shader complexity, but it doesn't generally affect the number of batches. Shadows on the other hand require a full pass of the scene per light source.
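Schematically a frame ends up looking like this - the two helpers are placeholders for real engine code, not actual API:

    #include <cstddef>
    #include <vector>

    struct Light { float x, y, z; };  // minimal placeholder

    // Placeholder helpers standing in for real engine functions.
    void render_shadow_map(const Light& light);   // depth-only, from light
    void render_scene_lit_by(const Light& light); // additive lighting pass

    // Why shadows multiply batches: every shadow-casting light costs at
    // least one extra traversal of the scene.
    void render_frame(const std::vector<Light>& lights)
    {
        for (std::size_t i = 0; i < lights.size(); ++i) {
            render_shadow_map(lights[i]);
            render_scene_lit_by(lights[i]);
        }
    }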
GDer wrote:But a normal map doesn't always help; it depends on the art direction. In many cases, a normal map can easily screw up the whole scene.
Any lousy art can reduce the quality of a scene, but how is a well-drawn normal map going to screw up the rest?
GDer wrote:Realtime also doesn't mean that we must calculate all the lighting in realtime; it means the game must run much faster than a slideshow.
But since we can perform all lighting dynamically, at run time, while still pushing a higher refresh rate than your monitor, then why on earth would we spend a bunch of time pre-baking stuff?
GDer wrote:
Brad wrote: Actually, it would REDUCE the batch count if you just pass the terrain in one step rather than attempting to dice it up for on-screen triangle performance (which is what I'm fairly sure swiftcoder is referring to).
Oh, I overlooked something. What I meant was that brute force makes the game run slower because of overdraw: the whole terrain seen from one corner would have too many pixels drawn too many times, and not with a fast pixel shader either.
Which is why modern GPUs have early-Z rejection. And if that doesn't give you enough performance, you can perform a depth pre-pass with colour writes disabled to fill the depth buffer, which allows all subsequent shaders to only be run on visible pixels. And if that doesn't give you enough performance, you can use hardware occlusion culling to cull with the depth buffer from the previous frame.
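And the pre-pass itself is only a few lines of GL state. A sketch, with draw_scene() standing in for whatever actually submits the geometry:

    #include <GL/gl.h>

    void draw_scene(void);  // placeholder for the real scene submission

    // Depth pre-pass sketch: pass 1 fills only the depth buffer (ideally
    // with trivial shading), pass 2 re-draws with full shading; GL_LEQUAL
    // against the pre-filled depth buffer means expensive fragments are
    // shaded only where they are actually visible.
    void render_with_depth_prepass(void)
    {
        // Pass 1: depth only, colour writes off.
        glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
        glDepthMask(GL_TRUE);
        glDepthFunc(GL_LESS);
        draw_scene();

        // Pass 2: full shading against the existing depth buffer.
        glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
        glDepthMask(GL_FALSE);  // depth values are already correct
        glDepthFunc(GL_LEQUAL);
        draw_scene();
    }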
GDer wrote:That optimization can be done at load time. I think that after we build an octree, we could merge the parts that use the same shaders, are drawn twice, etc.
Earlier you wanted to chop the terrain up to allow CPU-side culling, and now you want to recombine them into big chunks? Terrain tends to use the same shader and textures over large areas - there isn't much to be gained from all this chopping and changing.

GDer
Posts: 51
Joined: Sat Jan 31, 2009 6:13 pm

Re: Unofficial, alpha 12 weekly build

Post by GDer » Mon Feb 09, 2009 10:26 am

swiftcoder wrote:Lighting increases shader complexity, but it doesn't generally affect the number of batches. Shadows on the other hand require a full pass of the scene per light source.
I am thinking about multi-pass lighting... but maybe letting the user choose between single-pass and multi-pass lighting would be better?
swiftcoder wrote:Any lousy art can reduce the quality of a scene, but how is a well-drawn normal map going to screw up the rest?
That just happens. Some artists like to "bump it up" when the normal map is barely visible, and that's the problem.
swiftcoder wrote:But since we can perform all lighting dynamically, at run time, while still pushing a higher refresh rate than your monitor, then why on earth would we spend a bunch of time pre-baking stuff?
That pre-baking is done on the Wolfire development PC. Remember, non-shader graphics cards will be supported.
swiftcoder wrote:Which is why modern GPUs have early-Z rejection. And if that doesn't give you enough performance, you can perform a depth pre-pass with colour writes disabled to fill the depth buffer, which allows all subsequent shaders to only be run on visible pixels. And if that doesn't give you enough performance, you can use hardware occlusion culling to cull with the depth buffer from the previous frame.
Splinter Cell: Double Agent did this; it let you choose whether you want the Z pre-pass or not. So maybe the same would be good here?
swiftcoder wrote:Earlier you wanted to chop the terrain up to allow CPU-side culling, and now you want to recombine them into big chunks? Terrain tends to use the same shader and textures over large areas - there isn't much to be gained from all this chopping and changing.
Not only the terrain - the whole scene. And there should be an FPS increase on older PCs with older graphics cards. Using a lower-detail terrain mesh to draw the far parts of the terrain would also be better, and that mesh could be reused for the depth pre-pass.

-- What we need here in terms of graphics is a game that runs even on quite old PCs and still looks great. Forgetting about shaders when making art would be great, because cutting shaders off afterwards - once a nice texture is made that must also look nice with fixed-function rendering - is hard to do in my opinion (I've seen many commercial projects where this approach failed). It's like building a house: it's easier to build it starting from the bottom.

Brad
Posts: 9
Joined: Sun Feb 01, 2009 5:39 pm

Re: Unofficial, alpha 12 weekly build

Post by Brad » Mon Feb 09, 2009 11:56 am

Gder wrote:I am thinking about multi-pass lighting... but maybe letting the user choose between single-pass and multi-pass lighting would be better?
What reason would you have for using multiple passes to calculate lighting? Yes, you could process a shader program in steps by iterating over multiple (or all) lights in your scene, but if performance is your concern, you'd be much better off building the technique in a single pass to either take in a fixed number of light sources OR to accept a single average weighted blend of the lights in your scene.
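For the fixed-count route the CPU side is just a ranking step per object. A rough sketch; the PointLight struct and the inverse-square influence heuristic are invented for the example:

    #include <algorithm>
    #include <cstddef>
    #include <utility>
    #include <vector>

    // Invented minimal light record for the sketch.
    struct PointLight { float x, y, z, intensity; };

    // Rather than one render pass per light, rank the lights by rough
    // influence on the object and keep only the strongest few for a
    // fixed-size shader uniform array. max_lights is illustrative.
    std::vector<PointLight> pick_lights(const std::vector<PointLight>& lights,
                                        float ox, float oy, float oz,
                                        std::size_t max_lights = 4)
    {
        std::vector<std::pair<float, std::size_t> > ranked; // (-influence, i)
        for (std::size_t i = 0; i < lights.size(); ++i) {
            const float dx = lights[i].x - ox;
            const float dy = lights[i].y - oy;
            const float dz = lights[i].z - oz;
            const float d2 = dx * dx + dy * dy + dz * dz + 1.0f;
            ranked.push_back(std::make_pair(-lights[i].intensity / d2, i));
        }
        const std::size_t n = std::min(max_lights, ranked.size());
        std::partial_sort(ranked.begin(), ranked.begin() + n, ranked.end());

        std::vector<PointLight> chosen;  // feed these to the single pass
        for (std::size_t i = 0; i < n; ++i)
            chosen.push_back(lights[ranked[i].second]);
        return chosen;
    }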
Gder wrote:That pre-baking is done on the Wolfire development PC. Remember, non-shader graphics cards will be supported.
Gder wrote:Forgetting about shaders when making art would be great, because cutting shaders off afterwards - once a nice texture is made that must also look nice with fixed-function rendering - is hard to do in my opinion
No need to worry. Even fixed-function pipeline approaches can handle rudimentary dynamic lighting techniques without sweating too much. While it is better to offload work to the GPU in modern development cycles to fully utilize the hardware, working with vertex and frame buffers to reproduce semi-modern techniques is always possible for more vintage setups and higher compatibility yields.

I don't think the Wolfire development team needs to bend over backwards on production variants of media just for low-end machines by any means.
Gder wrote:
swiftcoder wrote:And if that doesn't give you enough performance, you can perform a depth pre-pass with colour writes disabled to fill the depth buffer, which allows all subsequent shaders to only be run on visible pixels.
Splinter Cell: Double Agent did this; it let you choose whether you want the Z pre-pass or not. So maybe the same would be good here?
Maybe. The depth pre-pass will need to be used on a per-situation basis. Since it adds an entire extra pass over the object (albeit a primitive one), and subsequently an extra batch, for the early removal of fragments, it should be used only on objects with complex shaders, screen-consuming size/layers, and/or relatively moderate vertex/triangle counts to get the full benefit.
Gder wrote:Not only the terrain - the whole scene. And there should be an FPS increase on older PCs with older graphics cards. Using a lower-detail terrain mesh to draw the far parts of the terrain would also be better.
If care is taken to avoid level of detail "popping" without compromising performance or other overheads, seamless and adaptive LOD for terrain would be pretty cheap and offer decent gains for a higher-tessellated terrain.

Also, as I mentioned before, using a fog hull would give a pretty solid low-end solution. Just enable distance fog in a scene and set the far clipping plane to a value just beyond it (remembering that care has to be taken so that your sky box/sphere/plane still renders despite the culling).
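In GL terms the whole fog hull trick is a handful of calls. A sketch, with every distance picked arbitrarily:

    #include <GL/gl.h>
    #include <GL/glu.h>

    // Fog-hull sketch: linear fog reaching full density just inside the
    // far clip plane, so geometry fades out before it would pop out of
    // the view volume. All distances are arbitrary examples.
    void setup_fog_hull(void)
    {
        const GLfloat fog_colour[4] = { 0.6f, 0.7f, 0.8f, 1.0f };
        glEnable(GL_FOG);
        glFogi(GL_FOG_MODE, GL_LINEAR);
        glFogfv(GL_FOG_COLOR, fog_colour);
        glFogf(GL_FOG_START, 200.0f);
        glFogf(GL_FOG_END, 290.0f);  // fully fogged before the far plane

        // Far clip just beyond full fog; the sky box must still be drawn
        // inside this range (or with its own trick) so it isn't culled.
        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        gluPerspective(60.0, 4.0 / 3.0, 0.5, 300.0);
        glMatrixMode(GL_MODELVIEW);
    }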
Gder wrote:What we need here in terms of graphics is a game that runs even on quite old PCs and still looks great.
I agree 100% there. It's not an easy task, but given how current "next-gen" commercial games on the market barely even make an effort to support shader model 2 (if that), it would be a VERY refreshing change to have a project that emphasized game design and mechanics without completely destroying its potential "fun factor" simply because of hardware limitations.

GDer
Posts: 51
Joined: Sat Jan 31, 2009 6:13 pm

Re: Unofficial, alpha 12 weekly build

Post by GDer » Mon Feb 09, 2009 4:15 pm

Ok, nice.
I agree with you on most of the answers.

I just want to add something.
What reason would you have for using multiple passes to calculate lighting?
Usually there's no reason, yes. It could help if a PC's CPU is better than its graphics card.
Anyway, maybe there will be no need for any lights other than the sun...

About..
higher compatibility yields
It would be even better if Overgrowth could draw effects that are said to be DirectX 10-only. It would prove that OpenGL isn't dead too. And that shader model 2 or even 1.1 isn't dead (by the way, here I'd like to mention Splinter Cell: Chaos Theory, which used normal mapping with SM 1.1).
I don't think the Wolfire development team needs to bend over backwards on production variants of media just for low-end machines by any means.
But I think that good art design will be better than any shader. :)
There's a game called Vietcong - it has some of the best textures I've ever seen.
Here's a pic: http://z.about.com/d/compactiongames/1/ ... etcong.jpg
The water doesn't use shaders but looks great.

swiftcoder
Posts: 12
Joined: Mon Nov 17, 2008 5:00 pm

Re: Unofficial, alpha 12 weekly build

Post by swiftcoder » Mon Feb 09, 2009 6:42 pm

GDer wrote:
higher compatibility yields
It would be even better if Overgrowth could draw effects that are said to be DirectX 10-only. It would prove that OpenGL isn't dead too.
Meh, I think that has been adequately proved many times - look at Ysaneya's work (which is all in OpenGL) - he has a number of DX10 effects running very nicely.
And that shader model 2 or even 1.1 isn't dead (by the way, here I'd like to mention Splinter Cell: Chaos Theory, which used normal mapping with SM 1.1).
I think you are looking at a lot of work there, just to support some very ancient hardware. Even integrated GPUs from the last few years support SM 3+.
I don't think the Wolfire development team needs to bend over backwards on production variants of media just for low-end machines by any means.
But I think that good art design will be better than any shader. :)
Fair enough, but those of us with kick-ass graphics cards still want to see the blood oozing off of every normal-mapped surface :D
