Posted: Fri May 09, 2008 2:47 pm
by David
The evolved movement thing is mostly for me to practice using machine learning techniques, and also see what crazy movements a ragdoll might come up with without any of our assumptions about how humans should move.
Posted: Fri May 09, 2008 11:03 pm
by Renegade_Turner
This might be REALLY asking for it, but how does it work? You repeat the process a number of times until the ragdoll figures out the best way of moving with the constraints you've placed on it?
Posted: Sat May 10, 2008 12:33 am
by David
All genetic algorithms work by starting with a base population of random genotypes, and then evaluating their fitness, and populating the next generation with random variations of the most fit. In this case the genotype is a graph of oscillators that can be connected to the ragdoll joints, and the mutations change the oscillator parameters or the connections. Fitness is determined by finding how far they move, and dividing that by how much energy they use to get there. Each generation I pick the 20 best and fill the next generation with these winners as well as 4 mutated variations of each one.
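The loop David describes can be sketched in a few lines. This is a toy, not his actual code: the fitness function here just rewards genes near an arbitrary target, standing in for running the ragdoll simulation and computing distance travelled divided by energy used, and all names are illustrative.

```python
import random

POP_SIZE = 100   # 20 survivors + 20 * 4 mutated variants
N_PARAMS = 8     # oscillator parameters per genotype (illustrative)

def random_genotype():
    # Each gene could be an oscillator frequency, phase, or amplitude.
    return [random.uniform(-1.0, 1.0) for _ in range(N_PARAMS)]

def mutate(genotype, rate=0.1):
    # Perturb each oscillator parameter slightly.
    return [g + random.gauss(0.0, rate) for g in genotype]

def fitness(genotype):
    # Stand-in for the physics sim; the real thing would be
    # distance_travelled / energy_used. Higher is better.
    return -sum((g - 0.5) ** 2 for g in genotype)

def next_generation(population):
    ranked = sorted(population, key=fitness, reverse=True)
    winners = ranked[:20]
    # Winners carry over unchanged, plus 4 mutated variants of each.
    return winners + [mutate(w) for w in winners for _ in range(4)]

population = [random_genotype() for _ in range(POP_SIZE)]
for generation in range(50):
    population = next_generation(population)

best = max(population, key=fitness)
```

Because the winners survive unchanged (elitism), the best fitness never gets worse from one generation to the next.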
Posted: Sat May 10, 2008 8:22 am
by Renegade_Turner
Right. I'm not going to pretend like I completely understood that (my mind's too feeble), but I understood enough to know what was going on. Sounds cool. lol
Posted: Tue May 13, 2008 7:14 am
by Makrond
Eh, a good example of genetic algorithms being adapted to game AI would be NERO. If you've played that, then the whole concept should make a little more sense.
Posted: Tue May 13, 2008 10:20 am
by Renegade_Turner
Yeah, I've played NERO. It's pretty hard, but once you get used to it it's okay. I beat the last guy and the computer gave me back a CD copy of the Enter Shikari album as a present.
Posted: Sat May 17, 2008 1:05 am
by Makrond
Looking at the evolved movement video again - would you possibly be able to make one where the ragdoll attempts to stand up - or fitness based on time standing / energy spent? I just want to see a ragdoll actually get up again.
Posted: Sat May 17, 2008 1:26 am
by David
A couple more blog posts, one about the movement stuff, and one about controlled intersections.
I could change the fitness to encourage standing, but I think that normal behavior is already pretty well-studied, and can be done more easily by other means.
Posted: Sat May 17, 2008 7:37 pm
by David
Yet another blog post, about 3D acoustics.
Posted: Sat May 17, 2008 7:50 pm
by Cmyszka
Dang... that'll make the game a whole lot more intense. I can only imagine how realistic playing L2 with headphones on and experiencing all this would be.
With every update I get more hyped for it. I bet this really is going to go far beyond anything we've seen from Wolfire, and that's saying a lot.
EDIT: Oh wait... Will this be put to use in L2 even?

Posted: Sat May 17, 2008 8:13 pm
by rudel_ic
They've put a lot of those principles into GTA4.
It's really obvious in multiplayer when you approach a firefight by car: start a few blocks away, blitz past, and then get a building between you and the shooters.
A lot of modern games with high production values get these aspects right. The most common omission, though, is sound travel delay; most games ignore it.
Are you using OpenAL for playback? There's a lot of what you're talking about already built into it, as you probably know. And you need no fancy soundcard, they've got a software renderer.
Posted: Sat May 17, 2008 8:20 pm
by David
Yeah, GTA 4 is starting to do some pretty cool things with the sound rendering. I don't use OpenAL; I am doing all the mixing myself and then streaming it to the SDL audio wrapper. I don't think OpenAL really does much of this actually; it mostly just does the panning and distance attenuation. It has some support for doppler effects but is kind of hacked together based on explicit relative velocities, and not as a side effect of a realistic time delay method.
I haven't used OpenAL much recently though, did they improve it a lot?
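The time-delay idea David mentions can be sketched roughly like this (my own illustration, not his implementation): each output sample plays what the source emitted distance / c seconds earlier. There is no explicit doppler code; because the delay shrinks as the source approaches, the signal gets compressed and the pitch shift falls out by itself. (This sketch uses the current distance rather than solving for the true retarded time, which gives a first-order doppler factor of roughly 1 + v/c.)

```python
import math

SPEED_OF_SOUND = 343.0  # m/s
SAMPLE_RATE = 44100

def emitted(t):
    # The source emits a pure 440 Hz tone.
    return math.sin(2.0 * math.pi * 440.0 * t)

def source_position(t):
    # Source moves toward the listener at 20 m/s, starting 100 m away.
    return 100.0 - 20.0 * t

def render(duration):
    # For each output sample, play what the source emitted
    # distance / c seconds ago. The shrinking delay raises the
    # perceived pitch with no dedicated doppler code at all.
    samples = []
    for i in range(int(duration * SAMPLE_RATE)):
        t = i / SAMPLE_RATE
        delay = abs(source_position(t)) / SPEED_OF_SOUND
        samples.append(emitted(t - delay))
    return samples
```

With these numbers the perceived tone comes out around 440 * (1 + 20/343) ≈ 466 Hz, which you can verify by counting zero crossings in one second of output.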
Posted: Sat May 17, 2008 9:03 pm
by Makrond
Damn that intersection stuff looks awesome - I'd love to see a Python script that would do that for UV coords in Blender.
Posted: Sat May 17, 2008 9:19 pm
by invertin
All this sound talk reminds me of a screensaver I've got where fireworks fly up and explode. It's pretty flashy, and there's a slight delay after each explosion before you hear it; since the camera flies around, the closer it is the shorter the delay.
Which is probably the only computer program I've seen do that.
Posted: Sat May 17, 2008 10:09 pm
by rudel_ic
David wrote:Yeah, GTA 4 is starting to do some pretty cool things with the sound rendering. I don't use OpenAL; I am doing all the mixing myself and then streaming it to the SDL audio wrapper. I don't think OpenAL really does much of this actually; it mostly just does the panning and distance attenuation. It has some support for doppler effects but is kind of hacked together based on explicit relative velocities, and not as a side effect of a realistic time delay method.
I haven't used OpenAL much recently though, did they improve it a lot?
Can't really speak for the recent version, I've used 1.0 for positional audio and doppler effects a while back. Maybe I'm giving it more credit than it's worth, I don't know.
Edit: Going by the API docs, there's Doppler, distance attenuation and positional audio.
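For reference, the distance attenuation rudel_ic mentions is spelled out in the OpenAL 1.1 spec. The default "inverse distance clamped" model can be sketched in a few lines (parameter names mirror AL_REFERENCE_DISTANCE, AL_ROLLOFF_FACTOR, and AL_MAX_DISTANCE):

```python
def inverse_distance_gain(distance, reference=1.0, rolloff=1.0,
                          max_distance=100.0):
    # OpenAL 1.1 inverse-distance-clamped model:
    # distance is clamped to [reference, max_distance], then
    # gain = reference / (reference + rolloff * (d - reference)).
    d = max(reference, min(distance, max_distance))
    return reference / (reference + rolloff * (d - reference))
```

So with the defaults, a source at the reference distance plays at full gain, a source twice as far plays at half gain, and everything past the max distance stops attenuating further.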