Game Development Community

projectile impact sound radius, and response to it.

by deepscratch · in Torque 3D Professional · 09/28/2009 (3:51 pm) · 14 replies


ok, another one of my odd questions,
a projectile can create a sound on impact, right?
and that sound, somehow, can have a radius within which it is heard by a player, or aiPlayer, right?
so...
how, in script, could I set up a function that goes something like this:


SUPER PSEUDO CODE
function AIPlayer::reactToSound(%obj, %sourceObject, %position, %radius, %volume, %soundType)
{
   // The container search can only filter by type mask, so grab all
   // ShapeBase objects in range and filter for AIPlayers inside the loop.
   initContainerRadiusSearch(%position, %radius, $TypeMasks::ShapeBaseObjectType);

   %halfRadius = %radius / 2;

   while ((%targetObject = containerSearchNext()) != 0)
   {
      if (%targetObject.getClassName() !$= "AIPlayer")
         continue;

      // Calculate how much exposure the current object has to
      // the sound.  The object types listed are objects
      // that will block sound.  If the object is totally blocked,
      // then no sound is heard.
      %coverage = calcSoundCoverage(%position, %targetObject,
                                    $TypeMasks::InteriorObjectType |
                                    $TypeMasks::TerrainObjectType |
                                    $TypeMasks::ForceFieldObjectType |
                                    $TypeMasks::VehicleObjectType);

      if (%coverage == 0)
         continue;

      %dist = containerSearchCurrRadiusDist();

      // Calculate a distance scale for the volume: full volume for
      // anything closer than half the radius, linear falloff from there.
      %distScale = (%dist < %halfRadius) ? 1.0 : 1.0 - ((%dist - %halfRadius) / %halfRadius);

      // Hear the sound.
      %targetObject.hearSound(%sourceObject, %position, %volume * %coverage * %distScale, %soundType);

      // React to the sound: build a vector pointing away from the sound
      // and hand it to the pathing code so the AI can seek cover.
      %soundVec = VectorSub(%targetObject.getWorldBoxCenter(), %position);
      %soundVec = VectorNormalize(%soundVec);
      %soundVec = VectorScale(%soundVec, %volume * %distScale);
      %targetObject.findCoverFrom(%soundVec);
   }
}

what would work? what would I need to change to make it work?
help??

things like "findCoverFrom()" are part of the A* pathing the AI would perform to get away from being shot at,
and "hearSound" I have no idea how to script (actually getting the AI to "hear" the sound).
things like "sneakUpOn()" could be used by that same AI, if his health is good and he is "brave", to get the blighter who shot at him, but that's the next function to work on...


can this be done?
I think it could expand the T3D AI dramatically if yes.

thanks.
whew, long post.

#1
09/28/2009 (5:20 pm)

Sound occlusion is a lot more complex than that. You could do a raycast from the AI player to the sound emitter, or even the coverage computation you suggest above, and it would still only account for direct occlusion, which in reality is just a fraction of the sound waves that reach our ears.

With just direct occlusion, you very quickly end up getting really unrealistic situations, like tiny blockers completely shielding a source in wide open space.

Probably the computationally quickest way to do this would be to chop up your level into zones and use the zoning information to compute audibility.
#2
09/28/2009 (5:25 pm)

Ah, and BTW, the distance attenuation computation above would apply the same attenuation to every 3D sound regardless of whether it is a big jet engine or a tiny little bee. Each 3D sound has its own reference distance and max distance settings that probably should be taken into account also.
#3
09/29/2009 (12:25 pm)
I guess the real, short question is: how do I make AI aware of sounds? I need them to register that they heard a sound, what kind of sound it is, and how loud it is.
#4
09/29/2009 (12:28 pm)
Quote:
how loud it is

Yep, that's called sound occlusion. That's what I was talking about.

//Edit
Or in a few more words: making AI aware of sounds is trivial. Computing how much exposure they have to a given sound isn't, though, and you're probably best off using trickery like the mentioned zoning. The route you are suggesting above leads you to full occlusion computation, and believe me, that won't work as simply as you outline it above.
#5
09/29/2009 (12:58 pm)
mm, ok, I see.
which would be cheaper: the AI doing a spherical cast every tick to see if a projectile hit close by, or having the projectile create a sphere on impact that, if it intersects an AIPlayer or AIVehicle, triggers that AI's script to work out what the sound was and how close?

like, a grenade, but not loud, so not near, so safe-ish,
or
a projectile at my feet, loud impact, so very close, run like hell!
#6
09/29/2009 (1:08 pm)

If you are going the route with sounds, then I'd suggest doing neither. Rather, test for 3D sounds in range, which the SFX system can give you very quickly (or you can traverse the SFX source set manually yourself, though that's probably best done in native code).

This would give you the basic information about the sounds the AI is currently *likely* to hear and at what level (there's an SFXDistanceAttenuation function there which computes the effective sound level at a given distance for you).

The only thing where this breaks down is when obstacles really affect a source a lot. If you want to factor this in, it's probably best to start cheating.

You could do a direct raycast, then see how big the object is and somewhat attenuate the volume based on that information. It would be very coarse but probably be better than nothing.

It still would be totally wrong if the AI is in a completely closed room and the projectile hit is outside. For this, a zoning solution would probably be a good starting point but there's probably a range of other good creative solutions.

It's just that going anywhere near getting real occlusion working gets expensive quickly with sound.
#7
09/29/2009 (1:17 pm)
As another BTW, I'm currently working on sound occlusion and your post really got me thinking whether there really isn't a way to approximate occlusion well enough and especially cheap enough so it can be reused to query arbitrary listener positions instead of only the single listener position on the SFX device.

That would be the ultimate solution. Like, having one function that returns all playing 3D sources in range and one method on each source that gives the effective volume for a given position.

But as of yet, I really don't know whether this is practical at all. My current approach seems *comparably* cheap but would definitely be way too expensive to do this kind of arbitrary position testing.
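The API shape being proposed might look roughly like this (all types and names are stand-ins, not SFX classes; real sources would come from the SFX system, and the per-position volume here is just a linear falloff with occlusion ignored):

```cpp
#include <cmath>
#include <vector>

// Sketch of the proposed query API: one call returning all 3D sources
// audible from a position, and a per-source method giving the
// effective volume at that position.
struct Source3D
{
    float x, y, z;
    float volume, maxDist;

    // Effective volume at an arbitrary listener position:
    // simple linear falloff out to maxDist.
    float effectiveVolume(float px, float py, float pz) const
    {
        float dx = px - x, dy = py - y, dz = pz - z;
        float dist = std::sqrt(dx * dx + dy * dy + dz * dz);
        if (dist >= maxDist)
            return 0.0f;
        return volume * (1.0f - dist / maxDist);
    }
};

std::vector<const Source3D*> sourcesInRange(
    const std::vector<Source3D>& all, float px, float py, float pz)
{
    std::vector<const Source3D*> result;
    for (const Source3D& s : all)
        if (s.effectiveVolume(px, py, pz) > 0.0f)
            result.push_back(&s);
    return result;
}
```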
#8
09/29/2009 (1:18 pm)
nice, I see, thanks for getting into this Rene.
so let's work with zoning, but without attenuation/occlusion. I guess if a projectile hits the other side of a vehicle you are behind, even if the vehicle occludes the projectile's sound, it would still be too close for comfort.
even in a closed room, if bullets start hitting the walls outside, you know that someone is gunning for you, so take cover anyway.

so

next step is how to get the ai to register the impact sound, and how to get the projectile to emit the sound on impact?

edit: was typing when you posted #7
#9
09/29/2009 (1:23 pm)
this whole line of thought extends beyond just impact sounds. I intend to have a system where, if said AI is getting shot at, he will shout for help, and any AI within hearing range can supply backup if they themselves are not under fire.
I am working on a single-player-only game, so the AI needs to be pretty much autonomous.
#10
09/29/2009 (2:13 pm)
As I see it, you're already perfectly set there. If you don't want to use the existing explosion/splash things (which have sounds), adding custom impact sounds (even per-material) would be pretty simple, though.

Sounds are automatically registered with the SFX system as they are created. In your AI code, you could either traverse SFXSourceSet and filter out the stuff you want (3D sounds in range) or (better) you could quickly whip up a console method to do this for you (e.g. sfxGetFirstSoundInRange, sfxGetNextSoundInRange).

To do it entirely in script, you probably need to expose some more stuff to the console. I'd definitely go the native code route.

The only thing that you probably would want to add here is information on the sources that links back to what these sounds actually mean. That could easily be done through dynamic properties.

Rough code outline for the querying stuff (needs some rearrangement to actually work):

U32 sgLastIndex;

ConsoleFunction( sfxGetFirstSourceInRange, S32, 4, 4, "( x y z ) - Return first source audible at position x/y/z" )
{
   Point3F pos( dAtof( argv[ 1 ] ), dAtof( argv[ 2 ] ), dAtof( argv[ 3 ] ) );

   MatrixF transform( true );
   transform.setPosition( pos );
   SFXListener listener;
   listener.setTransform( transform );

   listener.sortSources( SFX->mSources ); // protected in fact

   // same protection problem here
   for( U32 i = 0; i < SFX->mSources.size(); ++ i )
      if( SFX->mSources[ i ]->is3d() && SFX->mSources[ i ]->getAttenuatedVolume() > 0.0f )
      {
         sgLastIndex = i;
         return SFX->mSources[ i ]->getId();
      }

   return 0;
}

// then in the other console function, return all the remaining ones (based off sgLastIndex) that are above 0.0f (or some custom cutoff value)

It's a bit hacky since the sortSources call will update all the attenuated volumes of the sources with the fake listener position and resort the array but the next SFX update will just revert things to the true SFX listener again so it won't do any damage.

PS: Yeah, my annoying editing...

//EDIT: fixed test logic bug above
//EDIT: fixing my many English mistakes
#11
09/29/2009 (2:31 pm)
As for things beyond impact sounds, putting dynamic properties on the SFXSources would allow you to do that, e.g. when letting an AI cry for help (this uses raw SFX calls; there's also the ShapeBase audio stuff, but it would need some extending to get the sources returned to the caller):

%pos = %this.getPosition();
%source = sfxCreateSource( AudioProfileBegForHelp, getWord( %pos, 0 ), getWord( %pos, 1 ), getWord( %pos, 2 ) ); // Hmm, this function should rather take a vector, I think...
%source.soundType = "BegForHelp";
%source.soundOrigin = %this;
%source.play();

//EDIT: Arghh... getting confused here
#12
09/29/2009 (2:38 pm)
Hmmm... interesting, and food for thought :)
#13
09/29/2009 (4:59 pm)
awesome!!
I'm almost finished merging my code changes into 1.0,
which went very smoothly (what with doing it about twice per beta release, I've got it down smooth by now!), btw,
and I think I'll have a serious bash at this.
I see already there are a few SFX changes in the code, so good.
Rene, thanks again for the help. if I get stuck, I'll post here.

anyone else who might have insight, or ideas on this, please post them!!!
cheers.
#14
09/30/2009 (4:43 am)

One thing that occurred to me today is that you have to watch out for client/server issues there. Strictly speaking, SFX is client-side whereas AI is server-side. ShapeBase audio will only play on ghosts, for example.

This will not get you into trouble in a quasi-single player Torque game, but in a multiplayer game it probably will. There you'd need to come up with a workaround, e.g. a central array where active sound information is stored.

The bad thing is that then none of this would be free anymore; i.e. instead of the information being available automatically, you'd need to go in and change all the places in the engine where relevant sounds are triggered.
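The central-array workaround could be sketched like this on the server side (all names and fields are invented; this just shows the bookkeeping, not engine code):

```cpp
#include <cmath>
#include <string>
#include <vector>

// Server-side record of gameplay-relevant sounds, kept separately from
// the client-side SFX system. Whatever engine code triggers a sound
// would also log it here; AI then queries by position.
struct SoundEvent
{
    float x, y, z;
    float radius;
    std::string type; // e.g. "impact", "BegForHelp"
};

std::vector<SoundEvent> gSoundEvents;

void logSoundEvent(float x, float y, float z, float radius,
                   const std::string& type)
{
    gSoundEvents.push_back({x, y, z, radius, type});
}

// Events audible from a given position (simple radius test, no occlusion).
std::vector<SoundEvent> audibleEvents(float px, float py, float pz)
{
    std::vector<SoundEvent> result;
    for (const SoundEvent& e : gSoundEvents)
    {
        float dx = px - e.x, dy = py - e.y, dz = pz - e.z;
        if (std::sqrt(dx * dx + dy * dy + dz * dz) <= e.radius)
            result.push_back(e);
    }
    return result;
}
```

In a real game the array would also need events aged out after a short time window so the AI only reacts to recent sounds.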