23 June 2024

Integrating Wwise 2023.1 as a plugin for Unreal 5.4

This past semester I've been teaching a few classes for the 2nd year sound students at Howest - Digital Arts & Entertainment in the course "Sound Integration 2". Main goal: get them accustomed to using C++ in Unreal, but also enable them to integrate Wwise as a plugin in an Unreal project without destroying the project for their peers.

In courses where the sound students work together with art and programming students on game projects, we often had the situation where a sound student wanted to use the Wwise plugin, tried to add it to the project and subsequently broke the project for everyone. After a few hours of trying to get it fixed they would usually give up and just use the built-in audio tools in Unreal, or use FMOD.

But we want them to be able to use Wwise, so we need to fix this. I myself have little knowledge of Wwise, but I do know a bit of C++ and Unreal, so I took a shot at clarifying for the students how they should integrate Wwise.

And boy, it is indeed not for the faint of heart. The following is a summary of my research into getting a completely working plugin; it took me quite a while to iron out all the details. There are numerous sources that guide you through this, but they're all very specific to a certain Wwise and Unreal version. The same goes for this blog post of course, although the steps below work for multiple versions.

Wwise as a local plugin

Turns out, there are two ways of integrating Wwise into Unreal: either as a local plugin in your project, or as an engine-wide plugin. Let's start locally, as this is the easiest way.

  •  Make sure to install Wwise 2023.1.x - there are quite a few differences between Wwise SDKs, so no guarantees are given that this process will work for other versions. I used 2023.1.0 when I started writing this article and have now updated to 2023.1.4. You need to repeat this process if you update Wwise.
  • Make sure to have Unreal 5.4.2 installed - same deal, there are differences between Unreal versions that might make this process different. I've also tried this with 5.3.2 and the process is the same. And again you need to repeat this process if you update Unreal.
  • If you don't have one already, create a new C++ Unreal project.
  • In the Wwise launcher you can now click the "Integrate Wwise in project..." button


  •  Set everything up as requested


  • If you leave the Wwise project path empty, a new Wwise project will be created inside the Unreal project. I'd rather not mix these so I recommend creating a Wwise project first next to the Unreal project and then entering the path to that project here, as in the screenshot.
  • If you do create a separate Wwise project then don't forget to point the soundbank folders to the Unreal project

  • I recommend adding an event with a simple test sound to verify your integration.
  • When Wwise is done integrating you can open the Unreal project. The plugin will be automatically active.
  • In the project settings we need to target the Wwise project and define the Soundbanks folder


  •  And enable this so assets get reloaded after banks are generated:


  •  Via Window -> Wwise browser the soundbanks can now be generated

  •  There should be a folder WwiseAudio in the content browser containing a soundbank


  • If not, I noticed it helped to restart Unreal.
  • I've added an AkAmbientSound actor in the scene that uses an Ak event to play my test sound. That should be functional now (a small C++ sanity check follows below).
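If you also want to verify things from C++ (which is the whole point of the course), below is a minimal sketch of a test actor that logs whether the Wwise runtime module is loaded when you hit play. The class name is made up for this post, and I'm assuming the runtime module is still called "AkAudio", as it is in the integration versions I tested - check your Wwise.uplugin if in doubt.

// WwiseSmokeTestActor.h - hypothetical class name, just for illustration
#pragma once

#include "CoreMinimal.h"
#include "GameFramework/Actor.h"
#include "WwiseSmokeTestActor.generated.h"

UCLASS()
class AWwiseSmokeTestActor : public AActor
{
    GENERATED_BODY()

protected:
    virtual void BeginPlay() override;
};

// WwiseSmokeTestActor.cpp
#include "WwiseSmokeTestActor.h"
#include "Modules/ModuleManager.h"

void AWwiseSmokeTestActor::BeginPlay()
{
    Super::BeginPlay();

    // Assumption: the Wwise integration's main runtime module is named "AkAudio".
    const bool bWwiseLoaded = FModuleManager::Get().IsModuleLoaded("AkAudio");
    UE_LOG(LogTemp, Log, TEXT("Wwise AkAudio module loaded: %s"),
        bWwiseLoaded ? TEXT("yes") : TEXT("no"));
}

Drop this actor in the test level next to the AkAmbientSound and check the output log; if the module reports as not loaded while the plugin is enabled, something went wrong in the steps above.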

And done! We now have a working integration of the Wwise plugin in our project! At a whopping size of 7.3GB! If you use Git for your versioning needs you'll notice that Git struggles to sync this. Perforce (what we use at DAE) can handle it, but it still slows everyone down a lot.

Wwise as an engine plugin

One way to avoid having to upload this enormous plugin is to have the plugin engine-wide instead of inside the project. With this approach all your peers need to install the plugin in their engine (once), but at least it's not part of the project anymore. For this approach Wwise gives us a nice warning:

They were not kidding... Let's be experts here:

  • Close Unreal; we'll be creating the plugin via the command line.
  • The Wwise plugin is configured to be enabled by default. Switch that off, or all your Unreal projects, whether they use Wwise or not, will load the plugin, and that's not desired. In the Wwise.uplugin file, which you can find in your Unreal project, change this line from true to false:
"EnabledByDefault": false,
  • For some reason the Wwise.uplugin is missing some modules it needs to be compiled as an engine plugin, so in that file, add the text below at the end of the modules list. Note: this seems to be fixed in Wwise 2023.1.1, so from that version on this step is obsolete.
,
{
    "Name": "WwiseUtils",
    "Type": "Runtime",
    "LoadingPhase": "None"
},
{
    "Name": "WwiseProcessing",
    "Type": "Runtime",
    "LoadingPhase": "None"
},
{
    "Name": "WwiseEngineUtils",
    "Type": "Runtime",
    "LoadingPhase": "None"
},
{
    "Name": "WwiseObjectUtils",
    "Type": "Runtime",
    "LoadingPhase": "None"
}
  •  Open a terminal and navigate to this folder in your Unreal installation: "Engine\Build\BatchFiles"

  • Now run this command (replace YourProjectPath with the path to the project you integrated Wwise with):
    .\RunUAT.bat BuildPlugin -plugin="C:\YourProjectPath\Plugins\Wwise\Wwise.uplugin" -package="C:\Temp\Wwise" -TargetPlatforms=Win64
    If all goes well, this should build the plugin. The package parameter specifies where the built plugin should be placed; in the example it's "C:\Temp\Wwise".


    This takes a while though, so go grab lunch.
  • Copy the "ThirdParty" folder from the local plugin to the built plugin. We now have all the files for the entire engine-wide plugin. You can zip the content and distribute it as you like.
  • We don't need everything, actually: for example, the ThirdParty folder contains files for Visual Studio 2019 as well, so if you're only using Visual Studio 2022 you can remove those older files. The same goes when you're not targeting Win32 machines.


  • To install it into Unreal as an engine plugin, place the contents of the built Wwise folder in the Plugins\Marketplace folder of your engine. If that folder does not exist yet, create it.
  • We can now remove the Wwise plugin from our project folder, since we don't need the local plugin anymore.
  • Since the plugin is no longer enabled by default (as it should be) we need to enable it specifically for our project. That can be done via the project settings in Unreal, or by simply adding it with a text editor to the "Plugins" list in your .uproject file:
"Plugins": [
    {
        "Name": "Wwise",
        "Enabled": true
    }
]

If we open our Unreal project the Wwise integration is still completely functional, and our project is 7.3GB smaller.

 Let me know in the comments if this has helped you in any way!

11 April 2019

GDC 2019 - Part Three

This is the final post on my trip to GDC19; find the first here and the second here.

During the last three days of GDC there was also the expo, where we had a booth as part of the Belgian pavilion, so I had less time to attend talks. This last post wraps up the talks I attended during those three days.

The Making of 'Divinity: Original Sin 2'

I just had to go to this session by my former employer, Swen Vincke. He talked about the rough ride Original Sin 2 was. Partly a trip down memory lane for me, as not much has changed at Larian ;). Very nice to meet up with ex-Larian colleague Kenzo Ter Elst, who was attending the same talk!

"Shadows" of the Tomb Raider: Ray Tracing Deep Dive

Somehow this happened to be the first talk I attended on ray tracing, which is my favorite subject of all, while I had actually planned for many more. I still had time :)

I just read that all the slides of the GDC19 talks by NVidia are online, so you can already check those!

The good thing about this talk is that it brought me somewhat up to speed with all the new RT stuff. I mean, I "know" raytracers, having written quite a few as a student, but it has been 10+ years since I actively did anything with ray tracing!

There are a bunch of new shaders we can write: rays are generated per light type by "raygen" shaders, and there are anyhit and closesthit shaders. Even translucency gets handled by these shaders.

What I did not realize before GDC, but now fully understand, is the importance of the denoising step in the RT pipeline. GI in ray tracing always yielded noisy results unless you calculated a massive amount of rays. In all applications of RT I've seen at GDC, only one ray was cast per step of a path, yielding incredibly noisy results, so denoising is a central part of real-time ray tracing. A lot of optimization needs to go into this step; for example, in this talk they showed a penumbra mask, marking areas where we know there is a half-shadow, and only denoised those areas.

Interesting too were the acceleration structure concepts, BLAS and TLAS (Bottom Level Acceleration Structure and Top Level Acceleration Structure). In Tomb Raider, BLAS were used on a per-mesh basis, while TLAS were regarded as scenes.

Real-Time Path Tracing and Denoising in 'Quake 2'

Another RT-focused talk, this time on how a Quake II build received ray tracing. It started as a research project called q2vkpt that can be found entirely on github. After Christoph's part of the talk, Alexey from NVidia detailed what extra features and optimizations they added.

I played the game at the NVidia booth for a while and had a short talk there with Eric Haines (some guy who apparently just released a book on ray tracing, nice timing). In the demo, with my nose to the screen, I could easily see what are called "fireflies": pixels that are outliers in intensity and do not denoise very well.

No matter how good the ray tracing, something still looks off if you ask me, but this was explained in the talk: the original textures of Quake contained baked lighting, and while they made an effort to remove that, it was not entirely possible.

'Marvel's Spider-Man': A Technical Postmortem

I think this talk was the best I've seen at GDC19. Elan Ruskin announced that he would go through his slides fast and that it would not be possible to take pictures of them. Boy, was he right! It was amazing: he went super fast, almost never missed, and was always crystal clear. Luckily his slides can be found here.

Some things that stood out to me:

  • They worked on it for three years, producing 154MB worth of source code.
  • They use Scaleform for their UI; we used that at Larian too, not sure if they still do though.
  • Concurrent components are hard
  • Scenes were built with Houdini; they defined a JSON format for their levels that could easily be exported from and imported back into Houdini. It's that "into" that struck me as odd; I learned that when you have a loop in your art tools you run into issues, but hey, if it worked for them...
  • They used a 128m grid size for their tiles, which were hexes! Hexes allow for better streaming because three tiles cover almost 180 degrees of what you're going to see next, while with square tiles you'd need to stream 5 of them.
  • During motion blur (while swooping through the city) no extra mipmaps get streamed
  • There are a few cutscenes that can't be skipped in the game: they are actually just animated load screens; it was cool to see how everything outside the cutscene got unloaded and then loaded in.
  • At some point the game covered 90% of a Blu-ray disc and it still needed to grow, so they had to use quite a few clever little compression tricks to get everything on one disc.
  • One example: instead of using a classic index buffer with absolute indices, they stored offsets (+1, +2, etc.), which yielded better compression results (see the sketch right after this list).
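To make that last trick concrete, here is a rough sketch of the idea in C++ (my own illustration, not Insomniac's actual code): storing each index as a delta from the previous one turns an index buffer into lots of small, repetitive values, which a general-purpose compressor handles much better than raw absolute indices.

#include <cstdint>
#include <vector>

// Encode an index buffer as deltas from the previous index.
// Many deltas end up being small repeating values (+1, +2, ...),
// which compress far better than the raw indices.
std::vector<int32_t> DeltaEncode(const std::vector<uint32_t>& indices)
{
    std::vector<int32_t> deltas;
    deltas.reserve(indices.size());
    uint32_t previous = 0;
    for (uint32_t index : indices)
    {
        deltas.push_back(static_cast<int32_t>(index) - static_cast<int32_t>(previous));
        previous = index;
    }
    return deltas;
}

// Decoding is just the reverse: a running sum restores the original indices.
std::vector<uint32_t> DeltaDecode(const std::vector<int32_t>& deltas)
{
    std::vector<uint32_t> indices;
    indices.reserve(deltas.size());
    int32_t current = 0;
    for (int32_t delta : deltas)
    {
        current += delta;
        indices.push_back(static_cast<uint32_t>(current));
    }
    return indices;
}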

This talk is a must watch! Good thing it's available on the vault for free!

Back to the Future! Working with Deterministic Simulation in 'For Honor'

Last but definitely not least was this session on lockstep deterministic simulation by Jennifer Henry. In For Honor only player input is sent over the network. There is no central authority, meaning that every peer simulates every step of the game. Every player keeps a history of 5 seconds worth of input. If a delayed input arrives, then, since the simulation is completely deterministic, the whole simulation gets redone starting from the tick of the delayed input.
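The core loop is easy to sketch (a heavily simplified illustration of my own, not Ubisoft's code): keep a history of inputs and states per tick, and when an input arrives late for an earlier tick, roll back to that tick and re-simulate deterministically up to the present.

#include <cstdint>
#include <map>
#include <vector>

struct Input
{
    std::uint32_t playerId = 0;
    std::uint32_t buttons  = 0;
};

struct GameState
{
    std::uint64_t frame = 0; // plus everything the deterministic simulation needs
};

// Placeholder for the fully deterministic simulation step.
GameState Simulate(const GameState& state, const std::vector<Input>& inputs)
{
    GameState next = state;
    next.frame += 1;
    (void)inputs; // a real step would apply every input deterministically
    return next;
}

class RollbackSession
{
public:
    void OnInputReceived(std::uint64_t tick, const Input& input)
    {
        inputsPerTick[tick].push_back(input);
        if (tick < currentTick)
        {
            ResimulateFrom(tick); // late input: rewind and replay
        }
    }

    void AdvanceTick()
    {
        statePerTick[currentTick + 1] = Simulate(statePerTick[currentTick], inputsPerTick[currentTick]);
        ++currentTick;
    }

private:
    void ResimulateFrom(std::uint64_t tick)
    {
        // Because Simulate() is fully deterministic, replaying the stored inputs
        // from the corrected tick reproduces the exact same "present" state.
        for (std::uint64_t t = tick; t < currentTick; ++t)
        {
            statePerTick[t + 1] = Simulate(statePerTick[t], inputsPerTick[t]);
        }
    }

    std::uint64_t currentTick = 0;
    std::map<std::uint64_t, GameState>          statePerTick;  // ~5 seconds of history in the talk
    std::map<std::uint64_t, std::vector<Input>> inputsPerTick;
};

In the real game the history is bounded (those 5 seconds of input), older entries get pruned, and desync checks compare the peers' states; all of that is left out here.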

This proved to be hard; for starters, floating-point results differ between AMD and Intel, and then there's multithreading, random values, etc...

Central take-away: "don't underestimate desync". Jennifer showed a few cases. In debug mode desyncs get reported to Jira, but in a live build a snapshot gets recorded. Then the game either tries to recover, kicks the diverging peer or disbands the entire session.

Input gets processed quickly: 8 times per frame! The physics steps with 0.5ms:

Definitely worth watching on the vault!

Wrap-up

So that's it! On Friday after the expo closed we went sightseeing a bit in SF and found this cool museum of very very old arcade machines! My smartphone takes really bad pictures so I cannot show much of them, but this one was really cool:

We went for dinner at "Boudin", which is a restaurant + bakery where you could buy bread in all kinds of weird shapes; this one was nice:

Thanks for reading!

08 April 2019

GDC 2019 - Part Two

This is part two of my notes on GDC 2019, read the first part here.

Follow the DOTS: Presenting the 2019 roadmap

Intrigued by Unity's keynote I decided to attend this session. This was a rather high-level talk with lots of pointers to other content (for example, lots of talks were held at Unity's own booth). The main takeaway for me were the ECS samples that can be found on github. There is also a new transform system coming in 2019; curious about that as well.

At the keynote it was announced that Havok Physics will be integrated with Unity, together with a custom, completely C# based physics solution from Unity themselves. Personally I trust the in-house version a bit better atm, but maybe Havok will be more performant after all? It's just weird to have the two options.

There is also a new API in the works to control the Unity update loop. Not sure why, since I think it will only complicate things.

At the moment the C# job system and the Burst compiler are released. ECS is due later in 2019, and the plan is to transfer all other internal systems over to ECS by the end of 2022.

It is sneakily never mentioned anywhere but I asked during the Q&A session: yes, Havok will still require a separate license.

Creating instant games with DOTS: Project Tiny

Built upon DOTS, the goal of Project Tiny was to create a 2D web game smaller than 100kb. For that they stripped the editor of anything that was too much for the project: "tiny mode". For scripting they introduced TypeScript... Why??? We just got rid of JavaScript! Luckily they announced that they're going to switch this back to C# again. It's unclear to me why they even bothered with TypeScript.

The goal in the end is that you can select the libraries you need for your game and remove all the others. Tiny mode will then be called "DOTS mode". It is only targeted for web, but mobile platforms will be added later. A bit more info can be found here.

A cool part of "DOTS mode" is that the runtime runs in a separate process, even in the editor. This means it can even run on another device while you're working in the editor! It also implies that there will be no need anymore to switch platforms; conversion of assets will happen at runtime.

Another part of the DOTS work is a vast improvement in streaming: awake and start times have all but been eliminated, so that sounds promising too.

IMGUI in the editor is also completely deprecated; UI will be built with UIElements.

Looking forward to these changes, I might test this with a 2D GIPF project I'm working on...

Procedural Mesh Animation with non-linear transforms

This talk by Michael Austin was seriously cool! He illustrated how we could implement simple wind shader code with non-linear transforms. But then he went on and made extremely nice visual effects with only a few lines of math in the vertex shader.

I didn't have the time to note it all down thoroughly enough to reproduce it here, but I really recommend checking out this talk on the vault! If I find anything online I'll add it here, and my fingers are itching to get started on a demo :)

Cementing your duct tape: turning hacks into tools

Not really my field of interest, but the speaker was Mattias Van Camp, ex-DAE student but (more importantly) ex-Kweetet.be artist! He even mentioned Kweetet during his introduction; the logo was on the screen!

He then defined the term "duct tape" as he uses it in his talk: duct tape is a hack that you throw away. What followed were two examples of duct tape code they had to write at Creative Assembly to work with massive amounts of assets. Both examples boiled down to the DRY principle, but this time applied to art assets instead of code or data. They used Marmoset Toolbag to generate icons from Max files, for example, all automatically. Continuous integration FTW!

Are Games Art School? How to Teach Game Development When There Are No Jobs

Next I attended another session of the educators summit. Speaker Brendan Keogh made a case that game schools are art schools, meaning that once you graduate there are practically no jobs available. There were some interesting stats:

The sources for that data can be found here.

He then continued to make a case that we should train "work-ready" game dev students.

I'm a real fan of the first sentence on that slide! Students often do not realize this and we should indeed tell them.

Another good takeaway for me was the notion of not having first-year students create a game like X (which we actually do in Programming 2 and Programming 4) but instead having them make a game about Y. And Y can be anything, so you're not restricted to just games. The students will be much more likely to create something truly unique.

Something I should mention too: "Videogames aren't refrigerators". Just so you know.

Belgian Games Café

We quickly visited IGDA's Annual Networking Event, which was nice but not very interesting. After that we went to the Belgian Games Café, where there were good cocktails but no real beer :). Nice venue!

And it was cool to meet so many Belgian devs. And then the party got started once David started showing off his retro DJ skills :)

28 March 2019

GDC 2019 - Part One

I just got back from GDC, full of inspiration! This is a small report of the first day and I intend to write about the other days in coming posts.

Marvel's Spider-Man AI Postmortem

We're starting new AI courses at DAE so I thought it was a good idea to attend this session by Adam Noonchester of Insomniac Games. Main takeaways:

  • The Insomniac engine used to have behavior trees, but for Spider-Man they implemented "Data Driven Hierarchical Finite State Machines". The complex behavior trees had become very difficult to debug and were not composable. The DDHFSM system uses structs containing data that drive the creation of the eventual FSM (a generic sketch of the idea follows after this list).
  • There was also the cool concept of "sync joints" in the animation system. Each combat-related animation gets an extra sync joint added to it. When one character plays an attack animation and the other a response animation, the sync joints of both animations are matched, causing the animations to play simultaneously and at the correct distance from each other. The difficulty here is that you need to add a response animation for each attack animation that gets added to the system, for each character. That can quickly mount up to a lot of work.
  • For combat there was a "Melee Manager" that made sure no two NPCs attacked Spider-Man at the same time. There was a strategy that decided who attacks Spider-Man first. To avoid that you can just run away from the currently assigned attacker, there is the concept of "job stealing", where another NPC can become the attacker under certain conditions (for instance when the player is closer to another NPC).
  • A bit similar was the "Ranged Manager" that controlled the ranged attacks. In deciding who's next to attack Spider-Man, off-screen NPCs get a lower priority than the ones on screen. There are many details in this system; for instance, jumping in the air could cause all the enemies to stop attacking, so quite a few special cases were implemented.
  • Positioning the NPCs around Spider-Man was first done with a circle divided into wedges, but that quickly turned out not to work. Instead a gradient descent algorithm was used.
  • The web blankets that Spider-Man shoots are a collection of joints that raycast onto the surface. For non-flat surfaces the rays rotate inwards until they're also connected.
The "what went wrong" part held no surprises:
  • Flying animations: 3D animation is hard!
  • The navmesh was a lot of work
  • Moving surfaces are not nice for navmeshes
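I obviously don't have Insomniac's code, but the "data driven" part of a DDHFSM is easy to sketch generically: states and transitions live in plain data (structs filled from assets or config) and a small runtime walks that data instead of hard-coding the state logic. A minimal, non-hierarchical sketch:

#include <string>
#include <unordered_map>
#include <utility>
#include <vector>

// Generic data-driven FSM sketch - my own illustration, not Insomniac's DDHFSM.
struct Transition
{
    std::string event;       // e.g. "EnemySpotted"
    std::string targetState; // e.g. "Combat"
};

struct StateDefinition
{
    std::string name;
    std::vector<Transition> transitions;
};

class DataDrivenFsm
{
public:
    DataDrivenFsm(std::vector<StateDefinition> definitions, std::string initialState)
        : current(std::move(initialState))
    {
        for (StateDefinition& def : definitions)
        {
            std::string name = def.name;
            states.emplace(std::move(name), std::move(def));
        }
    }

    // Feed an event; if the current state defines a matching transition, follow it.
    void HandleEvent(const std::string& event)
    {
        const auto it = states.find(current);
        if (it == states.end())
        {
            return;
        }
        for (const Transition& transition : it->second.transitions)
        {
            if (transition.event == event)
            {
                current = transition.targetState;
                return;
            }
        }
    }

    const std::string& CurrentState() const { return current; }

private:
    std::unordered_map<std::string, StateDefinition> states;
    std::string current;
};

The appeal is that designers (or tools) can add and tweak states without touching code; the hierarchical part and the struct-driven construction Insomniac described go well beyond this sketch, of course.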

It was clear that the FPS engine needed a different approach for Spider-Man.

Teaching puzzle design

The next talk I attended was more in line with my role as a lecturer at DAE than as a developer. This was a session with three speakers. First came Jesse Schell, known for his book "A Book of Lenses".

He talked about how he approached puzzle design in class, starting with what we call a "class discussion", with questions and answers to determine the definition of a puzzle. The function of a puzzle in a game is to make the player "stop and think".

  • Are puzzles games? His statement is "Puzzles are like penguins": Penguins are birds that do not fly. Likewise, puzzles are games that cannot be replayed.
  • A puzzle is a game with a dominant strategy, and once you found that dominant strategy you solved the puzzle.
  • A riddle is not a puzzle, since it has no progress, you either know the solution or you don't. There is no knowing a part of the solution.
  • Having multiple solutions to a puzzle will make the player feel smart(er) for finding a different approach than someone else, while it's still one of the solutions you provided.

At the end Jesse mentioned this article on gamasutra and this video on youtube as further reading/viewing.

Next was Naomi Clark of the NYU Game Center to talk about how they teach puzzle design. She spoke of the "Bolf" game they play: golf with beanbags. Students must design bolf holes and then play like you would play golf, trying to finish below par.

Later in their course students create a game with PuzzleScript, which generates 2D dungeon-like puzzle games:

After that they create puzzles with Portal 2's Puzzle Maker in pairs, where the focus is on playtesting. In the results it quickly shows which games have been thoroughly tested and which were not.

The third speaker was Ira Fay. He talked about playtesting puzzles.

  • The hardest part there is that you cannot reuse your testers; once they've figured out the dominant strategy their value is gone.
  • Small changes in the game design can cause big swings in the difficulty level of the puzzle, making it hard to design.
  • There is also a wider variance in player skill

He then continued to talk about their escape room design course! Really cool, I wondered how they evaluate this though.

Analyzing for workflow reduction: from many to one to zero.

This was not such a good talk; it stayed very high level, talking about generalities without any concrete examples. The speaker basically recommends one-click or even zero-click build tools, in other words: continuous integration...

New ideas for any-angle path-finding

Best talk of the day! Daniel Harabor presented his any-angle pathfinding solution. Any-angle pathfinding is pathfinding on a grid, but where you can enter any grid cell at any angle, instead of only at multiples of 45 degrees.

He started by pointing out that string pulling is not always optimal and that Theta* (which is A* with string pulling during the search) yields good results but is slow.

The new algorithm, called Anya, is both fast and optimal. It made me think of a line sweep algorithm. A variant called Polyanya does the same thing but with polygons. I cannot start to summarize the algorithm here, but the good thing is: he posted his slides online!

He also mentioned the website movingai.com, which contains a lot of interesting resources; coolest of all are the benchmark worlds from real games where these algorithms can be tested!

Stop fighting! Systems for non-combat AI

The last talk of the day, by Rez Graham, was about AI algorithms for AI that is not involved in combat. Most of the time AI handles NPCs that interact with the player, but NPCs who are "idle" often do that in the weirdest places or in unnatural ways.

Check out this online curve tool! He used it to create the graphs that illustrated the utility curve concept. He talked about utility theory as if everyone should know what that is; unfortunately I didn't.
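For what it's worth, the core idea as I understand it now: score every possible action with a curve over some normalized input (hunger, distance, health, ...) and pick the action with the highest score. A tiny sketch of such a response curve (my own example, not from the talk):

#include <algorithm>
#include <cmath>

// A response curve maps a normalized input (0..1) to a utility score (0..1).
// The exponent shapes the curve, similar to what the online curve tool visualizes.
struct ResponseCurve
{
    float exponent = 1.0f; // 1 = linear, >1 = slow start, <1 = fast start
    float offset   = 0.0f;

    float Evaluate(float input) const
    {
        const float clamped = std::clamp(input, 0.0f, 1.0f);
        return std::clamp(std::pow(clamped, exponent) + offset, 0.0f, 1.0f);
    }
};

// Example: an "eat" action whose utility rises sharply as hunger approaches 1.
// float eatScore = ResponseCurve{3.0f, 0.0f}.Evaluate(hunger);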

At some point he made a very good point that I can only applaud: "Throw it the fuck away". He was talking about prototypes. I should've taken a picture for my students :)

Unity's GDC Keynote

I then attended Unity's keynote, which you can watch in full here.

I'm mostly excited about DOTS, I can't wait until it will be completely integrated into Unity!

Hopefully all these links are helpful, more is coming later!

20 January 2019

Blur with Unity's Post FX v2

I recently needed to blur the scene when it's in the background of a 2D UI. This being a post-process effect, I expected it to be available in the PostFX v2 stack. But as you can guess, it was not.

There is this legacy blur effect, which still works but is not integrated into the stack. I took the liberty of converting it to a PostFX v2 effect.

I only converted the Standard Gauss blur type, the other one called "Sgx Gauss" is left as an exercise to the reader ;)

You can find it integrated in my unitytoolset repo on BitBucket in the post-processing profile of the Stylized Fog scene.

If there are better alternatives, I'm very interested!