11 April 2019

GDC 2019 - Part Three

This is the final post on my trip to GDC19; find the first here and the second here.

During the last three days of GDC the expo was also running, where we had a booth as part of the Belgian pavilion, so I had less time to attend talks. This last post wraps up the talks I did manage to attend during those three days.

The Making of 'Divinity: Original Sin 2'

I just had to go to this session by my former employer, Swen Vincke. He talked about the rough ride that Original Sin 2 was. It was partly a trip down memory lane for me, as not much has changed at Larian ;). Very nice to meet up with ex-Larian colleague Kenzo Ter Elst, who was attending the same talk!

"Shadows" of the Tomb Raider: Ray Tracing Deep Dive

Somehow this happened to be the first talk I attended on ray tracing, which is my favorite subject of all, even though I had actually planned for many more. I still had time :)

I just read that all the slides of the GDC19 talks by NVidia are online, so you can already check those!

The good thing about this talk is that it brought me somewhat up to speed with all the new RT stuff. I mean, I "know" raytracers, having written quite a few as a student, but it has been more than 10 years since I actively did anything with ray tracing!

There are a bunch of new shader types we can write: rays are generated per light type by "raygen" shaders, and there are "anyhit" and "closesthit" shaders that run when a ray intersects geometry. Even translucency gets handled by these shaders.
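
To get the roles straight in my own head, here is a toy CPU-side sketch of how I understand these stages to relate. All the names and types are my own and this is not actual DXR API code, just the concept:

```cpp
#include <functional>
#include <optional>
#include <vector>

struct Ray { float origin[3]; float dir[3]; };
struct Hit { float t; int primitive; };

struct Pipeline {
    std::function<std::vector<Ray>()> raygen;               // spawns the rays
    std::function<bool(const Hit&)> anyhit;                 // accept/reject each candidate hit
    std::function<void(const Ray&, const Hit&)> closesthit; // shades the nearest accepted hit
};

void trace(const Pipeline& p, const std::vector<Hit>& candidates) {
    for (const Ray& ray : p.raygen()) {
        std::optional<Hit> closest;
        for (const Hit& h : candidates)   // stand-in for real BVH traversal
            if (p.anyhit(h) && (!closest || h.t < closest->t))
                closest = h;              // keep the nearest accepted hit
        if (closest) p.closesthit(ray, *closest);
    }
}
```

The anyhit stage is what makes things like alpha-tested translucency possible: it can reject a hit so the ray continues to whatever lies behind.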

What I did not realize before GDC, but now fully understand, is the importance of the denoising step in the RT pipeline. GI in ray tracing has always yielded noisy results unless you calculated a massive amount of rays. In all applications of RT I've seen at GDC, only one ray was cast per step of a path, yielding incredibly noisy results. So denoising is a central part of real-time ray tracing. A lot of optimization needs to go into this step; for example, in this talk they showed a penumbra mask marking the areas where we know there is half-shadow, and they only denoised those areas.
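
A minimal sketch of that masking idea, with a naive box blur standing in for the real (much smarter) denoiser. This is my own illustration of the concept, not the actual filter from the talk:

```cpp
#include <vector>

// Denoise the raw 1-sample-per-pixel shadow term, but only where the
// penumbra mask says there is half-shadow; fully lit or fully shadowed
// pixels are already clean and get skipped.
void denoiseShadows(std::vector<float>& shadow,
                    const std::vector<bool>& penumbraMask,
                    int width, int height) {
    std::vector<float> out = shadow;
    for (int y = 1; y < height - 1; ++y)
        for (int x = 1; x < width - 1; ++x) {
            int i = y * width + x;
            if (!penumbraMask[i]) continue;       // not penumbra: skip
            float sum = 0.0f;                     // naive 3x3 box blur as a
            for (int dy = -1; dy <= 1; ++dy)      // stand-in for a proper
                for (int dx = -1; dx <= 1; ++dx)  // spatiotemporal denoiser
                    sum += shadow[i + dy * width + dx];
            out[i] = sum / 9.0f;
        }
    shadow.swap(out);
}
```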

Interesting too were the acceleration structure concepts, BLAS and TLAS (Bottom and Top Level Acceleration Structure). In Tomb Raider a BLAS was used on a per-mesh basis, while a TLAS represented a scene.
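
Conceptually it boils down to something like the sketch below: one BVH per mesh, and a scene-level structure of transformed instances pointing into those. The types are illustrative, not the actual DXR/Vulkan API:

```cpp
#include <vector>

struct Blas {                        // bottom level: one per mesh,
    // triangle BVH over the mesh    // built once in mesh-local space
};

struct Instance {                    // one entry per object in the scene
    const Blas* blas;                // shared by all instances of a mesh
    float objectToWorld[3][4];       // per-instance transform
};

struct Tlas {                        // top level: one per scene
    std::vector<Instance> instances; // rebuilt/refit as objects move
};
```

The nice property is that moving an object only touches its TLAS instance transform; the expensive per-mesh BLAS build is reused.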

Real-Time Path Tracing and Denoising in 'Quake 2'

Another RT-focused talk, this time on how a Quake II build received ray tracing. It started as a research project called q2vkpt that can be found entirely on GitHub. After Christoph's part of the talk came Alexey from NVidia, detailing what extra features and optimizations they added.

I played the game at the NVidia booth for a while and had a short talk there with Eric Haines (some guy who apparently just released a book on ray tracing, nice timing). In the demo, with my nose to the screen, I could easily see what are called "fireflies": pixels that are outliers in intensity and do not denoise very well.
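
A common mitigation for fireflies (I don't know whether q2vkpt does exactly this) is to clamp each pixel against its local neighborhood before accumulation, trading a little energy loss for stability. A minimal sketch:

```cpp
#include <algorithm>
#include <vector>

// Cap a pixel's luminance at a multiple of its neighborhood average,
// so a single low-probability light path cannot blow out one pixel.
float clampFirefly(const std::vector<float>& lum, int w, int h,
                   int x, int y, float maxRatio /* e.g. 4.0f */) {
    float sum = 0.0f;
    int   n   = 0;
    for (int dy = -1; dy <= 1; ++dy)
        for (int dx = -1; dx <= 1; ++dx) {
            if (dx == 0 && dy == 0) continue;
            int nx = x + dx, ny = y + dy;
            if (nx < 0 || ny < 0 || nx >= w || ny >= h) continue;
            sum += lum[ny * w + nx];
            ++n;
        }
    float neighborhoodAvg = (n > 0) ? sum / n : lum[y * w + x];
    return std::min(lum[y * w + x], neighborhoodAvg * maxRatio);
}
```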

No matter how good the ray tracing, something still looks off if you ask me, but this was explained in the talk: the original textures of Quake contained baked lighting, and while they made an effort to remove it, that was not entirely possible.

'Marvel's Spider-Man': A Technical Postmortem

I think this talk was the best I've seen at GDC19. Elan Ruskin announced that he would go through his slides fast and that it would not be possible to take pictures of them. Boy, was he right! It was amazing: he went super fast, almost never missed a beat, and was always crystal clear. Luckily his slides can be found here.

Some things that stood out to me:

  • They worked on it for three years, producing 154MB worth of source code.
  • They use Scaleform for their UI; we used that at Larian too, though I'm not sure if they still do.
  • Concurrent components are hard
  • Scenes were built with Houdini; they defined a JSON format for their levels that could easily be exported from and into Houdini. It's that "into" that struck me as odd; I learned that when you have a loop in your art tools you run into issues, but hey, if it worked for them...
  • They used a 128m grid size for their tiles, which were hexes! Hexes allow for better streaming because three tiles cover almost 180 degrees of what you're going to see next, while with square tiles you'd need to stream five of them.
  • During motion blur (while swooping through the city) no extra mipmaps get streamed
  • There are a few cutscenes that can't be skipped in the game: they are actually just animated load screens; it was cool to see how everything outside the cutscene got unloaded and then loaded in.
  • At some point the game covered 90% of a Blu-ray disc and still needed to grow, so they had to pull off quite a few clever little compression tricks to get everything on one disc.
  • One example: instead of using a classic index buffer they stored offsets (+1, +2, etc.), which yielded better compression results (see the sketch below this list).
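
Here's my reconstruction of why that offset trick helps: nearby triangles reuse nearby vertices, so the deltas cluster around small values that a general-purpose compressor packs far better than raw 32-bit indices. The exact scheme in the game may differ:

```cpp
#include <cstdint>
#include <vector>

// Store each index as an offset from the previous one.
std::vector<int32_t> deltaEncode(const std::vector<uint32_t>& indices) {
    std::vector<int32_t> deltas;
    deltas.reserve(indices.size());
    uint32_t prev = 0;
    for (uint32_t idx : indices) {
        deltas.push_back(static_cast<int32_t>(idx) - static_cast<int32_t>(prev));
        prev = idx;
    }
    return deltas;        // e.g. {0,1,2, 1,2,3} -> {0,1,1, -1,1,1}
}

// Exact inverse: running sum of the deltas.
std::vector<uint32_t> deltaDecode(const std::vector<int32_t>& deltas) {
    std::vector<uint32_t> indices;
    indices.reserve(deltas.size());
    int64_t prev = 0;
    for (int32_t d : deltas) {
        prev += d;
        indices.push_back(static_cast<uint32_t>(prev));
    }
    return indices;
}
```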

This talk is a must-watch! Good thing it's available on the vault for free!

Back to the Future! Working with Deterministic Simulation in 'For Honor'

Last but definitely not least was this session on lockstep deterministic simulation by Jennifer Henry. In For Honor only player input is sent over the network. There is no central authority, meaning that every peer simulates every step of the game. Every player keeps a history of 5 seconds' worth of input. If a delayed input arrives, then, since the simulation is completely deterministic, the whole simulation gets redone starting from the frame of the delayed input.
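
In code, I picture the rollback mechanism roughly like the sketch below. Everything here (names, types, the trivial step function) is my own illustration of the general technique, not For Honor's implementation:

```cpp
#include <cstdint>
#include <deque>
#include <map>

struct Input { int buttons = 0; };
struct State { int tick = 0;    };

// Placeholder for the real, fully deterministic simulation step.
State step(State s, const std::map<int, Input>& /*perPlayerInput*/) {
    ++s.tick;
    return s;
}

class Rollback {
    std::deque<State> history_;  // history_[i] = state at start of frame oldestFrame_ + i
    std::map<uint32_t, std::map<int, Input>> inputs_;  // frame -> player -> input
    uint32_t oldestFrame_ = 0;

public:
    Rollback() { history_.push_back(State{}); }        // state at start of frame 0

    // Simulate the next frame with the inputs we have right now.
    void advance(const std::map<int, Input>& in) {
        uint32_t frame = oldestFrame_ + (uint32_t)history_.size() - 1;
        inputs_[frame] = in;
        history_.push_back(step(history_.back(), in));
        // (trimming history older than ~5 seconds is omitted here)
    }

    // A delayed input arrived for an already-simulated frame: rewind to the
    // snapshot at that frame and deterministically re-simulate everything after.
    void onLateInput(uint32_t frame, int player, Input in) {
        inputs_[frame][player] = in;
        State s = history_[frame - oldestFrame_];
        for (size_t i = frame - oldestFrame_; i + 1 < history_.size(); ++i) {
            s = step(s, inputs_[oldestFrame_ + (uint32_t)i]);
            history_[i + 1] = s;                       // overwrite diverged snapshot
        }
    }
};
```

The whole trick only works if step() is bit-for-bit identical on every peer, which leads straight to the problems below.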

This proved to be hard. For starters, floating-point results can differ between AMD and Intel; then there's multithreading, random values, etc...

Central take-away: "don't underestimate desync". Jennifer showed a few cases. In debug mode desyncs get reported to Jira, but in a live build a snapshot gets recorded. The game then either tries to recover, kicks the diverging peer, or disbands the entire session.

Input gets processed quickly: 8 times per frame! And the physics steps at 0.5 ms per tick.
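
Such a small step fits the determinism requirement: a fixed timestep is what keeps every peer's simulation identical. The classic accumulator pattern below shows how a variable render frame can drive fixed 0.5 ms physics slices; this is the textbook pattern, not code from the talk:

```cpp
// Run as many fixed 0.5 ms physics steps as the elapsed frame time allows,
// carrying the remainder over to the next frame in `accumulator`.
void runFrame(double frameSeconds, double& accumulator,
              void (*stepPhysics)(double dt)) {
    const double kDt = 0.0005;       // 0.5 ms physics step
    accumulator += frameSeconds;
    while (accumulator >= kDt) {     // several sub-steps per render frame
        stepPhysics(kDt);
        accumulator -= kDt;
    }
}
```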

Definitely worth watching on the vault!

Wrap-up

So that's it! On Friday after the expo closed we went sightseeing a bit in SF and found this cool museum of very, very old arcade machines! My smartphone takes really bad pictures so I cannot show much of them, but this one was really cool:

We went for dinner at "Boudin", a restaurant plus bakery where you could buy bread in all kinds of weird shapes; this one was nice:

Thanks for reading!

08 April 2019

GDC 2019 - Part Two

This is part two of my notes on GDC 2019, read the first part here.

Follow the DOTS: Presenting the 2019 roadmap

Intrigued by Unity's keynote, I decided to attend this session. It was a rather high-level talk with lots of pointers to other content (for example, lots of talks were held at Unity's own booth). The main takeaway for me was the ECS samples that can be found on GitHub. There is also a new transform system coming in 2019; curious about that as well.

At the keynote it was announced that Havok Physics will be integrated with Unity, together with a custom, completely C# based physics solution from Unity themselves. Personally I trust the in-house version a bit better atm, but maybe Havok will be more performant after all? It's just weird to have the two options.

There is also a new API in the works to control the Unity update loop. Not sure why, since I think it will only complicate things.

At the moment the C# Job System and the Burst compiler are released. ECS is due later in 2019, and the plan is to transfer all other internal systems over to ECS by the end of 2022.

It is sneakily never mentioned anywhere, but I asked during the Q&A session: yes, Havok will still require a separate license.

Creating instant games with DOTS: Project Tiny

Built upon DOTS, the goal of Project Tiny was to create a 2D web game smaller than 100kb. For that they stripped the editor of anything that was too much for the project: "tiny mode". For scripting they introduced TypeScript... Why??? We just got rid of JavaScript! Luckily they announced that they're going to switch this back to C# again. It's unclear to me why they even bothered with TypeScript.

The goal in the end is that you can select the libraries you need for your game and remove all the others. Tiny mode will then be called "DOTS mode". It only targets the web for now, but mobile platforms will be added later. A bit more info can be found here.

A cool part of "DOTS mode" is that the runtime runs in a separate process, even in the editor. This means it can even run on another device while you're working in the editor! It also implies that there will be no more need to switch platforms; conversion of assets will happen at runtime.

Another part of the DOTS work is the vast improvement of streaming: awake and start times have all but been eliminated, so that sounds promising too.

IMGUI in the editor is also completely deprecated; UI will be built with UIElements.

Looking forward to these changes, I might test this with a 2D GIPF project I'm working on...

Procedural Mesh Animation with non-linear transforms

This talk by Michael Austin was seriously cool! He illustrated how we could implement simple wind shader code with non-linear transforms. But then he went on to create extremely nice visual effects with only a very few lines of math in the vertex shader.

I didn't have the time to note it all down thoroughly enough to reproduce it here, but I really recommend checking out this talk on the vault! If I find anything online I'll add it here, and my fingers are itching to get started on a demo :)
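
In the meantime, here is the general flavor of a height-based wind bend as I remember the genre, written CPU-side for clarity. This is my own reconstruction of a typical non-linear vertex transform, not Michael Austin's actual shader code:

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// Sway a vertex sideways; the displacement grows quadratically with height
// (the non-linear part), so the base stays anchored while the top waves.
Vec3 windDisplace(Vec3 p, float time) {
    const float amplitude = 0.3f;
    const float frequency = 1.7f;
    float sway = std::sin(time * frequency + p.x * 0.5f + p.z * 0.5f);
    float bend = p.y * p.y;          // quadratic in height: non-linear
    p.x += amplitude * bend * sway;
    return p;
}
```

In a real shader this is a handful of instructions per vertex, which is why such effects are nearly free.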

Cementing your duct tape: turning hacks into tools

Not really my field of interest, but the speaker was Mattias Van Camp, ex-DAE student but (more importantly) ex-Kweetet.be artist! He even mentioned Kweetet during his introduction; the logo was on the screen!

He then defined the term "duct tape" as he uses it in his talk: duct tape is a hack that you throw away. What followed were two examples of duct-tape code they had to write at Creative Assembly to work with massive amounts of assets. Both examples boiled down to the DRY principle, but this time applied to art assets instead of code or data. For example, they used Marmoset Toolbag to generate icons from Max files, all automatically. Continuous integration FTW!

Are Games Art School? How to Teach Game Development When There Are No Jobs

Next I attended another session of the educators summit. Speaker Brendan Keogh made a case that game schools are art schools, meaning that once you graduate there are practically no jobs available. There were some interesting stats:

The sources for that data can be found here.

He then continued to make a case that we should train "work-ready" game dev students.

I'm a real fan of the first sentence on that slide! Students often do not realize this, and we should indeed tell them.

Another good take-away for me was the notion of not having first-year students create a game like X (which we actually do in Programming 2 and Programming 4) but instead having them make a game about Y. And Y can be anything, so you're not restricted to just games. The students will be much more likely to create something truly unique.

Something I should mention too: "Videogames aren't refrigerators". Just so you know.

Belgian Games Café

We quickly visited IGDA's Annual Networking Event, which was nice but not very interesting. After that we went to the Belgian Games Café; there were good cocktails, but no real beer :). Nice venue!

And it was cool to meet so many Belgian devs. And then the party got started once David started showing off his retro DJ skills :)