11 April 2019

GDC 2019 - Part Three

This is the final post on my trip to GDC19; find the first here and the second here.

During the last three days of GDC the expo was also running, where we had a booth as part of the Belgian pavilion, so I had less time to attend talks. This last post wraps up the talks I did attend during those three days.

The Making of 'Divinity: Original Sin 2'

I just had to go to this session by my former employer, Swen Vincke. He talked about the rough ride Original Sin 2 was. It was partly a trip down memory lane for me, as not much has changed at Larian ;). Very nice to meet up with ex-Larian colleague Kenzo Ter Elst, who was attending the same talk!

"Shadows" of the Tomb Raider: Ray Tracing Deep Dive

Somehow this turned out to be the first talk I attended on ray tracing, which is my favorite subject of all, even though I had actually planned for many more. I still had time :)

I just read that all the slides of the GDC19 talks by NVidia are online, so you can already check those!

The good thing about this talk is that it brought me somewhat up to speed with all the new RT stuff. I mean, I "know" ray tracers, having written quite a few as a student, but it has been 10+ years since I actively did anything with ray tracing!

There are a bunch of new shaders we can write: rays are generated per light type by "raygen" shaders, and there are "anyhit" and "closesthit" shaders. Even translucency gets handled by these shaders.

What I did not realize before GDC, but now fully understand, is the importance of the denoising step in the RT pipeline. GI in ray tracing has always yielded noisy results unless you cast a massive number of rays. In all applications of RT I've seen at GDC, only one ray was cast per step of a path, yielding incredibly noisy results. So denoising is a central part of real-time ray tracing. A lot of optimization needs to go into this step; for example, in this talk they showed a penumbra mask, marking areas where we know there is half-shadow, and denoised only those areas.
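
To make that concrete, here is a minimal sketch of the penumbra-mask idea (my own illustration, not the actual Tomb Raider implementation): denoise only the pixels flagged as half-shadow and leave the rest untouched.

```csharp
static class PenumbraDenoise
{
    // Sketch: denoise a 1-sample-per-pixel shadow buffer, but only where the
    // penumbra mask says there is half-shadow. Fully lit/shadowed pixels are
    // left untouched, which saves a lot of filtering work.
    public static float[] Denoise(float[] shadow, bool[] mask, int width, int height, int radius)
    {
        var result = (float[])shadow.Clone();
        for (int y = 0; y < height; y++)
        for (int x = 0; x < width; x++)
        {
            if (!mask[y * width + x]) continue; // skip everything outside the penumbra

            float sum = 0f; int count = 0;
            for (int dy = -radius; dy <= radius; dy++)
            for (int dx = -radius; dx <= radius; dx++)
            {
                int sx = x + dx, sy = y + dy;
                if (sx < 0 || sx >= width || sy < 0 || sy >= height) continue;
                sum += shadow[sy * width + sx];
                count++;
            }
            result[y * width + x] = sum / count;
        }
        return result;
    }
}
```

A real denoiser is of course much smarter than this box filter, but the masking principle is the same.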

Interesting too were the acceleration structure concepts, BLAS and TLAS (Bottom Level Acceleration Structure and Top Level Acceleration Structure). In Tomb Raider a BLAS was used per mesh, while a TLAS was regarded as a scene.
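
Conceptually the split looks roughly like this (a C# sketch with made-up types, not the actual DXR API): a BLAS per mesh holding geometry, and a TLAS holding transformed instances of those BLASes, forming the scene.

```csharp
using System.Collections.Generic;
using System.Numerics;

// A BLAS holds the actual geometry of one mesh; it is built once and reused.
class BottomLevelAS
{
    public Vector3[] Vertices;
    public int[] Indices;
}

// A TLAS instance places a BLAS in the world with a transform,
// so many instances can share the same mesh geometry.
struct Instance
{
    public BottomLevelAS Blas;
    public Matrix4x4 LocalToWorld;
}

// The TLAS is what the rays are traced against: effectively "the scene".
class TopLevelAS
{
    public List<Instance> Instances = new List<Instance>();
}
```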

Real-Time Path Tracing and Denoising in 'Quake 2'

Another RT-focused talk, this time on how a Quake II build received ray tracing. It started as a research project called q2vkpt that can be found entirely on github. After Christoph's part of the talk, Alexey from NVidia detailed what extra features and optimizations they added.

I played the game at the NVidia booth for a while and had a short talk there with Eric Haines (some guy who apparently just released a book on ray tracing; nice timing). In the demo, with my nose to the screen, I could easily see what are called "fireflies": pixels that are outliers in intensity and do not denoise very well.

No matter how good the ray tracing, something still looks off if you ask me, but this was explained in the talk: the original textures of Quake contain baked lighting, and while they made an effort to remove it, that was not entirely possible.

'Marvel's Spider-Man': A Technical Postmortem

I think this talk was the best I've seen at GDC19. Elan Ruskin announced that he would go through his slides so fast that it would not be possible to take pictures of them. Boy, was he right! It was amazing: he went super fast, almost never missed a beat, and was always crystal clear. Luckily his slides can be found here.

Some things that stood out to me:

  • They worked on it for three years, producing 154MB worth of source code.
  • They use Scaleform for their UI; we used that at Larian too, not sure if they still do though.
  • Concurrent components are hard
  • Scenes were built with Houdini; they defined a JSON format for their levels that could easily be exported from and imported into Houdini. It's that "into" that struck me as odd; I learned that when you have a loop in your art tools you run into issues, but hey, if it worked for them...
  • They used a 128m grid size for their tiles, which were hexes! Hexes allow for better streaming because three tiles cover almost 180 degrees of what you're going to see next, while with square tiles you'd need to stream five of them.
  • During motion blur (while swooping through the city) no extra mipmaps get streamed.
  • There are a few cutscenes that can't be skipped in the game: they are actually just animated load screens; it was cool to see how everything outside the cutscene got unloaded and then loaded in.
  • At some point the game covered 90% of a Blu-ray disc and still needed to grow, so they had to pull quite a few clever little compression tricks to get everything on one disc.
  • One example: instead of using a classic index buffer they stored offsets (+1, +2, etc.), which yielded better compression results (see the sketch below).
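
Here's a hedged sketch of how I understood that trick (my reconstruction, not Insomniac's actual code): small repeating deltas compress far better than large absolute indices.

```csharp
static class IndexCompression
{
    // Delta-encode an index buffer: small, repetitive offsets (+1, +2, ...)
    // compress much better than large absolute indices.
    public static int[] DeltaEncode(int[] indices)
    {
        var deltas = new int[indices.Length];
        int previous = 0;
        for (int i = 0; i < indices.Length; i++)
        {
            deltas[i] = indices[i] - previous;
            previous = indices[i];
        }
        return deltas;
    }

    // Decoding is the exact inverse: a running sum restores the originals.
    public static int[] DeltaDecode(int[] deltas)
    {
        var indices = new int[deltas.Length];
        int current = 0;
        for (int i = 0; i < deltas.Length; i++)
        {
            current += deltas[i];
            indices[i] = current;
        }
        return indices;
    }
}
```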

This talk is a must-watch! Good thing it's available on the vault for free!

Back to the Future! Working with Deterministic Simulation in 'For Honor'

Last but definitely not least was this session on lockstep deterministic simulation by Jennifer Henry. In For Honor only player input is sent over the network. There is no central authority, meaning that every peer simulates every step of the game. Every player keeps a history of 5 seconds' worth of input. If a delayed input arrives, then, since the simulation is completely deterministic, the whole simulation is redone starting from that delayed input.
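
A minimal sketch of how such a rollback could look (hypothetical names, my reconstruction of the concept rather than Ubisoft's code), assuming a fully deterministic Simulate step:

```csharp
using System.Collections.Generic;

// Lockstep rollback sketch. Simulate() must be fully deterministic:
// same state + same inputs always produce the same next state on every peer.
class RollbackSimulation
{
    struct GameState { public int Frame; /* ... full game state ... */ }
    struct PlayerInput { public int PlayerId; public byte Buttons; }

    readonly List<GameState> snapshots = new List<GameState>();        // state before each tick
    readonly List<List<PlayerInput>> inputsPerTick = new List<List<PlayerInput>>();
    GameState current;
    int tick;

    // Normal path: advance one tick with the inputs we have so far.
    public void Step(List<PlayerInput> inputs)
    {
        snapshots.Add(current);
        inputsPerTick.Add(inputs);
        current = Simulate(current, inputs);
        tick++;
    }

    // A late input for an earlier tick arrives: rewind and re-simulate.
    public void OnLateInput(int inputTick, PlayerInput input)
    {
        inputsPerTick[inputTick].Add(input);
        current = snapshots[inputTick];
        for (int t = inputTick; t < tick; t++)
        {
            current = Simulate(current, inputsPerTick[t]);
            if (t + 1 < snapshots.Count) snapshots[t + 1] = current;
        }
    }

    static GameState Simulate(GameState state, List<PlayerInput> inputs)
    {
        state.Frame++; // placeholder for the actual deterministic game step
        return state;
    }
}
```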

This proved to be hard. For starters, floating-point results differ between AMD and Intel; then there's multithreading, random values, etc...

The central take-away: "don't underestimate desync". Jennifer showed a few cases. In debug mode desyncs get reported to Jira, but in a live build a snapshot gets recorded. The game then either tries to recover, kicks the diverging peer, or disbands the entire session.

Input gets processed quickly: 8 times per frame! And the physics steps at 0.5ms intervals.
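
That cadence suggests a classic fixed-timestep loop, something like the sketch below (my reading of it, not their actual code); the key point is that every peer advances the simulation in identical fixed steps, never in variable frame-time steps.

```csharp
// Hypothetical fixed-timestep sketch: whatever the rendering frame rate does,
// the simulation always advances in identical fixed steps, which is a
// prerequisite for determinism across peers.
class FixedStepLoop
{
    const float PhysicsStep = 0.0005f; // 0.5 ms, as quoted in the talk
    float accumulator;

    // Called once per rendered frame with the measured frame time.
    public void Frame(float frameDeltaSeconds)
    {
        accumulator += frameDeltaSeconds;
        while (accumulator >= PhysicsStep)
        {
            PollInput();              // input is sampled on its own fixed cadence
            StepPhysics(PhysicsStep); // identical fixed dt on every peer
            accumulator -= PhysicsStep;
        }
    }

    void PollInput() { /* read local input, queue it for the network */ }
    void StepPhysics(float dt) { /* deterministic simulation step */ }
}
```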

Definitely worth watching on the vault!

Wrap-up

So that's it! On Friday after the expo closed we went sightseeing a bit in SF and found this cool museum of very, very old arcade machines! My smartphone takes really bad pictures, so I cannot show much of them, but this one was really cool:

We went for dinner at "Boudin", a restaurant + bakery where you could buy bread in all kinds of weird shapes; this one was nice:

Thanks for reading!

08 April 2019

GDC 2019 - Part Two

This is part two of my notes on GDC 2019; read the first part here.

Follow the DOTS: Presenting the 2019 roadmap

Intrigued by Unity's keynote, I decided to attend this session. This was a rather high-level talk with lots of pointers to other content (for example, lots of talks were held at Unity's own booth). The main takeaway for me were the ECS samples that can be found on github. There is also a new transform system coming in 2019; curious about that as well.

At the keynote it was announced that Havok Physics will be integrated with Unity, alongside a custom, completely C#-based physics solution from Unity themselves. Personally I trust the in-house version a bit more atm, but maybe Havok will be more performant after all? It's just weird to have the two options.

There is also a new API in the works to control the Unity update loop. Not sure why, since I think it will only complicate things.

At the moment the C# job system and the Burst compiler have been released. ECS is due later in 2019, and the plan is then to move all other internal systems over to ECS by the end of 2022.
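
The released part is already usable today. As a reminder of what that looks like, a minimal Burst-compiled job using the standard Unity.Jobs API:

```csharp
using Unity.Burst;
using Unity.Collections;
using Unity.Jobs;

// A tiny Burst-compiled job: adds two arrays in parallel across worker threads.
[BurstCompile]
struct AddJob : IJobParallelFor
{
    [ReadOnly] public NativeArray<float> A;
    [ReadOnly] public NativeArray<float> B;
    public NativeArray<float> Result;

    public void Execute(int i) => Result[i] = A[i] + B[i];
}

// Usage: schedule in batches of 64, then wait for completion.
// var job = new AddJob { A = a, B = b, Result = result };
// job.Schedule(result.Length, 64).Complete();
```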

It is sneakily never mentioned anywhere, but I asked during the Q&A session: yes, Havok will still require a separate license.

Creating instant games with DOTS: Project Tiny

Built upon DOTS, the goal of Project Tiny was to create a 2D web game smaller than 100kb. For that they stripped the editor of anything that was too much for the project: "tiny mode". For scripting they introduced TypeScript... Why??? We just got rid of JavaScript! Luckily they announced that they're going to switch this back to C# again. It's unclear to me why they even bothered with TypeScript.

The goal in the end is that you can select the libraries you need for your game and remove all the others. Tiny mode will then be called "DOTS mode". It currently only targets the web, but mobile platforms will be added later. A bit more info can be found here.

A cool part of "DOTS mode" is that the runtime runs in a separate process, even in the editor. This means it can even run on another device while you're working in the editor! It also implies that there will no longer be any need to switch platforms; conversion of assets will happen at runtime.

Another part of the DOTS improvements is vastly improved streaming: awake and start times have all but been eliminated, so that sounds promising too.

IMGui in the editor is also completely deprecated; UI will be built with UIElements.

Looking forward to these changes, I might test this with a 2D GIPF project I'm working on...

Procedural Mesh Animation with non-linear transforms

This talk by Michael Austin was seriously cool! He illustrated how we could implement simple wind shader code with non-linear transforms. But then he went on and created extremely nice visual effects with only a very few lines of math in the vertex shader.

I did not have the time to note it all down thoroughly enough to reproduce it here, but I really recommend checking out this talk on the vault! If I find anything online I'll add it here; my fingers are itching to get started on a demo :)
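
In the meantime, here is my own reconstruction of the core idea (not Austin's code, and on the CPU instead of in the vertex shader, for clarity): a transform whose effect depends on the vertex position itself, so the top of a mesh sways while the base stays pinned, like grass in the wind.

```csharp
using UnityEngine;

// Non-linear transform sketch: the displacement depends on the vertex
// position itself (quadratic in height), so it's not a single matrix.
[RequireComponent(typeof(MeshFilter))]
public class SimpleWind : MonoBehaviour
{
    public float strength = 0.3f;
    public float frequency = 2f;

    Vector3[] baseVertices;
    Mesh mesh;

    void Start()
    {
        mesh = GetComponent<MeshFilter>().mesh;
        baseVertices = mesh.vertices;
    }

    void Update()
    {
        var vertices = new Vector3[baseVertices.Length];
        float phase = Time.time * frequency;
        for (int i = 0; i < vertices.Length; i++)
        {
            Vector3 v = baseVertices[i];
            // Quadratic falloff in height: the base stays pinned
            // while the top sways the most.
            float sway = v.y * v.y * strength * Mathf.Sin(phase + v.x);
            vertices[i] = new Vector3(v.x + sway, v.y, v.z);
        }
        mesh.vertices = vertices;
        mesh.RecalculateNormals();
    }
}
```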

Cementing your duct tape: turning hacks into tools

Not really my field of interest, but the speaker was Mattias Van Camp, an ex-DAE student but (more importantly) an ex-Kweetet.be artist! He even mentioned Kweetet during his introduction; the logo was on the screen!

He then defined the term "duct tape" as used in his talk: duct tape is a hack that you throw away. What followed were two examples of duct tape code they had to write at Creative Assembly to work with massive amounts of assets. Both examples boiled down to the DRY principle, but this time applied to art assets instead of code or data. For example, they used Marmoset Toolbag to generate icons from max files, all automatically. Continuous integration FTW!

Are Games Art School? How to Teach Game Development When There Are No Jobs

Next I attended another session of the educators summit. Speaker Brendan Keogh made the case that game schools are art schools, meaning that once you graduate there are practically no jobs available. There were some interesting stats:

The sources for that data can be found here.

He then continued to make the case that we should train "work-ready" game dev students.

I'm a real fan of the first sentence on that slide! Students often do not realize this, and we should indeed tell them.

Another good take-away for me was the notion of not having first-year students create a game like X (which we actually do in Programming 2 and Programming 4) but instead having them make a game about Y. And Y can be anything, so you're not restricted to just games. The students will be much more likely to create something truly unique.

Something I should mention too: "Videogames aren't refrigerators". Just so you know.

Belgian Games Café

We quickly visited IGDA's Annual Networking Event, which was nice but not very interesting. After that we went to the Belgian Games Café: there were good cocktails, but no real beer :). Nice venue!

And it was cool to meet so many Belgian devs. The party really got started once David began showing off his retro DJ skills :)

28 March 2019

GDC 2019 - Part One

I just got back from GDC, full of inspiration! This is a small report of the first day; I intend to write about the other days in coming posts.

Marvel's Spider-Man AI Postmortem

We're starting new AI courses at DAE, so I thought it was a good idea to attend this session by Adam Noonchester of Insomniac Games. Main takeaways:

  • The Insomniac engine used to have behavior trees, but for Spider-Man they implemented "Data-Driven Hierarchical Finite State Machines". The complex behavior trees had become very difficult to debug and were not composable. The DDHFSM system uses structs containing data that drive the creation of the eventual FSM (see the sketch after these lists).
  • There was also the cool concept of "sync joints" in the animation system. An extra sync joint is added to each combat-related animation. When one character plays an attack animation and the other a response animation, the sync joints of both animations are matched, causing the animations to play simultaneously and at the correct distance from each other. The difficulty here is that you need to add a response animation for every attack animation that gets added to the system, for each character. That can quickly add up to a lot of work.
  • For combat there was a "Melee Manager" that made sure no two NPCs attacked Spider-Man at the same time. A strategy decided who attacks Spider-Man first. To prevent you from simply running away from the currently assigned attacker, there is the concept of "job stealing", where another NPC can become the attacker under certain conditions (e.g. when the player is closer to that NPC).
  • Somewhat similar was the "Ranged Manager", which controlled the ranged attacks. When deciding who attacks Spider-Man next, off-screen NPCs get a lower priority than the ones on screen. There are many details in this system; e.g. jumping in the air could cause all the enemies to stop attacking, so quite a few special cases were implemented.
  • Positioning the NPCs around Spider-Man was first done with a circle divided into wedges, but that quickly turned out not to work. Instead a gradient descent algorithm was used.
  • The web blankets that Spider-Man shoots are a collection of joints that raycast onto the surface. For non-flat surfaces the rays rotate inwards until they're also connected.
The "what went wrong" part held no surprises:
  • Flying animations: 3D animation is hard!
  • The navmesh was a lot of work
  • Moving surfaces are not nice for navmeshes
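
As promised above, a sketch of what a data-driven FSM could look like (a made-up format for illustration, not Insomniac's actual one, and leaving out the hierarchical part): the machine is described by plain structs and built from data instead of hand-written state classes.

```csharp
using System.Collections.Generic;

// A transition is pure data; it could just as well be loaded from a file.
struct TransitionData
{
    public string From;
    public string Event;
    public string To;
}

class StateMachine
{
    readonly Dictionary<(string, string), string> transitions =
        new Dictionary<(string, string), string>();

    public string Current { get; private set; }

    // Build the runtime machine purely from data.
    public StateMachine(string initial, IEnumerable<TransitionData> data)
    {
        Current = initial;
        foreach (var t in data)
            transitions[(t.From, t.Event)] = t.To;
    }

    public void Fire(string evt)
    {
        if (transitions.TryGetValue((Current, evt), out var next))
            Current = next;
    }
}

// Usage: the behavior is authored as data, not code.
// var fsm = new StateMachine("Idle", new[] {
//     new TransitionData { From = "Idle",   Event = "SeePlayer",  To = "Attack" },
//     new TransitionData { From = "Attack", Event = "PlayerFled", To = "Idle" },
// });
```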

It was clear that the FPS engine needed a different approach for Spider-Man.

Teaching puzzle design

The next talk I attended was more in line with my role as a lecturer at DAE than with that of a developer. This was a session with three speakers. First came Jesse Schell, known for his book "The Art of Game Design: A Book of Lenses".

He talked about how he approaches puzzle design in class, starting with what we would call a "class discussion", with questions and answers, to determine the definition of a puzzle. The function of a puzzle in a game is to make the player "stop and think".

  • Are puzzles games? His statement is "Puzzles are like penguins": penguins are birds that do not fly. Likewise, puzzles are games that cannot be replayed.
  • A puzzle is a game with a dominant strategy; once you have found that dominant strategy, you have solved the puzzle.
  • A riddle is not a puzzle, since it has no progression: you either know the solution or you don't. There is no knowing part of the solution.
  • Having multiple solutions to a puzzle will make players feel smart(er) for finding a different approach than someone else, while it's still one of the solutions you provided.

At the end Jesse mentioned this article on Gamasutra and this video on YouTube as further reading/viewing.

Next was Naomi Clark of the NYU Game Center, talking about how they teach puzzle design. She spoke of the "Bolf" game they play: golf with beanbags. Students must design bolf holes and then play like you would play golf, trying to finish below par.

Later in their course students create a game with PuzzleScript, which generates 2D dungeon-like puzzle games:

After that they create puzzles in pairs with Portal 2's Puzzle Maker, where the focus is on playtesting. The results quickly show which games have been thoroughly tested and which have not.

The third speaker was Ira Fay. He talked about playtesting puzzles.

  • The hardest part is that you cannot reuse your testers: once they've figured out the dominant strategy, their value is gone.
  • Small changes in the game design can cause big swings in the difficulty level of the puzzle, making it hard to design.
  • There is also a wide variance in player skill.

He then continued to talk about their escape room design course! Really cool; I wondered how they evaluate it, though.

Analyzing for workflow reduction: from many to one to zero.

This was not such a good talk; it stayed very high-level, talking in generalities without any concrete examples. The speaker basically recommended one-click or even zero-click build tools, in other words: continuous integration...

New ideas for any-angle path-finding

The best talk of the day! Daniel Harabor presented his any-angle pathfinding solution. Any-angle pathfinding works on a grid, but you can enter any grid cell at any angle, instead of only at multiples of 45 degrees.

He started by pointing out that string pulling is not always optimal, and that Theta* (which is A* with string pulling during the search) yields good results but is slow.
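
For reference, the characteristic Theta* relaxation step in its textbook form (not Harabor's code): when expanding a neighbor, first try to connect it directly to the current node's parent, and only fall back to the normal A* edge when there is no line of sight.

```csharp
using System;

// Sketch of the Theta* relaxation step. Linking a neighbor straight to the
// current node's parent is what performs "string pulling during the search".
class Node
{
    public int X, Y;
    public Node Parent;
    public float G = float.PositiveInfinity; // cost from start
}

static class ThetaStar
{
    public static void Relax(Node current, Node neighbor, bool[,] walkable)
    {
        if (current.Parent != null && LineOfSight(current.Parent, neighbor, walkable))
        {
            // Path 2: skip 'current' entirely and connect to its parent.
            float g = current.Parent.G + Distance(current.Parent, neighbor);
            if (g < neighbor.G) { neighbor.G = g; neighbor.Parent = current.Parent; }
        }
        else
        {
            // Path 1: the ordinary A* edge.
            float g = current.G + Distance(current, neighbor);
            if (g < neighbor.G) { neighbor.G = g; neighbor.Parent = current; }
        }
    }

    static float Distance(Node a, Node b) =>
        (float)Math.Sqrt((a.X - b.X) * (a.X - b.X) + (a.Y - b.Y) * (a.Y - b.Y));

    // Crude line-of-sight test: sample points along the segment.
    // A real implementation would walk the grid exactly (e.g. supercover).
    static bool LineOfSight(Node a, Node b, bool[,] walkable)
    {
        int steps = (int)(Distance(a, b) * 4) + 1;
        for (int i = 0; i <= steps; i++)
        {
            float t = i / (float)steps;
            int x = (int)Math.Round(a.X + (b.X - a.X) * t);
            int y = (int)Math.Round(a.Y + (b.Y - a.Y) * t);
            if (!walkable[x, y]) return false;
        }
        return true;
    }
}
```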

The new algorithm, called Anya, is both fast and optimal. It made me think of a line sweep algorithm. A variant called Polyanya does the same thing but with polygons. I can't begin to summarize the algorithm here, but the good news is: he posted his slides online!

He also mentioned the website movingai.com, which contains a lot of interesting resources; coolest of all are the benchmark worlds from real games on which these algorithms can be tested!

Stop fighting! Systems for non-combat AI

The last talk of the day, by Rez Graham, was about AI for NPCs that are not involved in combat. Most of the time AI work focuses on NPCs that engage the player, but NPCs who are "idle" often idle in the weirdest places or in unnatural ways.

Check out this online curve tool! He used it to create the graphs that illustrated the utility curve concept. He talked about utility theory as if everyone should know what it is; unfortunately I didn't.
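
For anyone else who didn't: the gist is that every possible action gets a score by pushing normalized inputs through response curves, and the AI simply picks the highest-scoring action. A minimal sketch in its generic textbook form (not Graham's code):

```csharp
using System;

// Minimal utility AI sketch. Each consideration maps a normalized input
// [0..1] through a response curve; an action's score combines its
// considerations, and the best-scoring action wins.
static class UtilityAI
{
    // A few typical response curves.
    static float Linear(float x) => x;
    static float Quadratic(float x) => x * x;
    static float InverseLinear(float x) => 1f - x;

    static void Main()
    {
        float hunger = 0.8f;   // normalized inputs
        float danger = 0.2f;

        // Eating is attractive when hungry and safe; fleeing when in danger.
        float eatScore = Quadratic(hunger) * InverseLinear(danger);
        float fleeScore = Linear(danger);

        Console.WriteLine(eatScore > fleeScore ? "Eat" : "Flee");
    }
}
```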

At some point he made a very good point that I can only applaud: "Throw it the fuck away". He was talking about prototypes. I should've taken a picture for my students :)

Unity's GDC Keynote

I then attended Unity's keynote, which you can watch in full here.

I'm mostly excited about DOTS; I can't wait until it's completely integrated into Unity!

Hopefully all these links are helpful, more is coming later!

20 January 2019

Blur with Unity's Post FX v2

I recently needed to blur the scene when it's in the background of a 2D UI. This being a post-process effect, I expected it to be available in the PostFX v2 stack. But as you can guess, it was not.

There is this legacy blur effect, which still works but is not integrated into the stack. I took the liberty of converting it to a PostFX v2 effect.

I only converted the "Standard Gauss" blur type; the other one, called "Sgx Gauss", is left as an exercise for the reader ;)

You can find it integrated in my unitytoolset repo on BitBucket, in the post-processing profile of the Stylized Fog scene.
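
For reference, this is the general shape such a PostFX v2 custom effect takes (the skeleton my conversion follows; the shader name and parameter below are placeholders, the real thing is in the repo):

```csharp
using System;
using UnityEngine;
using UnityEngine.Rendering.PostProcessing;

// Settings class: exposes the effect and its parameters to the stack.
[Serializable]
[PostProcess(typeof(BlurRenderer), PostProcessEvent.AfterStack, "Custom/Blur")]
public sealed class Blur : PostProcessEffectSettings
{
    [Range(0f, 10f), Tooltip("Blur strength.")]
    public FloatParameter blurSize = new FloatParameter { value = 3f };
}

// Renderer class: binds the shader and blits source to destination.
public sealed class BlurRenderer : PostProcessEffectRenderer<Blur>
{
    public override void Render(PostProcessRenderContext context)
    {
        var sheet = context.propertySheets.Get(Shader.Find("Hidden/Custom/Blur"));
        sheet.properties.SetFloat("_BlurSize", settings.blurSize);
        context.command.BlitFullscreenTriangle(context.source, context.destination, sheet, 0);
    }
}
```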

If there are better alternatives, I'm very interested!