4.30.2020

Visual Debugger 3

I started integrating Dear ImGui the other day; it's a UI framework that has been enthusiastically recommended to me multiple times, and this seemed like a good opportunity to give it a shot. I'm sure I'll have more thoughts in the future as I use it more, but thus far it has been pretty easy - the thorough documentation, seemingly responsive support (based on my Googling), and the ability to step through all the code when something isn't working have all made debugging issues much less frustrating.

However, I did hit one issue that was quite annoying and difficult to debug - my text was rendering as blocks. At first it was inconsistent, so I added it to my todo list to investigate, and kept working. Eventually I hit a state where it seemed to repro one-hundred percent of the time, which made it pretty difficult to iterate on UI code.


The initial inconsistency made me think that it was some sort of race condition - in fact, when I first encountered it and tried to step through with the debugger, the issue didn't occur. Seemingly, a heisenbug. My first thought was to dig into the font loading code of Dear ImGui, but as the callstack got deeper and the concepts became more foreign I slowly backed away. I peeked at the font texture it output, and while I wasn't certain the data looked "right", it didn't look bad.

Fun debugging trick: use the memory debug window to visualize data

Next, I started poking at the OpenGL texture data to see if something was happening there. No GL errors, but when Dear ImGui tried to render text, the texture size came back 0 x 0. This was mystifying, but it did correlate with weirdness I had seen in gDebugger, a free OpenGL debugging tool: it had displayed the texture as invalid.
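For reference, here's roughly how to ask OpenGL for a texture's dimensions - a minimal sketch, where fontTextureId is a stand-in for however you track the font texture's handle:

GLint width = 0, height = 0;
glBindTexture(GL_TEXTURE_2D, fontTextureId); // fontTextureId: hypothetical name
glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_WIDTH, &width);
glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_HEIGHT, &height);
// A live font atlas should report its real size; here it came back 0 x 0.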

Teaser: the bug is in this picture, can you spot it?

I had found this post suggesting that it might be because the texture was too large for my GPU memory, and I had been futzing with my graphics card the other night... But the texture was 512x64 RGBA (4 bytes per pixel), so that's only 128KB. Still, I eventually started looking around gDebugger to see if it would give me a picture of my GPU's memory. It didn't exactly give me that, but the picture it gave was still illuminating.


So I could now see that I had two OpenGL contexts, whatever those were. The font texture belonged to the first one, which appeared to be deleted, and the second one had the empty texture I was looking at. At this point I went back to my code, where the bug certainly was, and realized that I probably shouldn't be destroying and recreating an OpenGL context (which maintains the state used by OpenGL, like textures) every time I resized the window.
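In hindsight, my resize handling amounted to something like this - a simplified Win32/wgl sketch with made-up variable names, not my exact handler:

case WM_SIZE:
    // BUG: deleting the context destroys every GL object created under it,
    // including Dear ImGui's font texture. The new context starts empty.
    wglMakeCurrent(NULL, NULL);
    wglDeleteContext(g_glContext);
    g_glContext = wglCreateContext(g_deviceContext);
    wglMakeCurrent(g_deviceContext, g_glContext);
    break;

The fix was to create the context once at startup and handle a resize with just a viewport update:

case WM_SIZE:
    glViewport(0, 0, LOWORD(lParam), HIWORD(lParam));
    break;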

Tada!

4.25.2020

Visual Debugger 2

Two not-so-brief moments of pain. They're fresh wounds and I'm sure the sting will dull with the passage of time, but hopefully if someone else encounters them in the future this will prove useful.

The first: In Visual Studio, I was able to open my tool window once, but if I closed it I couldn't open it a second time: all I got was "This operation could not be completed". As I dug in, I determined that IVsWindowFrame::Show() was failing. I went round and round trying to figure out how to detect that the window was closed, going as far as to copy-paste the contents of the ToolWindowBase class that ships with the VS SDK to gain access to its private variables. That wasn't a totally fruitless exercise because it made the error code more accessible; unfortunately, I couldn't find the error code anywhere on the internet or in winerror.h.

COM interfaces, still a large mystery to me, made it difficult to determine the state of the object implementing IVsWindowFrame - had it actually been deleted? Should I just try to create a new one, or would I be leaking memory? Eventually I tried it - and here I'll pause to offer a tip: when your iteration times are slow, modifying data and instruction memory in the debugger is a valuable accelerator - but creating the new window failed as well, this time with an E_POINTER error. At least that was something I could read more about, but it still wasn't incredibly illuminating.

It did encourage me to reevaluate the arguments I was supplying to create the window, which eventually led me to a creation flag that I recalled adding out of whimsical curiosity when I had first started exploring the tool window extension: CTW_fMultiInstance. I wasn't able to find much detail about why it would have caused my problem, other than a comment in a GitHub submission:

// we don't pass __VSCREATETOOLWIN.CTW_fMultiInstance because multi instance panes are
// destroyed when closed.  We are really multi instance but we don't want to be closed.

So I removed the flag, was able to re-open my window, and moved on with my life.
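For anyone hitting the same wall, the change boiled down to one line in the creation flags - a sketch only, with otherCreationFlags standing in for the rest of my __VSCREATETOOLWIN flags:

// Before: the pane was destroyed on close, so the second Show() failed.
// grfCTW = otherCreationFlags | CTW_fMultiInstance;
// After: without the flag, closing the window merely hides it.
grfCTW = otherCreationFlags;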

The second: In Visual Studio, my tool window wasn't receiving arrow key input. I had previously determined that this was because it was subclassed from the ATL dialog window, but didn't know much more. After some breakpoint tracing I figured out that the arrow keys were being filtered via a dialog code message, WM_GETDLGCODE, which Windows sends to ask a window which kinds of input it wants. I did some searching and tried to follow the examples of returning DLGC_WANTARROWS, but it didn't work.

After trying different combinations of flags, it finally occurred to me that my code was being called by the ATL message map, not by Windows directly. The return value from the message map function is a BOOL indicating whether you handled the message, and the result (LRESULT) supplied to Windows is actually an out-arg. But because BOOL and LRESULT are both just typedefs for plain integer types, the compiler provided no warning that I might not be doing what I thought. The fix:

lResult = DLGC_WANTARROWS;
return TRUE;
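In context, the fix lives in something like this - a sketch of an ATL-style ProcessWindowMessage override with a hypothetical class name, not my exact code:

BOOL MyToolWindow::ProcessWindowMessage(HWND hWnd, UINT uMsg, WPARAM wParam,
                                        LPARAM lParam, LRESULT& lResult,
                                        DWORD dwMsgMapID)
{
    if (uMsg == WM_GETDLGCODE)
    {
        // Ask the dialog manager to deliver raw arrow-key messages
        // instead of swallowing them for dialog navigation.
        lResult = DLGC_WANTARROWS; // the LRESULT Windows sees: an out-arg
        return TRUE;               // the BOOL: "yes, I handled it"
    }
    return FALSE; // unhandled; fall through to the base class
}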

4.23.2020

Visual Debugger 1

TL;DR: My latest programming project is a visual debugger plug-in for Visual Studio. The intent is to make it easier to visualize 3D spatial math while debugging. Here's a picture:



Like many programmers, or probably anybody with much experience in a field, I roll my eyes at the inaccuracies when I see my area of expertise portrayed in film. But one film about a programmer that stuck with me, and this may be a false memory, was Chappie: I remember being inspired by the amount of time Dev Patel's character apparently invested in custom hardware and software to accelerate his workflow.

The problem:
Debugging issues like "why are my effects being created over there?", "what does that quaternion rotation actually look like?", and "why aren't these objects colliding?" has always been a challenge for me.

The context:
I write C++ code using Visual Studio at work (2017) and at home (2015), and while I'm aware it has a plug-in system, I had never considered writing one until recently. I decided to try because I thought it might pay dividends for my long-term productivity.

The process (so far):
Installing the Visual Studio Extensibility SDK was pretty painless, although I did have to fix one small thing in the SDK headers to get the tool window plug-in to compile. Every night I set a goal that I could achieve (or at least make significant progress on) in an hour or two - this was a lot easier at the beginning. Once I got a tool window showing up in Visual Studio and figured out how to get a handle to it, I just needed to set up an OpenGL context and draw a test shape (read: copy-paste OpenGL tutorial code for the umpteenth time). Then came some Windows message processing code to control the camera (along with some multithreading and synchronization to ferry data between the input and the rendering), followed by a library of object types that could be rendered (e.g. vector, point), and finally actually getting real data out of Visual Studio to render.
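The input-to-rendering handoff is nothing fancy; conceptually it's just a mutex-guarded snapshot, like this sketch (made-up names, not the plug-in's real code):

#include <mutex>

// Shared between the window-message (input) thread and the render thread.
struct CameraState
{
    float yaw = 0.0f;
    float pitch = 0.0f;
    float distance = 10.0f;
};

static CameraState g_camera;
static std::mutex g_cameraLock;

// Called from the message-processing code when input arrives.
void OnCameraInput(float deltaYaw, float deltaPitch)
{
    std::lock_guard<std::mutex> lock(g_cameraLock);
    g_camera.yaw += deltaYaw;
    g_camera.pitch += deltaPitch;
}

// Called once per frame by the render thread.
CameraState SnapshotCamera()
{
    std::lock_guard<std::mutex> lock(g_cameraLock);
    return g_camera; // copy out under the lock, render from the copy
}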

The biggest challenges have been:

  1. Learning how to use COM. It's not a system I've ever used before.
  2. Finding documentation for how to do something in the extensibility framework. The remarks are pretty sparse and the examples are all in C# (which means everything is slightly different).
  3. Getting information out of the watch window. My MVP (minimum viable product) goal was being able to right-click on a variable in the watch window, select a command from a context menu, and see the variable visualized in 3D in a window. From all of my Googling thus far, I haven't found an interface for this (see challenge #2). The current implementation executes the Copy command and then tries to consume the data from the clipboard (see the sketch after this list); it hurts my soul as much as I'm sure it hurts yours (which is to say both a lot and a little). I did explore trying to restore the clipboard, but Stack Overflow suggests that path is tedious at best and fraught with peril.
  4. Iteration is pretty slow: running requires launching a separate instance of Visual Studio and loading all of the associated symbols. It takes a couple minutes on my laptop.
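Here's the promised sketch of the clipboard-consuming half (the Copy command itself goes through the Visual Studio command interfaces and is omitted; plain Win32, hypothetical function name):

#include <windows.h>
#include <string>

// Read back the text the watch window's Copy command put on the clipboard.
std::wstring ReadClipboardText(HWND owner)
{
    std::wstring text;
    if (!OpenClipboard(owner))
        return text;
    if (HANDLE hData = GetClipboardData(CF_UNICODETEXT))
    {
        if (const wchar_t* chars = static_cast<const wchar_t*>(GlobalLock(hData)))
        {
            text = chars;
            GlobalUnlock(hData);
        }
    }
    CloseClipboard();
    return text;
}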
The biggest wins have been:
  1. It actually works. It's far from complete, though (see my todo list below).
  2. I've learned some new skills and gained new competence in existing areas:
    a. COM: I'm the farthest thing from an expert and I'm probably not going to use it unless I have to, but I know what it is and I appreciate the value it offers for interoperability between unrelated pieces of software.
    b. DLLs: I've never written a DLL before or learned much about how they work. Challenge #4 above (iteration speed) encouraged me to explore making a DLL (and a standalone application) for faster iteration on features not related directly to Visual Studio integration.
    c. 3D math (especially quaternions): Every time I go back to (re)write some 3D math code I think I learn a little more. I know about 0.1% more about quaternions than I did when I started, and I found more bugs in my math library code.
My todo list, as promised:
-re-open window
-improved camera controls (arrow key input: base class is swallowing arrow keys because it's a dialog, mouse click & drag?)
-object labels
-object list tree view (Dear ImGui?)
-parent objects to other objects
-customize coordinate system
-add-as sub menu (e.g. vector4 -> point3)
-pointer objects?
-use shaders instead of legacy OGL?
Back to work! 

12.17.2010

Stereoscopic (3D) Split Screen




I'm working on a new project suggested to me by my friend Alex. The idea is to present a split-screen multiplayer experience without having to worry about "screen-watching" - when your friends try to get an advantage by looking at your screen. An added benefit is that instead of having your screen crammed in the corner, it fills up the whole TV. To be clear, this technique does not present a "3-D" image to the viewer, but it does take advantage of the same science and technology that 3-D uses. Here are two separate screenshots from Halo 2. Some astute observers may note they are not from the same gameplay session, but they will still serve our purpose.

Screenshot 1 of Halo 2 (courtesy of IGN)
Screenshot 2 of Halo 2 (courtesy of Team Xbox)
Now, we're going to combine the two images into one anaglyph - the red/cyan style of image you may be familiar with from 3-D movies and pictures. The red/cyan filters will result in a loss of color quality, but this is just a proof of concept. To view the image below properly, you'll need a pair of old-school 3-D glasses (red/cyan). Put them on and look at the picture, first closing the left eye, then the right. You'll be able to see each screenshot isolated.

Screenshots 1 and 2 combined to form the red/cyan anaglyph
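The combination itself is just channel masking. Here's a sketch in C++ over packed 8-bit RGB buffers (a made-up helper to illustrate the idea, not how the image above was produced):

#include <cstddef>
#include <cstdint>

// Build a red/cyan anaglyph: the red channel comes from player A's frame
// (seen through red lenses), green and blue from player B's (cyan lenses).
void CombineAnaglyph(const uint8_t* frameA, const uint8_t* frameB,
                     uint8_t* out, size_t pixelCount)
{
    for (size_t i = 0; i < pixelCount; ++i)
    {
        out[i * 3 + 0] = frameA[i * 3 + 0]; // red   <- player A
        out[i * 3 + 1] = frameB[i * 3 + 1]; // green <- player B
        out[i * 3 + 2] = frameB[i * 3 + 2]; // blue  <- player B
    }
}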

In order for this to be implemented in a game, Player A would wear glasses with two red lenses and Player B would wear glasses with two cyan lenses. This way, neither player could see the other's screen. Using red/cyan, only 2 players can play at a time, but this technique could also be done with polarized lenses, which you see used in 3-D movies these days. With polarized lenses, more than 2 players could play at one time.
This technique is not perfect. One problem is the loss of color quality mentioned above. Polarized lenses would solve this, but they require a screen or projector capable of presenting polarized images; red/cyan only requires the user to have special glasses. Players can still cheat using polarized lenses by rotating their head to look at an opponent's screen.

6.24.2009

VR Mod Update

I'm currently working on the 6th level of the stealth missions. My time is split between trying to recreate the levels with Hammer and trying to recreate the mechanics with source code. Cameras have proved to be a tricky part that I'm attacking from both perspectives: is it possible to create a solution that isn't bloated with entities in Hammer, or should I try to create my own simple camera entity? The Hammer approach seems to be working OK for now, with a little source code tweaking, but the problem is that the model is frickin' huge (see below). I'm going to look into 'borrowing' the security gun from Dystopia for now, but I'd love to find a simple security cam model (or for someone to make one for me).

The other challenging part, which I could use some help with, is actually playing the MGS VR Missions. It's rather tedious trying to measure each map and determine its spawn points and movement paths. I've been searching for a god mode or invisibility cheat but haven't found one yet. If anyone knows of such a cheat, or would be willing to go through the VR missions and provide a simple map of each mission (I even have the executable if you need it), it would be awesome! Feel free to contact me or leave your info in the comments if you're interested.

Here are some pictures of the cameras in action in Level 6. I went back to the minimalist geometric texturing for the skybox because some people mentioned that it was difficult to spot enemies before, and in general I feel like it gives the game a cleaner look. I'm toying with the idea of some sort of wireframe for enemies, and possibly using a short-ranged wall-hack (think Dystopia's TAC Scanner) in place of MGS's standard radar.

6.18.2009

VR Mod

So this new mod project has consumed excessive amounts of my time already, but I'm loving it. Finals are winding down, Summer break is coming up, and I'm about to head home from Korea. I should have lots of free time to work on it. I got some feedback about the color scheme so I changed things up a bit. Here are a bunch of new pictures. So far I've done 3/15 missions.





6.16.2009

New Directions

So the last project fizzled pretty quickly. It's too bad because my partner and I had a lot in common, but I guess common interests don't make a game.

I've already moved on to and past one project, about weather control. The premise was that you would use weather to control the outcome of events. It was inspired mechanically by Opera Omnia, and thematically by Metal Gear Solid (conspiracy-theory story). The biggest problem was that I had trouble figuring out how to make the game's main mechanic fun and challenging.

My next project is something with a tried-and-true mechanic: shooting things. I've been playing with Valve's Source engine and learning the ropes. While surfing FPSBanana for skins (I'm not an artist), I found a few Metal Gear Solid inspired ones. I thought that making some maps based on MGS's VR Missions would be fun, but I quickly discovered that some of the entities would need some tweaking in order to fit my needs. The two biggest changes I want to make, though, are:
  • Adding a radar system, akin to MGS, but perhaps without the 'jamming' feature
  • Dumbing down the AI by limiting the visual range of enemy NPCs
Below are some screenshots of what I've got so far (models courtesy of various FPSBanana members). I think the color scheme could use some work; any and all feedback is appreciated.