How does (Windows) Auto HDR compare to Special K HDR? Advantages of each?

What are the pros and cons of each implementation?
And how do they differ in their technical implementation?

It’s still not clear to me whether Windows 11 Auto HDR is able to retain more brightness detail than playing the same game in SDR (i.e. by reading information directly from the rendering buffers), or whether it just tone maps with a post process without actually adding any information.

I know for sure that Special K does “add” information and isn’t just a post-process tone map, and I have the feeling Auto HDR does the same, but Microsoft doesn’t explain this at all (though the fact that Auto HDR is DX 11+ kind of speaks for itself).

Special K v Native HDR - Side by Side comparison - YouTube SK v Native
Special K v AutoHDR - Side by Side comparison - YouTube SK v AutoHDR

In short they’re all sorta the same: Native is ahead in some titles and behind in others compared to SK. AutoHDR is always behind SK, but it has some major wins on the basis that it doesn’t hook into the game and thus can be used in online games :slight_smile:

Great stuff. I can finally exclude the idea that AutoHDR is just a cooler SDR to HDR tone mapping. Clearly it does read the additional information in DX 11 and 12 rendering buffers to produce an HDR image, similar to how Special K does.
I’ve left a comment on your vids :slight_smile:.

I don’t think it does. You can capture and record AutoHDR titles as if they are in SDR, and ReShade and the like work as if you are applying them to the SDR image. This, to me, hints that AutoHDR is just a post-processing effect applied when Windows does its HDR compositing.

SK HDR under DX12 works the same way.
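For anyone wondering what “post-process on the already-rendered SDR image” could look like in practice, here’s a minimal CPU-side sketch of the kind of per-pixel math such a pass might do: decode the finished sRGB frame to linear light and boost highlights toward a chosen peak before handing it off in an HDR format. The highlight curve and all names here are my own placeholders, not Microsoft’s machine-learned model or Special K’s actual shader.

```cpp
// Simplified, hypothetical sketch of an SDR -> HDR "expansion" pass.
// Auto HDR / Special K run this kind of thing as a GPU pixel shader on the
// swap chain image; this CPU version only illustrates the per-pixel math.
#include <cmath>

struct RGB { float r, g, b; };

// Decode an sRGB channel (0..1) to linear light.
static float srgb_to_linear(float c) {
    return (c <= 0.04045f) ? c / 12.92f
                           : std::pow((c + 0.055f) / 1.055f, 2.4f);
}

// Expand a finished SDR pixel into scRGB-style linear values (1.0 == 80 nits),
// pushing bright highlights toward a chosen peak. The curve is a made-up
// placeholder, not the machine-learned model Microsoft describes.
RGB expand_sdr_pixel(RGB sdr, float peak_nits = 1000.0f) {
    const float sdr_white_nits = 80.0f;  // scRGB reference white
    RGB lin { srgb_to_linear(sdr.r), srgb_to_linear(sdr.g), srgb_to_linear(sdr.b) };

    // Luma-weighted boost: midtones stay close to 1:1, the top end gets lifted.
    float luma  = 0.2126f * lin.r + 0.7152f * lin.g + 0.0722f * lin.b;
    float boost = 1.0f + (peak_nits / sdr_white_nits - 1.0f) * std::pow(luma, 4.0f);

    return { lin.r * boost, lin.g * boost, lin.b * boost };
}
```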

Ah, I thought that Special K was directly reading the rendering buffers, then switching their color/brightness mappings from SDR to HDR and maybe skipping color correction. I’m not even sure if this is possible or if it makes sense, but I thought that’s how it worked, especially because the base HDR mode of Special K is called “Passthrough”, which made me think it was just skipping the internal “HDR” image mapping to SDR.

You might be right about your Windows AutoHDR theory though: if ReShade works as it does in SDR, then they might not convert the internal buffers to HDR formats, but only apply a post process.
Also yeah, skipping the game engine’s color mapping would often give horrible results and is just not feasible.

I keep forgetting not everyone watches Microsoft’s dev videos… Anyway, Microsoft breaks down how the post-processing pixel shader behind Auto HDR works in various sections of the video below.

TL;DR:

  • Auto HDR uses a machine-learned method that was trained on “billions of pixels” of HDR game data.
  • It runs as a post-process pixel shader pass on the swap chain at presentation (Present) time.
  • It uses swap buffer information, such as the pixel format, to determine whether the game is eligible for Auto HDR (see the sketch after this list).
  • Running as a shader means there’s a small but measurable performance impact, both on GPU render time and on memory, since an additional HDR swap buffer has to be allocated. The render time overhead primarily scales with the game resolution.
  • Microsoft considers Auto HDR a post-process “HDR color decompressor”: since it has been trained on “billions of pixels” of HDR game output, it can recover the color data that gets compressed by modern game engines’ internal SDR tone mappers.
  • Depending on the display capabilities, Auto HDR can expand the game’s SDR colors to target up to 1000 nits and the DCI-P3 color gamut.
  • Auto HDR has been tuned to minimize its effect on midtones and overall color appearance (artistic intent).
  • Natural and photorealistic lighting works really well with the algorithm. However, some non-photorealistic styles may produce unexpected results.
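To make the “swap buffer information” and “up to 1000 nits / DCI-P3” bullets a bit more concrete, here’s a rough DXGI-level sketch of what that could look like: checking whether a game presents a typical 8-bit SDR back buffer, and tagging an HDR output buffer with an appropriate color space. Only the DXGI types and enum values are real; the helper functions and the overall flow are my own guess, not Microsoft’s actual implementation.

```cpp
// Hypothetical sketch: inspect a DX11/DX12 swap chain the way an Auto HDR-style
// eligibility check might, and tag an FP16 output as scRGB for the compositor.
#include <dxgi1_4.h>

// Guess at eligibility: the game presents a common 8-bit SDR back-buffer format.
bool LooksLikeSdrGameOutput(IDXGISwapChain3* swapChain) {
    DXGI_SWAP_CHAIN_DESC1 desc{};
    if (FAILED(swapChain->GetDesc1(&desc)))
        return false;

    return desc.Format == DXGI_FORMAT_R8G8B8A8_UNORM ||
           desc.Format == DXGI_FORMAT_R8G8B8A8_UNORM_SRGB ||
           desc.Format == DXGI_FORMAT_B8G8R8A8_UNORM;
}

// One common way to hand an expanded HDR image to Windows: an FP16 swap chain
// flagged as linear Rec.709 (scRGB), which can carry values well above SDR white.
HRESULT TagOutputAsScRGB(IDXGISwapChain3* swapChain) {
    return swapChain->SetColorSpace1(DXGI_COLOR_SPACE_RGB_FULL_G10_NONE_P709);
}
```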

Don’t mind the quotation marks around the “billions of pixels” claim… I can’t help but do that, since it sounds ridiculously low when you remember that two seconds of gameplay at 4K and 60 FPS makes up almost a billion pixels… In comparison, a trillion pixels would be the equivalent of analyzing a bit more than half an hour of footage recorded at 4K and 60 FPS… So yeah, “billions of pixels” sounds like a surprisingly small amount of sample data to train the algorithm on, unless they picked out a shit ton of static frames and trained on those.
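For reference, the back-of-envelope math behind those numbers:

$$3840 \times 2160 \times 60\ \text{fps} \times 2\ \text{s} \approx 9.95 \times 10^{8}\ \text{pixels}$$

$$\frac{10^{12}\ \text{pixels}}{3840 \times 2160 \times 60\ \text{fps}} \approx 2009\ \text{s} \approx 33.5\ \text{min}$$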

Thanks a lot. No, these videos never came up in the countless times I’ve searched for AutoHDR information.