Topic-Free Mega Thread - v 1.11.2020

Suppose developers tie loading screens (EDIT: or the loading of something) to framerate; one could then use a hotkey combination to set the FPS limit to something like ~300 with one toggle and back to the refresh rate with another.

Although with flip model and VSync, I think the way it works is that even with variable refresh rate the framerate never disengages and goes above the refresh rate, hmm.

Wonder if the V-Sync toggle itself could be flipped dynamically like that, maybe.
At least load times are normally not too unbearable even when capped like this, although some that are timed together with completely unskippable movie playback can take a while.

It is; sometimes fine gradients in games that use 10-bit can be a tell, but from what I know it’s easy to mix up with texture compression and shader banding too.

8-bit is still the standard as well, with 10-bit now breaking through a bit more due to HDR, though outside of HDR being enabled it might still fall back to 8-bit.

Driver settings and some overrides might try to force it higher, or benefit from reducing dithering slightly. The difference is roughly 1.07 billion colors versus 16.7 million, or 1024 gradient steps per channel with 10-bit versus 256 with 8-bit, hence the benefit with gradients and banding.
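Just to put rough numbers on that (a quick arithmetic sketch, nothing Special K-specific):

```cpp
#include <cstdio>

int main (void)
{
  // Levels per channel for a given bit depth: 2^bits
  const unsigned long long steps8  = 1ull << 8;   //  256 levels / channel
  const unsigned long long steps10 = 1ull << 10;  // 1024 levels / channel

  // Total colors across R, G and B: steps^3
  printf ("8-bit : %llu colors\n", steps8  * steps8  * steps8 );  // ~16.7 million
  printf ("10-bit: %llu colors\n", steps10 * steps10 * steps10);  // ~1.07 billion

  return 0;
}
```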

Alien Isolation supported deep color, and I think that’s one of the first games that utilized this, though usually you wouldn’t notice it; still nice to have. :slight_smile:

From Reddit.
https://www.reddit.com/r/pcgaming/comments/8ek1sa/10bit_colour_in_games_where_is_it/

There are other tests and such, but this is a good example of where it can benefit.
But then there’s texture compression (skyboxes, always the skyboxes!) and shader banding, or optimizations that use less precise calculations or formats, which is faster but comes with trade-offs.

ReShade has some methods for it, but it’s useful to know they mostly aid ReShade itself, although the dither shader (I think it is) can smooth over extreme banding, with grain as the trade-off instead.
(Whereas the deband shader only affects the ReShade shaders, and only if placed correctly so the other shaders run first and deband is applied last.)

Yeah, the simulation driving model has a hot topic on the Steam forums for that now: 10 up to 30 attempts for some dedicated people, and lamentations about how the developers removed a shortcut. Since in the original game Tommy isn’t a race car driver, the idea is simple: cheat. :smiley:
(Doing it legit works, but the conditions, the driving model and some clunkiness too make it clear why a later patch allows reducing this challenge significantly, ha ha.)

EDIT: For the challenge itself, or perhaps the achievement I suppose; clearing the game on hard/simulation without ever changing it unlocks one of those.
It’s a neat little touch and a reference to some of the things in Mafia 1 classic. The new game isn’t quite tuned for it, but it’s there, letting players try the harsher driving rules and extra restrictions in addition to the overall combat and gameplay tweaks.

SpecialK64.7z (7.6 MB) SpecialK32.7z (6.3 MB)

Improved Render Latency measurement, added 32-bit support and reverted to old framerate limiter statistics in non-Flip model games / D3D9 / GL.


Note: Don’t compare another framerate limiter’s performance after Special K’s limiter has been on for a single frame, SK will have optimized the SwapChain and invalidated the results.


Nice.

So flip discard to enable it, the waitable chain is now either 1 or 0 and handled by Special K, and then pre-rendered frames at one above the back buffer count, was it?

3 or so back buffers; a bit more might be more ideal for higher refresh rates if flip model is engaged, and then just matching the FPS limit with what you get in the game, possibly lowering settings a bit to hit higher numbers for more gains here with latency and such.
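For reference, a minimal sketch of what a flip-model swapchain with a waitable object and three back buffers looks like in plain DXGI (my own illustration, not Special K’s code; `factory`, `device` and `hwnd` are assumed to already exist):

```cpp
#include <dxgi1_4.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// Assumed to exist already: IDXGIFactory2 *factory, IUnknown *device, HWND hwnd.
ComPtr <IDXGISwapChain1> swapchain;

DXGI_SWAP_CHAIN_DESC1 desc = {};
desc.Format      = DXGI_FORMAT_R8G8B8A8_UNORM;
desc.SampleDesc  = { 1, 0 };
desc.BufferUsage = DXGI_USAGE_RENDER_TARGET_OUTPUT;
desc.BufferCount = 3;                              // back buffer count
desc.SwapEffect  = DXGI_SWAP_EFFECT_FLIP_DISCARD;  // flip model
desc.Flags       = DXGI_SWAP_CHAIN_FLAG_FRAME_LATENCY_WAITABLE_OBJECT;

factory->CreateSwapChainForHwnd (device, hwnd, &desc, nullptr, nullptr, &swapchain);

ComPtr <IDXGISwapChain2> swapchain2;
swapchain.As (&swapchain2);
swapchain2->SetMaximumFrameLatency (1);            // at most 1 queued frame

// Block at the top of each frame until the swapchain is ready for a new one:
HANDLE waitHandle = swapchain2->GetFrameLatencyWaitableObject ();
// WaitForSingleObjectEx (waitHandle, 1000, TRUE);
```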

HDR mastering was 10 or 11 bits, with 8 possible but potentially problematic (not that I have HDR to test with, or at least not the actual proper way this is done in hardware); I think it was something like that.

Eh it’s probably written out on the PC Gaming Wiki if nothing else since the SpecialK page there was pretty detailed last I checked. :slight_smile:

Oh, and for the texture cache and optimizing it: is that from just comparing (with Special K) what the game is trying to use and setting the limit after that, or just going by a ratio of available RAM/VRAM?
(In with it, all of it, every single texture. :smiley: )
(“Gotta cache ’em all.”)

EDIT:
Well just have to try and see how it goes.

On Death Stranding, frame limiter isn’t kicking in on these last 2-3 versions you shared.
0.11.0.48 works just fine, and earlier versions of 0.11.0.49 did too.

I’m not surprised, I haven’t tested a D3D12 game in a while. D3D11 and D3D12 both share DXGI as a common interface, it’s extremely easy for me to forget the code is supposed to run on D3D12 as well :slight_smile:

D3D9 and GL don’t have that problem, there’s nothing shared between them and earlier / later graphics APIs. I can more or less assume nothing I’m doing breaks those, lol.
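(For the curious, that overlap is also why telling the two apart takes an extra step; here’s a rough sketch of one way to check which API sits behind a given DXGI swapchain, not necessarily how Special K does it:)

```cpp
#include <dxgi.h>
#include <d3d11.h>
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// Both D3D11 and D3D12 present through IDXGISwapChain, so one hook on
// Present sees both. Asking the swapchain for its device interface is one
// way to tell the two apart:
bool IsD3D11SwapChain (IDXGISwapChain *pSwapChain)
{
  ComPtr <ID3D11Device> pDev11;
  return SUCCEEDED (pSwapChain->GetDevice (IID_PPV_ARGS (&pDev11)));
}

bool IsD3D12SwapChain (IDXGISwapChain *pSwapChain)
{
  // Assumption: on a D3D12 swapchain (created from a command queue),
  // GetDevice can hand back the owning ID3D12Device.
  ComPtr <ID3D12Device> pDev12;
  return SUCCEEDED (pSwapChain->GetDevice (IID_PPV_ARGS (&pDev12)));
}
```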

Oh, nice.

Whatever the hell ATLUS did to their game, it’s running like garbage now. But this shows the difference between frametime and latency quite nicely. The game’s running at 2 FPS (so ~500 ms frametime), however, the render queue is so backed up with undisplayed frames that latency is > 3000 ms.

In HDR this doesn’t seem particularly practical, but …

Experimenting with a latency histogram that runs behind the frametime graph. Getting it so that both things are legible at once is going to take some work.

SpecialK64.7z (7.6 MB)

Let me know if it actually seems practical in SDR at least, because I think it’s a really badass feature if I can fine-tune it enough. It’s fascinating watching latency go up when the Xbox Game Bar is open; there are lots of things on your system stealing latency from you, and you’d never know it when the framerate is 100% stable but latency is not :slight_smile:

Again, this requires flip model or fullscreen exclusive.

Well, in Watch Dogs 2 it basically always states a render latency of 0 frames when the window is active. An inactive window jumps between 0 and 1 frame on the fly.

… Turns out I didn’t even need those additional low latency options :smiley:

Ooo, that sounds cool. Gonna take it for a spin in Watch Dogs 2.

Well, it helps that you have G-Sync active :stuck_out_tongue: Virtually none of this matters if you’ve got the benefit of G-Sync.


Also, when using G-Sync, always use the Low Latency Mode in the framerate limiter’s advanced tab. That changes when the limiting is applied: with it off, the CPU creates a frame but it gets delayed until the target FPS interval.

With it on, the new frame doesn’t start until the target interval. Basically, it polls input closer to the actual image going out onto the screen… but it would cause unstable framerates w/o G-Sync.
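Roughly, the placement difference could be pictured like this (a simplified sketch of the idea with made-up helper names, not Special K’s actual limiter code):

```cpp
// Hypothetical helpers, declared only so the sketch is self-contained:
void PollInput              (void);
void SimulateAndRender      (void);
void Present                (void);
void WaitUntilNextFrameSlot (void);   // sleeps until the limiter's next slot

// Low Latency Mode OFF: the frame is built right away, then the finished
// frame is held back until the target interval, so the input it carries ages.
void FrameLoop_Normal (void)
{
  PollInput              ();
  SimulateAndRender      ();
  WaitUntilNextFrameSlot ();
  Present                ();
}

// Low Latency Mode ON: wait first, so input is polled and the frame is built
// as late as possible, right before it goes out to the display.
void FrameLoop_LowLatency (void)
{
  WaitUntilNextFrameSlot ();
  PollInput              ();
  SimulateAndRender      ();
  Present                ();
}
```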

A minor thing I noticed with the new latency histogram was that it didn’t follow the frame rate histogram perfectly – meaning that occasionally the latency histogram would drift away from the specific frame it concerned.

Yeah, I’m aware of that. The DWM doesn’t update statistics every frame when latency is high… it updates them at the same rate (n-many frames) reported in the UI :slight_smile:

I’ve decided to just fill in last known good values so that it operates at the same speed as the line graph for frametime.

SpecialK64.7z (7.6 MB)

From what I understand, PresentMon does the same thing. A lot of its data is not actual sampled data points, but just re-using last known values.
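(A rough sketch of the “carry the last known value forward” idea against DXGI frame statistics; my own illustration, not the actual code:)

```cpp
#include <dxgi.h>

// The OS only advances these statistics at its own pace, so if the counter
// hasn't moved since the last poll, just repeat the previous sample.
static DXGI_FRAME_STATISTICS g_lastGood = {};

DXGI_FRAME_STATISTICS SampleFrameStats (IDXGISwapChain *pSwapChain)
{
  DXGI_FRAME_STATISTICS stats = {};

  if (SUCCEEDED (pSwapChain->GetFrameStatistics (&stats)) &&
      stats.PresentCount != g_lastGood.PresentCount)
  {
    g_lastGood = stats;   // a fresh data point
  }

  return g_lastGood;      // otherwise: last known good value
}
```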

What do you suppose the odds of getting a GeForce RTX 3080 before Watch_Dogs: Legion are at this point? :slight_smile: I really thought that by being willing to blow my money on an RTX 3090 K|NGP|N, I wasn’t going to have much trouble in this department. But stock apparently sucked across all SKUs.

I really want a new card before Cyberpunk, Watch_Dogs and Assassin’s Creed. The appeal of 4K/120 and/or 4K/12-bit HDR is just too great. I can sort of do that with my RTX 2080 Ti right now, but the adapter for DP1.4 → HDMI 2.1 is flaky (Win+Ctrl+Shift+B is needed many times when the signal just doesn’t negotiate correctly) and I know an actual HDMI 2.1 GPU would be a better experience.

Watch_Dogs Legion comes out October 29th. Stock for the 3080 and 3090 looks to be supplied in smaller quantities around early and mid October, but any larger quantities could take a while, with back orders on top of that as well.

So mid-November is my expectation, but even these larger shipments will probably sell out fast, though perhaps not in hours like the first two batches here.

EDIT: Kinda hard to put the demand into words.
Microcenter had people queuing to buy these cards about a week before street date if that says anything.

Damn, I’m thinking I’d be better off with a Series X then and just buy this year’s AAA games on console. NVIDIA’s creating sales for AMD consoles, lol.


So, GeForce Experience has absolutely no support for HDR video capture. I don’t know what they were smoking. With my luck they probably want me to turn HDR on using their stupid proprietary crap in NvAPI, which I’m not doing because I already wrote nice portable DXGI HDR code.
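For context, the portable DXGI route is roughly this (a minimal sketch, not the code from Special K): give the swapchain an HDR-capable back buffer format and tag it with the HDR10 color space through IDXGISwapChain3.

```cpp
#include <dxgi1_4.h>   // IDXGISwapChain3 and the color space enums

// Minimal sketch of API-agnostic HDR10 setup on an existing flip-model
// swapchain. Assumes the back buffers already use an HDR-capable format
// such as DXGI_FORMAT_R10G10B10A2_UNORM; error handling omitted.
void EnableHDR10 (IDXGISwapChain3 *pSwapChain3)
{
  const DXGI_COLOR_SPACE_TYPE hdr10 =
    DXGI_COLOR_SPACE_RGB_FULL_G2084_NONE_P2020;   // PQ, BT.2020 primaries

  UINT support = 0;
  pSwapChain3->CheckColorSpaceSupport (hdr10, &support);

  if (support & DXGI_SWAP_CHAIN_COLOR_SPACE_SUPPORT_FLAG_PRESENT)
    pSwapChain3->SetColorSpace1 (hdr10);
}
```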

AMD’s people have been fooling around on Twitter as well though in a more joking manner rather than anything official.

Perhaps learning a bit from what they pulled with Epyc (Zen for datacenters I think.) and early Vega promotions.


(This one’s a bit milder.)

I have basically given up hope of getting a new GPU this year… friggin hell, the stock situation seems ridiculous.

Initial demand is going to be extremely high and will be split between gamers and smaller work-related purposes, like apps and rendering and such, since the Ampere GPUs have more memory and are outright faster while also costing less than the Pascal or Turing series, including the Titan V which I saw listed at $5,000 (not sure). I assume some of the floating-point and integer precision calculations are also preserved on Ampere instead of running at a reduced rate, making them attractive for AI and CUDA overall, plus GPU mining.

Unless NVIDIA can get a bigger shipment of cards out worldwide, this demand could last months from what I’m thinking, and resale value and store prices might also increase further due to said demand and the cards still selling out quickly.

It is the experimental build, right?

3.20.5.48

https://uk.download.nvidia.com/GFE/GFEClient/3.20.5.48/GeForce_Experience_Beta_v3.20.5.48.exe

There’s a toggle in it too for enabling experimental features; I was under the impression that only related to the overclocking stuff, though.

Stable version being 3.20.4.14

https://uk.download.nvidia.com/GFE/GFEClient/3.20.4.14/GeForce_Experience_v3.20.4.14.exe

Hmm, and I can see the geolocation thing adding that Sweden/Swedish bit to the URL, but it should switch over, and the direct download URLs are there too for convenience.

EDIT: Sudden random Swedish language embedded URL.
Hmm, curious; the top one remains a link only.

Well I get that all the time in the Steam chat so nothing new.
(Picky thing that, same site and it’s still pretty random as to whether it embeds it or just shows the URL. :stuck_out_tongue: )

NVIDIA GeKraft Upplevelse

Good thing we don’t localize names, generally.