Hello, everybody! By “pre-rendered frames 0” I mean the lowest lag possible. For dx9 there’s such a thing as WaitBusyRenderer in ENB, which gives better response time compared with “pre-rendered frames 1”. There is also a d3d_antilag dll for dx9 which can do the same thing. I wonder if such a thing is possible with dx11 and whether it can be done with SpecialK. Nvidia has Low Latency Mode=ultra for that purpose, and I guess there’s something similar from AMD (Radeon Anti-Lag??).

I have an older intel HD gpu which has close to nothing to tweak driver-side but works wonderfully with SpecialK. In dx9 games I use enb or antilag and the input lag is ok even at low fps. In dx11 I use an fps limiter, either SpecialK’s or RTSS, and when fps hits the cap the lag is ok, but below the cap it is not. I play with vsync off (of course), fullscreen exclusive, and it’s windows 7. Thank You)
the special k control panel, in the swapchain management section, has the “maximum device latency” setting, which sets the maximum pre-rendered frames. you can technically type a 0 in there, but that’s basically an invalid value. 1 is the lowest it actually works at, and the valid range goes from 1 to 16 (sk only lets you set it up to backbuffer count + 1, so if you wanted maximum device latency at 16 for some reason… you’d also set sk’s “backbuffer count” setting to 15). something better that sk has is the option to enable nvidia reflex in dxgi games that don’t have support for it, but that does require an nvidia gpu.
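for reference, here’s a minimal sketch of the dxgi call that a “maximum device latency” style setting presumably drives under the hood. this is just an illustration against a plain d3d11 device (the `device` pointer and `SetMaxFrameLatency` helper name are assumptions for the example), not sk’s actual code:

```cpp
#include <d3d11.h>
#include <dxgi.h>

// Lower the DXGI frame-latency queue for an already-created D3D11 device.
HRESULT SetMaxFrameLatency(ID3D11Device* device, UINT frames /* 1..16 */)
{
    IDXGIDevice1* dxgiDevice = nullptr;
    HRESULT hr = device->QueryInterface(__uuidof(IDXGIDevice1),
                                        reinterpret_cast<void**>(&dxgiDevice));
    if (FAILED(hr))
        return hr;

    // 1 is the lowest valid value; passing 0 only resets latency to the default (3).
    hr = dxgiDevice->SetMaximumFrameLatency(frames);

    dxgiDevice->Release();
    return hr;
}
```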
also, special k doesn’t officially support windows 7, so several features simply won’t work there…
Thank you! Yes, I actually want maximum device latency at 0, but SK doesn’t accept this value))
0 is basically an invalid setting for SetMaximumFrameLatency (or rather, it just switches back to the default). from the documentation:
The value defaults to 3, but can range from 1 to 16. A value of 0 will reset latency to the default. For multi-head devices, this value is specified per-head.
Frame latency is the number of frames that are allowed to be stored in a queue before submission for rendering. Latency is often used to control how the CPU chooses between responding to user input and frames that are in the render queue. It is often beneficial for applications that have no user input (for example, video playback) to queue more than 3 frames of data.
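just to illustrate what the docs mean (hypothetical snippet, same headers as the sketch above, assuming an existing ID3D11Device* named `device`): passing 0 doesn’t give you “no queue”, it only restores the default of 3, which you can confirm by reading the value back:

```cpp
IDXGIDevice1* dxgiDevice = nullptr;
if (SUCCEEDED(device->QueryInterface(__uuidof(IDXGIDevice1),
                                     reinterpret_cast<void**>(&dxgiDevice))))
{
    dxgiDevice->SetMaximumFrameLatency(0);   // "reset latency to the default"

    UINT latency = 0;
    dxgiDevice->GetMaximumFrameLatency(&latency);
    // latency should now read back as 3 (the default), not 0

    dxgiDevice->Release();
}
```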
i don’t know of another method to lower the pre-rendered frames queue, or to lower it further… other than using an fps limiter and capping fps low enough, or using a driver option (like nvidia’s ultra low latency mode or nvidia reflex…)
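and just to show what the fps limiter approach amounts to on the cpu side (a rough sketch, not sk’s or rtss’s actual limiter; the `FrameLimiter` class is made up for the example): waiting after Present keeps the game from queuing frames ahead, which is why a cap below what the gpu can sustain tends to keep the lag down.

```cpp
#include <chrono>
#include <thread>

class FrameLimiter
{
    using Clock = std::chrono::steady_clock;

    Clock::duration   frameTime_;
    Clock::time_point next_;

public:
    explicit FrameLimiter(double targetFps)
        : frameTime_(std::chrono::duration_cast<Clock::duration>(
              std::chrono::duration<double>(1.0 / targetFps))),
          next_(Clock::now() + frameTime_)
    {
    }

    // Call once per frame, right after Present().
    void Wait()
    {
        // Sleep for most of the interval, then spin the last bit for accuracy.
        std::this_thread::sleep_until(next_ - std::chrono::milliseconds(2));
        while (Clock::now() < next_) { /* busy-wait */ }

        next_ += frameTime_;

        // If we fell badly behind schedule, resynchronize instead of rushing frames.
        if (Clock::now() > next_)
            next_ = Clock::now() + frameTime_;
    }
};
```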
Thank you for answering! So I guess you are right)) Had to ask anyway… I could have missed something))