Topic-Free Mega Thread - v 1.11.2020

0.11.0.48 has trouble launching some older games: Vampyr gets stuck at the startup banner, and Battle Chasers: Nightwar crashes to desktop.
I think it’s because SleeplessRenderThread and SleeplessWindowThread are on by default. Changing those settings to false allows the games to launch normally.
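For reference, these can be toggled in Special K’s ini config as well as through the UI. A minimal sketch, assuming the keys sit under the [Render.FrameRate] section (the section name is from memory and may differ between SK versions):

```ini
[Render.FrameRate]
; Disable the busy-wait render/window threads so older games can get past startup
SleeplessRenderThread=false
SleeplessWindowThread=false
```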

The good thing is it only needs to last one year, then the next GPU generation comes around. :smiley:
…That isn’t quite how it works, ha ha.

A 3090 in this economy would be, what, six or so years of paying it back via monthly partial payments? Not really happening here, but it’s going to be really fun to read up on how the GPU performs. :smiley:

Plus any eventual third-party variants and custom models, plus the later SKUs: potentially a 3080 Ti with 15–20 GB, and other positions down the stack including a mid-range 3060 or a 3050 later on, I assume.
The 3070 is what I assume will be the gamer price/performance GPU here, the 3080 the enthusiast card, and the 3090 is for the 1% who want the fastest and for whom cost isn’t an issue. NVIDIA could also make a full-on, almost workstation-class Ampere Titan down the line with close to the full 40-or-so GB of memory, too.
(Thousands of Euro/USD less than a full-on Quadro Ampere, or whatever the workstation cards are called now, but still suitable for many work and compute tasks, from AI to CUDA and beyond, outside of servers and the like for larger businesses.)

EDIT: Ohh.

Spiritual successor to Hexen 2 apparently, nice. :smiley:

This is why I don’t measure allocation; it’s useless. These two situations are the same.

If you’re already managing 60 fps in the games you care about (with some headroom for higher settings), then you should be fine. I’m running a 1080 Ti with an R5 1600, and I’m fine because I stick to 4K at 40–60 fps (FreeSync range).

You generally go for a higher-end CPU when you’re targeting a high refresh rate.

Hard to say, simply because next-gen third-party games will now have a much better base CPU spec to work with (Zen 2 with 8 cores on consoles), but who knows how soon devs will actually utilise Zen 2 to its fullest (or close to it). It’s possible that most if not all games on next-gen consoles will have a 60 fps option at a lower resolution, because the new consoles have that powerful a CPU. Of course, some games could still push the CPU hard enough to drop to 30 fps, but we’ll have to see if and when that happens. I expect console CPUs to be pushed a lot more towards the latter half of the generation, which is a few years away.

On the other hand, we don’t know how good the Ryzen 4000 series will be, and I think they launch this year? Or at least they were originally meant to. If they’re a good step up from Zen 2, and an upgrade is desired, then yeah, it should be worth it.

So that’s what got implemented during the delay of the Crysis remaster, ha ha.

https://twitter.com/Crysis/status/1302652932064714756

Which kinda looks like a config tweak and a time-of-day (TOD) change to make it a bit more cinematic, plus blue.
Not pure white though, thankfully.
(Not completely cinematic super hard black level and gamma crushed either, even more thankfully. :smiley: )

EDIT: I know it’s remaster work, but they could have done remake-level work on the tree and leaf sprites of said trees. I suppose a number of assets will look their age, though early images show that they have updated others as well: not just materials or textures, but the actual meshes themselves.

I intend to pick up Crysis Remastered on release and give it a spin. It’s awesome that they updated the game to an engine version that actually utilizes multicore CPUs properly, so we aren’t limited to the single-threaded bottleneck of the original Crysis (technically it makes use of a few more threads than one, yes, but the vast majority of the workload is constrained to a single thread).

I also recommend DigitalFoundry’s recent video where they visited the studio and got to playtest the game and interview the devs for a day, as it can provide some additional details about it.

Oh, and don’t forget that the game is also planned to feature not only CryEngine’s “native hardware- and API-agnostic ray tracing solution,” but also support Nvidia’s DLSS and hardware-based ray tracing when playing in Vulkan, using Nvidia’s VKRay Vulkan extension.

A lot of work has gone into this, basically; it’s not just a “config tweak” or the original Crysis with a fan-made texture pack applied to it.


Ooh that does look super nice


Vanilla, for comparison’s sake

From what I recall, the only real issue with the initial leak was that faces were basically untouched, so it didn’t look like a complete remaster. I don’t think they’ve had time (even with the delay) to address faces and every character model either, but it still looks like a step up from the original overall.

Thank you, this is what I thought was correct, but I was only 99% sure of it. So pretty much any time my allocation is below the total of my card, I’ll just toss more Special K texture cache at it.

The situation’s going to change when the new consoles ship and games start utilizing the SSD as a lower tier of memory. DirectStorage is roughly the same; we can expect some of these games to be ported off the modern consoles onto PC and neglect to use DirectStorage, at which point you need a @#$%load of VRAM for the PC version :slight_smile:

To be fair, that’s the story for all new console generations.

Would it be possible for you to implement a VRAM usage logging feature that runs even when your UI is not working? I’d like a way to measure DX12 games.

Edit: or perhaps a plug-in for RTSS that uses the DXGI memory budget? This would be maaaaaaasively popular.

That already exists; refer to logs/dxgi_budget.log

Hot diggedy!

Can you add average budget and usage to that log? The current information is not quite as useful. Terraria @ a max budget of 9390 MiB VRAM, lol. The entire game is 280 MB.
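For what it’s worth, the statistic being asked for is just a running aggregate over the same periodic budget/usage samples the log already tracks. An illustrative Python sketch (Special K itself is C++; the class and method names here are made up):

```python
class VramStats:
    """Running min/max/average over periodic VRAM usage samples (values in MiB)."""

    def __init__(self):
        self.count = 0
        self.total = 0
        self.min = None
        self.max = None

    def sample(self, usage_mib):
        # Called once per polling interval with the current usage reading.
        self.count += 1
        self.total += usage_mib
        self.min = usage_mib if self.min is None else min(self.min, usage_mib)
        self.max = usage_mib if self.max is None else max(self.max, usage_mib)

    @property
    def avg(self):
        # Mean over all samples; 0 before any sample arrives.
        return self.total / self.count if self.count else 0
```

Logging the average alongside min/max would make a brief spike distinguishable from a consistently high baseline, which min/max alone can’t show.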

From an old log for Terraria:

06/21/2020 00:36:01.365: --------------------
06/21/2020 00:36:01.365: Shutdown Statistics:
06/21/2020 00:36:01.365: --------------------

06/21/2020 00:36:01.366: Memory Budget Changed 2 times

06/21/2020 00:36:01.366: GPU0: Min Budget: 09390 MiB
06/21/2020 00:36:01.366: Max Budget: 09390 MiB
06/21/2020 00:36:01.366: Min Usage: 00024 MiB
06/21/2020 00:36:01.367: Max Usage: 00024 MiB
06/21/2020 00:36:01.367: ------------------------------------
06/21/2020 00:36:01.367: Minimum Over Budget: 00000 MiB
06/21/2020 00:36:01.367: Maximum Over Budget: 00000 MiB
06/21/2020 00:36:01.367: ------------------------------------

I’m also getting logs that make no sense; this is Flight Simulator 2020:

09/06/2020 02:05:40.733: --------------------
09/06/2020 02:05:40.733: Shutdown Statistics:
09/06/2020 02:05:40.733: --------------------

09/06/2020 02:05:40.733: Memory Budget Changed 2 times

09/06/2020 02:05:40.733: GPU0: Min Budget: 05329 MiB
09/06/2020 02:05:40.733: Max Budget: 05329 MiB
09/06/2020 02:05:40.733: Min Usage: 00005 MiB
09/06/2020 02:05:40.733: Max Usage: 00005 MiB
09/06/2020 02:05:40.733: ------------------------------------
09/06/2020 02:05:40.733: Minimum Over Budget: 00000 MiB
09/06/2020 02:05:40.733: Maximum Over Budget: 00000 MiB
09/06/2020 02:05:40.733: ------------------------------------

DirectStorage is supposedly the same API that the Series X is going to be using, so hopefully it’ll be utilized more than you think, but you are probably right outside of Microsoft first-party titles, unfortunately.

I’m kind of torn on what I really want to do. On the one hand, a 3080 would be a decent upgrade over my 2080 Ti, but I could also afford a 3090. I’m wary, though, since I’ve never been one to go for the Titan-level cards; they never really offered anything compelling over an -80 Ti card, which seems to be different this time around. There are also the leaks pointing to another card with 20 GB of VRAM, and despite what Lenovo had on their site, I find it hard to believe that they’d put 20 GB of VRAM on a 3070 Ti. We do know there is another unannounced SKU from internal roadmaps that have surfaced, so there likely really is an unannounced Ampere GPU waiting in the wings. If it has 20 GB, that makes me think it is likely a 3080 Ti/Super, probably being held back to announce in case Big Navi really surprises us.

And that brings me to the fact that AMD should be announcing Big Navi any time now. We know that the chips in the next-gen consoles are supposed to be cut-down versions of Big Navi, and they are showing essentially 3070 levels of performance. Using that knowledge, I’d say it is fairly safe to assume that Big Navi will probably be comparable, or at least fairly close, to the 3080 in performance, and likely cheaper (probably only by $100 or so).

So I’m not sure if I want to hold out hope for a 3080 Super/Ti or Big Navi, or just say f*** it and go for a 3090 now and be set for this generation. It’s not like I can’t afford it, and despite all the shit going on in the world/economy, my job is pretty damn secure atm. We did let some people go, but our business has picked up so much now that we just don’t have the people to keep up, and they even rehired a few of those let go, so I’m not really worried on that front.

I’m still awaiting benchmarks, but it looks more and more like it’ll be a 3080 for me this time around. I simply cannot justify the 3090, since it costs twice as much as the 3080 in Sweden for what I assume is a 20–25% performance advantage at best.
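Just to put rough numbers on that trade-off (the 2x price and 20–25% performance figures are from this post, not benchmarks):

```python
# Relative price and performance, normalized to the 3080 = 1.0.
# These are the post's own estimates, not measured data.
price_3080, price_3090 = 1.0, 2.0
perf_3080,  perf_3090  = 1.0, 1.25   # ~25% faster, best case

value_3080 = perf_3080 / price_3080  # performance per unit of money
value_3090 = perf_3090 / price_3090

# Under these assumptions the 3080 delivers 60% more performance per
# unit of money than the 3090.
print(value_3080 / value_3090)  # 1.6
```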

I dunno if this has been confirmed yet, but I remember reading rumors somewhere about how factory-overclocked third-party RTX 3080s might close the gap to the 3090 by quite a lot. If that’s even half true, I’d probably be better off purchasing a 3080 this time around and then upgrading in 2–3 years to whatever GPU is released then.

And save half the price of the 3090 in the meanwhile…

I’ve made up my mind… I’m gonna grab a 3090. I’ve sold my Radeon VII (Jesus, man, they’re fetching an absolute fortune!) and most of my gear here, and I’m just waiting for the new parts to arrive. It’ll be glorious.

I’ve enjoyed running an AMD flagship for the past generation, but other than that, it’s not really been a good experience, AT ALL. I don’t think AMD has what it takes to keep up with NVIDIA anymore. Even if, by some utter and absolute miracle, Navi 2x performs well and the card manages to run without crashing every five minutes like Navi 10 did, the drivers are such dogshit that I don’t wish them on anyone who wants to use their computer seriously. Hit me up when AMD has a deferred-context DirectX 11 driver, or when they fix their OpenGL essentials; until then, I’m back on team green.

Also, the first bug I ever reported is still present in every public driver since 19.9.3, unfixed 359 days later. It’s a bug that prevents the R9 Fury X and older cards from outputting 4K 60, due to a flaw in negotiating the DisplayPort link speed. And that’s with advance access to the developers, so it isn’t even the average bug submission process. When it turns a year old, I’m gonna buy a cake.

Omg, I’m fangirling: Andrew Burnes from NVIDIA, the guy who writes the NVIDIA tweak guides, reached out to me to say they really liked my thread on VRAM.
@Kaldaien, what if you could do a collaboration with NVIDIA about your framerate limiter or monitoring tool or something? Imagine how many people would hear about Special K if it was featured in an article on nvidia.com.

Ooh that’s pretty cool :smiley:

Yeah, definitely use the connection to your advantage if you can. I am now picturing a scenario where NVIDIA implements some of SK’s features/code in the control panel, since it’s allowed (afaik).

Yeah, it’s annoying; many, including myself, were recommending the 5700 XT for its value back when the AIB cards released, only for driver issues to crop up soon after :confused:

I remember that I first learnt of Special K from an NVIDIA article too.
It was about FFXV mods. (link)
Though that article only talked about Special K’s ability to enable/disable input devices.