Topic-Free Mega Thread - v 1.11.2020

Lmfao, now I wish that Reddit post had picked up.

Almost 24k users fell for this…

I dare to refute it and my post gets deleted :slight_smile:


I ■■■■■■■ love it! Placebo is a god damn pandemic among PC gamers.

I was pretty damn spot on, I'd say. From my post:

I assume the file exists in the first place since it was used as a reference during development and was just left in there - that’s my guess anyway.

But nope, the file is there, so it just must be used in some way :stuck_out_tongue: The game devs can be dumb enough to type the wrong values, but not dumb enough to leave an unnecessary file in the game.

I'm going to stop ranting though; it hurts my brain trying to comprehend this.

  • Added some warmth to HDR.

Uh oh, time to grab a bunch of screenshots and compare before and after :smiley:

I accidentally deleted it and the game was working fine without it.

I have fixed HDR screenshots in D3D12:

SpecialK64.7z (7.6 MB)


LOL, that’s awesome.

From reading up on it some more, it looks like it was actually possible to load values into the unlocked dev console via the plugin that unlocks this functionality, but I'd imagine the differences would be fairly small; the pool sizes seemed pretty generous to me already. Though with RAM and VRAM to spare, it doesn't hurt to test higher GPU values, I suppose. :smiley:

1.05 removing the file might remove the functionality, though if one renames the file or backs it up and then loads that .csv into the game through the console, that could still work.

Done with Valhalla and going through Fenyx Rising at the moment, waiting on at least the first major patch before checking out Cyberpunk, so mostly I'm going through a variety of information and mods while looking into these things. :slight_smile:

EDIT: This, I think, or other means of unlocking the function.

EDIT: Ah, and part of this as well, it seems.

Going by the issue tracker, there are a few mentions of a console window.

EDIT: Must be that one.

Or these for the experience. :stuck_out_tongue:

Line 542: RegisterConsoleCmd("engine.forceCrashRecur", "Forces crash that is at least 128 stack entries long")
Line 543: RegisterConsoleCmd("engine.forceCrash", "Forces nullptr access crash")
Line 544: RegisterConsoleCmd("engine.forceAssert", "Forces assertion crash")
Line 545: RegisterConsoleCmd("engine.forceOom", "Forces OOM crash")
Line 546: RegisterConsoleCmd("engine.forceHang", "Forces hang (watchdog crash)")

Out of real memory, or just the placebo memory? :slight_smile: I can’t stop laughing at the idea people thought that was a configuration file for memory pools.

Some older games used it: Gothic 3 and the newer Genome engine builds, for example, up until Elex.

Titanfall 2 has its texture setting directly tied to a memory pool, from 512 MB up to 6144 MB for ultra I think, and depending on availability that determines the texture resolution and how much it can keep resident without having to unload data or decrease texture quality.
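Conceptually it's just a budget-driven mip cap. A toy Python sketch of the idea, purely illustrative and not any game's actual code (the 512 MB and 6144 MB figures are from above; the middle steps are made up):

```python
# Toy illustration of a pool-driven texture budget; not any game's actual
# code. The quality preset picks a pool size, and top mip levels get dropped
# until the resident texture set fits inside that pool.

# 512 MB (low) and 6144 MB (ultra) are the figures mentioned above;
# the middle steps are invented for the example.
POOL_SIZES_MB = {"low": 512, "medium": 1536, "high": 3072, "ultra": 6144}

def mip_levels_to_drop(textures_mb: list[float], budget_mb: float) -> int:
    """How many top mip levels to drop so the set fits the budget.
    Dropping one mip level quarters a texture's memory footprint."""
    drop = 0
    while sum(t / 4 ** drop for t in textures_mb) > budget_mb and drop < 16:
        drop += 1
    return drop

# A 4 GB working set against the "high" pool -> drop 1 level (half resolution).
print(mip_levels_to_drop([256.0] * 16, POOL_SIZES_MB["high"]))
```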

Some of these older game engines, or some of the "jankier" games, had some interesting solutions and exposed settings at times, ha ha.

There's a whole bunch of them in Witcher 3 too, as far as I remember; not something you'd usually tinker with or need to tinker with, though.

TextureMemoryBudget=800, or whatever the default was there.

Sigh, well, it's been a while since I did any major config tweaking; these days it's more often about some of the lower-than-low quality configs and below-minimum-spec gaming.

Red Dead Potato 2

Redemption, really, but it's often called potato mode; I'm not 100% sure where that originated from.

Cyberpunk doesn't scale down as much, though it is possible to reduce texture quality to almost flat-shading levels.

It's actually kinda surprising just how far RDR2 can be pushed below the lowest quality settings; it's less common in newer titles to get these absolute minimum LODs and no texture loading at all showing like this.

460.97 “Hotfix” for NVIDIA.
https://nvidia.custhelp.com/app/answers/detail/a_id/5145/~/geforce-hotfix-driver-version-460.97

Standard:
https://international.download.nvidia.com/Windows/460.97hf/460.97-desktop-notebook-win10-64bit-international.hf.exe

DCH:
https://international.download.nvidia.com/Windows/460.97hf/460.97-desktop-notebook-win10-64bit-international-dch.hf.exe


1x1 resolution LODs almost always exist to my knowledge (I've seen maybe one game where the minimum LOD was 2x2). If there's some parameter that lets you force the texture resolution cap to 1x1, you will see these single-pixel textures, which essentially become solid colours :smiley:

Edit: That might be worth noting in the Shader & Texture guide for modders. The highlight feature sort of covers this anyway, but filling a diffuse texture with a single colour is useful for seeing what area of a mesh it covers, especially if you want to take a screenshot for reference.
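Generating one is quick; a minimal Python sketch using Pillow, with the size, colour, and filename all arbitrary choices:

```python
# Generate a flat, single-colour diffuse texture for mesh-coverage checks.
# Requires Pillow (pip install Pillow); size, colour, and filename are
# arbitrary choices here.
from PIL import Image

flat = Image.new("RGB", (1024, 1024), (255, 0, 255))  # magenta stands out
flat.save("flat_diffuse.png")  # convert to DDS or the game's format as needed
```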


Hmm, yeah, thinking about it a bit more, it's probably still common in the engine itself; it's just less exposed through the settings these days, even when using really low or below-lowest parameters.

Games like Half-Life 2 scaled across three API generations I think (D3D7 to D3D9), whereas nowadays it's various smaller tweaks and reductions. Although, if nothing else, dropping from ultra to very high or high still works really well, giving a good performance boost for a small hit to perceivable image quality. :smiley:

Also:

Via Meta Council. →

EDIT: Suppose recharging those batteries should be a little less awkward now compared to the motion-controlled Wii version. :smiley:

https://www.reddit.com/r/pcgaming/comments/kft0sy/psa_your_cyberpunk_saves_will_become_corrupted/

That is really bad.

EDIT:
And that.

Less problematic, but that really shouldn't be an issue after all these years.
More easily resolvable at least.

Can anyone with the console version actually take comparison shots? Does it look like the raised black levels have been fixed? Or is it literally just changing the color temperature of the game?

Also, do we have no estimate for a PC release of the patch?

Huh, thought it was already out, but nope.

Should have read the full statement.

Some good PC updates too and a few oddities.

Why remove AVX instructions? If it's AVX1, processors nearing a decade in age now support that, and those are too slow for the game on average anyway.

Though the utilization of these extensions tends to be fairly minor anyway, so maybe not a huge deal as such; it's probably even rarer to see AVX2 utilized, outside of perhaps a few specific cases where developers take advantage of it.

I also would have assumed that querying for support of these instructions at runtime and enabling them only where supported would have been possible, keeping the potential performance benefits intact.
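As a rough illustration of that idea, here's a runtime check using the third-party py-cpuinfo package; a native engine would of course query CPUID directly and select the SIMD code path from the result:

```python
# Sketch of a runtime AVX/AVX2 capability check, using the third-party
# py-cpuinfo package (pip install py-cpuinfo). A native engine would query
# CPUID directly and pick its SIMD code path from the result.
import cpuinfo

flags = cpuinfo.get_cpu_info().get("flags", [])
for ext in ("avx", "avx2"):
    print(f"{ext}: {'supported' if ext in flags else 'not supported'}")
```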

Interesting about SMT as well, but I suppose it makes sense: lower core count processors benefit from the additional thread management, while higher core count ones don't scale as well. From what Kaldaien mentioned, it seemed the game engine could handle around 16 threads at least, so an 8-core with SMT, which is pretty close to the AMD recommendation here of a 6-core with SMT scaling. :slight_smile:

Or without SMT: pure 8 or 12 core processors, or even the full 16-core model in that case, though splitting it up might just be shifting the various threads around without giving any major performance gains.
(On Zen 2 and newer, the major benefit is probably keeping it to as few CCXs as possible to minimize potential latency problems.)

Not sure how it handles Intel Hyper-Threading (HT); I'm guessing it's kinda similar, but the way threads are assigned to the various cores and how they scale differs a little due to AMD's Zen 2 architecture, and then Zen 3, which is fairly similar, just with more cores per CCX.
(Though it's handled better with the newest BIOS, chipset drivers, and OS CPU scheduling.)
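For what it's worth, telling SMT/HT threads apart from physical cores is simple to check from software; a small sketch using the third-party psutil package:

```python
# Compare logical processors (SMT/HT threads) against physical cores.
# Sketch using the third-party psutil package (pip install psutil).
import os
import psutil

logical = os.cpu_count()                    # e.g. 16 on an 8c/16t part
physical = psutil.cpu_count(logical=False)  # e.g. 8
smt_on = bool(logical and physical and logical > physical)
print(f"{physical} physical cores, {logical} logical processors, "
      f"SMT/HT {'on' if smt_on else 'off'}")
```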

EDIT: Hmm, speaking of Zen, it looks like Zen 3+ is indeed a thing, though its codename has been known for a while now.

https://www.reddit.com/r/Amd/comments/kg8ktw/amd_apus_for_2021_and_2022_rogame_leak/

APUs for 2021 and 2022

Vega keeps going, it seems. I would have assumed AMD to be moving away from GCN and onto RDNA2 by now, but it might not be time to fully move away from the existing Vega APU variants, and all the work invested into them, just like that.

Zen 3 already feels like a big improvement and further refinement over Zen 2, so I wonder what Zen 3+ can bring ahead of, I presume, Zen 4 on the next AM5 platform, with new DDR5 memory and maybe PCI Express 5.0 too; why not add all the features in there. :smiley:

Going to be interesting, especially with Intel more ready to compete as well, plus whatever is next for RDNA, and maybe also CDNA if AMD can bring something to compete against NVIDIA (and Intel?) in the professional application market, workstation systems, servers, and such.
(CUDA in particular won't be budged easily.)

So the 1.05 patch for 2077 has hit PC. They did add some warmth to the HDR, but maybe 0.05%; it's so very, very minor.

They also seem to have reduced the glow that objects in the game get from the cubemaps/GI.

I don't have an RTX card, so I can't check for any changes with RT.

So without A/B comparisons, there isn't much visible difference.

Patch notes only address the glitches with reflections breaking and having missing lighting data, which made them glow.

But I will see for myself if anything changed with the lighting.

EDIT: Lighting hasn’t changed at all.

Reflections still bug out, just to a lesser extent:

Hmm, so instead of Hopper it might be Ada next.

EDIT: I wonder how multi-chip is going to work on GPUs. Maybe that's something AMD is going to be utilizing Infinity Fabric and (much more?) cache memory for with the newer RDNA/Navi cards, mitigating latency and such, since it's a good couple of clusters with cores and the rest. It also differs from the CPU side and Zen in how it'd have to work, and in the problems to solve around access times and delays if data has to travel through these various chiplets instead of a single chip.

And how NVIDIA might resolve it as well; it looks like both have plans for this, but they might be playing it a bit safer, waiting until some of the initial difficulties with this type of design can be resolved better.

No idea though; fascinating, but way too complex for my knowledge level to truly grasp.

There are some pretty knowledgeable people in this field though, on how it's going to be accomplished and what the actual benefits are of this design over the monolithic one.

Must be some pretty large gains if it can be solved without too many drawbacks, considering the challenges involved and the difficulties in mitigating or fully eliminating some of these problems.
(That's how I'm trying to imagine it would be done, and whether it'd be worth it with current tech.)

EDIT: It would be interesting if AMD attempts Infinity Fabric here as well, mitigating at least some of the problems with a design like this by keeping data in a faster cache or pipeline, but they'd need quite a lot of it judging by the existing RDNA2 benefits and drawbacks. Plus, if it's similar to the CPU and RAM / Infinity Fabric speeds, maybe a return to HBM2 or newer would be ideal.

Costly though, not least for consumers, if GPU prices normalize around 800 US dollars or so for the mid-to-high-end segment, and higher still for the next wave of hardware.

It's going to be a thing for the new platforms eventually though, along with the usual features, with DDR5 coming up and the new PCI Express standard; first fixing up support for all that, then improving it over a refresh or two afterwards.

Well, first things first: Ada and Navi 30 I guess, assuming Hopper is now pushed back a bit.
I wonder if Intel will be ready for next year too with their CPU improvements, but that might be later still, and then there's whatever their GPUs might do, perhaps for the mid-range market, or maybe oriented towards workstation and business applications.
(Same issue as AMD has there, though, I'd imagine: budging CUDA in that field, with its very well established support and ongoing improvements and development from NVIDIA.)

Seems RivaTuner is improving their framerate limiter in the newer beta builds too. It would be nice to have improved quality overall, so there's SpecialK, RivaTuner, further improvements to NVIDIA's FPS limiter and all its various modes, and maybe even AMD's, although by now I think that one is falling a bit behind the rest.

The tech seems fairly high-level from what I can make of it; the customization isn't bad though, along with setting up profiles for what's optimal.
(Works in SpecialK too, and for NVIDIA I think Inspector lets users at the rest of the NVFPS settings and modes.)

EDIT: Effectively eliminate the issues around FPS limiting as much as possible, and if the in-game option can't do it or is terrible, have the means to use other tools for the job.

The display driver might not always be the best or most customizable option for it, but improving that would at least cover a majority of the problems, which can then be fine-tuned and optimized through software like RTSS or SpecialK.

Good to see this given further thought and priority is all I'm saying, along with getting at the remaining issues or drawbacks that earlier forms of framerate limiting had. :slight_smile:

EDIT: Use whichever hammer fits best and still have no problems nailing the issue down?
Eh, good enough of an analogy. :stuck_out_tongue:

EDIT: Thinking…

So async means it can sync at the front or the back, whichever part of the frame(?), I suppose.
Then prioritizing either the start or the end of the frame is where the two other modes come in, with the respective advantages and disadvantages of all three.
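My rough mental model of the "wait at the back of the frame" style, as a toy Python sketch; the real limiters in RTSS or SpecialK are far more sophisticated about where in the frame they wait:

```python
# Toy "wait at the back of the frame" limiter: after the frame is submitted,
# sleep coarsely until near the next deadline, then spin for precision.
# Purely illustrative; render_frame() is a stand-in for the real game work.
import time

TARGET_FPS = 60.0
FRAME_TIME = 1.0 / TARGET_FPS

def render_frame() -> None:
    time.sleep(0.005)  # pretend the frame took ~5 ms

deadline = time.perf_counter()
for _ in range(120):  # a couple of seconds' worth of frames
    render_frame()
    deadline += FRAME_TIME
    # Coarse sleep until ~2 ms before the deadline...
    while (remaining := deadline - time.perf_counter()) > 0.002:
        time.sleep(remaining - 0.002)
    # ...then busy-wait the final stretch to reduce timer jitter.
    while time.perf_counter() < deadline:
        pass
```

A front-edge style would take that wait before the frame work starts instead, which is roughly where the latency trade-offs between the modes come from.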

EDIT: The winter sale must have started; Steam is having a bit of a slowdown issue. :stuck_out_tongue:

“Bummer”
“Oops”
and “Huh”

Hitting all the error messages, ha ha.