Topic-Free Mega Thread - v 1.11.2020

Yeah, somewhere between the 3070 and 3080 is what I expect. AMD cites Navi 20's improvements over Navi 10, but these things never really scale 1:1; still, AMD's doing a few things here.

The 6800 looks similar in hardware to the 5700 XT, but it's RDNA2 and a good bit faster in both base and potential boost clock speeds, giving it two improvements here over the 5700.

The 6900 has two variants, I assume an XT and an XTX again (an Anniversary-type variant, a bit better binned with higher clock-speed potential). Its clock speeds are a bit lower, but it has a lot more cores than the 5700 had, in addition to RDNA2 and still somewhat higher speeds overall.

Flop comparisons get thrown around about how AMD only has ~20-something versus NVIDIA's ~30-something, but AMD has gained over Navi 10, whereas NVIDIA is a bit below Turing per-flop. This also depends on the type of workload; when Ampere can leverage its full power, it's really fast.
(The 3090 is also power limited to a larger degree, but custom designs, BIOS mods, or hardware mods like shunts could remove that issue.)

Scaling is still never going to be perfect, but it's not going to cost AMD another 30% either, so the 6800 could push close to the 3060 or the 3070, with the 6900 closing the gap to the 3080. This is all theoretical, though; we'll see how the GPU scales, and in which scenarios, once actual info becomes available closer to launch.

The 5700 still scales mostly in Vulkan and D3D12; D3D11 performance is very variable, and D3D9, while more stable now, is pretty poor compared to Vega or earlier.

The 5700 and newer also have the hardware (the cache was redesigned) for AMD to leverage D3D11.1 multi-threading and potentially see gains like NVIDIA did, but that's a lot of driver work over a year or more; until then it remains a performance challenge.
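For reference, here's a minimal sketch of the deferred-context pattern involved (my own illustration, not anything from AMD's driver): an engine records command lists on worker threads and replays them on the immediate context, and it's up to the driver whether that recording actually happens in parallel.

```cpp
// Minimal sketch of D3D11 multi-threaded rendering via deferred contexts.
// Assumes an existing ID3D11Device* and immediate context; error handling
// is omitted for brevity.
#include <d3d11.h>

void RecordAndSubmit(ID3D11Device* device, ID3D11DeviceContext* immediate)
{
    // A worker thread records commands into a deferred context...
    ID3D11DeviceContext* deferred = nullptr;
    device->CreateDeferredContext(0, &deferred);

    // ... issue pipeline state changes / draw calls on 'deferred' here ...

    ID3D11CommandList* commandList = nullptr;
    deferred->FinishCommandList(FALSE, &commandList);

    // The main thread then replays the recorded commands in order.
    immediate->ExecuteCommandList(commandList, FALSE);

    commandList->Release();
    deferred->Release();
}
```

The API shape is the same on every vendor; the gains NVIDIA saw come from the driver consuming those command lists natively rather than having the runtime serialize them.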

D3D12 and Vulkan depend on the developer utilizing AMD's way of doing things or NVIDIA's, which means either both work well or one shows more substantial gains; the two vendors differ in their recommended best practices, and Vulkan also has extensions that make a difference.
(But there are still several titles that don't use Vulkan 1.2.x, and those are limited to what 1.1.x offers.)
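As an aside, a quick sketch of how an engine would check for that at runtime, using only core Vulkan calls (nothing vendor-specific; just an illustration):

```cpp
// Hedged sketch: check which core Vulkan version a physical device exposes
// before relying on 1.2 features, falling back to 1.1 paths otherwise.
// Requires compiling against Vulkan 1.2+ headers.
#include <vulkan/vulkan.h>
#include <cstdio>

bool SupportsVulkan12(VkPhysicalDevice gpu)
{
    VkPhysicalDeviceProperties props{};
    vkGetPhysicalDeviceProperties(gpu, &props);

    std::printf("Device Vulkan version: %u.%u.%u\n",
                VK_VERSION_MAJOR(props.apiVersion),
                VK_VERSION_MINOR(props.apiVersion),
                VK_VERSION_PATCH(props.apiVersion));

    return props.apiVersion >= VK_API_VERSION_1_2;
}
```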

What else? Well, probably just the driver and software situation. AMD is working out the RDNA1 issues, but the regressions for Vega, Polaris, and Fury are still there, along with the Radeon VII issues, and RDNA2 is going to start this entire process over, hopefully being less of a mess for the… first year.
(That took way too long to get sorted, and there's still a variety of hardware-level issues too.)

EDIT: This still needs actual confirmed info and more details, though; there's lots of misinformation and rumors, even with the Linux and Mac driver code now mostly confirming some of the details on this new hardware.

These are theoretical, "up to" gains until actual benchmark results show how well the additional hardware, the clock speeds, and the RDNA2 instruction set actually scale.

EDIT: Though that's true for NVIDIA and Ampere too; some games and game engines either hit a CPU limit or don't scale well for other reasons.

EDIT: Ah yes, the 4 to 6% performance lead here is against the 2080 Super; well, at least the Ti figures made it. :stuck_out_tongue:

Still really fast though: bridging the gap between the current 5700 and the 2080 Ti performance-wise would need almost a 40% gain or so, and then an additional 30% to 40% on top of that to get close to competing with the 3080's performance level.
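Compounding those rough figures, just to put a number on it: 1.40 × 1.30 ≈ 1.82 on the low end and 1.40 × 1.40 ≈ 1.96 on the high end, so matching a 3080 means landing somewhere around 1.8×-2× the 5700's performance overall.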

Competition between AMD and NVIDIA in the high-end segment of the GPU market would be nice to see, though. RDNA took a while, but if RDNA2 and the next gaming-oriented RDNA3 cards can start catching up to or matching NVIDIA's Ampere and upcoming Hopper cards, this is going to get really interesting. :slight_smile:

It just dawned on me that that may be the reason for the bump to 10 GiB from the normal 8 GiB, despite my never having come across a single game in the wild that fills 8 GiB :slight_smile:

WDDM 2.0's memory budget system went a long way toward giving engines the confidence to fill VRAM completely; this is perhaps the tipping point where engines actually do load as many assets into VRAM as the driver will allow.
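For the curious, a minimal sketch of the budget query in question, via DXGI 1.4's QueryVideoMemoryInfo (the WDDM 2.0-era interface); assumes you already have an IDXGIAdapter3, and error handling is omitted:

```cpp
// Query the OS-managed VRAM budget an engine can safely fill.
#include <dxgi1_4.h>
#include <cstdio>

void PrintVramBudget(IDXGIAdapter3* adapter)
{
    DXGI_QUERY_VIDEO_MEMORY_INFO info = {};
    adapter->QueryVideoMemoryInfo(0, DXGI_MEMORY_SEGMENT_GROUP_LOCAL, &info);

    // 'Budget' is how much the OS currently lets this process use;
    // 'CurrentUsage' is how much it has actually committed.
    std::printf("Budget: %llu MiB, in use: %llu MiB\n",
                info.Budget       / (1024ull * 1024ull),
                info.CurrentUsage / (1024ull * 1024ull));
}
```

An engine that polls this (and reacts to budget-change notifications) can fill VRAM right up to the line instead of guessing conservatively, which is exactly the behavior shift described above.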

It'd certainly help with Baldur's Gate 3: it uses a tiny pool of ~512 MiB for texture data and, as a result, is constantly loading / unloading textures while you pan the camera and, as one would imagine, stuttering all the while.


I kinda want to re-tool SK’s texture mod system at some point to use RTX IO. I think I could probably eliminate a lot of the cache complexity.


Could be, though maybe more for development purposes at this point.
RTX IO is NVIDIA's implementation of Microsoft's DirectStorage API, isn't it? That will be available as of Windows 10 21H1, so from there to seeing actual use in games is going to be a while.

But it's potentially aided by the Xbox Series X and its support for D3D12_2, though also hampered by storage requirements and the lowest common denominator. For starting out, storing texture data as a cache could be a thing, even if PCI Express 4.0 NVMe SSDs are only slowly seeing use in more general, non-enthusiast computer systems. :slight_smile:

But that's true for D3D12_2 itself and Windows 10 21H1 too; both need adoption before they can be mainstream. For development purposes and getting started, as long as NVIDIA and AMD don't divide the thing between two entirely separate APIs, I hope it can be a good thing eventually, same as the rest of the DirectX 12 API. It also hinges on developers knowing what to do with it, so yeah, going to be a few years I expect.

EDIT: Or perhaps… perhaps GPU IO can skip the drive requirement a bit.
Not entirely sure what Microsoft has planned for D3D12 DirectStorage yet.

EDIT: Probably not.

Which then, same as DXR, comes down to NVIDIA's and AMD's implementations of it, like RTX or RTX IO on NVIDIA's side.

And then software like RAD Game Tools and their Oodle compression.

PS5 stuff is a bit different though, with its custom IO solution; the faster data throughput and loading just needs the software for it, and it remains to be seen how it'll end up being utilized and what the requirements will be.

EDIT: Not having a few thousand loose files just scattered in folders everywhere would likely also help… just saying. :stuck_out_tongue:
(I think that's actually done in Cold Steel 3 compared to the prior games, noticeably improving things; even if it's a bit different, it shows that packed data files can be significantly better.)

Shadow of War pushes upwards of 8 GB with its Ultra HD pack. Flight Sim pushes beyond that when it is allowed to.

Monster Hunter World with the texture pack can also push into ~7.5 GB, and that's at 2560x1440. Disabling that pack more than halves it, and lowering volumetric fog quality from V.High to High also cuts about 500 MB and restores a good chunk of performance for a small/minimal visual trade-off.
(Volumetric effects and a sliding scale of image quality versus framerate since 2014 or so. :smiley: )

The game just chunks in entire areas as far as I remember, though, so by normal streaming rules the actual usage would be much lower. I'm guessing Rise on the RE Engine plus Switch (Pro) will be a bit better, and maybe eventually there'll be a next-gen + PC sequel to World at some point. :smiley:

Hah, well, probably around the same time Capcom decides to give Mega Man Classic an ending.
(~It has to happen at some point, leading into the X series and then beyond to Zero and weird and strange things.)

EDIT: On that note, Resident Evil 3 also pushes VRAM pretty hard, but it allocates and caches, which looks funny on a 16 GB GPU or larger when a good 70 - 80% is reported as being used by the game; that's not the actual real usage figure.

World just runs out and crashes on area transition if it hits the limit of available VRAM. :stuck_out_tongue:

I think Red Dead 2 also has something like that, with patches adding a readout so users can see the OS VRAM allocation. (Not necessarily actual usage.)

Not necessarily bug-free, but it's getting there.
(I think the last two patches have also helped with the instability the recent larger online content update introduced.)

Anyone see this?

Apparently MSI did some very shady stuff on eBay.

Lol, nothing freezes like Adobe software. It's really stupid to have to force-restart my computer every time Premiere freezes everything :confused:

They posted an explanation for it as well; weird how that happened with MSI and this subsidiary of theirs.


(From VideoCardz: https://videocardz.com/newz/msi-accused-of-selling-geforce-rtx-3080-on-ebay-at-much-higher-price-releases-a-statement )

Oh, it’s up in a bit


Does anyone believe MSI?


Guru3D perhaps; it seems they're a bit affiliated, as they've started putting warnings on Gamers Nexus videos now since he criticized MSI a while back, ha ha.

A shame too when those things come up, as it impacts the site's reputation and reliability for whichever affiliate they have.

Oh, and Inno3D also had their little blurb on how their cards were absolutely not affected by the Ampere GPU instability issues, highlighted by their little slogan "Brutal by nature." :smiley:

Meanwhile Zotac is sending out cakes as an apology over that little issue on the other end of this whole situation.

Learned something new with that; these mooncakes seem like an interesting little dish.

Nope, not on that front. Apparently they've done this before, and for quite a while now, since the 900 series days.

@Kaldaien
I got the GOG edition of BG3. Thought that would be the better choice of download. I will test SK with it and see if it helps with some of the game's issues.

Should I use SK to correct HDR in BG3?

Ooooh, AMD just became an interesting choice for upgrades. Seeing them really improve single-threaded performance as well is niiiiiiice.

Yeah, the Ashes benchmark was already pretty nice, but Zen 3 seems like a solid CPU with various smaller and larger tweaks and improvements too.

…Just remember to heat up the CPU first if it's an AMD Zen model, and then twist the cooler off gently so as to avoid pin damage.
(By running something for a bit, not GPU-baking style. :stuck_out_tongue: )

It still pops up occasionally on the AMD Reddit, so it's a thing; the pins on the CPU are a more sensitive or fragile bit compared to having them on the motherboard.

Probably just going to be a GPU upgrade for me, then waiting on the next full system, probably 2023 - 2024 if there are no delays this time. A 5900 CPU would be nifty, though the 3900X should suffice. :smiley:
(A bit over the top actually, I suppose, but the extra cores might come in handy; it's mostly GPU-limited stuff as a gamer, but this helps offset CPU needs in some games or game engines, and overall, ha ha.)

It will be fun to see what Intel's actual lineup and competition will be once they're on track again, but until then AMD has a pretty good start in the consumer desktop CPU market, at least until AM5/Zen 4 or whatever's next, if not a bit longer still, depending on when Intel is ready for their node switch after all the problems and various delays.

The 10k series is still not bad or anything, so they could be making quite a comeback when they're fully back on track, plus they still hold the server market and such, and their net worth and all that stuff as well.
(There's the whole patent cross-licensing situation with AMD on the x86 and x64 stuff too, and probably more.)

You know what? I’m stupid…

I can actually convert HDR10 to scRGB using Special K, I forgot about this.
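For anyone curious what that conversion entails, here's a rough sketch of the math: decode the ST 2084 (PQ) curve to absolute nits, convert from Rec.2020 to Rec.709 primaries, and scale so that 1.0 equals 80 nits as scRGB expects. This is just an illustration using the published PQ constants and commonly cited gamut coefficients, not SK's actual implementation.

```cpp
// HDR10 (PQ / Rec.2020) -> scRGB (linear Rec.709, 1.0 == 80 nits) sketch.
#include <cmath>
#include <algorithm>

float PqToNits(float e) // e: PQ-encoded value in [0, 1]
{
    // SMPTE ST 2084 constants.
    const float m1 = 2610.0f / 16384.0f, m2 = 2523.0f / 4096.0f * 128.0f;
    const float c1 = 3424.0f / 4096.0f,  c2 = 2413.0f / 4096.0f * 32.0f;
    const float c3 = 2392.0f / 4096.0f * 32.0f;

    float p = std::pow(e, 1.0f / m2);
    float y = std::pow(std::max(p - c1, 0.0f) / (c2 - c3 * p), 1.0f / m1);
    return 10000.0f * y; // PQ encodes absolute luminance up to 10,000 nits
}

void Hdr10ToScRgb(const float pq[3], float out[3])
{
    // Decode each channel, then express in scRGB units (1.0 == 80 nits).
    float r = PqToNits(pq[0]) / 80.0f;
    float g = PqToNits(pq[1]) / 80.0f;
    float b = PqToNits(pq[2]) / 80.0f;

    // Rec.2020 -> Rec.709 primaries (scRGB uses the Rec.709 gamut).
    out[0] =  1.6605f * r - 0.5876f * g - 0.0728f * b;
    out[1] = -0.1246f * r + 1.1329f * g - 0.0083f * b;
    out[2] = -0.0182f * r - 0.1006f * g + 1.1187f * b;
}
```

Since scRGB is linear FP16 with room for values above 1.0 and below 0.0, nothing is clipped by the conversion itself, which is what makes it a good intermediate for fixing banding.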

Yes, I would suggest turning on SK’s HDR overrides. It eliminates the banding problem.

Unrelated, but this is a mega-thread after all.

Thank you for joining in on the Blur Busters discussion; I hope you're feeling better now as well. Like I said, I hope that forum, compared to the other places you've been in the past (I've seen the entire track record, heh…), will keep you more sane for good out-of-community discussions, improvements, talk, etc.

Blur Busters (at least the owner) has always been very open-minded, with great knowledge on many topics, which makes him a good candidate for something like Special K. I think Special K by you, plus the Chief's desire for perfect frame motion / perfect glass-floor framepacing in games due to his vested interest in monitors, is a perfect fit currently.

I also hope maybe it pushes you into being interested in "scanline sync" methods (even if just for learning purposes, to see if there ever really is a need; as you've mentioned, windowed mode might just be the future after all, even for competitive gamers like myself who will sacrifice close to anything for lower latency with good framepacing).

Thank you again and hope you feel better.

Damn, those numbers from AMD are promising. Obviously need to see independent 3rd party benchmarks to know for sure. The thing that was amazing to me was the straight up comparison going from a 3900X to a 5900X. You don’t normally get gaming performance increases like that with just a CPU upgrade. That is seriously impressive. And to see them allegedly surpass Intel in basically every way that matters (who cares about BFV, lol) is nice. Intel needs more of a kick in the pants to get off their asses. And to see that the 5950X will beat the 5900X across the board (the 3950X got beat with some games by the 3900X) is just icing on the cake.

I am glad they decided to show something about Big Navi; it was desperately needed. What they are saying, though, lines up with all the leaks and rumors of late. It won't be quite as good as a 3080, but it will be pretty damn close. So unless they get really aggressive with pricing, I see no reason to get one over a 3080. Although catching up to Nvidia's mainstream flagship should hopefully prompt Nvidia to be a bit more… consumer friendly; maybe we'll even see a price cut if AMD is especially aggressive on pricing.

I assume you mean for CPUs? :stuck_out_tongue:

Their D3D11 drivers kind of suck at multi-threading (D3D11 deferred mode).

AMD CPU + NV GPU is a safe bet for performance in D3D11-only titles that use multi-threaded rendering, but MTR on AMD CPU + GPU doesn’t really scale competitively until you feed it a D3D12/Vulkan engine.
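A quick sketch of how you can see that for yourself on any given driver; D3D11's feature query reports whether command lists are consumed natively by the driver or emulated by the runtime (which is the slow path being described):

```cpp
// Query whether the driver natively supports D3D11 command lists
// (deferred mode). Assumes an existing ID3D11Device*.
#include <d3d11.h>
#include <cstdio>

void ReportDriverThreading(ID3D11Device* device)
{
    D3D11_FEATURE_DATA_THREADING threading = {};
    device->CheckFeatureSupport(D3D11_FEATURE_THREADING,
                                &threading, sizeof(threading));

    std::printf("Driver concurrent creates: %d\n",
                threading.DriverConcurrentCreates);
    std::printf("Driver command lists:      %d\n",
                threading.DriverCommandLists); // FALSE => runtime emulation
}
```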

Yup, they claim IPC gains for the latest Zen 3 architecture that see the Ryzen 9 5900X beating out the i9-10900K in a lot of games at 1080p. Along with the number of cores and threads (12/24), that makes it even more interesting as an option.

The single-threaded performance gains in particular are quite something, since the CPU only turbos to 4.8 GHz, while the 10900K turbos up to 5.3 GHz.
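Back-of-the-envelope: 5.3 GHz / 4.8 GHz ≈ 1.10, so the 10900K holds roughly a 10% clock advantage; for the 5900X to win anyway, Zen 3's per-clock throughput in those games has to be at least ~10% higher than Comet Lake's.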

Having fun right now with BG3 Char creator LOL.