Topic-Free Mega Thread - v 1.11.2020

Autumn’s going to be interesting: a few more days until the Ampere / NVIDIA 3000 series launches, with reviews following after a short NDA delay while some outlets wait on GPU availability.

Then there’s AMD’s Zen3 and its improvements over Zen2, seemingly without a 600 series motherboard, and after that RDNA2 with Navi20 and what could be the 6700 to 6900 series of GPUs.

Intel should be getting into position for their new CPU architecture too, though there have been a few delays and setbacks; they’re still doing really well regardless.

Consoles too, maybe followed by a revision of the Switch going by some of the rumors (which may or may not be true), and then sometime next year the next Windows 10 build with full support for some of the new D3D12 and other APIs and features on PC, plus seeing how all of this gets utilized and implemented over the next couple of years. :slight_smile:

Ampere brings the 3070 with 2080-level performance at a lower cost, and then the 3080 as a step up price-wise but with another good boost in performance too.
(The 3090’s its own thing, but it’ll be fun to read up on how it does too.)

The year after that might see USB4, DDR5, PCI-E 5.0 and whatever else (NVMe 1.5?), though I expect at least initial DDR5 support to be a bit of a learning phase, with issues to work out until the next hardware series from AMD and Intel refines things further, along with RAM chip maturity and better controllers, for both speed and compatibility. :slight_smile:

EDIT: Maybe a year or so for PCI-E 5.0 though; just because the spec is ready doesn’t mean it’ll pop up on consumer desktop-grade hardware outside of server and workstation kits just like that.

Apple event in 3 hours. A new iPad Air and Apple Watch are expected to be announced, and after this event I’ll probably order a new iPad Pro for myself.

My iPad (2018) model is slooooooooow and has apps crashing all day long whenever I try even basic multitasking, so I’m glad to finally be upgrading. And nowadays the Pro model even supports a cursor and mouse :slight_smile:

Vega had also been mostly forgotten by me, until Vega 56 turned into an enticing deal and was being talked about more and more. AMD had a confusing line-up and odd benchmarks where, as you mentioned, the Fury cards sometimes fall behind the 580. They had too many architectures to optimise their drivers for, and a pretty small team to do so.

Yep, Fury was held back by rasterization performance, as was Vega, despite excelling at shader workloads, plus a variety of features that were supposed to be supported but were either game-specific or didn’t work at all.
(Shaders are more of a thing now but Fury is pretty dated by this point.)

Polaris implementing improvements to texture compression and geometry culling, plus having 8 GB of VRAM once 4 GB was starting to become a limit, has allowed it to scale past Fury despite the hardware difference.

Similarly, Navi has much of this, and on top of that it operates on Wave32 instructions instead of Wave64 and can issue once every clock cycle instead of every fourth, which gives it an edge; it scales almost to the level of the Radeon VII, improving even in D3D12 and Vulkan comparisons.

Though when Vega, and particularly the less bottlenecked Radeon VII, can scale, these are still incredibly high-performing cards, but you only really see it in a few Vulkan or D3D12 titles that use the GPU’s strengths more fully.
(Wave64 is harder to optimize for and to fully populate for efficient performance and full utilization, though in cases where this can be done these cards are still really fast.)
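To put rough numbers on that Wave32 vs. Wave64 difference, here’s a toy C++ model (my own illustration with a made-up workload size, not vendor code, and it ignores per-CU SIMD counts, dual-issue and occupancy):

```cpp
#include <cstdio>

// Toy model: how a thread group pads out into waves, and how many cycles
// one instruction takes to issue for that group on a single SIMD.
struct WaveModel {
    int waveWidth;      // threads per wave (64 on GCN, 32 on RDNA)
    int cyclesPerIssue; // GCN issues a wave64 over 4 cycles, RDNA a wave32 in 1
};

void report(const char* name, WaveModel m, int threads) {
    int waves     = (threads + m.waveWidth - 1) / m.waveWidth; // round up
    int idleLanes = waves * m.waveWidth - threads;             // padding waste
    int cycles    = waves * m.cyclesPerIssue;
    printf("%s: %d waves, %d idle lanes, %d cycles per instruction\n",
           name, waves, idleLanes, cycles);
}

int main() {
    const int threads = 96; // deliberately awkward group size (assumption)
    report("wave64 / GCN-like ", {64, 4}, threads); // 2 waves, 32 idle, 8 cycles
    report("wave32 / RDNA-like", {32, 1}, threads); // 3 waves,  0 idle, 3 cycles
}
```

Narrower waves waste fewer lanes on awkward sizes and divergence, and the faster issue rate is where the “once every clock cycle instead of every fourth” advantage shows up.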

A bit of the same for NVIDIA though: DOOM Eternal scales up really well while other engines and games are more varied, though NVIDIA still seems to hit at least a 40% performance uplift even in more problematic games. Optimization and game-engine issues might continue to be a hassle that just gets brute-forced through for a while yet, unfortunately.
(Borderlands 3, I think it was, for that particular performance result.)

Driver improvements and regressions are a big thing for AMD too: Polaris, Vega and Fiji have regressed a bit in 2020 as Navi has improved (its initial launch state was pretty rough), though the amount varies, and the decrease in performance is most noticeable in what seem to be earlier Vulkan API implementations.
(Upwards of a 30% performance decrease, I think, comparing 20.8.1 and also 20.8.3 against 19.12.1; I’ll have to re-check that.)

RDNA also comes with its own benefits and AMD GPUOpen effects, but other than CAS and its sharpening being added as a settings toggle in the driver control panel, these are only utilized in a few AMD-sponsored games, most notably Horizon Zero Dawn, where despite some initial problems with the port the Navi10 GPUs close in on the 2080 performance level.
(Overall the gap between the 5700 / 5700 XT and the 2060 to 2070 Super has also closed, with the 2080 a bit ahead and the 2080 Ti as the clear performance leader.)

Various AMD Vulkan extensions and such as well (NVIDIA has theirs too), and these benefit AMD’s GPU lineup overall, whereas Horizon sees the Navi models perform really well while their other cards underperform, Vega in particular doing very poorly despite the game using D3D12, so it’s not the usual D3D11 API bottleneck and driver situation.
(Listed as a known issue but still not addressed by AMD.)

It can’t happen yet unless AMD plans to drop nearly every supported GPU besides Navi, but going fully RDNA for the drivers should help. GCN support is not going too well currently as-is, but it is at least supported, and AMD is fixing issues and regressions even if some of it takes time.
(Other than the 7000 series and GCN 1.0 / GCN Gen1, it’s been various additions and tweaks over time, so dropping the earlier GCN GPUs likely won’t change things too much either.)

Curious about RDNA2 too; it might make the RDNA1/GCN hybrid that is Navi10 seem like an early test phase, and the GPU’s lifetime and scalability might be a bit lower as a result, as focus shifts fully to RDNA2 and newer after this first jump from GCN.

But that would happen anyway, as Microsoft made D3D12_2 and its various features a hardware requirement, so going forward we’re going to start seeing a new baseline of hardware as the minimum, although not immediately.
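As a concrete illustration of that baseline, here’s a minimal C++ sketch of how an app could probe for the new feature level, assuming a Windows SDK recent enough to define D3D_FEATURE_LEVEL_12_2 (untested, error handling trimmed):

```cpp
#include <d3d12.h>
#include <cstdio>
#pragma comment(lib, "d3d12.lib")

int main() {
    // Create a device at the old 11_0 baseline first...
    ID3D12Device* device = nullptr;
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0,
                                 __uuidof(ID3D12Device), (void**)&device)))
        return 1;

    // ...then ask whether the adapter also reports feature level 12_2.
    D3D_FEATURE_LEVEL requested[] = { D3D_FEATURE_LEVEL_12_2 };
    D3D12_FEATURE_DATA_FEATURE_LEVELS info = {};
    info.NumFeatureLevels        = 1;
    info.pFeatureLevelsRequested = requested;

    if (SUCCEEDED(device->CheckFeatureSupport(D3D12_FEATURE_FEATURE_LEVELS,
                                              &info, sizeof(info))) &&
        info.MaxSupportedFeatureLevel == D3D_FEATURE_LEVEL_12_2)
        printf("D3D12_2-class hardware (DXR 1.1, mesh shaders, etc.)\n");
    else
        printf("Pre-12_2 hardware\n");

    device->Release();
}
```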

That’s a bit lengthy.
Fury and Vega didn’t quite live up to their full potential, although over time their shader performance and usage in workstations have allowed better utilization and scaling. The driver situation could be better for continued support and any regressions though; this also includes Polaris, which is otherwise still doing really well for AMD’s sorta mid-range lineup.

The 5000 series / Navi / RDNA1 is the new mid-range that never really got an actual high-end model, but the GPU punched quite a bit above its price/performance range (until third-party models hiked the card to ridiculous prices), so it’s done pretty well for being a bit of a test run for RDNA1 until the big upcoming second generation comes out.

EDIT: Features and the driver and software state are going to be a bit of a thing though; it’s hard to recommend the 5000 series, even when it closes the gap on the 2070 Super, if you’re in for an uncertain experience with bugs or stability. Those extra 100$ might be justified by that alone, even if the performance from AMD’s 5700 and 5700 XT models is now pretty close.

The 3070’s going to kinda end that though; it does still cost a bit more, but gives a bigger performance lead too.

So it depends on what AMD’s price/performance level will be with the 6000 series and how these position themselves against NVIDIA’s 3000 lineup, in addition to the software side and the possibility of NVIDIA slotting in a 3060, or Ti versions of the 3070 or 3080, later on.

EDIT: Well, it’s at least easier to recommend the 5000 series as a consideration now due to the driver improvements over the last six months, but now the new GPUs are coming out, possibly along with price cuts and stock sell-offs on the older models, depending on how things go.
(At a guess, the first shipment of the 3000 series will go…very quickly. :stuck_out_tongue: )

Trailer’s up

Assets are mostly the same, few meshes and textures seem upgraded. I see this as an advertisement for the latest version of Cryengine, since it’s re-purposing a classic to showcase the engine’s latest features. Looks fine to me from what i can tell, but i mostly just want the AI to work this time.

There are some noticeable raytraced-reflection artifacts on the nanosuit - at least that’s what i think it is? It’s reminiscent of specular reflection noise.

I expect AMD to be more power efficient vs Nvidia with their next line-up. It seems like Nvidia has relied on pushing shader count immensely with Ampere, without really pushing clock speeds, and is still asking for a lot of power - meanwhile, the PS5’s RDNA 2 GPU can clock above 2.1 GHz - though the Series X with more compute units clocks significantly lower, but that could also be Microsoft being sensible about thermals and noise.

Could also partly be AMD going with the second version of TSMC’s 7nm node while NVIDIA is on, I think it was, Samsung’s 8nm; but overall NVIDIA is also putting a lot of hardware into their cards, the number of cores is up significantly, plus the custom GDDR6X memory modules.

Reference power draw seems to be 320W for the 3080 and 350W for the 3090, so that’s something to compare.

Then there’s the way the GPUs quickly boost up to full speed, and also how they fall back to idle or sleep-mode levels.

AMD’s traditional problem here, besides the power usage itself, has been quick shifts and high transient peaks, with power draw spiking to near double the rated figures. Newer PSUs can handle that, but a modular design and some earlier models will have problems with the rapidly shifting power levels and the amperage and wattage totals being so much higher.
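Back-of-the-envelope version of why that bites PSUs, with made-up but plausible figures (C++ just for the arithmetic):

```cpp
#include <cstdio>

int main() {
    // Illustrative assumptions, not measurements from any specific card:
    double gpuRated     = 225.0;          // typical board power rating, watts
    double gpuTransient = gpuRated * 2.0; // spikes near double the rated figure
    double restOfSystem = 150.0;          // CPU, board, drives, fans (assumed)

    printf("Sustained draw:  ~%.0f W\n", gpuRated + restOfSystem);
    printf("Momentary spike: ~%.0f W\n", gpuTransient + restOfSystem);
    // A PSU sized only for the sustained number can trip its overcurrent
    // protection during the spike, which is the shutdown people run into.
}
```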

But Navi drawing ~200W (for the less extreme customized models, at least) up to ~300W is a lot better than what Vega did, so that reduces the peak power draw, though in turn the GPU is a bit system-sensitive for unknown reasons. (AMD keeps tinkering with timing values in their drivers too, possibly related to some extent.)

Power users can also still undervolt Navi, though its dynamic clock states and constant fluctuations require more extensive stability testing than the fixed states used by Vega and earlier.

Much the same results though, with a near-halved power draw: upwards of a 100mV lower GPU core voltage, and the resulting drop in temperature possibly even allowing higher or better-maintained max boost clock speeds.

Some of the Navi cards, including the 5700 XTs, are still highly variable in terms of undervolting effectiveness though, and might not scale much without lowering the GPU clocks first; the memory modules, varying between Hynix and Samsung, are also sensitive, including their timings.
(But memory voltage is fixed and further downclocking isn’t possible, so that part is strictly an overclocking thing.)
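For the core-voltage part, the usual rule of thumb is that dynamic power scales roughly with V² at a fixed clock, so even a 100 mV drop is a double-digit saving before clocks change at all (illustrative voltages below, not AMD’s actual tables):

```cpp
#include <cstdio>

int main() {
    double vStock = 1.20, vUndervolt = 1.10; // volts; assumed example values
    // Dynamic power ~ C * V^2 * f, so at the same clock the ratio is (V2/V1)^2.
    double ratio = (vUndervolt * vUndervolt) / (vStock * vStock);
    printf("~%.0f%% of stock dynamic power (~%.0f%% saved)\n",
           ratio * 100.0, (1.0 - ratio) * 100.0); // ~84% / ~16%
}
```

The near-halved totals presumably come from this stacking with lower temperatures (and thus leakage) and the card no longer slamming into its power limit.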

EDIT: I suppose the increase in core count also means that Ampere isn’t quite as efficient per core as Turing, but it has other advancements and improvements, so it’s not just down to the extra cores or a higher potential boost clock speed either. :slight_smile:

It doesn’t really mean too much on its own, and one can’t directly compare things 1:1 either; stuff like the teraflops count, for one big example, is once more a numbers game and the whole marketing push around that.
(The end of the earlier bit wars was the start of the teraflop wars, I guess. :stuck_out_tongue: )
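The paper math behind those teraflop figures is just shader count × clock × 2 (an FMA counts as two floating-point ops), which is exactly why it doesn’t compare 1:1 across architectures:

```cpp
#include <cstdio>

// Theoretical FP32 throughput from public reference specs.
double tflops(int shaders, double boostGHz) {
    return shaders * boostGHz * 2.0 / 1000.0; // 2 ops per FMA; GFLOPS -> TFLOPS
}

int main() {
    printf("RTX 2080 Ti: %4.1f TFLOPS\n", tflops(4352, 1.545)); // ~13.4
    printf("RTX 3080:    %4.1f TFLOPS\n", tflops(8704, 1.710)); // ~29.8
    // On paper that's ~2.2x, but nobody expects games to scale anywhere near
    // that: Ampere's doubled FP32 path shares scheduling with INT32 work, and
    // a frame isn't pure shader math anyway.
}
```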

Yeah, Samsung 8nm is more like 9/10nm by TSMC standards.

nVidia wanted to put some pressure on TSMC, but Samsung’s 8 nm doesn’t seem as good as TSMC’s.
In one of AdoredTV’s videos, he theorizes they needed to increase the CUDA core count much more than normal because they couldn’t raise clock speeds much on the new process node.

It’s a weird theory, but i kind of think AMD might name their top card the 6800 XT, so RDNA 2 takes the max tier up a notch after the 5700 XT - by the time the RX 7000 series is nearby, people will expect AMD to finally go for the top and make a 7900 XT. It could be a good way to create hype in advance, if AMD can deliver.

I wouldn’t be surprised if everyone thinks this sounds stupid though, like i said, weird theory XD

Water is the “universal solvent,” if it looks too clean… it’s probably not water :wink:

Crysis: Remastered is looking mighty fine! They’ll support DLSS as well apparently so I guess that’s how some users will be able to run it at 8K.

I’ve never been a real big fan of DLSS. I like my pixels temporally stable. In the few games I have tried it in, the unnatural artifacts it produces are very distracting. Though I suppose at 8K resolution it might be less noticeable.

I don’t have any 8K displays though :frowning: I put all my eggs in the OLED / HDR basket.

DLSS 2.0 is when DLSS actually became usable - there are motion artifacts though; this video analyses DLSS 2.0 pretty well

Generally good results, and could become a selling point imo if many more games support it over time. That being said, we haven’t seen 2.0 used in a first person game to my knowledge, so we’ll have to see if Crysis even works well with DLSS or not.

But i personally also prefer my pixels stable :slight_smile:

Edit: I still stick by what i said a few months back, DLSS 2.0 feels more like Nvidia just lowers quality on the driver level to push the frame rate. I know it’s not true of course, but it seems like a good way to describe DLSS 2.0’s results imo, because of some of the artifacting.

Example image of 2.0

Oh, that’s interesting. Control doesn’t even seem to acknowledge that I own an RTX graphics card, lol.

I know half of that is because its DXR features are D3D12-only and I run the D3D11 version of the game for HDR… but DLSS isn’t tied to a graphics API. They should really stop doing stuff like that. Tomb Raider was similar; they could have very easily implemented HDR in the D3D11 version of the game, but didn’t bother with it.

I never play the D3D12 version of games if they also ship with D3D11, so this bugs me.

---

DLSS stutters like a pig, it would appear… but lets you turn on DXR features at comparable framerates.

Lmao, i never even noticed that, was just so focused on the final image. Btw how do you do that line between text? :eyes:

Oh, hahah.

@ 419s: “peter panning” isn’t just for shadows anymore. Now you get reflections that float unnaturally too.

Yeah it’s bizarre, the further graphics are pushed, the more artifacting appears.

RDR2 is really interesting, i think it’s the best looking game out there, but turn off TAA and you’ll notice how much the devs relied on TAA to tidy up the graphics and basically optimise the game. Shadows in particular flicker quite a bit, no matter how high you push the game’s settings. Turn on TAA and it looks fine.