It’d probably need to be initialized really early so it can be activated and loaded on the fly. I assume it’s a combination of D3D11 (which SpecialK supports even up to 11.4) and the current newest Windows 10 DXGI versions, but mixing in Vulkan might cause SpecialK to throw an error, depending on what’s going on with the actual shader system and whatever mix of HLSL and SPIR-V they’re using for the Vulkan bits.
EDIT: Not that I’m all that good at this, but if they’re using a Vulkan layer over D3D11 it could be anything in there. I think the last time I heard of a game mixed like that it was Evil Within 1, using OpenGL as part of idTech5 with D3D11 over that.
Anything from the shader management to the compiler and the format the shaders use, or more, since from that Tweet it sounds like a fairly custom solution.
(I believe D3D12 has some form of interop with Vulkan to make this easier, but not D3D11.)
EDIT: Actually it’s a bit curious why it’s not just using Vulkan or D3D12 natively, but it is what it is.
(If this is CryEngine V, and not the original game with additions from newer CryEngine on top, they would gain more from using DirectX 12 or Vulkan throughout, but maybe there wasn’t enough time.)
Meh, seems that I’m losing G-Sync if I force the game to use “System” as the DPI scaling option, which is the only one that allows me to play at a sub-4K resolution.
Finding a good, stable set of settings for this game seems rather hard… Even when I managed to get ~60 FPS, moving slightly in any direction caused 40-60 ms frame spikes for some weird reason.
Their autodetect algorithm also sucks, as it defaulted my system to the 4K Very High preset, which got around 23 FPS in most scenes…
That wasn’t exactly relevant to this game; I don’t own Crysis.
The reason there’s DXGI / Vulkan interop in DOOM Eternal is because the Vulkan swapchain has to be copied to a DXGI swapchain for HDR. In order to handle all of that without any performance penalties, SK needs to be able to handle the creation of 1-2 swapchains a frame. That engine doesn’t recycle swapchains, so I had to implement recycling myself to keep memory limits reasonable. It’s because of that swapchain recycling that my device context code had to be re-written.
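Conceptually, the recycling part boils down to keying swapchains on the few properties that actually matter for reuse and handing a cached one back instead of creating a fresh one every frame. A rough sketch of that idea (not the actual SK code, names made up):

#include <dxgi1_2.h>
#include <wrl/client.h>
#include <map>
#include <tuple>

using Microsoft::WRL::ComPtr;

// Key on the properties that actually matter for reuse.
using SwapChainKey =
  std::tuple <HWND, UINT /*width*/, UINT /*height*/, DXGI_FORMAT, UINT /*buffers*/>;

static std::map <SwapChainKey, ComPtr <IDXGISwapChain1>> g_recycled_swapchains;

HRESULT CreateOrRecycleSwapChain ( IDXGIFactory2               *pFactory,
                                   IUnknown                    *pDevice,
                                   HWND                         hWnd,
                                   const DXGI_SWAP_CHAIN_DESC1 *pDesc,
                                   IDXGISwapChain1            **ppSwapChain )
{
  SwapChainKey key { hWnd, pDesc->Width,  pDesc->Height,
                           pDesc->Format, pDesc->BufferCount };

  auto it = g_recycled_swapchains.find (key);

  if (it != g_recycled_swapchains.end ())
  {
    // Hand the engine its "new" swapchain from the cache instead of
    // letting it pile up one or two fresh ones per frame.
    *ppSwapChain = it->second.Get ();
    (*ppSwapChain)->AddRef ();
    return S_OK;
  }

  HRESULT hr =
    pFactory->CreateSwapChainForHwnd ( pDevice, hWnd, pDesc,
                                       nullptr, nullptr, ppSwapChain );

  if (SUCCEEDED (hr))
    g_recycled_swapchains [key] = *ppSwapChain;

  return hr;
}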
I am not a fan of the “new” nanosuit mode that Crysis Remastered features. It’s basically a mode that more closely mirrors the later iterations of the series, but it ends up locking the player out of one of the movement speeds.
The classic nanosuit mode has three different speed levels:
Walk
Sprint / Walk (Speed)
Sprint (Speed) — This one eats up suit energy while being used.
The regular Sprint and the Walk (Speed) are about the same speed. The former is active when sprinting while Stealth/Strength/Armor is activated, while the latter is active when simply walking around with Speed activated.
Sprint (Speed) should be self-explanatory – it’s active when sprinting while the Speed buff is activated. This is an “active” ability, and so eats up energy when being used.
The new nanosuit mode has the game automatically switch over to the more “appropriate” buff on the fly. What this means in practice is that when you attempt to sprint, the game will use Sprint (Speed) and eat up your suit energy in the process. The only way to use the normal Sprint speed with the new nanosuit mode is to enter Armor mode and then sprint. Stealth technically also works, but as Stealth eats up more suit energy the faster you move, you probably wouldn’t like that option. Armor at least ‘only’ eats up suit energy if you get hit.
I guess this isn’t a major issue for most players, but it means the nuances of the classic nanosuit mode (e.g. regular sprinting without accidentally eating up energy, or losing it when being hit) are lost in the transition.
As someone who basically always plays Crysis on the hardest difficulty (not that it means much), constantly switching between the suit modes as needed, I found the new “on-the-fly” mode of the suit a bit distracting. But I guess it’s probably just me at the end of the day.
At least they retained the classic nanosuit mode as an option for players, so I appreciate that.
Also, blood apparently defaults to disabled in the game, lol.
Yeah, basically the game only uses Vulkan in the backend to facilitate hardware-accelerated raytracing, as there are currently no raytracing APIs or extensions in D3D11, if I remember correctly.
So if raytracing is enabled, and hardware-accelerated raytracing is possible, the D3D11 layer basically sends commands to the Vulkan raytracing backend and has it dispatch various raytracing compute operations to the GPU for processing.
It’s basically a similar approach to how developers can use DXGI to calculate GPU memory usage in Vulkan games. There’s nothing that really prevents developers from using the features of one API in conjunction with those of another, as long as the performance loss isn’t too high and they handle the translation between the two well enough.
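For reference, the DXGI side of that memory trick is small; even a pure Vulkan game can stand up a DXGI factory purely to read the adapter’s VRAM budget and current usage. A minimal sketch, nothing game-specific:

#include <dxgi1_4.h>
#include <wrl/client.h>
#include <cstdio>

#pragma comment (lib, "dxgi.lib")

using Microsoft::WRL::ComPtr;

int main (void)
{
  ComPtr <IDXGIFactory1> pFactory;
  if (FAILED (CreateDXGIFactory1 (IID_PPV_ARGS (&pFactory))))
    return 1;

  ComPtr <IDXGIAdapter1> pAdapter;
  if (FAILED (pFactory->EnumAdapters1 (0, &pAdapter)))
    return 1;

  // QueryVideoMemoryInfo lives on IDXGIAdapter3 (DXGI 1.4, Windows 10+).
  ComPtr <IDXGIAdapter3> pAdapter3;
  if (FAILED (pAdapter.As (&pAdapter3)))
    return 1;

  DXGI_QUERY_VIDEO_MEMORY_INFO mem = { };
  pAdapter3->QueryVideoMemoryInfo ( 0, DXGI_MEMORY_SEGMENT_GROUP_LOCAL, &mem );

  printf ( "VRAM budget: %llu MiB, in use: %llu MiB\n",
             mem.Budget >> 20, mem.CurrentUsage >> 20 );

  return 0;
}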
Seems a bit closer to the per-object motion blur effect the game originally touted as a really nifty feature, though that would have been overhauled significantly in this version. Lost Planet’s DX10 mode and some other games used geometry shaders, I think, to add to this effect, though with various drawbacks.
Crysis “solved” it, from memory, by going up to very high sample counts. (512? I think 32 or lower is common, but motion blur has also changed since these earlier methods.)
EDIT: Then again, it could just be the first-person player model and its data getting caught in a temporal artifact from the TAA, with the result being ghosting from prior frames.
Crysis 3 and its SMAA effects on the swaying grass, particularly at lower framerates, come to think of it.
(SMAA T2X and many other TAA modes overall, really; Battlefield V’s water effect and other things use temporal data too.)
EDIT: Still, I would have expected it on more than just the first-person player stuff.
Also, is that a rock?
(EDIT: Ah, it might be on those leaves in the background too, but there’s also DOF so it’s hard to tell.)
EDIT: Hmm, actually the outline more resembles shadows… maybe it’s the shadow buffer rather than motion blur, though TAA causing it is probably still the same story; it’s just a question of what it applies to and what glitches.
Eh, it’s probably similar regardless: TAA and glitches, particularly at a lower framerate.
Interesting to see, though, and I’m guessing the console is also locked, so there’s no way to get at the actual cvars and see what it’s using more specifically.
c:\users\amcol\saved games\crysisremastered\shaders\cache\d3d10\cgcshaders\pending\total_illumination@computetracepass(d)(rt320000000080000)(cs)(1267,17-77): warning X3556: integer divides may be much slower, try using uints if possible.
c:\users\amcol\saved games\crysisremastered\shaders\cache\d3d10\cgcshaders\pending\total_illumination@computetracepass(d)(rt320000000080000)(cs)(1284,16-24): warning X3556: integer divides may be much slower, try using uints if possible.
c:\users\amcol\saved games\crysisremastered\shaders\cache\d3d10\cgcshaders\pending\total_illumination@computetracepass(d)(rt320000000080000)(cs)(276,15-41): warning X3556: integer divides may be much slower, try using uints if possible.
c:\users\amcol\saved games\crysisremastered\shaders\cache\d3d10\cgcshaders\pending\total_illumination@computetracepass(d)(rt320000000080000)(cs)(276,15-56): warning X3556: integer divides may be much slower, try using uints if possible.
c:\users\amcol\saved games\crysisremastered\shaders\cache\d3d10\cgcshaders\pending\total_illumination@computetracepass(d)(rt320000000080000)(cs)(277,16-42): warning X3556: integer divides may be much slower, try using uints if possible.
***out of memory during compilation***
compilation failed; no code produced
I’m going to need either more VRAM or RAM. Not sure which, lol. My 24 GiB GPU will be here next month.
===
+-------------+-------------------------------------------------------------------------+
09/18/2020 16:41:14.917: [ DXGI ] [!] IDXGISwapChain::SetFullscreenState ({ Windowed }, 00000000h) -- [ CrysisRemastered.exe < CUNIXConsole::Print>, tid=0x0c74 ]
09/18/2020 16:41:14.917: [ DXGI ] [@] Return: DXGI_ERROR_INVALID_CALL - < DXGISwap_SetFullscreenState_Override >
09/18/2020 16:41:14.917: [ SpecialK ] Critical Assertion Failure: 'xrefs == 0' (C:\Users\amcol\source\repos\SpecialK\src\render\dxgi\dxgi_swapchain.cpp:177) -- unsigned long __cdecl IWrapDXGISwapChain::Release(void)
09/18/2020 16:41:14.917: [ SpecialK ] Critical Assertion Failure: 'ReadAcquire (&refs_) == 0' (C:\Users\amcol\source\repos\SpecialK\src\render\dxgi\dxgi_swapchain.cpp:197) -- unsigned long __cdecl IWrapDXGISwapChain::Release(void)
09/18/2020 16:41:15.974: [SEH-Except] Exception Code: e06d7363 - Flags: (Non-Continuable) - Arg Count: 4 [ Calling Module: C:\WINDOWS\SYSTEM32\vcruntime140.dll ]
09/18/2020 16:41:15.974: [SEH-Except] >> Best-Guess For Source of Exception: CxxThrowException
09/18/2020 16:41:34.939: [SEH-Except] Exception Code: e06d7363 - Flags: (Non-Continuable) - Arg Count: 4 [ Calling Module: C:\WINDOWS\SYSTEM32\vcruntime140.dll ]
09/18/2020 16:41:34.939: [SEH-Except] >> Best-Guess For Source of Exception: CxxThrowException
09/18/2020 16:41:36.994: [Perf Stats] At shutdown: 0.07 seconds and 90.92 MiB of CPU->GPU I/O avoided by 178 texture cache hits.
09/18/2020 16:41:36.996: Reset potential = 1885 / 1885
09/18/2020 16:41:36.996: [DX11TexMgr] Unexpected reference count for texture with crc32=70fc105b; refs=2, expected=1 -- removing from cache and praying...
09/18/2020 16:41:36.996: [DX11TexMgr] Unexpected reference count for texture with crc32=3e5e36ee; refs=2, expected=1 -- removing from cache and praying...
09/18/2020 16:41:36.996: [DX11TexMgr] Unexpected reference count for texture with crc32=6d1821b9; refs=3, expected=1 -- removing from cache and praying...
Yeah, I’ve been playing with that. Was never a fan of the modernised iteration. The game still awkwardly provides you with the modern prompts, though, as I mentioned before.
The game is kind of buggy, what’s up with that? XD And unfortunately the AI seems unchanged.
So far, I’ve found that setting everything to Low (excluding textures, post-processing, shaders and shadow quality) works best, at least for me. Textures I have set to Very High; shaders need to be set to Very High or the low-res buffer of the setting will be noticeable - I believe shaders is SVOGI, which also replaces AO. Post-processing I have at Medium, and shadow quality at High because I love good shadows. Still need to experiment more with the graphics settings, but it’s a lot smoother now and still looks great.
NT shared handles; I’ve never seen memory allocated that way before in a graphics engine. It must be for the RayTracing stuff.
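For context, allocating a texture with an NT shared handle looks roughly like this on the D3D11 side; the handle can then be imported by another API (e.g. via VK_KHR_external_memory_win32 in Vulkan). This is just the standard pattern, not the engine’s actual code:

#include <d3d11.h>
#include <dxgi1_2.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

HANDLE CreateSharedTexture ( ID3D11Device     *pDev,
                             UINT              width,
                             UINT              height,
                             ID3D11Texture2D **ppTex )
{
  D3D11_TEXTURE2D_DESC desc = { };
  desc.Width            = width;
  desc.Height           = height;
  desc.MipLevels        = 1;
  desc.ArraySize        = 1;
  desc.Format           = DXGI_FORMAT_R8G8B8A8_UNORM;
  desc.SampleDesc.Count = 1;
  desc.Usage            = D3D11_USAGE_DEFAULT;
  desc.BindFlags        = D3D11_BIND_SHADER_RESOURCE | D3D11_BIND_RENDER_TARGET;
  // The important part: request an NT handle rather than a legacy shared handle.
  desc.MiscFlags        = D3D11_RESOURCE_MISC_SHARED_NTHANDLE |
                          D3D11_RESOURCE_MISC_SHARED_KEYED_MUTEX;

  if (FAILED (pDev->CreateTexture2D (&desc, nullptr, ppTex)))
    return nullptr;

  ComPtr <IDXGIResource1> pRes;
  if (FAILED ((*ppTex)->QueryInterface (IID_PPV_ARGS (&pRes))))
    return nullptr;

  HANDLE hShared = nullptr;
  pRes->CreateSharedHandle ( nullptr,
                             DXGI_SHARED_RESOURCE_READ |
                             DXGI_SHARED_RESOURCE_WRITE,
                             nullptr, &hShared );

  // hShared can now be opened / imported on the other API's side.
  return hShared;
}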
This game pushes my GPU really hard. It gets as high as 50C. On a serious note, that’s the hottest this GPU has ever gotten. KINGPIN’s cooling is insane; even the original closed-loop cooler that came with it was great.
Never. I need a very, very long break. The Steam overlay has broken my input processing in many games and I don’t want to release any new code until I can get some better unit tests written.
I do not know how they did that, but it’s not good. Changing the degree symbol to that has the potential to cause buffer overruns. I hate low-level code.
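A toy example of why that kind of character swap can bite (purely hypothetical, nothing from the game’s code): the replacement takes more bytes than the original, so any fixed-size buffer sized for the old string ends up one byte too small.

#include <cstring>
#include <cstdio>

int main (void)
{
  char label [4] = "25\xB0";              // "25°" in a single-byte codepage: 3 bytes + NUL

  const char *replacement = "25\xC2\xB0"; // same text as UTF-8: 4 bytes + NUL

  // An unchecked copy like this would write one byte past the end of 'label':
  //strcpy (label, replacement);          // <-- overflow

  // A bounded check at least fails loudly instead of corrupting memory.
  if (strlen (replacement) + 1 > sizeof (label))
    puts ("replacement string does not fit; skipping");

  return 0;
}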
I mean, this totally still sucks, but at least the insanity I’m dealing with in input processing cannot really be blamed entirely on me:
Sat Sep 19 04:07:34 2020 UTC - Trying to setup input hook...
Sat Sep 19 04:07:34 2020 UTC - Set input hook...
Sat Sep 19 04:07:34 2020 UTC - Releasing all resources for device: 000000000B2D2048
Sat Sep 19 04:07:34 2020 UTC - Detaching input hook...
Sat Sep 19 04:07:34 2020 UTC - Releasing all resources for device: 000000000B2D2048
Sat Sep 19 04:07:34 2020 UTC - Releasing all resources for device: 000000000B2D2048
Sat Sep 19 04:07:34 2020 UTC - Releasing all resources for device: 000000000B2D2048
Sat Sep 19 04:07:34 2020 UTC - Releasing all resources for device: 000000000B2D2048
Sat Sep 19 04:07:34 2020 UTC - Releasing all resources for device: 000000000B2D2048
Sat Sep 19 04:07:34 2020 UTC - Releasing all resources for device: 000000000B2D2048
Sat Sep 19 04:07:34 2020 UTC - Releasing all resources for device: 000000000B2D2048
Sat Sep 19 04:07:34 2020 UTC - Trying to setup input hook...
Sat Sep 19 04:07:34 2020 UTC - Set input hook...
Now, repeat that for a couple thousand lines and this explains a lot. There’s nothing I can do about the problem. I wish the Steam overlay had an abort / rate limit on failure. Whatever’s happening here, it’s happening over and over and over and over. If my software were behaving that way, I’d include an automated runlevel demotion, since it’s clear after the 500th attempt that the overlay’s strategy of restarting itself every frame isn’t going to work.
// No XInput?! User shouldn't be playing games :P
if (pCtx == nullptr || pCtx->XInputGetState_Original == nullptr)
{
  SK_LOG0 ( ( L"Unable to hook XInput, attempting to enter limp-mode..."
              L" input-related features may not work as intended." ),
              L"Input Mgr." );

  InterlockedExchangePointer (
    (LPVOID *)&_xinput_ctx.primary_hook,
              &_xinput_ctx.XInput1_3 );

  pCtx =
    static_cast <SK_XInputContext::instance_s *>
      (ReadPointerAcquire ((volatile LPVOID *)&_xinput_ctx.primary_hook));

  HMODULE hModXInput1_3 =
    SK_Modules->LoadLibraryLL (L"XInput1_3.dll");

  if (SK_Modules->isValid (hModXInput1_3))
  {
    pCtx->XInputGetState_Original =
      (XInputGetState_pfn)
        SK_GetProcAddress ( L"XInput1_3.dll",
                            "XInputGetState" );
  }
}

queued_hooks = true;
SK auto-turns off input processing features to try to go into a partially working “limp mode” when something breaks. Why the Steam overlay doesn’t do this, I don’t know. Valve’s engineers are pretty damn smart; they could easily work around this more safely.
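The kind of demotion I mean is nothing exotic; roughly something like this (a hypothetical sketch, not SK’s or the overlay’s actual code):

#include <cstdio>

static constexpr int kMaxHookAttempts = 500;
static int           hook_failures    = 0;
static bool          hooks_demoted    = false;

// Stand-in for whatever actually installs the hook; always fails here so
// the demotion path gets exercised.
bool TryInstallInputHook (void) { return false; }

void OncePerFrame (void)
{
  if (hooks_demoted)
    return;                  // stop hammering the same failing path every frame

  if (TryInstallInputHook ())
  {
    hook_failures = 0;       // success resets the failure counter
    return;
  }

  if (++hook_failures >= kMaxHookAttempts)
  {
    hooks_demoted = true;    // demote: give up and run without the hook
    fprintf ( stderr, "Input hook failed %d times; disabling further retries.\n",
                        hook_failures );
  }
}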
Oh well, I’ve been aware of this for a long time, but never saw fit to use it until now.
There’s technically a global killswitch for the Steam overlay if your software hooks GetProcAddress (...).
The Steam client checks the memory address of IsOverlayEnabled every frame.
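For illustration, a GetProcAddress hook is only a few lines with an off-the-shelf hook library like MinHook (SK uses its own hook engine; the export name being filtered below is just a placeholder, not the actual lookup the overlay’s check depends on):

#include <Windows.h>
#include <MinHook.h>
#include <cstdint>
#include <cstring>

using GetProcAddress_pfn = FARPROC (WINAPI *)(HMODULE, LPCSTR);
static GetProcAddress_pfn GetProcAddress_Original = nullptr;

FARPROC WINAPI GetProcAddress_Detour (HMODULE hModule, LPCSTR lpProcName)
{
  // Imports by ordinal come through as small integers rather than strings.
  if ((reinterpret_cast <uintptr_t> (lpProcName) >> 16) != 0)
  {
    // Hypothetical filter: pretend this export does not exist.
    if (strcmp (lpProcName, "SomeOverlayExport") == 0)
      return nullptr;
  }

  return GetProcAddress_Original (hModule, lpProcName);
}

bool InstallGetProcAddressHook (void)
{
  if (MH_Initialize () != MH_OK)
    return false;

  if (MH_CreateHookApi ( L"kernel32.dll", "GetProcAddress",
                         reinterpret_cast <LPVOID>   (GetProcAddress_Detour),
                         reinterpret_cast <LPVOID *> (&GetProcAddress_Original) ) != MH_OK)
    return false;

  return MH_EnableHook (MH_ALL_HOOKS) == MH_OK;
}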