The DB_SHADER_CONTROL register has several enable flags that must be set before certain depth exports take effect.
This commit adds logic to respect the values in this register when performing depth exports, which fixes the regression in earlier versions of KNACK.
I've also renamed DepthBufferControl to DepthShaderControl, since that's closer to the official name for the register.
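As a rough sketch of the gating idea (field positions are illustrative, not authoritative, and the helper is hypothetical), a depth export is only honored when the corresponding enable bit in the register is set:

    #include <cstdint>

    // Hedged sketch of DB_SHADER_CONTROL decoding; bit layout is illustrative.
    struct DbShaderControl {
        uint32_t raw;
        bool ZExportEnable() const { return (raw >> 0) & 1; }
        bool StencilTestValExportEnable() const { return (raw >> 1) & 1; }
    };

    // Only emit the shader's depth export when the register enables it.
    bool ShouldExportDepth(const DbShaderControl& ctl) {
        return ctl.ZExportEnable();
    }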
* vk_rasterizer: Reorder image query in fast clear elimination
Fixes missing clears when a texture is cleared using this method but never actually used for rendering, by ensuring the texture cache has at least a chance to register the cmask
* shader_recompiler: Partial support for ANCILLARY_ENA
* pixel_format: Add number conversion for the BC6 srgb format
* texture_cache: Support aliases of 3D and 2D array images
Used by UE to render its post-processing LUT
* pixel_format: Test BC6 srgb as unorm
Still not sure what is up with snorm/unorm; it can be useful to have both options to compare for now
* video_core: Use attachment feedback layout instead of general if possible
UE games often do mipgen passes where the previous mip of the image being rendered to is bound for reading. This appears to cause corruption issues, so use the attachment feedback loop extension to ensure correct output
* renderer_vulkan: Improve feedback loop code
* Set proper usage flag for feedback loop usage
* Add dynamic state extension and enable it for color aspect when necessary
* Check if the image is bound instead of using force_general, for better code consistency
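A minimal sketch of the layout selection described above, assuming VK_EXT_attachment_feedback_loop_layout is available (the chooser helper itself is hypothetical):

    #include <vulkan/vulkan.h>

    VkImageLayout ChooseColorLayout(bool bound_and_sampled, bool has_feedback_loop_ext) {
        if (bound_and_sampled) {
            // Requires the image to be created with
            // VK_IMAGE_USAGE_ATTACHMENT_FEEDBACK_LOOP_BIT_EXT.
            return has_feedback_loop_ext ? VK_IMAGE_LAYOUT_ATTACHMENT_FEEDBACK_LOOP_OPTIMAL_EXT
                                         : VK_IMAGE_LAYOUT_GENERAL;
        }
        return VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL;
    }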
* shader_recompiler: More proper depth export implementation
* shader_recompiler: Fix bug in output modifiers
* shader_recompiler: Fix sampling from MSAA images
This is not allowed by any graphics API, but the hardware seems to support it somehow and it can be encountered. To avoid glitched output, translate it to a texelFetch call on sample 0
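A CPU-side model of the substitution (not the recompiler's actual API; all names here are made up): the normalized-coordinate sample becomes a nearest-texel fetch that always reads sample 0.

    #include <algorithm>

    struct MsaaTexel { float samples[4]; };

    // Equivalent of lowering texture() on an MSAA image to texelFetch(..., /*sample=*/0).
    float SampleMsaaAsFetch(const MsaaTexel* image, int width, int height, float u, float v) {
        const int x = std::clamp(static_cast<int>(u * width), 0, width - 1);
        const int y = std::clamp(static_cast<int>(v * height), 0, height - 1);
        return image[y * width + x].samples[0];
    }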
* clang format
* image: Add back missing code
* shader_recompiler: Better ancillary implementation
It is now implemented with a custom attribute that is constant-propagated depending on which parts of it are extracted. It will assert if an unknown part is used or if the attribute itself is not removed by dead code elimination
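A toy model of that constant propagation (the part layout is an assumption; only the case named here is taken from the text):

    #include <cassert>
    #include <cstdint>

    enum class AncillaryPart { SampleIndex };

    // Extracts of known parts fold to a concrete value; anything else asserts,
    // as does leaving the attribute alive past dead code elimination.
    uint32_t FoldAncillaryExtract(AncillaryPart part, uint32_t sample_index) {
        switch (part) {
        case AncillaryPart::SampleIndex:
            return sample_index;
        }
        assert(false && "unknown ancillary part");
        return 0;
    }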
* copy_shader: Ignore export channels that are not enabled
* constant_propagation: Invalidate ancillary after successful elimination
* spirv: Fix f11/f10 conversion to f32
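For reference, a standalone sketch of the 11-bit case (the 10-bit case is identical with a 5-bit mantissa): both are unsigned floats with a 5-bit exponent and bias 15.

    #include <cmath>
    #include <cstdint>

    float F11ToF32(uint32_t bits) {
        const uint32_t exponent = (bits >> 6) & 0x1f;
        const uint32_t mantissa = bits & 0x3f;
        if (exponent == 0x1f) { // all-ones exponent: inf/NaN
            return mantissa == 0 ? INFINITY : NAN;
        }
        if (exponent == 0) { // denormal: 2^-14 * (mantissa / 64)
            return std::ldexp(static_cast<float>(mantissa) / 64.0f, -14);
        }
        // normal: 2^(exponent - 15) * (1 + mantissa / 64)
        return std::ldexp(1.0f + static_cast<float>(mantissa) / 64.0f,
                          static_cast<int>(exponent) - 15);
    }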
---------
Co-authored-by: georgemoralis <giorgosmrls@gmail.com>
* Allow vector and scalar offset in buffer address arg to LoadBuffer/StoreBuffer
* remove is_ring check
* fix atomics and update pattern matching for tess factor stores
* remove old asserts about soffset
* small fixes
* copyright
* Handle sgpr initialization for 2 special hull shader values, including tess factor buffer offset
* vk_pipeline_cache: Cleanup graphics key refresh
* position: Don't assert on None mapping
Also check outputs in runtime info so the shader is recompiled if they change
* video_core: support for RT layer outputs
- support for RT layer outputs
- refactor for handling of export attributes
- move output->attribute mapping to a separate header
* export: Rework render target exports
- Centralize all code related to MRT exports into a single function to make it easier to follow
- Apply swizzle to output RGBA colors instead of the render target channel.
This fixes swizzles on formats with < 4 channels
For example, with render target format R8_UNORM and COMP_SWAP ALT_REV, the previous code would output

    frag_color.a = color.r;

instead of

    frag_color.r = color.a;

which would result in incorrect output in some cases
* vk_pipeline_cache: Apply swizzle to write masks
---------
Co-authored-by: polyproxy <47796739+polybiusproxy@users.noreply.github.com>
Previously a buffer load in a vertex shader could be treated like a ring access, dropping the offen vgpr and possibly asserting during resource tracking because of a mismatch between types (u32x2 vs U32), caused by inconsistencies in the flags (index_enable and offset_enable)
* shader_recompiler: Remove remnants of old discard
Also constant-propagate conditional discard when the condition is constant
* resource_tracking_pass: Rework sharp tracking for robustness
* resource_tracking_pass: Add source dominance analysis
When reachability is not enough to prune the source list, check if a source dominates all other sources
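A minimal sketch of the dominance query itself, assuming an immediate-dominator tree is already available (types and names are hypothetical):

    #include <span>

    struct Block {
        const Block* idom = nullptr; // immediate dominator, null at the entry
    };

    // a dominates b iff a appears on b's immediate-dominator chain (or a == b).
    bool Dominates(const Block* a, const Block* b) {
        for (const Block* it = b; it != nullptr; it = it->idom) {
            if (it == a) {
                return true;
            }
        }
        return false;
    }

    bool DominatesAllOthers(const Block* candidate, std::span<const Block* const> sources) {
        for (const Block* other : sources) {
            if (!Dominates(candidate, other)) {
                return false;
            }
        }
        return true;
    }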
* resource_tracking_pass: Fix immediate check
How did this work before?
* resource_tracking_pass: Remove unused template type
* readlane_elimination_pass: Don't add phi when all args are the same
The new sharp tracking exposed some bad sources on sampler sharps with the aniso-disable pattern that were also part of a readlane pattern; fix tracking by removing the unnecessary phis in between
* resource_tracking_pass: Allow phi in disable aniso pattern
* resource_tracking_pass: Handle invalid buffer sharps and more phis in the aniso pattern
* implement load/store instructions for types smaller than a dword
* initialize s16/s8 types
* set profile for int8/16/64
* also need to zero-extend u8/u16 to the u32 result
* document unrelated bugs with atomic fmin/max
* remove profile checks and simple emit for added opcodes
---------
Co-authored-by: georgemoralis <giorgosmrls@gmail.com>
* resource_tracking: Mark image as written when it's used with atomics
* texture_cache: Remove meta registered flag
It was mostly useless, and it is possible for images to switch metas
* vk_rasterizer: Use xor as heuristic for HTILE clear
* shader_recompiler: Replace buffer pulling with attribute divisor for instance step rates
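On the Vulkan side, a step rate maps onto VK_EXT_vertex_attribute_divisor roughly like this (binding and divisor values are illustrative):

    #include <vulkan/vulkan.h>

    VkVertexInputBindingDivisorDescriptionEXT divisor{
        .binding = 1, // hypothetical per-instance binding
        .divisor = 2, // advance once every 2 instances, i.e. the step rate
    };
    VkPipelineVertexInputDivisorStateCreateInfoEXT divisor_state{
        .sType = VK_STRUCTURE_TYPE_PIPELINE_VERTEX_INPUT_DIVISOR_STATE_CREATE_INFO_EXT,
        .vertexBindingDivisorCount = 1,
        .pVertexBindingDivisors = &divisor,
    };
    // divisor_state is then chained into VkPipelineVertexInputStateCreateInfo::pNext.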
* flatten_extended_userdata: Remove special step rate buffer handling
* Review comments
* spirv_emit_context: Name all instance rate attribs properly
* spirv: Merge ReadConstBuffer again
the template function only has one user now
* attribute: Add missing attributes
* translate: Reimplement step rate instance id
* Resolve validation warnings
* shader_recompiler: Separate vertex inputs from LS stage, cleanup tess
* buffer_cache: Handle inline data to flexible memory
* control_flow: Fix single instruction scopes edge case
Fixes the following pattern:

    v_cmpx_gt_u32 cond
    buffer_store_dword value
    .LABEL:

Before:

    buffer[index] = value;

After:

    if (cond)
    {
        buffer[index] = value;
    }
* vector_memory: Handle soffset when offen is false
When offen is not used, we can substitute the offset argument with soffset and have it handled correctly
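The address composition this relies on, sketched as plain arithmetic (hedged; operand names follow the MUBUF encoding, the helper is made up):

    #include <cstdint>

    // addr = soffset + inst_offset + (offen ? vgpr_offset : 0)
    // When offen is false there is no vgpr offset, so soffset can simply take
    // the place of the offset argument handed to LoadBuffer/StoreBuffer.
    uint32_t BufferOffset(uint32_t soffset, uint32_t inst_offset, bool offen,
                          uint32_t vgpr_offset) {
        return soffset + inst_offset + (offen ? vgpr_offset : 0u);
    }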
* scalar_alu: Handle sharp moves with S_MOV_B64
This fixes "unable to track sharp" errors when this pattern is used in a shader
* emulator: Add log
* video_core: Bump binary info search range and buffer num
* vector_alu: Improve handling of mbcnt append/consume patterns
The existing implementation was written to handle a single pattern of mbcnt before the DS_APPEND instruction:

    v_mbcnt_hi_u32_b32 vX, exec_hi, 0
    v_mbcnt_lo_u32_b32 vX, exec_lo, vX
    ds_append vY offset:4 gds
    v_add_i32 vX, vcc, vY, vX
In this case, however, the DS_APPEND is before the mbcnt pattern:

    ds_append vX gds
    v_mbcnt_hi_u32_b32 vY, exec_hi, vX
    v_mbcnt_lo_u32_b32 vZ, exec_lo, vY
The mbcnt instructions always come in hi/lo pairs and are in general quite flexible, but they assume the subgroup size is 64, so they are not recompiled literally. Together with DS_APPEND they are used to derive a unique per-thread index into a buffer (different from using thread_id, as the order could be random). The DS_APPEND instruction works at the subgroup level, adding the number of active threads in the subgroup to the GDS counter, essentially handing a multiple-of-64 base index to all threads. Then each thread executes the mbcnt pair, which returns the number of active threads with an id less than its own, and adds it to the base.
The recompiler translates DS_APPEND into an atomic increment of a storage buffer counter, which already gives the desired unique index, so this pattern is a no-op. On main it was set to zero as per the first pattern to avoid altering the DS_APPEND result. The new handling passes through the initial value of the pattern instead, which has the same effect but works in either case.
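A CPU-side model of what the pattern computes per thread, assuming a 64-wide wave where exec is the active-lane mask (helper names are made up):

    #include <atomic>
    #include <bit>
    #include <cstdint>

    // ds_append: one wave-uniform atomic add of the active-thread count;
    // the pre-add value is the wave's base index.
    uint32_t DsAppend(std::atomic<uint32_t>& gds_counter, uint64_t exec) {
        return gds_counter.fetch_add(static_cast<uint32_t>(std::popcount(exec)));
    }

    // v_mbcnt_lo/hi pair: number of active lanes with an id below `lane` (< 64).
    uint32_t Mbcnt(uint64_t exec, uint32_t lane) {
        const uint64_t below = (uint64_t{1} << lane) - 1;
        return static_cast<uint32_t>(std::popcount(exec & below));
    }

    // Unique per-thread slot = wave base + rank among active lanes. The
    // recompiler's per-thread atomic increment already yields this value,
    // which is why the surrounding mbcnt math becomes a pass-through.
    uint32_t UniqueSlot(uint32_t wave_base, uint64_t exec, uint32_t lane) {
        return wave_base + Mbcnt(exec, lane);
    }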
* vk_rasterizer: Always sync DMA buffers
* Added V_CMP_EQ_U64 shader opcode support and 64-bit relational operators (<, >, <=, >=)
* Fixed clang-format crying because I typed xargs clang-format instead of xargs clang-format-19
* Replaced V_CMP_EQ_U64 code to match V_CMP_U32, for testing
* Updated V_CMP_U64 for future addons
* shader_recompiler: Simplify dma types
Only U32 is needed for S_LOAD_DWORD
* shader_recompiler: Perform address shift on IR level
Buffer instructions now expect the address in the data units they work on. Doing the shift at IR level will allow us to optimize some operations away in the common case
* shader_recompiler: Optimize common buffer access pattern
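A toy peephole showing the folding this enables (IR shapes are hypothetical): a byte address built as index << 2 and consumed by a dword access as addr >> 2 cancels out, since the bits discarded by the right shift were introduced by the left shift.

    #include <cstdint>
    #include <optional>

    struct Shl {
        uint32_t operand; // the original element index
        uint32_t amount;
    };

    // Matches (x << n) >> n and folds it back to x.
    std::optional<uint32_t> FoldShrOfShl(const Shl& shl, uint32_t shr_amount) {
        if (shl.amount == shr_amount) {
            return shl.operand;
        }
        return std::nullopt;
    }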
* emit_spirv: Use 32-bit integer ops for fault buffer
Not many GPUs have 8-bit bitwise OR operations, so emulating them would probably incur some overhead in the driver
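A CPU-side model of the 32-bit approach (the buffer layout is an assumption): each page's fault flag lives in a 32-bit word and is set with a word-sized atomic OR instead of an 8-bit one.

    #include <atomic>
    #include <cstdint>

    void MarkFaultPage(std::atomic<uint32_t>* fault_words, uint32_t page) {
        const uint32_t word = page / 32;
        const uint32_t bit = page % 32;
        fault_words[word].fetch_or(1u << bit, std::memory_order_relaxed);
    }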
* resource_tracking_pass: Fix texel buffer shift