* implement load/store instructions for types smaller than dwords
* initialize s16/s8 types
* set profile for int8/16/64
* also need to zero-extend u8/u16 to the u32 result (see the sketch after this list)
* document unrelated bugs with atomic fmin/max
* remove profile checks and simple emit for added opcodes
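As a rough, standalone illustration of the zero-extension point above (plain C++, not the recompiler's IR API), a u8/u16 load must clear the upper bits of the 32-bit destination:

    #include <cstddef>
    #include <cstdint>
    #include <cstring>

    // Widen a sub-dword load into a u32 result, mirroring what the recompiled
    // shader must do for u8/u16 buffer loads. Illustration only.
    uint32_t ZeroExtendLoad(const uint8_t* memory, size_t offset, unsigned bit_size) {
        switch (bit_size) {
        case 8:
            return memory[offset]; // upper 24 bits of the result become zero
        case 16: {
            uint16_t v16;
            std::memcpy(&v16, memory + offset, sizeof(v16));
            return v16; // upper 16 bits of the result become zero
        }
        default: {
            uint32_t v32;
            std::memcpy(&v32, memory + offset, sizeof(v32));
            return v32;
        }
        }
    }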
---------
Co-authored-by: georgemoralis <giorgosmrls@gmail.com>
* resource_tracking: Mark image as written when it's used with atomics
* texture_cache: Remove meta registered flag
The flag was mostly useless, and it is possible for images to switch metas
* vk_rasterizer: Use xor as heuristic for HTILE clear
* renderer_vulkan: Respect provoking vertex setting
* renderer_vulkan: Handle rasterization discard
* renderer_vulkan: Implement logic ops
* renderer_vulkan: Properly implement depth clamp and clip
* renderer_vulkan: Handle line width
* Fix build
* vk_pipeline_cache: Don't check depth clamp without a depth buffer
* liverpool: Fix line control offset
* vk_pipeline_cache: Don't run search if depth clamp is disabled
* vk_pipeline_cache: Allow using viewport range when it's more restrictive than depth clamp
* liverpool: Disable depth clip when near and far planes have different settings
* vk_graphics_pipeline: Move warning to pipeline
* vk_pipeline_cache: Revert viewport check and remove log
* vk_graphics_pipeline: Enable depth clamp when depth clip is disabled and extension is not supported
Without the depth clip extension, depth clipping is controlled by depth clamping
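A minimal sketch of that fallback (Vulkan C API for brevity; not the project's actual pipeline code):

    #include <vulkan/vulkan.h>

    // When VK_EXT_depth_clip_enable is unavailable, clipping can only be
    // influenced through depthClampEnable, so clamping is forced on whenever
    // the guest disables clipping. Illustration only.
    void SetupDepthClip(VkPipelineRasterizationStateCreateInfo& raster,
                        VkPipelineRasterizationDepthClipStateCreateInfoEXT& depth_clip,
                        bool guest_depth_clip, bool has_depth_clip_ext) {
        if (has_depth_clip_ext) {
            // Extension available: clip and clamp are controlled independently.
            depth_clip.sType =
                VK_STRUCTURE_TYPE_PIPELINE_RASTERIZATION_DEPTH_CLIP_STATE_CREATE_INFO_EXT;
            depth_clip.depthClipEnable = guest_depth_clip ? VK_TRUE : VK_FALSE;
            depth_clip.pNext = raster.pNext;
            raster.pNext = &depth_clip;
        } else if (!guest_depth_clip) {
            // No extension: clipping is implied by !depthClampEnable, so enable
            // clamping when the guest wants clipping off.
            raster.depthClampEnable = VK_TRUE;
        }
    }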
* shader_recompiler: Replace buffer pulling with attribute divisor for instance step rates
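The idea behind the attribute-divisor replacement, sketched with the standard VK_EXT_vertex_attribute_divisor structures (illustrative, not the actual pipeline cache code): the per-instance binding advances once every `step_rate` instances, so the shader no longer has to pull the data from a buffer manually.

    #include <vulkan/vulkan.h>

    // Describe a binding whose attribute data steps once per `step_rate` instances.
    VkVertexInputBindingDivisorDescriptionEXT MakeStepRateDivisor(uint32_t binding,
                                                                  uint32_t step_rate) {
        VkVertexInputBindingDivisorDescriptionEXT divisor{};
        divisor.binding = binding;   // binding using VK_VERTEX_INPUT_RATE_INSTANCE
        divisor.divisor = step_rate; // advance the attribute every step_rate instances
        return divisor;
    }

The descriptions are then chained into the pipeline's vertex input state through VkPipelineVertexInputDivisorStateCreateInfoEXT.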
* flatten_extended_userdata: Remove special step rate buffer handling
* Review comments
* spirv_emit_context: Name all instance rate attribs properly
* spirv: Merge ReadConstBuffer again
The template function only has one user now
* attribute: Add missing attributes
* translate: Reimplement step rate instance id
* Resolve validation warnings
* shader_recompiler: Separate vertex inputs from LS stage, cleanup tess
* buffer_cache: Handle inline data to flexible memory
* control_flow: Fix single instruction scopes edge case
Fixes the following pattern:

    v_cmpx_gt_u32 cond
    buffer_store_dword value
    .LABEL:

Before:

    buffer[index] = value;

After:

    if (cond)
    {
        buffer[index] = value;
    }
* vector_memory: Handle soffset when offen is false
When offen is not used, we can substitute the offset argument with soffset and have it handled correctly
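A simplified sketch of that substitution (standalone C++, not the translator's actual code); the real addressing also folds in the instruction's immediate offset:

    #include <cstdint>

    // The address operand passed to the buffer IR op: the VGPR offset when offen
    // is set, otherwise the scalar soffset, so soffset is no longer dropped in
    // the offen == false case. Illustration only.
    uint32_t SelectAddressOperand(bool offen, uint32_t vgpr_offset, uint32_t soffset) {
        return offen ? vgpr_offset : soffset;
    }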
* scalar_alu: Handle sharp moves with S_MOV_B64
This fixes "unable to track sharp" errors when this pattern is used in a shader
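A standalone illustration of the tracking idea (hypothetical data structure, not the recompiler's code): a 64-bit SGPR move copies two consecutive 32-bit registers at once, so the sharp-origin mapping has to follow both halves.

    #include <cstdint>
    #include <unordered_map>

    // Propagate the "where did this sharp half come from" mapping across S_MOV_B64.
    void TrackSMovB64(std::unordered_map<uint32_t, uint32_t>& sharp_origin,
                      uint32_t dst_sgpr, uint32_t src_sgpr) {
        for (uint32_t half = 0; half < 2; ++half) {
            const auto it = sharp_origin.find(src_sgpr + half);
            if (it != sharp_origin.end()) {
                sharp_origin[dst_sgpr + half] = it->second; // follow the move
            }
        }
    }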
* emulator: Add log
* video_core: Bump binary info search range and buffer num
* buffer_cache: Bring back upload batching and temporary buffer
Because that PR fused the write and read protections under a single function call, the actual memory copy had to move inside the lambda so it runs before the read protection kicks in. However, on certain large data transfers this had the potential for data corruption. If, for example, an upload had two copies, a 400MB one and a 300MB one, the first would fit in the staging buffer, very likely with an induced stall. The second wouldn't have space to fit alongside the first, but it is still small enough for the buffer to hold on its own, so the staging buffer would flush and wait in order to copy it, overwriting the previous transfer.
To address this, the upload function has been reworked to allow batching as before, but with the new locking behavior. The condition for using temporary buffers has also been expanded to include cases where the staging buffer would stall, which should increase performance a little in some cases.
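A hypothetical sketch of that upload-path decision (structure and names are illustrative, not the actual buffer_cache API):

    #include <cstddef>

    struct StagingState {
        size_t free_until_wait; // bytes usable before waiting on the GPU would be required
    };

    enum class UploadPath { Staging, Temporary };

    // Copies that would force the staging ring to flush and wait, or that do not
    // fit at all, go to a temporary buffer so they cannot overwrite a batched
    // transfer that is still in flight.
    UploadPath ChooseUploadPath(const StagingState& staging, size_t copy_size) {
        if (copy_size <= staging.free_until_wait) {
            return UploadPath::Staging; // batch with the other pending uploads
        }
        return UploadPath::Temporary;
    }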
* buffer_cache: Move buffer barriers and copy outside of lock range
* texture_cache: Async download of GPU modified linear images
* liverpool: Back to fewer submits
* texture_cache: Don't download depth images
* config: Add option for linear image readback
* buffer_cache: Fix various thread races on data upload and invalidation
* memory_tracker: Improve locking more on invalidation
---------
Co-authored-by: georgemoralis <giorgosmrls@gmail.com>
* vector_alu: Improve handling of mbcnt append/consume patterns
The existing implementation was written to handle a single pattern of mbcnt before the DS_APPEND instruction:

    v_mbcnt_hi_u32_b32 vX, exec_hi, 0
    v_mbcnt_lo_u32_b32 vX, exec_lo, vX
    ds_append vY offset:4 gds
    v_add_i32 vX, vcc, vY, vX
In this case, however, the DS_APPEND comes before the mbcnt pattern:

    ds_append vX gds
    v_mbcnt_hi_u32_b32 vY, exec_hi, vX
    v_mbcnt_lo_u32_b32 vZ, exec_lo, vY
The mbcnt instructions always come in hi/lo pairs and are in general quite flexible, but they assume a subgroup size of 64, so they are not recompiled literally. Together with DS_APPEND they are used to derive a unique per-thread index into a buffer (different from using thread_id, as the order could be random). The DS_APPEND instruction works at the subgroup level, adding the number of active threads in the subgroup to the GDS counter, essentially giving a multiple-of-64 base index to all threads. Then each thread executes the mbcnt pair, which returns the number of active threads with an id lower than its own, and adds that to the base.
The recompiler translates DS_APPEND into an atomic increment of a storage buffer counter, which already gives the desired unique index, so this pattern is a no-op. On main it was set to zero as per the first pattern, to avoid altering the DS_APPEND result. The new handling passes through the initial value of the pattern instead, which has the same effect but works in either case.
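To see why the pattern can be passed through, here is a standalone comparison of the two semantics (illustration only, not recompiler code): the GCN sequence builds a unique index from a per-wave base plus the lane's mbcnt, while the recompiled DS_APPEND already hands out one unique value per active thread.

    #include <bit>
    #include <cstdint>

    // GCN view: DS_APPEND adds popcount(exec) once for the wave, then each lane
    // offsets that base by the number of active lanes below it (the mbcnt pair).
    uint32_t GcnAppendIndex(uint64_t exec, uint32_t lane, uint32_t& gds_counter) {
        const uint32_t base = gds_counter;
        gds_counter += std::popcount(exec);
        const uint64_t below = (lane == 0) ? 0ull : (exec & (~0ull >> (64 - lane)));
        return base + std::popcount(below);
    }

    // Recompiled view: one atomic increment per active thread is already unique,
    // so the following mbcnt pair only needs to pass its initial value through.
    uint32_t RecompiledAppendIndex(uint32_t& counter) {
        return counter++;
    }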
* vk_rasterizer: Always sync DMA buffers
* shader_recompiler: Simplify dma types
Only U32 is needed for S_LOAD_DWORD
* shader_recompiler: Perform address shift on IR level
Buffer instructions now expect the address in the data unit they work on. Doing the shift at the IR level allows us to optimize some operations away in the common case
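A standalone sketch of the folding opportunity this enables (not the actual IR pass): guest code usually builds the byte address as index * 4, so once the buffer op takes a dword index the shift pair cancels.

    #include <cstdint>

    // The IR-level conversion from byte address to dword index.
    uint32_t DwordIndexFromByteAddress(uint32_t byte_address) {
        return byte_address >> 2;
    }

    // Common case: the byte address was produced as index << 2, so the two
    // shifts cancel and the optimizer can emit just `index` (for index < 2^30).
    uint32_t FoldedDwordIndex(uint32_t index) {
        return (index << 2) >> 2;
    }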
* shader_recompiler: Optimize common buffer access pattern
* emit_spirv: Use 32-bit integer ops for fault buffer
Not many GPUs have 8-bit bitwise OR operations, so those would likely require some emulation overhead in the driver
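An illustrative sketch of what marking a fault then looks like with 32-bit operations only (mirrors the reasoning above, not the emitted SPIR-V itself):

    #include <cstdint>

    // The fault buffer is addressed as an array of u32 words instead of bytes,
    // so only 32-bit shifts and ORs are needed on the GPU side.
    void MarkFaultPage(uint32_t* fault_words, uint32_t page_index) {
        const uint32_t word = page_index / 32u;        // which u32 word holds the bit
        const uint32_t bit = 1u << (page_index % 32u); // bit within that word
        fault_words[word] |= bit;                      // 32-bit OR (atomic OR in the shader)
    }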
* resource_tracking_pass: Fix texel buffer shift
* Bit array test
* Some corrections
* Fix AVX path on SetRange
* Finish bitArray
* Batched protect progress
* Inclusion fix
* Last logic fixes for BitArray
* Page manager: batch protect, masked ranges
* Page manager bitarray
* clang-format
* Fix out of bounds read
* clang
* clang
* Lock during callbacks
* Rename untracked to writeable
* Construct and mask in one step
* Sync on region mutex for the whole protection
This is a temporary workaround until a fix is found for the page manager having issues when multiple threads update the same page at the same time.
* Bring back the gpu masking until properly handled
* Sync page manager protections
* clang-format
* Rename and fixups
* Fix clang-format one more time
* kek
* texture_cache: Avoid gpu tracking assert on sparse image
At the moment, just take the easy way of creating the entire image normally and uploading unmapped subresources as zero
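A simplified sketch of that workaround (illustrative names, not the texture_cache code): the image keeps its full mip/layer set, and any subresource whose guest memory is not mapped is uploaded as zeros instead of being read from memory.

    #include <cstddef>
    #include <cstdint>
    #include <vector>

    struct Subresource {
        bool guest_mapped;
        size_t size_bytes;
        const uint8_t* guest_data; // valid only when guest_mapped is true
    };

    std::vector<uint8_t> BuildUploadData(const Subresource& sub) {
        if (!sub.guest_mapped) {
            return std::vector<uint8_t>(sub.size_bytes, 0); // zeros for unmapped parts
        }
        return std::vector<uint8_t>(sub.guest_data, sub.guest_data + sub.size_bytes);
    }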
* tile_manager: Downgrade assert to error
* Fix macOS
* pixel_format: Remove unused tables, refactor
* host_compatibility: Cleanup and support uncompressed views of compressed formats
* texture_cache: Handle compressed views of uncompressed images
* tile_manager: Bump max supported mips to 16
Fixes a crash during start
* oops
* texture_cache: Fix order of format compat check