[X86] Preserve volatile ATOMIC_LOAD_OR nodes
Fixes #63692.

In reference to volatile memory accesses, the langref says:
> the backend should never split or merge target-legal volatile load/store instructions.
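
A minimal C++ sketch of the problematic pattern (hypothetical, modeled on the `prefault` loop in pr63692.ll): a no-op volatile RMW-or. Before this change, the X86 backend rewrote the `or 0` into a bare fence, dropping the volatile memory access; with the `isVolatile()` check, the access is kept (lowering to `lock orb`).

```cpp
#include <atomic>

// Hypothetical reproduction: fetch_or with 0 leaves the value unchanged,
// but because the atomic object is volatile the langref requires the
// backend to preserve the actual load/store rather than fold it away.
void prefault_byte(volatile std::atomic<unsigned char> &b) {
    // Compiles to `atomicrmw volatile or ptr %b, i8 0 seq_cst` in IR,
    // which must reach memory (e.g. `lock orb $0, (%rdi)` on x86-64).
    b.fetch_or(0, std::memory_order_seq_cst);
}
```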

Differential Revision: https://reviews.llvm.org/D154609
omern1 committed Jul 11, 2023
1 parent 7b72920 commit e148899
Showing 2 changed files with 4 additions and 2 deletions.
4 changes: 3 additions & 1 deletion llvm/lib/Target/X86/X86ISelLowering.cpp
@@ -33552,7 +33552,9 @@ static SDValue lowerAtomicArith(SDValue N, SelectionDAG &DAG,
   // changing, all we need is a lowering for the *ordering* impacts of the
   // atomicrmw. As such, we can chose a different operation and memory
   // location to minimize impact on other code.
-  if (Opc == ISD::ATOMIC_LOAD_OR && isNullConstant(RHS)) {
+  // The above holds unless the node is marked volatile in which
+  // case it needs to be preserved according to the langref.
+  if (Opc == ISD::ATOMIC_LOAD_OR && isNullConstant(RHS) && !AN->isVolatile()) {
// On X86, the only ordering which actually requires an instruction is
// seq_cst which isn't SingleThread, everything just needs to be preserved
// during codegen and then dropped. Note that we expect (but don't assume),
2 changes: 1 addition & 1 deletion llvm/test/CodeGen/X86/pr63692.ll
@@ -9,7 +9,7 @@ define void @prefault(ptr noundef %range_start, ptr noundef readnone %range_end)
 ; CHECK-NEXT: .p2align 4, 0x90
 ; CHECK-NEXT: .LBB0_1: # %while.body
 ; CHECK-NEXT: # =>This Inner Loop Header: Depth=1
-; CHECK-NEXT: #MEMBARRIER
+; CHECK-NEXT: lock orb $0, (%rdi)
 ; CHECK-NEXT: addq $4096, %rdi # imm = 0x1000
 ; CHECK-NEXT: cmpq %rsi, %rdi
 ; CHECK-NEXT: jb .LBB0_1