
Commit 9f3ffd4

Marc Zyngier authored and Sasha Levin committed
arm64: Fix sampling the "stable" virtual counter in preemptible section
[ Upstream commit e5cb94b ] Ben reports that when running with CONFIG_DEBUG_PREEMPT, using __arch_counter_get_cntvct_stable() results in well deserves warnings, as we access a per-CPU variable without preemption disabled. Fix the issue by disabling preemption on reading the counter. We can probably do a lot better by not disabling preemption on systems that do not require horrible workarounds to return a valid counter value, but this plugs the issue for the time being. Fixes: 29cc0f3 ("arm64: Force the use of CNTVCT_EL0 in __delay()") Reported-by: Ben Horgan <ben.horgan@arm.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/aZw3EGs4rbQvbAzV@e134344.arm.com Tested-by: Ben Horgan <ben.horgan@arm.com> Tested-by: André Draszik <andre.draszik@linaro.org> Signed-off-by: Will Deacon <will@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
1 parent 1147ab1 commit 9f3ffd4

File tree

1 file changed: +5 −1 lines changed


arch/arm64/lib/delay.c

Lines changed: 5 additions & 1 deletion
@@ -32,7 +32,11 @@ static inline unsigned long xloops_to_cycles(unsigned long xloops)
  * Note that userspace cannot change the offset behind our back either,
  * as the vcpu mutex is held as long as KVM_RUN is in progress.
  */
-#define __delay_cycles() __arch_counter_get_cntvct_stable()
+static cycles_t notrace __delay_cycles(void)
+{
+	guard(preempt_notrace)();
+	return __arch_counter_get_cntvct_stable();
+}
 
 void __delay(unsigned long cycles)
 {

0 commit comments