Fix a couple of bugs in MMIO HAL code.
By the looks of it, this path was only triggered under specific
circumstances (namely, when xAPIC is used instead of x2APIC, as the
latter uses MSRs instead of MMIO).

First bug: the offsets were wrong. Even though I note in the comments
that these are not _byte_ offsets but rather `u32` offsets, I promptly
forgot to heed my own warning and treated the offsets as bytes. Oops.
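
To illustrate the unit mismatch (a minimal sketch, not the stage0 code;
`read_reg` and `read_svr` are made-up helpers):

    use core::mem::size_of;

    /// `base` points at a block of u32 MMIO registers. `pointer::add` scales
    /// by size_of::<u32>(), so `u32_offset` must count u32 slots, not bytes.
    unsafe fn read_reg(base: *const u32, u32_offset: usize) -> u32 {
        base.add(u32_offset).read_volatile()
    }

    /// The xAPIC spurious-interrupt vector register sits at byte offset 0xF0,
    /// i.e. u32 slot 0xF0 / 4 = 0x3C. Passing 0xF0 directly -- the bug --
    /// would read 0x2D0 bytes past the intended register.
    unsafe fn read_svr(apic_base: *const u32) -> u32 {
        read_reg(apic_base, 0xF0 / size_of::<u32>())
    }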

Second bug: when deallocating the MMIO memory range, we need to restore
the page table entry for the vaddr range we used, and the reconstructed
one was wrong (I think the encrypted bit was AWOL). Instead of trying to
rebuild the PTE from scratch, we now cache the old entry and restore it
on drop.
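
In miniature, the pattern is this (a standalone sketch; `Pte` is a
made-up stand-in for x86_64's `PageTableEntry`, and the real fix is in
the mmio.rs diff below):

    /// Toy page table entry; the real one carries flags, like the SEV
    /// encrypted bit, that are easy to lose when rebuilding from scratch.
    #[derive(Clone, Debug, PartialEq)]
    struct Pte(u64);

    /// Map: cache whatever was in the slot before clobbering it.
    fn map(slot: &mut Pte, new: Pte) -> Pte {
        let old = slot.clone();
        *slot = new;
        old
    }

    /// Unmap: restore the cached entry verbatim -- no flags to get wrong.
    fn unmap(slot: &mut Pte, old: Pte) {
        *slot = old;
    }

    fn main() {
        // Pretend bit 51 is the encrypted bit and was set in the old entry.
        let mut slot = Pte(1 << 51 | 0x63);
        let saved = map(&mut slot, Pte(0xFEE0_0063));
        unmap(&mut slot, saved);
        assert_eq!(slot, Pte(1 << 51 | 0x63));
    }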

Third bug: there was an errant `todo!()` that should never have been
committed.

We need better testing around this, as this should have been caught
earlier.

Change-Id: I9721a7ce7f3298631c8005bf0d8cee8ced1f33af
andrisaar committed Jul 25, 2024
1 parent 7b6af53 commit f568c62
Showing 2 changed files with 19 additions and 20 deletions.
33 changes: 15 additions & 18 deletions stage0/src/hal/base/mmio.rs
@@ -19,49 +19,51 @@ use core::alloc::Layout;
 
 use x86_64::{
     instructions::tlb::flush_all,
-    structures::paging::{PageSize, PageTableFlags, Size2MiB},
+    structures::paging::{page_table::PageTableEntry, PageSize, PageTableFlags, Size2MiB},
     PhysAddr, VirtAddr,
 };
 
 use crate::paging::PAGE_TABLE_REFS;
 
 pub struct Mmio<S: PageSize> {
-    pub base_address: x86_64::PhysAddr,
+    pub base_address: PhysAddr,
     layout: Layout,
-    mmio_memory: *mut u32,
+    mmio_memory: VirtAddr,
+    old_pte: PageTableEntry,
     phantom: core::marker::PhantomData<S>,
 }
 
 impl<S: PageSize> Mmio<S> {
-    pub unsafe fn new(base_address: x86_64::PhysAddr) -> Self {
+    pub unsafe fn new(base_address: PhysAddr) -> Self {
         // Technically we only need a chunk of virtual memory (as we remap the physical
         // memory backing it anyway), but the easiest way to get a chunk of virtual
         // memory is just to allocate it using the normal mechanisms. This means there
         // will be a chunk of physical memory sitting unmapped and unused, but that
         // should not matter for our use case.
         let layout = Layout::from_size_align(S::SIZE as usize, S::SIZE as usize).unwrap();
-        let mmio_memory = alloc(layout).cast();
+        let mmio_memory = VirtAddr::from_ptr(alloc(layout));
 
         // Remap our mmio_memory to base_address.
-        if mmio_memory as u64 > Size2MiB::SIZE {
+        if mmio_memory.as_u64() > Size2MiB::SIZE {
             panic!("MMIO memory virtual address does not land in the first page table");
         }
         let mut tables = PAGE_TABLE_REFS.get().unwrap().lock();
-        tables.pt_0[VirtAddr::from_ptr(mmio_memory).p1_index()].set_addr(
+        let old_pte = tables.pt_0[mmio_memory.p1_index()].clone();
+        tables.pt_0[mmio_memory.p1_index()].set_addr(
             base_address,
             PageTableFlags::PRESENT | PageTableFlags::WRITABLE | PageTableFlags::NO_CACHE,
         );
         flush_all();
 
-        Mmio { base_address, layout, mmio_memory, phantom: core::marker::PhantomData }
+        Mmio { base_address, layout, mmio_memory, old_pte, phantom: core::marker::PhantomData }
     }
 
     pub fn read_u32(&self, offset: usize) -> u32 {
         // Safety:
         // - offset is aligned to u32
         // - we've checked it's within the page size
         // - when calling new() we were promised the memory is valid
-        unsafe { self.mmio_memory.add(offset).read_volatile() }
+        unsafe { self.mmio_memory.as_ptr::<u32>().add(offset).read_volatile() }
     }
 
     pub unsafe fn write_u32(&mut self, offset: usize, value: u32) {
@@ -70,24 +72,19 @@ impl<S: PageSize> Mmio<S> {
         // - we've checked it's within the page size
         // - when calling new() we were promised the memory is valid
         // - the caller needs to guarantee the value makes sense
-        self.mmio_memory.add(offset).write_volatile(value)
+        self.mmio_memory.as_mut_ptr::<u32>().add(offset).write_volatile(value)
     }
 }
 
 impl<S: PageSize> Drop for Mmio<S> {
     fn drop(&mut self) {
-        // Restore the memory mapping and deallocate our chunk of memory. This assumes
-        // identity mapping, which we generally have in stage0.
+        // Restore the memory mapping and deallocate our chunk of memory.
         let mut tables = PAGE_TABLE_REFS.get().unwrap().lock();
-        tables.pt_0[VirtAddr::from_ptr(self.mmio_memory).p1_index()].set_addr(
-            PhysAddr::new(self.mmio_memory as u64),
-            PageTableFlags::PRESENT | PageTableFlags::WRITABLE,
-        );
+        tables.pt_0[self.mmio_memory.p1_index()] = self.old_pte.clone();
         flush_all();
 
         // Safety: we've allocated the memory from the global allocator with `alloc` in
         // `BaseMMIO::new` with the same layout.
-        unsafe { dealloc(self.mmio_memory.cast(), self.layout) };
-        todo!()
+        unsafe { dealloc(self.mmio_memory.as_mut_ptr(), self.layout) };
     }
 }
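
With the fixed offsets, callers index registers in u32 slots. For
example (hypothetical usage; assumes `apic` is an `Mmio` mapped over the
xAPIC page at its usual 0xFEE0_0000 base):

    use core::mem::size_of;

    // The xAPIC ID register lives at byte offset 0x20, i.e. u32 slot 8.
    let apic_id = apic.read_u32(0x20 / size_of::<u32>());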
6 changes: 4 additions & 2 deletions stage0/src/hal/mod.rs
@@ -46,7 +46,8 @@ impl<S: PageSize> Mmio<S> {
     ///
     /// Panics if the read would go outside the memory range.
     pub fn read_u32(&self, offset: usize) -> u32 {
-        if (offset * size_of::<u32>()) >= S::SIZE as usize {
+        let offset = offset * size_of::<u32>();
+        if offset >= S::SIZE as usize {
             panic!("invalid MMIO access for read: offset would read beyond memory boundary");
         }
         #[cfg(feature = "sev")]
@@ -65,7 +66,8 @@ impl<S: PageSize> Mmio<S> {
     /// The caller needs to guarantee that the value is valid for the register
     /// it is written to.
     pub unsafe fn write_u32(&mut self, offset: usize, value: u32) {
-        if (offset * size_of::<u32>()) >= S::SIZE as usize {
+        let offset = offset * size_of::<u32>();
+        if offset >= S::SIZE as usize {
             panic!("invalid MMIO access for write: offset would write beyond memory boundary");
         }
         #[cfg(feature = "sev")]
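
A quick sanity check of the new bounds arithmetic (a standalone sketch
mirroring the checks above, assuming a `Size2MiB` mapping; not the
stage0 code):

    use core::mem::size_of;

    const SIZE: usize = 2 * 1024 * 1024; // Size2MiB::SIZE in bytes

    /// Mirrors the fixed check: scale the u32 offset to bytes, then compare.
    fn in_bounds(u32_offset: usize) -> bool {
        u32_offset * size_of::<u32>() < SIZE
    }

    fn main() {
        assert!(in_bounds(524_287));  // last u32 slot of the 2 MiB page
        assert!(!in_bounds(524_288)); // one slot past the end -> would panic
    }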
