Re: [PATCH v2 09/11] rust/block: Add read support for block drivers


From: Paolo Bonzini
Subject: Re: [PATCH v2 09/11] rust/block: Add read support for block drivers
Date: Wed, 19 Feb 2025 07:11:07 +0100
User-agent: Mozilla Thunderbird

On 2/18/25 19:20, Kevin Wolf wrote:
+    /// The described blocks are stored in a child node.
+    Data {
+        /// Child node in which the data is stored
+        node: Arc<BdrvChild>,

Having Arc<> here shouldn't be necessary, since the BdrvChild is already reference counted. Because the code is called under the bdrv_graph_rdlock, there is no risk of the BdrvChild going away, and you can just make it a &BdrvChild.
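Just to illustrate, something like this (sketch only; the lifetime parameter on Mapping and the offset field are made up for the example):

pub enum Mapping<'a> {
    /// The described blocks are stored in a child node.
    Data {
        /// Child node in which the data is stored; a plain borrow is
        /// enough because the caller holds bdrv_graph_rdlock for the
        /// whole request.
        node: &'a BdrvChild,
        /// Offset of the data in the child node (hypothetical field)
        offset: u64,
    },
}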

Likewise, even BochsImage should not need a standard Rust Arc<BdrvChild>. Instead, you would add your own block::Arc<BdrvChild> and map Clone/Drop to bdrv_ref/bdrv_unref. Then BochsImage can use block::Arc<BdrvChild>; this also makes it even clearer that Mapping should not use the Arc<> wrapper, because bdrv_ref is GLOBAL_STATE_CODE() and would abort if run from a non-main thread.
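Roughly along these lines, as a sketch only (the BlockRefCounted trait, the bindings paths, and taking the reference on child.bs are assumptions, not necessarily the right final shape):

use std::ptr::NonNull;

pub trait BlockRefCounted {
    fn block_ref(&self);
    fn block_unref(&self);
}

impl BlockRefCounted for bindings::BdrvChild {
    fn block_ref(&self) {
        // bdrv_ref() is GLOBAL_STATE_CODE(): cloning from a non-main
        // thread would abort.
        unsafe { bindings::bdrv_ref(self.bs) }
    }
    fn block_unref(&self) {
        unsafe { bindings::bdrv_unref(self.bs) }
    }
}

/// block::Arc: ties the C reference count to Rust ownership.
pub struct Arc<T: BlockRefCounted>(NonNull<T>);

impl<T: BlockRefCounted> Clone for Arc<T> {
    fn clone(&self) -> Self {
        unsafe { self.0.as_ref() }.block_ref();
        Arc(self.0)
    }
}

impl<T: BlockRefCounted> Drop for Arc<T> {
    fn drop(&mut self) {
        unsafe { self.0.as_ref() }.block_unref();
    }
}

A Deref impl (and a constructor that takes the initial reference) would be needed on top, but the above is the core idea.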

That said, I'm not sure yet how to encode "the block graph lock must be taken" into the types. That has to be taken into account too, sooner or later. You probably have a lot of items like this one, so it would be nice to add TODO comments wherever you can.

(This boundary is where you get an unholy mix of C and Rust concepts. It takes a while to get used to, and it teaches you a lot about the parts of Rust that you usually take for granted. So while it's not hard, it is unusual, and it does feel like oil and water in the beginning.)

+) -> std::os::raw::c_int {
+    let s = unsafe { &mut *((*bs).opaque as *mut D) };

&mut is not safe here (don't worry, we went through the same thing for devices :)). You can only get an & unless you go through an UnsafeCell (or something that contains one). You'll need to split the mutable and immutable parts of BochsImage into separate structs, and embed the former into the latter. Long term there should be a qemu_api::coroutine::CoMutex<>, but for the short term you can just use a BqlRefCell<> or a standard Rust RefCell<>. You can see how PL011Registers is included in PL011State in rust/hw/char/pl011/src/device.rs, and a small intro is also present in docs/devel/rust.rst.

Anyway, the BdrvChild needs to remain in BochsImage, so that it is accessible outside the CoMutex critical section and can be placed into the Mapping.
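Concretely, the shape could look like this (field names are made up, and a plain RefCell stands in for the eventual CoMutex):

use std::cell::RefCell;

struct BochsImageState {
    /// Hypothetical example of per-image mutable state
    catalog_bitmap: Vec<u32>,
}

struct BochsImage {
    /// Stays directly in BochsImage so it can be reached outside the
    /// critical section and placed into a Mapping
    file: Arc<BdrvChild>,    // the block::Arc sketched above
    /// Interior mutability: callbacks borrow this at runtime instead of
    /// needing &mut BochsImage
    state: RefCell<BochsImageState>,
}

The callbacks then take &BochsImage and call s.state.borrow_mut() only around the code that actually mutates.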

+    let mut offset = offset as u64;
+    let mut bytes = bytes as u64;
+
+    while bytes > 0 {
+        let req = Request::Read { offset, len: bytes };
+        let mapping = match qemu_co_run_future(s.map(&req)) {
+            Ok(mapping) => mapping,
+            Err(e) => return -i32::from(Errno::from(e).0),

This is indeed not great, but it's partly so because you're doing a lot (for some definition of "a lot") in the function. While it would be possible to use a trait, I wrote the API thinking of minimal glue code that only does the C<->Rust conversion.

In this case, because you have a lot more code than just a call into the BlockDriver trait, you'd have something like

fn bdrv_co_preadv_part(
    bs: &dyn BlockDriver,
    offset: i64,
    bytes: i64,
    qiov: &bindings::QEMUIOVector,
    mut qiov_offset: usize,
    flags: bindings::BdrvRequestFlags) -> io::Result<()>

and then a wrapper (e.g. rust_co_preadv_part?) that only does

   let s = unsafe { &mut *((*bs).opaque as *mut D) };
   let qiov = unsafe { &*qiov };
   let result = bdrv_co_preadv_part(s, offset, bytes,
         qiov, qiov_offset, flags);
   errno::into_negative_errno(result)

This, by the way, also has code size benefits, because &dyn, unlike generics, does not result in duplicated code for every driver.
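Illustrative only (neither function is from the series):

fn preadv_generic<D: BlockDriver>(s: &D) { /* monomorphized: one copy per driver type D */ }
fn preadv_dyn(s: &dyn BlockDriver) { /* compiled once, dispatched through the vtable */ }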

For now, I'd rather keep into_negative_errno() this way, to keep an eye on other cases where you have an io::Result<()>. Since Rust rarely has Error objects that aren't part of a Result, it stands to reason that the same is true of QEMU code, but if I'm wrong then it can be changed.
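For reference, a sketch of what I have in mind for the helper (assuming Errno is the same newtype used in the hunk above, with a From<io::Error> impl; the real signature may end up different):

use std::io;
use std::os::raw::c_int;

pub fn into_negative_errno(result: io::Result<()>) -> c_int {
    match result {
        Ok(()) => 0,
        // the same conversion the patch currently open-codes
        Err(err) => -c_int::from(Errno::from(err).0),
    }
}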

Paolo



