Re: [GNUnet-developers] Asynchronous IO in Rust

From: lurchi
Subject: Re: [GNUnet-developers] Asynchronous IO in Rust
Date: Thu, 23 Mar 2017 21:58:39 +0100

On Mi, 2017-03-22 at 02:43 +0100, Jeff Burdges wrote:
> I suppose qml-rust runs Qt in a separate thread.

Exactly. While it might be possible to integrate the Qt event loop with
another event loop via the C++ API, it's not possible with qml-rust
(yet?). So the plan is to communicate across threads using Qt's
signals and slots. That's what the developer of qml-rust suggests, too.

> > 
> > Did I understand it correctly that modifying the GNUnet scheduler
> > means
> > you're planning to not do IPC with the GNUnet services from Rust
> > anymore? That would be good news to me because it's easier to
> > maintain
> > the Rust bindings when they call the API functions (which are
> > supposed
> > to change rarely).
> No.  Ain't necessarily so easy to analyze the memory safety of any
> given
> C API that uses the GNUnet scheduler.  This might remain problematic
> even if the GNUnet scheduler were running on top of mio, tokio,
> etc.  

I'm not sure I understand. If all the client parts are reimplemented
in Rust and communicate directly with the services via IPC (as in
the current gnunet-rs implementation), what do we need the GNUnet
scheduler for? Everything client-side would then be scheduled directly
by the tokio event loop, wouldn't it?

I introduced the new scheduler API functions because I wanted to use
all the GNUnet APIs from Rust; the reason is better maintainability,
as I mentioned. I already have a Cargo project which links to some
GNUnet libraries and calls API functions, but no time to continue
working on it.

But maybe you have something else in mind. We should definitely talk
about it in a Mumble meeting, as Grothoff suggested.

