gnunet-developers

Re: [GNUnet-developers] Why old-school C?


From: Jeff Burdges
Subject: Re: [GNUnet-developers] Why old-school C?
Date: Fri, 31 Jul 2015 01:16:00 +0200

Just some thoughts on doing GNUnet code in Rust:

Rust is playing the "long game" in that they want to get things right.
We should probably adopt this attitude when writing GNUnet layers in
Rust, while staying more pragmatic when working in C.

What would be an example?  

I suspect one obstacle to GNUnet's wider adoption is its relative
disjointness from outside projects.  An important example is GNUnet
using its own scheduler and IO layers, as opposed to using, say, libevent,
which afaik does exactly the same thing.  A curious developer who sees a
project using libevent will think "Oh yeah, I'd love to learn that
anyways", while viewing GNUnet's scheduler, etc. as obstacles (less
transferable knowledge).

Rust has a very good FFI to C code, but writing safe wrappers around
unsafe C code can incur overhead.  In particular, Rust folk had issues
with libuv, yet another take on libevent.  And GNUnet utils appears
quite unsafe by Rust standards, even before worrying about leaked
sockets.  Instead, Rust has tools like:
  https://github.com/carllerche/mio
  https://github.com/dwrensha/gj
These are callback-based asynchronous IO toolkits, much like libevent,
GNUnet utils, etc., but they do it the "Rust way", i.e. safe and pretty.
And Rust's closures make callbacks comprehensible.
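To illustrate what "closures make callbacks comprehensible" buys you, here is a minimal sketch of closure-based event dispatch in the general style mio/gj encourage.  The names (Reactor, on_event, fire) are purely illustrative, not the actual mio or gj API:

```rust
// Toy event dispatcher: handlers are boxed closures.
struct Reactor {
    handlers: Vec<Box<dyn Fn(&str)>>,
}

impl Reactor {
    fn new() -> Self {
        Reactor { handlers: Vec::new() }
    }

    // Register a closure to be invoked on every event.
    fn on_event(&mut self, f: impl Fn(&str) + 'static) {
        self.handlers.push(Box::new(f));
    }

    // Dispatch an event to all registered handlers.
    fn fire(&self, event: &str) {
        for h in &self.handlers {
            h(event);
        }
    }
}

fn main() {
    let mut reactor = Reactor::new();
    let peer = String::from("peer-42");
    // The closure captures `peer` by move; the borrow checker guarantees
    // the captured state lives as long as the handler -- something a C
    // callback-plus-void-pointer API cannot enforce.
    reactor.on_event(move |ev| println!("{}: {}", peer, ev));
    reactor.fire("readable");
}
```

The point is that the handler's context travels with the closure, type-checked, instead of through a hand-managed `void *cls` as in GNUnet's scheduler.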

In the short term, there would be more work required to write GNUnet
code using mio, gj, etc.  Worse, mio does not support Windows yet,
depends upon unstable language features, and might still face tricky
networking hurdles.  In the long term, though, we'd have more people
actually wanting to write code with us.




On Wed, 2015-07-15 at 11:21 +0200, Jeff Burdges wrote:
> I'm a huge fan of Rust, and plan on using it some around GNUnet, but..
> 
> It's important to remember that Rust remains immature because they're
> attempting to do hard stuff well.  In particular, they have not yet
> settled on the "Rust way" to handle key material :
> https://github.com/rust-lang/rfcs/issues/766 
> 
> Rust's libsodium bindings automatically call sodium_memzero
> https://github.com/dnaq/sodiumoxide but do not use libsodium's
> allocators.  Also, Rust has not yet stabilized allocators
> https://github.com/rust-lang/rfcs/issues/538 so projects trying to do
> that remain messy.  Example: https://github.com/seb-m/tars
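For readers unfamiliar with what sodiumoxide automates here: the pattern is wiping key material when it leaves scope.  A minimal sketch, with an illustrative `SecretKey` type that is not the sodiumoxide API:

```rust
use std::ptr;

// A 32-byte key that wipes itself when it goes out of scope.
struct SecretKey([u8; 32]);

impl Drop for SecretKey {
    fn drop(&mut self) {
        for b in self.0.iter_mut() {
            // Volatile writes keep the optimizer from eliding the wipe,
            // roughly what sodium_memzero does on the C side.
            unsafe { ptr::write_volatile(b, 0) };
        }
    }
}

fn main() {
    {
        let _key = SecretKey([0xAB; 32]);
        // ... use the key ...
    } // Drop runs here; the buffer is zeroed before the memory is freed.
    println!("key wiped on drop");
}
```

What Drop cannot give you is libsodium's mlock'd, guard-paged allocations; that is the part blocked on the allocator RFC above.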
> 
> It's tricky to audit Rust code that employs cryptography until this 
> gets sorted out.  At the same time, one should not shy away from writing 
> Rust code that employs cryptography, but you should expect to interact 
> with the Rust language community rather closely, and the Rust code is
> going to require maintenance.  It's more work, not less.
> 
> On Thu, 2015-07-09 at 22:49 +0800, Andrew Cann wrote:
> >   * side channel attacks
> >     Some things, like the number of CPU cycles it takes to execute this
> >     decrypt() function, could in principle be modeled inside a programming
> >     language. I don't know if any of the dependently typed assembly 
> > languages
> >     let you do this.
> 
> We're not implementing new crypto primitives in GNUnet, but I'll respond 
> anyways : 
> 
> In principle maybe, but in practice the languages I know about use LLVM,
> including Rust, and LLVM has no plans to support this :
> https://moderncrypto.org/mail-archive/curves/2015/000466.html
> https://moderncrypto.org/mail-archive/curves/2015/000470.html
> Actually that whole thread is interesting.  
> 
> On Rust specifically, see slides 116-117 of this talk : 
> http://files.meetup.com/10495542/2014-12-18%20-%20Rust%20Cryptography.pdf
> Also, there is a project to produce constant-time code using Rust by
> avoiding LLVM, but it's quite immature.
> 
> At present, crypto primitives are commonly written in assembler for these 
> reasons!
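To make the LLVM problem concrete, here is the classic constant-time idiom, a branchless conditional select, sketched in Rust.  Nothing here is from the linked threads; it just shows the pattern whose preservation LLVM declines to guarantee:

```rust
// Select a or b without branching on the secret `cond` (0 or 1), so the
// choice does not influence timing -- at the source level.  LLVM remains
// free to rewrite this back into a branch, which is exactly the guarantee
// the threads above say it will not make.
fn ct_select(cond: u8, a: u32, b: u32) -> u32 {
    debug_assert!(cond <= 1);
    // 1 -> 0xFFFF_FFFF, 0 -> 0x0000_0000
    let mask = (cond as u32).wrapping_neg();
    (a & mask) | (b & !mask)
}

fn main() {
    assert_eq!(ct_select(1, 7, 9), 7);
    assert_eq!(ct_select(0, 7, 9), 9);
    println!("ok");
}
```

Since no compiler pass promises to keep the code branch-free, the only way to pin down the emitted instructions is to write them yourself, hence the assembler.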
> 
> >   * scalability/performance
> >     What if you could guarantee that your service will process any message 
> > of n
> >     bytes in O(n log(n)) time and memory. Or that a network of n available
> >     peers connected in such-and-such a topology can route any message in 
> > less
> >     than m hops. There are programming languages that could let you express
> >     these kinds of constraints and check them at compile time.
> 
> >   * disclosure via protocols, metadata leakage
> >     I'm not sure exactly what you have in mind, but if you want to prevent
> >     leakage there are type theories that let you enforce things like "the 
> > value
> >     in this variable at time t cannot affect the output of this function at
> >     any future time". 
> 
> This is like when people talk about doing the proof of the Four-Color
> Theorem or the Classification of Finite Simple Groups with computer-assisted
> theorem provers.  Any real analysis of scalability or metadata leakage is
> far beyond where foreseeable computer-assisted provers can help much.
> 
> Jeff
> 




