Re: [Qemu-devel] [PATCH v2 0/2] Add dbus-vmstate


From: Dr. David Alan Gilbert
Subject: Re: [Qemu-devel] [PATCH v2 0/2] Add dbus-vmstate
Date: Fri, 23 Aug 2019 16:14:48 +0100
User-agent: Mutt/1.12.1 (2019-06-15)

* Daniel P. Berrangé (address@hidden) wrote:
> On Fri, Aug 23, 2019 at 03:56:34PM +0100, Dr. David Alan Gilbert wrote:
> > * Daniel P. Berrangé (address@hidden) wrote:
> > > On Fri, Aug 23, 2019 at 03:26:02PM +0100, Dr. David Alan Gilbert wrote:
> > > > * Daniel P. Berrangé (address@hidden) wrote:
> > > > > On Fri, Aug 23, 2019 at 03:09:48PM +0100, Dr. David Alan Gilbert wrote:
> > > > > > * Marc-André Lureau (address@hidden) wrote:
> > > > > > > Hi
> > > > > > > 
> > > > > > > On Fri, Aug 23, 2019 at 5:00 PM Dr. David Alan Gilbert
> > > > > > > <address@hidden> wrote:
> > > > > > > >
> > > > > > > > * Daniel P. Berrangé (address@hidden) wrote:
> > > > > > > >
> > > > > > > > <snip>
> > > > > > > >
> > > > > > > > > This means QEMU still has to iterate over every single client
> > > > > > > > > on the bus to identify them. If you're doing that, there's
> > > > > > > > > no point in owning a well known service at all. Just iterate
> > > > > > > > > over the unique bus names and look for the exported object
> > > > > > > > > path /org/qemu/VMState
> > > > > > > > >
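> > > > > > > > > Roughly like this with GDBus (untested sketch; a robust
> > > > > > > > > version would also check the returned XML for whatever
> > > > > > > > > interface it expects to find there):
> > > > > > > > >
> > > > > > > > > /* Untested sketch: probe every unique name on an already-open
> > > > > > > > >  * private bus connection for /org/qemu/VMState. */
> > > > > > > > > #include <gio/gio.h>
> > > > > > > > >
> > > > > > > > > void find_vmstate_helpers(GDBusConnection *conn)
> > > > > > > > > {
> > > > > > > > >     GVariant *names = g_dbus_connection_call_sync(
> > > > > > > > >         conn, "org.freedesktop.DBus", "/org/freedesktop/DBus",
> > > > > > > > >         "org.freedesktop.DBus", "ListNames", NULL,
> > > > > > > > >         G_VARIANT_TYPE("(as)"), G_DBUS_CALL_FLAGS_NONE,
> > > > > > > > >         1000, NULL, NULL);
> > > > > > > > >     GVariantIter *iter;
> > > > > > > > >     const gchar *name;
> > > > > > > > >
> > > > > > > > >     if (!names) {
> > > > > > > > >         return;
> > > > > > > > >     }
> > > > > > > > >     g_variant_get(names, "(as)", &iter);
> > > > > > > > >     while (g_variant_iter_loop(iter, "&s", &name)) {
> > > > > > > > >         if (name[0] != ':') {
> > > > > > > > >             continue;   /* only unique (per-connection) names */
> > > > > > > > >         }
> > > > > > > > >         /* does this peer export the vmstate object path? */
> > > > > > > > >         GVariant *xml = g_dbus_connection_call_sync(
> > > > > > > > >             conn, name, "/org/qemu/VMState",
> > > > > > > > >             "org.freedesktop.DBus.Introspectable", "Introspect",
> > > > > > > > >             NULL, G_VARIANT_TYPE("(s)"), G_DBUS_CALL_FLAGS_NONE,
> > > > > > > > >             1000, NULL, NULL);
> > > > > > > > >         if (xml) {
> > > > > > > > >             g_print("possible vmstate helper at %s\n", name);
> > > > > > > > >             g_variant_unref(xml);
> > > > > > > > >         }
> > > > > > > > >     }
> > > > > > > > >     g_variant_iter_free(iter);
> > > > > > > > >     g_variant_unref(names);
> > > > > > > > > }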
> > > > > > > >
> > > > > > > > Not knowing anything about DBus security, I want to ask how do
> > > > > > > > we handle security here?
> > > > > > > 
> > > > > > > First of all, we are talking about cooperative processes, and
> > > > > > > having a specific bus for each qemu instance. So some amount of
> > > > > > > security/trust is already assumed.
> > > > > > 
> > > > > > Some, but we need to keep it as limited as possible; for example two
> > > > > > reasons for having separate processes both come down to security:
> > > > > > 
> > > > > >   a) vtpm - however screwy the qemu is, you can never get to the
> > > > > > keys in the vtpm
> > > > > 
> > > > > Processes connected to dbus can only call the DBus APIs that vtpm
> > > > > actually exports.  The vtpm should simply *not* export a DBus
> > > > > API that allows anything to fetch the keys.
> > > > > 
> > > > > If it did want to export APIs for fetching keys, then we would
> > > > > have to ensure suitable dbus/selinux policy was created to
> > > > > prevent unwarranted access.
> > > > 
> > > > This was really just one example of where the security/trust isn't
> > > > assumed; however a more concrete case is migration of a vtpm, and even
> > > > though it's probably an encrypted blob you still don't want some other
> > > > device to grab the migration data - or to, say, reinitialise the vtpm.
> > > 
> > > That can be dealt with by the dbus security policies, provided
> > > you either run the vtpm as a different user ID from the other
> > > untrustworthy helpers, or use a different selinux context for
> > > vtpm. You can then express that only the user that QEMU is
> > > running under can talk to vtpm over dbus.
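> > >
> > > e.g. something along these lines inside the private bus's <busconfig>
> > > (untested; the org.qemu.VTPM1 name and the "qemu" user are illustrative):
> > >
> > >   <!-- deny by default, then let only the qemu user talk to the vtpm -->
> > >   <policy context="default">
> > >     <deny send_destination="org.qemu.VTPM1"/>
> > >   </policy>
> > >   <policy user="qemu">
> > >     <allow send_destination="org.qemu.VTPM1"/>
> > >   </policy>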
> > 
> > The need for the extra user ID or selinux context is a pain;
> > but probably warranted for the vTPM; in general though some of this
> > exists because of the choice of DBus and wouldn't be a problem for
> > something that had a point-to-point socket it sent everything over.
> 
> NB be careful to use s/DBus/DBus bus/
> 
> DBus the protocol is fine to be used in a point-to-point socket
> scenario - the use of the bus is strictly optional.
> 
> If all communication we expect is exclusively Helper <-> QEMU,
> then I'd argue in favour of dbus in point-to-point mode.
> 
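> (For reference, the helper end of a p2p connection is tiny - untested
> sketch, and the socket path below is made up:)
>
> /* Untested sketch: helper end of a D-Bus peer-to-peer connection,
>  * no bus daemon involved; the peer (e.g. QEMU) listens on the socket. */
> #include <gio/gio.h>
>
> int main(void)
> {
>     GError *err = NULL;
>     GDBusConnection *conn = g_dbus_connection_new_for_address_sync(
>         "unix:path=/run/qemu-vmstate.sock",
>         G_DBUS_CONNECTION_FLAGS_AUTHENTICATION_CLIENT,
>         NULL, NULL, &err);
>
>     if (!conn) {
>         g_printerr("connect failed: %s\n", err->message);
>         return 1;
>     }
>     /* No destination names in p2p mode; the helper would now export
>      * its /org/qemu/VMState object on this connection and serve QEMU. */
>     g_object_unref(conn);
>     return 0;
> }
>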
> The use cases Stefan brought up for virtiofsd, though, are what
> I think make the idea of using the bus relevant. It is the
> desire to allow online control/mgmt of the helper, which
> introduces a 3rd party which isn't QEMU - instead either libvirt
> or a standalone admin/debugging tool. With multiple parties
> involved I think the bus becomes relevant.
> 
> With p2p mode you could have 2 dbus sockets: one for Helper <-> QEMU
> and another for Helper <-> libvirt/debugging, but
> this isn't an obvious security win over using the bus, as you
> now need different access rules for each of the p2p sockets
> to say who can connect to which socket. 

Right; point-to-point doesn't worry me much as long as we're careful;
it's now, when we're suddenly proposing something much more general,
that I think we need to start being really careful.

> > > Where I think you could have problems is if you needed finer
> > > grained control with selinux. eg if vtpm exports 2 different
> > > services, you can't allow access to one service, but forbid
> > > access to the other service.
> > > 
> > > > > >   b) virtio-gpu, loads of complex GPU code that can't break the main
> > > > > > qemu process.
> > > > > 
> > > > > That's no problem - if virtio-gpu crashes, it disappears from the dbus
> > > > > bus, but everything else keeps running.
> > > > 
> > > > Crashing is the easy case; assume it's malicious and you don't want it
> > > > getting to, say, a storage device provided by another vhost-user device.
> > > 
> > > If we assume that the 2 processes can't communicate / access each
> > > other outside DBus, then the attack avenues added by use of dbus
> > > are most likely either:
> > > 
> > >  - invoking some DBus method that should not be allowed due
> > >    to incomplete dbus security policy. 
> > > 
> > >  - finding a crash in a dbus client library that you can somehow
> > >    exploit to get remote code execution in the separate process
> > > 
> > >    I won't claim this is impossible, but I think it helps to be
> > >    using a standard, widely used battle tested RPC impl, rather
> > >    than a home grown RPC protocol.
> > 
> > It's only the policy case I worry about; and my point here is that if we
> > decide to use dbus then we have to think properly about security and
> > define things properly.
> > 
> > > 
> > > 
> > > > > > > But if necessary, dbus can enforce policies on who is allowed to
> > > > > > > own a name, or to send/receive messages. As far as I know, these
> > > > > > > are mostly user/group policies.
> > > > > > > 
> > > > > > > But there are also SELinux checks on send_msg and acquire_svc (see
> > > > > > > dbus-daemon(1))
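> > > > > > >
> > > > > > > e.g. a fragment like this inside the bus config (illustrative and
> > > > > > > untested; the org.qemu.Helper1 name and "helper" user are made up):
> > > > > > >
> > > > > > >   <!-- only the helper's uid may own its well-known name -->
> > > > > > >   <policy context="default">
> > > > > > >     <deny own="org.qemu.Helper1"/>
> > > > > > >   </policy>
> > > > > > >   <policy user="helper">
> > > > > > >     <allow own="org.qemu.Helper1"/>
> > > > > > >   </policy>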
> > > > > > 
> > > > > > But how does something like SELinux interact with a private dbus 
> > > > > > rather than the system dbus?
> > > > > 
> > > > > There are already two dbus-daemons on each host - the system one and
> > > > > the session one, and they get different selinux contexts,
> > > > > system_dbus_t and unconfined_dbus_t.
> > > > > 
> > > > > Since libvirt would be responsible for launching these private dbus
> > > > > daemons it would be easy to make it run svirt_dbus_t for example.
> > > > > Actually it would be svirt_dbus_t:s0:cNNN,cMMM to get uniqueness
> > > > > per VM.
> > > > > 
> > > > > Will of course require us to talk to the SELinux maintainers to
> > > > > get some sensible policy rules created.
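> > > > >
> > > > > e.g. roughly (untested; the context string, category pair and config
> > > > > path are made up, and policy rules would be needed to allow the
> > > > > transition):
> > > > >
> > > > > /* Untested sketch: launch a per-VM dbus-daemon under its own
> > > > >  * SELinux context, much as libvirt labels other per-VM processes. */
> > > > > #include <selinux/selinux.h>
> > > > > #include <stdio.h>
> > > > > #include <unistd.h>
> > > > >
> > > > > int main(void)
> > > > > {
> > > > >     /* label for the child; c12,c34 stands in for the VM's pair */
> > > > >     if (setexeccon("system_u:system_r:svirt_dbus_t:s0:c12,c34") < 0) {
> > > > >         perror("setexeccon");
> > > > >         return 1;
> > > > >     }
> > > > >     execlp("dbus-daemon", "dbus-daemon",
> > > > >            "--config-file=/etc/qemu/vm-dbus.conf", "--nofork", NULL);
> > > > >     perror("execlp");
> > > > >     return 1;
> > > > > }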
> > > > 
> > > > This all relies on SELinux and running privileged qemu/vhost-user pairs;
> > > > needing to do that purely to enforce security seems wrong.
> > > 
> > > Compare to an alternative bus-less solution where each helper has
> > > a direct UNIX socket connection to QEMU.
> > > 
> > > If two helpers are running as the same user ID, then they can still
> > > directly attack each other via things like ptrace or /proc/$PID/mem,
> > > unless you've used SELinux to isolate them, or run each as a distinct
> > > user ID.  If you do the latter, then we can still easily isolate
> > > them using dbus.
> > 
> > You can lock those down pretty easily though.
> 
> How were you thinking ?
> 
> If you're not using SELinux or separate user IDs, then AFAICT you've
> got a choice of using seccomp or containers.  seccomp is really hard
> to get a useful policy out of with QEMU, and using containers for
> each helper process adds a level of complexity worse than selinux
> or separate user IDs, so isn't an obvious win over using dbus.

You can just drop CAP_SYS_PTRACE on the whole lot for that;
I thought there was something for /proc/.../mem as well.
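
(Roughly something like this early in each helper - untested, and the
bounding-set drop needs CAP_SETPCAP, so in practice it belongs in
whatever privileged thing launches the helper:)

/* Untested sketch: stop same-uid peers from ptracing this process or
 * reading its /proc/<pid>/mem. */
#include <sys/prctl.h>
#include <linux/capability.h>
#include <stdio.h>

int main(void)
{
    /* Drop CAP_SYS_PTRACE from the bounding set (needs CAP_SETPCAP). */
    if (prctl(PR_CAPBSET_DROP, CAP_SYS_PTRACE, 0, 0, 0) < 0) {
        perror("PR_CAPBSET_DROP");
    }
    /* A non-dumpable process can't be ptraced or have /proc/<pid>/mem
     * read by other unprivileged processes, even with the same uid. */
    if (prctl(PR_SET_DUMPABLE, 0, 0, 0, 0) < 0) {
        perror("PR_SET_DUMPABLE");
    }
    /* ... the helper's real main loop would start here ... */
    return 0;
}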

Dave

> Regards,
> Daniel
> -- 
> |: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
> |: https://libvirt.org         -o-            https://fstop138.berrange.com :|
> |: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|
--
Dr. David Alan Gilbert / address@hidden / Manchester, UK


