Re: [RFC] Implementing RLIMIT_AS
From: Diego Nieto Cid
Subject: Re: [RFC] Implementing RLIMIT_AS
Date: Fri, 20 Dec 2024 11:17:17 -0300
Hello,
On Fri, Dec 20, 2024 at 12:18:36PM +0300, Sergey Bugaev wrote:
> On Thu, Dec 19, 2024 at 6:56 PM Diego Nieto Cid <dnietoc@gmail.com> wrote:
> > I thought of adding an RPC call that sets the `hard_limit` field
> > which, I guess, should be located among the other task related RPCs.
>
> Indeed, this could be something like
>
> routine vm_set_size_limit(target_task : vm_task_t;
> size_limit : vm_size_t);
>
> but as Luca says, you need to consider who's allowed to increase and
> decrease the limit. Yes, having access to host priv port is basically
> equivalent to being Unix root (although it takes some steps to gain
> actual Hurd UID of 0 once you get access to the host priv port in
> gnumach exploits).
>
> I suppose the default state should be 'unlimited', so either
> (vm_size_t) -1, or VM_MAX_USER_ADDRESS - VM_MIN_USER_ADDRESS.
Yes, the default should be 'unlimited'.
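To make the "unlimited by default" idea concrete, here is a minimal, self-contained sketch of what the limit field and the proposed setter could look like. All identifiers here (VM_SIZE_LIMIT_NONE, size_limit, the struct layout, the privilege flag) are invented for illustration and are not the actual gnumach API; the real routine would take a vm_task_t and check the host priv port as discussed above.

```c
#include <assert.h>
#include <stdint.h>

typedef uintptr_t vm_size_t;

/* Hypothetical sentinel meaning "no limit configured". */
#define VM_SIZE_LIMIT_NONE ((vm_size_t) -1)

/* Simplified stand-in for the kernel's vm_map structure. */
struct vm_map {
    vm_size_t size;        /* bytes of address space currently mapped */
    vm_size_t size_limit;  /* hypothetical RLIMIT_AS-style cap */
};

/* A new map starts unlimited, mirroring the default discussed above. */
static void vm_map_init_limit(struct vm_map *map)
{
    map->size_limit = VM_SIZE_LIMIT_NONE;
}

/* Sketch of the proposed vm_set_size_limit semantics: unprivileged
 * callers may only lower the limit; raising it again would require
 * something like the host priv port. */
static int vm_set_size_limit(struct vm_map *map, vm_size_t new_limit,
                             int caller_is_privileged)
{
    if (new_limit > map->size_limit && !caller_is_privileged)
        return -1;  /* would stand in for a KERN_* error code */
    map->size_limit = new_limit;
    return 0;       /* KERN_SUCCESS */
}
```

Since VM_SIZE_LIMIT_NONE is the largest vm_size_t value, the "only decrease unless privileged" rule falls out of a single unsigned comparison.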
> Also make sure to avoid limiting the kernel's own maps.
>
Oh right, I need to check for the kernel map. Even though the default
means no limit, it may still be worth checking at the enforcement point
whether the allocation targets the kernel map or not.
> > + /* TODO only hard limits are enforced */
> > + /* TODO does getting rlim_t here make sense? */
> > + vm_size_t hard_limit; /* hard limit as set by RLIMIT_AS */
>
> I'd name this max_size, or size_limit perhaps. And maybe don't
> reference Unix concepts (rlim_t, RLIM_INFINITY, RLIMIT_AS) this
> blatantly :)
>
OK :)
>
> The checking should happen at vm_map level, rather than vm_user
> (vm_user is wrappers around vm_map API that are exported from Mach
> for usage from userland), since there are ways to get VM entries
> allocated in a map without calling vm_allocate or vm_map, such as
> receiving out-of-line memory in a message. You could look for all the
> places where map->size is increased, and add checks for not exceeding
> the limit. Of course, do the checks and bail out with KERN_NO_SPACE
> before any changes to the map are made, i.e. well before map->size
> gets increased. This would also be correct wrt locking the map.
>
OK, I placed the checks in vm_user because that looked more understandable,
but I'll make an effort to move them into the vm_map code.
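A rough sketch of what such a vm_map-level check might look like, following the constraints from the thread: run before map->size is increased (so a failure leaves the map untouched and locking stays correct), never limit the kernel's own map, and fail with KERN_NO_SPACE. All names besides KERN_SUCCESS/KERN_NO_SPACE are simplified stand-ins, not the real gnumach identifiers.

```c
#include <assert.h>
#include <stdint.h>

typedef uintptr_t vm_size_t;
#define VM_SIZE_LIMIT_NONE ((vm_size_t) -1)

/* Simplified stand-in for the kernel's vm_map; illustrative only. */
struct vm_map {
    vm_size_t size;        /* bytes currently mapped */
    vm_size_t size_limit;  /* cap, or VM_SIZE_LIMIT_NONE */
    int is_kernel_map;     /* the kernel's own map is never limited */
};

enum { KERN_SUCCESS = 0, KERN_NO_SPACE = 3 };

/* Intended to be called with the map locked, before map->size is
 * increased, so bailing out here leaves the map unchanged. */
static int vm_map_enforce_limit(const struct vm_map *map, vm_size_t grow_by)
{
    if (map->is_kernel_map || map->size_limit == VM_SIZE_LIMIT_NONE)
        return KERN_SUCCESS;
    /* Two-step comparison avoids unsigned underflow if the map somehow
     * already exceeds the limit (e.g. the limit was lowered later). */
    if (map->size > map->size_limit
        || grow_by > map->size_limit - map->size)
        return KERN_NO_SPACE;
    return KERN_SUCCESS;
}
```

In the real kernel this would be invoked from every path that grows map->size, including out-of-line memory received in a message, which is exactly why the vm_user wrappers are the wrong layer for it.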
> Hope that helps. And now some overall design questions for the feature
> (not to discourage you): why?
To be honest, I'm trying to make the zzuf test suite pass. Currently it
fails on a test that exhausts memory [1] when the driver runs it with
memory limited to 256M [2][3].
[1] https://github.com/samhocevar/zzuf/blob/master/test/bug-memory.c
[2]
https://github.com/samhocevar/zzuf/blob/master/test/check-zzuf-M-max-memory#L41
[3] https://github.com/samhocevar/zzuf/blob/master/src/myfork.c#L261
> do we actually want this limit?
Hrm, I don't know :( I gathered from here [4] that it's something we'd
like to have, but I may have misunderstood Samuel on that.
[4] https://lists.gnu.org/archive/html/bug-hurd/2024-12/msg00133.html
> what's it useful for?
> isn't address space cheap?
> is it a sort of advisory limit, or is it meant to be robust against malicious
> tasks?
I don't know. It probably makes more sense to limit physical memory,
which is the scarce resource (even more so on 64-bit).
> "Because Unix has it" should not be, by itself, considered enough of a reason
> to bring something into Mach.
>
It was just easier to do in Mach, where the address space size was
already accounted for.
But I guess it could be enforced in glibc by accounting for the total
memory allocated through brk(), mmap(), or mremap() calls.
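Purely as an illustration of that speculative glibc-side alternative (this is not how glibc is structured; the counter, the limit variable, and check_as_limit() are all invented names), the accounting could amount to a process-wide byte counter consulted by the allocation entry points:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical userland accounting: a process-wide counter that the
 * brk()/mmap()/mremap() wrappers would update before syscalling. */
static size_t total_mapped;                 /* bytes currently accounted */
static size_t as_limit = (size_t) -1;       /* RLIMIT_AS, unlimited by default */

/* Returns 0 and records the allocation if `len` more bytes fit under
 * the limit; returns -1 (the wrapper would fail with ENOMEM) otherwise. */
static int check_as_limit(size_t len)
{
    if (total_mapped > as_limit || len > as_limit - total_mapped)
        return -1;
    total_mapped += len;
    return 0;
}
```

The obvious weakness, compared to enforcing it in the kernel, is that this only covers allocations routed through the C library, so anything talking to Mach directly bypasses it.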
> Isn't the limit trivial to work around by spawning a new task (forking
> at Unix level)? Even if the new task inherits the parent's limit, you
> now have twice as much address space available. Moreover, the Hurd's
> exec server will happily give anyone a fresh new task derived from
> itself (as opposed to the caller) if you pass oldtask =
> MACH_PORT_NULL.
>
There's RLIMIT_NPROC to mitigate that workaround, I guess. But it isn't
enforced :/
Thanks
Re: [RFC] Implementing RLIMIT_AS, Samuel Thibault, 2024/12/21