qemu-s390x

From: David Hildenbrand
Subject: Re: [PATCH v1 7/9] memory: introduce RAM_NORESERVE and wire it up in qemu_ram_mmap()
Date: Tue, 2 Mar 2021 20:02:34 +0100
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101 Thunderbird/78.8.0

On 02.03.21 18:32, Peter Xu wrote:
> On Tue, Feb 09, 2021 at 02:49:37PM +0100, David Hildenbrand wrote:
>> @@ -899,13 +899,17 @@ int kvm_s390_mem_op_pv(S390CPU *cpu, uint64_t offset, void *hostbuf,
>>    * to grow. We also have to use MAP parameters that avoid
>>    * read-only mapping of guest pages.
>>    */
>> -static void *legacy_s390_alloc(size_t size, uint64_t *align, bool shared)
>> +static void *legacy_s390_alloc(size_t size, uint64_t *align, bool shared,
>> +                               bool noreserve)
>>   {
>>       static void *mem;
>>
>>       if (mem) {
>>           /* we only support one allocation, which is enough for initial ram */
>>           return NULL;
>> +    } else if (noreserve) {
>> +        error_report("Skipping reservation of swap space is not supported.");
>> +        return NULL
>
> Semicolon missing.

Thanks for catching that!


>>       }
>>
>>       mem = mmap((void *) 0x800000000ULL, size,
>>
>> diff --git a/util/mmap-alloc.c b/util/mmap-alloc.c
>> index b50dc86a3c..bb99843106 100644
>> --- a/util/mmap-alloc.c
>> +++ b/util/mmap-alloc.c
>> @@ -20,6 +20,7 @@
>>   #include "qemu/osdep.h"
>>   #include "qemu/mmap-alloc.h"
>>   #include "qemu/host-utils.h"
>> +#include "qemu/error-report.h"
>>
>>   #define HUGETLBFS_MAGIC 0x958458f6
>>
>> @@ -174,12 +175,18 @@ void *qemu_ram_mmap(int fd,
>>                       size_t align,
>>                       bool readonly,
>>                       bool shared,
>> -                    bool is_pmem)
>> +                    bool is_pmem,
>> +                    bool noreserve)
>
> Maybe at some point we should use flags too here to cover all bools.


Right. I guess the main point was to not reuse RAM_XXX.

Should I introduce RAM_MMAP_XXX?

Thanks!

--
Thanks,

David / dhildenb



