From: Mike Kravetz
Subject: Re: [RFC PATCH 1/3] mm: support hugetlb free page reporting
Date: Wed, 23 Dec 2020 10:47:03 -0800
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101 Thunderbird/68.1.1

On 12/22/20 7:57 PM, Liang Li wrote:
>> On 12/21/20 11:46 PM, Liang Li wrote:
>>> +static int
>>> +hugepage_reporting_cycle(struct page_reporting_dev_info *prdev,
>>> +                      struct hstate *h, unsigned int nid,
>>> +                      struct scatterlist *sgl, unsigned int *offset)
>>> +{
>>> +     struct list_head *list = &h->hugepage_freelists[nid];
>>> +     unsigned int page_len = PAGE_SIZE << h->order;
>>> +     struct page *page, *next;
>>> +     long budget;
>>> +     int ret = 0, scan_cnt = 0;
>>> +
>>> +     /*
>>> +      * Perform early check, if free area is empty there is
>>> +      * nothing to process so we can skip this free_list.
>>> +      */
>>> +     if (list_empty(list))
>>> +             return ret;
>>
>> Do note that not all entries on the hugetlb free lists are free.  Reserved
>> entries are also on the free list.  The actual number of free entries is
>> 'h->free_huge_pages - h->resv_huge_pages'.
>> Is the intention to process reserved pages as well as free pages?
> 
> Yes, reserved pages are treated as 'free pages'.

If that is true, then this code breaks hugetlb.  The hugetlb code assumes
that h->free_huge_pages is ALWAYS >= h->resv_huge_pages, and treating
reserved pages as free would violate that assumption.  If you really want
to add support for hugetlb pages, then you will need to take reserved
pages into account.
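
Untested, and just to sketch the idea (the per-node view of the reserves
is hand waved here; the real accounting would need more care), I would
expect the scan to be capped by the number of truly free pages, roughly
like this inside the loop:

	/*
	 * Sketch only: never isolate more pages than are actually
	 * free, i.e. not backing an outstanding reservation.  This
	 * uses the per-hstate totals; a per-node reserve count would
	 * be needed for a correct per-node walk.
	 */
	long avail = h->free_huge_pages - h->resv_huge_pages;

	list_for_each_entry_safe(page, next, list, lru) {
		if (avail <= 0)
			break;		/* everything left is reserved */
		if (PageReported(page))
			continue;
		/* pull the page for reporting, as the patch does today */
		isolate_free_huge_page(page, h, nid);
		avail--;
		/* ... add the page to the sg list and adjust budget ... */
	}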

P.S. There might be some confusion about 'reservations' based on the
commit message.  My comments are directed at hugetlb reservations described
in Documentation/vm/hugetlbfs_reserv.rst.

>>> +
>>> +     spin_lock_irq(&hugetlb_lock);
>>> +
>>> +     if (huge_page_order(h) > MAX_ORDER)
>>> +             budget = HUGEPAGE_REPORTING_CAPACITY;
>>> +     else
>>> +             budget = HUGEPAGE_REPORTING_CAPACITY * 32;
>>> +
>>> +     /* loop through free list adding unreported pages to sg list */
>>> +     list_for_each_entry_safe(page, next, list, lru) {
>>> +             /* We are going to skip over the reported pages. */
>>> +             if (PageReported(page)) {
>>> +                     if (++scan_cnt >= MAX_SCAN_NUM) {
>>> +                             ret = scan_cnt;
>>> +                             break;
>>> +                     }
>>> +                     continue;
>>> +             }
>>> +
>>> +             /*
>>> +              * If we fully consumed our budget then update our
>>> +              * state to indicate that we are requesting additional
>>> +              * processing and exit this list.
>>> +              */
>>> +             if (budget < 0) {
>>> +                     atomic_set(&prdev->state, PAGE_REPORTING_REQUESTED);
>>> +                     next = page;
>>> +                     break;
>>> +             }
>>> +
>>> +             /* Attempt to pull page from list and place in scatterlist */
>>> +             if (*offset) {
>>> +                     isolate_free_huge_page(page, h, nid);
>>
>> Once a hugetlb page is isolated, it cannot be used, and applications that
>> depend on hugetlb pages can start to fail.
>> I assume that is acceptable/expected behavior.  Correct?
>> On some systems, hugetlb pages are a precious resource and the sysadmin
>> carefully configures the number needed by applications.  Removing a hugetlb
>> page (even for a very short period of time) could cause serious application
>> failure.
> 
> That's true, especially for 1G pages. Any suggestions?
> Make the hugepage allocator aware of this situation and have it retry?

I would hate to add that complexity to the allocator.

This question is likely based on my lack of understanding of virtio-balloon
usage and this reporting mechanism.  But, why do the hugetlb pages have to
be 'temporarily' allocated for reporting purposes?

-- 
Mike Kravetz


