From: Denis V. Lunev
Subject: Re: [PATCH] block: fix possible int overflow
Date: Wed, 6 Nov 2024 16:45:15 +0100
User-agent: Mozilla Thunderbird
On 11/6/24 10:53, Kevin Wolf wrote:
[ Cc: qemu-block ]

Am 06.11.2024 um 09:04 hat Dmitry Frolov geschrieben:
> The sum "cluster_index + count" may overflow uint32_t.
>
> Found by Linux Verification Center (linuxtesting.org) with SVACE.
>
> Signed-off-by: Dmitry Frolov <frolov@swemel.ru>

Thanks, applied to the block branch.

While trying to check if this can be triggered in practice, I found this
line in parallels_fill_used_bitmap():

    s->used_bmap_size = DIV_ROUND_UP(payload_bytes, s->cluster_size);

s->used_bmap_size is unsigned long, payload_bytes is the int64_t result
of bdrv_getlength() for the image file, which could certainly be made
more than 4 GB * cluster_size. I think we need an overflow check there,
too.

When allocate_clusters() calculates new_usedsize, it doesn't seem to
consider the overflow case either.

Denis, can you take a look?

Kevin
Hi, Kevin, Dmitry!

In general, the situation is the following. The on-disk format heavily
uses offsets from the beginning of the disk, denominated in clusters.
These offsets are stored as uint32 on disk. This means that an image
with a 4 TB virtual size and a 1 MB cluster size will use offsets from
0 to 4 * 2^20 in the various on-disk tables.

There is an existing problem in the format specification: we cannot
easily apply limits to the virtual size of the disk, because we can also
have arbitrarily growing metadata like CBT, which is kept in the same
address space (cluster offsets).

Though in reality I have never seen images with a cluster size other
than 1 MB, and to come close to overflowing these offsets we would need
fully allocated images of about 4 PB. The problem is possible in theory,
but so far it looks impractical to me in real life.

Thank you in advance,
    Den
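As a back-of-the-envelope check of the limit described above (a sketch of
the arithmetic, not code from the parallels driver): with cluster offsets
stored as uint32 and a 1 MiB cluster, the addressable range tops out at
2^32 clusters, i.e. roughly 4 PiB.

    /* Illustrative only: maximum range addressable by 32-bit cluster offsets. */
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint64_t cluster_size = 1ULL << 20;                 /* 1 MiB cluster */
        uint64_t max_clusters = (uint64_t)UINT32_MAX + 1;   /* 2^32 offsets */
        uint64_t max_bytes    = max_clusters * cluster_size;/* 2^52 bytes */

        printf("max addressable size: %llu bytes (~%llu TiB)\n",
               (unsigned long long)max_bytes,
               (unsigned long long)(max_bytes >> 40));      /* ~4096 TiB = 4 PiB */
        return 0;
    }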