Re: [PATCH RESEND 6/9] hw/arm/smmu-common: Manage IOTLB block entries
From: Auger Eric
Subject: Re: [PATCH RESEND 6/9] hw/arm/smmu-common: Manage IOTLB block entries
Date: Tue, 30 Jun 2020 17:46:01 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101 Thunderbird/68.5.0
Hi Peter,
On 6/26/20 3:53 PM, Auger Eric wrote:
> Hi Peter,
>
> On 6/25/20 5:30 PM, Peter Maydell wrote:
>> On Thu, 11 Jun 2020 at 17:16, Eric Auger <eric.auger@redhat.com> wrote:
>>>
>>> At the moment each entry in the IOTLB corresponds to a page-sized
>>> mapping (4K, 16K or 64K), even if the page belongs to a mapped
>>> block. In the case of block mappings this inefficiently consumes
>>> IOTLB entries.
>>>
>>> Change the value of the entry so that it reflects the actual
>>> mapping it belongs to (block or page start address and size).
>>>
>>> Also the level/tg of the entry is encoded in the key. Subsequent
>>> patches will enable range invalidation, which is able to provide
>>> the level/tg of the entry.
>>>
>>> Signed-off-by: Eric Auger <eric.auger@redhat.com>
>>
>>
>>> -uint64_t smmu_get_iotlb_key(uint16_t asid, uint64_t iova)
>>> +uint64_t smmu_get_iotlb_key(uint16_t asid, uint64_t iova,
>>> + uint8_t tg, uint8_t level)
>>> {
>>> - return iova >> 12 | (uint64_t)(asid) << SMMU_IOTLB_ASID_SHIFT;
>>> + return iova >> 12 | (uint64_t)(asid) << SMMU_IOTLB_ASID_SHIFT |
>>> + (uint64_t)(level) << SMMU_IOTLB_LEVEL_SHIFT |
>>> + (uint64_t)(tg) << SMMU_IOTLB_TG_SHIFT;
>>> }
>>
>>> SMMUTLBEntry *smmu_iotlb_lookup(SMMUState *bs, SMMUTransCfg *cfg,
>>> - hwaddr iova)
>>> + SMMUTransTableInfo *tt, hwaddr iova)
>>> {
>>> - uint64_t key = smmu_get_iotlb_key(cfg->asid, iova);
>>> - SMMUTLBEntry *entry = g_hash_table_lookup(bs->iotlb, &key);
>>> + uint8_t tg = (tt->granule_sz - 10) / 2;
>>> + int level = tt->starting_level;
>>> + SMMUTLBEntry *entry = NULL;
>>> +
>>> + while (level <= 3) {
>>> + uint64_t subpage_size = 1ULL << level_shift(level, tt->granule_sz);
>>> + uint64_t mask = subpage_size - 1;
>>> + uint64_t key;
>>> +
>>> + key = smmu_get_iotlb_key(cfg->asid, iova & ~mask, tg, level);
>>> + entry = g_hash_table_lookup(bs->iotlb, &key);
>>> + if (entry) {
>>> + break;
>>> + }
>>> + level++;
>>
>> Rather than looping around doing multiple hash table lookups like
>> this, why not just avoid including the tg and level in the
>> key equality test?
>>
>> If I understand the range-based-invalidation feature correctly,
>> the only time we care about the TG/LVL is if we're processing
>> an invalidate-range command that specifies them. But in that
>> case there should never be multiple entries in the bs->iotlb
>> with the same iova, so we can just check whether the entry
>> matches the requested TG/LVL once we've pulled it out of the
>> hash table. (Or we could architecturally validly just blow
>> it away regardless of requested TG/LVL -- they are only hints,
>> not required-to-match.)
>
> This change could have been done independently of the RIL feature. As we
> now put block entries in the IOTLB, when we look up an IOVA
> translation, the IOVA may be mapped using different block sizes or using
> page entries. So we start looking at blocks of the largest size (entry
> level) down to the page size, for instance 4TB/512MB/64KB. We cannot know
> in advance which block and size the address belongs to. I do not know
> whether we can assume the driver is forbidden from invalidating an
> address that is not the start address of an initial mapping.
>
> Not a justification but for information, this is implemented the same
> way on x86 (except they don't have a variable TG), see vtd_lookup_iotlb
> in hw/i386/intel_iommu.c.
Does that make more sense overall? I will respin shortly.
Thanks
Eric
>
> Thanks
>
> Eric
>>
>> thanks
>> -- PMM
>>
>
>