
question about set_user_va_idx #7194

Closed
qazsdcx opened this issue Dec 23, 2024 · 10 comments
qazsdcx commented Dec 23, 2024

Hi experts,

I have a question about set_user_va_idx() in core_mmu_lpae.c:

static void set_user_va_idx(struct mmu_partition *prtn)
{
	uint64_t *tbl = NULL;
	unsigned int n = 0;

	assert(prtn);

	tbl = prtn->base_tables[0][get_core_pos()];

	/*
	 * If base level is 0, then we must use its entry 0.
	 */
	if (CORE_MMU_BASE_TABLE_LEVEL == 0) {
		/*
		 * If base level 0 entry 0 is not used then
		 * it's clear that we can use level 1 entry 1 inside it.
		 * (will be allocated later).
		 */
		if ((tbl[0] & DESC_ENTRY_TYPE_MASK) == INVALID_DESC) {
			user_va_idx = 1;

			return;
		}

		assert((tbl[0] & DESC_ENTRY_TYPE_MASK) == TABLE_DESC);

		tbl = core_mmu_xlat_table_entry_pa2va(prtn, 0, tbl[0]);
		assert(tbl);
	}

	/*
	 * Search level 1 table (i.e. 1GB mapping per entry) for
	 * an empty entry in the range [1GB, 4GB[.
	 */
	for (n = 1; n < 4; n++) {
		if ((tbl[n] & DESC_ENTRY_TYPE_MASK) == INVALID_DESC) {
			user_va_idx = n;
			break;
		}
	}

	assert(user_va_idx != -1);
}

My question is: is the constant 4 in the for loop the number of level-1 table entries? Should this value be NUM_BASE_LEVEL_ENTRIES instead?

for (n = 1; n < 4; n++) {
	if ((tbl[n] & DESC_ENTRY_TYPE_MASK) == INVALID_DESC) {
		user_va_idx = n;
		break;
	}
}

If I change CFG_LPAE_ADDR_SPACE_BITS, then NUM_BASE_LEVEL_ENTRIES also changes. Should the loop bound over the level-1 entries then be NUM_BASE_LEVEL_ENTRIES as well, since the number of level-1 entries may not be 4?

@jenswi-linaro (Contributor)

If you read the comment above the function you'll see that the TA memory space has to be below 4GB. The number will be 4 as long as we have that restriction.

qazsdcx (Author) commented Dec 23, 2024

Thank you for your reply.

I have another question. If I increase CFG_LPAE_ADDR_SPACE_BITS from 32 to 36, do I also need to increase MAX_XLAT_TABLES?

@jenswi-linaro (Contributor)

Not necessarily. It depends on how the memory is mapped, for instance, how scattered it is. If you increase CFG_LPAE_ADDR_SPACE_BITS over 39, you'll get another level, which will cost a few extra translation tables.

qazsdcx (Author) commented Dec 24, 2024

Thank you for your reply.

qazsdcx (Author) commented Dec 24, 2024

I have run into another problem:
I want to map 4 GiB + 256 MiB of memory in CFG_RESERVED_VASPACE_SIZE.
So I made the following modifications:

1. Increased CFG_LPAE_ADDR_SPACE_BITS to 36.
2. Increased MAX_XLAT_TABLES to 12.
3. Set CFG_RESERVED_VASPACE_SIZE := (4 * 1024*1024*1024L + 256 * 1024*1024)

Below is the memory map info from register_phys_mem_pgdir when the system powers on:

[ 3.822238][D/TC] 00 dump_mmap_table:838 type TEE_RAM_RX va 0x80200000..0x802b2fff pa 0x80200000..0x802b2fff size 0x000b3000 (smallpg)
[ 3.824198][D/TC] 00 dump_mmap_table:838 type TEE_RAM_RW va 0x802b3000..0x803fffff pa 0x802b3000..0x803fffff size 0x0014d000 (smallpg)
[ 3.826158][D/TC] 00 dump_mmap_table:838 type RAM_SEC va 0x80400000..0x80407fff pa 0x12050000..0x12057fff size 0x00008000 (smallpg)
[ 3.828119][D/TC] 00 dump_mmap_table:838 type SHM_VASPACE va 0x80600000..0x825fffff pa 0x00000000..0x01ffffff size 0x02000000 (pgdir)
[ 3.830051][D/TC] 00 dump_mmap_table:838 type RES_VASPACE va 0x82600000..0x1925fffff pa 0x00000000..0x10fffffff size 0x110000000 (pgdir)
[ 3.832022][D/TC] 00 dump_mmap_table:838 type IO_SEC va 0x192600000..0x1927fffff pa 0x08200000..0x083fffff size 0x00200000 (pgdir)
[ 3.833983][D/TC] 00 dump_mmap_table:838 type IO_SEC va 0x192800000..0x1929fffff pa 0x08400000..0x085fffff size 0x00200000 (pgdir)
[ 3.835943][D/TC] 00 dump_mmap_table:838 type IO_SEC va 0x192a00000..0x192bfffff pa 0x08600000..0x087fffff size 0x00200000 (pgdir)
[ 3.837903][D/TC] 00 dump_mmap_table:838 type IO_SEC va 0x192c00000..0x192dfffff pa 0x12000000..0x121fffff size 0x00200000 (pgdir)
[ 3.839865][D/TC] 00 dump_mmap_table:838 type IO_SEC va 0x192e00000..0x192ffffff pa 0x2a200000..0x2a3fffff size 0x00200000 (pgdir)
[ 3.841826][D/TC] 00 dump_mmap_table:838 type IO_SEC va 0x193000000..0x1931fffff pa 0x43800000..0x439fffff size 0x00200000 (pgdir)
[ 3.843787][D/TC] 00 dump_mmap_table:838 type IO_SEC va 0x193200000..0x1933fffff pa 0x46800000..0x469fffff size 0x00200000 (pgdir)
[ 3.845749][D/TC] 00 dump_mmap_table:838 type IO_SEC va 0x193400000..0x1935fffff pa 0x5f000000..0x5f1fffff size 0x00200000 (pgdir)
[ 3.847711][D/TC] 00 dump_mmap_table:838 type IO_SEC va 0x193600000..0x1937fffff pa 0x5f800000..0x5f9fffff size 0x00200000 (pgdir)
[ 3.849672][D/TC] 00 dump_mmap_table:838 type IO_SEC va 0x193800000..0x1951fffff pa 0x64000000..0x659fffff size 0x01a00000 (pgdir)
[ 3.851633][D/TC] 00 dump_mmap_table:838 type IO_SEC va 0x195200000..0x1953fffff pa 0x6c000000..0x6c1fffff size 0x00200000 (pgdir)
[ 3.853595][D/TC] 00 dump_mmap_table:838 type IO_SEC va 0x195400000..0x1955fffff pa 0x6c200000..0x6c3fffff size 0x00200000 (pgdir)
[ 3.855556][D/TC] 00 dump_mmap_table:838 type IO_SEC va 0x195600000..0x1965fffff pa 0x6d000000..0x6dffffff size 0x01000000 (pgdir)
[ 3.857517][D/TC] 00 dump_mmap_table:838 type IO_SEC va 0x196600000..0x1967fffff pa 0x6e000000..0x6e1fffff size 0x00200000 (pgdir)
[ 3.859479][D/TC] 00 dump_mmap_table:838 type IO_SEC va 0x196800000..0x1987fffff pa 0x70000000..0x71ffffff size 0x02000000 (pgdir)
[ 3.861441][D/TC] 00 dump_mmap_table:838 type IO_SEC va 0x198800000..0x19a7fffff pa 0x78000000..0x79ffffff size 0x02000000 (pgdir)
[ 3.863402][D/TC] 00 dump_mmap_table:838 type TA_RAM va 0x19a800000..0x19c5fffff pa 0x80400000..0x821fffff size 0x01e00000 (pgdir)
[ 3.865364][D/TC] 00 dump_mmap_table:838 type NSEC_SHM va 0x19c600000..0x19c7fffff pa 0x85000000..0x851fffff size 0x00200000 (pgdir)
[ 3.867325][D/TC] 00 dump_mmap_table:838 type RAM_NSEC va 0x19c800000..0x19c9fffff pa 0x85200000..0x853fffff size 0x00200000 (pgdir)
[ 3.869286][D/TC] 00 dump_mmap_table:838 type RAM_SEC va 0x19ca00000..0x19e3fffff pa 0x86600000..0x87ffffff size 0x01a00000 (pgdir)
[ 3.871248][D/TC] 00 dump_mmap_table:838 type RAM_SEC va 0x19e400000..0x29e3fffff pa 0x1340800000..0x14407fffff size 0x100000000 (pgdir)
[ 3.873582][D/TC] 00 core_mmu_xlat_table_alloc:531 xlat tables used 1 / 12
[ 3.874802][D/TC] 00 core_mmu_xlat_table_alloc:531 xlat tables used 2 / 12
[ 3.900452][D/TC] 00 core_mmu_xlat_table_alloc:531 xlat tables used 3 / 12
[ 3.902027][D/TC] 00 core_mmu_xlat_table_alloc:531 xlat tables used 4 / 12
[ 3.911137][D/TC] 00 core_mmu_xlat_table_alloc:531 xlat tables used 5 / 12
[ 3.923361][D/TC] 00 core_mmu_xlat_table_alloc:531 xlat tables used 6 / 12
[ 3.935587][D/TC] 00 core_mmu_xlat_table_alloc:531 xlat tables used 7 / 12
[ 3.947806][D/TC] 00 core_mmu_xlat_table_alloc:531 xlat tables used 8 / 12

After power-on, I call core_mmu_add_mapping() in my PTA to map 1 GiB of memory:

data_ptr = core_mmu_add_mapping(MEM_AREA_RAM_NSEC, (paddr_t)in_ptr, data_len);

data_ptr is NULL. Printing in_ptr and data_len gives:
in_ptr=0x93291b000, data_len=1073743136

Looking into core_mmu_add_mapping(), this code path returns NULL:

if (core_mmu_va2idx(&tbl_info, map->va + len) >= tbl_info.num_entries) {
	return NULL;
}

I printed the related info:
map->va=0x82600000, len=0x40000520, va_base=0x80000000, shift=0x15, num_entries=512
The return value of core_mmu_va2idx(&tbl_info, map->va + len) is greater than 512.

My questions are:

  1. What is the maximum size for which core_mmu_add_mapping() works? Is it set somewhere in the configs? What should I do if I exceed this maximum?
  2. If I want to map a large memory range (more than 1 GiB), which API can I use?

Thank you

@jenswi-linaro (Contributor)

I can't tell directly what is wrong; I think you need to debug core_mmu_add_mapping() to find out where it goes wrong.

qazsdcx (Author) commented Jan 3, 2025

May I ask again what the maximum size is for which core_mmu_add_mapping() works? Can core_mmu_add_mapping() map more than 1 GiB (or 2 GiB) of memory?

@jenswi-linaro (Contributor)

It is limited by CFG_RESERVED_VASPACE_SIZE and how much has already been used.
However, the implementation also assumes that everything fits within the table returned by core_mmu_find_table(), typically 2 MB.

core_mmu_add_mapping() has been designed for small mappings during boot while only one CPU is active and no preemption can occur. It looks like you need to make a few changes to the function for your use-case.

@jenswi-linaro (Contributor)

core_mmu_map_contiguous_pages() might help in your case. For an example of how to use it, see handle_mem_share_tmem().


github-actions bot commented Feb 5, 2025

This issue has been marked as a stale issue because it has been open (more than) 30 days with no activity. Remove the stale label or add a comment, otherwise this issue will automatically be closed in 5 days. Note that you can always re-open a closed issue at any time.
