The AMO instructions are critical for Linux, so allow low-end
RISC-V implementations without Zaamo to boot Linux by emulating AMO
instructions using Zalrsc when OpenSBI is compiled without Zaamo.
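As a rough illustration of what the emulation has to do (a hedged sketch,
not the actual trap-handler code; acquire/release ordering handling is
omitted), an amoadd.w can be reproduced with an lr.w/sc.w retry loop:

```c
#include <stdint.h>

/* Sketch only: the lr.w/sc.w loop that stands in for amoadd.w when
 * Zaamo is unavailable.  Returns the original memory value, as an
 * AMO would. */
static inline int32_t emulated_amoadd_w(volatile int32_t *addr, int32_t val)
{
	int32_t prev;
	uint32_t tmp;

	__asm__ __volatile__(
		"0:	lr.w	%0, (%2)\n"	/* load-reserve old value   */
		"	add	%1, %0, %3\n"	/* compute old + val        */
		"	sc.w	%1, %1, (%2)\n"	/* store; %1 = 0 on success */
		"	bnez	%1, 0b\n"	/* reservation lost, retry  */
		: "=&r" (prev), "=&r" (tmp)
		: "r" (addr), "r" (val)
		: "memory");

	return prev;
}
```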
Signed-off-by: Chao-ying Fu <cfu@mips.com>
Signed-off-by: Anup Patel <apatel@ventanamicro.com>
Link: https://lore.kernel.org/r/20250519121207.976949-1-apatel@ventanamicro.com
Signed-off-by: Anup Patel <anup@brainfault.org>
When OpenSBI is built with a relatively new compiler (gcc-13 and greater),
I observed that GDB is unable to produce proper backtraces and some
variable values appear corrupted (even if the associated DWARF
location descriptor is correct).
It turns out that, to work properly with debug information, debuggers
often need to unwind the stack. They generally rely on Call Frame
Information (CFI) records provided by the compiler to facilitate this task.
Currently, the GCC compiler offers two mechanisms:
- `.debug_frame` section (as described in the DWARF specification).
- `.eh_frame` sections (as described in LSB documents).
The latter (`.eh_frame`) supports stack unwinding at runtime, providing
a framework for C++ exceptions or enabling backtrace generation using
libraries like libunwind. However, the downside of this approach is that
these sections must be part of loadable segments.
The former (`.debug_frame`) is simply an ordinary debug section.
Starting from GCC 13, Linux targets enable the `-fasynchronous-unwind-tables`
and `-funwind-tables` flags by default. Relevant commit:
https://github.com/gcc-mirror/gcc/commit/3cd08f7168
When these flags are active, the compiler generates `.eh_frame` sections
instead of `.debug_frame`. Since OpenSBI is built using the **Linux
toolchain**, this behavior applies to OpenSBI as well.
The problem arises because the SBI build system uses `-Wl,--gc-sections`,
which discards the `.eh_frame` section.
Possible Fixes:
1. Enforce `.debug_frame` generation – Modify compiler flags to generate
`.debug_frame` instead of `.eh_frame`.
2. Preserve `.eh_frame` in the linker script – Add `KEEP(*(.eh_frame))`
to ensure the section is not discarded.
I chose Option 1 because it avoids any runtime overhead.
Signed-off-by: Parshintsev Anatoly <anatoly.parshintsev@syntacore.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Link: https://lore.kernel.org/r/20250421124729.36364-1-anatoly.parshintsev@syntacore.com
Signed-off-by: Anup Patel <anup@brainfault.org>
If we hotplug a core and then perform a suspend-to-RAM operation on a
multi-core platform, the hotplugged CPU may be woken up along with the rest
of the system, particularly on platforms that wake all cores from the
deepest sleep state. When this happens, the hotplugged CPU enters the
sbi_hsm_wait WFI wait loop instead of transitioning into a
platform-specific low-power state. To address this, we add an HSM stop call
within the wait loop. This allows platforms that support HSM stop to enter
a low-power state when the CPU is unexpectedly woken up.
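A rough sketch of the resulting wait-loop shape (the helper names here
are illustrative, not the actual OpenSBI functions):

```c
#include <stdbool.h>

/* Hypothetical helpers standing in for the real HSM device callbacks. */
extern bool hsm_device_has_hart_stop(void);
extern void hsm_device_hart_stop(void);
extern bool hart_start_requested(void);

/* Sketch only: prefer a platform low-power stop over plain wfi while a
 * stopped hart waits to be (re)started. */
static void hart_wait_for_start(void)
{
	while (!hart_start_requested()) {
		if (hsm_device_has_hart_stop())
			hsm_device_hart_stop();
		else
			__asm__ __volatile__("wfi");
	}
}
```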
Signed-off-by: Nick Hu <nick.hu@sifive.com>
Reviewed-by: Samuel Holland <samuel.holland@sifive.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Link: https://lore.kernel.org/r/20250418064506.15771-1-nick.hu@sifive.com
Signed-off-by: Anup Patel <anup@brainfault.org>
sstateen* and hstateen* CSRs must be zeroed by M-mode if the mstateen*
registers are missing, to avoid security issues.
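A minimal sketch of that zeroing, assuming OpenSBI's usual csr_write()
accessor and CSR constant names (sstateen1-3 and hstateen1-3 are handled
the same way, and the hstateen writes apply only when the H extension is
present):

```c
static void zero_stateen_csrs(void)
{
	/* Sketch only: with no mstateen* CSRs present, M-mode zeroes the
	 * supervisor and hypervisor stateen CSRs itself. */
	csr_write(CSR_SSTATEEN0, 0);
	csr_write(CSR_HSTATEEN0, 0);
#if __riscv_xlen == 32
	csr_write(CSR_HSTATEEN0H, 0);	/* separate write for the high half on rv32 */
#endif
}
```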
Reviewed-by: Anup Patel <anup@brainfault.org>
Signed-off-by: Radim Krčmář <rkrcmar@ventanamicro.com>
Link: https://lore.kernel.org/r/20250429142549.3673976-10-rkrcmar@ventanamicro.com
Signed-off-by: Anup Patel <anup@brainfault.org>
We already detect Smstateen, but Ssstateen exists as well, and it doesn't
have the M-mode stateen CSRs.
Reviewed-by: Anup Patel <anup@brainfault.org>
Signed-off-by: Radim Krčmář <rkrcmar@ventanamicro.com>
Link: https://lore.kernel.org/r/20250429142549.3673976-9-rkrcmar@ventanamicro.com
Signed-off-by: Anup Patel <anup@brainfault.org>
The current logic clears some bits based on extensions known to SBI.
To be safe, do not leave anything enabled that SBI doesn't control.
This is not a breaking change, because the register must be initialized
to 0 by the ISA on reset, but it is better not to depend on that when we
don't need to.
Reviewed-by: Anup Patel <anup@brainfault.org>
Signed-off-by: Radim Krčmář <rkrcmar@ventanamicro.com>
Link: https://lore.kernel.org/r/20250429142549.3673976-8-rkrcmar@ventanamicro.com
Signed-off-by: Anup Patel <anup@brainfault.org>
The Sstateen extension defines 4 sstateen registers, but SBI currently
configures the execution environment to raise an illegal instruction
exception when sstateen1-3 are accessed.
SBI should implement all sstateen registers, so delegate the
implementation to hardware by setting the SE bit.
Reviewed-by: Anup Patel <anup@brainfault.org>
Signed-off-by: Radim Krčmář <rkrcmar@ventanamicro.com>
Link: https://lore.kernel.org/r/20250429142549.3673976-7-rkrcmar@ventanamicro.com
Signed-off-by: Anup Patel <anup@brainfault.org>
Not resetting sstateen is a potential security hole, because U-mode might
be able to access state that S-mode does not properly context-switch.
The same applies to hstateen with VS-mode and HS-mode.
Reviewed-by: Anup Patel <anup@brainfault.org>
Signed-off-by: Radim Krčmář <rkrcmar@ventanamicro.com>
Link: https://lore.kernel.org/r/20250429142549.3673976-6-rkrcmar@ventanamicro.com
Signed-off-by: Anup Patel <anup@brainfault.org>
hstatus.HU must be cleared, because U-mode could otherwise use the
HLV/HSV instructions. This would allow U-mode to read physical memory
directly if hgatp and vsatp were 0.
The remaining fields don't seem like a security vulnerability now, but
clearing the whole CSR is not an issue, so do that to be safe.
Reviewed-by: Anup Patel <anup@brainfault.org>
Signed-off-by: Radim Krčmář <rkrcmar@ventanamicro.com>
Link: https://lore.kernel.org/r/20250429142549.3673976-5-rkrcmar@ventanamicro.com
Signed-off-by: Anup Patel <anup@brainfault.org>
Most CSRs are XLEN bits wide, but some are 64 bits wide, so rv32 needs
two accesses, plaguing the code with ifdefs.
Add new helpers that split a 64-bit operation into two operations on rv32.
The helpers don't use "csr + 0x10", but append "H" to the end of the CSR
name to get a compile-time error when accessing a register that is not
64 bits wide. This has the downside that you have to use the CSR name
when accessing them, e.g. csr_read64(0x1234) or csr_read64(CSR_SATP)
won't compile, and the error messages you get for these bugs are not
straightforward.
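A hedged sketch of the token-pasting approach described above, assuming
OpenSBI's u64 type and csr_read() macro (the real helpers also cover
writes and differ in detail):

```c
/* Sketch only.  On rv32, "csr##H" pastes an "H" onto the CSR name, so
 * csr_read64() only compiles for CSRs that really have a high half
 * (e.g. CSR_MENVCFG/CSR_MENVCFGH). */
#if __riscv_xlen == 32
#define csr_read64(csr)				\
({						\
	u64 __lo = csr_read(csr);		\
	u64 __hi = csr_read(csr##H);		\
	(__hi << 32) | __lo;			\
})
#else
#define csr_read64(csr)	((u64)csr_read(csr))
#endif
```

Because the "H" suffix is pasted onto the CSR name, csr_read64(CSR_SATP)
fails at compile time (there is no CSR_SATPH), which is the behaviour
described above. A real implementation may also need to guard against the
value changing between the two rv32 reads.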
Reviewed-by: Anup Patel <anup@brainfault.org>
Signed-off-by: Radim Krčmář <rkrcmar@ventanamicro.com>
Link: https://lore.kernel.org/r/20250429142549.3673976-3-rkrcmar@ventanamicro.com
Signed-off-by: Anup Patel <anup@brainfault.org>
Set the scratch allocation alignment to the cacheline size specified by
riscv,cbom-block-size in the device tree to prevent two atomic variables
from sharing a cache line and causing livelock on some platforms. If the
cacheline size is not specified, fall back to a default value.
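The rounding this implies is simple (a sketch, assuming a power-of-two
cacheline size):

```c
/* Sketch only: round an allocation size up to the cacheline size so
 * that consecutive scratch allocations never share a cache line. */
static unsigned long cacheline_align(unsigned long size,
				     unsigned long cbom_block_size)
{
	return (size + cbom_block_size - 1) & ~(cbom_block_size - 1);
}
```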
Signed-off-by: Raj Vishwanathan <Raj.Vishwanathan@gmail.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Reviewed-by: Samuel Holland <samuel.holland@sifive.com>
Link: https://lore.kernel.org/r/20250423225045.267983-1-Raj.Vishwanathan@gmail.com
Signed-off-by: Anup Patel <anup@brainfault.org>
In the current implementation, the length of the
hartindex_to_context_table[] array is fixed at SBI_HARTMASK_MAX_BITS.
However, the number of harts supported by the platform might not be
SBI_HARTMASK_MAX_BITS and is usually smaller, so there is no need to
allocate such a fixed-length array here.
To be precise, the current implementation always allocates 1024 bytes for
hartindex_to_context_table[128] on an RV64 platform. However, a platform
that supports only two harts needs just hartindex_to_context_table[2],
which is 16 bytes.
This commit calculates the needed size of hartindex_to_context_table[]
according to the number of harts supported by the platform when
registering per-domain data, so that the memory usage of per-domain
context data is reduced.
Signed-off-by: Alvin Chang <alvinga@andestech.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Link: https://lore.kernel.org/r/20250326062051.3763530-1-alvinga@andestech.com
Signed-off-by: Anup Patel <anup@brainfault.org>
Now that the generic platform only sets .vendor_ext_provider if the
function is really implemented, there is no need for a separate hook to
check if a vendor extension is implemented.
Signed-off-by: Samuel Holland <samuel.holland@sifive.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Link: https://lore.kernel.org/r/20250325234342.711447-11-samuel.holland@sifive.com
Signed-off-by: Anup Patel <anup@brainfault.org>
This function has been obsoleted by the fdt_driver library and is no
longer used.
Signed-off-by: Samuel Holland <samuel.holland@sifive.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Link: https://lore.kernel.org/r/20250325234342.711447-10-samuel.holland@sifive.com
Signed-off-by: Anup Patel <anup@brainfault.org>
In addition to deduplicating the code, this also improves the match
selection logic to respect the priority order of the compatible strings,
as implemented in commit 0ffe265fd969 ("lib: utils/fdt: Respect
compatible string fallback priority").
Signed-off-by: Samuel Holland <samuel.holland@sifive.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Link: https://lore.kernel.org/r/20250325234342.711447-9-samuel.holland@sifive.com
Signed-off-by: Anup Patel <anup@brainfault.org>
Now that all of the overrides are modifying generic_platform_ops
directly, remove the unused hooks and forwarding functions. The
remaining members of struct platform_override match struct fdt_driver,
so use that type instead. This allows a future commit to reuse the
fdt_driver-based init function.
Signed-off-by: Samuel Holland <samuel.holland@sifive.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Link: https://lore.kernel.org/r/20250325234342.711447-8-samuel.holland@sifive.com
Signed-off-by: Anup Patel <anup@brainfault.org>
Switch all existing platform overrides to use the helper pattern instead
of the platform hooks. After this commit, only the .match_table and
.init members of struct platform_override are used.
There are two minor behavioral differences:
- For Allwinner D1, fdt_add_cpu_idle_states() is now called before the
body of generic_final_init(). This should have no functional impact.
- For StarFive JH7110, if the /chosen/starfive,boot-hart-id property is
missing, the code now falls back to using generic_coldboot_harts,
instead of accepting any hart.
Signed-off-by: Samuel Holland <samuel.holland@sifive.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Link: https://lore.kernel.org/r/20250325234342.711447-7-samuel.holland@sifive.com
Signed-off-by: Anup Patel <anup@brainfault.org>
Currently the generic platform follows the middleware pattern: it
implements the sbi_platform hooks, while providing its own set of hooks
for further customization. This has a few disadvantages: each location
where customization is needed requires a separate platform_override
hook, including places where the generic function does nothing except
forward to a platform_override hook, and the extra layer of function
pointers adds runtime overhead.
Let's restructure the generic platform to follow the helper pattern.
Allow platform overrides to treat the generic platform as a template,
adding or replacing the sbi_platform_operations as needed. Export the
generic implementations, so they can be called as helpers from inside
the override functions. With this pattern, the platform_override
function pointers are replaced by direct calls, and the forwarding
functions can be removed.
The forwarding functions are not exported, since there is no reason for
an override to call them. generic_vendor_ext_check() must be rewritten,
since now there is a new way to override vendor_ext_provider.
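A hedged illustration of the resulting pattern (the SoC, its hooks, and
the exact structure field signatures are assumptions here; only the shape
follows the description above):

```c
/* Hypothetical platform override using the helper pattern. */
static int my_soc_final_init(bool cold_boot)
{
	/* SoC-specific setup first ... */
	return generic_final_init(cold_boot);	/* then the exported generic helper */
}

static int my_soc_platform_init(const void *fdt, int nodeoff,
				const struct fdt_match *match)
{
	/* Patch only the hooks this SoC needs to customize. */
	generic_platform_ops.final_init = my_soc_final_init;
	return 0;
}

static const struct fdt_match my_soc_match[] = {
	{ .compatible = "vendor,my-soc" },
	{ },
};

static const struct platform_override my_soc = {
	.match_table = my_soc_match,
	.init = my_soc_platform_init,
};
```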
Signed-off-by: Samuel Holland <samuel.holland@sifive.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Link: https://lore.kernel.org/r/20250325234342.711447-6-samuel.holland@sifive.com
Signed-off-by: Anup Patel <anup@brainfault.org>
In preparation for reusing the fdt_driver code to match platform
overrides, add a new .init hook matching the type signature from
fdt_driver. This hook replaces the existing .fw_init hook, since
it is called at roughly the same place in the init process.
Signed-off-by: Samuel Holland <samuel.holland@sifive.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Link: https://lore.kernel.org/r/20250325234342.711447-5-samuel.holland@sifive.com
Signed-off-by: Anup Patel <anup@brainfault.org>
struct fdt_match expects match data to be const. Follow this expectation
so that no type casting is needed.
Signed-off-by: Samuel Holland <samuel.holland@sifive.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Link: https://lore.kernel.org/r/20250325234342.711447-4-samuel.holland@sifive.com
Signed-off-by: Anup Patel <anup@brainfault.org>
This function accesses the FDT blob, which means it is only valid to
call during cold boot, before a lower privilege level has an opportunity
to clobber that memory.
Signed-off-by: Samuel Holland <samuel.holland@sifive.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Link: https://lore.kernel.org/r/20250325234342.711447-3-samuel.holland@sifive.com
Signed-off-by: Anup Patel <anup@brainfault.org>
The addresses of these functions are used to set function pointers in
struct platform_override, so it is not valid for them to be inline.
Signed-off-by: Samuel Holland <samuel.holland@sifive.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Link: https://lore.kernel.org/r/20250325234342.711447-2-samuel.holland@sifive.com
Signed-off-by: Anup Patel <anup@brainfault.org>
OpenSBI supports multiple supervisor domains running on the same platform.
When these supervisor domains want to communicate with OpenSBI through
MPXY channels, they will allocate MPXY shared memory from their own memory
regions. Therefore, the MPXY state must be a per-domain and per-hart data
structure.
This commit registers per-domain MPXY state data in sbi_mpxy_init(). The
original MPXY state allocated in the scratch region is removed. We also
replace the sbi_scratch_thishart_offset_ptr() macro with a new
sbi_domain_mpxy_state_thishart_ptr() macro which gets the MPXY state from
per-domain data.
Signed-off-by: Alvin Chang <alvinga@andestech.com>
Reviewed-by: Yu-Chien Peter Lin <peter.lin@sifive.com>
Reviewed-by: Samuel Holland <samuel.holland@sifive.com>
Link: https://lore.kernel.org/r/20250325071314.3113941-1-alvinga@andestech.com
Signed-off-by: Anup Patel <anup@brainfault.org>
Print an error message and truncate the string when the length of the
extension name string exceeds the buffer size.
Signed-off-by: Jimmy Ho <jimmy.ho@sifive.com>
Reviewed-by: Nick Hu <nick.hu@sifive.com>
Reviewed-by: Zong Li <zong.li@sifive.com>
Link: https://lore.kernel.org/r/20250321001450.11189-1-jimmy.ho@sifive.com
Signed-off-by: Anup Patel <anup@brainfault.org>
No need to initialise the nodes to be added to the linked list
Signed-off-by: Xiang W <wxjstz@126.com>
Reviewed-by: Samuel Holland <samuel.holland@sifive.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Link: https://lore.kernel.org/r/20250319123944.505756-1-wxjstz@126.com
Signed-off-by: Anup Patel <anup@brainfault.org>
Obtaining a 64-bit address under rv32 does not require combining two
32-bit registers, because the upper 32 bits are ignored on rv32.
Signed-off-by: Xiang W <wxjstz@126.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Link: https://lore.kernel.org/r/20250319123832.505033-1-wxjstz@126.com
Signed-off-by: Anup Patel <anup@brainfault.org>
Shared memory needs to be accessed in M-mode, so for now the high
address of shared memory can't be non-zero.
Signed-off-by: Xiang W <wxjstz@126.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Link: https://lore.kernel.org/r/20250319123719.504622-1-wxjstz@126.com
Signed-off-by: Anup Patel <anup@brainfault.org>
Referencing commit 0c304b661965
("lib: sbi: Allow programmable counters to monitor cycle/instret events"),
support this functionality for the Andes PMU as well.
Signed-off-by: Leo Yu-Chi Liang <ycliang@andestech.com>
Reviewed-by: Samuel Holland <samuel.holland@sifive.com>
Reviewed-by: Atish Patra <atishp@rivosinc.com>
Link: https://lore.kernel.org/r/20250328084142.540807-1-ycliang@andestech.com
Signed-off-by: Anup Patel <anup@brainfault.org>
The (event ID & "second column mask") should equal
the "first column match value". Modify the example
to fit the description.
Signed-off-by: Leo Yu-Chi Liang <ycliang@andestech.com>
Reviewed-by: Samuel Holland <samuel.holland@sifive.com>
Link: https://lore.kernel.org/r/20250324043943.2513070-1-ycliang@andestech.com
Signed-off-by: Anup Patel <anup@brainfault.org>
sbi_send_ipi() should return SBI_ERR_INVALID_PARAM if even one hartid
constructed from hart_mask_base and hart_mask is not valid.
Signed-off-by: Andrew Jones <ajones@ventanamicro.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Link: https://lore.kernel.org/r/20250314163021.154530-6-ajones@ventanamicro.com
Signed-off-by: Anup Patel <anup@brainfault.org>
Provide a function to count the number of set bits in a hartmask,
which builds on a new function for the same that operates on a
bitmask. While at it, improve the performance of sbi_popcount()
which is used in the implementation.
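For reference, a population count of this sort is typically implemented
with the classic SWAR bit-twiddling form below (shown as a hedged sketch
of the technique, not necessarily the exact OpenSBI code):

```c
#include <limits.h>

/* Count set bits without a loop: pairwise sums, then nibble sums,
 * then one multiply to accumulate the per-byte counts. */
static inline unsigned long popcount_ulong(unsigned long v)
{
	v = v - ((v >> 1) & (~0UL / 3));
	v = (v & (~0UL / 15 * 3)) + ((v >> 2) & (~0UL / 15 * 3));
	v = (v + (v >> 4)) & (~0UL / 255 * 15);
	return (v * (~0UL / 255)) >> (sizeof(unsigned long) - 1) * CHAR_BIT;
}
```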
Signed-off-by: Andrew Jones <ajones@ventanamicro.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Link: https://lore.kernel.org/r/20250314163021.154530-5-ajones@ventanamicro.com
Signed-off-by: Anup Patel <anup@brainfault.org>
Recursively expanded variables (defined with '=') are expanded at
evaluation time. These version information variables are evaluated
inside a recipe as part of GENFLAGS. As a result, the shell commands
are executed separately for each compiler invocation. Convert the
version information variables to be simply expanded, so the shell
commands are executed only once, at Makefile evaluation time. This
speeds up the build by as much as 75%.
A separate check is needed to maintain the behavior of preferring the
value of OPENSBI_BUILD_TIME_STAMP from the environment.
Signed-off-by: Samuel Holland <samuel.holland@sifive.com>
Reviewed-by: Xiang W <wxjstz@126.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Link: https://lore.kernel.org/r/20250313035755.3796610-1-samuel.holland@sifive.com
Signed-off-by: Anup Patel <anup@brainfault.org>
The Control Transfer Records (CTR) extension provides a method to
record a limited branch history in register-accessible internal chip
storage.
This extension is similar to Arch LBR in x86 and BRBE in ARM.
The extension has been stable and the latest release can be found here:
https://github.com/riscv/riscv-control-transfer-records/release
Signed-off-by: Rajnesh Kanwal <rkanwal@rivosinc.com>
Reviewed-by: Atish Patra <atishp@rivosinc.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Link: https://lore.kernel.org/r/20250307124451.122828-1-rkanwal@rivosinc.com
Signed-off-by: Anup Patel <anup@brainfault.org>
When redirecting an exception to S-mode, transform the (v)stvec CSR
value as described in the privileged spec to derive the S-mode PC.
Since OpenSBI never redirects interrupts, only synchronous exceptions,
the only action needed is to mask out the (v)stvec.MODE field.
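Concretely, the transformation is just clearing the 2-bit MODE field
(a sketch; the real code uses OpenSBI's CSR accessors):

```c
/* (v)stvec layout: bits [1:0] are MODE, the rest is BASE.  For a
 * synchronous exception redirect, the target PC is simply BASE. */
static inline unsigned long stvec_to_redirect_pc(unsigned long stvec_val)
{
	return stvec_val & ~0x3UL;
}
```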
Reported-by: Jan Reinhard <jan.reinhard@sysgo.com>
Closes: https://github.com/riscv-software-src/opensbi/issues/391
Signed-off-by: Samuel Holland <samuel.holland@sifive.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Link: https://lore.kernel.org/r/20250305014729.3143535-1-samuel.holland@sifive.com
Signed-off-by: Anup Patel <anup@brainfault.org>
When -march=rv64im_zalrsc_zicsr is used, provide atomic operations
and locks using lr and sc instructions only.
Signed-off-by: Chao-ying Fu <cfu@mips.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Link: https://lore.kernel.org/r/20250226014727.19710-1-cfu@mips.com
Signed-off-by: Anup Patel <anup@brainfault.org>
The PXA variant of the uart8250 adds a UART Unit Enable (UUE) bit that
needs to be set to enable the XScale PXA UART. This is required for
some RISC-V SoCs, like the Spacemit K1, that implement the PXA UART.
This introduces the "intel,xscale-uart" compatible to handle setting the
UUE bit.
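A hedged sketch of what the new compatible implies (assuming the XScale
PXA layout where UUE is bit 6 of the IER register; the offsets and the
accessor here are illustrative, not the actual driver code):

```c
#include <stdint.h>

#define UART_IER_OFFSET	1		/* IER, in register-width units */
#define UART_IER_UUE	(1U << 6)	/* UART Unit Enable (PXA-specific) */

/* Sketch only: when the "intel,xscale-uart" compatible matches, make
 * sure UUE stays set while programming the UART registers. */
static void uart8250_pxa_enable(volatile uint32_t *regs)
{
	regs[UART_IER_OFFSET] |= UART_IER_UUE;
}
```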
Signed-off-by: Junhui Liu <junhui.liu@pigmoral.tech>
Reviewed-by: Anup Patel <anup@brainfault.org>
Link: https://lore.kernel.org/r/20250327-pxa-uart-support-v2-1-c4400c1fcd0b@pigmoral.tech
Signed-off-by: Anup Patel <anup@brainfault.org>
Similarly to what is done for SPELP, handle SSTATUS.SDT upon event
injection. In order to mimic an interrupt, set SDT to 1 for injection and
save its previous value in interrupted_flags[5:5]. Restore it upon
completion.
Signed-off-by: Clément Léger <cleger@rivosinc.com>
Reviewed-by: Andrew Jones <ajones@ventanamicro.com>
As raised during the ARC review, SPELP was not handled during the event
injection process. Save it as part of the interrupted flags, clear it
before injecting the event and restore it after completion.
Signed-off-by: Clément Léger <cleger@rivosinc.com>
Reviewed-by: Andrew Jones <ajones@ventanamicro.com>
For some reason, there was a pair of useless parentheses around MSTATUS_*
value usage. Remove them.
Signed-off-by: Clément Léger <cleger@rivosinc.com>
Reviewed-by: Andrew Jones <ajones@ventanamicro.com>
As raised by Andrew on the kvm-unit-test review, these flags are meant to
hold SSTATUS bits in the specification. Rename them to match that.
Signed-off-by: Clément Léger <cleger@rivosinc.com>
Reviewed-by: Andrew Jones <ajones@ventanamicro.com>
The SSE specification previously specified that read-only parameters
should return SBI_EBADRANGE, but it was recently modified to return
SBI_EDENIED.
Signed-off-by: Clément Léger <cleger@rivosinc.com>
Reviewed-by: Andrew Jones <ajones@ventanamicro.com>
This printf is mainly useful for debugging; remove it.
Signed-off-by: Clément Léger <cleger@rivosinc.com>
Reviewed-by: Andrew Jones <ajones@ventanamicro.com>
The latest specification added new high-priority RAS events and renamed
the PMU event to PMU_OVERFLOW.
Signed-off-by: Clément Léger <cleger@rivosinc.com>
Reviewed-by: Andrew Jones <ajones@ventanamicro.com>
Harts associated with an ACLINT_MSWI need not have sequential hartids,
so it is insufficient to use only first_hartid and hart_count. To account
for non-sequential hart IDs, include the empty hart IDs when generating
the hart count.
Signed-off-by: Raj Vishwanathan <Raj.Vishwanathan@gmail.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Simplify the code and improve consistency by using the new macros where
possible. sbi_hart_count() obsoletes sbi_scratch_last_hartindex().
Signed-off-by: Samuel Holland <samuel.holland@sifive.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
There is currently no helper for iterating through the harts in a
system, and code must choose between sbi_scratch_last_hartindex() and
sbi_platform_hart_count() for the loop condition.
sbi_scratch_last_hartindex() has unusual semantics, leading to the
likelihood of off-by-one errors, and sbi_platform_hart_count() is
provided by the platform and so may not be properly bounded.
Add a new helper which definitively reports the number of harts managed
by this OpenSBI instance, i.e. the number of valid hart indexes, and a
convenient iterator macro.
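A hedged sketch of the shape of such helpers (the relationship to the old
API is shown only for illustration; this is not the exact code):

```c
/* Number of valid hart indexes managed by this OpenSBI instance. */
static inline u32 sbi_hart_count(void)
{
	return sbi_scratch_last_hartindex() + 1;	/* last valid index + 1 */
}

/* Iterate over every valid hart index. */
#define sbi_for_each_hartindex(__i) \
	for (u32 __i = 0; __i < sbi_hart_count(); __i++)
```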
Signed-off-by: Samuel Holland <samuel.holland@sifive.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
The compiler generates much better code for sbi_hartindex_to_hartid()
and sbi_hartindex_to_scratch() when using a constant for the bounds
check. This works out nicely because the underlying arrays are already
a constant size, so the only change needed is to fill the remainder of
each array with the appropriate default/out-of-bounds value. The
ellipsis in the designated initializer is a GCC extension (also
supported by Clang), but avoids runtime initialization of the array.
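The range designator described above looks like this (the default value
shown is illustrative):

```c
/* Every slot is filled at compile time with an invalid hartid, so no
 * runtime initialization loop is needed for the out-of-bounds default. */
static u32 hartindex_to_hartid_table[SBI_HARTMASK_MAX_BITS] = {
	[0 ... SBI_HARTMASK_MAX_BITS - 1] = -1U,
};
```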
Signed-off-by: Samuel Holland <samuel.holland@sifive.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
The internal limit on the number of harts is SBI_HARTMASK_MAX_BITS, as
this value determines the size of various bitmaps and arrays (including
hartindex_to_hartid_table and hartindex_to_scratch_table). Clamp the
value provided by the platform, and drop the extra array element.
Update the documentation to indicate that hart_index2id must be sized
based on hart_count, and that hart indexes must be contiguous. As of
commit 5e90e54a1a53 ("lib: utils:Check that hartid is valid"), there is
no restriction on the valid hart ID values.
Signed-off-by: Samuel Holland <samuel.holland@sifive.com>
Reviewed-by: Anup Patel <anup@brainfault.org>