#
# General architecture dependent options
#

config KEXEC_CORE
	bool

config HOTPLUG_SMT
	bool

config OPROFILE
	tristate "OProfile system profiling"
	depends on PROFILING
	depends on HAVE_OPROFILE
	select RING_BUFFER
	select RING_BUFFER_ALLOW_SWAP
	help
	  OProfile is a profiling system capable of profiling the
	  whole system, including the kernel, kernel modules, libraries,
	  and applications.

	  If unsure, say N.

config OPROFILE_EVENT_MULTIPLEX
	bool "OProfile multiplexing support (EXPERIMENTAL)"
	default n
	depends on OPROFILE && X86
	help
	  The number of hardware counters is limited. The multiplexing
	  feature enables OProfile to gather more events than the
	  hardware provides counters for. This is realized by switching
	  between events at a user-specified time interval.

	  If unsure, say N.

config HAVE_OPROFILE
	bool

config OPROFILE_NMI_TIMER
	def_bool y
	depends on PERF_EVENTS && HAVE_PERF_EVENTS_NMI && !PPC64

config KPROBES
	bool "Kprobes"
	depends on MODULES
	depends on HAVE_KPROBES
	select KALLSYMS
	help
	  Kprobes allows you to trap at almost any kernel address and
	  execute a callback function. register_kprobe() establishes
	  a probepoint and specifies the callback. Kprobes is useful
	  for kernel debugging, non-intrusive instrumentation and testing.
	  If in doubt, say "N".

config JUMP_LABEL
	bool "Optimize very unlikely/likely branches"
	depends on HAVE_ARCH_JUMP_LABEL
	help
	  This option enables a transparent branch optimization that
	  makes certain almost-always-true or almost-always-false branch
	  conditions even cheaper to execute within the kernel.

	  Certain performance-sensitive kernel code, such as trace points,
	  scheduler functionality, networking code and KVM, has such
	  branches and includes support for this optimization technique.

	  If it is detected that the compiler has support for "asm goto",
	  the kernel will compile such branches with just a nop
	  instruction. When the condition flag is toggled to true, the
	  nop will be converted to a jump instruction to execute the
	  conditional block of instructions.

	  This technique lowers overhead and stress on the branch prediction
	  of the processor and generally makes the kernel faster. Updating
	  the condition is slower, but such updates are very rare.

	  ( On 32-bit x86, the necessary options added to the compiler
	    flags may increase the size of the kernel slightly. )

config STATIC_KEYS_SELFTEST
	bool "Static key selftest"
	depends on JUMP_LABEL
	help
	  Boot time self-test of the branch patching code.

config OPTPROBES
	def_bool y
	depends on KPROBES && HAVE_OPTPROBES
	depends on !PREEMPT

config KPROBES_ON_FTRACE
	def_bool y
	depends on KPROBES && HAVE_KPROBES_ON_FTRACE
	depends on DYNAMIC_FTRACE_WITH_REGS
	help
	  If function tracer is enabled and the arch supports full
	  passing of pt_regs to function tracing, then kprobes can
	  optimize on top of function tracing.

config UPROBES
	def_bool n
	help
	  Uprobes is the user-space counterpart to kprobes: it enables
	  instrumentation applications (such as 'perf probe') to
	  establish unintrusive probes in user-space binaries and
	  libraries, by executing handler functions when the probes
	  are hit by user-space applications.

	  ( These probes come in the form of single-byte breakpoints,
	    managed by the kernel and kept transparent to the probed
	    application. )

config HAVE_64BIT_ALIGNED_ACCESS
	def_bool 64BIT && !HAVE_EFFICIENT_UNALIGNED_ACCESS
	help
	  Some architectures require 64 bit accesses to be 64 bit
	  aligned, which also requires structs containing 64 bit values
	  to be 64 bit aligned too. This includes some 32 bit
	  architectures which can do 64 bit accesses, as well as 64 bit
	  architectures without unaligned access.

	  This symbol should be selected by an architecture if 64 bit
	  accesses are required to be 64 bit aligned in this way even
	  though it is not a 64 bit architecture.

	  See Documentation/unaligned-memory-access.txt for more
	  information on the topic of unaligned memory accesses.

config HAVE_EFFICIENT_UNALIGNED_ACCESS
	bool
	help
	  Some architectures are unable to perform unaligned accesses
	  without the use of get_unaligned/put_unaligned. Others are
	  unable to perform such accesses efficiently (e.g. trap on
	  unaligned access and require fixing it up in the exception
	  handler.)

	  This symbol should be selected by an architecture if it can
	  perform unaligned accesses efficiently to allow different
	  code paths to be selected for these cases. Some network
	  drivers, for example, could opt to not fix up alignment
	  problems with received packets if doing so would not help
	  much.

	  See Documentation/unaligned-memory-access.txt for more
	  information on the topic of unaligned memory accesses.

config ARCH_USE_BUILTIN_BSWAP
	bool
	help
	  Modern versions of GCC (since 4.4) have builtin functions
	  for handling byte-swapping. Using these, instead of the old
	  inline assembler that the architecture code provides in the
	  __arch_bswapXX() macros, allows the compiler to see what's
	  happening and offers more opportunity for optimisation. In
	  particular, the compiler will be able to combine the byteswap
	  with a nearby load or store and use load-and-swap or
	  store-and-swap instructions if the architecture has them. It
	  should almost *never* result in code which is worse than the
	  hand-coded assembler in <asm/swab.h>. But just in case it
	  does, the use of the builtins is optional.

	  Any architecture with load-and-swap or store-and-swap
	  instructions should set this. And it shouldn't hurt to set it
	  on architectures that don't have such instructions.

config KRETPROBES
	def_bool y
	depends on KPROBES && HAVE_KRETPROBES

config USER_RETURN_NOTIFIER
	bool
	depends on HAVE_USER_RETURN_NOTIFIER
	help
	  Provide a kernel-internal notification when a CPU is about to
	  switch to user mode.

config HAVE_IOREMAP_PROT
	bool

config HAVE_KPROBES
	bool

config HAVE_KRETPROBES
	bool

config HAVE_OPTPROBES
	bool

config HAVE_KPROBES_ON_FTRACE
	bool

config HAVE_NMI
	bool

config HAVE_NMI_WATCHDOG
	depends on HAVE_NMI
	bool
#
# An arch should select this if it provides all these things:
#
#	task_pt_regs()		in asm/processor.h or asm/ptrace.h
#	arch_has_single_step()	if there is hardware single-step support
#	arch_has_block_step()	if there is hardware block-step support
#	asm/syscall.h		supplying asm-generic/syscall.h interface
#	linux/regset.h		user_regset interfaces
#	CORE_DUMP_USE_REGSET	#define'd in linux/elf.h
#	TIF_SYSCALL_TRACE	calls tracehook_report_syscall_{entry,exit}
#	TIF_NOTIFY_RESUME	calls tracehook_notify_resume()
#	signal delivery		calls tracehook_signal_handler()
#
config HAVE_ARCH_TRACEHOOK
	bool

config HAVE_DMA_CONTIGUOUS
	bool

config GENERIC_SMP_IDLE_THREAD
	bool

config GENERIC_IDLE_POLL_SETUP
	bool

# Select if arch init_task initializer is different to init/init_task.c
config ARCH_INIT_TASK
	bool

# Select if arch has its private alloc_task_struct() function
config ARCH_TASK_STRUCT_ALLOCATOR
	bool

# Select if arch has its private alloc_thread_stack() function
config ARCH_THREAD_STACK_ALLOCATOR
	bool

# Select if arch wants to size task_struct dynamically via arch_task_struct_size:
config ARCH_WANTS_DYNAMIC_TASK_STRUCT
	bool

config HAVE_REGS_AND_STACK_ACCESS_API
	bool
	help
	  This symbol should be selected by an architecture if it supports
	  the API needed to access registers and stack entries from pt_regs,
	  declared in asm/ptrace.h.
	  For example the kprobes-based event tracer needs this API.

config HAVE_CLK
	bool
	help
	  The <linux/clk.h> calls support software clock gating and
	  thus are a key power management tool on many systems.

config HAVE_DMA_API_DEBUG
	bool

config HAVE_HW_BREAKPOINT
	bool
	depends on PERF_EVENTS

config HAVE_MIXED_BREAKPOINTS_REGS
	bool
	depends on HAVE_HW_BREAKPOINT
	help
	  Depending on the arch implementation of hardware breakpoints,
	  some of them have separate registers for data and instruction
	  breakpoint addresses, others have mixed registers to store
	  them but define the access type in a control register.
	  Select this option if your arch implements breakpoints in the
	  latter fashion.

config HAVE_USER_RETURN_NOTIFIER
	bool

config HAVE_PERF_EVENTS_NMI
	bool
	help
	  System hardware can generate an NMI using the perf event
	  subsystem. Also has support for calculating CPU cycle events
	  to determine how many clock cycles elapsed in a given period.

config HAVE_PERF_REGS
	bool
	help
	  Support selective register dumps for perf events. This includes
	  bit-mapping of each register and a unique architecture id.

config HAVE_PERF_USER_STACK_DUMP
	bool
	help
	  Support user stack dumps for perf event samples. This needs
	  access to the user stack pointer which is not unified across
	  architectures.

config HAVE_ARCH_JUMP_LABEL
	bool

config HAVE_RCU_TABLE_FREE
	bool

config ARCH_HAVE_NMI_SAFE_CMPXCHG
	bool

config HAVE_ALIGNED_STRUCT_PAGE
	bool
	help
	  This makes sure that struct pages are double word aligned and that
	  e.g. the SLUB allocator can perform double word atomic operations
	  on a struct page for better performance. However selecting this
	  might increase the size of a struct page by a word.

config HAVE_CMPXCHG_LOCAL
	bool

config HAVE_CMPXCHG_DOUBLE
	bool

config ARCH_WANT_IPC_PARSE_VERSION
	bool

config ARCH_WANT_COMPAT_IPC_PARSE_VERSION
	bool

config ARCH_WANT_OLD_COMPAT_IPC
	select ARCH_WANT_COMPAT_IPC_PARSE_VERSION
	bool

config HAVE_ARCH_SECCOMP_FILTER
	bool
	help
	  An arch should select this symbol if it provides all of these things:
	  - syscall_get_arch()
	  - syscall_get_arguments()
	  - syscall_rollback()
	  - syscall_set_return_value()
	  - SIGSYS siginfo_t support
	  - secure_computing is called from a ptrace_event()-safe context
	  - secure_computing return value is checked and a return value of -1
	    results in the system call being skipped immediately.
	  - seccomp syscall wired up

config SECCOMP_FILTER
	def_bool y
	depends on HAVE_ARCH_SECCOMP_FILTER && SECCOMP && NET
	help
	  Enable tasks to build secure computing environments defined
	  in terms of Berkeley Packet Filter programs which implement
	  task-defined system call filtering policies.

	  See Documentation/prctl/seccomp_filter.txt for details.

config HAVE_GCC_PLUGINS
	bool
	help
	  An arch should select this symbol if it supports building with
	  GCC plugins.

menuconfig GCC_PLUGINS
	bool "GCC plugins"
	depends on HAVE_GCC_PLUGINS
	depends on !COMPILE_TEST
	help
	  GCC plugins are loadable modules that provide extra features to the
	  compiler. They are useful for runtime instrumentation and static analysis.

	  See Documentation/gcc-plugins.txt for details.

config GCC_PLUGIN_CYC_COMPLEXITY
	bool "Compute the cyclomatic complexity of a function"
	depends on GCC_PLUGINS
	help
	  The complexity M of a function's control flow graph is defined as:
	   M = E - N + 2P
	  where

	  E = the number of edges
	  N = the number of nodes
	  P = the number of connected components (exit nodes).

	  For example, a function whose body is a single if/else has
	  E = 4 edges, N = 4 nodes and P = 1 component, giving
	  M = 4 - 4 + 2 = 2.

config GCC_PLUGIN_SANCOV
	bool
	depends on GCC_PLUGINS
	help
	  This plugin inserts a __sanitizer_cov_trace_pc() call at the start of
	  basic blocks. It supports all gcc versions with plugin support (from
	  gcc-4.5 on). It is based on the commit "Add fuzzing coverage support"
	  by Dmitry Vyukov <dvyukov@google.com>.

config GCC_PLUGIN_LATENT_ENTROPY
	bool "Generate some entropy during boot and runtime"
	depends on GCC_PLUGINS
	help
	  By saying Y here the kernel will instrument some kernel code to
	  extract some entropy from both original and artificially created
	  program state. This will especially help embedded systems where
	  there is normally little 'natural' source of entropy. The cost
	  is some slowdown of the boot process (about 0.5%) and of fork and
	  irq processing.

	  Note that entropy extracted this way is not cryptographically
	  secure!

	  This plugin was ported from grsecurity/PaX. More information at:
	   * https://grsecurity.net/
	   * https://pax.grsecurity.net/

config HAVE_CC_STACKPROTECTOR
	bool
	help
	  An arch should select this symbol if:
	  - its compiler supports the -fstack-protector option
	  - it has implemented a stack canary (e.g. __stack_chk_guard)

config CC_STACKPROTECTOR
	def_bool n
	help
	  Set when a stack-protector mode is enabled, so that the build
	  can enable kernel-side support for the GCC feature.

choice
	prompt "Stack Protector buffer overflow detection"
	depends on HAVE_CC_STACKPROTECTOR
	default CC_STACKPROTECTOR_NONE
	help
	  This option turns on the "stack-protector" GCC feature. This
	  feature puts, at the beginning of functions, a canary value on
	  the stack just before the return address, and validates
	  the value just before actually returning. Stack based buffer
	  overflows (that need to overwrite this return address) now also
	  overwrite the canary, which gets detected and the attack is then
	  neutralized via a kernel panic.

config CC_STACKPROTECTOR_NONE
	bool "None"
	help
	  Disable "stack-protector" GCC feature.

config CC_STACKPROTECTOR_REGULAR
	bool "Regular"
	select CC_STACKPROTECTOR
	help
	  Functions will have the stack-protector canary logic added if they
	  have an 8-byte or larger character array on the stack.

	  This feature requires gcc version 4.2 or above, or a distribution
	  gcc with the feature backported ("-fstack-protector").

	  On an x86 "defconfig" build, this feature adds canary checks to
	  about 3% of all kernel functions, which increases kernel code size
	  by about 0.3%.

config CC_STACKPROTECTOR_STRONG
	bool "Strong"
	help
	  This config increases stack usage; it has been replaced by
	  CC_STACKPROTECTOR_STRONG_AMLOGIC.

config CC_STACKPROTECTOR_STRONG_AMLOGIC
	bool "Strong"
	select CC_STACKPROTECTOR
	help
	  Functions will have the stack-protector canary logic added in any
	  of the following conditions:

	  - local variable's address used as part of the right hand side of an
	    assignment or function argument
	  - local variable is an array (or union containing an array),
	    regardless of array type or length
	  - uses register local variables

	  This feature requires gcc version 4.9 or above, or a distribution
	  gcc with the feature backported ("-fstack-protector-strong").

	  On an x86 "defconfig" build, this feature adds canary checks to
	  about 20% of all kernel functions, which increases the kernel code
	  size by about 2%.

endchoice

config THIN_ARCHIVES
	bool
	help
	  Select this if the architecture wants to use thin archives
	  instead of ld -r to create the built-in.o files.

config LD_DEAD_CODE_DATA_ELIMINATION
	bool
	help
	  Select this if the architecture wants to do dead code and
	  data elimination with the linker by compiling with
	  -ffunction-sections -fdata-sections and linking with
	  --gc-sections.

	  This requires that the arch annotates or otherwise protects
	  its external entry points from being discarded. Linker scripts
	  must also merge .text.*, .data.*, and .bss.* correctly into
	  output sections. Care must be taken not to pull in unrelated
	  sections (e.g., '.text.init'). Typically '.' in section names
	  is used to distinguish them from label names / C identifiers.

config LTO
	def_bool n

config ARCH_SUPPORTS_LTO_CLANG
	bool
	help
	  An architecture should select this option if it supports:
	  - compiling with clang,
	  - compiling inline assembly with clang's integrated assembler,
	  - and linking with either lld or GNU gold w/ LLVMgold.

choice
	prompt "Link-Time Optimization (LTO) (EXPERIMENTAL)"
	default LTO_NONE
	help
	  This option turns on Link-Time Optimization (LTO).

config LTO_NONE
	bool "None"

config LTO_CLANG
	bool "Use clang Link Time Optimization (LTO) (EXPERIMENTAL)"
	depends on ARCH_SUPPORTS_LTO_CLANG
	depends on !FTRACE_MCOUNT_RECORD || HAVE_C_RECORDMCOUNT
	depends on !KASAN
	select LTO
	select THIN_ARCHIVES
	select LD_DEAD_CODE_DATA_ELIMINATION
	help
	  This option enables clang's Link Time Optimization (LTO), which allows
	  the compiler to optimize the kernel globally at link time. If you
	  enable this option, the compiler generates LLVM IR instead of object
	  files, and the actual compilation from IR occurs at the LTO link step,
	  which may take several minutes.

	  If you select this option, you must compile the kernel with clang >=
	  5.0 (make CC=clang) and GNU gold from binutils >= 2.27, and have the
	  LLVMgold plug-in in LD_LIBRARY_PATH.

endchoice

config CFI
	bool

config CFI_PERMISSIVE
	bool "Use CFI in permissive mode"
	depends on CFI
	help
	  When selected, Control Flow Integrity (CFI) violations result in a
	  warning instead of a kernel panic. This option is useful for finding
	  CFI violations in drivers during development.

config CFI_CLANG
	bool "Use clang Control Flow Integrity (CFI) (EXPERIMENTAL)"
	depends on LTO_CLANG
	depends on KALLSYMS
	select CFI
	help
	  This option enables clang Control Flow Integrity (CFI), which adds
	  runtime checking for indirect function calls.

config CFI_CLANG_SHADOW
	bool "Use CFI shadow to speed up cross-module checks"
	default y
	depends on CFI_CLANG
	help
	  If you select this option, the kernel builds a fast look-up table of
	  CFI check functions in loaded modules to reduce overhead.

config HAVE_ARCH_WITHIN_STACK_FRAMES
	bool
	help
	  An architecture should select this if it can walk the kernel stack
	  frames to determine if an object is part of either the arguments
	  or local variables (i.e. that it excludes saved return addresses,
	  and similar) by implementing an inline arch_within_stack_frames(),
	  which is used by CONFIG_HARDENED_USERCOPY.

config HAVE_CONTEXT_TRACKING
	bool
	help
	  Provide kernel/user boundary probes necessary for subsystems
	  that need it, such as userspace RCU extended quiescent state.
	  Syscalls need to be wrapped inside user_exit()-user_enter() through
	  the slow path using the TIF_NOHZ flag. Exception handlers must be
	  wrapped as well. Irqs are already protected inside
	  rcu_irq_enter/rcu_irq_exit() but preemption or signal handling on
	  irq exit still need to be protected.

config HAVE_VIRT_CPU_ACCOUNTING
	bool

config HAVE_VIRT_CPU_ACCOUNTING_GEN
	bool
	default y if 64BIT
	help
	  With VIRT_CPU_ACCOUNTING_GEN, cputime_t becomes 64-bit.
	  Before enabling this option, arch code must be audited
	  to ensure there are no races in concurrent read/write of
	  cputime_t. For example, reading/writing 64-bit cputime_t on
	  some 32-bit arches may require multiple accesses, so proper
	  locking is needed to protect against concurrent accesses.

config HAVE_IRQ_TIME_ACCOUNTING
	bool
	help
	  Archs need to ensure they use a high enough resolution clock to
	  support irq time accounting and then call enable_sched_clock_irqtime().

config HAVE_ARCH_TRANSPARENT_HUGEPAGE
	bool

config HAVE_ARCH_HUGE_VMAP
	bool

config HAVE_ARCH_SOFT_DIRTY
	bool

config HAVE_MOD_ARCH_SPECIFIC
	bool
	help
	  The arch uses struct mod_arch_specific to store data. Many arches
	  just need a simple module loader without arch specific data - those
	  should not enable this.

config MODULES_USE_ELF_RELA
	bool
	help
	  Modules only use ELF RELA relocations. Modules with ELF REL
	  relocations will give an error.

config MODULES_USE_ELF_REL
	bool
	help
	  Modules only use ELF REL relocations. Modules with ELF RELA
	  relocations will give an error.

config HAVE_UNDERSCORE_SYMBOL_PREFIX
	bool
	help
	  Some architectures generate an _ in front of C symbols; things like
	  module loading and assembly files need to know about this.

config HAVE_IRQ_EXIT_ON_IRQ_STACK
	bool
	help
	  The architecture executes not only the irq handler on the irq
	  stack but also irq_exit(). This way we can process softirqs on
	  this irq stack instead of switching to a new one when we call
	  __do_softirq() at the end of a hardirq.
	  This spares a stack switch and improves cache usage on softirq
	  processing.

config PGTABLE_LEVELS
	int
	default 2

config ARCH_HAS_ELF_RANDOMIZE
	bool
	help
	  An architecture supports choosing randomized locations for
	  stack, mmap, brk, and ET_DYN. Defined functions:
	  - arch_mmap_rnd()
	  - arch_randomize_brk()

config HAVE_ARCH_MMAP_RND_BITS
	bool
	help
	  An arch should select this symbol if it supports setting a variable
	  number of bits for use in establishing the base address for mmap
	  allocations, has MMU enabled and provides values for both:
	  - ARCH_MMAP_RND_BITS_MIN
	  - ARCH_MMAP_RND_BITS_MAX

config HAVE_EXIT_THREAD
	bool
	help
	  An architecture implements exit_thread.

config ARCH_MMAP_RND_BITS_MIN
	int

config ARCH_MMAP_RND_BITS_MAX
	int

config ARCH_MMAP_RND_BITS_DEFAULT
	int

config ARCH_MMAP_RND_BITS
	int "Number of bits to use for ASLR of mmap base address" if EXPERT
	range ARCH_MMAP_RND_BITS_MIN ARCH_MMAP_RND_BITS_MAX
	default ARCH_MMAP_RND_BITS_DEFAULT if ARCH_MMAP_RND_BITS_DEFAULT
	default ARCH_MMAP_RND_BITS_MIN
	depends on HAVE_ARCH_MMAP_RND_BITS
	help
	  This value can be used to select the number of bits to use to
	  determine the random offset to the base address of vma regions
	  resulting from mmap allocations. This value will be bounded
	  by the architecture's minimum and maximum supported values.

	  This value can be changed after boot using the
	  /proc/sys/vm/mmap_rnd_bits tunable.

config HAVE_ARCH_MMAP_RND_COMPAT_BITS
	bool
	help
	  An arch should select this symbol if it supports running applications
	  in compatibility mode, supports setting a variable number of bits for
	  use in establishing the base address for mmap allocations, has MMU
	  enabled and provides values for both:
	  - ARCH_MMAP_RND_COMPAT_BITS_MIN
	  - ARCH_MMAP_RND_COMPAT_BITS_MAX

config ARCH_MMAP_RND_COMPAT_BITS_MIN
	int

config ARCH_MMAP_RND_COMPAT_BITS_MAX
	int

config ARCH_MMAP_RND_COMPAT_BITS_DEFAULT
	int

config ARCH_MMAP_RND_COMPAT_BITS
	int "Number of bits to use for ASLR of mmap base address for compatible applications" if EXPERT
	range ARCH_MMAP_RND_COMPAT_BITS_MIN ARCH_MMAP_RND_COMPAT_BITS_MAX
	default ARCH_MMAP_RND_COMPAT_BITS_DEFAULT if ARCH_MMAP_RND_COMPAT_BITS_DEFAULT
	default ARCH_MMAP_RND_COMPAT_BITS_MIN
	depends on HAVE_ARCH_MMAP_RND_COMPAT_BITS
	help
	  This value can be used to select the number of bits to use to
	  determine the random offset to the base address of vma regions
	  resulting from mmap allocations for compatible applications. This
	  value will be bounded by the architecture's minimum and maximum
	  supported values.

	  This value can be changed after boot using the
	  /proc/sys/vm/mmap_rnd_compat_bits tunable.

config HAVE_COPY_THREAD_TLS
	bool
	help
	  Architecture provides copy_thread_tls to accept tls argument via
	  normal C parameter passing, rather than extracting the syscall
	  argument from pt_regs.

config HAVE_STACK_VALIDATION
	bool
	help
	  Architecture supports the 'objtool check' host tool command, which
	  performs compile-time stack metadata validation.

config HAVE_ARCH_HASH
	bool
	default n
	help
	  If this is set, the architecture provides an <asm/hash.h>
	  file which provides platform-specific implementations of some
	  functions in <linux/hash.h> or fs/namei.c.

config ISA_BUS_API
	def_bool ISA

#
# ABI hall of shame
#
config CLONE_BACKWARDS
	bool
	help
	  Architecture has tls passed as the 4th argument of clone(2),
	  not the 5th one.

config CLONE_BACKWARDS2
	bool
	help
	  Architecture has the first two arguments of clone(2) swapped.

config CLONE_BACKWARDS3
	bool
	help
	  Architecture has tls passed as the 3rd argument of clone(2),
	  not the 5th one.

config ODD_RT_SIGACTION
	bool
	help
	  Architecture has unusual rt_sigaction(2) arguments.

config OLD_SIGSUSPEND
	bool
	help
	  Architecture has old sigsuspend(2) syscall, of one-argument variety.

config OLD_SIGSUSPEND3
	bool
	help
	  Even weirder antique ABI - three-argument sigsuspend(2).

config OLD_SIGACTION
	bool
	help
	  Architecture has old sigaction(2) syscall. Nope, not the same
	  as OLD_SIGSUSPEND | OLD_SIGSUSPEND3 - alpha has sigsuspend(2),
	  but fairly different variant of sigaction(2), thanks to OSF/1
	  compatibility...

config COMPAT_OLD_SIGACTION
	bool

config ARCH_NO_COHERENT_DMA_MMAP
	bool

config CPU_NO_EFFICIENT_FFS
	def_bool n

config HAVE_ARCH_VMAP_STACK
	def_bool n
	help
	  An arch should select this symbol if it can support kernel stacks
	  in vmalloc space. This means:

	  - vmalloc space must be large enough to hold many kernel stacks.
	    This may rule out many 32-bit architectures.

	  - Stacks in vmalloc space need to work reliably. For example, if
	    vmap page tables are created on demand, either this mechanism
	    needs to work while the stack points to a virtual address with
	    unpopulated page tables or arch code (switch_to() and switch_mm(),
	    most likely) needs to ensure that the stack's page table entries
	    are populated before running on a possibly unpopulated stack.

	  - If the stack overflows into a guard page, something reasonable
	    should happen. The definition of "reasonable" is flexible, but
	    instantly rebooting without logging anything would be unfriendly.

config VMAP_STACK
	default y
	bool "Use a virtually-mapped stack"
	depends on HAVE_ARCH_VMAP_STACK && !KASAN
	---help---
	  Enable this if you want to use virtually-mapped kernel stacks
	  with guard pages. This causes kernel stack overflows to be
	  caught immediately rather than causing difficult-to-diagnose
	  corruption.

	  This is presently incompatible with KASAN because KASAN expects
	  the stack to map directly to the KASAN shadow map using a formula
	  that is incorrect if the stack is in vmalloc space.

source "kernel/gcov/Kconfig"