This is a fuzzing PR, with a testcase involving a SHF_ALLOC and
SHF_COMPRESSED SHT_RELA section, i.e. a compressed dynamic reloc
section. BFD doesn't handle compressed relocation sections: most of
the code reading relocs uses sh_size (often no bfd section is
created), but in the case of SHF_ALLOC dynamic relocs we had some code
using the bfd section size. This led to a mismatch (sh_size is
compressed, the section size is uncompressed) and from that a use of
uninitialised memory. Consistently using sh_size is enough to fix this
PR, but I've also added checks to exclude SHF_COMPRESSED reloc
sections from consideration.
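Roughly speaking the checks look like the following sketch (not the
exact elf.c change; "hdr" stands for the reloc section's
Elf_Internal_Shdr):

  /* Sketch only: skip compressed reloc sections when counting dynamic
     relocs, and size the count from sh_size, which is what the
     reloc-reading code uses, rather than from the bfd section size.  */
  if ((hdr->sh_flags & SHF_COMPRESSED) != 0)
    continue;			/* BFD can't read these relocs.  */
  count = NUM_SHDR_ENTRIES (hdr);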
PR 30362
* elf.c (bfd_section_from_shdr): Exclude reloc sections with
SHF_COMPRESSED flag from normal reloc processing.
(_bfd_elf_get_dynamic_reloc_upper_bound): Similarly exclude
SHF_COMPRESSED sections from consideration. Use sh_size when
sizing to match slurp_relocs.
(_bfd_elf_canonicalize_dynamic_reloc): Likewise.
(_bfd_elf_get_synthetic_symtab): Use NUM_SHDR_ENTRIES to size
plt relocs.
* elf32-arm.c (elf32_arm_get_synthetic_symtab): Likewise.
* elf32-ppc.c (ppc_elf_get_synthetic_symtab): Likewise.
* elf64-ppc.c (ppc64_elf_get_synthetic_symtab): Likewise.
* elfxx-mips.c (_bfd_mips_elf_get_synthetic_symtab): Likewise.
Except it isn't out of bounds because space for a larger array has
been allocated.
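The flagged access is to a trailing array that is deliberately
over-allocated at run time; making it a C99 flexible array member
expresses that directly. A rough sketch of the shape, with
illustrative names rather than the exact dwarf2.c declarations:

  /* Before: a nominally fixed-size trailing array, over-allocated at
     run time, which a checker can flag as out of bounds.  After: a
     flexible array member sized explicitly at allocation time.  */
  struct trie_leaf_sketch
  {
    unsigned int num_stored_in_leaf;
    struct { bfd_vma low_pc, high_pc; } ranges[];   /* flexible array  */
  };

  size_t amt = (offsetof (struct trie_leaf_sketch, ranges)
		+ num_ranges * sizeof (((struct trie_leaf_sketch *) 0)->ranges[0]));
  struct trie_leaf_sketch *leaf = bfd_zalloc (abfd, amt);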
* dwarf2.c (struct trie_leaf): Make ranges a C99 flexible array.
(alloc_trie_leaf, insert_arange_in_trie): Adjust sizing.
If pe_ILF_object_p succeeds, pe_ILF_build_a_bfd will have changed the
bfd from being file backed to in-memory. This can have unfortunate
results for targets checked by bfd_check_format_matches after that
point as they will be matching against the created in-memory image
rather than the file. bfd_preserve_restore also has a problem if it
flips the BFD_IN_MEMORY flag, because the flag affects iostream
meaning and should be set if using _bfd_memory_iovec. To fix these
problems, save and restore iostream and iovec along with flags, and
modify bfd_reinit to make the bfd file backed again. Restoring the
iovec and iostream allows the hack in bfd_reinit keeping BFD_IN_MEMORY
(part of BFD_FLAGS_SAVED) to be removed.
One more detail: If restoring from file backed to in-memory then the
bfd needs to be forcibly removed from the cache lru list, since after
the bfd becomes in-memory a bfd_close will delete the bfd's memory
leaving the lru list pointing into freed memory.
* cache.c (bfd_cache_init): Clear BFD_CLOSED_BY_CACHE here..
(bfd_cache_lookup_worker): ..rather than here.
(bfd_cache_close): Comment.
* format.c (struct bfd_preserve): Add iovec and iostream fields.
(bfd_preserve_save): Save them..
(bfd_preserve_restore): ..and restore them, calling
bfd_cache_close if the iovec differs.
(bfd_reinit): Add preserve param. If the bfd has been flipped
to in-memory, reopen the file. Restore flags.
* peicode.h (pe_ILF_cleanup): New function.
(pe_ILF_object_p): Return it.
* bfd.c (BFD_FLAGS_SAVED): Delete.
* bfd-in2.h: Regenerate.
The bfd_elf_hash loop is taken straight from the sysV document, but it
is poorly optimized. This refactoring removes about 5 x86 insns from
the 15 insn loop.
1) The if (..) is meaningless -- we're xoring with that value, and of
course xor 0 is a nop. On x86 (at least) we actually compute the xor'd
value and then cmov. Removing the if test removes the cmov.
2) The 'h ^ g' to clear the top 4 bits is not needed, as those 4 bits
will be shifted out in the next iteration. All we need to do is sink
a mask of those 4 bits out of the loop.
3) anding with 0xf0 after shifting right by 24 bits can allow better
encoding on RISC ISAs than masking with '0xf0 << 24' before shifting;
RISC ISAs often need extra instructions to materialize larger constants.
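For reference, the classic loop and a refactoring along the lines
described above; a sketch of the transformation, not necessarily
character-for-character the committed code:

  /* Classic SysV ABI hash loop.  */
  unsigned long
  sysv_elf_hash (const char *namearg)
  {
    const unsigned char *name = (const unsigned char *) namearg;
    unsigned long h = 0, g = 0;

    while (*name)
      {
	h = (h << 4) + *name++;
	if ((g = h & 0xf0000000) != 0)
	  h ^= g >> 24;
	h &= ~g;
      }
    return h;
  }

  /* Refactored: no conditional, no in-loop clearing of the top nibble,
     a 32-bit type so the shifted-out bits vanish on their own, the
     small 0xf0 mask applied after the right shift, and a single final
     mask sunk out of the loop.  */
  unsigned long
  refactored_elf_hash (const char *namearg)
  {
    const unsigned char *name = (const unsigned char *) namearg;
    uint32_t h = 0;

    while (*name)
      {
	h = (h << 4) + *name++;
	h ^= (h >> 24) & 0xf0;
      }
    return h & 0x0fffffff;
  }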
bfd/
* elf.c (bfd_elf_hash): Refactor to optimize loop.
(bfd_elf_gnu_hash): Refactor to use 32-bit type.
The documentation said for 'info inferiors' and 'info connections'
that the argument could be 'a space separated list of inferior
numbers', which is correct but incomplete. In fact the arguments can
be any space separated combination of numbers and (ascending) ranges.
The beginning of the section now describes the ID list as a new keyword.
Co-Authored-By: Christina Schimpe <christina.schimpe@intel.com>
Spotted some code in print_one_breakpoint_location that was not
indented correctly, this commit just changes the indentation.
There should be no user visible changes after this commit.
On amd64 (at least) if a user sets a watchpoint before the inferior
has started then GDB will assume that a hardware watchpoint can be
created.
When the inferior starts there is a chance that the watchpoint can't
actually be created as a hardware watchpoint, in which case (currently)
GDB will silently convert the watchpoint to a software watchpoint.
Here's an example session:
(gdb) p sizeof var
$1 = 4000
(gdb) watch var
Hardware watchpoint 1: var
(gdb) info watchpoints
Num Type Disp Enb Address What
1 hw watchpoint keep y var
(gdb) starti
Starting program: /home/andrew/tmp/watch
Program stopped.
0x00007ffff7fd3110 in _start () from /lib64/ld-linux-x86-64.so.2
(gdb) info watchpoints
Num Type Disp Enb Address What
1 watchpoint keep y var
(gdb)
Notice that before the `starti` command the watchpoint is showing as a
hardware watchpoint, but afterwards it is showing as a software
watchpoint. Additionally, note that we clearly told the user we
created a hardware watchpoint:
(gdb) watch var
Hardware watchpoint 1: var
I think this is bad. I used `starti`, but if the user did `start` or
even `run` then the inferior is going to be _very_ slow, which will be
unexpected -- after all, we clearly told the user that we created a
hardware watchpoint, and the manual clearly says that hardware
watchpoints are fast (at least compared to s/w watchpoints).
In this patch I propose adding a new warning which will be emitted
when GDB downgrades a h/w watchpoint to s/w. The session now looks
like this:
(gdb) p sizeof var
$1 = 4000
(gdb) watch var
Hardware watchpoint 1: var
(gdb) info watchpoints
Num Type Disp Enb Address What
1 hw watchpoint keep y var
(gdb) starti
Starting program: /home/andrew/tmp/watch
warning: watchpoint 1 downgraded to software watchpoint
Program stopped.
0x00007ffff7fd3110 in _start () from /lib64/ld-linux-x86-64.so.2
(gdb) info watchpoints
Num Type Disp Enb Address What
1 watchpoint keep y var
(gdb)
The important line is:
warning: watchpoint 1 downgraded to software watchpoint
It's not much, but hopefully it will be enough to indicate to the user
that something unexpected has occurred, and hopefully, they will not
be surprised when the inferior runs much slower than they expected.
I've added an amd64 only test in gdb.arch/, I didn't want to try
adding this as a global test as other architectures might be able to
support the watchpoint request in h/w.
Also the test is skipped for extended-remote boards as there's a
different set of options for limiting hardware watchpoints on remote
targets, and this test isn't about them.
Reviewed-By: Lancelot Six <lancelot.six@amd.com>
I was seeing some failures in gdb.threads/omp-par-scope.exp when run
on a riscv64 target. It turns out the cause of the problem is that I
didn't have debug information installed for libgomp.so, which this
test makes use of. The test requires GDB to backtrace through a
libgomp function, and the riscv prologue unwinder was failing to
unwind this particular stack frame.
The reason for the failure to unwind was that the function prologue
includes a c.li (compressed load immediate) instruction, and the riscv
prologue scanning unwinder doesn't know what to do with this
instruction, though the unwinder does understand c.lui (compressed
load upper immediate).
This commit adds support for c.li. After this GDB is able to unwind
through libgomp, and I no longer see any unexpected failures in
gdb.threads/omp-par-scope.exp.
I've also included a new test in gdb.arch/ which specifically checks
for our c.li support.
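For reference, c.li is a 16-bit CI-format instruction (funct3 0b010 in
compressed quadrant 1). A rough sketch of decoding it, not GDB's
actual riscv-tdep.c decoder, which has its own instruction-decoding
framework:

  /* Sketch: return 1 and fill *RD / *IMM if INSN is a c.li encoding.  */
  static int
  decode_c_li (uint16_t insn, int *rd, int32_t *imm)
  {
    if ((insn & 0x3) != 0x1		   /* quadrant 1  */
	|| ((insn >> 13) & 0x7) != 0x2)	   /* funct3 == 0b010  */
      return 0;
    *rd = (insn >> 7) & 0x1f;
    /* imm[5] is bit 12, imm[4:0] are bits 6:2; sign-extend from 6 bits.  */
    int32_t val = (((insn >> 12) & 0x1) << 5) | ((insn >> 2) & 0x1f);
    *imm = (val ^ 0x20) - 0x20;
    return 1;
  }

The prologue scanner can then treat "c.li rd, imm" like "li rd, imm":
record that register rd now holds a known constant.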
Unfortunately MinGW doesn't support std::future yet, so this causes the
build to fail. Use GDB's version which provides a fallback for this case.
Tested for regressions on native aarch64-linux.
Approved-By: Tom Tromey <tromey@adacore.com>
PR win32/30255 points out that a call to a NULL function pointer will
leave gdb unable to "bt" on Windows.
I tracked this down to the amd64 windows unwinder. If we treat this
scenario as if it were a leaf function, unwinding works fine.
I'm not completely sure this patch is the best way. I considered
having it check for 'pc==0' -- but then I figured this could affect
any inaccessible PC, not just the special 0 value.
No test case because I can't run dejagnu tests on Windows. I tested
this by hand using the test case in the bug.
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=30255
This is a bug in the intersection of two obscure options (which cannot
even be invoked from ld) with a feature added to stop repeatedly
linking the same input file from crashing the linker.
The latter fix involved tracking input files (internally to libctf) not
just with their input CU name but with a version of their input CU name
that was augmented with a numeric suffix if their linker input file name
was changed, to prevent distinct CTF dicts with the same cuname from
overwriting each other. (We can't use just the linker input file name
because one linker input can contain many CU dicts, particularly under
ld -r). If these inputs then produced conflicting types, those types
were emitted into similarly-named output dicts, so we needed similar
machinery to detect clashing output dicts and add a numeric suffix to
them as well.
This works fine, except that if you used the cu-mapping feature to force
double-linking of CTF (so that your CTF can be grouped into output dicts
larger than a single translation unit) and then also used
CTF_LINK_EMPTY_CU_MAPPINGS to force every possible output dict in the
mapping to be created (even if empty), we did the creation of empty dicts
first, and then all the actual content got considered to be a clash. So
you ended up with a pile of useless empty dicts and then all the content
was in full dicts with the same names suffixed with a #0. This seems
likely to confuse consumers that use this facility.
Fixed by generating all the EMPTY_CU_MAPPINGS empty dicts after linking
is complete, not before it runs.
No impact on ld, which does not do cu-mapped links or pass
CTF_LINK_EMPTY_CU_MAPPINGS to ctf_link().
libctf/
* ctf-link.c (ctf_create_per_cu): Don't create new dicts if one
already exists and we are making one for no input in particular.
(ctf_link): Emit empty CTF dicts corresponding to no input in
particular only after linking is complete.
CTF dicts have per-dict errno values: as with other errno values these
are set on error and left unchanged on success. This means that all
errors *must* set the CTF errno: if a call leaves it unchanged, the
caller is apt to find a previous, lingering error and misinterpret
it as the real error.
There are many places in libctf where we carry out operations on parent
dicts as a result of carrying out other user-requested operations on
child dicts (e.g. looking up information on a pointer to a type will
look up the type as well: the pointer might well be in a child and the
type it's a pointer to in the parent). Those operations on the parent
might fail; if they do, the error must be correctly reflected on the
child that the user-visible operation was carried out on. In many
places this was not happening.
So, audit and fix all those places. Add tests for as many of those
cases as possible so they don't regress.
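The propagation idiom involved looks roughly like this (a sketch:
ctf_errno is the public per-dict error accessor, ctf_set_errno the
internal setter; the wrapped lookup and the function name here are
purely illustrative):

  /* OFP is the dict the user called into (possibly a child); FP is the
     dict the inner operation actually ran against (possibly the
     parent).  On failure, reflect the error on OFP rather than leaving
     OFP's errno stale.  */
  static ctf_id_t
  resolve_on_behalf_of (ctf_dict_t *ofp, ctf_dict_t *fp, ctf_id_t type)
  {
    if ((type = ctf_type_resolve (fp, type)) == CTF_ERR)
      {
	ctf_set_errno (ofp, ctf_errno (fp));
	return CTF_ERR;
      }
    return type;
  }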
libctf/
* ctf-create.c (ctf_add_slice): Use the original dict.
* ctf-lookup.c (ctf_lookup_variable): Propagate errors.
(ctf_lookup_symbol_idx): Likewise.
* ctf-types.c (ctf_member_next): Likewise.
(ctf_type_resolve_unsliced): Likewise.
(ctf_type_aname): Likewise.
(ctf_member_info): Likewise.
(ctf_type_rvisit): Likewise.
(ctf_func_type_info): Set the error on the right dict.
(ctf_type_encoding): Use the original dict.
* testsuite/libctf-writable/error-propagation.*: New test.
The newly-introduced libctf-lookup unnamed-field-info test checks
C compiler-observed field offsets against libctf-computed ones
by #including the testcase in the lookup runner as well as
generating CTF for it. This only works if the host, on which
the lookup runner is compiled and executed, is the same architecture as
the target, for which the CTF is generated: when crossing, the trick
may fail.
So pass down an indication of whether this is a cross into the
testsuite, and add a new no_cross flag to .lk files that is used to
suppress test execution when a cross-compiler is being tested.
libctf/
* Makefile.am (check_DEJAGNU): Pass down TEST_CROSS.
* Makefile.in: Regenerated.
* testsuite/lib/ctf-lib.exp (run_lookup_test): Use it to
implement the new no_cross option.
* testsuite/libctf-lookup/unnamed-field-info.lk: Mark as
no_cross.
In an experiment I'm trying, I needed Ada symbol cache entries to be
allocated with 'new'. This patch reimplements the symbol cache to use
the libiberty hash table and to use new and delete. A couple of other
minor cleanups are done.
Whenever we start gdb in the testsuite, we have the rather verbose:
...
$ gdb
GNU gdb (GDB) 14.0.50.20230405-git
Copyright (C) 2023 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
Type "show copying" and "show warranty" for details.
This GDB was configured as "x86_64-pc-linux-gnu".
Type "show configuration" for configuration details.
For bug reporting instructions, please see:
<https://www.gnu.org/software/gdb/bugs/>.
Find the GDB manual and other documentation resources online at:
<http://www.gnu.org/software/gdb/documentation/>.
For help, type "help".
Type "apropos word" to search for commands related to "word".
(gdb)
...
This makes gdb.log longer than necessary and harder to read.
We do need to test that the output is produced, but that should be limited to
one or a few test-cases.
Fix this by adding -q to INTERNAL_GDBFLAGS, such that we simply have:
...
$ gdb -q
(gdb)
...
Tested on x86_64-linux.
For test-case gdb.arch/amd64-disp-step-self-call.exp I get:
...
gdb compile failed, ld: warning: amd64-disp-step-self-call0.o: \
missing .note.GNU-stack section implies executable stack
ld: NOTE: This behaviour is deprecated and will be removed in a future \
version of the linker
...
Fix this by adding the missing .note.GNU-stack.
Likewise for gdb.arch/i386-disp-step-self-call.exp.
Tested on x86_64-linux.
This commit:
commit cf141dd8ccd36efe833aae3ccdb060b517cc1112
Date: Wed Feb 22 12:15:34 2023 +0000
gdb: fix reg corruption from displaced stepping on amd64
Added two test scripts gdb.arch/amd64-disp-step-self-call.exp and
gdb.arch/i386-disp-step-self-call.exp. These scripts contained a test
that included a stack address in the test name, this makes it harder
to compare results between runs.
This commit gives the tests proper names that don't include an
address.
Also in gdb.arch/i386-disp-step-self-call.exp I noticed that we were
writing 8-bytes rather than 4 in order to clear the return address
entry on the stack. This is also fixed in this commit.
This changes apply_ext_lang_type_printers to use unique_xmalloc_ptr,
removing some manual memory management. Regression tested on x86-64
Fedora 36.
Approved-By: Simon Marchi <simon.marchi@efficios.com>
Clang with LTO (clang -flto) garbage collects unused global variables.
Thus, gdb.base/align-c.exp and gdb.base/align-c++.exp fail with
hundreds of FAILs like so:
$ make check \
TESTS="gdb.*/align-*.exp" \
RUNTESTFLAGS="CC_FOR_TARGET='clang -flto' CXX_FOR_TARGET='clang++ -flto'"
...
FAIL: gdb.base/align-c.exp: get integer valueof "a_char"
FAIL: gdb.base/align-c.exp: print _Alignof(char)
FAIL: gdb.base/align-c.exp: get integer valueof "a_char_x_char"
FAIL: gdb.base/align-c.exp: print _Alignof(struct align_pair_char_x_char)
FAIL: gdb.base/align-c.exp: get integer valueof "a_char_x_unsigned_char"
...
AIX GCC has the same issue, and there the easier way of adding
__attribute__((used)) to globals does not help.
So add explicit uses of all globals to the generated code.
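One way to express such an explicit use is sketched below, with
illustrative names; the real generator emits an entry for every global
it declares, and "a_char" / "a_char_x_char" are just two of the
generated variables seen in the failures above:

  /* Sketch: a non-static array holding the address of every generated
     global, so an LTO build cannot prove the globals unused and drop
     them.  */
  void *all_globals_used[] =
  {
    &a_char,
    &a_char_x_char,
    /* ... one entry per generated global ...  */
  };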
For the C++ test, that reveals that the static member variables of the
generated structs are not defined anywhere, leading to undefined
references. Fixed by emitting initializations for all static members.
Lastly, I noticed that CXX_FOR_TARGET was being ignored -- that's
because the align-c++.exp testcase is compiling with the C compiler
driver. Fixed by passing "c++" as option to prepare_for_testing.
Change-Id: I874b717afde7b6fb1e45e526912b518a20a12716
The following commit changed gdbarch_components.py but failed to
format it with black:
commit cf141dd8ccd36efe833aae3ccdb060b517cc1112
Date: Wed Feb 22 12:15:34 2023 +0000
gdb: fix reg corruption from displaced stepping on amd64
This commit just runs black on the file and commits the result.
The change is just the addition of an extra "," -- there will be no
change to the generated source files after this commit.
There will be no user visible changes after this commit.
This commit allows Frame.read_var to accept named arguments, and also
improves (I think) some of the error messages emitted when values of
the wrong type are passed to this function.
The read_var method takes two arguments, one a variable, which is
either a gdb.Symbol or a string, while the second, optional, argument
is always a gdb.Block.
I'm now using 'O!' as the format specifier for the second argument,
which allows the argument type to be checked early on. Currently, if
the second argument is of the wrong type then we get this error:
(gdb) python print(gdb.selected_frame().read_var("a1", "xxx"))
Traceback (most recent call last):
File "<string>", line 1, in <module>
RuntimeError: Second argument must be block.
Error while executing Python code.
(gdb)
After this commit, we now get an error like this:
(gdb) python print(gdb.selected_frame().read_var("a1", "xxx"))
Traceback (most recent call last):
File "<string>", line 1, in <module>
TypeError: argument 2 must be gdb.Block, not str
Error while executing Python code.
(gdb)
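For illustration, the 'O!' handling looks roughly like this; a sketch
rather than the exact py-frame.c code, and it assumes the gdb.Block
type object is exposed to C as block_object_type:

  /* Sketch: with "O!", argument parsing itself verifies that the
     optional second argument is a gdb.Block and raises the TypeError
     shown above, so no manual type check is needed later.  */
  static const char *keywords[] = { "variable", "block", NULL };
  PyObject *symbol_obj;
  PyObject *block_obj = NULL;

  if (!gdb_PyArg_ParseTupleAndKeywords (args, kw, "O|O!", keywords,
					&symbol_obj,
					&block_object_type, &block_obj))
    return NULL;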
Changes are:
1. The exception type is now TypeError rather than RuntimeError. This
is unfortunate as user code _could_ be relying on the old type, but I
think the improvement is worth the risk; user code relying on the
exact exception type is likely to be pretty rare.
2. New error message gives argument position and expected argument
type, as well as the type that was passed.
If the first argument, the variable, has the wrong type then the
previous exception was already a TypeError, however, I've updated the
text of the exception to more closely match the "standard" error
message we see above. If the first argument has the wrong type then
before this commit we saw this:
(gdb) python print(gdb.selected_frame().read_var(123))
Traceback (most recent call last):
File "<string>", line 1, in <module>
TypeError: Argument must be a symbol or string.
Error while executing Python code.
(gdb)
And after we see this:
(gdb) python print(gdb.selected_frame().read_var(123))
Traceback (most recent call last):
File "<string>", line 1, in <module>
TypeError: argument 1 must be gdb.Symbol or str, not int
Error while executing Python code.
(gdb)
For existing code that doesn't use named arguments and doesn't rely on
exceptions, there will be no changes after this commit.
Reviewed-By: Tom Tromey <tom@tromey.com>
Following on from the previous commit, this updates
Frame.read_register to accept named arguments. As with the previous
commit there's no huge benefit for the users in accepting named
arguments here -- this function only takes a single argument after
all.
But I do think it is worth keeping Frame.read_register method in sync
with the PendingFrame.read_register method, this allows for the
possibility that the user has some code that can operate on either a
Frame or a PendingFrame.
Minor update to allow for named arguments, and an extra test to check
the new functionality.
Reviewed-By: Tom Tromey <tom@tromey.com>
Update the two gdb.PendingFrame methods gdb.PendingFrame.read_register
and gdb.PendingFrame.create_unwind_info to accept keyword arguments.
There's no huge benefit for making this change, both of these methods
only take a single argument, so it is (maybe) less likely that a user
will take advantage of the keyword arguments in these cases, but I
think it's nice to be consistent, and I don't see any particular
drawbacks to making this change.
For PendingFrame.read_register I've changed the argument name from
'reg' to 'register' in the documentation and used 'register' as the
argument name in GDB. My preference for APIs is to use full words
where possible, and given we didn't support named arguments before,
this change should not break any existing code.
There should be no user visible changes (for existing code) after this
commit.
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
Reviewed-By: Tom Tromey <tom@tromey.com>
Update gdb.UnwindInfo.add_saved_register to accept named keyword
arguments.
As part of this update we now use gdb_PyArg_ParseTupleAndKeywords
instead of PyArg_UnpackTuple to parse the function arguments.
By switching to gdb_PyArg_ParseTupleAndKeywords, we can now use 'O!'
as the argument format for the function's value argument. This means
that we can check the argument type (is gdb.Value) as part of the
argument processing rather than manually performing the check later in
the function. One result of this is that we now get a better error
message (at least, I think so). Previously we would get something
like:
ValueError: Bad register value
Now we get:
TypeError: argument 2 must be gdb.Value, not XXXX
It's unfortunate that the exception type changed, but I think the new
exception type actually makes more sense.
My preference for argument names is to use full words where that's not
too excessive. As such, I've updated the name of the argument from
'reg' to 'register' in the documentation, which is the argument name
I've made GDB look for here.
For existing unwinder code that doesn't throw any exceptions nothing
should change with this commit. It is possible that a user has some
code that throws and catches the ValueError, and this code will break
after this commit, but I think this is going to be sufficiently rare
that we can take the risk here.
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
Reviewed-By: Tom Tromey <tom@tromey.com>
This commit aims to address a problem that exists with the current
approach to displaced stepping, and was identified in PR gdb/22921.
Displaced stepping is currently supported on AArch64, ARM, amd64,
i386, rs6000 (ppc), and s390. Of these, I believe there is a problem
with the current approach which will impact amd64 and ARM, and can
lead to random register corruption when the inferior makes use of
asynchronous signals and GDB is using displaced stepping.
The problem can be found in displaced_step_buffers::finish in
displaced-stepping.c, and is this: after GDB tries to perform a
displaced step, and the inferior stops, GDB classifies the stop into
one of two states, either the displaced step succeeded, or the
displaced step failed.
If the displaced step succeeded then gdbarch_displaced_step_fixup is
called, which has the job of fixing up the state of the current
inferior as if the step had not been performed in a displaced manner.
This all seems just fine.
However, if the displaced step is considered to have not completed
then GDB doesn't call gdbarch_displaced_step_fixup, instead GDB
remains in displaced_step_buffers::finish and just performs a minimal
fixup which involves adjusting the program counter back to its
original value.
The problem here is that for amd64 and ARM setting up for a displaced
step can involve changing the values in some temporary registers. If
the displaced step succeeds then this is fine; after the step the
temporary registers are restored to their original values in the
architecture specific code.
But if the displaced step does not succeed then the temporary
registers are never restored, and they retain their modified values.
In this context a temporary register is simply any register that is
not otherwise used by the instruction being stepped that the
architecture specific code considers safe to borrow for the lifetime
of the instruction being stepped.
In the bug PR gdb/22921, the amd64 instruction being stepped is
an rip-relative instruction like this:
jmp *0x2fe2(%rip)
When we displaced step this instruction we borrow a register, and
modify the instruction to something like:
jmp *0x2fe2(%rcx)
with %rcx having its value adjusted to contain the original %rip
value.
Now if the displaced step does not succeed, then %rcx will be left
with a corrupted value. Obviously corrupting any register is bad; in
the bug report this problem was spotted because %rcx is used as a
function argument register.
And finally, why might a displaced step not succeed? Asynchronous
signals provide one reason. GDB sets up for the displaced step and,
at that precise moment, the OS delivers a signal (SIGALRM in the bug
report), the signal stops the inferior at the address of the displaced
instruction. GDB cancels the displaced instruction, handles the
signal, and then tries again with the displaced step. But it is that
first cancellation of the displaced step that causes the problem; in
that case GDB (correctly) sees the displaced step as having not
completed, and so does not perform the architecture specific fixup,
leaving the register corrupted.
The reason why I think AArch64, rs6000, i386, and s390 are not affected
by this problem is that I don't believe these architectures make use
of any temporary registers, so when a displaced step is not completed
successfully, the minimal fix up is sufficient.
On amd64 we use at most one temporary register.
On ARM, looking at arm_displaced_step_copy_insn_closure, we could
modify up to 16 temporary registers, and the instruction being
displaced stepped could be expanded to multiple replacement
instructions, which increases the chances of this bug triggering.
This commit only aims to address the issue on amd64 for now, though I
believe that the approach I'm proposing here might be applicable for
ARM too.
What I propose is that we always call gdbarch_displaced_step_fixup.
We will now pass an extra argument to gdbarch_displaced_step_fixup,
this is a boolean that indicates whether GDB thinks the displaced step
completed successfully or not.
When this flag is false this indicates that the displaced step halted
for some "other" reason. On ARM GDB can potentially read the
inferior's program counter in order to figure out how far through the
sequence of replacement instructions we got, and from that GDB can
figure out what fixup needs to be performed.
On targets like amd64 the problem is slightly easier as displaced
stepping only uses a single replacement instruction. If the displaced
step didn't complete then GDB knows that the single instruction didn't
execute.
The point is that by always calling gdbarch_displaced_step_fixup, each
architecture can now ensure that the inferior state is fixed up
correctly in all cases, not just the success case.
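Schematically, an architecture's fixup under the new contract might be
shaped like this (a sketch only: the parameter names and order are
assumptions, not the exact gdbarch interface, and the two helpers are
hypothetical):

  static void
  example_displaced_step_fixup (struct gdbarch *gdbarch,
				struct displaced_step_copy_insn_closure *closure,
				CORE_ADDR from, CORE_ADDR to,
				struct regcache *regs, bool completed_p)
  {
    /* Always undo any temporary-register borrowing first.  */
    restore_borrowed_registers (closure, regs);	/* hypothetical  */

    if (!completed_p)
      {
	/* The copied instruction never executed: just put the PC back
	   at the original instruction.  */
	regcache_write_pc (regs, from);
	return;
      }

    /* The copied instruction did execute: translate the resulting PC
       (and any other state) from the scratch buffer back to the
       original code.  */
    relocate_pc_after_step (regs, from, to);	/* hypothetical  */
  }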
On amd64 this ensures that we always restore the temporary register
value, and so bug PR gdb/22921 is resolved.
In order to move all architectures to this new API, I have moved the
minimal roll-back version of the code inside the architecture specific
fixup functions for AArch64, rs6000, s390, and ARM. For all of these
except ARM I think this is good enough: as no temporaries are used,
all that's needed is the program counter restore anyway.
For ARM the minimal code is no worse than what we had before, though I
do consider this architecture's displaced-stepping broken.
I've updated the gdb.arch/amd64-disp-step.exp test to cover the
'jmpq*' instruction that was causing problems in the original bug, and
also added support for testing the displaced step in the presence of
asynchronous signal delivery.
I've also added two new tests (for amd64 and i386) that check that GDB
can correctly handle displaced stepping over a single instruction that
branches to itself. I added these tests after a first version of this
patch relied too much on checking the program-counter value in order
to see if the displaced instruction had executed. This works fine in
almost all cases, but when an instruction branches to itself a pure
program counter check is not sufficient. The new tests expose this
problem.
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=22921
Approved-By: Pedro Alves <pedro@palves.net>
Fix memory leaks and do a general tidy of the code for printing coff
and stabs debug.
* prdbg.c: Delete unneeded forward function declarations.
Delete unnecessary casts throughout. Free all strings
returned from pop_type throughout file.
(struct pr_stack): Delete "num_parents". Replace tests for
"num_parents" non-zero with tests of "parents" non-NULL
throughout. Free "parents" before assigning, and set to NULL
after freeing. Remove const from "method". Always strdup
strings assigned to method, and free before assigning.
(print_debugging_info): Free info.stack and info.filename.
objdump -g can't be used much. Trying to dump PE files invariably
seems to run into "debug_name_type: no current file" or similar
errors, because parse_coff expects a C_FILE symbol to be the first
symbol. Dumping -gstabs output works since the N_SO stab is present.
Pre-setting the file name won't hurt stabs dumping.
* rddbg.c (read_debugging_info): Call debug_set_filename.