This eliminates the funky double-storage of the endianness spec and removes the unparsed u8 storage
of ei_data off FileHeader. Now, FileHeader just stores the appropriate parsed endianness spec enum,
and all methods that want to use it grab it from there.
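A rough sketch of the shape this describes (type and field names here are illustrative, not necessarily the crate's actual identifiers): the raw ei_data byte is parsed once into an enum, and that enum is what FileHeader stores.

```rust
// Hypothetical sketch of storing the parsed endianness enum on the header
// instead of the raw unparsed ei_data u8. Names are illustrative.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum AnyEndian {
    Little,
    Big,
}

struct FileHeader {
    endianness: AnyEndian, // parsed once; methods that need it grab it here
    // ... other parsed header fields
}

impl FileHeader {
    // Parse ei_data (1 = little-endian, 2 = big-endian per the ELF spec)
    // into the enum a single time at header-parse time.
    fn parse_ei_data(ei_data: u8) -> Option<AnyEndian> {
        match ei_data {
            1 => Some(AnyEndian::Little),
            2 => Some(AnyEndian::Big),
            _ => None,
        }
    }
}
```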
This type derives Debug, so you can "{:?}" format it to get a string like ELF32/ELF64,
which is intuitive to me. If someone wants some other human readable format, they can implement
it themselves.
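For illustration, a minimal sketch of how a derived Debug gives that string for free (the enum name here is assumed):

```rust
// Sketch: variant names chosen so the derived Debug impl renders
// "ELF32"/"ELF64" directly via "{:?}" formatting.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
#[allow(non_camel_case_types)]
enum Class {
    ELF32,
    ELF64,
}

fn class_string(c: Class) -> String {
    format!("{c:?}") // derived Debug output, e.g. "ELF32"
}
```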
If the number of segments is greater than or equal to PN_XNUM (0xffff),
e_phnum is set to PN_XNUM, and the actual number of program header table
entries is contained in the sh_info field of the section header at index 0.
phnum.m68k.so is a sample object file that exercises this code path while actually
containing only 1 segment - it simply indirects phnum through shdr0.
The spec allows for ELF files that have section header tables but no shstrtab. In this case,
we want to still be able to get the section headers, but signal that there was no shstrtab with
an empty option.
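A hypothetical signature sketch of that shape (stand-in types; the real crate's types differ): the section headers come back unconditionally, and the string table is an Option that is None when e_shstrndx is SHN_UNDEF.

```rust
// Sketch: section headers paired with Option<StringTable>, where None
// signals an ELF file that has a shdr table but no shstrtab.
const SHN_UNDEF: u16 = 0; // e_shstrndx value meaning "no shstrtab"

struct SectionHeader; // stand-in
struct StringTable;   // stand-in

fn section_headers_with_strtab(
    shdrs: Vec<SectionHeader>,
    e_shstrndx: u16,
) -> (Vec<SectionHeader>, Option<StringTable>) {
    if e_shstrndx == SHN_UNDEF {
        // Valid per the spec: headers exist, but there is no shstrtab.
        (shdrs, None)
    } else {
        (shdrs, Some(StringTable))
    }
}
```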
These check that there's no special alignment constraint for parsing a ParseAt out of a byte buffer,
and a simple error case where parsing fails because the buffer is too small for the type.
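The property under test looks roughly like this (a sketch, not the crate's actual ParseAt implementation): copying bytes out of the slice sidesteps alignment entirely, and a short buffer yields an error rather than a panic.

```rust
// Sketch of alignment-free parsing from a byte buffer. from_le_bytes
// copies out of the slice, so the offset may be arbitrarily unaligned;
// slice::get keeps an out-of-range read from panicking.
fn parse_u32_at(data: &[u8], offset: usize) -> Result<u32, ()> {
    let end = offset.checked_add(4).ok_or(())?;
    let bytes = data.get(offset..end).ok_or(())?;
    Ok(u32::from_le_bytes(bytes.try_into().map_err(|_| ())?))
}
```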
Also, integrate gnu_hash lookups into the arch smoke tests which look up all the dynamic symbols
in their .gnu.hash tables.
Note that the GnuHashTable::find() method does not currently take any symbol versioning
into account.
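For reference, the hash function those lookups are built on is the well-known GNU hash from the binutils sources (a find() then walks the bloom filter, buckets, and chains using this value):

```rust
// The GNU symbol hash function: h = h * 33 + byte, starting from 5381.
fn gnu_hash(name: &[u8]) -> u32 {
    let mut h: u32 = 5381;
    for &b in name {
        h = h.wrapping_mul(33).wrapping_add(b as u32);
    }
    h
}
```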
The actions-rs helpers spew out a bunch of warnings about using deprecated features that are
going away soon, and they aren't any faster than the old manual way of doing things, which worked fine.
The old way would always use the cached corpus from the first run and never update the cache.
If I understand this restore-keys configuration correctly, it should change the behavior so each run
saves the corpus it used and then tries to restore from the most recent run the next time around.
When parsing invalid ELF data with ranges larger than the actual file size, CachedReader would
eagerly allocate a buffer to hold a read of that huge size even though the read would later fail.
This could cause unbounded vec allocations.
CachedReader now seeks to find the actual stream length at the beginning and validates read requests
against that.
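A minimal sketch of the fix (helper names assumed): learn the stream length once via seek, then bounds-check every requested range against it before allocating anything.

```rust
use std::io::{Seek, SeekFrom};

// Discover the stream length by seeking to the end, then restore the
// original position so the caller's cursor is undisturbed.
fn stream_len<S: Seek>(stream: &mut S) -> std::io::Result<u64> {
    let pos = stream.stream_position()?;
    let len = stream.seek(SeekFrom::End(0))?;
    stream.seek(SeekFrom::Start(pos))?;
    Ok(len)
}

// Reject any requested range that overflows or runs past the stream end
// *before* allocating a buffer for it, so malformed ELF ranges can't
// trigger unbounded allocations.
fn validate_range(stream_len: u64, start: u64, size: u64) -> Result<(), ()> {
    let end = start.checked_add(size).ok_or(())?;
    if end <= stream_len { Ok(()) } else { Err(()) }
}
```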
Also, add fuzz testing for some basic ElfStream interfaces (that's what caught this bug).
Also, rustfmt the fuzz targets.
The shdrs are commonly used for nearly every other interface method, so there's no need to parse them
out lazily all the time. This interface can (and does) allocate, so let's just allocate and parse
them up front to get it out of the way for the other interface methods.
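The eager-parse design might be sketched like this (names and shapes assumed; the real constructor would read and parse from the stream):

```rust
// Sketch: parse the section header table once up front and store it, so
// later interface methods can borrow &[SectionHeader] with no re-parsing
// and no per-call allocation.
struct SectionHeader {
    sh_offset: u64, // stand-in field
}

struct ElfStream {
    shdrs: Vec<SectionHeader>, // parsed eagerly at construction time
}

impl ElfStream {
    fn open(parsed: Vec<SectionHeader>) -> Self {
        // The real crate would read + parse from the stream here; this
        // just takes already-parsed headers to illustrate the shape.
        ElfStream { shdrs: parsed }
    }

    fn section_headers(&self) -> &[SectionHeader] {
        &self.shdrs // cheap borrow for every other interface method
    }
}
```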