There's no current need for users of this elf crate to implement their own
parsers, so this doesn't need to be public and clutter the public interface.
This interprets the string table section's data bytes as &str references whose lifetimes are bound to the underlying section data, so the underlying bytes for the string contents aren't copied or re-allocated when getting a string from the table.
It does make two read passes over the string contents: first to find the terminating NUL byte, and second to do UTF-8 validation.
I opted to do the UTF-8 checking because my goal for this crate is to have zero
unsafe code in it, and the unchecked variant (from_utf8_unchecked) is unsafe. I have not yet
thought through what opinion I want this crate to have w.r.t. reading malformed
data in general, but so far I've been making it return errors as it's encountered.
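A minimal sketch of the shape this takes (the names here are illustrative, not necessarily the exact API):

    // Hypothetical sketch: a string table that borrows the section's bytes.
    pub struct StringTable<'data> {
        data: &'data [u8],
    }

    impl<'data> StringTable<'data> {
        pub fn new(data: &'data [u8]) -> Self {
            StringTable { data }
        }

        /// Returns the NUL-terminated string starting at `offset`, borrowed
        /// straight from the underlying section data (no copy, no allocation).
        pub fn get(&self, offset: usize) -> Result<&'data str, StrTabError> {
            let start = self.data.get(offset..).ok_or(StrTabError::OutOfBounds)?;
            // Pass 1: find the terminating NUL byte.
            let end = start
                .iter()
                .position(|&b| b == 0)
                .ok_or(StrTabError::MissingNul)?;
            // Pass 2: validate UTF-8 with the safe from_utf8 (no unsafe code).
            core::str::from_utf8(&start[..end]).map_err(|_| StrTabError::InvalidUtf8)
        }
    }

    #[derive(Debug)]
    pub enum StrTabError {
        OutOfBounds,
        MissingNul,
        InvalidUtf8,
    }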
This eliminates the Endian argument from Parse::parse by making it the ReadExt's responsibility, so implementors of Parse no longer have to care about endianness or plumb it around correctly.
Also, change FileHeader::parse to no longer implement Parse. It previously needed to take bogus unused Endian and Class
specifications. There's a unique bootstrapping situation with parsing the FileHeader, as it is the structure that informs
the Endian and Class specifications used by all the other parsers. I opted to give FileHeader its own inherent parse() method (not an impl of the Parse trait) that takes the underlying delegate reader rather than a ReadExt, which seemed intuitive to me.
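Roughly, the change gives the parsing code this shape (illustrative signatures, with stand-in types for brevity):

    use std::io::Read;

    // Illustrative stand-ins for the crate's real types.
    #[derive(Debug)]
    struct ParseError;
    #[derive(Clone, Copy)]
    enum Class { ELF32, ELF64 }
    struct FileHeader;

    // The ReadExt owns the endianness decision, so Parse implementors just
    // ask it for integers and never see an Endian value.
    trait ReadExt {
        fn read_u16(&mut self) -> Result<u16, ParseError>;
        fn read_u32(&mut self) -> Result<u32, ParseError>;
        fn read_u64(&mut self) -> Result<u64, ParseError>;
    }

    trait Parse: Sized {
        fn parse<R: ReadExt>(class: Class, reader: &mut R) -> Result<Self, ParseError>;
    }

    impl FileHeader {
        // Deliberately not an impl of Parse: the file header is what determines
        // the Endian and Class everything else uses, so it reads from the raw
        // delegate reader and bootstraps a ReadExt for the remaining parsers.
        fn parse<R: Read>(_reader: &mut R) -> Result<FileHeader, ParseError> {
            // ... parse e_ident here, then construct the appropriate ReadExt ...
            Ok(FileHeader)
        }
    }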
This is a partial patch that starts us using the new interface in a minimal but non-optimal way.
The next step is to change the Parse trait to take a ReadExt instead of a std::io::Read.
This will obviate the need to pass (Endian, Class) tuples all over the place
when parsing, as the reader will remember them.
My longer-term thought is that the File object will create and remember one of these Readers, which
can also be used to make some of this parsing lazy.
This is getting us closer to having a more centralized trait-driven parser combinator approach
that can hopefully clean up the code a bit and make it easier to follow.
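One way to picture that reader (a sketch with hypothetical names; the real type may look different):

    use std::io::Read;

    #[derive(Clone, Copy)]
    enum Endian { Little, Big }

    // A reader wrapper that remembers the file's endianness (and, later, its
    // Class), so parse calls stop threading (Endian, Class) tuples around.
    struct Reader<D: Read> {
        delegate: D,
        endian: Endian,
    }

    impl<D: Read> Reader<D> {
        fn new(delegate: D, endian: Endian) -> Self {
            Reader { delegate, endian }
        }

        fn read_u32(&mut self) -> std::io::Result<u32> {
            let mut buf = [0u8; 4];
            self.delegate.read_exact(&mut buf)?;
            Ok(match self.endian {
                Endian::Little => u32::from_le_bytes(buf),
                Endian::Big => u32::from_be_bytes(buf),
            })
        }
    }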
This patch adds verification to the initial e_ident[] parsing, returning
an error if the file's e_ident contains an unknown endianness value. This allows the later
integer parsing methods (read_u*) to assume that they are given a valid endianness configuration
(either little or big).
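Something along these lines (the constant values come from the gabi spec; the function name and error type are illustrative):

    // gabi-defined e_ident index and values for the data encoding byte.
    const EI_DATA: usize = 5;
    const ELFDATA2LSB: u8 = 1; // little-endian
    const ELFDATA2MSB: u8 = 2; // big-endian

    #[derive(Clone, Copy, Debug)]
    enum Endian { Little, Big }

    #[derive(Debug)]
    struct ParseError(String);

    // Reject unknown encodings up front so the read_u* helpers can assume
    // the endianness they are handed is valid.
    fn parse_ei_data(ident: &[u8]) -> Result<Endian, ParseError> {
        match ident.get(EI_DATA) {
            Some(&ELFDATA2LSB) => Ok(Endian::Little),
            Some(&ELFDATA2MSB) => Ok(Endian::Big),
            other => Err(ParseError(format!("Unknown e_ident data encoding: {:?}", other))),
        }
    }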
This declutters the code a bit. It's obvious what these methods are doing and
it's not important to state which module they're defined in on each invocation.
Also move SectionHeader into section.rs
Also, refactor the code for reading section data and section names from the section string table to be more idiomatic
My plan is to make the various ELF data structures implement this Parse trait which knows
how to parse that structure based on the given endianness and data width (32- or 64-bit).
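In rough form, that trait looks something like this (a sketch; the later patches above evolve it so the reader carries the endianness instead):

    use std::io::Read;

    #[derive(Clone, Copy)]
    enum Endian { Little, Big }
    #[derive(Clone, Copy)]
    enum Class { ELF32, ELF64 }
    #[derive(Debug)]
    struct ParseError;

    // Each ELF data structure implements Parse, which knows how to read that
    // structure for a given endianness and data width.
    trait Parse: Sized {
        fn parse<R: Read>(endian: Endian, class: Class, reader: &mut R)
            -> Result<Self, ParseError>;
    }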
It no longer implements any macros
Also, remove the pub on mod utils, since it's only needed internally and shouldn't be considered part of the public interface
This patch starts changing the type representations for our ELF types. It splits the raw constants defined in
the generic System V Application Binary Interface out into their own file (gabi.rs) as native-type constants. The
type definitions in types.rs are now left to be higher-level semantic representations of these fields, which may
or may not directly map to the on-disk ELF data layout.
The intent here is to structure the code such that the opinionated Rust representations of these parsed ELF structures
live separately from the official gabi definitions. This could set us up for a future where consumers of this library that
want their own, different Rust representations of parsed ELF structures can consume the gabi definitions without
also having to muck about with our representations here.
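To illustrate the split (the constant values come from the gabi spec; the semantic wrapper is just one hypothetical shape):

    // gabi.rs: raw constants straight out of the System V gabi, as native types.
    pub mod gabi {
        pub const ET_NONE: u16 = 0; // No file type
        pub const ET_REL: u16 = 1; // Relocatable file
        pub const ET_EXEC: u16 = 2; // Executable file
        pub const ET_DYN: u16 = 3; // Shared object file
        pub const ET_CORE: u16 = 4; // Core file
    }

    // types.rs: an opinionated, higher-level representation that wraps the raw
    // field value and can layer semantics (Display, helpers, etc.) on top.
    #[derive(Debug, PartialEq)]
    pub struct ObjectFileType(pub u16);

    impl std::fmt::Display for ObjectFileType {
        fn fmt(&self, f: &mut std::fmt::Formatter) -> std::fmt::Result {
            let name = match self.0 {
                gabi::ET_NONE => "No file type",
                gabi::ET_REL => "Relocatable file",
                gabi::ET_EXEC => "Executable file",
                gabi::ET_DYN => "Shared object file",
                gabi::ET_CORE => "Core file",
                _ => "Unknown",
            };
            write!(f, "{}", name)
        }
    }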