The C standard mandates that the type of an unsuffixed decimal integer literal is the first signed integer type (‘int’, ‘long’, or ‘long long’) that is large enough to hold the value of the literal.  For example, on a platform with 16-bit ‘int’ and 32-bit ‘long’ types, the type of the literal 1 would be ‘int’ (since a value of 1 fits into 16 bits), while the type of the literal 65536 would be ‘long’, since a value of 65536 needs at least 17 bits in its representation.
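
As a quick illustration, the following sketch (which assumes a C11 compiler, since it relies on the ‘_Generic’ keyword) prints the type that the compiler selects for a few integer literals; on the hypothetical platform above, with its 16-bit ‘int’, the literal 65536 would be reported as ‘long’:

#include <stdio.h>

#define TYPE_NAME(x) _Generic((x),		\
	int: "int",				\
	long: "long",				\
	long long: "long long",			\
	unsigned int: "unsigned int",		\
	default: "something else")

int main(void)
{
	printf("1     -> %s\n", TYPE_NAME(1));     /* "int" on every platform */
	printf("65536 -> %s\n", TYPE_NAME(65536)); /* "int" or "long", depending on the platform */
	printf("1U    -> %s\n", TYPE_NAME(1U));    /* "unsigned int" */
	return 0;
}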

Prior to Elftoolchain revision [rXXXX], the constants from the ELF specification were defined using the C preprocessor’s #define directives, as follows:

Generated file: <sys/elfdefinitions.h>
#define	EM_NONE	0
#define	EM_M32	1
/* ... other, similar, definitions ... */

The problem with these definitions is that the ELF specifications use ‘unsigned’ fields and quantities for the most part, forcing the compiler (or the developer) to handle an implicit signed-to-unsigned conversion whenever such a constant is assigned to an ELF field:

#include <stdint.h>  /* for uint16_t */

typedef uint16_t Elf32_Half;

Elf32_Half e_machine = EM_M32;  // Implicit signed-to-unsigned conversion.

In the interest of modelling the problem domain more accurately, I have switched to generating ‘unsigned’ literals (i.e., literals carrying a ‘U’ suffix) for those ELF constants that are used with ELF fields holding ‘unsigned’ quantities.

Generated file: <sys/elfdefinitions.h>
#define	EM_NONE	0U
#define	EM_M32	1U
/* ... other, similar, definitions ... */
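
With the new definitions, the assignment from the earlier example no longer involves a signed-to-unsigned conversion; the constant already has an ‘unsigned’ type and its value is merely narrowed to the width of the field:

Elf32_Half e_machine = EM_M32;  // Unsigned-to-unsigned conversion; the value 1U fits in 16 bits.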

For the most part, this change should not alter the behavior of existing code; the main observable difference is that expressions mixing these constants with signed operands now follow the usual arithmetic conversions with an unsigned operand.
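
One situation where the difference is observable, at least in principle, is a comparison that mixes one of these constants with a signed operand.  The sketch below uses hypothetical EM_M32_OLD and EM_M32_NEW macros to stand in for the two generations of definitions:

#include <stdio.h>

#define EM_M32_OLD	1	/* old style: type 'int' */
#define EM_M32_NEW	1U	/* new style: type 'unsigned int' */

int main(void)
{
	int machine = -1;	/* a deliberately out-of-range value */

	/* Both operands are signed: -1 < 1, so this prints 1. */
	printf("%d\n", machine < EM_M32_OLD);

	/* The usual arithmetic conversions convert 'machine' to 'unsigned
	   int', yielding UINT_MAX, so this comparison prints 0. */
	printf("%d\n", machine < EM_M32_NEW);

	return 0;
}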