Integer Types
The "integral" types in C form a family of integer types. They all behave like integers and
can be mixed together and used in similar ways. The differences are due to the different
number of bits ("widths") used to implement each type -- the wider types can store a
greater range of values.
char ASCII character -- at least 8 bits. Pronounced "car". As a practical matter,
char is basically always a byte (8 bits), which is enough to store a single
ASCII character. 8 bits provides a signed range of -128..127 or an unsigned range of
0..255. char is also required to be the "smallest addressable unit" for the machine -- each byte in memory has its own address.
short Small integer -- at least 16 bits which provides a signed range of
-32768..32767. Typical size is 16 bits. Not used so much.
int Default integer -- at least 16 bits, with 32 bits being typical. Defined to be the "most comfortable" size for the computer. If you do not really care about the
range for an integer variable, declare it int since that is likely to be an appropriate size (16 or 32 bit) which works well for that machine.
long Large integer -- at least 32 bits. Typical size is 32 bits which gives a signed range of about -2 billion ..+2 billion. Some compilers support "long long" for 64 bit ints.
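As a quick sanity check, the sketch below prints the actual width of each of these
types on whatever machine compiles it. The sizes in the comments assume a typical
32-bit platform; C itself only guarantees the minimums listed above.

#include <stdio.h>

int main(void) {
    /* sizeof reports the width of each type in bytes on this machine */
    printf("char:  %d bytes\n", (int)sizeof(char));   /* always 1 byte */
    printf("short: %d bytes\n", (int)sizeof(short));  /* typically 2 bytes (16 bits) */
    printf("int:   %d bytes\n", (int)sizeof(int));    /* typically 4 bytes (32 bits) */
    printf("long:  %d bytes\n", (int)sizeof(long));   /* typically 4 bytes (32 bits) */
    return 0;
}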
The integer types can be preceded by the qualifier unsigned which disallows
representing negative numbers, but doubles the largest positive number representable. For
example, a 16 bit implementation of short can store numbers in the range
-32768..32767, while unsigned short can store 0..65535. You can think of pointers
as being a form of unsigned long on a machine with 4 byte pointers. In my opinion,
it's best to avoid using unsigned unless you really need to. It tends to cause more
misunderstandings and problems than it is worth.
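The standard header <limits.h> defines constants for these ranges on the current
machine. The little sketch below prints the signed and unsigned ranges of short;
the values it prints assume the typical 16 bit short.

#include <stdio.h>
#include <limits.h>

int main(void) {
    /* signed short: typically -32768..32767 */
    printf("short:          %d..%d\n", SHRT_MIN, SHRT_MAX);
    /* unsigned short: typically 0..65535 -- same number of bits,
       but the whole range is shifted to the non-negative side */
    printf("unsigned short: 0..%u\n", (unsigned)USHRT_MAX);
    return 0;
}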
Extra: Portability Problems
Instead of defining the exact sizes of the integer types, C defines lower bounds. This
makes it easier to implement C compilers on a wide range of hardware. Unfortunately it
occasionally leads to bugs where a program runs differently on a 16-bit-int machine than
on a 32-bit-int machine. In particular, if you are designing a function that will be
implemented on several different machines, it is a good idea to use typedefs to set up
types like Int32 for 32 bit int and Int16 for 16 bit int. That way you can prototype a function Foo(Int32) and be confident that the typedefs for each machine will be set so that the function really takes exactly a 32 bit int, and the code will behave the same
on all the different machines.
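Here is a minimal sketch of that scheme. The particular typedefs below assume a
machine where int is 32 bits and short is 16 bits; on a different machine only the
typedef lines would change (e.g. typedef long Int32;), and code like the Foo()
prototype stays the same.

/* Per-machine typedefs -- these choices assume a 32 bit int and a 16 bit short */
typedef int   Int32;    /* exactly 32 bits on this machine */
typedef short Int16;    /* exactly 16 bits on this machine */

void Foo(Int32 value);  /* always takes exactly a 32 bit int, on every machine */

(Newer compilers ship this idea as standard: <stdint.h> defines int32_t and int16_t.)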