5.1 Types

C# is a strongly typed language. That means that every object you create or use in a C# program must have a specific type (e.g., you must declare the object to be an integer or a string or a Dog or a Button). The type tells the compiler how big the object is and what it can do. Types come in two flavors: those that are built into the language (intrinsic types) and those you create (classes, structs, and interfaces, discussed in Chapter 8, Chapter 13, and Chapter 14, respectively). C# offers a number of intrinsic types, shown in Table 5-1.
Each type has a name (e.g., int) and a size (e.g., 4 bytes). The size tells you how many bytes each object of this type occupies in memory. (Programmers generally don't like to waste memory if they can avoid it, but with the cost of memory these days, you can afford to be mildly profligate if doing so simplifies your program.) The description field of Table 5-1 tells you the minimum and maximum values you can hold in objects of each type.
Intrinsic types can't do much. You can use them to add two numbers together, and they can display their values as strings. User-defined types can do a lot more; their abilities are determined by the methods you create, as discussed in detail in Chapter 9. Objects of an intrinsic type are called variables. Variables are discussed in detail later in this chapter.

5.1.1 Numeric Types

Most of the intrinsic types are used for working with numeric values (byte, sbyte, short, ushort, int, uint, float, double, decimal, long, and ulong). The numeric types can be broken into two sets: unsigned and signed. An unsigned value (byte, ushort, uint, ulong) can hold only positive values. A signed value (sbyte, short, int, long) can hold positive or negative values, but the highest value is only half as large. That is, a ushort can hold any value from 0 through 65,535, but a short can hold only -32,768 through 32,767. Notice that 32,767 is roughly half of 65,535 (it is off by one to allow for holding the value zero). The reason a ushort can hold up to 65,535 is that 65,536 is a round number in binary arithmetic (2^16), and the maximum is one less because the range starts at 0. Another way to divide the types is into those used for integer values (whole numbers) and those used for floating-point values (fractional or rational numbers). The byte, sbyte, ushort, uint, ulong, short, int, and long types all hold whole-number values.[1]
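You can verify these ranges yourself: every numeric type exposes MinValue and MaxValue constants. Here is a minimal console sketch (the class and method names are my own, not from the text):

```csharp
using System;

class Ranges
{
    static void Main()
    {
        // Unsigned: only positive values, but twice the reach.
        Console.WriteLine(ushort.MinValue);  // 0
        Console.WriteLine(ushort.MaxValue);  // 65535, i.e. 2^16 - 1

        // Signed: half the positive reach, in exchange for negatives.
        Console.WriteLine(short.MinValue);   // -32768
        Console.WriteLine(short.MaxValue);   // 32767
    }
}
```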
The double and float types hold fractional values. For most uses, float will suffice, unless you need to hold a really big fractional number, in which case you might use a double. The decimal value type was added to the language to support accounting applications.

Typically you decide which size integer to use (short, int, or long) based on the magnitude of the value you want to store. For example, a ushort can only hold values from 0 through 65,535, while a uint can hold values from 0 through 4,294,967,295. That said, memory is fairly cheap, and programmer time is increasingly expensive; most of the time you'll simply declare your variables to be of type int, unless there is a good reason to do otherwise.

Most programmers choose signed types unless they have a good reason to use an unsigned value. This is, in part, just a matter of tradition. Suppose you need to keep track of inventory. You expect to house up to 40,000 or even 50,000 copies of each book. A signed short can only hold values up to 32,767. You might be tempted to use an unsigned short (which can hold values up to 65,535), but it is easier and preferable to just use a signed int (with a maximum value of 2,147,483,647). That way, if you have a runaway best seller, your program won't break (if you anticipate selling more than 2 billion copies of your book, perhaps you'll want to use a long!).[2]
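The inventory advice can be sketched as code (the variable names are illustrative, not from the text); int leaves plenty of headroom, and long is there if you ever need it:

```csharp
using System;

class Inventory
{
    static void Main()
    {
        // int is the usual default choice; its range comfortably
        // covers an inventory count of 50,000 copies.
        int copiesInStock = 50000;

        Console.WriteLine(int.MaxValue);   // 2147483647
        Console.WriteLine(long.MaxValue);  // 9223372036854775807
        Console.WriteLine(copiesInStock);  // 50000
    }
}
```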
It is better to use an unsigned variable when the fact that the value must be positive is an inherent characteristic of the data. For example, if you had a variable to hold a person's age, you would use an unsigned int because an age cannot be negative.

Float, double, and decimal offer varying degrees of size and precision. For most small fractional numbers, float is fine. Note that the compiler assumes that any number with a decimal point is a double unless you tell it otherwise. (Section 5.2 discusses how you tell it otherwise.)

5.1.2 Non-Numeric Types: char and bool

In addition to the numeric types, the C# language offers two other types: char and bool. The char type is used from time to time when you need to hold a single character. The char type can represent a simple character (A), a Unicode character (\u0041), or an escape character ('\n'), each enclosed in single quotation marks. You'll see chars used in this book, and their use will be explained in context.

The one remaining type of importance is bool, which holds a Boolean value: one that is either true or false.[3] Boolean values are used frequently in C# programming, as you'll see throughout this book. Virtually every comparison (e.g., is myDog bigger than yourDog?) results in a Boolean value.
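A short sketch tying these points together (variable names are my own): the three ways of writing a char, a bool produced by a comparison, and the suffixes that override the compiler's assumption that a literal with a decimal point is a double:

```csharp
using System;

class CharsAndBools
{
    static void Main()
    {
        char letter = 'A';
        char unicodeLetter = '\u0041';   // the Unicode escape for 'A'
        char newline = '\n';             // an escape character

        // A comparison produces a bool.
        bool sameChar = (letter == unicodeLetter);
        Console.WriteLine(sameChar);     // True

        // A literal with a decimal point is a double unless you
        // say otherwise with a suffix (f for float, m for decimal).
        double d = 3.14;
        float f = 3.14f;
        decimal m = 3.14m;
        Console.WriteLine(d);
    }
}
```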
5.1.3 Types and Compiler Errors

The compiler will help you by complaining if you try to use a type improperly. The compiler complains in one of two ways: it issues a warning or it issues an error.
Programmers talk about design time, compile time, and runtime. Design time is when you are designing the program, compile time is when you compile the program, and runtime is (surprise!) when you run the program. The earlier you unearth a bug, the better. It is better (and cheaper) to discover a bug in your logic at design time than later. Likewise, it is better (and cheaper) to find bugs in your program at compile time than at runtime. Not only is it better; it is more reliable. A compile-time bug will fail every time you run the compiler, but a runtime bug can hide. Runtime bugs slip under a crack in your logic and lurk there (sometimes for months), biding their time, waiting to come out when it will be most expensive (or most embarrassing) to you.

It will be a constant theme of this book that you want the compiler to find bugs. The compiler is your friend. The more bugs the compiler finds, the fewer bugs your users will find. A strongly typed language like C# helps the compiler find bugs in your code. Here's how: suppose you tell the compiler that Milo is of type Dog. Sometime later you try to use Milo to display text. Oops, Dogs don't display text. Your compiler will stop with an error:

Dog does not contain a definition for 'showText'

Very nice. Now you can go figure out whether you used the wrong object or called the wrong method.

Visual Studio .NET actually finds the error even before the compiler does. When you try to add a method, IntelliSense pops up a list of valid methods to help you, as shown in Figure 5-1.

Figure 5-1. IntelliSense

When you try to add a method that does not exist, it won't be in the list. That is a pretty good clue that you are not using the object properly.
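The Milo example can be sketched as code. The Dog class here is hypothetical (the text defines no members for it), and the exact wording of the compiler message may vary by compiler version:

```csharp
using System;

class Dog { }  // a hypothetical Dog type with no showText method

class StrongTyping
{
    static void Main()
    {
        Dog milo = new Dog();

        // Uncommenting the next line produces a compile-time error,
        // roughly: 'Dog' does not contain a definition for 'showText'
        // milo.showText("Woof");

        // Meanwhile, comparisons type-check and yield bool values:
        int myDogWeight = 30;
        int yourDogWeight = 20;
        bool isBigger = myDogWeight > yourDogWeight;
        Console.WriteLine(isBigger);  // True
    }
}
```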