Datatype
In computer science, a datatype (often simply type) is a name or label for a set of values and the operations that can be performed on those values. Programming languages support one or more datatypes, implicitly or explicitly; these types may act as statically or dynamically checked constraints on the programs that can be written in a given language.
Basis
The basic idea of typing is to give semantic meaning to what is ultimately just bits. Types are usually associated either with values in memory or with objects such as variables. Because any value is simply a sequence of bits to a computer, the hardware makes no distinction even among memory addresses, instruction code, characters, integers and floating-point numbers; types specify how those bits are to be interpreted.
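As a minimal illustration of this point (using Python's standard struct module; any language that exposes raw bytes would serve), the very same 32 bits can be read either as an integer or as a floating-point number:

import struct

# Pack the 32-bit integer 1078530011 into four raw bytes, then
# reinterpret those identical bytes as a single-precision float.
# The bits never change; only the type used to read them does.
raw = struct.pack("<i", 1078530011)
print(struct.unpack("<f", raw)[0])  # 3.1415927410125732 (approximately pi)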
Major functions that type systems provide are:
- Safety - types make it impossible to code operations that are invalid in a given context. This mechanism effectively catches many common programmer mistakes. For example, the expression
"Hello, World" / 3
is invalid because a string literal cannot be divided by an integer in the usual sense (a concrete sketch follows this list). As discussed below, strong typing offers more safety, but it does not guarantee complete safety (see type-safety for more information).
- Optimization - static type checking may provide useful information to a compiler. For example, if a type says a value is aligned at a multiple of 4 bytes, memory accesses to it can be optimized.
- Documentation - types also improve the documentation of code. For example, declaring a variable as being of a specific type documents how that variable is used. Many languages let programmers define semantic types derived from primitive types, either composed of elements of one or more primitive types or simply as aliases for the names of primitive types.
- Abstraction (or modularity) - types allow programmers to think about programs at a higher level, without bothering with low-level implementation. For example, programmers can think of strings as values instead of as mere arrays of bytes. Types also allow programmers to express the interface between two subsystems; this localizes the definitions required for the subsystems to interoperate and prevents inconsistencies when they communicate.
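As a sketch of the safety point above, in Python (a dynamically but strongly typed language) dividing a string by an integer is rejected by a runtime type check; a statically typed language would reject the same expression at compile time:

# The runtime inspects the operand types and refuses the operation.
"Hello, World" / 3
# TypeError: unsupported operand type(s) for /: 'str' and 'int'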
Typically each value is associated with one particular type, although a type may have more than one subtype. Other entities, such as objects, modules, communication channels, dependencies, or even types themselves, can be associated with a type. For example, a datatype is the type of a value, a class is the type of an object, and a kind is the type of a type.
A type system, specified in each programming language, stipulates the ways typed programs are allowed to behave and makes behavior outside these rules illegal. An effect system is typically more fine-grained than a type system.
The formal study of type systems is known as type theory, often pursued with the aid of the lambda calculus.
Type checking
The process of verifying and enforcing the constraints of types is called type checking. This may occur either at compile-time (a static check) or at run-time (a dynamic check). Static type checking is a primary task of the semantic analysis carried out by a compiler. If type rules are enforced strongly (that is, generally allowing only those automatic type conversions which do not lose information), the language is called strongly typed; if not, weakly typed.
Static and dynamic typing
In dynamic typing, type checking is done at runtime, because a variable may hold values of different types depending on the execution path. A static type system that accounted for this would need to represent the concept of an execution path explicitly and allow types to depend on it; in practice, such systems tend to be either trivial or too cumbersome to work well.
C, Java, ML, and Haskell are statically typed, whereas Scheme, Lisp, Perl, PHP, Visual Basic, Ruby, and Python are dynamically typed. Dynamic typing is often associated with so-called "scripting languages" and other rapid application development environments. Dynamic types tend to be used more often in interpreted languages, whereas static types are used in compiled languages. See typed and untyped languages for a fuller list.
Duck typing is a humorous name for the (dynamic) typing typical of many scripting languages, in which what matters is not a value's declared type but the operations the value actually supports. Initially coined by Dave Thomas in the Ruby community, its premise is that "(referring to a value) if it walks like a duck, and quacks like a duck, then it is a duck".
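A minimal Python sketch of duck typing (the class and function names here are made up for illustration): the function never asks what type its argument is, only whether it supports the operation.

class Duck:
    def quack(self):
        return "Quack!"

class Person:
    def quack(self):
        return "I can quack, too."

def make_it_quack(thing):
    # No type test: anything with a quack() method is accepted.
    return thing.quack()

print(make_it_quack(Duck()))    # Quack!
print(make_it_quack(Person()))  # I can quack, too.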
To see how type checking works, consider the following pseudocode example:
var x;       // (1)
x := 5;      // (2)
x := "hi";   // (3)
In this example, (1) declares the name x; (2) associates the integer value 5 with the name x; and (3) associates the string value "hi" with the name x. In most statically typed systems, this code fragment would be illegal, because (2) and (3) bind x to values of inconsistent types.
By contrast, a purely dynamically typed system would permit the above program to execute, because the name x would not be required to have a consistent type. The implementation of a dynamically typed language will catch errors related to the misuse of values - "type errors" - at the time the erroneous statement or expression is computed. In other words, dynamic typing catches errors during program execution. A typical implementation of dynamic typing will keep all program values "tagged" with a type, and check the type tag before any value is used in an operation. For example:
var x = 5;    // (1)
var y = "hi"; // (2)
x + y;        // (3)
In this code fragment, (1) binds the value 5 to x; (2) binds the value "hi" to y; and (3) attempts to add x to y. In a dynamically typed language, the value bound to x might be a pair (integer, 5), and the value bound to y might be a pair (string, "hi"). When the program attempts to execute statement (3), the language implementation checks the type tags integer and string, discovers that the operation + (addition) is not defined over these two types, and signals an error.
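In Python, for example, the runtime "tag" attached to each value is visible through the built-in type function, and the failing addition raises a TypeError (a concrete sketch of the scenario above):

x = 5
y = "hi"
print(type(x), type(y))  # <class 'int'> <class 'str'>
x + y  # TypeError: unsupported operand type(s) for +: 'int' and 'str'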
Some statically typed languages have a "back door" in the language that enables programmers to write code that does not statically type check. For example, C and Java have "casts".
The presence of static typing in a programming language does not necessarily imply the absence of dynamic typing mechanisms. For example, Java is statically typed, but certain operations require the support of runtime type tests, which are a form of dynamic typing. See programming language for more discussion of the interactions between static and dynamic typing.
Static and dynamic type checking in practice
The choice between static and dynamic typing requires some trade-offs. Many programmers strongly favor one over the other; some to the point of considering languages following the disfavored system to be unusable or crippled.
Static typing finds type errors reliably and at compile time. This should increase the reliability of the delivered program. However, programmers disagree over how common type errors are, and thus over what proportion of the bugs programmers write would be caught by static typing. Static typing advocates believe programs are more reliable when they have been type-checked, while dynamic typing advocates point to distributed code that has proven reliable and to small bug databases. The value of static typing, then, presumably increases as the strength of the type system is increased. Advocates of strongly typed languages such as ML and Haskell have suggested that almost all bugs can be considered type errors, if the types used in a program are sufficiently well declared by the programmer or inferred by the compiler.
Static typing usually results in compiled code that executes more quickly. When the compiler knows the exact data types in use, it can produce machine code specialized to them, and it can apply certain optimizations more readily. Some dynamically-typed languages, such as Common Lisp, allow optional type declarations for optimization for this very reason; static typing makes such information pervasive. See optimization.
Statically-typed languages which lack type inference - such as Java - require that programmers declare the types they intend a method or function to use. This can serve as additional documentation for the program, which the compiler will not permit the programmer to ignore or to let drift out of synchronization with the code. However, a language can be statically typed without requiring type declarations, so this is not a consequence of static typing as such.
Static typing allows construction of libraries which are less likely to be accidentally misused by their users. This can be used as an additional mechanism for communicating the intentions of the library developer.
A static type system constrains the use of powerful language constructs more than it constrains less powerful ones. This makes powerful constructs harder to use, and thus places the burden of choosing the "right tool for the problem" on the shoulders of the programmer, who might otherwise be inclined to use the most powerful tool available. Choosing overly powerful tools may cause additional performance, reliability or correctness problems, because there are theoretical limits on the properties that can be expected from powerful language constructs. For example, indiscriminate use of recursion or global variables may cause well-documented adverse effects.
Dynamic typing allows constructs that would be illegal in some static type systems. For example, eval functions that execute arbitrary data as code are possible (however, the typing within that evaluated code might be static). Furthermore, dynamic typing accommodates transitional code and prototyping, such as allowing a string to be used in place of a data structure.
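For example, Python's built-in eval runs a string of source text as code, something most static type systems cannot assign a type to in advance; a minimal sketch:

# A string built at runtime is parsed and executed as code.
source = "sum(range(10))"
print(eval(source))  # 45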
Dynamic typing allows debuggers to be more functional; in particular, the debugger can modify the code arbitrarily and let the program continue to run. Programmers in dynamic languages sometimes "program in the debugger" and thus have a shorter edit-compile-test-debug cycle. However, the need to use debuggers is considered by some to be a sign of design or development process problems.
Dynamic typing may allow compilers and interpreters to run more quickly, since there may be less checking to perform and less code to revisit when the source code changes. This, too, may reduce the edit-compile-test-debug cycle.
Strong and weak typing
Main article: strongly-typed programming language
The term strongly typed language has several meanings; for a clearer discussion, see the main article on the topic. One definition is that the language does not allow an operation to succeed on arguments of the wrong type. An example of the absence of strong typing is a C cast gone wrong: if a value is cast in C, not only is the compiler required to allow the code, but the runtime is expected to allow it as well. This allows C code to be compact and fast, but it can make debugging more difficult.
Sometimes the term memory-safe language (or just safe language) is used to describe languages that do not allow undefined operations to occur. For example, a memory-safe language will also check array bounds.
Weak typing means that types are implicitly converted (or cast) when they are used. If we were to revisit the previous example:
var x = 5;    // (1)
var y = "hi"; // (2)
x + y;        // (3)
If the code above were written in a weakly-typed language, such as Visual Basic, the code would run, yielding the result "5hi". The number 5 is converted to the string "5" to make sense of the operation (the + operator is overloaded to mean both addition and concatenation). However, there are problems with such conversions when operators are overloaded in this way. For example, would the result of the following code be 9 or "54"?
var x = 5;
var y = "4";
x + y;
In contrast, the REXX language (which is weakly-typed because it only has one type) does not overload the + operator, and hence + always means addition. The equivalent of the first example would fail (one operand not a number), and the second would yield "9", unambiguously. Careful language design has also allowed other languages to appear to be weakly-typed (through type inference and other techniques) for usability while preserving the type checking and protection offered by languages such as Java.
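For contrast, a strongly typed language resolves the same ambiguity by refusing the implicit conversion and making the programmer choose; a minimal Python sketch:

x = 5
y = "4"
# x + y            # TypeError: no implicit conversion between int and str
print(x + int(y))  # 9  - the programmer asks for numeric addition
print(str(x) + y)  # 54 - the programmer asks for string concatenation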
Polymorphism and types
The type system allows operations to be selected according to the types of their operands. For example, in the arithmetic expression a + b, if a and b are typed as integers, the underlying operation can be integer addition; if their type is real, floating-point addition is probably performed. In generics, the type of the values determines which code will be executed. See also: type polymorphism
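A small Python illustration of this dispatch: the same + operator selects a different underlying operation depending on the operand types.

print(2 + 3)         # 5         - integer addition
print(2.0 + 3.0)     # 5.0       - floating-point addition
print([1] + [2, 3])  # [1, 2, 3] - list concatenation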
Explicit or implicit declaration and inference
Many static type systems, such as C's and Java's, require type declarations: the programmer must explicitly associate each variable with a particular type. Others, such as Haskell's, perform type inference: the compiler draws conclusions about the types of variables based on how the variables are used. For example, given a function f(x,y) in which x and y are added together, the compiler can infer that x and y must be numbers - since addition is only defined for numbers. Therefore, any call to f elsewhere in the program that specifies a non-numeric type (such as a string or list) as an argument would be erroneous.
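Python itself checks types dynamically, but optional static checkers such as mypy perform this kind of inference over ordinary Python code; a minimal sketch:

z = 3 + 4    # the checker infers that z is an int from its initializer
# z + "hi"   # rejected before the program runs: int + str is not defined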
Numerical and string constants and expressions in code can and often do imply type in a particular context. For example, the expression 3.14 might imply that its type is floating-point, while [1, 2, 3] might imply a list of integers, typically implemented as an array.
Categories of types
Types can generally be classified into the following categories:
- primitive types — the simplest kind of type, e.g. integer and floating-point number
- composite types — types composed of basic types, e.g. records (a sketch follows this list). Abstract data types have attributes of both composite types and interfaces, depending on one's point of view.
- subtype
- derived type
- function types, e.g. binary functions
- object types, e.g. type variable
- class types — types of objects in object-oriented programming
- interfaces — types of dependencies
- protocols — types of communications channels
- kinds — types of types
- use case — type of interaction of a system and its environment
- recursive type
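As referenced in the composite-types item above, a minimal Python sketch of a record type built from primitive component types (the Point name is illustrative):

from dataclasses import dataclass

@dataclass
class Point:
    # A composite (record) type whose fields are primitive types.
    x: float
    y: float

p = Point(1.0, 2.0)
print(p.x, p.y)  # 1.0 2.0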
Compatibility, equivalence and substitutability
The question of compatibility and equivalence is a complicated and controversial topic; it is related to the problem of substitutability: given type A and type B, are the two types equal? Are they compatible? Can a value of type B be used where a value of type A is expected?
If a value of type A can be used wherever a value of type B is expected, A is a subtype of B; the converse does not always hold. This substitutability requirement is known as the Liskov substitution principle.
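A minimal Python sketch of substitutability (the class names are hypothetical): a value of the subtype Dog may appear wherever the supertype Animal is expected.

class Animal:
    def speak(self):
        return "..."

class Dog(Animal):
    def speak(self):
        return "Woof"

def greet(a):
    # Accepts any Animal; a Dog is substitutable for an Animal.
    print(a.speak())

greet(Dog())  # Woof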
Type conversion may be performed to make a value compatible or substitutable in a given context.