Integers and Floating-Point Numbers¶
Integers and floating-point values are the basic building blocks of
arithmetic and computation. Built-in representations of such values are
called numeric primitives, while representations of integers and
floating-point numbers as immediate values in code are known as numeric
literals. For example,
1 is an integer literal, while
1.0 is a
floating-point literal; their binary in-memory representations as
objects are numeric primitives.
Julia provides a broad range of primitive numeric types, and a full complement of arithmetic and bitwise operators as well as standard mathematical functions are defined over them. These map directly onto numeric types and operations that are natively supported on modern computers, thus allowing Julia to take full advantage of computational resources. Additionally, Julia provides software support for Arbitrary Precision Arithmetic, which can handle operations on numeric values that cannot be represented effectively in native hardware representations, but at the cost of relatively slower performance.
The following are Julia’s primitive numeric types:
- Integer types:
| Type | Signed? | Number of bits | Smallest value | Largest value |
|---|---|---|---|---|
| Int8 | ✓ | 8 | -2^7 | 2^7 - 1 |
| UInt8 |  | 8 | 0 | 2^8 - 1 |
| Int16 | ✓ | 16 | -2^15 | 2^15 - 1 |
| UInt16 |  | 16 | 0 | 2^16 - 1 |
| Int32 | ✓ | 32 | -2^31 | 2^31 - 1 |
| UInt32 |  | 32 | 0 | 2^32 - 1 |
| Int64 | ✓ | 64 | -2^63 | 2^63 - 1 |
| UInt64 |  | 64 | 0 | 2^64 - 1 |
| Int128 | ✓ | 128 | -2^127 | 2^127 - 1 |
| UInt128 |  | 128 | 0 | 2^128 - 1 |
- Floating-point types:
| Type | Precision | Number of bits |
|---|---|---|
| Float16 | half | 16 |
| Float32 | single | 32 |
| Float64 | double | 64 |
Additionally, full support for Complex and Rational Numbers is built on top of these primitive numeric types. All numeric types interoperate naturally without explicit casting, thanks to a flexible, user-extensible type promotion system.
Literal integers are represented in the standard manner:
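For example (outputs as shown by a Julia REPL):

```julia
julia> 1
1

julia> 1234
1234
```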
The default type for an integer literal depends on whether the target system has a 32-bit architecture or a 64-bit architecture:
```julia
# 32-bit system:
julia> typeof(1)
Int32

# 64-bit system:
julia> typeof(1)
Int64
```
The Julia internal variable
WORD_SIZE indicates whether the target system
is 32-bit or 64-bit:

```julia
# 32-bit system:
julia> WORD_SIZE
32

# 64-bit system:
julia> WORD_SIZE
64
```
Julia also defines the types
Int and
UInt, which are aliases for the
system’s signed and unsigned native integer types respectively:

```julia
# 32-bit system:
julia> Int
Int32
julia> UInt
UInt32

# 64-bit system:
julia> Int
Int64
julia> UInt
UInt64
```
Larger integer literals that cannot be represented using only 32 bits but can be represented in 64 bits always create 64-bit integers, regardless of the system type:
```julia
# 32-bit or 64-bit system:
julia> typeof(3000000000)
Int64
```
Unsigned integers are input and output using the
0x prefix and hexadecimal
(base 16) digits
0-9a-f (the capitalized digits
A-F also work for input).
The size of the unsigned value is determined by the number of hex digits used:
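For example (unsigned outputs as shown by a Julia REPL):

```julia
julia> 0x1
0x01

julia> typeof(ans)
UInt8

julia> 0x123
0x0123

julia> typeof(ans)
UInt16

julia> 0x1234567
0x01234567

julia> typeof(ans)
UInt32

julia> 0x123456789abcdef
0x0123456789abcdef

julia> typeof(ans)
UInt64
```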
This behavior is based on the observation that when one uses unsigned hex literals for integer values, one typically is using them to represent a fixed numeric byte sequence, rather than just an integer value.
Recall that the variable
ans is set to the value of the last expression
evaluated in an interactive session. This does not occur when Julia code is
run in other ways.
Binary and octal literals are also supported:
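For example, binary literals use the 0b prefix and octal literals the 0o prefix, both producing unsigned values:

```julia
julia> 0b10
0x02

julia> typeof(ans)
UInt8

julia> 0o10
0x08

julia> typeof(ans)
UInt8
```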
The minimum and maximum representable values of primitive numeric types
such as integers are given by the typemin() and typemax() functions.
The values they return are always of the given argument type. (Printing
the full table of minimum and maximum values uses several features we have
yet to introduce, including for loops,
Strings, and Interpolation,
but should be easy enough to understand for users with some existing
programming experience.)
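For instance:

```julia
julia> (typemin(Int8), typemax(Int8))
(-128, 127)

julia> typemax(UInt8)
0xff

julia> typeof(typemax(Int8))
Int8
```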
In Julia, exceeding the maximum representable value of a given type results in a wraparound behavior:
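For example, on a 64-bit system:

```julia
julia> x = typemax(Int64)
9223372036854775807

julia> x + 1
-9223372036854775808

julia> x + 1 == typemin(Int64)
true
```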
Thus, arithmetic with Julia integers is actually a form of modular arithmetic. This reflects the
characteristics of the underlying arithmetic of integers as implemented on
modern computers. In applications where overflow is possible, explicit checking
for wraparound produced by overflow is essential; otherwise, use of the BigInt type
in Arbitrary Precision Arithmetic is recommended instead.
Integer division (the
div function) has two exceptional cases: dividing by
zero, and dividing the lowest negative number (
typemin()) by -1. Both of
these cases throw a
DivideError. The remainder and modulus functions (
rem and
mod) throw a
DivideError when their second argument is zero.
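For example (the exact error text varies by Julia version; the message shown is from a recent release):

```julia
julia> div(1, 0)
ERROR: DivideError: integer division error

julia> div(typemin(Int64), -1)
ERROR: DivideError: integer division error

julia> mod(1, 0)
ERROR: DivideError: integer division error
```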
Literal floating-point numbers are represented in the standard formats:
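For example:

```julia
julia> 1.0
1.0

julia> 1.
1.0

julia> 0.5
0.5

julia> .5
0.5

julia> -1.23
-1.23

julia> 1e10
1.0e10

julia> 2.5e-4
0.00025
```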
The above results are all
Float64 values. Literal
Float32 values can
be entered by writing an
f in place of
e:
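For example:

```julia
julia> 0.5f0
0.5f0

julia> typeof(ans)
Float32

julia> 2.5f-4
0.00025f0
```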
Values can be converted to Float32 easily:
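For example, using the Float32 constructor:

```julia
julia> Float32(-1.5)
-1.5f0

julia> typeof(ans)
Float32
```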
Hexadecimal floating-point literals are also valid, but only as Float64 values:
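For example (the p exponent is a power of two):

```julia
julia> 0x1p0
1.0

julia> 0x1.8p3
12.0

julia> 0x.4p-1
0.125

julia> typeof(ans)
Float64
```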
Half-precision floating-point numbers are also supported (
Float16), but
only as a storage format. In calculations they’ll be converted to
Float32:
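For example:

```julia
julia> sizeof(Float16(4.))
2

julia> typeof(Float16(4.))
Float16
```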
The underscore
_ can be used as a digit separator:
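For example:

```julia
julia> 10_000, 0.000_000_005, 0xdead_beef, 0b1011_0010
(10000, 5.0e-9, 0xdeadbeef, 0xb2)
```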
Floating-point numbers have two zeros, positive zero and negative zero.
They are equal to each other but have different binary representations, as can
be seen using the
bits function:
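For example (the bits function of this manual's Julia version is called bitstring in recent releases):

```julia
julia> 0.0 == -0.0
true

julia> bits(0.0)
"0000000000000000000000000000000000000000000000000000000000000000"

julia> bits(-0.0)
"1000000000000000000000000000000000000000000000000000000000000000"
```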
Special floating-point values¶
There are three specified standard floating-point values that do not correspond to any point on the real number line:

| Float16 | Float32 | Float64 | Name | Description |
|---|---|---|---|---|
| Inf16 | Inf32 | Inf | positive infinity | a value greater than all finite floating-point values |
| -Inf16 | -Inf32 | -Inf | negative infinity | a value less than all finite floating-point values |
| NaN16 | NaN32 | NaN | not a number | a value not == to any floating-point value (including itself) |
For further discussion of how these non-finite floating-point values are ordered with respect to each other and other floats, see Numeric Comparisons. By the IEEE 754 standard, these floating-point values are the results of certain arithmetic operations:
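For example:

```julia
julia> 1/Inf
0.0

julia> 1/0
Inf

julia> -5/0
-Inf

julia> 0/0
NaN

julia> Inf - Inf
NaN
```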
Most real numbers cannot be represented exactly with floating-point numbers, and so for many purposes it is important to know the distance between two adjacent representable floating-point numbers, which is often known as machine epsilon.
Julia provides
eps(), which gives the distance between 1.0
and the next larger representable floating-point value:
```julia
julia> eps(Float32)
1.1920929f-7

julia> eps(Float64)
2.220446049250313e-16

julia> eps() # same as eps(Float64)
2.220446049250313e-16
```
These values are
2.0^-23 and
2.0^-52 as Float32 and Float64
values, respectively. The
eps() function can also take a
floating-point value as an argument, and gives the absolute difference
between that value and the next representable floating-point value. That is,
eps(x) yields a value of the same type as
x such that
x + eps(x) is the next representable floating-point value larger than x:
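For example:

```julia
julia> eps(1.0)
2.220446049250313e-16

julia> eps(1000.)
1.1368683772161603e-13

julia> eps(1e-27)
1.793662034335766e-43

julia> eps(0.0)
5.0e-324
```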
The distance between two adjacent representable floating-point numbers is not
constant, but is smaller for smaller values and larger for larger values. In
other words, the representable floating-point numbers are densest in the real
number line near zero, and grow sparser exponentially as one moves farther away
from zero. By definition,
eps(1.0) is the same as
eps(Float64) since
1.0 is a 64-bit floating-point value.
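Julia also provides nextfloat() and prevfloat(), which return the next larger or smaller representable floating-point number. Together with the bits function (renamed bitstring in recent Julia releases), they show that adjacent floats have adjacent bit patterns:

```julia
julia> x = 1.25f0
1.25f0

julia> bits(x)
"00111111101000000000000000000000"

julia> prevfloat(x)
1.2499999f0

julia> bits(prevfloat(x))
"00111111100111111111111111111111"

julia> nextfloat(x)
1.2500001f0

julia> bits(nextfloat(x))
"00111111101000000000000000000001"
```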
This example highlights the general principle that the adjacent representable floating-point numbers also have adjacent binary integer representations.
If a number doesn’t have an exact floating-point representation, it must be rounded to an appropriate representable value. However, if desired, the manner in which this rounding is done can be changed according to the rounding modes presented in the IEEE 754 standard:
The default mode used is always
RoundNearest, which rounds to the nearest
representable value, with ties rounded towards the nearest value with an even
least significant bit.
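For example, using the with_rounding do-block form from the Julia version this manual describes (recent Julia uses setrounding instead; the default RoundNearest result is shown first):

```julia
julia> 1.1 + 0.1
1.2000000000000002

julia> with_rounding(Float64, RoundDown) do
           1.1 + 0.1
       end
1.2
```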
Background and References¶
Floating-point arithmetic entails many subtleties which can be surprising to users who are unfamiliar with the low-level implementation details. However, these subtleties are described in detail in most books on scientific computation, and also in the following references:
- The definitive guide to floating point arithmetic is the IEEE 754-2008 Standard; however, it is not available for free online.
- For a brief but lucid presentation of how floating-point numbers are represented, see John D. Cook’s article on the subject as well as his introduction to some of the issues arising from how this representation differs in behavior from the idealized abstraction of real numbers.
- Also recommended is Bruce Dawson’s series of blog posts on floating-point numbers.
- For an excellent, in-depth discussion of floating-point numbers and issues of numerical accuracy encountered when computing with them, see David Goldberg’s paper What Every Computer Scientist Should Know About Floating-Point Arithmetic.
- For even more extensive documentation of the history of, rationale for, and issues with floating-point numbers, as well as discussion of many other topics in numerical computing, see the collected writings of William Kahan, commonly known as the “Father of Floating-Point”. Of particular interest may be An Interview with the Old Man of Floating-Point.
Arbitrary Precision Arithmetic¶
To allow computations with arbitrary-precision integers and floating point numbers,
Julia wraps the GNU Multiple Precision Arithmetic Library (GMP) and the GNU MPFR Library, respectively.
The BigInt and
BigFloat types are available in Julia for arbitrary precision
integer and floating point numbers respectively.
Constructors exist to create these types from primitive numerical types, and
parse() can be used to construct them from
AbstractStrings. Once
created, they participate in arithmetic with all other numeric types thanks to Julia’s
type promotion and conversion mechanism:
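For example:

```julia
julia> BigInt(typemax(Int64)) + 1
9223372036854775808

julia> typeof(ans)
BigInt

julia> parse(BigInt, "123456789012345678901234567890")
123456789012345678901234567890
```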
The default precision (in number of bits of the significand) and
rounding mode of
BigFloat operations can be changed globally
by calling
set_bigfloat_precision() and
set_rounding(), and all further calculations will take
these changes into account. Alternatively, the precision or the
rounding can be changed only within the execution of a particular
block of code by using the same functions with a do block:
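As a sketch (using the set_bigfloat_precision / with_bigfloat_precision names from the Julia version this manual describes; recent Julia replaces both with setprecision, and rounding-mode control with setrounding):

```julia
# Change the global default precision for all subsequent BigFloat operations
# (function names as in this manual's Julia version; an assumption here):
set_bigfloat_precision(256)

# Change the precision only inside this block, via the do-block form:
with_bigfloat_precision(40) do
    sqrt(BigFloat(2))   # computed with a 40-bit significand
end
```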
Numeric Literal Coefficients¶
To make common numeric formulas and expressions clearer, Julia allows variables to be immediately preceded by a numeric literal, implying multiplication. This makes writing polynomial expressions much cleaner:
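For example:

```julia
julia> x = 3
3

julia> 2x^2 - 3x + 1
10

julia> 1.5x^2 - .5x + 1
13.0
```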
It also makes writing exponential functions more elegant:
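For example:

```julia
julia> x = 3
3

julia> 2^2x
64
```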
The precedence of numeric literal coefficients is the same as that of unary
operators such as negation. So
2^3x is parsed as
2^(3x), and
2x^3 is parsed as
2*(x^3).
Numeric literals also work as coefficients to parenthesized expressions:
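For example:

```julia
julia> x = 3
3

julia> 2(x-1)^2 - 3(x-1) + 1
3
```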
Additionally, parenthesized expressions can be used as coefficients to variables, implying multiplication of the expression by the variable:
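For example:

```julia
julia> x = 3
3

julia> (x-1)x
6
```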
Neither juxtaposition of two parenthesized expressions, nor placing a variable before a parenthesized expression, however, can be used to imply multiplication:
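For example (the exact error text varies by Julia version; the message shown is from a recent release):

```julia
julia> x = 3
3

julia> (x-1)(x+1)
ERROR: MethodError: objects of type Int64 are not callable

julia> x(x+1)
ERROR: MethodError: objects of type Int64 are not callable
```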
Both expressions are interpreted as function application: any expression that is not a numeric literal, when immediately followed by a parenthetical, is interpreted as a function applied to the values in parentheses (see Functions for more about functions). Thus, in both of these cases, an error occurs since the left-hand value is not a function.
The above syntactic enhancements significantly reduce the visual noise incurred when writing common mathematical formulae. Note that no whitespace may come between a numeric literal coefficient and the identifier or parenthesized expression which it multiplies.
Juxtaposed literal coefficient syntax may conflict with two numeric literal syntaxes: hexadecimal integer literals and engineering notation for floating-point literals. Here are some situations where syntactic conflicts arise:
- The hexadecimal integer literal expression
0xff could be interpreted as the numeric literal
0 multiplied by the variable
xff.
- The floating-point literal expression
1e10 could be interpreted as the numeric literal
1 multiplied by the variable
e10, and similarly with the equivalent
E form.
In both cases, we resolve the ambiguity in favor of interpretation as numeric literals:
- Expressions starting with
0x are always hexadecimal literals.
- Expressions starting with a numeric literal followed by
e or
E are always floating-point literals.
Literal zero and one¶
Julia provides functions which return literal 0 and 1 corresponding to a specified type or the type of a given variable.
| Function | Description |
|---|---|
| zero(x) | Literal zero of type x or type of variable x |
| one(x) | Literal one of type x or type of variable x |
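For example:

```julia
julia> zero(Float32)
0.0f0

julia> zero(1.0)
0.0

julia> one(Int32)
1

julia> one(1.0)
1.0
```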