In ancient history, the concepts of chance and randomness were intertwined with that of fate.
Many ancient peoples threw dice to determine fate, and this later evolved into games of chance.
Most ancient cultures used various methods of divination to attempt to circumvent randomness and fate.
Beyond religion and games of chance, the use of randomness for sortition has been attested since at least ancient
Athenian democracy, in the form of the kleroterion.
The formalization of odds and chance was perhaps done earliest by the Chinese, some 3,000 years ago.
The Greek philosophers discussed randomness at length, but only in non-quantitative forms.
It was only in the 16th century that Italian mathematicians began to formalize the odds associated
with various games of chance. The invention of calculus had a positive impact on the formal study of
randomness. In the 1888 edition of his book The Logic of Chance, John Venn wrote a chapter on "The
conception of randomness" that included his view of the randomness of the digits of pi, using them
to construct a random walk in two dimensions.
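For a rough sense of this kind of construction, the sketch below maps decimal digits of pi to steps on a grid. The specific rule used here (digits 0 through 7 pick one of eight compass directions, 8 and 9 are skipped) is an illustrative assumption rather than Venn's exact procedure.

```python
# First 60 decimal digits of pi (after the "3."), hard-coded for self-containment.
PI_DIGITS = "141592653589793238462643383279502884197169399375105820974944"

# Eight compass directions as unit steps, indexed 0..7.
DIRECTIONS = [(1, 0), (1, 1), (0, 1), (-1, 1), (-1, 0), (-1, -1), (0, -1), (1, -1)]

def pi_walk(digits):
    """Turn a digit string into a 2-D walk: digits 0-7 pick a direction, 8-9 are skipped."""
    x, y = 0, 0
    path = [(x, y)]
    for ch in digits:
        d = int(ch)
        if d > 7:                  # only eight directions are available
            continue
        dx, dy = DIRECTIONS[d]
        x, y = x + dx, y + dy
        path.append((x, y))
    return path

print(pi_walk(PI_DIGITS)[:10])     # the first few positions of the walk
```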
The early part of the 20th century saw a rapid growth in the formal analysis of randomness,
as various approaches to the mathematical foundations of probability were introduced. In the mid-to-late-20th century,
ideas of algorithmic information theory introduced new dimensions to the field via the concept of algorithmic randomness.
Although randomness had often been viewed as an obstacle and a nuisance for many centuries, in the 20th century computer
scientists began to realize that the deliberate introduction of randomness into computations can be an effective tool for
designing better algorithms. In some cases, such randomized algorithms even outperform the best deterministic methods.
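As an illustration of the kind of algorithm meant here, the sketch below shows quicksort with a randomly chosen pivot, a standard textbook example: the random choice gives an expected O(n log n) running time on every input, whereas any fixed pivot rule has inputs that force quadratic behavior.

```python
import random

def randomized_quicksort(items):
    """Quicksort with a uniformly random pivot.

    Choosing the pivot at random makes the expected running time
    O(n log n) on every input, whereas a fixed pivot rule can be
    driven to O(n^2) by an adversarially ordered input.
    """
    if len(items) <= 1:
        return list(items)
    pivot = random.choice(items)
    smaller = [x for x in items if x < pivot]
    equal = [x for x in items if x == pivot]
    larger = [x for x in items if x > pivot]
    return randomized_quicksort(smaller) + equal + randomized_quicksort(larger)

print(randomized_quicksort([5, 3, 8, 1, 9, 2, 7]))
```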
In the physical sciences
In the 19th century, scientists used the idea of random motions of molecules in the development of statistical mechanics
to explain phenomena in thermodynamics and the properties of gases.
According to several standard interpretations of quantum mechanics, microscopic phenomena are objectively random.
That is, in an experiment that controls all causally relevant parameters, some aspects of the outcome still vary randomly.
For example, if a single unstable atom is placed in a controlled environment, it cannot be predicted how long it will take
for the atom to decay—only the probability of decay in a given time. Thus, quantum mechanics does not specify the
outcome of individual experiments, but only the probabilities. Hidden variable theories reject the view that nature
contains irreducible randomness: such theories posit that in the processes that appear random, properties with a certain
statistical distribution are at work behind the scenes, determining the outcome in each case.
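The decay example above can be made concrete with a small simulation, assuming the standard exponential model in which the probability of decay within time t is 1 - exp(-t/τ); the mean lifetime and seed below are arbitrary illustrative values.

```python
import random

def sample_decay_times(mean_lifetime, n, seed=None):
    """Draw n random decay times for an unstable atom.

    Under the usual exponential model only the distribution of decay
    times is fixed (P(decay by t) = 1 - exp(-t / mean_lifetime));
    each individual decay time remains unpredictable.
    """
    rng = random.Random(seed)
    return [rng.expovariate(1.0 / mean_lifetime) for _ in range(n)]

times = sample_decay_times(mean_lifetime=10.0, n=5, seed=42)  # illustrative numbers
print(times)                      # five unpredictable individual outcomes
print(sum(times) / len(times))    # the average approaches the mean lifetime as n grows
```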
In biology
The modern evolutionary synthesis ascribes the observed diversity of life to random genetic mutations followed by natural
selection. The latter retains some random mutations in the gene pool due to the systematically improved chance for survival
and reproduction that those mutated genes confer on individuals who possess them. The location of a mutation is not entirely
random, however; for example, biologically important regions of the genome may be better protected from mutations.
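A toy simulation can illustrate the interplay of random mutation and non-random selection; the population size, fitness advantage, and starting frequency below are invented purely for illustration.

```python
import random

def simulate_selection(pop_size=1000, generations=100, advantage=0.05,
                       start_freq=0.1, seed=1):
    """Toy simulation of random mutation followed by natural selection.

    Each individual either carries a mutant gene (True) or not (False).
    In every generation the next population is sampled with probabilities
    weighted by fitness, so a mutation that improves survival and
    reproduction tends to rise in frequency: random variation is filtered
    by non-random selection.
    """
    rng = random.Random(seed)
    n_mutants = int(pop_size * start_freq)
    population = [True] * n_mutants + [False] * (pop_size - n_mutants)
    for _ in range(generations):
        weights = [1.0 + advantage if mutant else 1.0 for mutant in population]
        population = rng.choices(population, weights=weights, k=pop_size)
    return sum(population) / pop_size    # final frequency of the mutant gene

# With these illustrative parameters the mutant usually climbs from 10%
# toward fixation, although drift still makes each run slightly different.
print(simulate_selection())
```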
Several authors also claim that evolution (and sometimes development) requires a specific form of randomness, namely the introduction
of qualitatively new behaviors. Instead of the choice of one possibility among several pre-given ones, this randomness corresponds to
the formation of new possibilities.
The characteristics of an organism arise to some extent deterministically (e.g., under the influence of genes and the environment),
and to some extent randomly. For example, the density of freckles that appear on a person's skin is controlled by genes and exposure to
light; whereas the exact location of individual freckles seems random.
As far as behavior is concerned, randomness is important if an animal is to behave in a way that is unpredictable to others. For
instance, insects in flight tend to move about with random changes in direction, making it difficult for pursuing predators to predict
their trajectories.
In mathematics
The mathematical theory of probability arose from attempts to formulate mathematical descriptions of chance events,
originally in the context of gambling, but later in connection with physics. Statistics is used to infer the underlying
probability distribution of a collection of empirical observations. For the purposes of simulation, it is necessary to
have a large supply of random numbers—or means to generate them on demand.
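As a deliberately simple illustration of "means to generate them on demand", the sketch below implements a linear congruential generator using the widely cited Numerical Recipes constants; serious simulations rely on stronger generators.

```python
class LinearCongruentialGenerator:
    """Minimal pseudorandom number generator: x_{n+1} = (a*x_n + c) mod m.

    The constants are the widely cited Numerical Recipes parameters; this
    illustrates generating random-looking numbers on demand and is not a
    generator suitable for serious statistics or cryptography.
    """

    def __init__(self, seed=12345, a=1664525, c=1013904223, m=2**32):
        self.state = seed
        self.a, self.c, self.m = a, c, m

    def next_uniform(self):
        """Return a pseudorandom float in [0, 1)."""
        self.state = (self.a * self.state + self.c) % self.m
        return self.state / self.m

rng = LinearCongruentialGenerator(seed=2024)
print([round(rng.next_uniform(), 4) for _ in range(5)])
```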
Algorithmic information theory studies, among other topics, what constitutes a random sequence. The central idea is
that a string of bits is random if and only if no computer program shorter than the string itself can produce it
(Kolmogorov randomness), which means that random strings are those that cannot be compressed. Pioneers of this field
include Andrey Kolmogorov and his student Per Martin-Löf, Ray Solomonoff, and Gregory Chaitin. For the notion of
infinite sequence, mathematicians generally accept Per Martin-Löf's semi-eponymous definition: An infinite sequence is
random if and only if it withstands all recursively enumerable null sets. The other notions of random sequences
include, among others, recursive randomness and Schnorr randomness, which are based on recursively computable martingales.
It was shown by Yongge Wang that these randomness notions are generally different.
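An informal way to get a feel for the incompressibility idea is to use an off-the-shelf compressor as a crude stand-in for the shortest program; this is only a heuristic, since Kolmogorov complexity itself is uncomputable.

```python
import os
import zlib

def compressed_ratio(data: bytes) -> float:
    """Length of the zlib-compressed data divided by the original length."""
    return len(zlib.compress(data, 9)) / len(data)

highly_patterned = b"01" * 50_000          # 100 kB with an obvious short description
random_looking = os.urandom(100_000)       # 100 kB from the OS entropy source

print(compressed_ratio(highly_patterned))  # far below 1: the pattern compresses well
print(compressed_ratio(random_looking))    # around 1 or slightly above: no shortcut found
```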
Randomness occurs in numbers such as log(2) and pi. The decimal digits of pi constitute an infinite sequence and
"never repeat in a cyclical fashion." Numbers like pi are also considered likely to be normal:
Pi certainly seems to behave this way. In the first six billion decimal places of pi, each of the digits from 0
through 9 shows up about six hundred million times. Yet such results, conceivably accidental, do not prove normality
even in base 10, much less normality in other number bases.
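A digit-frequency count along these lines is easy to reproduce on a smaller scale; the sketch below assumes the third-party mpmath library as a source of digits of pi.

```python
from collections import Counter

from mpmath import mp  # third-party arbitrary-precision arithmetic library

def pi_digit_counts(n_digits: int) -> Counter:
    """Count how often each decimal digit occurs among the first n_digits of pi."""
    mp.dps = n_digits + 20                      # extra working precision for safety
    pi_value = +mp.pi                           # evaluate pi at the current precision
    pi_string = mp.nstr(pi_value, n_digits + 10)  # e.g. "3.14159265..."
    fractional = pi_string.split(".", 1)[1][:n_digits]
    return Counter(fractional)

counts = pi_digit_counts(10_000)
for digit in sorted(counts):
    print(digit, counts[digit])   # each of 0-9 shows up roughly 1,000 times
```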