At the heart of the universal prior is an abstract model of a computer, such as a universal Turing machine.[1] Any abstract computer will do, as long as it is Turing-complete, i.e. every finite binary string has at least one program that will compute it on the abstract computer.
The abstract computer is used to give precise meaning to the phrase 'simple explanation'. In the formalism used, explanations, or theories of phenomena, are computer programs that generate observation strings when run on the abstract computer. A simple explanation is a short computer program; a complex explanation is a long computer program. Simple explanations are more likely, so a high-probability observation string is one generated by a short computer program, or perhaps by any of a large number of slightly longer computer programs. A low-probability observation string is one that can only be generated by a long computer program.
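One standard way to make this weighting concrete (a textbook convention, not a quotation from the text above) is to give each program p of length |p| bits the weight

    2^{-|p|}

so that, for example, a 10-bit program contributes 2^{-10} (about 0.001) to the probability of the observation string it generates, while a 20-bit program contributes only 2^{-20} (about one millionth). The probability of a string is then obtained by summing these weights over all programs that generate it.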
These ideas can be made precise, and the resulting probabilities used to construct a prior probability distribution for the given observation. Solomonoff's main reason for inventing this prior is so that it can be used in Bayes' rule when the actual prior is unknown, enabling prediction under uncertainty. The prior predicts the most likely continuation of that observation, and provides a measure of how likely this continuation will be.
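A sketch of the prediction step, in standard notation (writing M for the universal prior described above): the probability that an observed string x continues with a bit b is the conditional probability

    M(b | x) = M(xb) / M(x)

which is Bayes' rule applied with M in place of the unknown true prior; the continuation with the larger value of M(b | x) is the predicted one, and the value itself measures how likely that continuation is.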
Although the universal probability of an observation (and its extension) is incomputable, there is a computer algorithm, Levin search, which, when run for longer and longer periods of time, generates a sequence of approximations that converges to the universal probability distribution.
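One standard way to picture these approximations (a common construction in the literature; U denotes the fixed abstract computer): define, for each time bound t,

    M_t(x) = \sum \{ 2^{-|p|} : |p| \le t, and U(p) outputs a string beginning with x within t steps \}

Each M_t(x) is computable, the values never decrease as t grows, and M_t(x) tends to the universal probability of x as t goes to infinity; this is the sense in which the incomputable quantity can be approximated from below.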
Solomonoff proved this distribution to be machine-invariant within a constant factor (called the invariance theorem).
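A standard statement of the invariance theorem (paraphrased from the usual textbook form): for any two universal machines U and V there is a constant c > 0, depending only on U and V and not on the string x, such that

    M_U(x) \ge c \, M_V(x)   for all finite binary strings x

so changing the reference machine changes every probability by at most a fixed multiplicative factor.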
Solomonoff invented the concept of algorithmic probability with its associated invariance theorem around 1960.⁶ He clarified these ideas more fully in 1964 with two more publications.⁸⁹
A special mathematical object called a universal Turing machine is used to compute, quantify and assign codes to all quantities of interest.[2] The universal prior is taken over the class of all computable measures; no hypothesis will have a zero probability.
Algorithmic probability combines Occam's razor and the principle of multiple explanations by giving a probability value to each hypothesis (algorithm or program) that explains a given observation, with the simplest hypothesis (the shortest program) having the highest probability and the increasingly complex hypotheses (longer programs) receiving increasingly small probabilities. These probabilities form a prior probability distribution for the observation, which Ray Solomonoff proved to be machine-invariant within a constant factor (called the invariance theorem) and can be used with Bayes' theorem to predict the most likely continuation of that observation. A universal Turing machine is used for the computer operations.
In algorithmic information theory, algorithmic (Solomonoff) probability is a mathematical method of assigning a prior probability to a given observation. It was invented by Ray Solomonoff in the 1960s. It is used in inductive inference theory and analyses of algorithms. In his general theory of inductive inference, Solomonoff uses the prior obtained by this formula, in Bayes' rule for prediction.
In the mathematical formalism used, the observations have the form of finite binary strings, and the universal prior is a probability distribution over the set of finite binary strings. The prior is universal in the Turing-computability sense, i.e. no string has zero probability. It is not computable,[clarification needed] but it can be approximated.[3]
In algorithmic information theory, algorithmic (Solomonoff) probability is a mathematical method of assigning a prior probability to a given observation. In a theoretic sense, the prior is universal. It is used in inductive inference theory, and analyses of algorithms. Since it is not computable,[clarification needed] it must be approximated.[4]
Overview
Algorithmic probability deals with the questions: Given a body of data about some phenomenon that one wants to understand, how can one select the most probable hypothesis of how it was caused from among all possible hypotheses, how can one evaluate the different hypotheses, and how can one predict future data?
Among Solomonoff's inspirations for algorithmic probability were Occam's razor and Epicurus' principle of multiple explanations. These are essentially two different non-mathematical approximations of the universal prior.
Occam's razor means 'among the theories that are consistent with the observed phenomena, one should select the simplest theory'.[5]
Epicurus's Principle of Multiple Explanations proposes that 'if more than one theory is consistent with the observations, keep all such theories'.[6]
At the heart of the universal prior is an abstract model of a computer, such as a universal Turing machine. Any abstract computer will do, as long as it is Turing-complete, i.e. every finite binary string has
at least one program that will compute it on the abstract computer.
The abstract computer is used to give precise meaning to the phrase 'simple explanation'. In the formalism used, explanations, or theories of phenomena, are computer programs that generate observation strings when run on the abstract computer. A simple explanation is a short computer program; a complex explanation is a long computer program. Simple explanations are more likely, so a high-probability observation string is one generated by a short computer program, or perhaps by any of a large number of slightly longer computer programs. A low-probability observation string is one that can only be generated by a long computer program.
Algorithmic probability combines several ideas: Occam's razor; Epicurus' principle of multiple explanations; and special coding methods from modern computing theory. The prior obtained from the formula is used in Bayes rule for prediction.[7]
In contrast, Epicurus had proposed the Principle of Multiple Explanations: if more than one theory is consistent with the observations, keep all such theories.[8]
A special mathematical object called a universal Turing machine is used to compute, quantify and assign codes to all quantities of interest.[9] The universal prior is taken over the class of all computable measures; no hypothesis will have a zero probability.
Algorithmic probability combines Occam's razor and the principle of multiple explanations by giving a probability value to each hypothesis (algorithm or program) that explains a given observation, with the simplest hypothesis (the shortest program) having the highest probability and the increasingly complex hypotheses (longer programs) receiving increasingly small probabilities. These probabilities form a prior probability distribution for the observation, which Ray Solomonoff proved to be machine-invariant within a constant factor (called the invariance theorem) and can be used with Bayes' theorem to predict the most likely continuation of that observation. A universal Turing machine is used for the computer operations.
Solomonoff invented the concept of algorithmic probability with its associated invariance theorem around 1960,[10] publishing a report on it: "A Preliminary Report on a General Theory of Inductive Inference."[11] He clarified these ideas more fully in 1964 with "A Formal Theory of Inductive Inference," Part I[12] and Part II.[13]
He described a universal computer with a randomly generated input program. The program computes some possibly infinite output. The universal probability distribution is the probability distribution on all possible output strings with random input.[14]
The algorithmic probability of any given finite output prefix q is the sum of the probabilities of the programs that compute something starting with q. Certain long objects with short programs have high probability.
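In symbols (a standard formulation, assuming a universal machine U whose programs are self-delimiting):

    M(q) = \sum_{p : U(p) begins with q} 2^{-|p|}

For example, a string of a million alternating bits has a very short generating program, so a single large term 2^{-|p|} dominates this sum and the string receives high probability despite its length; a typical irregular string of the same length has no short program and receives only the tiny contributions of programs roughly as long as itself.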
Algorithmic probability is the main ingredient of Solomonoff's theory of inductive inference, the theory of prediction based on observations; it was invented with the goal of using it for machine learning: given a sequence of symbols, which one will come next? Solomonoff's theory provides an answer that is optimal in a certain sense, although it is incomputable. Unlike, for example, Karl Popper's informal inductive inference theory,[clarification needed] Solomonoff's is mathematically rigorous.
Algorithmic probability is closely related to the concept of Kolmogorov complexity. Kolmogorov's introduction of complexity was motivated by information theory and problems in randomness, while Solomonoff introduced algorithmic complexity for a different reason: inductive reasoning. A single universal prior probability that can be substituted for each actual prior probability in Bayes’s rule was invented by Solomonoff with Kolmogorov complexity as a side product.[15]
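The relationship can be stated compactly for the discrete version of the universal distribution, usually written m(x) (the coding theorem, a standard result rather than anything specific to this article):

    -\log_2 m(x) = K(x) + O(1)

where K(x) is the (prefix) Kolmogorov complexity of x; up to an additive constant, the most probable explanation of x is its shortest program.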
Solomonoff's enumerable measure is universal in a certain powerful sense, but the computation time can be infinite. One way of dealing with this issue is a variant of Leonid Levin's Search Algorithm,[16] which limits the time spent computing the success of possible programs, with shorter programs given more time. Other methods of limiting the search space include training sequences.
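To make the time-allocation idea concrete, here is a minimal, self-contained Python sketch (an illustrative toy by the present editor, not Levin's or Solomonoff's actual procedure; the function names and the trivial "machine" are invented for the example): programs are bitstrings run on a machine that simply repeats the program body, and each program of length l receives a share of the total step budget proportional to 2^{-l}, so shorter programs get more time.

    # Toy sketch of Levin-style time allocation (illustrative only).
    # The "machine" here is deliberately trivial: a program is a bitstring
    # whose output is its body repeated forever; one output bit costs one step.
    from itertools import product

    def toy_machine(program, max_steps):
        """Return the output bits this toy program produces within max_steps steps."""
        if not program:
            return ""
        out = []
        for step in range(max_steps):
            out.append(program[step % len(program)])
        return "".join(out)

    def levin_style_weight(target, total_budget=10_000, max_len=8):
        """Accumulate 2^(-len(p)) over toy programs whose output starts with `target`,
        giving each program of length l a time slice proportional to 2^(-l)."""
        weight = 0.0
        for l in range(1, max_len + 1):
            slice_per_program = int(total_budget * 2.0 ** (-l))  # shorter programs get more steps
            for bits in product("01", repeat=l):
                p = "".join(bits)
                # A program only "succeeds" if its time slice lets it emit enough bits.
                out = toy_machine(p, min(slice_per_program, len(target)))
                if len(out) >= len(target) and out.startswith(target):
                    weight += 2.0 ** (-l)
        return weight

    if __name__ == "__main__":
        print(levin_style_weight("01010101"))  # regular string, explained by the 2-bit program "01"
        print(levin_style_weight("01101000"))  # irregular string of the same length

Run as written, the regular string accumulates roughly two orders of magnitude more weight than the irregular one, because the two-bit program already explains it and short programs carry most of both the probability and the time budget.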
Key people
See also
References
- ^ Hutter, M., "Algorithmic Information Theory", Scholarpedia, 2(3):2519.
- ^ Hutter, M., "Algorithmic Information Theory", Scholarpedia, 2(3):2519.
- ^ Hutter, 2(8):2572, 2007.
- ^ Hutter, 2(8):2572, 2007.
- ^ Li and Vitanyi, 2008, p. 341
- ^ Li and Vitanyi, 2008, p. 339.
- ^ Li and Vitanyi, 2008, p. 347
- ^ Li and Vitanyi, 2008, p. 339.
- ^ Hutter, M., "Algorithmic Information Theory", Scholarpedia, 2(3):2519.
- ^ Solomonoff, R., "The Discovery of Algorithmic Probability", Journal of Computer and System Sciences, Vol. 55, No. 1, pp. 73-88, August 1997.
- ^ Solomonoff, R., "A Preliminary Report on a General Theory of Inductive Inference", Report V-131, Zator Co., Cambridge, Ma. (Nov. 1960 revision of the Feb. 4, 1960 report).
- ^ Solomonoff, R., "A Formal Theory of Inductive Inference, Part I". Information and Control, Vol 7, No. 1 pp 1-22, March 1964.
- ^ Solomonoff, R., "A Formal Theory of Inductive Inference, Part II" Information and Control, Vol 7, No. 2 pp 224–254, June 1964.
- ^ Solomonoff, R., "The Kolmogorov Lecture: The Universal Distribution and Machine Learning" The Computer Journal, Vol 46, No. 6 p 598, 2003.
- ^ Gács, P. and Vitányi, P., "In Memoriam Raymond J. Solomonoff", IEEE Information Theory Society Newsletter, Vol. 61, No. 1, March 2011, p 11.
- ^ Levin, L.A., "Universal Search Problems", in Problemy Peredaci Informacii 9, pp. 115–116, 1973
Sources
- Li, M. and Vitanyi, P., An Introduction to Kolmogorov Complexity and Its Applications, 3rd Edition, Springer Science and Business Media, N.Y., 2008
Further reading
- Rathmanner, S and Hutter, M., "A Philosophical Treatise of Universal Induction" in Entropy 2011, 13, 1076-1136: A very clear philosophical and mathematical analysis of Solomonoff's Theory of Inductive Inference
External links
Algorithmic Probability
Algorithmic probability
In algorithmic information theory, algorithmic (Solomonoff) probability is a mathematical method of assigning a prior probability to a given observation. It was invented by Ray Solomonoff in the 1960s. It is used in inductive inference theory, and analyses of algorithms.
In the mathematical formalism used, the observations have the form of finite binary strings, and the universal prior is a probability distribution over the set of finite binary strings. The prior is universal in the Turing-computability sense, i.e. no string has zero probability. It is not computable, but it can be approximated.¹
Overview
Algorithmic probability deals with the questions: Given a body of data about some phenomenon that one wants to understand, how can one select the most probable hypothesis of how it was caused from among all possible hypotheses, how can one evaluate the different hypotheses, and how can one predict future data?
Among Solomonoff's inspirations for algorithmic probability were Occam's razor and Epicurus' principle of multiple explanations. These are essentially two different nonmathematical approximations of the universal prior.
Occam's razor means 'among the theories that are consistent with the observed phenomena, one should select the simplest theory'.³
Epicurus's Principle of Multiple Explanations proposes that 'if more than one theory is consistent with the observations, keep all such theories'.⁴
At the heart of the universal prior is an abstract model of a computer, such as a universal Turing machine. Any abstract computer will do, as long as it is Turing-complete, i.e. every finite binary string has at least one program that will compute it on the abstract computer.
The abstract computer is used to give precise meaning to the phrase 'simple explanation'. In the formalism used, explanations, or theories of phenomena, are computer programs that generate observation strings when run on the abstract computer. A simple explanation is a short computer program; a complex explanation is a long computer program. Simple explanations are more likely, so a high-probability observation string is one generated by a short computer program, or perhaps by any of a large number of slightly longer computer programs. A low-probability observation string is one that can only be generated by a long computer program.
These ideas can be made specific and used to construct a prior probability distribution for the observation. Solomonoff proved this distribution to be machine-invariant within a constant factor (the invariance theorem), and showed that it can be used with Bayes' theorem to predict the most likely continuation of that observation.
Solomonoff invented the concept of algorithmic probability with its associated invariance theorem around 1960.⁶ He clarified these ideas more fully in 1964 with two more publications.⁸⁹
He described a universal computer with a randomly generated input program.
The program computes some possibly infinite output. The universal
probability distribution is the probability distribution on all possible
output strings with random input.¹⁰
The algorithmic probability of any given finite output prefix q is the sum of the probabilities of the programs that compute something starting with q. Certain long objects with short programs have high probability.
Algorithmic probability is the main ingredient of Solomonoff's theory of inductive inference, the theory of prediction based on observations; it was invented with the goal of using it for machine learning: given a sequence of symbols, which one will come next? Solomonoff's theory provides an answer that is optimal in a certain sense, although it is incomputable. Unlike, for example, Karl Popper's informal inductive inference theory, Solomonoff's is mathematically rigorous.
Algorithmic probability is closely related to the concept of Kolmogorov complexity. Kolmogorov's introduction of complexity was motivated by information theory and problems in randomness, while Solomonoff introduced algorithmic complexity for a different reason: inductive reasoning. A single universal prior probability that can be substituted for each actual prior probability in Bayes's rule was invented by Solomonoff with Kolmogorov complexity as a side product.¹¹
Solomonoff's enumerable measure is universal in a certain powerful sense, but the computation time can be arbitrarily large. One way of dealing with this is a variant of Leonid Levin's Search Algorithm,¹² which limits the time spent computing the success of possible programs, with shorter programs given more time. Other methods of limiting the search space include training sequences.
Key people
- Ray Solomonoff
- Andrey Kolmogorov
- Leonid Levin
See also
- Solomonoff's theory of inductive inference
- Algorithmic information theory
- Bayesian inference
- Inductive inference
- Inductive probability
- Kolmogorov complexity
- Universal Turing machine
- Information-based complexity
References
[1] Hutter, M., Legg, S., and Vitanyi, P., "Algorithmic Probability", Scholarpedia, 2(8):2572, 2007.
[2] Li, M. and Vitanyi, P., An Introduction to Kolmogorov Complexity and Its Applications, 3rd Edition, Springer Science and Business Media, N.Y., 2008, p 347.
[3] ibid, p. 341.
[4] ibid, p. 339.
[5] Hutter, M., "Algorithmic Information Theory", Scholarpedia, 2(3):2519.
[6] Solomonoff, R., "The Discovery of Algorithmic Probability", Journal of Computer and System Sciences, Vol. 55, No. 1, pp. 73-88, August 1997.
[7] Solomonoff, R., "A Preliminary Report on a General Theory of Inductive Inference", Report V-131, Zator Co., Cambridge, Ma. (Nov. 1960 revision of the Feb. 4, 1960 report).
[8] Solomonoff, R., "A Formal Theory of Inductive Inference, Part I", Information and Control, Vol 7, No. 1, pp 1-22, March 1964.
[9] Solomonoff, R., "A Formal Theory of Inductive Inference, Part II", Information and Control, Vol 7, No. 2, pp 224–254, June 1964.
[10] Solomonoff, R., "The Kolmogorov Lecture: The Universal Distribution and Machine Learning", The Computer Journal, Vol 46, No. 6, p 598, 2003.
[11] Gács, P. and Vitányi, P., "In Memoriam Raymond J. Solomonoff", IEEE Information Theory Society Newsletter, Vol. 61, No. 1, March 2011, p 11.
[12] Levin, L.A., "Universal Search Problems", Problemy Peredaci Informacii 9, pp. 115–116, 1973.
Further reading
- Rathmanner, S and Hutter, M., "A Philosophical Treatise of Universal
Induction" in Entropy 2011, 13, 1076-1136: A very clear philosophical and mathematical analysis of Solomonoff's Theory of Inductive Inference
External links
- Algorithmic Probability at Scholarpedia
- Solomonoff's publications
In algorithmic information theory, algorithmic (Solomonoff) probability is a mathematical method of assigning a prior probability to a given observation. It was invented by Ray Solomonoff in the 1960s. It is used in inductive inference theory and analyses of algorithms.
In the mathematical formalism used, the observations have the form of finite binary strings, and the universal prior is a probability distribution over the set of finite binary strings. The prior is universal in the Turing-computability sense, i.e. no string has zero probability. It is not computable,[clarification needed] but it can be approximated.[1]
Overview
Algorithmic probability deals with the questions: Given a body of data about some phenomenon that one wants to understand, how can one select the most probable hypothesis of how it was caused from among all possible hypotheses, how can one evaluate the different hypotheses, and how can one predict future data?
Algorithmic probability combines several ideas: Occam's razor; Epicurus' principle of multiple explanations; and special coding methods from modern computing theory. The prior obtained from the formula is used in Bayes rule for prediction.[2]
Occam's razor means 'among the theories that are consistent with the observed phenomena, one should select the simplest theory'.[3]
In contrast, Epicurus had proposed the Principle of Multiple Explanations: if more than one theory is consistent with the observations, keep all such theories.[4]
A special mathematical object called a universal Turing machine is used to compute, quantify and assign codes to all quantities of interest.[5] The universal prior is taken over the class of all computable measures; no hypothesis will have a zero probability.
Algorithmic probability combines Occam's razor and the principle of multiple explanations by giving a probability value to each hypothesis (algorithm or program) that explains a given observation, with the simplest hypothesis (the shortest program) having the highest probability and the increasingly complex hypotheses (longer programs) receiving increasingly small probabilities. These probabilities form a prior probability distribution for the observation, which Ray Solomonoff proved to be machine-invariant within a constant factor (called the invariance theorem) and can be used with Bayes' theorem to predict the most likely continuation of that observation. A universal Turing machine is used for the computer operations.
Solomonoff invented the concept of algorithmic probability with its associated invariance theorem around 1960,[6] publishing a report on it: "A Preliminary Report on a General Theory of Inductive Inference."[7] He clarified these ideas more fully in 1964 with "A Formal Theory of Inductive Inference," Part I[8] and Part II.[9]
He described a universal computer with a randomly generated input program. The program computes some possibly infinite output. The universal probability distribution is the probability distribution on all possible output strings with random input.[10]
The algorithmic probability of any given finite output prefix q is the sum of the probabilities of the programs that compute something starting with q. Certain long objects with short programs have high probability.
Algorithmic probability is the main ingredient of Solomonoff's theory of inductive inference, the theory of prediction based on observations; it was invented with the goal of using it for machine learning: given a sequence of symbols, which one will come next? Solomonoff's theory provides an answer that is optimal in a certain sense, although it is incomputable. Unlike, for example, Karl Popper's informal inductive inference theory,[clarification needed] Solomonoff's is mathematically rigorous.
Algorithmic probability is closely related to the concept of Kolmogorov complexity. Kolmogorov's introduction of complexity was motivated by information theory and problems in randomness, while Solomonoff introduced algorithmic complexity for a different reason: inductive reasoning. A single universal prior probability that can be substituted for each actual prior probability in Bayes’s rule was invented by Solomonoff with Kolmogorov complexity as a side product.[11]
Solomonoff's enumerable measure is universal in a certain powerful sense, but the computation time can be infinite. One way of dealing with this issue is a variant of Leonid Levin's Search Algorithm,[12] which limits the time spent computing the success of possible programs, with shorter programs given more time. Other methods of limiting the search space include training sequences.
Key people
See also
References
- ^ Hutter, M., Legg, S., and Vitanyi, P., "Algorithmic Probability", Scholarpedia, 2(8):2572, 2007.
- ^ Li and Vitanyi, 2008, p. 347
- ^ Li and Vitanyi, 2008, p. 341
- ^ Li and Vitanyi, 2008, p. 339.
- ^ Hutter, M., "Algorithmic Information Theory", Scholarpedia, 2(3):2519.
- ^ Solomonoff, R., "The Discovery of Algorithmic Probability", Journal of Computer and System Sciences, Vol. 55, No. 1, pp. 73-88, August 1997.
- ^ Solomonoff, R., "A Preliminary Report on a General Theory of Inductive Inference", Report V-131, Zator Co., Cambridge, Ma. (Nov. 1960 revision of the Feb. 4, 1960 report).
- ^ Solomonoff, R., "A Formal Theory of Inductive Inference, Part I". Information and Control, Vol 7, No. 1 pp 1-22, March 1964.
- ^ Solomonoff, R., "A Formal Theory of Inductive Inference, Part II" Information and Control, Vol 7, No. 2 pp 224–254, June 1964.
- ^ Solomonoff, R., "The Kolmogorov Lecture: The Universal Distribution and Machine Learning" The Computer Journal, Vol 46, No. 6 p 598, 2003.
- ^ Gács, P. and Vitányi, P., "In Memoriam Raymond J. Solomonoff", IEEE Information Theory Society Newsletter, Vol. 61, No. 1, March 2011, p 11.
- ^ Levin, L.A., "Universal Search Problems", in Problemy Peredaci Informacii 9, pp. 115–116, 1973
Sources
- Li, M. and Vitanyi, P., An Introduction to Kolmogorov Complexity and Its Applications, 3rd Edition, Springer Science and Business Media, N.Y., 2008
Further reading
- Rathmanner, S and Hutter, M., "A Philosophical Treatise of Universal Induction" in Entropy 2011, 13, 1076-1136: A very clear philosophical and mathematical analysis of Solomonoff's Theory of Inductive Inference
External links
The Dartmouth Summer Research Project on Artificial Intelligence was the name of a 1956 summer workshop now considered by many[1][2] (though not all[3])
to be the seminal event for artificial intelligence as a field.
The project lasted approximately 6 to 8 weeks and was essentially an extended brainstorming session. Eleven mathematicians and scientists were originally planned as attendees, and while not all of them attended, more than ten others came for short times.
Planning the Summer Research Project: The Proposal
In the early 1950s, there were various names for the field of "thinking machines", such as cybernetics, automata theory, and complex information processing.[4] These indicate how different the ideas were on what such machines would be like.
In 1955 John McCarthy, then a young Assistant Professor of Mathematics at Dartmouth College, decided to organize a group to clarify and develop ideas about thinking machines. He picked the name 'Artificial Intelligence' for the new field. He chose the name partly for its neutrality: it avoided a focus on narrow automata theory, and it avoided cybernetics, which was heavily focused on analog feedback and might have obliged him to accept the assertive Norbert Wiener as guru or to argue with him.[5]
In early 1955, McCarthy approached the Rockefeller Foundation to request funding for a summer seminar at Dartmouth for about 10 participants. In June, he and Claude Shannon, a founder of Information Theory then at Bell Labs, met with Robert Morison, Director of Biological and Medical Research, to discuss the idea and possible funding, though Morison was unsure whether money would be made available for such a visionary project.[6]
On September 2, 1955, the project was formally proposed by McCarthy, Marvin Minsky, Nathaniel Rochester and Claude Shannon. The proposal is credited with introducing the term 'artificial intelligence'.
The proposal states:[7]
“We propose that a 2 month, 10 man study of artificial intelligence be carried out during the summer of 1956 at Dartmouth College in Hanover, New Hampshire. The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it. An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves. We think that a significant advance can be made in one or more of these problems if a carefully selected group of scientists work on it together for a summer.”
The proposal goes on to discuss computers, natural language processing, neural networks, theory of computation, abstraction and creativity (these areas within the field of artificial intelligence are considered still relevant to the work of the field). [8]
On May 26, 1956, McCarthy notified Robert Morison of the planned 11 attendees:
For the full period:
1) Dr. Marvin Minsky 2) Dr. Julian Bigelow 3) Professor D.M. Mackay 4) Mr. Ray Solomonoff 5) Mr. John Holland 6) Mr. John McCarthy.
For four weeks:
7) Dr. Claude Shannon 8) Mr. Nathanial Rochester 9) Mr. Oliver Selfridge.
For the first two weeks:
10) Mr. Allen Newell 11) Professor Herbert Simon.
He noted, "We will concentrate on a problem of devising a way of programming a calculator to form concepts and to form generalizations. This of course is subject to change when the group gets together."[9]
The actual participants came at different times, mostly for much shorter times. Trenchard More replaced Rochester for three weeks and MacKay and Holland did not attend --- but the project was set to begin.
Around June 18, 1956, the earliest participants (perhaps only Ray Solomonoff, maybe with Tom Etter) arrived at the Dartmouth campus in Hanover, N.H., to join John McCarthy who already had an apartment there. Ray and Marvin stayed at Professors' apartments, but most would stay at the Hanover Inn.
When Did It Happen?
The Dartmouth Workshop is said to have run for six weeks in the summer of 1956.[10] Ray Solomonoff's notes written during the Workshop time, 1956, however, say it ran for "roughly eight weeks, from about June 18 to August 17."[11] Solomonoff's Dartmouth notes start on June 22; June 28 mentions Minsky, June 30 mentions Hanover, N.H., July 1 mentions Tom Etter. On August 17, Ray gave a final talk.[12]
Who Was There?
Unfortunately McCarthy lost his list of attendees! Instead, after the Dartmouth Project McCarthy sent Ray a preliminary list of participants and visitors plus those interested in the subject. There are 47 people listed.[13]
Solomonoff, however, made a complete list in his notes of the summer project:[14]
1) Ray Solomonoff 2) Marvin Minsky 3) John McCarthy 4) Claude Shannon 5) Trenchard More 6) Nat Rochester 7) Oliver Selfridge 8) Julian Bigelow 9) W. Ross Ashby 10) W.S. McCulloch 11) Abraham Robinson 12) Tom Etter 13) John Nash 14) David Sayre 15) Arthur Samuel 16) Shoulders 17) Shoulder's friend 18) Alex Bernstein 19) Herbert Simon 20) Allen Newell
Shannon attended Ray's talk on July 10 and Bigelow gave a talk on August 15. Ray doesn't mention Bernard Widrow, but apparently he visited, along with W.A. Clark and B.G. Farley.[15] Trenchard mentions R. Culver and Ray mentions Bill Shutz. Herb Gelernter didn't attend, but was influenced later by what Rochester learned.[16] Gloria Minsky also commuted there (with their part-beagle dog, Senje, who would start out in the car back seat and end up curled around her like a scarf), and attended some sessions (without Senje)[17].
Ray Solomonoff, Marvin Minsky, and John McCarthy were the only three who stayed for the full time. Trenchard took attendance during two weeks of his three-week visit. From three to about eight people would attend the daily sessions.[18]
The Meetings and Some Results
They had the entire top floor of the Dartmouth Math Department to themselves, and most weekdays they would meet at the main math classroom where someone might lead a discussion focusing on his ideas, or more frequently, a general discussion would be held.
It was not a directed group research project; discussions covered many topics, but several directions are considered to have been initiated or encouraged by the Workshop: the rise of symbolic methods, systems focused on limited domains (early expert systems), and deductive systems versus inductive systems. One participant, Arthur Samuel, said, "It was very interesting, very stimulating, very exciting".[19]
Ray Solomonoff kept notes during the summer giving his impression of the talks and the ideas from various discussions. These are available, along with other notes concerning the Dartmouth Summer Research Project on AI, at: http://raysolomonoff.com/dartmouth/
References
- ^ Solomonoff, R.J., The Time Scale of Artificial Intelligence; Reflections on Social Effects, Human Systems Management, Vol 5, 1985, pp 149-153
- ^ Moor, J., The Dartmouth College Artificial Intelligence Conference: The Next Fifty years, AI Magazine, Vol 27, No., 4, Pp. 87-9, 2006
- ^ Kline, Ronald R., Cybernetics, Automata Studies and the Dartmouth Conference on Artificial Intelligence, IEEE Annals of the History of Computing,October-December, 2011, IEEE Computer Society
- ^ McCorduck, P., Machines Who Think, A.K. Peters, Ltd, Second Edition, 2004.
- ^ Nilsson, N., The Quest for Artificial Intelligence, Cambridge University Press, 2010
- ^ Kline, Ronald R., Cybernetics, Automata Studies and the Dartmouth Conference on Artificial Intelligence, IEEE Annals of the History of Computing, October-December, 2011, IEEE Computer Society (citing letters from Rockefeller Foundation Archives, Dartmouth file6, 17, 1955, etc.)
- ^ McCarthy, J., Minsky, M., Rochester, N., Shannon, C.E., A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence., http://raysolomonoff.com/dartmouth/boxa/dart564props.pdf August, 1955
- ^ McCarthy, John; Minsky, Marvin; Rochester, Nathan; Shannon, Claude (1955), A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence retrieved 10:47 (UTC), 9th of April 2006
- ^ Kline, Ronald R., Cybernetics, Automata Studies and the Dartmouth Conference on Artificial Intelligence, IEEE Annals of the History of Computing,October-December, 2011, IEEE Computer Society
- ^ Nilsson, N., The Quest for Artificial Intelligence, Cambridge University Press, 2010, P. 53
- ^ Solomonoff, R.J., dart56ray622716talk710.pdf, 1956. URL: http://raysolomonoff.com/dartmouth/boxbdart/dart56ray622716talk710.pdf
- ^ Papers at http://raysolomonoff.com/dartmouth/boxbdart
- ^ McCarthy, J., List, Sept., 1956; List among Solomonoff papers to be posted on website solomonof.com
- ^ http://raysolomonoff.com/dartmouth/boxbdart/dart56ray812825who.pdf 1956
- ^ Kline, Ronald R., Cybernetics, Automata Studies and the Dartmouth Conference on Artificial Intelligence, IEEE Annals of the History of Computing,October-December, 2011, IEEE Computer Society
- ^ Nilsson, N., The Quest for Artificial Intelligence, Cambridge University Press, 2010,
- ^ personal communication
- ^ More, Trenchard, 1956, http://raysolomonoff.com/dartmouth/boxa/dart56more5th6thweeks.pdf
- ^ McCorduck, P., Machines Who Think, A.K. Peters, Ltd, Second Edition, 2004.
External links
- 50 Años De La Inteligencia Artificial - Campus Multidisciplinar en Percepción e Inteligencia - Albacete 2006 (Spain).
end of dart
Dartmouth Summer
The Dartmouth Summer Research Project on Artificial Intelligence was the name of a 1956 undertaking now considered the seminal event for artificial intelligence as a field.
Planning the Project
Organised by John McCarthy (then at Dartmouth College), the project was formally proposed by McCarthy, Marvin Minsky, Nathaniel Rochester and Claude Shannon; the proposal is credited with introducing the term 'artificial intelligence'.
The participants included Ray Solomonoff, Oliver Selfridge, Trenchard More, Arthur Samuel, Allen Newell and Herbert A. Simon.
On May 26, 1956, McCarthy notified Robert Morison of the 11 attendees:
For the full period: 1) Dr. Marvin Minsky 2) Dr. Julian Bigelow 3) Professor D.M. Mackay 4) Mr. Ray Solomonoff 5) Mr. John Holland 6) Mr. John McCarthy.
For four weeks: 7) Dr. Claude Shannon, 8) Mr. Nathanial Rochester, 9) Mr. Oliver Selfridge.
For the first two weeks: 10) Mr. Allen Newell and 11) Professor Herbert Simon.
He noted, "We will concentrate on a problem of devising a way of programming a calculator to form concepts and to form generalizations. This of course is subject to change when the group gets together." (McCarthy to Morison, 1956)
The actual participants came at different times, mostly for much shorter times. Trenchard More replaced Rochester for three weeks, and MacKay and Holland did not attend --- but the project was set to begin. So around June 18, 1956, the earliest participants (perhaps only Ray, maybe with Tom Etter) arrived at the Dartmouth campus in Hanover, N.H., to join John McCarthy who already had an apartment there. Ray and Marvin stayed at Professors' apartments, but most would stay at the Hanover Inn.
Founding statement
The project lasted a month, and it was essentially an extended brainstorming session. The introduction states:
“We propose that a 2 month, 10 man study of artificial intelligence be carried out during the summer of 1956 at Dartmouth College in Hanover, New Hampshire. The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it. An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves. We think that a significant advance can be made in one or more of these problems if a carefully selected group of scientists work on it together for a summer.”
(McCarthy et al. 1955) [1]
The proposal goes on to discuss computers, natural language processing, neural networks, theory of computation, abstraction and creativity (these areas within the field of artificial intelligence are considered still relevant to the work of the field). According to Stottler Henke Associates, besides the proposal's authors, attendees at the conference included Ray Solomonoff, Oliver Selfridge, Trenchard More, Arthur Samuel, Herbert A. Simon, and Allen Newell. [2] [3][4]
See also
- History of artificial intelligence
- AI@50—a 50th anniversary conference, including some of the original delegates.
References
- ^ McCarthy, John; Minsky, Marvin; Rochester, Nathan; Shannon, Claude (1955), A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence retrieved 10:47 (UTC), 9th of April 2006
- ^ Stottler-Henke retrieved 18:19 (UTC), 27th of July 2006
- ^ Artificial Intelligence: Past, Present, and Future (Vox of Dartmouth)
- ^ The Dartmouth Artificial Intelligence Conference: The Next Fifty Years
External links
foundations
begin foundations of mathematics-------------
1960-1967: The beginning of Algorithmic Information Theory. In 1960 and 1964, Ray Solomonoff publishes Algorithmic probability[1] and Solomonoff Prediction (Theory of Inductive Inference), which connect probability to program length, using concepts of Occam's Razor and Epicurus' Theory of Multiple explanations. He establishes a Universal Prior that can be used in Bayes rule of causation for prediction.[2][3] In 1965 Andrey Kolmogorov publishes his version of Occam's Razor, which becomes known as Kolmogorov Complexity.[4] In 1968 Gregory Chaitin publishes his version of complexity, similar to that of Kolmogorov.[5] All three, Solomonoff, Kolmogorov and Chaitin, are founders of Algorithmic Information Theory.[6]
from foundations of Mathematics
More paradoxes
1920: Thoralf Skolem corrected Löwenheim's proof of what is now called the downward Löwenheim-Skolem theorem, leading to Skolem's paradox discussed in 1922 (the existence of countable models of ZF, making infinite cardinalities a relative property).
1922: Proof by Abraham Fraenkel that the axiom of choice cannot be proved from the axioms of Zermelo's set theory with urelements.
1931: Publication of Gödel's incompleteness theorems, showing that essential aspects of Hilbert's program could not be attained. It showed how to construct, for any sufficiently powerful and consistent recursively axiomatizable system – such as necessary to axiomatize the elementary theory of arithmetic on the (infinite) set of natural numbers – a statement that formally expresses its own unprovability, which he then proved equivalent to the claim of consistency of the theory; so that (assuming the consistency as true), the system is not powerful enough for proving its own consistency, let alone that a simpler system could do the job. It thus became clear that the notion of mathematical truth can not be completely determined and reduced to a purely formal system as envisaged in Hilbert's program. This dealt a final blow to the heart of Hilbert's program, the hope that consistency could be established by finitistic means (it was never made clear exactly what axioms were the "finitistic" ones, but whatever axiomatic system was being referred to, it was a 'weaker' system than the system whose consistency it was supposed to prove).
1936: Alfred Tarski proved his truth undefinability theorem.
1936: Alan Turing proved that a general algorithm to solve the halting problem for all possible program-input pairs cannot exist.
1938: Gödel proved the consistency of the axiom of choice and of the Generalized Continuum-Hypothesis.
1936 - 1937: Alonzo Church and Alan Turing, respectively, published independent papers showing that a general solution to the Entscheidungsproblem is impossible: the universal validity of statements in first-order logic is not decidable (it is only semi-decidable as given by the completeness theorem).
1955: Pyotr Novikov showed that there exists a finitely presented group G such that the word problem for G is undecidable.
1963: Paul Cohen showed that the Continuum Hypothesis is unprovable from ZFC. Cohen's proof developed the method of forcing, which is now an important tool for establishing independence results in set theory.
1960-1967: The beginning of Algorithmic Information Theory. In 1960 and 1964, Ray Solomonoff publishes Algorithmic probability[7] and Solomonoff Prediction (Theory of Inductive Inference), which connect probability to program length, using concepts of Occam's Razor and Epicurus' Theory of Multiple explanations. He establishes a Universal Prior that can be used in Bayes rule of causation for prediction.[8][9] In 1965 Andrey Kolmogorov publishes his version of Occam's Razor, which becomes known as Kolmogorov Complexity.[10] In 1968 Gregory Chaitin publishes his version of complexity, similar to that of Kolmogorov.[11] All three, Solomonoff, Kolmogorov and Chaitin, are founders of Algorithmic Information Theory.[12]
1960-68: Inspired by the fundamental randomness in physics, in 1968 Gregory Chaitin starts publishing results on Algorithmic Information theory (measuring incompleteness and randomness in mathematics).[13] The beginning of Algorithmic Information Theory. Prior to this, in 1960 and 1964, Ray Solomonoff publishes Algorithmic probability[14] and Solomonoff Prediction (Theory of Inductive Inference), which connect probability to program length, using concepts of Occam's Razor and Epicurus' Theory of Multiple explanations. He establishes a Universal Prior that can be used in Bayes rule of causation for prediction.[15][16] In 1965 Andrey Kolmogorov publishes his version of Occam's Razor, which becomes known as Kolmogorov Complexity.[17] In 1968 Gregory Chaitin publishes his version of complexity, similar to that of Kolmogorov.[18] All three, Solomonoff, Kolmogorov and Chaitin, are founders of Algorithmic Information Theory.[19]
1966: Paul Cohen showed that the axiom of choice is unprovable in ZF even without urelements.
1970: Hilbert's tenth problem is proven unsolvable: there is no recursive solution to decide whether a Diophantine equation (multivariable polynomial equation) has a solution in integers.
1971: Suslin's problem is proven to be independent from ZFC.
inductive inference
editbegin sol theory of inductive inference
Solomonoff's theory of universal inductive inference is a theory of prediction based on logical observations, such as predicting the next symbol based upon a given series of symbols. The only assumption that the theory makes is that the environment follows some unknown but computable probability distribution. It is a mathematical formalization of Occam's razor[20][21][22][23][24] and the Principle of Multiple Explanations.[25]
Prediction is done using a completely Bayesian framework. The universal prior is taken over the class of all computable sequences—this is the universal a priori probability distribution; no computable hypothesis will have a zero probability. This means that Bayes rule of causation can be used in predicting the continuation of any particular computable sequence.
Origin
Philosophical
The theory is based in philosophical foundations, and was founded by Ray Solomonoff around 1960.[26] It is a mathematically formalized combination of Occam's razor[20][21][22][23][24] and the Principle of Multiple Explanations.[25] All computable theories which perfectly describe previous observations are used to calculate the probability of the next observation, with more weight put on the shorter computable theories. Marcus Hutter's universal artificial intelligence builds upon this to calculate the expected value of an action.
Mathematical
The proof of the "razor" is based on the known mathematical properties of a probability distribution over a denumerable set. These properties are relevant because the infinite set of all programs is a denumerable set. The sum S of the probabilities of all programs must be exactly equal to one (as per the definition of probability), thus the probabilities must roughly decrease as we enumerate the infinite set of all programs, otherwise S will be strictly greater than one. To be more precise, for every ε > 0, there is some length l such that the probability of all programs longer than l is at most ε. This does not, however, preclude very long programs from having very high probability.
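One way to write the tail argument formally (standard reasoning about convergent series, not a quotation from the article): if P assigns a probability P(p) to every program p and

    \sum_{p} P(p) = 1

then the series converges, so for every \varepsilon > 0 there is a length l with

    \sum_{|p| > l} P(p) < \varepsilon

i.e. the long programs must, collectively, carry arbitrarily little probability, even though any single long program may still be given a large value by some particular assignment.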
Fundamental ingredients of the theory are the concepts of algorithmic probability and Kolmogorov complexity. The universal prior probability of any prefix p of a computable sequence x is the sum of the probabilities of all programs (for a universal computer) that compute something starting with p. Given some p and any computable but unknown probability distribution from which x is sampled, the universal prior and Bayes' theorem can be used to predict the yet unseen parts of x in optimal fashion.
end start of sol induction
beginning of alp
In algorithmic information theory, algorithmic (Solomonoff) probability is a mathematical method of assigning a prior probability to a given observation. In a theoretic sense, the prior is universal. It is used in inductive inference theory, and analyses of algorithms. Since it is not computable,[clarification needed] it must be approximated.[27]
It deals with the questions: Given a body of data about some phenomenon that one wants to understand, how can one select the most probable hypothesis of how it was caused from among all possible hypotheses, how can one evaluate the different hypotheses, and how can one predict future data?
Algorithmic probability combines several ideas: Occam's razor; Epicurus' principle of multiple explanations; and the concept of a Universal Prior, special coding methods from modern computing theory which Solomonoff uses to establish a Universal Prior for all possible .... The prior obtained from the formula is used in Bayes rule for prediction.[28]
Occam's razor means 'among the theories that are consistent with the observed phenomena, one should select the simplest theory'.[29]
In contrast, Epicurus had proposed the Principle of Multiple Explanations: if more than one theory is consistent with the observations, keep all such theories.[30]
A special mathematical object called a universal Turing machine is used to compute, quantify and assign codes to all quantities of interest.[31] The universal prior is taken over the class of all computable measures; no hypothesis will have a zero probability.
Algorithmic probability combines Occam's razor and the principle of multiple explanations by giving a probability value to each hypothesis (algorithm or program) that explains a given observation, with the simplest hypothesis (the shortest program) having the highest probability and the increasingly complex hypotheses (longer programs) receiving increasingly small probabilities. These probabilities form a prior probability distribution for the observation, which Ray Solomonoff proved to be machine-invariant within a constant factor (called the invariance theorem) and can be used with Bayes' theorem to predict the most likely continuation of that observation. A universal Turing machine is used for the computer operations.
Solomonoff invented the concept of algorithmic probability with its associated invariance theorem around 1960,[32] publishing a report on it: "A Preliminary Report on a General Theory of Inductive Inference."[33] He clarified these ideas more fully in 1964 with "A Formal Theory of Inductive Inference," Part I[34] and Part II.[35]
He described a universal computer with a randomly generated input program. The program computes some possibly infinite output. The universal probability distribution is the probability distribution on all possible output strings with random input.[36]
The algorithmic probability of any given finite output prefix q is the sum of the probabilities of the programs that compute something starting with q. Certain long objects with short programs have high probability.
Algorithmic probability is the main ingredient of Solomonoff's theory of inductive inference, the theory of prediction based on observations; it was invented with the goal of using it for machine learning: given a sequence of symbols, which one will come next? Solomonoff's theory provides an answer that is optimal in a certain sense, although it is incomputable. Unlike, for example, Karl Popper's informal inductive inference theory,[clarification needed] Solomonoff's is mathematically rigorous.
Algorithmic probability is closely related to the concept of Kolmogorov complexity. Kolmogorov's introduction of complexity was motivated by information theory and problems in randomness, while Solomonoff introduced algorithmic complexity for a different reason: inductive reasoning. A single universal prior probability that can be substituted for each actual prior probability in Bayes’s rule was invented by Solomonoff with Kolmogorov complexity as a side product.[37]
Solomonoff's enumerable measure is universal in a certain powerful sense, but the computation time can be infinite. One way of dealing with this is a variant of Leonid Levin's Search Algorithm,[38] which limits the time spent computing the success of possible programs, with shorter programs given more time. Other methods of limiting the search space include training sequences.
Key people
See also
References
- ^ scholarpedia.org/article/Algorithmic_probability
- ^ Solomonoff, R., "A Preliminary Report on a General Theory of Inductive Inference" (Nov. 1960 revision of the Feb. 4, 1960 report), Report V-131, Zator Co., Cambridge, Ma., Nov. 1960
- ^ Solomonoff, R., "A Formal Theory of Inductive Inference, Part I" Information and Control, Vol 7, No. 1, pp 1-22, March 1964, and Part II. "A Formal Theory of Inductive Inference, Part II" Information and Control, Vol 7, No. 2, pp 224-254, June 1964.
- ^ Kolmogorov, A.N., Three Approaches to the Quantitative Definition of Information, Problems of Information Transmission, Vol 1, No 1, pp 1-7, 1965.
- ^ Chaitin, G.J., Randomness and Mathematical Proof, Scientific American, Oct. 1974, pp. 47-52
- ^ Ming Li and Paul Vitanyi, An Introduction to Kolmogorov Complexity and Its Applications. Springer-Verlag, N.Y., 2008, pp 339 ff.
- ^ scholarpedia.org/article/Algorithmic_probability
- ^ Solomonoff, R., "A Preliminary Report on a General Theory of Inductive Inference" (Nov. 1960 revision of the Feb. 4, 1960 report), Report V-131, Zator Co., Cambridge, Ma., Nov. 1960
- ^ Solomonoff, R., "A Formal Theory of Inductive Inference, Part I" Information and Control, Vol 7, No. 1, pp 1-22, March 1964, and Part II. "A Formal Theory of Inductive Inference, Part II" Information and Control, Vol 7, No. 2, pp 224-254, June 1964.
- ^ Kolmogorov, A.N., Three Approaches to the Quantitative Definition of Information, Problems of Information Transmission, Vol 1, No 1, pp 1-7, 1965.
- ^ Chaitin, G.J., Randomness and Mathematical Proof, Scientific American, Oct. 1974, pp. 47-52
- ^ Ming Li and Paul Vitanyi, An Introduction to Kolmogorov Complexity and Its Applications. Springer-Verlag, N.Y., 2008, pp 339 ff.
- ^ Chaitin, Gregory (2006), The Limits Of Reason (PDF)
- ^ scholarpedia.org/article/Algorithmic_probability
- ^ Solomonoff, R., "A Preliminary Report on a General Theory of Inductive Inference" (Nov. 1960 revision of the Feb. 4, 1960 report), Report V-131, Zator Co., Cambridge, Ma., Nov. 1960
- ^ Solomonoff, R., "A Formal Theory of Inductive Inference, Part I" Information and Control, Vol 7, No. 1, pp 1-22, March 1964, and Part II. "A Formal Theory of Inductive Inference, Part II" Information and Control, Vol 7, No. 2, pp 224-254, June 1964.
- ^ Kolmogorov, A.N., Three Approaches to the Quantitative Definition of Information, Problems of Information Transmission, Vol 1, No 1, pp 1-7, 1965.
- ^ Chaitin, G.J., Randomness and Mathematical Proof, Scientific American, Oct. 1974, pp. 47-52
- ^ Ming Li and Paul Vitanyi, An Introduction to Kolmogorov Complexity and Its Applications. Springer-Verlag, N.Y., 2008, pp 339 ff.
- ^ a b JJ McCall. Induction: From Kolmogorov and Solomonoff to De Finetti and Back to Kolmogorov – Metroeconomica, 2004 – Wiley Online Library.
- ^ a b D Stork. Foundations of Occam's razor and parsimony in learning from ricoh.com – NIPS 2001 Workshop, 2001
- ^ a b A.N. Soklakov. Occam's razor as a formal basis for a physical theory from arxiv.org – Foundations of Physics Letters, 2002 – Springer
- ^ a b Jose Hernandez-Orallo (1999). "Beyond the Turing Test" (PDF). Journal of Logic, Language and Information. 9.
- ^ a b M Hutter. On the existence and convergence of computable universal priors arxiv.org – Algorithmic Learning Theory, 2003 – Springer
- ^ a b Ming Li and Paul Vitanyi, An Introduction to Kolmogorov Complexity and Its Applications. Springer-Verlag, N.Y., 2008p 339 ff.
- ^ Samuel Rathmanner and Marcus Hutter. A philosophical treatise of universal induction. Entropy, 13(6):1076–1136, 2011
- ^ Hutter, M., Legg, S., and Vitanyi, P., "Algorithmic Probability", Scholarpedia, 2(8):2572, 2007.
- ^ Li, M. and Vitanyi, P., An Introduction to Kolmogorov Complexity and Its Applications, 3rd Edition, Springer Science and Business Media, N.Y., 2008, p 347
- ^ ibid, p. 341
- ^ ibid, p. 339.
- ^ Hutter, M., "Algorithmic Information Theory", Scholarpedia, 2(3):2519.
- ^ Solomonoff, R., "The Discovery of Algorithmic Probability", Journal of Computer and System Sciences, Vol. 55, No. 1, pp. 73-88, August 1997.
- ^ Solomonoff, R., "A Preliminary Report on a General Theory of Inductive Inference", Report V-131, Zator Co., Cambridge, Ma. (Nov. 1960 revision of the Feb. 4, 1960 report).
- ^ Solomonoff, R., "A Formal Theory of Inductive Inference, Part I". Information and Control, Vol 7, No. 1 pp 1-22, March 1964.
- ^ Solomonoff, R., "A Formal Theory of Inductive Inference, Part II" Information and Control, Vol 7, No. 2 pp 224–254, June 1964.
- ^ Solomonoff, R., "The Kolmogorov Lecture: The Universal Distribution and Machine Learning" The Computer Journal, Vol 46, No. 6 p 598, 2003.
- ^ Gács, P. and Vitányi, P., "In Memoriam Raymond J. Solomonoff", IEEE Information Theory Society Newsletter, Vol. 61, No. 1, March 2011, p 11.
- ^ Levin, L.A., "Universal Search Problems", in Problemy Peredaci Informacii 9, pp. 115–116, 1973
Further reading
- Rathmanner, S and Hutter, M., "A Philosophical Treatise of Universal Induction" in Entropy 2011, 13, 1076-1136: A very clear philosophical and mathematical analysis of Solomonoff's Theory of Inductive Inference
External links
end of alp
Closely related is his idea of using this in a Bayesian framework. The universal prior is taken over the class of all computable sequences; this is the universal a priori probability distribution; no hypothesis will have a zero probability. This means that Bayes rule of causation can be used in predicting the continuation of any particular sequence.
Algorithmic Probability uses a weighting based on the program length of each program that could produce a particular starting section of a sequence, x. The Universal Probability Distribution of that sequence is defined by the sum of these weights, and the weight of each individual program gives a figure of merit to each program that could produce the sequence.
This is used with Bayes' rule to obtain the most accurate probability for what is likely to come next as the start of the sequence is extrapolated.
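As a toy numerical illustration of this weighting-plus-Bayes step (an editor's sketch; the program lengths below are invented for the example and are not produced by any real universal machine, and the function name predict_next is likewise hypothetical): suppose a handful of programs reproduce the observed start of the sequence and then emit different next bits; the prediction is the normalized 2^{-length} weight mass on each candidate bit.

    # Toy illustration of prediction by program-length weighting (illustrative only).
    # Each entry is (program_length_in_bits, bit_the_program_predicts_next); the
    # lengths are invented for the example, not produced by a real universal machine.
    consistent_programs = [
        (5, "0"),   # a short program consistent with the data, predicting 0
        (9, "1"),   # a longer consistent program, predicting 1
        (11, "0"),  # another consistent program, predicting 0
    ]

    def predict_next(programs):
        """Return P(next bit) obtained by normalizing the 2^(-length) weights."""
        totals = {"0": 0.0, "1": 0.0}
        for length, next_bit in programs:
            totals[next_bit] += 2.0 ** (-length)
        z = sum(totals.values())
        return {bit: w / z for bit, w in totals.items()}

    if __name__ == "__main__":
        print(predict_next(consistent_programs))
        # The short program dominates, so "0" receives most of the probability mass.

With these made-up lengths the 5-bit program dominates, so '0' receives roughly 94% of the predicted probability; this is the sense in which the shortest consistent explanation drives the prediction while longer explanations still contribute.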
This theory of prediction has since become known as Solomonoff induction. It is also called Universal Induction, or the General Theory of Inductive Inference.
He enlarged his theory, publishing a number of reports leading up to the 1964 publications in Information and Control.[12],[13] The 1964 papers give a more detailed description of Algorithmic Probability and Solomonoff Induction, presenting 5 different models, including the model popularly called the Universal Distribution.
In a letter in 2011, Marcus Hutter wrote: “Ray Solomonoff’s universal probability distribution M(x) is defined as the probability that the output of a universal monotone Turing machine U starts with string x when provided with fair coin flips on the input tape. Despite this simple definition, it has truly remarkable properties, and constitutes a universal solution of the induction problem.”(See also [7])
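In standard modern notation (assumed here rather than quoted from the letter), this distribution is usually written as

    M(x) = \sum_{p \,:\, U(p) = x*} 2^{-\ell(p)}

where the sum ranges over the minimal programs p for which the monotone machine U, reading p as a sequence of fair coin flips, produces an output beginning with x, and \ell(p) is the length of p in bits.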
Algorithmic Probability combines several major ideas; of these, two might be considered more philosophical and two more mathematical. The first is related to the idea of Occam’s Razor: the simplest theory is the best. Ray’s 1960 paper states “We shall consider a sequence of symbols to be ‘simple’ and have high a priori probability if there exists a very brief description of this sequence — using of course some stipulated description method. More exactly, if we use only the symbols 0 or 1 to express our description, we will assign the probability of 2^{-N} to a sequence of symbols, if its shortest possible binary description contains N digits.”[11][10]
The second idea is similar to that of Epicurus: it is an expansion of the shortest-code theory; if more than one theory explains the data, keep all of the theories. Ray writes “Equation 1 uses only the ‘minimal binary description’ of the sequence it analyzes. It would seem that if there are several different methods of describing a sequence, each of these methods should be given some weight in determining the probability of that sequence.”[11][10] The formula he developed to give each possible explanation the right weight is

    P_M(x) = \sum_{i=1}^{\infty} 2^{-|s_i(x)|}

(The probability of sequence x with respect to Turing machine M is the sum of 2 to the minus length of each description s_i that produces an output beginning with x.)
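As an informal illustration of this weighted sum, here is a minimal Python sketch with made-up description lengths; it is not Solomonoff's actual construction, only the arithmetic of the weights:

    def algorithmic_weight(description_lengths):
        """Sum 2**(-|s_i|) over the lengths (in bits) of the known descriptions."""
        return sum(2.0 ** -n for n in description_lengths)

    # Example: one 5-bit description and two 8-bit descriptions of the same sequence.
    print(algorithmic_weight([5, 8, 8]))   # 2^-5 + 2^-8 + 2^-8 = 0.0390625

The shortest description contributes most of the total, but every description contributes something.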
Closely related is the third idea of its use in a Bayesian framework. The universal prior is taken over the class of all computable measures; no hypothesis will have a zero probability. Using program lengths of all programs that could produce a particular start of the string, x, Ray gets the prior distribution for x, used in Bayes rule for accurate probabilities to predict what is most likely to come next as the start is extrapolated. The Universal Probability Distribution functions by its sum to define the probability of a sequence, and by using the weight of individual programs to give a figure of merit to each program that could produce the sequence. [11][12]
The fourth idea shows that the choice of machine, while it could add a constant factor, would not change the probability ratios very much. These probabilities are machine independent; this is the invariance theorem that is considered a foundation of Algorithmic Information Theory.[11][13]
current version wiki Jan 29 2015, some changes now mar 22 2015
Ray Solomonoff (July 25, 1926 – December 7, 2009),[1][2] a founder of Artificial Intelligence and algorithmic information theory,[3] was the inventor of algorithmic probability[4] and of his General Theory of Inductive Inference (also known as Universal Inductive Inference).[5] He invented the universal prior, which enables the use of Bayes' rule for prediction. He was an originator of the branch of artificial intelligence based on machine learning, prediction and probability. He circulated the first report on non-semantic machine learning in 1956.[6]
Solomonoff first described algorithmic probability in 1960, publishing the theorem that launched Kolmogorov complexity and algorithmic information theory. He first described these results at a Conference at Caltech in 1960,[7] and in a report, Feb. 1960, "A Preliminary Report on a General Theory of Inductive Inference."[8] He clarified these ideas more fully in his 1964 publications, "A Formal Theory of Inductive Inference," Part I[9] and Part II.[10]
Algorithmic probability is a mathematically formalized combination of Occam's razor,[11][12][13][14] and the Principle of Multiple Explanations.[15] It is a machine independent method of assigning a probability value to each hypothesis (algorithm/program) that explains a given observation, with the simplest hypothesis (the shortest program) having the highest probability and the increasingly complex hypotheses receiving increasingly small probabilities.
Solomonoff founded the theory of universal inductive inference, which is based on solid philosophical foundations[5] and has its roots in Kolmogorov complexity and algorithmic information theory. The theory uses algorithmic probability in a Bayesian framework. The universal prior is taken over the class of all computable measures; no hypothesis will have a zero probability. This enables Bayes' rule (of causation) to be used to predict the most likely next event in a series of events.[16]
Although he is best known for algorithmic probability and his general theory of inductive inference, he made many other important discoveries throughout his life, most of them directed toward his goal in artificial intelligence: to develop a machine that could solve hard problems using probabilistic methods.
Life history through 1964
Ray Solomonoff was born on July 25, 1926, in Cleveland, Ohio, son of the Russian immigrants Phillip Julius and Sarah Mashman Solomonoff. He attended Glenville High School, graduating in 1944. In 1944 he joined the United States Navy as Instructor in Electronics. From 1947-1951 he attended the University of Chicago, studying under Professors such as Rudolf Carnap and Enrico Fermi, and graduated with an M.S. in Physics in 1951.
From his earliest years he was motivated by the pure joy of mathematical discovery and by the desire to explore where no one had gone before. At the age of 16, in 1942, he began to search for a general method to solve mathematical problems.
In 1952 he met Marvin Minsky, John McCarthy and others interested in machine intelligence. In 1956 Minsky, McCarthy and others organized the Dartmouth Summer Research Conference on Artificial Intelligence, where Ray was one of the original 10 invitees; he, McCarthy, and Minsky were the only ones to stay all summer. It was for this group that Artificial Intelligence was first named as a science. Computers at the time could solve very specific mathematical problems, but not much else. Ray wanted to pursue a bigger question: how to make machines more generally intelligent, and how computers could use probability for this purpose.
Work history through 1964
He wrote three papers, two with Anatol Rapoport, in 1950-52,[17] that are regarded as the earliest statistical analysis of networks.
He was one of the 10 attendees at the 1956 Dartmouth Summer Research Project on Artificial Intelligence. He wrote and circulated a report among the attendees: "An Inductive Inference Machine".[6] It viewed machine learning as probabilistic, with an emphasis on the importance of training sequences, and on the use of parts of previous solutions to problems in constructing trial solutions for new problems. He published a version of his findings in 1957.[18] These were the first papers to be written on probabilistic Machine Learning.
In the late 1950s, he invented probabilistic languages and their associated grammars.[19] A probabilistic language assigns a probability value to every possible string.
Generalizing the concept of probabilistic grammars led him to his discovery in 1960 of Algorithmic Probability and General Theory of Inductive Inference. As part of this work he also established the philosophical foundation that enables the use of Bayes rule for induction.
Prior to the 1960s, the usual method of calculating probability was based on frequency: taking the ratio of favorable results to the total number of trials. In his 1960 publication, and, more completely, in his 1964 publications, Solomonoff seriously revised this definition of probability. He called this new form of probability "Algorithmic Probability" and showed how to use it in a Bayesian framework for prediction in his theory of inductive inference.
The basic theorem of what was later called Kolmogorov Complexity was part of his General Theory. Writing in 1960, he begins: "Consider a very long sequence of symbols ... We shall consider such a sequence of symbols to be 'simple' and have a high a priori probability, if there exists a very brief description of this sequence - using, of course, some sort of stipulated description method. More exactly, if we use only the symbols 0 and 1 to express our description, we will assign the probability 2^{-N} to a sequence of symbols if its shortest possible binary description contains N digits."[20]
The probability is with reference to a particular Universal Turing machine. Solomonoff showed, and in 1964 proved, that the choice of machine, while it could add a constant factor, would not change the probability ratios very much. These probabilities are machine independent.
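A standard way of stating this invariance (the notation is assumed here, not Solomonoff's original wording) is that for any two universal machines U and V there is a constant c_{UV}, independent of x, with

    |\log_2 P_U(x) - \log_2 P_V(x)| \le c_{UV}

so the two probability assignments agree to within the multiplicative factor 2^{c_{UV}}.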
In 1965, the Russian mathematician Kolmogorov independently published similar ideas. When he became aware of Solomonoff's work, he acknowledged Solomonoff, and for several years, Solomonoff's work was better known in the Soviet Union than in the Western World. The general consensus in the scientific community, however, was to associate this type of complexity with Kolmogorov, who was more concerned with randomness of a sequence. Algorithmic Probability and Universal (Solomonoff) Induction became associated with Solomonoff, who was focused on prediction - the extrapolation of a sequence.
Later in the same 1960 publication Solomonoff describes his extension of the single-shortest-code theory. This is Algorithmic Probability. He states: "It would seem that if there are several different methods of describing a sequence, each of these methods should be given some weight in determining the probability of that sequence."[21]
Closely related is his idea of how this can be used in a Bayesian framework. The universal prior is taken over the class of all computable sequences; this is the universal a priori probability distribution; no hypothesis will have a zero probability. This means that Bayes' rule of causation can be used in predicting the continuation of any particular sequence.
Algorithmic Probability uses a weighting based on the length of each program that could produce a particular sequence, x: the shorter the program, the more weight it is given. The universal probability distribution defines the probability of the sequence as the sum of these weights, and the weight of each individual program serves as a figure of merit for that program.
Inductive inference adds up the predictions of all models describing a particular sequence, using weights based on the lengths of those models, to obtain the probability distribution for the extension of that sequence. This distribution is used with Bayes' rule to predict, as accurately as possible, what is most likely to come next as the sequence x is extrapolated.
This theory of prediction has since become known as Solomonoff induction. It is also called Universal Induction, or the General Theory of Inductive Inference.
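The following is a toy sketch of this style of prediction in Python. It uses a small hand-picked hypothesis class with assumed code lengths rather than the full (incomputable) universal distribution: every hypothesis consistent with the data is kept, and its vote for the next bit is weighted by 2^(-code length).

    def constant(bit):
        return lambda history: bit                          # always emits the same bit

    def alternating(start):
        return lambda history: (start + len(history)) % 2   # 0101... or 1010...

    # (assumed code length in bits, generating rule)
    hypotheses = [
        (1, constant(0)),
        (1, constant(1)),
        (2, alternating(0)),
        (2, alternating(1)),
    ]

    def predict_next_is_one(observed):
        """Keep hypotheses consistent with `observed`, weight each by 2^-length,
        and return the normalized weight voting for a 1 as the next bit."""
        votes = {0: 0.0, 1: 0.0}
        for length, rule in hypotheses:
            replay = [rule(observed[:i]) for i in range(len(observed))]
            if replay == list(observed):                 # consistent with the data
                votes[rule(observed)] += 2.0 ** -length  # its prediction for the next bit
        total = votes[0] + votes[1]
        return votes[1] / total if total else 0.5

    print(predict_next_is_one([0, 1, 0, 1]))   # only 0101... survives, so P(next = 1) is 0.0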
He enlarged his theory, publishing a number of reports leading up to the publications in 1964. The 1964 papers give a more detailed description of Algorithmic Probability and Solomonoff Induction, presenting 5 different models, including the model popularly called the Universal Distribution.
Work history from 1964 to 1984
Other scientists who had been at the 1956 Dartmouth Summer Conference (such as Newell and Simon) were developing the branch of Artificial Intelligence based on machines governed by fact-based, if-then rules. Solomonoff was developing the branch of Artificial Intelligence that focused on probability and prediction; his specific view of A.I. described machines that were governed by the Algorithmic Probability distribution. The machine generates theories together with their associated probabilities, to solve problems, and as new problems and theories develop, updates the probability distribution on the theories.
In 1968 he found a proof for the efficacy of Algorithmic Probability,[23] but mainly because of lack of general interest at that time, did not publish it until 10 years later. In his report, he published the proof for the convergence theorem.
In the years following his discovery of Algorithmic Probability he focused on how to use this probability and Solomonoff Induction in actual prediction and problem solving for A.I. He also wanted to understand the deeper implications of this probability system.
One important aspect of Algorithmic Probability is that it is complete and incomputable.
In the 1968 report he shows that Algorithmic Probability is complete; that is, if there is any describable regularity in a body of data, Algorithmic Probability will eventually discover that regularity, requiring a relatively small sample of that data. Algorithmic Probability is the only probability system known to be complete in this way. As a necessary consequence of its completeness it is incomputable. The incomputability is because some algorithms - a subset of those that are partially recursive - can never be evaluated fully because it would take too long. But these programs will at least be recognized as possible solutions. On the other hand, any computable system is incomplete. There will always be descriptions outside that system's search space which will never be acknowledged or considered, even in an infinite amount of time. Computable prediction models hide this fact by ignoring such algorithms.
In many of his papers he described how to search for solutions to problems and in the 1970s and early 1980s developed what he felt was the best way to update the machine.
The use of probability in A.I., however, did not have a completely smooth path. In the early years of A.I., the relevance of probability was problematic. Many in the A.I. community felt probability was not usable in their work. The area of pattern recognition did use a form of probability, but because there was no broadly based theory of how to incorporate probability in any A.I. field, most fields did not use it at all.
There were, however, researchers such as Judea Pearl and Peter Cheeseman who argued that probability could be used in artificial intelligence.
About 1984, at an annual meeting of the American Association for Artificial Intelligence (AAAI), it was decided that probability was in no way relevant to A.I.
A protest group formed, and the next year there was a workshop at the AAAI meeting devoted to "Probability and Uncertainty in AI." This yearly workshop has continued to the present day.[24]
As part of the protest at the first workshop, Solomonoff gave a paper on how to apply the universal distribution to problems in A.I.[25] This was an early version of the system he has been developing since that time.
In that report, he described the search technique he had developed. In search problems, the best order of search is by increasing t/p, where t is the time needed to test the trial and p is the probability of success of that trial. He called this the "Conceptual Jump Size" of the problem. Levin's search technique approximates this order,[26] and so Solomonoff, who had studied Levin's work, called this search technique Lsearch.
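A minimal sketch of that ordering in Python (the values are illustrative only; real Lsearch interleaves the actual execution of candidate programs rather than sorting a fixed list):

    # Each candidate trial has an estimated success probability p and a test cost t.
    # Trying candidates in order of increasing t/p (the "conceptual jump size")
    # minimizes the expected total time to find a solution.
    trials = [
        {"name": "A", "p": 0.50, "t": 10.0},
        {"name": "B", "p": 0.05, "t": 2.0},
        {"name": "C", "p": 0.30, "t": 2.0},
    ]
    for trial in sorted(trials, key=lambda tr: tr["t"] / tr["p"]):
        print(trial["name"], "t/p =", round(trial["t"] / trial["p"], 1))
    # prints C (6.7), then A (20.0), then B (40.0)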
Work history — the later years
In other papers he explored how to limit the time needed to search for solutions, writing on resource bounded search. The search space is limited by available time or computation cost rather than by cutting out search space as is done in some other prediction methods, such as Minimum Description Length.
Throughout his career Solomonoff was concerned with the potential benefits and dangers of A.I., discussing it in many of his published reports. In 1985 he analyzed a likely evolution of A.I., giving a formula predicting when it would reach the "Infinity Point".[27] This Infinity Point is an early version of the "Singularity" later made popular by Ray Kurzweil.
Originally algorithmic induction methods extrapolated ordered sequences of strings. Methods were needed for dealing with other kinds of data.
A 1999 report[28] generalizes the Universal Distribution and associated convergence theorems to unordered sets of strings, and a 2008 report[29] extends them to unordered pairs of strings.
In 1997,[30] 2003 and 2006 he showed that incomputability and subjectivity are both necessary and desirable characteristics of any high performance induction system.
In 1970 he formed his own one man company, Oxbridge Research, and continued his research there except for periods at other institutions such as MIT, University of Saarland in Germany and the Dalle Molle Institute for Artificial Intelligence in Lugano, Switzerland. In 2003 he was the first recipient of the Kolmogorov Award by The Computer Learning Research Center at the Royal Holloway, University of London, where he gave the inaugural Kolmogorov Lecture. Solomonoff was most recently a visiting Professor at the CLRC.
In 2006 he spoke at AI@50, "Dartmouth Artificial Intelligence Conference: the Next Fifty Years" commemorating the fiftieth anniversary of the original Dartmouth summer study group. Solomonoff was one of five original participants to attend.
In Feb. 2008, he gave the keynote address at the Conference "Current Trends in the Theory and Application of Computer Science" (CTTACS), held at Notre Dame University in Lebanon. He followed this with a short series of lectures, and began research on new applications of Algorithmic Probability.
Algorithmic Probability and Solomonoff Induction have many advantages for Artificial Intelligence. Algorithmic Probability gives extremely accurate probability estimates. These estimates can be revised by a reliable method so that they continue to be acceptable. It utilizes search time in a very efficient way. In addition to probability estimates, Algorithmic Probability "has for AI another important value: its multiplicity of models gives us many different ways to understand our data;
A very conventional scientist understands his science using a single 'current paradigm' --- the way of understanding that is most in vogue at the present time. A more creative scientist understands his science in very many ways, and can more easily create new theories, new ways of understanding, when the 'current paradigm' no longer fits the current data".[31]
A description of Solomonoff's life and work prior to 1997 is in "The Discovery of Algorithmic Probability", Journal of Computer and System Sciences, Vol 55, No. 1, pp 73–88, August 1997. The paper, as well as most of the others mentioned here, is available on his website at the publications page.
See also
- Ming Li and Paul Vitanyi, An Introduction to Kolmogorov Complexity and Its Applications. Springer-Verlag, N.Y., 2008, includes historical notes on Solomonoff as well as a description and analysis of his work.
- Marcus Hutter's Universal Artificial Intelligence
References
- ^ http://agi-conf.org/2010/2009/12/12/ray-solomonoff-1926-2009
- ^ Markoff, John (January 9, 2010). "Ray Solomonoff, Pioneer in Artificial Intelligence, Dies at 83". The New York Times. Retrieved January 11, 2010.
- ^ Vitanyi, P. "Obituary: Ray Solomonoff, Founding Father of Algorithmic Information Theory"
- ^ detailed description of Algorithmic Probability in Scholarpedia
- ^ a b Samuel Rathmanner and Marcus Hutter. A philosophical treatise of universal induction. Entropy, 13(6):1076–1136, 2011
- ^ a b "An Inductive Inference Machine", Dartmouth College, N.H., version of Aug. 14, 1956. (pdf scanned copy of the original)
- ^ Paper from conference on "Cerebral Systems and Computers", California Institute of Technology, Feb 8-11, 1960, cited in "A Formal Theory of Inductive Inference, Part I", 1964, p. 1
- ^ Solomonoff, R., "A Preliminary Report on a General Theory of Inductive Inference", Report V-131, Zator Co., Cambridge, Ma. Feb 4, 1960, revision, Nov., 1960.
- ^ Solomonoff, R., "A Formal Theory of Inductive Inference, Part I" Information and Control, Vol 7, No. 1 pp 1-22, March 1964.
- ^ Solomonoff, R., "A Formal Theory of Inductive Inference, Part II" Information and Control, Vol 7, No. 2 pp 224-254, June 1964.
- ^ Induction: From Kolmogorov and Solomonoff to De Finetti and Back to Kolmogorov JJ McCall - Metroeconomica, 2004 - Wiley Online Library.
- ^ Foundations of Occam's razor and parsimony in learning from ricoh.com D Stork - NIPS 2001 Workshop, 2001
- ^ Occam's razor as a formal basis for a physical theory from arxiv.org AN Soklakov - Foundations of Physics Letters, 2002 - Springer
- ^ Beyond the Turing Test from uclm.es J HERNANDEZ-ORALLO - Journal of Logic, Language, and …, 2000 - dsi.uclm.es
- ^ Ming Li and Paul Vitanyi, An Introduction to Kolmogorov Complexity and Its Applications. Springer-Verlag, N.Y., 2008, p. 339 ff.
- ^ Solomonoff, R., "A Formal Theory of Inductive Inference, Part II" Information and Control, Vol 7, No. 2 pp 224-254, June 1964.
- ^ "An Exact Method for the Computation of the Connectivity of Random Nets", Bulletin of Mathematical Biophysics, Vol 14, p. 153, 1952.
- ^ "An Inductive Inference Machine," IRE Convention Record, Section on Information Theory, Part 2, pp. 56-62. (pdf version)
- ^ "A Progress Report on Machines to Learn to Translate Languages and Retrieve Information", Advances in Documentation and Library Science, Vol III, pt. 2, pp. 941-953. (Proceedings of a conference in Sept. 1959.)
- ^ "A Preliminary Report on a General Theory of Inductive Inference,", 1960 p. 1
- ^ "A Preliminary Report on a General Theory of Inductive Inference,",1960, p. 17
- ^ "A Preliminary Report on a General Theory of Inductive Inference,",1960, p. 17
- ^ "Complexity-based Induction Systems, Comparisons and convergence Theorems" IEEE Trans. on Information Theory Vol. IT-24, No. 4, pp.422-432, July,1978. (pdf version)
- ^ "The Universal Distribution and Machine Learning", The Kolmogorov Lecture, Feb. 27, 2003, Royal Holloway, Univ. of London. The Computer Journal, Vol 46, No. 6, 2003.
- ^ "The Application of Algorithmic Probability to Problems in Artificial Intelligence", in Kanal and Lemmer (Eds.), Uncertainty in Artificial Intelligence,, Elsevier Science Publishers B.V., pp 473-491, 1986.
- ^ Levin, L.A., "Universal Search Problems", in Problemy Peredaci Informacii 9, pp. 115-116, 1973
- ^ "The Time Scale of Artificial Intelligence: Reflections on Social Effects," Human Systems Management, Vol 5, pp. 149-153, 1985 (pdf version)
- ^ "Two Kinds of Probabilistic Induction," The Computer Journal, Vol 42, No. 4, 1999. (pdf version)
- ^ "Three Kinds of Probabilistic Induction, Universal Distributions and Convergence Theorems" 2008. (pdf version)
- ^ "The Discovery of Algorithmi Probability," Journal of Computer and System Sciences, Vol 55, No. 1, pp. 73-88 (pdf version)
- ^ "Algorithmic Probability, Theory and Applications," In Information Theory and Statistical Learning, Eds Frank Emmert-Streib and Matthias Dehmer, Springer Science and Business Media, 2009, p. 11
External links
- Ray Solomonoff's Homepage
- For a detailed description of Algorithmic Probability, see "Algorithmic Probability" by Hutter, Legg and Vitanyi in Scholarpedia.
- Ray Solomonoff (1926-2009) 85th memorial conference, Melbourne, Australia, Nov/Dec 2011 and Proceedings, "Algorithmic Probability and Friends. Bayesian Prediction and Artificial Intelligence", Springer, LNAI/LNCS 7070.
- Pioneer of machine learning celebrated 14 December 2011
ideas for changes:
Ray Solomonoff (July 25, 1926 – December 7, 2009)[1][2] was the inventor of algorithmic probability,[3] the General Theory of Inductive Inference (also known as Universal Inductive Inference),[4] and a founder of the basic ideas of algorithmic information theory.[5] He developed a general theory of inductive inference that included the universal probability distribution and the combination of simplicity (Occam's Razor) with the theory of multiple explanations to measure the probabilities of theories, enabling Bayesian methods to be used in inductive inference and prediction.[6] He was an originator of the branch of artificial intelligence based on machine learning, prediction and probability. He circulated the first report on non-semantic machine learning in 1956.[7]
Solomonoff first described algorithmic probability in 1960, publishing the crucial theorem that launched Kolmogorov complexity and algorithmic information theory. He first described these results at a Conference at Caltech in 1960,[8] and in a report, Feb. 1960, "A Preliminary Report on a General Theory of Inductive Inference."[9] He clarified these ideas more fully in his 1964 publications, "A Formal Theory of Inductive Inference," Part I[10] and Part II.[11]
Algorithmic probability is a mathematically formalized combination of Occam's razor,[12][13][14][15] and the Principle of Multiple Explanations.[16] It is a machine independent method of assigning a probability value to each hypothesis (algorithm/program) that explains a given observation, with the simplest hypothesis (the shortest program) having the highest probability and the increasingly complex hypotheses receiving increasingly small probabilities.
Solomonoff founded the theory of universal inductive inference, a theory of prediction which is based on solid philosophical foundations[17] and has its roots in Kolmogorov complexity and algorithmic information theory. The theory uses algorithmic probability in a Bayesian framework. The universal prior is taken over the class of all computable measures; no hypothesis will have a zero probability. This enables Bayes' rule (of causation) to be used to predict the most likely next event in a series of events.[18]
Although he is best known for algorithmic probability and his general theory of inductive inference, he made many other important discoveries throughout his life, most of them directed toward his goal in artificial intelligence: to develop a machine that could solve hard problems using probabilistic methods.
In algorithmic information theory, algorithmic (Solomonoff) probability is a mathematical method of assigning a prior probability to a given observation. In a theoretic sense, the prior is universal. It is used in inductive inference theory, and analyses of algorithms. Since it is not computable, it must be approximated.[19]
It deals with the questions: given a body of data about some phenomenon that one wants to understand, how can the most probable hypothesis of how it was caused be selected from among all possible hypotheses, how can the different hypotheses be evaluated, and how can future data be predicted?
Algorithmic probability combines several ideas: Occam's razor; Epicurus' principle of multiple explanations; special coding methods from modern computing theory; and the universal probability distribution. In Solomonoff's General Theory of Induction, the prior obtained from the formula is used with Bayes' rule for prediction.[20]
Occam's razor means 'among the theories that are consistent with the observed phenomena, one should select the simplest theory'.[21]
In contrast, Epicurus had proposed the Principle of Multiple Explanations: if more than one theory is consistent with the observations, keep all such theories.[22]
A special mathematical object called a universal Turing machine is used to compute, quantify and assign codes to all quantities of interest.[23] The universal prior is taken over the class of all computable measures; no hypothesis will have a zero probability.
Algorithmic probability combines Occam's razor and the principle of multiple explanations by giving a probability value to each hypothesis (algorithm or program) that explains a given observation, with the simplest hypothesis (the shortest program) having the highest probability and the increasingly complex hypotheses (longer programs) receiving increasingly small probabilities. These probabilities form a prior probability distribution for the observation, which Ray Solomonoff proved to be machine-invariant within a constant factor (called the invariance theorem) and can be used with Bayes' theorem to predict the most likely continuation of that observation. A universal Turing machine is used for the computer operations.
a recent version:
Solomonoff invented the concept of algorithmic probability with its associated invariance theorem around 1960,[24] publishing a report on it: "A Preliminary Report on a General Theory of Inductive Inference."[25] He clarified these ideas more fully in 1964 with "A Formal Theory of Inductive Inference," Part I[26] and Part II.[27]
He described a universal computer with a randomly generated input program. The program computes some possibly infinite output. The Universal Probability Distribution is the probability distribution on all possible output strings with random input.[28]
The algorithmic probability of any given finite output prefix q is the sum of the probabilities of the programs that compute something starting with q. Certain long objects with short programs have high probability.
Algorithmic probability is the main ingredient of Solomonoff's theory of inductive inference, the theory of prediction based on observations; it was invented with the goal of using it for machine learning; given a sequence of symbols, which one will come next? Solomonoff's theory provides an answer that is optimal in a certain sense, although it is incomputable. Unlike, for example, Karl Popper's informal inductive inference theory, however, Solomonoff's is mathematically rigorous.
Algorithmic probability is closely related to the concept of Kolmogorov complexity. Kolmogorov's introduction of complexity, however, was motivated by information theory and problems in randomness while Solomonoff introduced algorithmic complexity earlier, for a different reason: inductive reasoning. A single universal prior probability that can be substituted for each actual prior probability in Bayes’s rule was invented by Solomonoff with Kolmogorov complexity as a side product.[29]
Solomonoff's enumerable measure is universal in a certain powerful sense, but the computation time can be infinite. One way of dealing with this is a variant of Leonid Levin's Search Algorithm,[30] which limits the time spent computing the success of possible programs, with shorter programs given more time. Other methods of limiting search space include training sequences.
Suggested further reading
Rathmanner, S. and Hutter, M., "A Philosophical Treatise of Universal Induction" in Entropy 2011, 13, 1076-1136: A very clear philosophical and mathematical analysis of Solomonoff's Theory of Inductive Inference
References
- ^ http://agi-conf.org/2010/2009/12/12/ray-solomonoff-1926-2009
- ^ Markoff, John (January 9, 2010). "Ray Solomonoff, Pioneer in Artificial Intelligence, Dies at 83". The New York Times. Retrieved January 11, 2010.
- ^ detailed description of Algorithmic Probability in Scholarpedia
- ^ Samuel Rathmanner and Marcus Hutter. A philosophical treatise of universal induction. Entropy, 13(6):1076–1136, 2011
- ^ Vitanyi, P. "Obituary: Ray Solomonoff, Founding Father of Algorithmic Information Theory"
- ^ http://scholarpedia.org/article/Algorithmic_information_theory
- ^ "An Inductive Inference Machine" (pdf scanned copy of the original)
- ^ Paper from conference on "Cerebral Systems and Computers", California Institute of Technology, Feb 8-11, 1960, cited in "A Formal Theory of Inductive Inference, Part I", 1964, p. 1
- ^ Solomonoff, R., "A Preliminary Report on a General Theory of Inductive Inference", Report V-131, Zator Co., Cambridge, Ma. Feb 4, 1960, revision, Nov., 1960.
- ^ Solomonoff, R., "A Formal Theory of Inductive Inference, Part I" Information and Control, Vol 7, No. 1 pp 1-22, March 1964.
- ^ Solomonoff, R., "A Formal Theory of Inductive Inference, Part II" Information and Control, Vol 7, No. 2 pp 224-254, June 1964.
- ^ Induction: From Kolmogorov and Solomonoff to De Finetti and Back to Kolmogorov JJ McCall - Metroeconomica, 2004 - Wiley Online Library.
- ^ Foundations of Occam's razor and parsimony in learning from ricoh.com D Stork - NIPS 2001 Workshop, 2001
- ^ Occam's razor as a formal basis for a physical theory from arxiv.org AN Soklakov - Foundations of Physics Letters, 2002 - Springer
- ^ Beyond the Turing Test from uclm.es J HERNANDEZ-ORALLO - Journal of Logic, Language, and …, 2000 - dsi.uclm.es
- ^ Ming Li and Paul Vitanyi, An Introduction to Kolmogorov Complexity and Its Applications. Springer-Verlag, N.Y., 2008, p. 339 ff.
- ^ Samuel Rathmanner and Marcus Hutter. A philosophical treatise of universal induction. Entropy, 13(6):1076–1136, 2011
- ^ Solomonoff, R., "A Formal Theory of Inductive Inference, Part II" Information and Control, Vol 7, No. 2 pp 224-254, June 1964.
- ^ Hutter, M., Legg, S., and Vitanyi, P., "Algorithmic Probability", Scholarpedia, 2(8):2572, 2007.
- ^ Li, M. and Vitanyi, P., An Introduction to Kolmogorov Complexity and Its Applications, 3rd Edition, Springer Science and Business Media, N.Y., 2008, p 347
- ^ ibid, p. 341
- ^ ibid, p. 339.
- ^ Hutter, M., "Algorithmic Information Theory", Scholarpedia, 2(3):2519.
- ^ Solomonoff, R., "The Discovery of Algorithmic Probability", Journal of Computer and System Sciences, Vol. 55, No. 1, pp. 73-88, August 1997.
- ^ Solomonoff, R., "A Preliminary Report on a General Theory of Inductive Inference", Report V-131, Zator Co., Cambridge, Ma. (Nov. 1960 revision of the Feb. 4, 1960 report).
- ^ Solomonoff, R., "A Formal Theory of Inductive Inference, Part I". Information and Control, Vol 7, No. 1 pp 1-22, March 1964.
- ^ Solomonoff, R., "A Formal Theory of Inductive Inference, Part II" Information and Control, Vol 7, No. 2 pp 224–254, June 1964.
- ^ Solomonoff, R., "The Kolmogorov Lecture: The Universal Distribution and Machine Learning" The Computer Journal, Vol 46, No. 6 p 598, 2003.
- ^ Gács, P. and Vitányi, P., "In Memoriam Raymond J. Solomonoff", IEEE Information Theory Society Newsletter, Vol. 61, No. 1, March 2011, p 11.
- ^ Levin, L.A., "Universal Search Problems", in Problemy Peredaci Informacii 9, pp. 115–116, 1973
////////////////////////////////////////////////////////////////
Deduction, reasoning, problem solving
Early AI researchers developed algorithms that imitated the step-by-step reasoning that humans were often assumed to use when they solve puzzles, play board games or make logical deductions.[1] During the early years many researchers felt probability could not be used in AI, but in 1960 probability was redefined using program lengths rather than frequency for prediction.[2] By the late 1980s and '90s, AI research had developed highly successful methods for dealing with uncertain or incomplete information, employing concepts from probability and economics.[3]
See also Kolmogorov, A.N. (1965). "Three Approaches to the Quantitative Definition of Information". Problems Inform. Transmission 1 (1): 1–7.
In 1956, at the original Dartmouth summer conference, Ray Solomonoff wrote a report on unsupervised probabilistic machine learning: "An Inductive Inference Machine".[4]
It viewed machine learning as probabilistic, with an emphasis on the importance of training sequences, and on the use of parts of previous solutions to problems in constructing trial solutions for new problems. He published a version of his findings in 1957. Solomonoff[5] was the founder of Algorithmic Information Theory,[6] and of the branch of artificial intelligence based on machine learning, prediction and probability. He circulated the first report on non-semantic machine learning in 1956.[7]
He was the inventor of algorithmic probability,[8] publishing the crucial theorem that launched Kolmogorov complexity and Algorithmic Information Theory. He first described these results at a Conference at Caltech in 1960,[9] and in a report, Feb. 1960, "A Preliminary Report on a General Theory of Inductive Inference."[10] He clarified these ideas more fully in his 1964 publications, "A Formal Theory of Inductive Inference," Part I[11] and Part II.[12] It is a method of assigning a probability value to each hypothesis (algorithm/program) that explains a given observation, with the simplest hypothesis (the shortest program) having the highest probability and the increasingly complex hypotheses receiving increasingly small probabilities. Although he is best known for algorithmic probability and his general theory of inductive inference, he made many other important discoveries throughout his life, most of them directed toward his goal in artificial intelligence: to develop a machine that could solve hard problems using probabilistic methods.
Introduction
Algorithmic (Solomonoff) probability[13] is a concept in theoretical computer science; it is a method of assigning a probability value to each hypothesis (algorithm/program) that explains a given observation, with the simplest hypothesis (the shortest program) having the highest probability and the increasingly complex hypotheses receiving increasingly small probabilities. These probabilities form an a priori probability distribution for the observation that can then be used with Bayes' theorem to predict the most likely continuation of that observation.
Around 1960, Ray Solomonoff invented the concept of algorithmic probability. He first described his results at a Conference at Caltech in 1960,[14] and in a report, Feb. 1960, "A Preliminary Report on a General Theory of Inductive Inference."[15] He clarified these ideas more fully in his 1964 publications, "A Formal Theory of Inductive Inference," Part I[16] and Part II.[17]
Algorithmic Probability is a unique melding of ideas about computing, a priori and conditional probability using Bayes' Theorem, philosophical concepts about simplicity (Occam's Razor), and retaining multiple hypotheses (Epicurus). A universal a priori probability distribution, governed by algorithmic probability, is generated. The practical goal of algorithmic probability is Solomonoff's General Theory of Inductive Inference: a universal theory of prediction.
Algorithmic probability is a mathematically formalized combination of Occam's razor,[18][19][20][21] and the Principle of Multiple Explanations.
Probability and Bayes theorem
Suppose there is a set of observations of some data, and a set of hypotheses that are candidates for generating the data. What is the probability that a particular hypothesis is the one that actually generated the data? If there is enough prior data, frequency theory can be used: the relative probabilities of the hypotheses are found by taking the ratio of the number of favorable outcomes to the total number of possible outcomes in the past. How then to adjust this likelihood when a new set of observations occurs and you want to combine them all? The mathematician Thomas Bayes (1702-1761) developed an elegant rule generalizing how to change an existing hypothesis in the light of new evidence; the change is a function of the new evidence and the previous knowledge (the prior probability). His formula says the probability of two events happening is equal to the conditional probability of one event occurring, given that the other has already occurred, multiplied by the probability of the other event happening. Bayes' rule is probabilistic, but exact, and with more data, converges toward certainty. Many times, however, there is little prior data, so the probabilities can't be used reliably. If there is no prior data at all, then there is no way to assign probability and no way to use Bayes' rule. This is the problem that algorithmic probability treats. It provides a mathematically rigorous way of getting an a priori probability under all circumstances, even when there is no data at all.
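A small numeric illustration of this updating in Python (the prior and likelihood values are invented for the example):

    priors      = {"H1": 0.7, "H2": 0.3}   # prior probability of each hypothesis
    likelihoods = {"H1": 0.2, "H2": 0.9}   # P(new evidence | hypothesis)

    evidence = sum(priors[h] * likelihoods[h] for h in priors)              # P(evidence)
    posteriors = {h: priors[h] * likelihoods[h] / evidence for h in priors}
    print(posteriors)   # roughly {'H1': 0.34, 'H2': 0.66}: the new evidence favours H2

The posterior re-weights the prior by how well each hypothesis explains the new evidence; without a prior, this update cannot even be started, which is the gap algorithmic probability fills.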
Occam's Razor and Epicurus' Theory of Multiple Explanations
Solomonoff combined several ideas to find a solution to this problem. There are two main philosophical ideas at work. The first is the principle of Occam's Razor, which is usually understood to mean that among all hypotheses that can explain the event one should choose the simplest. The second is Epicurus' principle of multiple explanations, which advocates keeping all hypotheses that can explain the event. Algorithmic probability combines these two ideas by keeping as many hypotheses as possible, while ordering their likelihood according to how simple each one is.
To do this Solomonoff used a new definition of simplicity based on computers and binary coding.
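A sketch of that combination in Python (the hypotheses and their description lengths are assumed, purely for illustration): every hypothesis consistent with the data is kept, but each is weighted by 2^(-description length), so simpler ones dominate and none is assigned exactly zero.

    # hypothesis -> assumed description length in bits
    consistent_hypotheses = {
        "repeat '01' forever":      8,
        "all zeros, then a one":   30,
        "independent random bits": 100,
    }
    weights = {h: 2.0 ** -n for h, n in consistent_hypotheses.items()}
    total = sum(weights.values())
    for h, w in sorted(weights.items(), key=lambda kv: -kv[1]):
        print(f"{h}: relative weight {w / total:.6f}")
    # the shortest description gets nearly all of the weight, but none gets zero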
Turing Machines and Binary Coding
Solomonoff's definition of simplicity derives from the binary coding used by computers. He uses a Universal Turing machine, a computing device which takes a tape with a string of symbols on it as an input, and can respond to a new given symbol by changing its internal state, writing a new symbol on the tape, shifting the tape right or left to the next symbol, or halting. He provides a randomly generated input program. The program computes some possibly infinite output.
The algorithmic probability of any given finite output prefix q is the sum of the probabilities of the programs that compute something starting with q. Certain long objects with short programs have high probability.
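The construction can be imitated on a toy scale. The Python sketch below uses an invented two-mode interpreter in place of a real universal Turing machine, and its program set is not prefix-free, so it only illustrates the idea of summing 2^(-program length) over programs whose output starts with a given prefix; it is not a true universal semimeasure.

    from itertools import product

    def toy_run(program, n_out):
        """Invented toy interpreter: first bit 0 = output the rest literally,
        first bit 1 = cycle the rest of the program forever."""
        if not program:
            return []
        mode, body = program[0], program[1:]
        if mode == 0:
            return body[:n_out]
        if not body:
            return []
        return [body[i % len(body)] for i in range(n_out)]

    def toy_algorithmic_prob(prefix, max_len=12):
        """Sum 2^-|p| over all toy programs whose output begins with `prefix`."""
        total = 0.0
        for length in range(1, max_len + 1):
            for bits in product([0, 1], repeat=length):
                if toy_run(list(bits), len(prefix)) == list(prefix):
                    total += 2.0 ** -length
        return total

    print(toy_algorithmic_prob([0, 1, 0, 1, 0, 1]))   # periodic prefix: larger value
    print(toy_algorithmic_prob([0, 1, 1, 0, 0, 0]))   # less regular prefix: smaller value

The periodic string has a very short "repeat" program, so it collects much more weight than the less regular string, matching the claim that certain long objects with short programs have high probability.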
Inductive Inference: Solomonoff Theory of Prediction
Algorithmic probability is the main ingredient of Ray Solomonoff's theory of inductive inference, the theory of prediction based on observations. Given a sequence of symbols, which will come next? Solomonoff's theory provides an answer that is optimal in a certain sense, although it is incomputable. Unlike, for example, Karl Popper's informal inductive inference theory, however, Solomonoff's is mathematically rigorous.
Algorithmic probability is closely related to the concept of Kolmogorov complexity. The Kolmogorov complexity of any computable object is the length of the shortest program that computes it and then halts. The invariance theorem shows that it is not really important which computer we use.
Solomonoff's enumerable measure is universal in a certain powerful sense, but it ignores computation time. In order to deal with this problem Solomonoff developed a way to search for solutions by restricting the time allowed to search, and within that time frame, allowing the shorter programs more time to search than the longer ones. This concept is called Levin's search, since it is similar to and partly based on the method Levin used for other computer problems.
A description of algorithmic probability and how it was discovered is Solomonoff's "The Discovery of Algorithmic Probability", Journal of Computer and System Sciences, Vol 55, No. 1, pp 73–88, August 1997. The paper, as well as most of the others mentioned here, is available on his website at the publications page.
See also
- Kolmogorov complexity
- Inductive inference
- Ming Li and Paul Vitanyi, An Introduction to Kolmogorov Complexity and Its Applications. Springer-Verlag, N.Y., 1997, includes historical notes on Solomonoff as well as a description and analysis of his work.
Introduction
Algorithmic information theory is the area of computer science that studies Kolmogorov complexity and other complexity measures on strings (or other data structures).
The concept and theory of Kolmogorov Complexity are based on a crucial theorem first discovered by Ray Solomonoff, who published it in 1960, describing it in "A Preliminary Report on a General Theory of Inductive Inference" (see ref) as a side product of his invention of Algorithmic Probability. He gave a more complete description in his 1964 publications, "A Formal Theory of Inductive Inference," Part 1 and Part 2 in Information and Control (see ref).
Andrey Kolmogorov later independently invented this theorem as a measure of information content, first describing it in 1965, Problems Inform. Transmission, 1, (1965), 1-7. Gregory Chaitin also invented it independently, submitting two reports on it in 1965: a preliminary investigation published in 1966 (J. ACM, 13 (1966)) and a more complete discussion in 1969 (J. ACM, 16 (1969)).
The theorem says that among algorithms that decode strings from their descriptions (codes) there exists an optimal one. This algorithm, for all strings, allows codes as short as allowed by any other algorithm, up to an additive constant that depends on the algorithms but not on the strings themselves. Solomonoff used this algorithm, and the code lengths it allows, to define a string's 'universal probability', on which inductive inference of the string's subsequent digits can be based. Kolmogorov used this theorem to define several functions of strings: complexity, randomness, and information.
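In the now-standard notation (assumed here rather than quoted from these papers), the theorem says there is an optimal machine U such that for every decoding algorithm A there is a constant c_A with

    K_U(x) \le K_A(x) + c_A \quad \text{for all strings } x

where K_A(x) is the length of the shortest code for x under A; the constant depends on A but not on x.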
When Kolmogorov became aware of Solomonoff's work, he acknowledged Solomonoff's priority (IEEE Trans. Inform Theory, 14:5(1968), 662-664). For several years, Solomonoff's work was better known in the Soviet Union than in the Western World. The general consensus in the scientific community, however, was to associate this type of complexity with Kolmogorov, who was concerned with randomness of a sequence while Algorithmic Probability became associated with Solomonoff, who focused on prediction using his invention of the universal a priori probability distribution.
There are several other variants of Kolmogorov complexity or algorithmic information. The most widely used one is based on self-delimiting programs and is mainly due to Leonid Levin (1974).
"Andrey Kolmogorov later independently published this theorem in Problems Inform. Transmission, 1, (1965), 1-7. Gregory Chaitin also presents this theorem in J. ACM, 16 (1969). Chaitin's paper was submitting October 1966, revised in December 1968 and cites both Solomonoff's and Kolmogorov's papers."
- ^ Cite error: the named reference "Reasoning" was invoked but never defined.
- ^ Solomonoff, R., "A Preliminary Report on a General Theory of Inductive Inference", Report V-131, Zator Co., Cambridge, Ma., Feb 4, 1960. See also Solomonoff, R., "A Formal Theory of Inductive Inference, Part I", Information and Control, Vol 7, No. 1, pp 1-22, March 1964, and Solomonoff, R., "A Formal Theory of Inductive Inference, Part II", Information and Control, Vol 7, No. 2, pp 224-254, June 1964.
- ^ Cite error: the named reference "Uncertain reasoning" was invoked but never defined.
- ^ "An Inductive Inference Machine" (pdf scanned copy of the original) (version published in 1957: "An Inductive Inference Machine," IRE Convention Record, Section on Information Theory, Part 2, pp. 56-62).
- ^ Markoff, John (January 9, 2010). "Ray Solomonoff, Pioneer in Artificial Intelligence, Dies at 83". The New York Times. Retrieved January 11, 2010.
- ^ Vitanyi, P. "Obituary: Ray Solomonoff, Founding Father of Algorithmic Information Theory"
- ^ (pdf scanned copy of the original)
- ^ detailed description of Algorithmic Probability in Scholarpedia
- ^ Paper from conference on "Cerebral Systems and Computers", California Institute of Technology, Feb 8-11, 1960, cited in "A Formal Theory of Inductive Inference, Part I", 1964, p. 1
- ^ Solomonoff, R., "A Preliminary Report on a General Theory of Inductive Inference", Report V-131, Zator Co., Cambridge, Ma. Feb 4, 1960, revision, Nov., 1960.
- ^ Solomonoff, R., "A Formal Theory of Inductive Inference, Part I" Information and Control, Vol 7, No. 1 pp 1-22, March 1964.
- ^ Solomonoff, R., "A Formal Theory of Inductive Inference, Part II" Information and Control, Vol 7, No. 2 pp 224-254, June 1964.
- ^ detailed description of Algorithmic Probability in Scholarpedia
- ^ Paper from conference on "Cerebral Systems and Computers", California Institute of Technology, Feb 8-11, 1960, cited in "A Formal Theory of Inductive Inference, Part I", 1964, p. 1
- ^ Solomonoff, R., "A Preliminary Report on a General Theory of Inductive Inference", Report V-131, Zator Co., Cambridge, Ma. Feb 4, 1960.
- ^ Solomonoff, R., "A Formal Theory of Inductive Inference, Part I" Information and Control, Vol 7, No. 1 pp 1-22, March 1964.
- ^ Solomonoff, R., "A Formal Theory of Inductive Inference, Part II" Information and Control, Vol 7, No. 2 pp 224-254, June 1964.
- ^ Induction: From Kolmogorov and Solomonoff to De Finetti and Back to Kolmogorov JJ McCall - Metroeconomica, 2004 - Wiley Online Library.
- ^ Foundations of Occam's razor and parsimony in learning from ricoh.com D Stork - NIPS 2001 Workshop, 2001
- ^ Occam's razor as a formal basis for a physical theory from arxiv.org AN Soklakov - Foundations of Physics Letters, 2002 - Springer
- ^ Beyond the Turing Test from uclm.es J HERNANDEZ-ORALLO - Journal of Logic, Language, and …, 2000 - dsi.uclm.es
- ^ "Algorithmic Probability, Theory and Applications," In Information Theory and Statistical Learning, Eds Frank Emmert-Streib and Matthias Dehmer, Springer Science and Business Media, 2009, p. 11