What is the continuous analogue for this? Btyner 02:21, 13 May 2006 (UTC)

There is no continuous analog for self-information. This could be mentioned in the article. 130.94.162.64 04:44, 16 May 2006 (UTC)
Why isn't there? It wouldn't be too hard to define a continuous "information density function" I(x) so that, e.g., integrating it from a to b would give the self-information of the event corresponding to the interval (a, b), where f(x) is some probability density function. I'm not saying this has been done, but I'd be surprised if it hasn't. 68.221.39.112 00:23, 2 December 2006 (UTC)
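For what it's worth, the event-based definition already handles intervals of a continuous variable without any new machinery: if F is the CDF, the event a < X < b has probability F(b) − F(a), and its self-information is just the negative log of that. A minimal sketch (the function name and the choice of an Exponential(1) example are mine, purely for illustration):

```python
import math

def interval_self_information(cdf, a, b, base=2):
    """Self-information of the event a < X < b, in units set by `base`."""
    p = cdf(b) - cdf(a)        # probability of the interval event
    return -math.log(p, base)  # I(event) = -log P(event)

# Example: X ~ Exponential(1), whose CDF is 1 - e^{-x}.
exp_cdf = lambda x: 1.0 - math.exp(-x)

# The event 0 < X < ln 2 has probability 1/2, hence about 1 bit.
print(interval_self_information(exp_cdf, 0.0, math.log(2)))
```

So no separate "density of information" is needed for events of positive probability; the question only gets delicate for single points, where P(X = x) = 0.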

Shouldn't Hartley be mentioned here? 139.179.137.234 (talk) 12:49, 15 April 2008 (UTC)

The continuous analog doesn't need an integral, just as the discrete one doesn't have a sum. It is simply the log of the likelihood instead of the log of the probability. This may vary by application; at least in the case of scoring rules, the log of the likelihood is indeed the continuous analog. 199.46.198.232 (talk) 21:14, 8 June 2011 (UTC)
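To make the scoring-rule point concrete: the logarithmic score for a continuous forecast evaluates the negative log of the predictive density at the realized value, which is the direct counterpart of −log p. A minimal sketch (the function name and the standard-normal example are my own choices):

```python
import math

def log_score(density, x, base=math.e):
    """Negative log density at x: the continuous counterpart of -log p
    used by the logarithmic scoring rule."""
    return -math.log(density(x), base)

# Standard normal density, chosen here just for illustration.
std_normal_pdf = lambda x: math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

print(log_score(std_normal_pdf, 0.0))  # = log(sqrt(2*pi)) ≈ 0.919 nats
```

Note one difference from a true self-information: −log f(x) can be negative wherever the density exceeds 1, so it is a score rather than an information content in the strict sense.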

Entropy = surprisal?


Who wrote that log(1/p) is entropy? Isn't entropy the average information, H = sum p log(1/p)? --Javalenok (talk) 20:42, 16 August 2014 (UTC)

Incomprehensible


In information theory, self-information is a measure of the information content[clarification needed] associated with the outcome of a random variable. It is expressed in a unit of information, for example bits, nats, or hartleys, depending on the base of the logarithm used in its calculation. The term self-information is also sometimes used as a synonym of entropy, i.e. the expected value of self-information in the first sense, because H(X) = I(X; X), where I(X; X) is the mutual information of X with itself.[1] These two meanings are not equivalent, and this article covers the first sense only. For the other sense, see Entropy.

Hard to understand. It explains the difference between two meanings by defining the second meaning in terms of the first meaning before the first meaning has been adequately explained. Then, it motivates something about these meanings by using symbols that haven't been explained yet, and aren't even explained in the article.

It looks like someone's notes to themselves... I'll try to fix it. 89.217.13.75 (talk) 21:57, 3 February 2015 (UTC)
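One part of the quoted lede that is easy to make concrete is the units remark: the same self-information comes out in bits, nats, or hartleys purely according to the base of the logarithm. A minimal sketch (the event probability 1/8 is my own example):

```python
import math

p = 1 / 8  # probability of some event (arbitrary example)

bits     = -math.log2(p)   # base 2:  3.0 bits
nats     = -math.log(p)    # base e:  about 2.079 nats
hartleys = -math.log10(p)  # base 10: about 0.903 hartleys

print(bits, nats, hartleys)
```

The three values differ only by the constant factors log(2) and log(10), i.e. a change of log base.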

References

  1. ^ Thomas M. Cover, Joy A. Thomas (1991). Elements of Information Theory, p. 20.

Recursive (n): see recursive. Not the same as recursive.


The current version manages to pack both an infinite recursion ("surprisal ... is the expected value of the surprisal of a random event") and a contradiction ("surprisal ... is not the same as the surprisal") into the same opening sentence. It seems to be a misquote of the suggested sentence in http://bactra.org/weblog/1146.html, which uses those clauses to define self-information as distinct from surprisal. Roystgnr (talk) 16:36, 4 January 2017 (UTC)