Talk:Hewitt–Savage zero–one law

Latest comment: 4 years ago by Skinnerd in topic Example 2

Example

The example seems nearly vacuous, since the only way a random variable taking values in [0, ∞) could fail to have strictly positive expectation is for it to be almost surely zero. Michael Hardy 22:51, 21 August 2006 (UTC)

The emphasis of that example, though, isn't the point about the expectation but rather the application of the Hewitt–Savage zero–one law to see that the probability of the random series diverging is either zero or one. I agree that the example is very simple (and is, indeed, the same one as used for Kolmogorov's zero-one law), but that's not necessarily a bad thing. I would welcome a better showcase example for this result. Sullivan.t.j 23:12, 21 August 2006 (UTC)

I have added a comment on the example being rather simple and the distinction between being able to apply a zero-one law and being able to work out which of the two possible values is the correct one. I have also added a similar warning to Kolmogorov's zero-one law. Sullivan.t.j 23:32, 21 August 2006 (UTC)
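The dichotomy in the example can also be seen numerically. Below is a minimal Monte Carlo sketch (my own illustration, not from the article), assuming the example is the random series Σ X_n with iid non-negative X_n, taking Exponential(1) steps for concreteness:

```python
import random

random.seed(42)

def partial_sum(n_terms):
    # iid non-negative steps; Exponential(1) has E[X] = 1 > 0, so by
    # the Hewitt-Savage zero-one law the event {sum X_n diverges}
    # has probability 0 or 1 -- here it is 1.
    return sum(random.expovariate(1.0) for _ in range(n_terms))

# Each simulated path's partial sum grows roughly linearly (strong law
# of large numbers), consistent with divergence having probability one.
sums = [partial_sum(10_000) for _ in range(5)]
print(all(s > 9_000 for s in sums))  # True
```

Of course, the simulation only illustrates which of the two possible values obtains; the zero-one law itself is what rules out anything in between.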

The definition given here of an exchangeable event collapses to that of an event in the σ-algebra generated by the X_i. In fact, if we assume the X_i to be independent and identically distributed then, taking Borel sets B_i, we have P{X_j \in B_i}P{X_i \in B_i} = P{X_i \in B_i, X_j \in B_j} = P{X_j \in B_i}P{X_i \in B_i} for each i, j \in N; now we can apply the permutation (i, j) to the middle event and the equality between the two sides still holds. What I mean is that the distribution of a sequence (X_1, X_2, ...) of iid variables is ALWAYS independent of their order, so every event generated by these variables would be "exchangeable" according to the definition; but we don't want this. What we could do is either remove the iid hypothesis or define an exchangeable event to be a set in σ(X_1, ...) that remains fixed under the action of a finite permutation of the variables. (Loève does the first, Shiryaev the latter.) The Hewitt–Savage law can be proved using either definition.

Sorry, the joint distribution was completely misprinted. It should be

P{X_j \in B_j}P{X_i \in B_i} = P{X_i \in B_i, X_j \in B_j} = P{X_j \in B_i}P{X_i \in B_j} --213.140.19.113 12:29, 18 April 2007 (UTC)
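For what it's worth, the corrected identity can be checked exactly in a toy case. The sketch below is my own illustration, assuming iid X_1, X_2 uniform on {0, 1} and the arbitrary choice of Borel sets B_1 = {1}, B_2 = {0}:

```python
from itertools import product
from fractions import Fraction

# iid X_1, X_2 uniform on {0, 1}; B_1 = {1}, B_2 = {0} (an arbitrary choice)
p = {0: Fraction(1, 2), 1: Fraction(1, 2)}
B1, B2 = {1}, {0}

def prob(event):
    # exact probability of an event under the joint law of (X_1, X_2)
    return sum(p[x1] * p[x2] for x1, x2 in product(p, p) if event(x1, x2))

lhs = prob(lambda x1, x2: x1 in B1 and x2 in B2)  # P{X_1 in B_1, X_2 in B_2}
rhs = prob(lambda x1, x2: x2 in B1 and x1 in B2)  # roles of X_1, X_2 swapped
print(lhs == rhs)  # True: the iid law is invariant under the transposition
```

This is exactly the point above: the joint law is unchanged by the transposition, which is why a definition based only on invariance of probabilities is too weak.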


The statement of the theorem assumes that (X_n) is iid, then goes on to assume that the probability of an event is unchanged by permutations of the probability space. This second assumption is implied by the iid assumption. —Preceding unsigned comment added by 99.54.67.143 (talk) 05:08, 12 November 2008 (UTC)

No, e.g. the event {X_1 = 0} provides a counterexample: its probability can lie strictly between 0 and 1, so the assumption cannot be dropped. - Saibod (talk) 22:51, 3 December 2008 (UTC)

Another example

Many thanks to the writers of this article for enriching Wikipedia. I added another example, which I recently saw in Shiryaev's book, to the text. I thought it was a simple addition, so I just went ahead with it without asking for permission. I hope it is OK. Best wishes, Lee Carla (talk) 14:15, 17 December 2008 (UTC)

Finite permutation

The text mentions "finite permutations", but it does not define them. The link to permutation does not define them either. Albmont (talk) 16:14, 11 December 2008 (UTC)

Example 2

You say Example 2 is a continuation of Example 1. However, if you use the same sequence of random variates as in Example 1, i.e. the sequence (X_n) taking values in [0, ∞), then the only way the sum could ever return to the origin is for all the X_n to take the value 0. Otherwise the sum will always be greater than zero, and therefore the probability that the sum is ever zero after the first non-zero X_n is zero. Is that right? Skinnerd (talk) 13:38, 6 May 2020 (UTC)
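That reasoning can be verified mechanically: with non-negative steps the partial sums are non-decreasing, so the walk can only sit at the origin before its first non-zero step. A small sketch (my own illustration, assuming steps drawn from {0, 1, 2} for concreteness):

```python
import random
from itertools import accumulate

random.seed(1)
steps = [random.choice([0, 1, 2]) for _ in range(1_000)]
sums = list(accumulate(steps))  # the partial sums of the walk

first_nonzero = next(i for i, x in enumerate(steps) if x != 0)
# After the first non-zero step, the walk never revisits the origin,
# since each partial sum from that point on includes a positive term.
print(all(s > 0 for s in sums[first_nonzero:]))  # True
```

So for a walk to have any chance of returning to the origin, the steps need to take values of both signs, which suggests Example 2 is meant to use a different (signed) step distribution than Example 1.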