Miscellaneous

I question the phrase 'Random Variables' - surely if they were truly random variables, there could be no question of any meaningful correlation, and the whole concept would lose all utility - <email removed> 12-July-2005 00:59 Sydney Earth :)


Not sure about my edit: what are μ and ν? — Preceding unsigned comment added by 61.88.219.66 (talk) 15:03, 11 July 2005 (UTC)

I agree that the phrase 'Random Variables' is a suboptimal naming construct. It tries to create the illusion that Statistics is an exact science, when in fact it should be named 'Observations' to recognize its empirical heritage. The name should honor its real root: analysis of data by means of rules made up for the practical purpose of getting useful information from the data :) 85.164.125.248 (talk) 17:55, 18 December 2011 (UTC)

Please don't be a total crackpot. If you want tutoring or instruction on these concepts, ask someone. Don't use the word "would", as if you were talking about something hypothetical. Admit your ignorance. Simple example: pick a married couple randomly from a large population. The husband's height is a random variable, the randomness coming from the fact that the couple was chosen randomly. The wife's height, for the same reason, is a random variable. The two are correlated, in that the conditional probability that the wife is more than six feet tall, given that the husband is more than six-and-a-half feet tall, is different from the unconditional probability that the wife is more than six feet tall. See the Wikipedia article on statistical independence of random variables; some pairs of random variables are independent; some are not. Another example: the square of the husband's age is a random variable; the husband's age is another random variable. And they are correlated. As for utility, just look at all of the practical applications of these concepts in statistical analysis. Michael Hardy 21:04, 11 July 2005 (UTC)
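A quick Monte Carlo sketch may make the height example concrete. Everything numeric here (the bivariate normal model, the means and standard deviations in cm, the 0.5 correlation) is an assumption chosen for illustration, not something stated in the discussion:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2_000_000

# Assumed bivariate normal heights in cm: husband ~ N(178, 7^2),
# wife ~ N(165, 5^2), correlation 0.5 (so cov = 0.5 * 7 * 5 = 17.5).
mean = [178.0, 165.0]
cov = [[49.0, 17.5],
       [17.5, 25.0]]
husband, wife = rng.multivariate_normal(mean, cov, size=n).T

tall_wife = wife > 183.0        # roughly six feet
tall_husband = husband > 198.0  # roughly six and a half feet

print(tall_wife.mean())                # unconditional P(wife tall): tiny
print(tall_wife[tall_husband].mean())  # conditional version: much larger
```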

The converse, however, is not true? Do you mean that covariance could be 0 with dependent variables?

Yes. Example: y = x² (where -1 ≤ x ≤ 1). Cov = 0. Dependent? Yes, it's a function!
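A numeric check of that example (a sketch; taking x uniform on [-1, 1] is an assumption consistent with the stated range):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, size=1_000_000)
y = x**2  # y is a deterministic function of x, hence fully dependent

# cov(X, Y) = E[XY] - E[X]E[Y]; by symmetry E[X] = 0 and E[X^3] = 0,
# so the sample estimate should be near zero.
print(np.mean(x * y) - np.mean(x) * np.mean(y))
```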

It is not clear what 'E' means in the equation E(XY).

E(XY) = (1/N)*sum((X_i*Y_i), i=1..N); perhaps the sum definition should be explicitly stated in this article, as well as in the expected value article? --dwee

That formula is correct only in the case in which the number of possible values of X and Y is some finite number N. More generally, the expectation could be an infinite series or an integral. At any rate, E(X) is the expected value of the random variable X. Michael Hardy 01:59, 7 October 2004 (UTC)
Regardless of the usages of the expected value in higher-level theoretical statistics, the finite-sum version should be included, since it is the formula that most people learn when they are first taught covariance. It's also the one in common practical use: unlike in the expected-value formulation, sample sizes aren't infinite (mostly). Either put both formulas in or just the finite-sum formula. Every other source I can find on the internet has the finite-sum definition, for example: http://mathworld.wolfram.com/Covariance.html.

-Trotsky — Preceding unsigned comment added by 165.214.14.22 (talk) 20:22, 8 June 2012 (UTC)
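A minimal sketch of the finite-sum form being discussed (plain Python; equal weights 1/N are assumed):

```python
def covariance(xs, ys):
    """cov(X, Y) = (1/N) * sum_i (x_i - mean(x)) * (y_i - mean(y))."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n

print(covariance([1, 2, 3, 4], [2, 4, 6, 8]))  # 2.5 for perfectly linear data
```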


Just to note that the second equation is not rendered.

It is on the browser I am using. Michael Hardy 20:26, 6 January 2004 (UTC)

the covariance definition for vector-valued random variables looks very sophisticated...

That seems like a completely subjective statement. Michael Hardy 21:51, 3 January 2005 (UTC)

What is the actual reason for defining a thing like this? Why not put the E(X_iY_j) entries into a table,

That's exactly what it says is done! Michael Hardy 21:51, 3 January 2005 (UTC)

why is a matrix needed?

The only difference between a "table" and a matrix is that one has a definition of matrix multiplication. And one often has occasion to multiply these matrices. Michael Hardy 21:51, 3 January 2005 (UTC)

your "explanation" does not explain anything as any table can be treated as matrix and multiplied with other tables. This of course does not make much sense in general. So the actual question is : why the definition (and multiplication) of a matrix with entries E(X_iY_j) makes sense ?

Your original question was completely unclear, to say the least. Maybe I'll write something on motivation of this definition at some point. Michael Hardy 20:23, 4 January 2005 (UTC)
"For column-vector valued random variables X and Y with respective expected values μ and ν, and n and m scalar components respectively, the covariance is defined to be the n×m matrix"
I don't get it either; how do you get a matrix as the cov(X,Y) when normally it is a scalar? Probably I am not understanding how you are defining X and Y. To me it sounds like the m or n components are just sample values of the random variables X and Y? --Chinasaur 02:00, 1 April 2005 (UTC)
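In case a concrete computation helps: below is a sketch of the vector case, where X has n components, Y has m components, and cov(X, Y) = E[(X − μ)(Y − ν)^T] comes out as an n × m matrix of the pairwise scalar covariances. The particular distributions are assumptions for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples = 200_000
X = rng.normal(size=(n_samples, 3))             # n = 3 components
Y = X[:, :2] + rng.normal(size=(n_samples, 2))  # m = 2 components, tied to X

Xc = X - X.mean(axis=0)  # subtract mu (estimated from the sample)
Yc = Y - Y.mean(axis=0)  # subtract nu (estimated from the sample)

# 3 x 2 matrix whose (i, j) entry is the scalar cov(X_i, Y_j)
cross_cov = Xc.T @ Yc / n_samples
print(cross_cov.round(2))  # approx [[1, 0], [0, 1], [0, 0]] by construction
```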

___

I'm sorry, but this page makes UTTERLY NO SENSE. Could someone please add a paragraph explaining things for those of us who aren't super mathematicians? 203.112.19.195 16:24, 25 July 2005 (UTC)

___

The covariance of two column vectors is stated to generate a matrix. Is there a similar function to covariance which generates a single scalar instead of a matrix, by multiplying the transpose of the first term against the unaltered column vector? Is there a reference where we could find the derivations of these terms? Ben hall 15:09, 16 September 2005 (UTC)

If X and Y are both n × 1 random vectors, so that cov(X,Y) is n × n, then the trace of the covariance may perhaps be what you're looking for. Michael Hardy 18:29, 16 September 2005 (UTC)
Sorry, but I think I may have expressed myself poorly. I was thinking in terms of a list of variables which could be described as a vector, or as a matrix. For example, if I have the Cartesian coordinates of particles in a box over a period of time, I see how I can find the covariance matrix based on each of the components for each of the particles, but I cannot see how I might find a covariance matrix based solely on the motions of the particles with respect to one another (i.e. whether they are moving in the same direction or opposing directions). For this, would it be suitable to take the inner product of the differences between the Cartesian coordinates and their averages? Also, how could I show that this is a valid approach? Thanks for your suggestions so far. Ben hall 20:02, 17 September 2005 (UTC)
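A sketch of the scalar alternative being asked about, with made-up data: the inner-product form E[(X − μ)^T (Y − ν)] gives a single number, and it coincides with the trace of the cross-covariance matrix Michael Hardy mentions:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(100_000, 3))
Y = 2.0 * X + rng.normal(size=(100_000, 3))

Xc = X - X.mean(axis=0)
Yc = Y - Y.mean(axis=0)

matrix_cov = Xc.T @ Yc / len(X)            # n x n cross-covariance matrix
scalar_cov = (Xc * Yc).sum(axis=1).mean()  # inner-product (scalar) version
print(np.trace(matrix_cov), scalar_cov)    # the two agree
```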

___

In probability theory and statistics, the covariance between two real-valued random variables X and Y, with expected values E(X) = μ and E(Y) = ν is defined as: -- it is unclear that we speak about n sets of variables {X} and {Y}. I suggest starting with In probability theory and statistics, the covariance between two real-valued random variables X and Y in the given sets {X} and {Y}, with expected values E(X) = μ and E(Y) = ν is defined as:. This is unclear too, but it better explains why we ever speak about E(X) and E(Y) without noting they are sets of variables, not just 'variables'. Also, speaking of mean values as well as expected values, with the same meaning, would be preferable. I would also suggest adding a link to http://mathworld.wolfram.com/Covariance.html.

Please comment on my comments :) I will start reorganizing the article if there are no comments within a month. --GrAndrew 13:07, 21 April 2006 (UTC)

Having a PhD in statistics somehow fails to enable me to understand what you're saying. What is this thing you're calling "n"?? Why do you speak of X and Y as being "in the given sets {X} and {Y}", and what in the world does that mean? If you change the sentence to say that, I will certainly revert. The first sentence looks fine to me, and your proposed change to it looks very bad in a number of ways, not only because it's completely cryptic. The covariance is between random variables, not between sets of random variables. And to refer to "n" without saying what it is would be stupid. Michael Hardy 18:20, 21 April 2006 (UTC)

I've just looked at that article on mathworld. It's quite confused. This article is clearer and more accurate. I'm still trying to guess what you mean by saying "they are sets of variables, not just 'variables'". In fact, they are just random variables, not sets of random variables. Michael Hardy 21:01, 21 April 2006 (UTC)

I would guess that he is coming at the problem from a time-series-analysis point of view and thinking of covariance in terms of a statistic calculated using a time series of sampled data. To someone from that background it can seem confusing to think of covariance calculated for a single variable when, in practice, you calculate it from a set of data. This is not to say I think the article should be changed, though perhaps if I found time to add something about calculating sample covariance on data it would clarify things. --Richard Clegg 12:45, 24 April 2006 (UTC)

unbiased estimation

For variance, there is a biased estimation and an unbiased estimation. Is there any unbiased estimation for covariance? Jackzhp 17:55, 2 February 2007 (UTC)

Yes. See sample covariance matrix. Prax54 (talk) 19:33, 10 August 2012 (UTC)
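To make the parallel with variance concrete, here is a small sketch (the data values are made up) contrasting the biased 1/N and unbiased 1/(N − 1) versions:

```python
import numpy as np

x = np.array([2.1, 2.5, 3.6, 4.0])
y = np.array([8.0, 10.0, 12.0, 14.0])

biased = ((x - x.mean()) * (y - y.mean())).mean()               # divide by N
unbiased = ((x - x.mean()) * (y - y.mean())).sum() / (len(x) - 1)

print(biased, unbiased)
print(np.cov(x, y, ddof=1)[0, 1])  # NumPy's default matches the unbiased form
```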

Add ?

It is customary to use $\sigma_{XY}$ to represent covariance(X,Y) and $\sigma^2$ to represent variance. Contrary to the above author's opinion that use of symbols adds nothing to the article, I believe that adding customary symbol usage will help people coming to this page while viewing material containing those symbols. Tedtoal (talk) 22:38, 12 March 2011 (UTC)

In the current article the notation $\operatorname{cov}(X,Y)$ is used to define covariance, and then the notation $\sigma_{XY}$ is used later in the "Properties" section without explaining it. If the $\sigma_{XY}$ notation is going to be used, then it should be defined before it is used. Tashiro~enwiki (talk) 01:41, 3 October 2016 (UTC)

I think the notation $\sigma_{XY}$ should be avoided, since the symbol $\sigma$ can be interpreted as standard deviation. I've already changed some of the equations in the Properties section to use the notation $\operatorname{cov}(X,Y)$. However, I've not changed all of them; further replacements are needed. Cristtobal (talk) 18:11, 7 October 2017 (UTC)

A simpler definition exists

Hello, I don't think this page is all that great for the lay person; as a matter of fact, I think the Definition section over-complicates things. Wolfram MathWorld has us beat here: http://mathworld.wolfram.com/Covariance.html. Anyone object to me incorporating this information into this page? I'm particularly looking for feedback from folks who are significantly more stats-minded than myself. :) Thanks, dmnapolitano (talk) 18:17, 27 September 2011 (UTC)

Anyway: the present definition on MathWorld may seem simpler, but it is not correct. Nijdam (talk) 10:28, 9 June 2012 (UTC)

Better Formula?

The article states the last formula may cause catastrophic cancellation if used in computing, but the first formula clearly can't be computed directly in that form. So what equation should we use? 24.38.10.6 (talk) 14:34, 29 May 2013 (UTC)
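One standard answer, offered here as a sketch rather than a quotation from the article: compute from the centered data (a "two-pass" formula) instead of via E[XY] − E[X]E[Y]. The toy numbers below are chosen to trigger the cancellation:

```python
def two_pass_cov(xs, ys):
    """Sample covariance computed from centered data: numerically safer."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n

def shortcut_cov(xs, ys):
    """E[XY] - E[X]E[Y] computed directly: prone to cancellation."""
    n = len(xs)
    return sum(x * y for x, y in zip(xs, ys)) / n - (sum(xs) / n) * (sum(ys) / n)

big = 1.0e9
xs = [big + 1.0, big + 2.0, big + 3.0]
ys = [big + 4.0, big + 6.0, big + 8.0]
print(two_pass_cov(xs, ys))   # 1.333... (the exact value is 4/3)
print(shortcut_cov(xs, ys))   # visibly wrong: the huge terms cancel badly
```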

Is this correct?

The following was in the section "Properties"; is it correct? (If so, we can add it again.)

For sequences X_1, ..., X_n and Y_1, ..., Y_m of random variables, we have

$$\operatorname{cov}\left(\sum_{i=1}^n X_i,\; \sum_{j=1}^m Y_j\right) = \sum_{i=1}^n \sum_{j=1}^m \operatorname{cov}(X_i, Y_j)$$

Tal Galili (talk) 06:39, 29 September 2013 (UTC)
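For what it's worth, this is the standard bilinearity property of covariance, and it is easy to check numerically; a sketch with assumed data (the identity also holds exactly for the sample covariance, so both sides agree up to rounding):

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, n, m = 500_000, 3, 2
X = rng.normal(size=(n_samples, n))
Y = X @ rng.normal(size=(n, m)) + rng.normal(size=(n_samples, m))

def cov(a, b):
    # sample covariance of two 1-D arrays
    return ((a - a.mean()) * (b - b.mean())).mean()

lhs = cov(X.sum(axis=1), Y.sum(axis=1))
rhs = sum(cov(X[:, i], Y[:, j]) for i in range(n) for j in range(m))
print(lhs, rhs)  # equal up to floating-point rounding
```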

Hoeffding Covariance Identity

I added the Hoeffding Covariance Identity to the article. I was not able to find a proof online, though; how should I reference the result in this case? — Preceding unsigned comment added by 46.227.2.125 (talk) 16:47, 2 August 2016 (UTC)
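For readers landing here: the identity usually given that name is the following (stated from memory as a cross-check, so verify against the article and a textbook source):

```latex
\operatorname{cov}(X,Y)
  = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty}
      \bigl( F_{X,Y}(x,y) - F_X(x)\,F_Y(y) \bigr)\,\mathrm{d}x\,\mathrm{d}y
```

where $F_{X,Y}$ is the joint distribution function, $F_X$ and $F_Y$ are the marginals, and the relevant expectations are assumed to exist.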

Okay, done! Someone check the formatting ;-) — Preceding unsigned comment added by 46.227.2.125 (talk) 16:58, 2 August 2016 (UTC)

Values given in the Example

Hi all!! I think there is an error in the calculation of the standard deviation in the first example, before the value of the covariance between X and Y is calculated. For the variable X, which takes values in {1,2}, with N=2 and mean 3/2, I do indeed get 1/2. However, for the variable Y, taking values in {1,2,3}, applying the formula with N=3 and mean 2, the value I get for the standard deviation is the square root of 2/3, not the square root of 1/2 as the article says. Can anyone tell me where I am wrong? [posted 1 October 2017 by User:Mmatth] —Preceding undated comment added 21:29, 1 October 2017 (UTC)

Y takes on values 1 with probability 1/4, 2 with probability 1/2, and 3 with probability 1/4, giving a mean of 2. So

$$\sigma_Y^2 = \tfrac{1}{4}(1-2)^2 + \tfrac{1}{2}(2-2)^2 + \tfrac{1}{4}(3-2)^2 = \tfrac{1}{4} + 0 + \tfrac{1}{4} = \tfrac{1}{2}.$$
Loraof (talk) 20:26, 4 October 2017 (UTC)
I came here exactly for that. I also got $\sqrt{1/2}$ whereas the article claims $\sqrt{2/3}$. Unless both my and Loraof's calculations are wrong, there still seems to be an error in the article, in the $\sigma_Y$ part.
(Erithion (talk) 14:24, 8 January 2020 (UTC))
@Deacon Vorbis: if it is not broken, then you may be able to clear this up? HelpUsStopSpam (talk) 22:56, 14 January 2020 (UTC)
The problem was that there was a pattern of people who kept changing 1/2 to 2/3. I missed one such change, and then when someone else changed it back, I got confused and reverted them without realizing the value had been altered in the meantime. I have since put it back to the correct value, as per the notes above. –Deacon Vorbis (carbon • videos) 23:00, 14 January 2020 (UTC)
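A short check of the arithmetic above (plain Python; the 1/4, 1/2, 1/4 weights are the ones from Loraof's reply, the point being that the three values of Y are not equally weighted):

```python
values = [1, 2, 3]
probs = [0.25, 0.5, 0.25]

mean = sum(p * v for p, v in zip(probs, values))              # 2.0
var = sum(p * (v - mean) ** 2 for p, v in zip(probs, values)) # 0.5
print(mean, var, var ** 0.5)  # sigma_Y = sqrt(1/2), not sqrt(2/3)
```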

No reference to physics???

I wonder why there is no link (at the very least) to covariance in physics, e.g. general covariance in relativity. — Preceding unsigned comment added by 42.111.162.152 (talk) 02:47, 10 July 2019 (UTC)

This article is not about the same definition of covariance. The "about" template at the beginning notes that the Covariance (disambiguation) page links to several articles which mention covariance in mathematics and physics. —Anita5192 (talk) 04:44, 10 July 2019 (UTC)

COV definition & example use are different (aren't they?) edit

Under the heading "Definition", COV(X,Y) = E[(X-E[X])(Y-E[Y])] = E[XY] - E[X]E[Y]

Under the heading "Discrete random variables", COV(X,Y) = SUM( i=1:n; P_i*(x_i-E[X])*(y_i*E[Y]) ); I'll refer to this definition below as COV1. For discrete r.v's, this is the same as the defn in terms of expected value. All good.

Under the heading "Example", COV(X,Y) = SigmaXY = SUM( (x,y) elementof S; f(x,y)*(x-MuX)*(y-MuY) ); I'll refer to this sum as COV2.

My understanding is that COV2 can be written as COV2(X,Y) = SUM( j=1:n; SUM( k=1:m; f(x_j, y_k)*(x_j-MuX)*(y_k-MuY) ) ); this is how the example calculates the COV from the table.

It would be helpful if the article could clear up how COV1 is equivalent to COV2 (I've not been able to work it out, nor can I see how to do so).

This ambiguity is widespread (and is not confined solely to this article); any number of stats books use COV1 and COV2 interchangeably to calculate the correlation coefficient, often using both in the same breath.

Dig a bit more and the defn of expected value is ambiguous too. Is it (a) E[X] = SUM( j=1:n; x_j*f(x_j) ), OR is it (as given for joint distns) (b) E[XY] = SUM( j=1:n; SUM( k=1:m; f(x_j, y_k)*x_j*y_k ) )? Does an E[XY] built in the style of (a) equal the E[XY] of (b)?

There's surely a gap in my stats knowledge, but shouldn't these books, and articles like this one, fill it?

Thanks. DP3285 (talk) 04:28, 6 November 2022 (UTC)
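They do come out equal: COV1 runs a single index over the outcomes (x_i, y_i) with probabilities P_i, while COV2 runs over the support points (x, y) with the joint pmf f(x, y); grouping equal outcomes turns one sum into the other, term for term. A toy check (the pmf below is made up for illustration):

```python
from itertools import product

# Joint pmf f(x, y) on a 2 x 3 grid; x in {0, 1}, y in {1, 2, 3}.
f = {(0, 1): 0.10, (0, 2): 0.20, (0, 3): 0.10,
     (1, 1): 0.15, (1, 2): 0.25, (1, 3): 0.20}

ex = sum(p * x for (x, y), p in f.items())
ey = sum(p * y for (x, y), p in f.items())

# COV1: one sum over the outcomes i = (x_i, y_i) with probability P_i.
cov1 = sum(p * (x - ex) * (y - ey) for (x, y), p in f.items())

# COV2: a double sum over the grid of support points -- the same terms.
cov2 = sum(f[(x, y)] * (x - ex) * (y - ey)
           for x, y in product([0, 1], [1, 2, 3]))

print(cov1, cov2)  # identical
```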

I believe that the definition presents the covariance for the population (by dividing by N), whereas the discrete subsection gives the covariance of the sample (dividing by N-1). There should be a clarification on this, and maybe the addition of the sample covariance equation. I would do it myself, but my stats knowledge is not so good and I'm afraid of making a mistake. — Preceding unsigned comment added by Lacuerdaloca (talkcontribs) 12:43, 26 May 2023 (UTC)