Wikipedia:Reference desk/Archives/Mathematics/2009 January 16

Mathematics desk
Welcome to the Wikipedia Mathematics Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


January 16

Probability and the universe

If space is infinite (or ∞) then what is the probability of there being intelligent life in the remaining cosmos, and more specifically, how would one express this in mathematical/statistical format? —Preceding unsigned comment added by 166.189.133.93 (talk) 03:45, 16 January 2009 (UTC)[reply]

We don't know. It's not even clear that it's a meaningful question. Algebraist 03:49, 16 January 2009 (UTC)[reply]
You might be looking for Drake equation. JackSchmidt (talk) 03:53, 16 January 2009 (UTC)[reply]
If any given segment of space (say, a cube a billion light-years to a side) has a positive probability of containing intelligent life, then an infinite space almost surely contains intelligent life. But space isn't believed to be infinite in that sense. **CRGreathouse** (t | c) 04:19, 16 January 2009 (UTC)[reply]
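(To spell out the "almost surely" step under those assumptions, namely independent disjoint regions each with the same fixed probability p > 0 of containing intelligent life, the computation is:)

```latex
P(\text{no intelligent life in any of } n \text{ regions}) = (1-p)^n,
\qquad
P(\text{intelligent life in at least one region}) = 1 - (1-p)^n \;\longrightarrow\; 1 \text{ as } n \to \infty.
```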
Assuming such a probability is meaningful, and disjoint volumes of space have independent chances of containing intelligence. Algebraist 04:24, 16 January 2009 (UTC)[reply]
Why wouldn't it be meaningful? --Tango (talk) 04:30, 16 January 2009 (UTC)[reply]
For the same reason 'given that Alpha Centauri has a planet made of blue cheese, what is the probability that God exists?' might not be meaningful: it asks for a probability for a one-off statement that is either true or false, and it conditions it on another statement which is quite possibly false. Some approaches to the philosophy of probability allow sense to be made of this, but others do not. Algebraist 12:41, 16 January 2009 (UTC)[reply]
Well sure, for something like that you need Bayesian probability, but people rarely seem to have a problem with that. You don't even strictly need it for this problem since you have lots of large regions of space to consider and can talk about the frequency at which they contain life (obviously we can't actually observe a large enough number of them to make a useful estimation of the probability that way, but mathematically it is possible). --Tango (talk) 15:36, 16 January 2009 (UTC)[reply]
Sure, Bayesianism works here, but the frequency interpretation doesn't. We aren't talking about the probability of life in large regions of space, but the probability of intelligence in the universe at all. To be a frequentist about that, we'd need a lot of spare (infinite) universes. Algebraist 15:40, 16 January 2009 (UTC)[reply]
In my answer, I was positing a constant probability, which would mean independence; the meaning would be definitional. But how that applies to reality is unclear, since defining "intelligent life" is difficult, positing a constant probability seems crazy, even as a heuristic, and assuming infinite (not merely unbounded) extent seems wrong (though see below). **CRGreathouse** (t | c) 04:51, 16 January 2009 (UTC)[reply]
Who says space isn't believed to be infinite in that sense? I thought it was still very much an open question. --Tango (talk) 04:30, 16 January 2009 (UTC)[reply]
I withdraw the statement, then; that's outside my field of expertise. I had thought that Olbers' paradox ruled out an essentially constant distribution of matter, and quantum effects don't allow a fractal distribution that would escape infinite gravity. But I really don't know what the present state of knowledge is. **CRGreathouse** (t | c) 04:51, 16 January 2009 (UTC)[reply]
Olbers' paradox is avoided because the observable universe is finite (which is a consequence of the age of the universe and the speed of light both being finite), but it is unknown whether the universe extending beyond what is currently observable is finite or not. Dragons flight (talk) 06:04, 16 January 2009 (UTC)[reply]
Once again, I'm inexpert here, but doesn't gravity still cause a version of Olbers' paradox? (Infinite mass causing infinite gravity causing all matter to recede to a point at nearly c?) **CRGreathouse** (t | c) 06:13, 16 January 2009 (UTC)[reply]
Gravity propagates at c, just as light does, so the same explanation applies. Algebraist 12:41, 16 January 2009 (UTC)[reply]
I don't know about space, but there is supposed to be a large but finite number of galaxies, stars, and planets in the observable universe, which means a finite chance that one of them will host intelligent life (or two of them, if we consider intelligent life to exist on Earth). StuRat (talk) 05:57, 16 January 2009 (UTC)[reply]
The observable universe is definitely finite (by current understanding), but the whole universe is almost certainly larger than the observable universe. --Tango (talk) 15:36, 16 January 2009 (UTC)[reply]
As an obvious remark I would recall that such a probability is (as a probability always is) just a subjective measure of (un)certainty, especially because we are not talking about a reproducible fact. That is, it's just the degree of likelihood we assign to the existence of intelligent life somewhere, given the information we have about the universe and the formation and evolution of life. Since our information today on these matters is still quite poor, we have to base the estimation on assumptions largely due to the indifference principle, and it's well possible that different people have different probabilities. --PMajer (talk) 12:58, 16 January 2009 (UTC)[reply]

If a coefficient is significantly different from 1.00

I have to run the following regression on a set of data and find out whether the 'beta1' coefficient is significantly different from 1.00.

y = alpha + beta1*x + beta2*z + noise

Statistical packages (I use SPSS) by default test whether it is different from 0. How can I set up a program in SPSS so that I test whether it is statistically different from 1.00? I will appreciate your help.--24.214.202.118 (talk) 04:28, 16 January 2009 (UTC)[reply]

You could just subtract the x-value from each y-value and then test whether the coefficient of x in that regression is significantly different from 0... **CRGreathouse** (t | c) 04:52, 16 January 2009 (UTC)[reply]
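A minimal sketch of both routes in Python with statsmodels (an assumption here, since the question is about SPSS; the data and variable names are made up): refit on y - x and test that coefficient against 0, or compute t = (beta1_hat - 1)/SE directly from the original fit.

```python
# Sketch: testing whether beta1 differs from 1 (assumes numpy/statsmodels; SPSS syntax differs).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
x = rng.normal(size=n)
z = rng.normal(size=n)
y = 0.5 + 1.0 * x + 2.0 * z + rng.normal(scale=0.3, size=n)  # true beta1 = 1 in this toy data

X = sm.add_constant(np.column_stack([x, z]))  # columns: const, x, z

# Approach 1: regress (y - x) on x and z; testing the x-coefficient against 0
# is equivalent to testing beta1 = 1 in the original model.
fit1 = sm.OLS(y - x, X).fit()
print(fit1.tvalues[1], fit1.pvalues[1])

# Approach 2: fit the original model and form t = (beta1_hat - 1) / SE(beta1_hat).
fit2 = sm.OLS(y, X).fit()
t = (fit2.params[1] - 1.0) / fit2.bse[1]
print(t)  # same t-statistic as approach 1
```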

Arclength

Given a curve, perhaps in the plane, it's easy to get a sensible lower bound on its length by using the principle that a line segment is the shortest curve connecting its endpoints. Simply divide the original curve into many small pieces, replace them with line segments, and compute the sum of their lengths. Is there a similarly straightforward way of getting an upper bound on its length? Black Carrot (talk) 04:58, 16 January 2009 (UTC)[reply]

Not such a direct way. If your curve u : [a,b] → R² is absolutely continuous you may work directly on the length integral ∫_a^b |u′(t)| dt. Alternatively: if u_n is a sequence of curves of length less than or equal to L converging uniformly to u, then by the lower semicontinuity of length, length(u) ≤ L as well. Additional information would depend on how your curve has been given. For instance, if it is given as a fixed point (i.e. u = T(u)) of some transformation T which is a contraction with respect to the uniform distance, and for some L you prove that length(v) ≤ L implies length(T(v)) ≤ L for all v, then you can conclude length(u) ≤ L. --PMajer (talk) 08:27, 16 January 2009 (UTC)[reply]
I doubt there's anything quite so simple. Here's a suggestion in the case of a parametric plane curve (parametrized by a nice enough function f(t)), but I'm not 100% sure it's correct yet. Break your curve into small pieces with constant concavity (either to the right or to the left - this amounts to saying that the determinant det(f′(t), f″(t)) has constant sign on the relevant interval, or that the angular coordinate of f′(t) varies monotonically there), and such that the tangent vector changes direction by less than 180° in the relevant interval (this only makes sense in an obvious way if f′ ≠ 0, but it should be possible to adapt it to some cases in which f′ is allowed to be 0 at the endpoints). Say the endpoints of one of these pieces are A and B. Draw the tangents to the curve at A and B and let them meet at C. The length of that piece of the curve is probably less than AC + BC - maybe someone can prove this. This method may not work if the concavity changes direction infinitely many times. 67.150.252.232 (talk) 08:44, 16 January 2009 (UTC)[reply]
Sorry, some of my primes and double primes didn't come through above, at least on my browser. Bear in mind that they may be missing. 67.150.252.232 (talk) 08:47, 16 January 2009 (UTC)[reply]
A simple answer is: there's no simple method to do that. The Peano curve and various fractal figures demonstrate that you can fit an arc of any length in an arbitrarily small region (say an open ball) of the space — so testing a curve at any finite set of points is not enough to get an upper bound on its length. You will have to use some global information, as others said above. --CiaPan (talk) 10:57, 16 January 2009 (UTC)[reply]
If it's a continuously differentiable parametric curve, then the arc length is finite. If it's twice differentiable, and the modulus of the second derivative is bounded by M, then you can obtain an upper bound in terms of M for the difference between the arc length of the curve and that of a polygonal line approximating it. In fact, the length of the curve obtained between times t = a and t = b differs from the length of the single line segment approximating it by at most M(b − a)²/2. If the same interval is broken into N equal subintervals, the difference between the arc length of the curve and that of the corresponding polygonal line will be bounded by M(b − a)²/(2N). 67.150.254.73 (talk) 15:25, 16 January 2009 (UTC)[reply]
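A numerical sketch of the resulting two-sided estimate (the example curve, the value of M, and the step count N are illustrative choices; the upper bound used is the M(b − a)²/(2N) estimate stated above):

```python
# Sketch: lower and upper bounds on the length of a smooth parametric curve f(t), a <= t <= b,
# assuming |f''(t)| <= M on [a, b]. Example: a quarter of the unit circle, where M = 1.
import numpy as np

def length_bounds(f, a, b, M, N=1000):
    t = np.linspace(a, b, N + 1)
    pts = np.array([f(ti) for ti in t])
    chords = np.linalg.norm(np.diff(pts, axis=0), axis=1)
    lower = chords.sum()                         # polygonal length: always a lower bound
    upper = lower + M * (b - a) ** 2 / (2 * N)   # correction term from the |f''| <= M estimate
    return lower, upper

f = lambda t: np.array([np.cos(t), np.sin(t)])   # |f''| = 1, so take M = 1
lo, hi = length_bounds(f, 0.0, np.pi / 2, M=1.0)
print(lo, hi, np.pi / 2)  # the true quarter-circle length pi/2 lies between the two bounds
```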

Just an interesting note: Given any two points in the plane, a line segment is certainly the shortest path between the two points, but what about the punctured plane (that is, a plane with the origin deleted)? In this case, there is no shortest path between (1,0) and (-1,0). Although the distance between them is 2, it is intuitively clear that there is no curve of length 2 connecting them. Why the punctured plane, you may ask? One can speak of the sphere and the question as to whether one can find the shortest path between two points on the sphere. Because the sphere with a point deleted (assume the north pole for convenience) can be "unravelled" to form the plane, there is always a shortest path between two points on the "punctured sphere". We can get a nice theory out of this by considering more complex spaces such as the torus or spaces having higher dimensions. In particular, there can be flaws with the method described above when we consider the plane with finitely (or countably) many points deleted. For example, if we delete all points having integer co-ordinates from the plane, the method of joining two points by a line segment and using this to approximate lengths of curves between these two points will still work, but it becomes a little less straightforward: we have to make the partitions very fine. A more complex example is the plane with all points with rational co-ordinates deleted. This space is path connected (any two points can be joined by a continuous, not necessarily differentiable, curve), although this method will most certainly fail. So basically what I am trying to say is that this problem is rather deep for more complex spaces (not necessarily manifolds). Hope I didn't bore you... --PST 21:06, 16 January 2009 (UTC)[reply]

I like the responses so far, but I think it would help if I changed the question. Let's say I'm just asking it for a sufficiently "well-behaved" curve, whatever that might need to mean. An arc of a circle, for instance. Is there a convincing geometric argument for an upper bound on the length of this? The closest I've found is to take the corresponding wedge of the circle, enclose it in a triangle, and argue that the area of one is more than the area of the other. From the formulas for the area and perimeter of a circle, and for the area of a triangle, a bound on the arclength is immediate. The perimeter formula is derived using only the limit of lower bounds, though, so it's cheating a bit. I can get a very loose upper bound by wrapping a string around an actual circle and approximating the perimeter that way, as less than 4 times the diameter for instance, but I don't see any way to make that geometric. I know that any polygon circumscribing a circle has greater perimeter than the circle, but I don't know why that's true. It seems like something's missing. Black Carrot (talk) 21:24, 16 January 2009 (UTC)[reply]

Consider a line segment A, and two curves B and C with the same endpoints as A. Assume also that the shape enclosed by A and B is convex, and that the shape enclosed by A and C is convex, and that the first is contained in the second. Is it always true that C is at least as long as B? Black Carrot (talk) 21:31, 16 January 2009 (UTC)[reply]

Yes. Consider the metric projection from C to B, mapping a point of C to the point of B closest to it. This is a Lipschitz map of constant 1, or 1-Lipschitz for short (it is a general fact that the metric projection onto a closed convex subset of a Hilbert space is 1-Lipschitz). Images of curves under 1-Lipschitz maps have length no greater than the original (this is an easy consequence of the definition of length as the classical total variation). Hence, more generally, C can be any arc projecting onto B; it need not be convex, nor lie on the boundary of a domain. Is it OK? --PMajer (talk) 22:23, 16 January 2009 (UTC)[reply]
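For the circumscribed-polygon question above, the projection argument covers the general case; for a regular n-gon circumscribed about a circle of radius r there is also a direct comparison (a sketch):

```latex
P_n \;=\; 2nr\tan\frac{\pi}{n} \;\ge\; 2nr\cdot\frac{\pi}{n} \;=\; 2\pi r,
\qquad \text{since } \tan\theta \ge \theta \text{ for } 0 \le \theta < \tfrac{\pi}{2}.
```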

Matrix

What is a matrix??? For example a scalar is a magnitude and a vector is a magnitude with a direction but what is a Matrix???----The Successor of Physics 13:46, 16 January 2009 (UTC) —Preceding unsigned comment added by Superwj5 (talkcontribs) [reply]

Well there's the article Matrix (mathematics). If the start of an article like that isn't good enough for someone who knows about vectors then it should be improved. Dmcq (talk) 14:05, 16 January 2009 (UTC)[reply]
Maybe you are wondering about how to visualize a matrix geometrically. One way is to try to visualize the linear map corresponding to the matrix (e.g. considering how the basis vectors are moved around). Or in some cases, an n × m matrix may be considered to be a vector in nm-dimensional space. Aenar (talk) 15:35, 16 January 2009 (UTC)[reply]
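A small numerical illustration of the "where do the basis vectors go" picture (a sketch using numpy; the matrix is an arbitrary example):

```python
# Sketch: a 2x2 matrix viewed as a linear map -- its columns are the images of the basis vectors.
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
e1, e2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])

print(A @ e1)  # [2. 0.]  -- the first column of A
print(A @ e2)  # [1. 3.]  -- the second column of A

# Any vector v = a*e1 + b*e2 is then sent to a*(A e1) + b*(A e2), by linearity.
v = 4 * e1 - 2 * e2
print(A @ v, 4 * (A @ e1) - 2 * (A @ e2))  # both give [6. -6.]
```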

A matrix is by definition an ordered chain of vectors. So if we have an n x m matrix, it can be thought of as m column vectors in Rn (order matters). Note, however, that we may not be dealing with Rn. We might just be dealing with matrices over any field K. This means that entries in the matrix are simply elements of K. Since we can add, subtract, multiply and divide by non-zero elements in a field, we can multiply matrices over a particular field. This is why matrices are generalized in this manner. Note also that matrices may be thought of as linear transformations: every non-singular matrix induces a linear isomorphism. Basically, linear transformations of the plane, for example, simply preserve addition of vectors in the plane (as well as other properties). You may find the article on affine transformation of interest. Smooth maps between smooth manifolds induce linear maps between tangent spaces. These linear maps can be represented as matrices; when the dimensions of the manifolds in question are equal, this matrix is called the Jacobian matrix. In general, matrices are most useful when you deal with vector spaces (this article is really good; take a look). You may also find Hermitian matrix interesting. I can go on all day but I guess I should stop here. --PST 16:30, 16 January 2009 (UTC)[reply]

In fact, you can have matrices over any ring, not just a field. Although, if it isn't a field, a non-zero determinant is no longer sufficient for a matrix to be invertible. --Tango (talk) 17:06, 16 January 2009 (UTC)[reply]
That is exactly why matrices over fields are more common (but I agree that you can have matrices over any ring). --PST 18:09, 16 January 2009 (UTC)[reply]
Superwj5!! From your query I understand you are seeking a physical interpretation of a matrix, in analogy to what you are saying about scalars and vectors. But first notice that, as the other answers explained, mathematically a matrix is simply a finite family of numbers (or other objects) with two indices, so that you can put them nicely into a rectangle: it's nothing more than that! In the same way a vector is just a finite list, which you can write in a row or in a column. Warning: especially in abstract math, "vector" is also used in a slightly different sense, as "element of a vector space". The reason is that Rn itself is a vector space, and any real vector space V of dimension n can be identified with Rn once you choose a basis in V, even if the elements of V are not really lists of numbers. Anyway, the immediate answer to your query is very easy as you see! Then you are asking: which objects are represented by means of a matrix, in the same way that an element of an abstract vector space is represented by means of a list of numbers? A lot, both in maths and in physics!! A linear map, a bilinear or a quadratic form, more generally any 2-tensor, are all examples of objects that you can describe quite well using a matrix! And each of them has of course a wide use in all branches of physics. --PMajer (talk) 12:12, 17 January 2009 (UTC)[reply]

Well, no, it's not an ordered chain of vectors, since, for one thing, you have a choice to look at row vectors or column vectors, and neither has any privileged status that the other lacks.

But it's not just a doubly indexed array of scalars either: the way you multiply matrices is essential (just as the difference between a complex number and a mere ordered pair of real numbers is the peculiar way the former get multiplied). Michael Hardy (talk) 00:20, 18 January 2009 (UTC)[reply]

Let me add that any ordered chain of column vectors (having the same length) uniquely determines an ordered chain of row vectors so I am not convinced that I am incorrect. --PST 08:31, 18 January 2009 (UTC)[reply]
Well, you know, not everybody agrees on a definition, but I'll let you keep yours... which I find quite bizarre though: the name changes if I put a structure on it? With this philosophy an element of R needs different names depending on whether I'm seeing R as an additive group, ordered group, field, Lie group, Banach space, measure space... --PMajer (talk) 01:08, 18 January 2009 (UTC)[reply]
That's an entirely correct philosophy. What matters is what operations/relations we are considering, not the essential nature of the objects involved. For example, if we consider R as an additive group, then no two nonzero elements are distinguishable, while all elements are distinguishable in R-as-a-field. Algebraist 01:14, 18 January 2009 (UTC)[reply]
R is shorthand for many things including the additive group (R, +) and the ring/field (R, +, ×) (some people like to specify the identities as well, but I don't see the need). That said, it's the whole object that is different in each case; the elements are the same - they are just members of the underlying set, which is the same. --Tango (talk) 02:01, 18 January 2009 (UTC)[reply]
Correct philosophy, sure, and bizarre. Of course, the algebraic structure of the algebra of matrices (n×n) is essential to its nature as an algebra: which is isomorphic to the corresponding algebra of endomorphisms: which is not an algebra of matrices. I would conclude that the structure of algebra is not characteristic of the nature of a matrix. And the algebra of matrices without multiplication is no longer an algebra, nor a ring. So what? It's just a vector space of matrices, maybe. And without its vector space operations it is just a set of matrices. And a poor matrix, isolated from the gang, is just a matrix: a singleton of a matrix, which is singleton-isomorphic to any other poor singleton. Still, if I meet her I can identify her as a matrix, just from her shape, with no need of embarrassing questions (have you ever been with another matrix, and what did you do together, just sums or did you also multiply). Of course my definition of matrices denotes exactly the same set, so this is really a theoretical distinction. It's clear that matrices are relevant because they have an algebraic structure, but the algebraic structure, in order to define what a single matrix is, is neither sufficient nor necessary; it's totally irrelevant. The example of a complex number is different. The complex field is defined up to isomorphism as the algebraic closure of R. We have several representations of it, and we just fix one, which one being not relevant: what Algebraist says here applies perfectly. Thus a complex number is essentially just an element of C; it is not necessarily a pair of real numbers, nor a real polynomial mod (1+x²), nor a special conformal real matrix of order 2, etc., because these are just particular representations. --PMajer (talk) 03:22, 18 January 2009 (UTC)[reply]
A geometrical interpretation of a matrix is given by the SVD: A matrix is a rotation, followed by a scaling, followed by a rotation. In other words, a matrix is a way of writing down how you change the direction and magnitude of a vector. JackSchmidt (talk) 03:29, 18 January 2009 (UTC)[reply]
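A short numpy check of that decomposition (the matrix is an arbitrary example; strictly speaking, the two orthogonal factors for a general real matrix may be rotations or reflections):

```python
# Sketch: the SVD writes A as (orthogonal) * (diagonal scaling) * (orthogonal).
import numpy as np

A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
U, s, Vt = np.linalg.svd(A)

print(U @ np.diag(s) @ Vt)   # reconstructs A (up to rounding)
print(U @ U.T, Vt @ Vt.T)    # both approximately the identity: the factors are orthogonal
print(s)                     # the singular values: scaling factors along the principal axes
```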

Basic math elements

In physics the basic elements are (depending on the perspective) mass, length, time, electric charge, and temperature. Everything can be derived from these elements. What are the basic math elements? Series, unit, function?--Mr.K. (talk) 18:45, 16 January 2009 (UTC)[reply]

I'm not sure there really is anything equivalent. The closest you get would be sets - there are axiomatic approaches to mathematics in which everything is built up from sets, although those approaches are rarely used in practice (e.g. if you want to use relations you just define them in terms of how they behave and don't worry about how they can be constructed from simpler objects, even though they can be defined in terms of sets if you really want to). --Tango (talk) 18:52, 16 January 2009 (UTC)[reply]
See also measure (mathematics). Physics has the notion of units because physics is often involved with measurements (especially experimental physics). On the other hand, mathematics is not, so technically speaking, there is no analogous concept in mathematics. --PST 19:02, 16 January 2009 (UTC)[reply]
Have you tried reading the Mathematics article? It might answer some of your question. Is there a bit there which seems closest to what you are thinking of? Dmcq (talk) 09:22, 17 January 2009 (UTC)[reply]

Yes, and thanks for the answers so far. Restricting the question to geometry: could we perhaps define all elements in terms of points, curves and planes? Mr.K. (talk) 14:53, 21 January 2009 (UTC)[reply]

Measurable sets

I have heard it said that there is no way to assign a measure to the real line that allows every possible subset to be measurable. Is there some intuitive explanation for this (or a proof)? I'm somewhat unclear on whether this is a property of the real line, or some more general property. Any insight would be helpful. Thanks, 134.114.20.216 (talk) 19:16, 16 January 2009 (UTC)[reply]

Our article Vitali set gives a construction of such a set, with proof, but I don't think it's very intuitive. Unfortunately, I don't think it's possible to give a much more intuitive explanation. Hopefully someone here will prove me wrong. Algebraist 19:20, 16 January 2009 (UTC)[reply]
Our articles Non-measurable set and Banach–Tarski paradox may also be of interest. Algebraist 19:23, 16 January 2009 (UTC)[reply]
The outer measure is a certain set function that is defined on all subsets of a given set X. For instance, the outer measure on R is defined on all subsets of R, and it equals the Lebesgue measure on measurable sets. However, the outer measure, when defined on all subsets of R, does not satisfy the countable additivity condition: that is, the measure of a countable disjoint union need not be the sum of the measures. We must restrict the domain of the outer measure so that on that domain, any countable disjoint subcollection will satisfy the countable additivity condition. The new domain is simply the collection of all measurable sets. This is probably the most intuitive way to understand why not all sets are measurable. The construction of the Vitali set shows where the outer measure fails to satisfy the countable additivity condition (it gives the countable collection). Hope this helps. --PST 19:56, 16 January 2009 (UTC)[reply]
I would add that the existence of Lebesgue non-measurable sets, rather than being a sort of defect of Lebesgue theory, is a measure of the power of the axioms of set theory (ZFC). That is, these axioms allow highly non-constructive existence proofs like this one. Of course, about the Vitali set one can say little more than that it "exists". One can build a Vitali set V as a direct summand of Q in R, seen as a Q-vector space, that is, R = Q ⊕ V: any such rational vector subspace is necessarily not measurable. --PMajer (talk) 22:09, 16 January 2009 (UTC)[reply]
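For the Vitali set V ⊆ [0, 1] of the linked article, the contradiction can be written out in a few lines (a sketch, assuming a measure μ on all subsets of R that is countably additive, translation-invariant, and assigns intervals their length):

```latex
\text{Enumerate } \mathbb{Q}\cap[-1,1] \text{ as } q_1, q_2, \dots;\ \text{the translates } V+q_n \text{ are pairwise disjoint and}
\quad
[0,1] \;\subseteq\; \bigcup_{n=1}^{\infty}(V+q_n) \;\subseteq\; [-1,2],
\quad\text{so}\quad
1 \;\le\; \sum_{n=1}^{\infty}\mu(V) \;\le\; 3.
```

This is impossible whether μ(V) = 0 (the sum is 0) or μ(V) > 0 (the sum is infinite).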


Responding to the OP: It's not quite true that there is no way to assign a measure on the reals that measures every subset. Here's a trivial counterexample: For a set of reals A, let μ(A) equal 1 if 2.713 is an element of A, and 0 otherwise.
What you can't do is assign such a measure in a translation-invariant way; that is, so that the measure of a set will not change if you shift every element of the set by the same amount. However it might be possible to extend Lebesgue measure (which is translation-invariant) to a measure that measures every set of reals (though, of course, you lose the translation invariance when you pass to the extension). Specifically this is possible if there is a real-valued measurable cardinal less than or equal to the cardinality of the continuum, a proposition which is independent of ZFC. --Trovatore (talk) 08:50, 18 January 2009 (UTC)[reply]
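To check that such a point mass really is a measure on every subset: countable additivity holds because 2.713 can belong to at most one set of a pairwise disjoint family, so

```latex
\mu\Big(\bigcup_{n} A_n\Big) \;=\; \sum_{n}\mu(A_n)
\qquad\text{for pairwise disjoint } A_1, A_2, \dots \subseteq \mathbb{R}.
```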
Good remark; in fact you can define a translation-invariant measure on all subsets of the reals... there are plenty of such measures: can you find some? --PMajer (talk) 12:16, 18 January 2009 (UTC)[reply]
(For instance, the zero measure is one; μ(A) = +∞ for all nonempty A is another...) --PMajer (talk) 20:46, 19 January 2009 (UTC)[reply]
Note that there is only one Haar measure on the topological group (R, +, T) where T is the standard topology and + is usual addition on the real numbers. This is precisely the Lebesgue measure. --PST 13:41, 18 January 2009 (UTC)[reply]
Up to scaling. Algebraist 20:26, 18 January 2009 (UTC)[reply]
Ooops! Most certainly yes. --PST 21:20, 18 January 2009 (UTC)[reply]

check

I've just spent a bunch of time trying to solve this, but I can't figure out any solutions except x=x or y=y, etc. Just for clarification, it's not "homework." I saw this in a friend's textbook and tried to solve it, but I couldn't, and now it's going to annoy me until I solve it.

A man cashes a check for x dollars and y cents, but the cashier accidentally gives him y dollars and x cents. After purchasing a newspaper for k cents, the remaining amount is twice as much as the amount of the original check. If k=50, what are x and y? If k is 75, why is the problem unsolvable?

I worked it out pretty far, but I ended up with   as the original check amount,   as the problem, and   as the only thing that they didn't directly give you that I figured out myself. Can someone walk me through this problem? flaminglawyerc 22:14, 16 January 2009 (UTC)[reply]

Well, first of all, you should decide if you want to calculate the money in dollars or in cents. In either case, x + y cannot be the intended amount: 2 dollars and 5 cents are neither 7 dollars nor 7 cents. And if you are going to assume that cents are given by a fraction, it will be harder to notice the unsaid condition that both x and y are natural numbers, and you are going to have a Diophantine equation (so you can have one equation, two variables and one solution). --Martynas Patasius (talk) 23:05, 16 January 2009 (UTC)[reply]
And, by the way, there will probably be even more unsaid conditions. --Martynas Patasius (talk) 23:49, 16 January 2009 (UTC)[reply]
If the check were made out for x dollars and y cents (100x + y cents) and was paid as y dollars and x cents (100y + x cents), then the other conditions give us (100y + x) - k = 2(100x + y). This simplifies to 98y - 199x = k.
Now when the newspaper was purchased for k cents, you either had sufficient loose change, or you had to break one (or more) dollars. Let i be the number of dollars broken (where i is a non-negative integer). Breaking up the equation into dollars and cents portions, you have 100y - 200x - 100i = 0 and 100i - 2y + x = k. The first simplifies to y - 2x - i = 0, or y = 2x + i. Substituting this into the second gives 100i - 2(2x + i) + x = k, that is, 98i - 3x = k. Solving for x gives x = (98i - k) / 3.
Now, for k = 50, x will be a positive integer for i in {1, 4, 7, ...}. For i = 1, x = 16, and y = 33. For larger values of i, the calculated y will be outside the implied range of 0 to 99 cents. So we have one solution – $16.33.
However, for k = 75, x will be a positive integer for i in {3, 6, 9, ...}, but none of these gives us y between 0 and 99. -- 07:56, 17 January 2009 (UTC) —Preceding unsigned comment added by Tcncv (talkcontribs)
I ran a simple program to find k for all two-digit x and y. Interestingly, the smallest k divisible by 3 is 150. —Tamfang (talk) 06:30, 18 January 2009 (UTC)[reply]
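A sketch of the kind of brute-force search described above (the two-digit ranges for x and y and the requirement k > 0 are assumptions about the intended constraints; with them, the search reproduces the $16.33 solution for k = 50 and gives 150 as the smallest k divisible by 3):

```python
# Sketch: brute-force search over two-digit x, y using the relation 98*y - 199*x = k
# derived above (k > 0 assumed, i.e. the newspaper costs something).
solutions = [(98 * y - 199 * x, x, y)
             for x in range(10, 100)
             for y in range(10, 100)
             if 98 * y - 199 * x > 0]

print([(x, y) for k, x, y in solutions if k == 50])      # -> [(16, 33)], the $16.33 check
print([(x, y) for k, x, y in solutions if k == 75])      # -> [], no solution for k = 75
print(min(k for k, x, y in solutions if k % 3 == 0))     # smallest k divisible by 3
```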