Wikipedia:Reference desk/Archives/Mathematics/2018 March 12

Mathematics desk
Welcome to the Wikipedia Mathematics Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


March 12


Rarity of coincidence of cycles


A is foo for the central 19% of every 115.88-day cycle.

B is foo for the central 7% of every 583.9-day cycle.

C is foo for the central 9% of every 779.9-day cycle.

D is foo for the central 30% of every 398.9-day cycle.

E is foo for the central 37% of every 378.1-day cycle.

F is foo for the central 41% of every 369.7-day cycle.

G is foo for the central 43% of every 367.5-day cycle.

H is foo for the central 44% of every 366.7-day cycle.

On average, how long is the longest uninterrupted stretch during which at least one of them is foo, in a random century? Is the median about the same? Sagittarian Milky Way (talk) 05:20, 12 March 2018 (UTC)

This question cannot be answered without a definition of "average" or "median", unless you refer to the distribution of the "longest gap" random variable over a century chosen uniformly at random.--Jasper Deng (talk) 09:36, 12 March 2018 (UTC)
Correct, I'm referring to randomly chosen windows of 100 years and the distribution of their "longest gap" variable. Does it seem likely to change the answer much that the actual numbers are irrational and oscillate? (E.g. Mars goes from ~7%/~812 days to ~11%/~764 days and back every ~7+7/18 cycles (~15.7 years), thus only approaching 812 and 764 most times; this rational approximation takes 284 years to repeat. Mercury is twice as elliptical as Mars, Pluto is the worst, and the rest are at most half as elliptical as Mars; Venus is 14 times less elliptical than Mars.) Sagittarian Milky Way (talk) 19:07, 12 March 2018 (UTC)
Also, the original question could have been misinterpreted, so I clarified it. Sagittarian Milky Way (talk) 20:00, 12 March 2018 (UTC)
For those who are wondering about the numbers, these are respectively the synodic periods of Mercury, Venus, Mars, Jupiter, Saturn, Uranus, Neptune and Pluto. 86.152.81.89 (talk) 13:16, 12 March 2018 (UTC)
...and foo = apparent retrograde motion --catslash (talk) 18:23, 12 March 2018 (UTC)
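
For what it's worth, the distribution is easy to estimate numerically. Below is a rough Monte Carlo sketch (my own, with several simplifications: daily sampling, fixed mean periods and fractions, and an independent uniform phase per cycle as a stand-in for a uniformly chosen century; it ignores the oscillations discussed above):

```python
import random
import statistics

# Synodic periods (days) and central "foo" fractions quoted in the question.
PERIODS   = [115.88, 583.9, 779.9, 398.9, 378.1, 369.7, 367.5, 366.7]
FRACTIONS = [0.19, 0.07, 0.09, 0.30, 0.37, 0.41, 0.43, 0.44]
CENTURY = 36525  # days in a Julian century

def longest_stretch(phases):
    """Longest run of consecutive days on which at least one body is foo."""
    best = cur = 0
    for day in range(CENTURY):
        foo = any(abs(((day / p + ph) % 1.0) - 0.5) < f / 2
                  for p, f, ph in zip(PERIODS, FRACTIONS, phases))
        cur = cur + 1 if foo else 0
        best = max(best, cur)
    return best

# Each trial draws independent uniform phases as a stand-in for a random century.
runs = [longest_stretch([random.random() for _ in PERIODS]) for _ in range(100)]
print("mean:", statistics.mean(runs), "median:", statistics.median(runs))
```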

Finding the multiplier n for 2q·5p·n=10^r


I was told that for every number of the form 2q·5p (p, q are integers) there exists an integer n so that 2q·5p·n equals 10^r, where r is an integer. My question is: how do I calculate r? Everything I tried always leaves me with a single equation in two variables. — Preceding unsigned comment added by אילן שמעוני (talk · contribs) 06:06, 12 March 2018 (UTC) For example, 20 = 2·2·5 and 20·5 = 10^2, so n = 5. — Preceding unsigned comment added by אילן שמעוני (talk · contribs) 06:09, 12 March 2018 (UTC)

From what I can tell, this is false. If p and q have prime factors other than 2 and 5, their product with any other (nonzero) integer cannot be an integer power of 10. If you meant exponentiation, i.e. 2^q·5^p, then there are infinitely many such n.--Jasper Deng (talk) 06:10, 12 March 2018 (UTC)
2q·5p = 10·pq, hence 2q·5p·n = 10·pqn.
For 2q·5p·n to equal a power of ten, 10·pqn = 10^r, you must have all of p, q and n of the form 2^i·5^j, such that the sum of all three i exponents equals the sum of all three j exponents.
Otherwise, for example if p = 3 and q = 1, no such natural (non-zero) n exists which makes 2q·5p·n a power of ten. --CiaPan (talk) 07:56, 12 March 2018 (UTC)
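
For concreteness, a quick sketch of that criterion (my own code; the function names are illustrative), reading the expression literally as 2q·5p·n = 10·p·q·n:

```python
def split_2_5(m):
    """Return (i, j, rest) with m = 2**i * 5**j * rest, rest coprime to 10 (m >= 1)."""
    i = j = 0
    while m % 2 == 0:
        m //= 2
        i += 1
    while m % 5 == 0:
        m //= 5
        j += 1
    return i, j, m

def makes_power_of_ten(p, q, n):
    """2q·5p·n = 10·p·q·n is a power of ten exactly when p, q and n are all of
    the form 2**i * 5**j and the i exponents and j exponents have equal sums."""
    i_sum = j_sum = 0
    rest = 1
    for factor in (p, q, n):
        i, j, r = split_2_5(factor)
        i_sum, j_sum, rest = i_sum + i, j_sum + j, rest * r
    return rest == 1 and i_sum == j_sum

print(makes_power_of_ten(2, 1, 5))   # True:  10*2*1*5 = 100
print(makes_power_of_ten(3, 1, 7))   # False: the factor 21 can never be cancelled
```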
Both of you are absolutely right. I'll rephrase the question as it should be:
Find an integer n so that for each p, q: 2^p·5^q·n = 10^r for some integer r. אילן שמעוני (talk) 08:56, 12 March 2018 (UTC)
There's no value of n that will work for every p and q. (I took your "for each" as a universal quantification over the pairs of natural numbers (p, q)). If your question is to find such an n for a given p, q, then there are infinitely many solutions; multiplying any value of n by 10 produces another.--Jasper Deng (talk) 09:10, 12 March 2018 (UTC)
So the statement is false? Can a counterexample be given, with specific p and q? אילן שמעוני (talk) 09:14, 12 March 2018 (UTC)
btw, the requirement is to find a single instance of n that complies. It is obvious even to me that any multiple of a power of 10 will comply as well. אילן שמעוני (talk) 09:17, 12 March 2018 (UTC)
(edit conflict) And, for any specific p and q, there are still infinitely many pairs of n and r. For example, if p = 1 and q = 2 then we have the equation 2·5^2·n = 10^r, i.e. 50n = 10^r, allowing solutions like n = 2, r = 2 or n = 20, r = 3, or in fact any solution of the form n = 2·10^(r-2) for integer r ≥ 2. EdChem (talk) 09:24, 12 March 2018 (UTC)
(edit conflict) For any given n = 2^i·5^j where i > 0 and j > 0, the pair (p, q) = (j+1, i) furnishes a counterexample because the resulting expression 2^(p+i)·5^(q+j) has too many powers of 2. If both i and j are zero then any (p, q) with p not equal to q suffices; if exactly one of j or i is zero then (respectively) (p, q) = (1, 0) or (0, 1) are counterexamples.
For a fixed (p, q), the infinitude of solutions is reflected in how you got an underdetermined system. We can get a unique solution by requiring our value of r to be as small as possible, in which case n is easily seen to be 1 if p and q are equal, 5^(p-q) if p > q, or 2^(q-p) if q > p.--Jasper Deng (talk) 09:25, 12 March 2018 (UTC)
[ec] Logical quantifiers are not commutative. "For each x there exists a y" is not the same as "There exists a y such that for every x".
In our case, your statement "there is an integer n so that for each p, q: 2^p·5^q·n = 10^r" is false.
Meanwhile, the statement "for each p, q there is an integer n so that 2^p·5^q·n = 10^r" is true. You can take n = 2^(max(p,q)-p)·5^(max(p,q)-q).
And no, you can't take n to be a generic multiple of a power of 10, since then you would get a multiple of a power of 10, not a power of ten. -- Meni Rosenfeld (talk) 09:29, 12 March 2018 (UTC)
Thanks! How did you deduce n = 2^(max(p,q)-p)·5^(max(p,q)-q)? אילן שמעוני (talk) 12:07, 12 March 2018 (UTC)
You want to turn 2^p·5^q into the form 10^r. To do this, you need to multiply by either a power of 2 or a power of 5, until the exponents are equal. If p > q, then you need to multiply by 5 (p - q) times to get your result, and if q > p, then you multiply by 2 (q - p) times. This then gives you a solution of either n = 5^(p-q) or n = 2^(q-p). n = 2^(max(p,q)-p)·5^(max(p,q)-q) puts those two cases together into one equation. IffyChat -- 12:34, 12 March 2018 (UTC)
Thanks! אילן שמעוני (talk) 14:14, 12 March 2018 (UTC)
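
For concreteness, a minimal sketch of Meni's formula (the function name is mine); it is also Jasper's smallest-possible-r solution, since the resulting power 10^max(p,q) cannot be made any smaller:

```python
def multiplier(p, q):
    """Smallest n making 2**p * 5**q * n a power of ten:
    n = 2**(m - p) * 5**(m - q) with m = max(p, q), giving 10**m."""
    m = max(p, q)
    return 2 ** (m - p) * 5 ** (m - q)

# Spot-check: the product is always exactly 10**max(p, q).
for p in range(6):
    for q in range(6):
        assert 2 ** p * 5 ** q * multiplier(p, q) == 10 ** max(p, q)

print(multiplier(2, 1))  # 5, matching the example 20 * 5 = 10**2 above
```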

Efficient algorithm for the knapsack problem with large capacity


Consider the unbounded knapsack problem with capacity C and k objects (given a set of k objects with weights w_i and values v_i, find the maximal value of the sum of n_i·v_i over nonnegative integers n_i, subject to the constraint that the sum of n_i·w_i is at most C). The standard dynamic programming solution, described in our article, is O(k·C) in time and O(C) in space. Notice that this is a pseudo-polynomial time complexity, since C is exponential in the number of digits of C.
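
For reference, a minimal sketch of that standard DP (my own code, following the recurrence described above):

```python
def unbounded_knapsack(C, items):
    """Unbounded knapsack by the standard DP: O(k*C) time, O(C) space.
    items is a list of (weight, value) pairs; returns the maximal total value."""
    best = [0] * (C + 1)  # best[c] = maximal value achievable with capacity c
    for c in range(1, C + 1):
        for w, v in items:
            if w <= c and best[c - w] + v > best[c]:
                best[c] = best[c - w] + v
    return best[C]

print(unbounded_knapsack(10, [(3, 5), (4, 7)]))  # 17, from weights 3 + 3 + 4
```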

The question is to evaluate the following argument, which I just made up. It is extremely simple and I cannot believe it has not been made before, yet I cannot find anything on the internet, so either I failed at searching or the argument is faulty. If it is indeed correct, there are a few holes where I would appreciate an exact formula.

Assume without loss of generality that the k objects are sorted in nondecreasing order of value-per-weight (vpw), defined by r_i = v_i/w_i. If C is "very large", the asymptotic solution is to fill "most" of the knapsack with object #k only (the one with the highest vpw), and fill the rest as in the classic knapsack problem. More precisely, I have a feeling that filling a knapsack of any capacity C' larger than some threshold M with only objects of type k always yields more value than filling it with any combination of the first k-1 objects: basically, assume that the k-filling falls on the extremely unlucky case where you lack a small bit of space to fit one more object in, whereas the (k-1)-filling fills the whole knapsack; even then, at some point the k-filling wins because of its better bulk value.

This yields an algorithm polynomial in the number of digits of C: fill C - M of the knapsack (rounded up or down to a multiple of w_k) purely with the k-th object (which requires a few multiplications, divisions, etc. on C, polynomial in the number of digits), then fill the rest by the usual knapsack algorithm, in O(k·M). Now of course that involves M, which is absolutely not guaranteed to be a smaller number than C (and is even infinite if two objects have the same maximal vpw), but in some cases it can be. In particular, I am currently looking at a problem where C > 10^10 while the v_i, w_i are integers smaller than 1000 (which puts a limit M < 1000: different vpw must vary by at least 1/1000, and there are tricks to eliminate equality cases in my particular problem), so it looks totally worth it.
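
Here is a sketch of that scheme. The thread does not pin down M, so the code below uses one natural choice, M = ceil(w_k·r_k/(r_k - r_(k-1))), obtained by comparing the worst-case k-filling value r_k·(C' - w_k) against the best conceivable (k-1)-filling value r_(k-1)·C'; as noted above, it is infinite when the two top vpw coincide. The code reuses unbounded_knapsack from the earlier sketch.

```python
import math

def bulk_knapsack(C, items):
    """Fill the capacity beyond the threshold M with the best-vpw object only,
    then solve the remainder by the usual DP.  Assumes at least two objects
    and a strict vpw gap at the top (otherwise M would be infinite)."""
    items = sorted(items, key=lambda wv: wv[1] / wv[0])  # nondecreasing vpw
    w_k, v_k = items[-1]
    r_k = v_k / w_k
    r_k1 = items[-2][1] / items[-2][0]                   # second-best vpw
    M = math.ceil(w_k * r_k / (r_k - r_k1))              # conjectured threshold
    if C <= M:
        return unbounded_knapsack(C, items)              # plain DP suffices
    copies = (C - M) // w_k                              # bulk copies of object k
    return copies * v_k + unbounded_knapsack(C - copies * w_k, items)
```

The DP then only ever sees a capacity below M + w_k, so the total work is roughly O(k·(M + w_k)) plus ordinary arithmetic on C.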

Actually, even when there are equalities or close values at the top of the vpw list, I have a feeling that there could still be improvements over the original complexity: if you know roughly where to split between the bulk case and the fine case, with the bulk case filled only with objects whose vpw is near the maximum, it may turn out that the weights of such objects are so large that the number of possible weight sums (hence states in the dynamic programming) for the bulk subproblem is much less than C. For instance, if the set of objects is (weight, value) = (1, 0.5), (10000, 10000), (10001, 10001) and the capacity is 10^10, you can have a program in about 10^5 time and memory rather than 10^10: first fit all combinations of the two largest objects that leave no room for any more large objects (there are about 10^5 such sums, which can be found in about 10^5 operations), then add the fine grain of the small object (again 10^5 operations and space). This example works well because the weights are about the square root of the capacity, so the bulk and fine parts have about the same complexity. TigraanClick here to contact me 23:20, 12 March 2018 (UTC)

Asymptotically, the knapsack problem is indeed easy: for every epsilon, for all sufficiently large C, the stupid "use only the best-vpw object" procedure gets within a factor (1 - epsilon) of the truth. And if you know the exact optimal solutions for some bounded number of cases then there will be periodicity afterwards (by some version of the argument you gave), so you can give an exact version. But "knowing exact optimal solutions for some bounded number of cases" is where the computational complexity of the knapsack problem actually lies. --JBL (talk) 15:42, 14 March 2018 (UTC)