Wikipedia:Reference desk/Archives/Mathematics/2020 May 8



May 8

Secretary problem variations with costs

I was reading up on the secretary problem and thinking about how, in real life, there are costs (time, space, travel, etc.) incurred with each additional interview, which led me to the following variations of the problem: Consider a machine that has a cost each time it is run. Each time this machine is run, it generates a random integer between 1 and a known maximum M, inclusive, and then asks the user whether the user wants to accept that random value. If the user does not accept the value, then nothing happens. However, if the user accepts the value, then that machine gives that random integer as a payout and then self-destructs. What is the optimal strategy to maximize payout...:

(1) ...if the cost is a constant C each time the machine is run?

(2) ...if the cost is linearly increasing, starting at $1? That is, the machine costs $1 the first time it is run, $2 the second time it is run, $3 the third time it is run, and so forth.

(3) ...if the cost is a general function c(t) of the number of times t that the machine is run? (The previous two questions would then just be the c(t) = C and c(t) = t cases of this problem.)

SeekingAnswers (reply) 20:24, 8 May 2020 (UTC)

I assume that you mean, maximize the expected value of payout minus cumulative cost. Any strategy for maximizing that will be an optimal strategy. For case (1) I got a result which I need to verify, but that will have to wait till another day. Obviously, if C ≥ M, the player should always accept in the very first round, regardless of the number drawn. They cannot gain but only lose by playing further.  --Lambiam 23:21, 8 May 2020 (UTC)
Ah, yes, I meant maximize payout less the costs. Also, to clarify, one is allowed to not play. So for case (1), if C ≥ M, then not playing at all would be optimal in that case. —SeekingAnswers (reply) 02:45, 9 May 2020 (UTC)
In considering this, it is easy to succumb to the sunk cost fallacy, in which the cost that has already been incurred is weighed in decisions about future actions. This fallacy can work in two directions: a suboptimal decision to stay the course because otherwise the past cost will have been in vain, or a suboptimal decision to stop early because the cost incurred already exceeds possible future gain, although the expected value of future gain is positive so that at least some of the prior cost can be recouped. (I model a loss as a negative gain.) The decision whether to accept or continue should disregard the cost of earlier rounds. This applies to all versions of the game.
A strategy should tell the player at each round whether to accept the payout p offered by the machine, or to reject it and continue. If payout p is acceptable, then so is obviously any payout p′ with p′ > p. Let A stand for the lowest acceptable payout. So the strategy will be: accept when p ≥ A, otherwise reject and continue.
In version (1), the cost for each round is constant, so the value of A is the same for each round, only depending on the values of M and C. Note that in versions with a constant maximal payout, including this version, A > M does not make sense – here it would mean that the player always keeps playing, only incurring cost, never a payout. So we know that A ≤ M. We also assume that C < M; otherwise we already know that A = 1. Let G(A) stand for the expected gain, given the A-based strategy. If the machine offers a payout p such that p < A, the player takes the loss −C of this round but continues to gain G(A) in future rounds. The probability of rejection (assuming the machine is fair) equals (A−1)/M, the fraction of payouts to be rejected. Then with probability (M−A+1)/M the payout offered will be acceptable. Each of the values A, A+1, …, M being equally likely, the expected value of an accepted payout is then (A+M)/2, from which C is to be subtracted if we want to compute its contribution to G(A). Combining this, we have:
  G(A) = \frac{A-1}{M}\bigl(G(A) - C\bigr) + \frac{M-A+1}{M}\left(\frac{A+M}{2} - C\right)
Solving this equation for G(A) results in:
  G(A) = \frac{A+M}{2} - \frac{MC}{M-A+1}
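As a quick numerical check of this closed form (a sketch only; the helper names and the parameter choices M = 10, C = 2, A = 5 below are just for illustration), it can be compared against a direct simulation of the threshold strategy in Python:

import random

def expected_gain(M, C, A):
    # closed form derived above: G(A) = (A + M)/2 - M*C/(M - A + 1)
    return (A + M) / 2 - M * C / (M - A + 1)

def simulate(M, C, A, trials=200_000):
    # pay C per run and accept the first draw that is at least A;
    # return the average net gain over many independent plays
    total = 0.0
    for _ in range(trials):
        net = 0.0
        while True:
            net -= C
            draw = random.randint(1, M)   # uniform on 1..M
            if draw >= A:
                net += draw
                break
        total += net
    return total / trials

M, C, A = 10, 2, 5                 # illustrative values only
print(expected_gain(M, C, A))      # exact: 4.1666...
print(simulate(M, C, A))           # Monte Carlo estimate, should be close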
We need to determine what value of A maximizes G(A). If A were a continuous quantity, we could just solve dG(A)/dA = 0 for A, picking the appropriate root. In this discrete case, we reason as follows. If p ≥ A but p < G(p+1), then p is unacceptable, since the player stands to gain more by using the higher threshold p+1 instead. So the acceptable payouts are characterized by p ≥ G(p+1), or, equivalently, by the difference p − G(p+1) being nonnegative. We need to find the least value of p satisfying the inequation. After simplification, the numerator of p − G(p+1) is
  2MC - x(x+1)
where
  x = M - p
The difference is nonnegative when x(x+1) ≤ 2MC, that is, when p ≥ M + (1 − √(8MC+1))/2, so, since p is a whole number, we find that the optimal strategy is given by the least integer A in the range from 1 to M satisfying this inequation, which is:
  A = \left\lceil M + \tfrac{1}{2}\bigl(1 - \sqrt{8MC+1}\bigr) \right\rceil
where ⌈·⌉ denotes the ceiling function. (When M + (1 − √(8MC+1))/2 happens to be a whole number, it does not matter whether we choose A to be equal to this number, as in the formula with the ceiling function, or its successor; both result in the same optimal value for G(A). The lower choice has the advantage, only expressed implicitly in the mathematical model, that the player can go home sooner.) If the formula for A results in a value less than 1, use 1 instead.  --Lambiam 17:59, 9 May 2020 (UTC)
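The threshold formula can likewise be checked against a brute-force search over all thresholds (again just a sketch; the test values of M and C below are arbitrary):

from math import ceil, sqrt

def gain(M, C, A):
    # expected gain of the threshold-A strategy, from the formula above
    return (A + M) / 2 - M * C / (M - A + 1)

def threshold_formula(M, C):
    # A = ceil(M + (1 - sqrt(8MC + 1))/2), with a lower bound of 1
    return max(1, ceil(M + (1 - sqrt(8 * M * C + 1)) / 2))

def threshold_bruteforce(M, C):
    # try every threshold from 1 to M and keep the best one
    return max(range(1, M + 1), key=lambda A: gain(M, C, A))

for M in (5, 10, 20):
    for C in (1, 2, 3, 7):
        a, b = threshold_formula(M, C), threshold_bruteforce(M, C)
        # in the whole-number tie case the two thresholds may differ by one,
        # but the resulting expected gains must coincide
        assert abs(gain(M, C, a) - gain(M, C, b)) < 1e-9, (M, C, a, b)
print("formula agrees with brute force on these test values")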
For variant (2), linearly increasing cost, both the least acceptable payout in a round and the corresponding expected gain function depend on the value c of the cost for that round. We incorporate this into the model by adding subscripts c to A and G, so the strategy in the round with cost c is to accept when the payout is at least A_c. We abbreviate G_c(A_c), the expected gain under the optimal strategy, by G_c. As before, when c ≥ M, we have A_c = 1, so, in particular, A_M = 1, and G_M = (M+1)/2 − M. For c < M, we have, using the same line of reasoning as before,
  G_c(A) = \frac{A-1}{M}\bigl(G_{c+1} - c\bigr) + \frac{M-A+1}{M}\left(\frac{A+M}{2} - c\right)
Then the numerator of G_c(A+1) − G_c(A) is
  G_{c+1} - A
The difference is nonnegative when A ≤ G_{c+1}, so the least acceptable payout is now given by:
  A_c = \left\lceil G_{c+1} \right\rceil
with a lower bound of 1, as before, and
  G_c = \frac{A_c-1}{M}\bigl(G_{c+1} - c\bigr) + \frac{M-A_c+1}{M}\left(\frac{A_c+M}{2} - c\right)
This allows us to calculate A_c and G_c backwards for c = M−1, M−2, …, 1. Because of the ceiling function, there is no pretty closed formula. Here are the computed values for M = 10:
   c  A_c    G_c
  10  1  -4.500
   9  1  -3.500
   8  1  -2.500
   7  1  -1.500
   6  1  -0.500
   5  1   0.500
   4  1   1.500
   3  2   2.550
   2  3   3.710
   1  4   5.013
 --Lambiam 18:49, 11 May 2020 (UTC)
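The table above can be reproduced with a short backward computation along these lines (a sketch only; M = 10 and the linear costs c = 1, 2, 3, … are the assumptions of this variant):

from math import ceil

M = 10                                  # maximum payout, as in the table

def gain(A, G_next, c):
    # expected gain of a round with cost c, threshold A, and optimal
    # continuation value G_next for the following round
    return (A - 1) / M * (G_next - c) + (M - A + 1) / M * ((A + M) / 2 - c)

A = {M: 1}                              # round with cost M: accept anything
G = {M: (M + 1) / 2 - M}

for c in range(M - 1, 0, -1):           # c = M-1, ..., 1
    A[c] = max(1, ceil(G[c + 1]))       # least acceptable payout
    G[c] = gain(A[c], G[c + 1], c)      # expected gain from this round on

for c in range(M, 0, -1):
    print(f"{c:4d} {A[c]:3d} {G[c]:8.3f}")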
For the third variant, let the round-dependent cost be given as a sequence c_1, c_2, c_3, … of positive integers. The strategy will be determined by a corresponding sequence A_1, A_2, A_3, … of integers in the range 1 to M, denoting the least acceptable payout in each round. The sequence of functions G_t(A) denotes the expected gain in round t given a proposed acceptance threshold A, assuming all later rounds will be played with an optimal strategy. We abbreviate G_t(A_t) by G_t. Completely analogous to before,
  G_t(A) = \frac{A-1}{M}\bigl(G_{t+1} - c_t\bigr) + \frac{M-A+1}{M}\left(\frac{A+M}{2} - c_t\right)
and
  A_t = \left\lceil G_{t+1} \right\rceil
again with a minimum of 1.
If, for some round t, we have c_s ≥ M for every round s ≥ t, we know (as above) that A_t = 1 and G_t = (M+1)/2 − c_t. Then we can successively compute backwards as before for rounds t−1, t−2, …, 1.
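A sketch of this exact backward computation (the values M = 10 and the cost sequence 2, 3, 5, 7, 11 are made up for illustration; from the fifth round on every cost is taken to be at least M, so that round serves as the anchor):

from math import ceil

M = 10                       # maximum payout (illustrative)
costs = [2, 3, 5, 7, 11]     # c_1, ..., c_5; all later costs are >= M as well

T = len(costs)               # anchor round: cost >= M, accept anything
A = {T: 1}
G = {T: (M + 1) / 2 - costs[T - 1]}

for t in range(T - 1, 0, -1):                    # t = T-1, ..., 1
    c = costs[t - 1]
    A[t] = max(1, ceil(G[t + 1]))                # least acceptable payout
    G[t] = (A[t] - 1) / M * (G[t + 1] - c) + \
           (M - A[t] + 1) / M * ((A[t] + M) / 2 - c)

for t in range(1, T + 1):
    print(t, costs[t - 1], A[t], round(G[t], 3))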
If the costs remain below the maximum, pick some large index h (h for horizon). We know limits on A_h and G_h. For the latter:
  \frac{M+1}{2} - c_h \le G_h \le G^{(1)}
in which G^{(1)} denotes the optimal expected gain of version (1) with C = 1; the lower bound is what the player gets by accepting immediately, while the upper bound corresponds to the most favourable case for the player, namely c_t = 1 for all t ≥ h. Then also, since the same bounds apply to G_{h+1},
  \max\bigl(1, \bigl\lceil \tfrac{M+1}{2} - c_{h+1} \bigr\rceil\bigr) \le A_h \le \max\bigl(1, \bigl\lceil G^{(1)} \bigr\rceil\bigr)
We can then compute lower and upper bounds backwards. With some luck, they will coincide after a number of steps. Since the true values of A_t and G_t always lie between these bounds, when the bounds on A_t coincide at some index t, their common value is the optimal threshold for that round; we thus have the optimal strategy for rounds 1 up to but not including the earliest point of divergence. If the bounds did not coincide yet when the index 1 is reached, or a longer initial stretch is needed, repeat with a more distant horizon.  --Lambiam 19:04, 12 May 2020 (UTC)
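One way the bounding computation might be organised (a sketch under stated assumptions: M = 10, the horizon h = 40 and the cost sequence cost(t) = 1 + (t mod 3) are invented for illustration, and the upper bound at the horizon uses the version-(1) result with C = 1):

from math import ceil, sqrt

M = 10                                   # illustrative maximum payout

def cost(t):                             # illustrative costs, always below M
    return 1 + (t % 3)                   # 2, 3, 1, 2, 3, 1, ...

def step(G_next, c):
    # optimal threshold and expected gain of one round, given a
    # continuation value G_next for the following round
    A = max(1, ceil(G_next))
    return A, (A - 1) / M * (G_next - c) + (M - A + 1) / M * ((A + M) / 2 - c)

h = 40                                   # horizon
A1 = max(1, ceil(M + (1 - sqrt(8 * M + 1)) / 2))
G_hi = (A1 + M) / 2 - M / (M - A1 + 1)   # version (1) with C = 1: upper bound
G_lo = (M + 1) / 2 - cost(h)             # accept immediately: lower bound

bounds = {}
for t in range(h - 1, 0, -1):            # propagate the bounds backwards
    (a_lo, g_lo), (a_hi, g_hi) = step(G_lo, cost(t)), step(G_hi, cost(t))
    bounds[t] = (a_lo, a_hi)
    G_lo, G_hi = g_lo, g_hi

for t in range(1, 11):                   # initial stretch of rounds
    a_lo, a_hi = bounds[t]
    print(t, a_lo, a_hi, "optimal" if a_lo == a_hi else "undecided")

This relies on both the optimal threshold and the optimal gain of a round being nondecreasing functions of the continuation value, so feeding in a lower (respectively upper) bound for G_{t+1} yields a lower (respectively upper) bound for A_t and G_t.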