In computer science, anytime A* is a family of variants of the A* search algorithm. Like other anytime algorithms, it has a flexible time cost: it can return a valid solution to a pathfinding or graph traversal problem even if it is interrupted before it finishes, by first generating a fast, non-optimal solution and then progressively improving it. This ability to quickly generate solutions has made it attractive to search-based websites and AI designs.

Background and history

Running the optimal A* algorithm to completion is too expensive for many purposes. A*'s optimality can be sacrificed to reduce execution time by inflating the heuristic, as in weighted A* from 1970.[1] Iteratively reducing the degree to which the heuristic is "inflated" provides a naive anytime algorithm (ATA*, 2002), but this repeats previous work.[2] A more efficient and error-bounded version that reuses results, Anytime Repairing A* (ARA*), was reported in 2003.[3][4] A dynamic (in the sense of D*) modification of ARA*, Anytime Dynamic A* (ADA*), was published in 2005; it combines aspects of D* Lite and ARA*.[5] Anytime Weighted A* (1997, 2007) is similar to weighted A*, except that it continues the search after finding the first solution. It iteratively finds better solutions and ultimately finds the optimal one without repeating previous work, while also providing error bounds throughout the search.[6][7] Randomized Weighted A* (2021) introduced randomization into Anytime Weighted A* and demonstrated better empirical performance.[8]

Difference from A*

 
Weighted A* search using a heuristic inflated by a factor of w = 5.0 over a consistent heuristic, which yields a suboptimal path.

The A* search algorithm expands nodes in order of the evaluation function f(n) = g(n) + h(n), where n is the last node on the path, g(n) is the cost of the path from the start node to n, and h(n) is a heuristic that estimates the cost of the cheapest path from n to the goal. Unlike A*, anytime A* algorithms can be stopped at any time, returning the best solution found so far, and then restarted. The anytime A* family typically builds on the weighted version of A* search, which uses fw(n) = g(n) + w × h(n) with w > 1 and otherwise searches as usual; this generally finishes faster because fewer nodes are expanded.[1]
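
To make the weighted evaluation function concrete, the following minimal Python sketch implements weighted A* on an explicit graph. The graph encoding ({node: [(neighbor, edge_cost), ...]}), the heuristic mapping h, and all names are illustrative assumptions for this article, not code from the cited papers.

    import heapq

    def weighted_a_star(graph, start, goal, h, w=5.0):
        # Open list ordered by fw(n) = g(n) + w * h(n); entries are
        # (fw, g, node, path) so ties break on the cheaper g.
        open_list = [(w * h(start), 0.0, start, [start])]
        best_g = {start: 0.0}
        while open_list:
            fw, g, node, path = heapq.heappop(open_list)
            if node == goal:
                return path, g  # with admissible h, cost is at most w times optimal
            if g > best_g.get(node, float("inf")):
                continue  # stale queue entry
            for neighbor, cost in graph.get(node, []):
                new_g = g + cost
                if new_g < best_g.get(neighbor, float("inf")):
                    best_g[neighbor] = new_g
                    heapq.heappush(open_list,
                                   (new_g + w * h(neighbor), new_g,
                                    neighbor, path + [neighbor]))
        return None, float("inf")

    # Tiny illustrative graph with edge costs and a consistent heuristic.
    graph = {"A": [("B", 1), ("C", 4)], "B": [("C", 1), ("D", 5)],
             "C": [("D", 1)], "D": []}
    h = {"A": 3, "B": 2, "C": 1, "D": 0}
    print(weighted_a_star(graph, "A", "D", h.get, w=5.0))
    # -> (['A', 'C', 'D'], 5): suboptimal (the optimal cost is 3), but found quickly.

With w = 5.0 the inflated heuristic pulls the search straight toward the goal with fewer expansions, which is how the suboptimal path shown in the figure above arises.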

ATA* runs weighted A* multiple times, lowering w each time until w = 1, at which point the search becomes plain A*. Although this can be done simply by calling weighted A* repeatedly and discarding all previous search effort, ARA* instead introduces a way of updating (repairing) the previous search so that earlier work is reused.[4] The initial value of w determines the minimal (first-solution) runtime of ATA*.
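
Under the same illustrative conventions, a naive anytime loop in the spirit of ATA* can be sketched by re-running the weighted_a_star function above with progressively smaller weights and keeping the cheapest solution found so far; each improvement is reported immediately, so the caller may stop at any time.

    def anytime_repeated_weighted_a_star(graph, start, goal, h,
                                         weights=(5.0, 3.0, 2.0, 1.0)):
        # Re-run weighted A* with decreasing w (w = 1.0 is plain A*),
        # keeping and reporting the best solution found so far.
        best_path, best_cost = None, float("inf")
        for w in weights:
            path, cost = weighted_a_star(graph, start, goal, h, w)
            if path is not None and cost < best_cost:
                best_path, best_cost = path, cost
                yield w, best_path, best_cost

    for w, path, cost in anytime_repeated_weighted_a_star(graph, "A", "D", h.get):
        print(w, path, cost)  # prints the improving solutions as w shrinks

Each iteration here restarts the search from scratch; this repeated work is what ARA* avoids by repairing and reusing its previous search results.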

Anytime Weighted A* turns weighted A* into an anytime algorithm as follows.[6][7] The goal test is performed at node generation instead of node expansion, and the best goal node found so far becomes the current solution. Unless the algorithm is interrupted, the search continues until the optimal solution is found. During the search, the error bound is the cost c of the current solution minus the least f-value in the open list. The optimal solution is detected when no nodes with f(n) ≤ c remain open, at which point the algorithm terminates with c as the cost of the optimal solution. In 2021, it was found that, given a search time limit, Anytime Weighted A* yields better solutions on average by simply switching the weight randomly after each node expansion than by using a fine-tuned static weight or by gradually lowering the weight after each solution.[8]
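
The following Python sketch condenses the description above, using the same illustrative graph conventions as the earlier examples. It keeps the unweighted f-value of each node alongside the weighted priority so that the incumbent cost c can prune the open list and supply the error bound; details such as the tie-breaking and the generator interface are assumptions, not part of the published algorithm.

    import heapq

    def anytime_weighted_a_star(graph, start, goal, h, w=2.0):
        # Entries are (fw, f, g, node, path): ordered by the weighted fw,
        # while the unweighted f = g + h is kept for pruning and error bounds.
        open_list = [(w * h(start), h(start), 0.0, start, [start])]
        best_g = {start: 0.0}
        incumbent, c = None, float("inf")
        while open_list:
            fw, f, g, node, path = heapq.heappop(open_list)
            if f >= c or g > best_g.get(node, float("inf")):
                continue  # cannot improve the incumbent, or a stale entry
            for neighbor, cost in graph.get(node, []):
                new_g = g + cost
                new_f = new_g + h(neighbor)
                if new_g >= best_g.get(neighbor, float("inf")) or new_f >= c:
                    continue
                best_g[neighbor] = new_g
                if neighbor == goal:
                    # Goal test at node generation: update the incumbent and
                    # report c minus the least f on the frontier as the bound.
                    incumbent, c = path + [neighbor], new_g
                    frontier_min = min([f] + [entry[1] for entry in open_list])
                    yield incumbent, c, max(c - frontier_min, 0.0)
                else:
                    heapq.heappush(open_list,
                                   (new_g + w * h(neighbor), new_f, new_g,
                                    neighbor, path + [neighbor]))
        # The loop only exits when no open node has f(n) < c, so the last
        # incumbent reported (if any) is an optimal solution of cost c.

    for path, cost, bound in anytime_weighted_a_star(graph, "A", "D", h.get, w=5.0):
        print(path, cost, bound)  # improving solutions with shrinking error bounds

Interrupting the iteration at any point leaves the caller with the best solution reported so far, together with a bound on how far its cost can be from optimal.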

References

  1. ^ a b Pohl, Ira (1970). "First results on the effect of error in heuristic search". Machine Intelligence 5. Edinburgh University Press. pp. 219–236. ISBN 978-0-85224-176-9. OCLC 1067280266.
  2. ^ Zhou, R.; Hansen, E.A. (2002). Multiple sequence alignment using anytime A* (PDF). Eighteenth National Conference on Artificial Intelligence. pp. 975–976. ISBN 978-0-262-51129-2.
  3. ^ Likhachev, Maxim; Gordon, Geoff; Thrun, Sebastian. ARA*: Formal Analysis (PDF) (Technical report). School of Computer Science, Carnegie Mellon University. Retrieved 24 July 2018.
  4. ^ a b Likhachev, Maxim; Gordon, Geoff; Thrun, Sebastian. ARA*: Anytime A* with Provable Bounds on Sub-Optimality (PDF) (Technical report). School of Computer Science, Carnegie Mellon University. Retrieved 24 April 2018.
  5. ^ Likhachev, Maxim; Ferguson, Dave; Gordon, Geoff; Stentz, Anthony; Thrun, Sebastian (2005). "Anytime Dynamic A*: An Anytime, Replanning Algorithm". Proceedings of the Fifteenth International Conference on Automated Planning and Scheduling. pp. 262–271. ISBN 978-1-57735-220-4.
  6. ^ a b Hansen, E.A.; Zilberstein, S.; Danilchenko, V.A. (1997). Anytime Heuristic Search: First Results (Technical report). Computer Science Department, University of Massachusetts Amherst. 97-50.
  7. ^ a b Hansen, E.A.; Zhou, R. (2007). "Anytime Heuristic Search". Journal of Artificial Intelligence Research. 28: 267–297. arXiv:1110.2737. doi:10.1613/jair.2096. S2CID 9832874.
  8. ^ a b Bhatia, Abhinav; Svegliato, Justin; Zilberstein, Shlomo. "On the Benefits of Randomly Adjusting Anytime Weighted A*". Proceedings of the Fourteenth International Symposium on Combinatorial Search. Retrieved 21 July 2021.