Level 1 of Lecture 3 - Informed search

Questions and Answers List


Iterative improvement algorithms: In many optimization problems the path is irrelevant; the goal state itself is the solution.
Types of informed search: Best-first search (greedy search, beam search, A* search) and local search algorithms (hill-climbing, simulated annealing, local beam search).
Greedy search: The evaluation function h(n) (the heuristic) is an estimate of the cost from n to the closest goal. Greedy search expands the node that appears to be closest to the goal.
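The idea can be sketched in Python with a priority queue ordered by h(n) alone. This is an illustrative sketch, not code from the lecture; the function names and the toy graph in the usage note are assumptions.

```python
import heapq

def greedy_best_first(start, goal, neighbors, h):
    """Greedy best-first search: always expand the node with the
    lowest heuristic estimate h(n). `neighbors` maps a state to its
    successor states. Returns a path from start to goal, or None."""
    frontier = [(h(start), start, [start])]
    visited = set()                       # repeated-state checking
    while frontier:
        _, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for nxt in neighbors(node):
            if nxt not in visited:
                heapq.heappush(frontier, (h(nxt), nxt, path + [nxt]))
    return None
```

On a small hand-made graph where h('C') < h('B'), the search heads through C first, illustrating that expansion order is driven purely by the heuristic, not by path cost.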
Properties of greedy search: Complete in finite spaces with repeated-state checking. Time complexity O(b^m), but a good heuristic can give dramatic improvement. Space complexity O(b^m): keeps all nodes in memory. Optimal? No. (Here b is the maximum branching factor of the tree and m the maximum depth of the state space.)
Beam search: Problem: greedy best-first search might get stuck. Solution: keep track of the k best alternatives. Algorithm summary: execute greedy best-first search with the following extension: 1. sort the fringe and keep only the best k nodes; 2. remove all nodes from the fringe and expand the best k nodes; 3. if the goal is found, quit; otherwise 4. add the resulting nodes to the fringe (depending on the search variant used: tree, graph, etc.) and continue from the first step.
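The four steps above can be sketched directly in Python. This is a graph-search variant with assumed helper names, not the lecture's own code.

```python
def beam_search(starts, goal, neighbors, h, k=2):
    """Beam search: greedy best-first search, but the fringe is pruned
    to the best k nodes before every expansion round."""
    fringe = [(h(s), s, [s]) for s in starts]
    visited = set(starts)
    while fringe:
        fringe.sort()                     # 1. sort the fringe ...
        best = fringe[:k]                 # ... and keep only the best k
        fringe = []                       # 2. expand the best k nodes
        for _, node, path in best:
            if node == goal:              # 3. if the goal is found, quit
                return path
            for nxt in neighbors(node):   # 4. add resulting nodes
                if nxt not in visited:    #    (graph-search variant)
                    visited.add(nxt)
                    fringe.append((h(nxt), nxt, path + [nxt]))
    return None
```

With k = 1 this degenerates to greedy best-first search; larger k keeps more alternatives alive and reduces the risk of getting stuck.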
A* search: Idea: avoid expanding paths that are already expensive. Evaluation function f(n) = g(n) + h(n), where g(n) is the cost so far to reach n, h(n) is the estimated cost to the goal from n, and f(n) is the estimated total cost of the path through n to the goal.
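A minimal graph-search A* sketch in Python, ordering the frontier by f(n) = g(n) + h(n). The helper names and the weighted toy graph in the test are illustrative assumptions, not from the lecture.

```python
import heapq

def a_star(start, goal, neighbors, h):
    """A* search: expand the node with the lowest f(n) = g(n) + h(n).
    `neighbors` yields (successor, step_cost) pairs. Graph-search
    version; optimal when h is consistent."""
    frontier = [(h(start), 0, start, [start])]    # (f, g, node, path)
    best_g = {start: 0}                           # cheapest g found so far
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        for nxt, cost in neighbors(node):
            g2 = g + cost                         # cost so far to reach nxt
            if g2 < best_g.get(nxt, float('inf')):
                best_g[nxt] = g2
                heapq.heappush(frontier,
                               (g2 + h(nxt), g2, nxt, path + [nxt]))
    return None, float('inf')
```

On a graph where the heuristically attractive first step leads to an expensive path, A* still returns the cheapest route, unlike greedy search.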
Conditions for optimality I: An admissible heuristic never overestimates the cost to reach the goal, i.e. it is optimistic: h(n) ≤ h*(n) for every node n, where h*(n) is the true cost from n. We also require h(n) ≥ 0, so h(G) = 0 for any goal G.
Conditions for optimality II: A consistent heuristic satisfies the triangle inequality: h(n) ≤ c(n, a, n′) + h(n′) for every successor n′ of n generated by action a.
Optimality of A*: Theorem: the tree-search version of A* is optimal if h(n) is admissible; the graph-search version of A* is optimal if h(n) is consistent.
Proving optimality of A* (graph search), step 1: If h(n) is consistent, then the values of f(n) along any path are nondecreasing. Consistency means h(n) ≤ c(n, a, n′) + h(n′). If n′ is a successor of n, we have f(n′) = g(n′) + h(n′) = g(n) + c(n, a, n′) + h(n′) ≥ g(n) + h(n) = f(n).
Proving optimality of A* (graph search), step 2: Whenever A* selects a node n for expansion, the optimal path to that node has been found. Suppose not: then some node n′ on the optimal path to n is still on the fringe. Because n was selected for expansion, f(n) ≤ f(n′); and because f-costs are nondecreasing along any path, the path through n′ cannot cost less than f(n). Contradiction.
Proving optimality for tree search: Suppose some suboptimal goal G2 has been generated and is in the queue, and let n be an unexpanded node on a shortest path to an optimal goal G. Then:
f(G2) = g(G2) + h(G2) = g(G2), since h(G2) = 0;
f(G2) > g(G) = g(n) + h*(n), since G2 is suboptimal;
f(G2) > g(n) + h(n) = f(n), since h(n) ≤ h*(n) (admissibility).
Since f(G2) > f(n), A* will never select G2 for expansion.
Which nodes are expanded? If C* is the cost of the optimal solution path, then A* expands all nodes with f(n) < C*, might expand some nodes with f(n) = C*, and expands no nodes with f(n) > C*.
Properties of A*: Complete? Yes, unless there are infinitely many nodes with f ≤ f(G). Time: exponential, O((b^ε)^d), where ε := (h* − h)/h* is the relative error and h* is the cost of an optimal solution. Space: exponential; graph search keeps all nodes in memory, and even the size of the fringe alone can be exponential. Optimal? Yes, depending on the properties of the heuristic and the search method (see above). A* usually runs out of space long before it runs out of time.
Heuristics: Euclidean distance: straight-line distance. On a plane, d_S(A, B) = √((x_A − x_B)² + (y_A − y_B)²).
Heuristics: Manhattan distance: number of valid moves required to place a tile in its correct position. On a plane, d_M(A, B) = |x_A − x_B| + |y_A − y_B|.
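Both distance heuristics are one-liners in Python. A minimal sketch; the function names are my own:

```python
import math

def euclidean(a, b):
    """Straight-line distance between two points on a plane."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def manhattan(a, b):
    """Sum of horizontal and vertical offsets between grid positions."""
    return abs(a[0] - b[0]) + abs(a[1] - b[1])
```

Note that for any pair of points the Euclidean distance never exceeds the Manhattan distance, which is why the straight-line estimate is admissible on grids where only horizontal and vertical moves are allowed.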
Admissible heuristics: Example: the 8-puzzle. h1(n) = number of misplaced tiles; h2(n) = total Manhattan distance (i.e. the number of squares each tile is from its desired location).
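Both 8-puzzle heuristics can be computed from a flat board representation. A sketch under the assumption that a state is a tuple of 9 values read row by row, with 0 for the blank:

```python
def h1(state, goal):
    """Number of misplaced tiles (the blank is not counted)."""
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

def h2(state, goal):
    """Total Manhattan distance of every tile from its goal square
    on the 3x3 board."""
    dist = 0
    for i, tile in enumerate(state):
        if tile == 0:
            continue
        j = goal.index(tile)                      # goal square of this tile
        dist += abs(i // 3 - j // 3) + abs(i % 3 - j % 3)
    return dist
```

Since every misplaced tile is at least one square away from its goal position, h2(n) ≥ h1(n) always holds, which is the dominance relation discussed below.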
Dominance: If h2(n) ≥ h1(n) for all n (both admissible), then h2 dominates h1 and is better for search.
Relaxed problems: Admissible heuristics can be derived from the exact solution cost of a relaxed version of the problem. If the rules of the 8-puzzle are relaxed so that a tile can move anywhere, then h1(n) gives the shortest solution; if the rules are relaxed so that a tile can move to any adjacent square, then h2(n) gives the shortest solution. Key points: the optimal solution cost of a relaxed problem is no greater than the optimal solution cost of the real problem, and the relaxed problem can be solved efficiently.
Memory-bounded heuristic search: Task: solve A*'s space problems while maintaining completeness and optimality. Iterative-deepening A* (IDA*): the cutoff information is the f-cost (g + h) instead of the depth; the new cutoff is the smallest f-cost of any node that exceeded the cutoff on the previous iteration. Recursive best-first search (RBFS): a recursive algorithm that attempts to mimic standard best-first search in linear space. (Simplified) memory-bounded A* ((S)MA*): drop the worst leaf node when memory is full and back up its value if necessary.
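The IDA* idea fits in a few lines: a depth-first search bounded by an f-cost cutoff, restarted with the smallest exceeding f-value. A sketch with assumed helper names, not the lecture's code:

```python
def ida_star(start, goal, neighbors, h):
    """Iterative-deepening A*: depth-first search with an f-cost cutoff;
    each iteration raises the cutoff to the smallest f-value that
    exceeded it on the previous iteration. Linear space."""
    def dfs(node, g, bound, path):
        f = g + h(node)
        if f > bound:
            return f, None                 # report the exceeding f-cost
        if node == goal:
            return f, path
        minimum = float('inf')
        for nxt, cost in neighbors(node):
            if nxt not in path:            # avoid cycles on current path
                t, found = dfs(nxt, g + cost, bound, path + [nxt])
                if found is not None:
                    return t, found
                minimum = min(minimum, t)
        return minimum, None

    bound = h(start)                       # initial cutoff
    while True:
        t, found = dfs(start, 0, bound, [start])
        if found is not None:
            return found
        if t == float('inf'):
            return None                    # no solution exists
        bound = t                          # smallest f that exceeded cutoff
```

Between iterations only the current cutoff is retained, which is exactly the "one single number" limitation mentioned in the RBFS evaluation below.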
RBFS evaluation: RBFS is somewhat more efficient than IDA*. Like A*, it is optimal if h(n) is admissible. Space complexity is O(bd); time complexity is difficult to characterize, worst case O(b^2d), and depends on the accuracy of h(n) and on how often the best path changes. IDA* and RBFS suffer from having too little memory: IDA* retains only a single number between iterations (the current f-cost limit).
(Simplified) memory-bounded A* (SMA*): Uses all available memory. Expands the best leaves until memory is full; when full, SMA* drops the worst leaf node (the one with the highest f-value) and, like RBFS, backs up the f-value of the forgotten node to its parent. The same node could be selected for both expansion and deletion; SMA* solves this by expanding the newest best leaf and deleting the oldest worst leaf. SMA* is complete if a solution is reachable, i.e. the path must fit in memory, and optimal if the heuristic h(n) is admissible.
Hill-climbing variations: Random-restart hill-climbing tries to avoid getting stuck in local maxima. Stochastic hill-climbing selects randomly among the uphill moves; the selection probability can vary with the steepness of the uphill move. First-choice hill-climbing is stochastic hill-climbing that generates successors randomly until one better than the current state is found; a good strategy when a state has many successors.
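Basic steepest-ascent hill-climbing and the random-restart variation can be sketched as follows; the helper names and the toy landscape in the test are assumptions for illustration:

```python
import random

def hill_climb(start, neighbors, value):
    """Steepest-ascent hill-climbing: move to the best neighbor
    until no neighbor improves on the current state (a local maximum)."""
    current = start
    while True:
        best = max(neighbors(current), key=value, default=current)
        if value(best) <= value(current):
            return current
        current = best

def random_restart(sample_start, neighbors, value, restarts=10):
    """Random-restart hill-climbing: rerun from random starting states
    and keep the best local maximum found."""
    return max((hill_climb(sample_start(), neighbors, value)
                for _ in range(restarts)),
               key=value)
```

On a multimodal landscape each individual climb may end on a local maximum; the restarts make finding the global maximum increasingly likely.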
Simulated annealing: Escapes local maxima by allowing "bad" moves, but gradually decreases their size and frequency. Origin: an adaptation of the Metropolis-Hastings algorithm.
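A minimal sketch of the schedule: uphill moves are always accepted, downhill moves with probability exp(Δ/T), and the temperature T decays geometrically. The parameter values and helper names are illustrative assumptions:

```python
import math
import random

def simulated_annealing(start, neighbor, value,
                        t0=1.0, cooling=0.95, steps=500):
    """Simulated annealing: always accept uphill moves; accept a
    downhill ("bad") move with probability exp(delta / T), where the
    temperature T decreases geometrically. Returns the best state seen."""
    current, t = start, t0
    best = current
    for _ in range(steps):
        nxt = neighbor(current)
        delta = value(nxt) - value(current)
        if delta > 0 or random.random() < math.exp(delta / t):
            current = nxt
        if value(current) > value(best):
            best = current
        t *= cooling                  # gradually lower the temperature
    return best
```

Early on (high T) the acceptance probability for bad moves is close to 1, so the walk explores freely; as T shrinks, exp(delta/T) vanishes for any negative delta and the behavior approaches pure hill-climbing.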
Local beam search: Keep track of k states instead of one. Initially: k random states. Next: determine all successors of the k states; if any successor is a goal, finish; otherwise select the k best successors and repeat. Major difference from k parallel random-restart searches: information is shared among the k search threads. Can suffer from a lack of diversity. Stochastic variant: choose k successors at random, with the probability of choosing a given successor being an increasing function of its value.
Genetic algorithms: A variant of local beam search in which a successor state is generated by combining two parent states. Start population: k randomly generated states. A state is represented by a chromosome, a string over a finite alphabet (often a string of 0s and 1s), e.g. the positions of the queens in 8-queens. Evaluation (fitness) function: higher values for better states, e.g. the number of non-attacking pairs of queens. The next generation of states is produced by crossover (recombination) and mutation.
Crossover and mutation: Crossover: select a pair of chromosomes, define cut point(s), and generate children according to a crossover strategy by swapping the chromosome segments. Mutation: similar in spirit to a random jump in simulated annealing; uses randomness to maintain genetic diversity by changing a randomly selected bit of a chromosome to a random value, helping to avoid local extremes.
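Single-point crossover and bitwise mutation on 0/1 chromosomes can be sketched in a few lines; the function names and the per-bit mutation rate are my own assumptions:

```python
import random

def crossover(parent1, parent2, cut=None):
    """Single-point crossover: swap the chromosome tails after a cut
    point, producing two children from two parents."""
    if cut is None:
        cut = random.randrange(1, len(parent1))   # random cut point
    return (parent1[:cut] + parent2[cut:],
            parent2[:cut] + parent1[cut:])

def mutate(chromosome, rate=0.1):
    """Flip each bit independently with a small probability,
    maintaining genetic diversity in the population."""
    return [1 - gene if random.random() < rate else gene
            for gene in chromosome]
```

With a fixed cut point the operation is deterministic, which makes the swap easy to see: the children are the two parents with their tails exchanged.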
Online search: Until now all algorithms were offline. Offline: the solution is determined before executing it. Online: interleave computation and action: observe, compute, take action, observe, … Online search is necessary for dynamic or stochastic environments, where it is impossible to take into account all possible contingencies. Used for exploration problems with unknown states and/or actions, e.g. any robot in a new environment.
Summary: Heuristic functions estimate costs of shortest paths, and good heuristics can dramatically reduce search cost. Greedy best-first search expands the node with the lowest h: incomplete and not always optimal. A* search expands the node with the lowest g + h: complete and optimal. Admissible heuristics can be derived from exact solutions of relaxed problems. Local search algorithms can find a reasonably good local optimum after a small number of restarts.