CS5811: Homework 1
SEARCH
Assigned: Monday, September 12, 2005.
Due: Monday, September 26, 2005, beginning of class.
Reminders:
- This homework emphasizes experimental design rather than programming. I
expect a scholarly report for the analysis of the results. The report
should contain the complete analysis and results; programs and execution
traces should go in the appendices.
- Include very clear instructions on how to run your programs in order to
get the same results you report.
- Please submit your programs and report both using "submit" and as a
hardcopy. For long hardcopies other than the report, I prefer enscript
with two columns to save paper
(enscript -2Gr -P<printer-name> <file-name>).
I do the grading on the hardcopy and refer to the programs as needed.
Please do not print lengthy programs; print only the code you implemented
yourself.
- Include some sample dribble files that show the results for your test
runs. Avoid printing long, unreadable search traces; keep them concise
and to the point. I need to see the setup of your experiments and get a
sense of how the search proceeds. Please do not print lengthy traces.
- This is an individual assignment. All the work should be the author's,
in accordance with the university's academic integrity policies. You are
not allowed to work in groups. However, you are allowed to get help on
Lisp.
- In this assignment you'll make use of a library of Lisp code provided
by the authors of your textbook (if you'd like to use another language's
package, please see me). I've downloaded the search subdirectory into
/classes/cs5811/common/aima-code/search/. The other directories it needs
(the utilities and agents parts) are loaded automatically. The full
library is available too: /classes/cs5811/common/aima-code/.
(Alternatively, you can access the full library from the textbook's web
site.)
Tasks and experiments:
- Your first task is to implement iterative broadening search. You may
use any part of the AIMA code as a model. Once you implement and test
iterative broadening search, implement the "hard-coded" search tree below
by writing the successor function. Print a trace of the order in which
nodes are visited (expanded) and of the nodes generated for each of the
following search techniques:
- BFS (breadth first search)
- DFS (depth first search)
- IDS (iterative deepening search)
- IBS (iterative broadening search)

Assume that nodes V and J are the goal nodes.
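If it helps to see the idea before writing the Lisp version, here is a
minimal language-agnostic sketch of iterative broadening in Python. It is
not the AIMA implementation: it assumes a successor function that returns
an ordered list of children, and it restarts a depth-first search with an
increasing breadth cutoff b, considering only the first b successors of
each node on each pass.

```python
def iterative_broadening_search(start, successors, is_goal, max_breadth=10):
    """Restart depth-first search with breadth cutoff b = 2, 3, ...;
    on each pass only the first b successors of a node are considered."""
    def dls(node, b, visited):
        if is_goal(node):
            return [node]
        visited.add(node)                       # avoid revisiting on this pass
        for child in successors(node)[:b]:      # breadth cutoff
            if child not in visited:
                path = dls(child, b, visited)
                if path is not None:
                    return [node] + path
        return None

    for b in range(2, max_breadth + 1):         # widen the cutoff each pass
        path = dls(start, b, set())
        if path is not None:
            return path
    return None
```

For a hard-coded tree like the one in this task, the successor function can
simply look children up in a table, e.g.
`iterative_broadening_search('A', lambda n: tree[n], lambda n: n == 'G')`.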
- Your second task is to compare the performance of BFS, DFS, IDS, A*,
and IDA* on a set of at least twenty randomly generated 8-puzzle problems
(using both the misplaced-tiles heuristic and the Manhattan-distance
heuristic for the informed searches). Make sure that you use the same
problem set for every run of the experiments, and include problems of
varying complexity. Discuss your results. The design of the experimental
setup will be graded. If a result is as expected, explain the reason for
your expectation. If you see a result that surprises you, explain what
might have caused it.
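For reference, both heuristics are short to state. A sketch in Python
(not the AIMA code; it assumes states are length-9 tuples read row by row,
with 0 for the blank):

```python
def misplaced_tiles(state, goal):
    """Number of tiles (excluding the blank, 0) not in their goal cell."""
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

def manhattan_distance(state, goal):
    """Sum over tiles of horizontal + vertical distance from the goal cell."""
    total = 0
    for idx, tile in enumerate(state):
        if tile == 0:
            continue
        gidx = goal.index(tile)
        total += abs(idx // 3 - gidx // 3) + abs(idx % 3 - gidx % 3)
    return total
```

Both are admissible, and Manhattan distance dominates misplaced tiles, so
you should expect A* and IDA* to expand fewer nodes with it.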
- Your third task is to explore what happens to the performance of IDA*
when a small (<1) random number is added to the heuristic values in the
8-puzzle domain. Explain your findings.
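One way to set this up is to wrap the existing heuristic rather than edit
it. A hedged Python sketch (the wrapper name and interface are mine, not
the AIMA code's):

```python
import random

def noisy(h, rng=None):
    """Wrap heuristic h so each evaluation adds a random value in [0, 1).
    The noise can push h above the true remaining cost, so the wrapped
    heuristic is no longer guaranteed admissible; it also makes f-values
    non-integer, which affects how the IDA* cutoff grows between passes."""
    rng = rng or random.Random()
    return lambda state: h(state) + rng.random()
```

Seeding the random generator (e.g. `noisy(h, random.Random(0))`) keeps the
perturbed runs repeatable, which matters for the comparison.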
All the above search methods except iterative broadening are implemented
in the AIMA code; you need iterative broadening only for the first part.
The 8-puzzle domain is also implemented, and there are more examples,
including the travelling salesperson problem (TSP), in search/domains.
Turn in your summary of results for each
problem including your interpretation of the results. You may use either
tables or graphs to depict your results. They should minimally include:
- the search algorithm used
- the start state(s) and the goal state(s)
- the solution(s) (if solved), and the cost of the solution (depth)
- the time required to complete the search or the time the search was
terminated without success
- the number of nodes expanded and the number of nodes generated
- the maximum number of nodes in memory
Do not wait for hours for a solution; define a reasonable time limit and
terminate the search when it is reached (especially for DFS). Remember to
use a version of DFS that avoids infinite loops.
Note that the software can generate random instances of the problems, but
be careful to run the comparisons on the same set of problems. Present a
summary of the problems used in your report.
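One simple way to get a fixed, reusable problem set is to generate the
instances once from a seeded random generator. A sketch in Python (not
the AIMA generator; it scrambles the goal by a random walk, so every
instance is guaranteed solvable, and walk length roughly controls
difficulty):

```python
import random

def random_instances(n, seed, scramble_moves=30):
    """Return n 8-puzzle start states; a fixed seed makes runs repeatable."""
    rng = random.Random(seed)
    goal = (1, 2, 3, 4, 5, 6, 7, 8, 0)

    def neighbors(state):
        i = state.index(0)                     # blank position
        r, c = divmod(i, 3)
        out = []
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < 3 and 0 <= nc < 3:
                j = nr * 3 + nc
                s = list(state)
                s[i], s[j] = s[j], s[i]        # slide a tile into the blank
                out.append(tuple(s))
        return out

    instances = []
    for _ in range(n):
        state = goal
        # vary the walk length so the set mixes easy and harder problems
        for _ in range(rng.randrange(scramble_moves // 2, scramble_moves)):
            state = rng.choice(neighbors(state))
        instances.append(state)
    return instances
```

Record the seed in your report: `random_instances(20, seed=2005)` then
yields the identical twenty problems for every algorithm you compare.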
To run the predefined test cases for search, do the following:
- load the AIMA software loader: (load "aima.lisp")
- load the search part of the AIMA software: (aima-load 'search) (this
will automatically load the utilities part and the agents part)
- run the test function: (test 'search)
To run your experiments, load
- the AIMA software loader: (load "aima.lisp")
- the search part of the AIMA software: (aima-load 'search) (this will
automatically load the utilities part and the agents part)
You will find examples for solving problems in /classes/cs5811/common/aima-code/search/test-search.lisp.
Here is a quote from this file:
"For the full 3 missionary and 3 cannibal problem, breadth-first-search"
"is very inefficient. Better to use something that handles repeated
"states, like A*-search or no-duplicates-breadth-first-search:"
((solve (make-cannibal-problem) 'A*-search) *)
((solve (make-cannibal-problem) 'no-duplicates-breadth-first-search) *)
The code also provides a compare function:
(compare-search-algorithms
#'8puzzle-problem
'(A*-search SMA*search IDA*-search))
However, it will not proceed to the next method if you kill a search
because it does not seem to make progress. My advice is to run each
search individually. Here is a transcript that might be helpful:
hw01-dribble.txt (also available as hw01-dribble.ps).