A lot of computer problems (and some puzzles) simply involve backtracking over the solution space and checking constraints, yet these can be fairly clunky to describe in imperative languages. Compounding this, the high cost of method calls in most VM and dynamic languages makes any reasonably sane-looking implementation (read: recursive) rather slow. A much better idea is to use a language that makes this really easy, like Prolog:
I have a Compiler Design exam in less than a week's time, and a common question involves being given a number of definitions of compilers in terms of their source and target languages, along with the language each compiler itself is implemented in. The task is simply to work out whether it is possible to translate from one specified language to another. This is just two stanzas in Prolog:
% Base case: a single compiler translates Src straight to Tgt.
can_translate(Src, Tgt, [L]) :- t_diagram(Src, Tgt, _, L).
% Recursive case: compile to an intermediate language Via, then carry on from there.
can_translate(Src, Tgt, [H|T]) :- t_diagram(Src, Via, _, H), can_translate(Via, Tgt, T).
Compilers can then just be defined with
t_diagram(source, target, implemented_in, name).
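For instance (the compiler names and languages here are made up purely for illustration), a small set of facts might look like:

t_diagram(pascal, c, pascal, p2c).   % a Pascal-to-C transpiler written in Pascal
t_diagram(c, asm, c, gcc).           % a C-to-assembly compiler written in C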
and querying the translation pipeline is done with
?- can_translate(source, target, Result).
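With the made-up facts above, asking whether we can get from Pascal down to assembly gives back the pipeline of compilers to run, in order:

?- can_translate(pascal, asm, Result).
Result = [p2c, gcc].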
(Modifying it so that it works with cycles isn't very tricky, either.)
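One way of doing that, sketched here as a replacement for the two clauses above, is to thread through an extra argument listing the languages already visited and refuse to revisit them (the accumulator argument is my own addition):

can_translate(Src, Tgt, Path) :- can_translate(Src, Tgt, [Src], Path).

% As before, but the third argument accumulates languages already visited,
% so a cyclic set of t_diagram/4 facts can't send the search round in circles.
can_translate(Src, Tgt, _, [L]) :- t_diagram(Src, Tgt, _, L).
can_translate(Src, Tgt, Seen, [H|T]) :-
    t_diagram(Src, Via, _, H),
    \+ member(Via, Seen),
    can_translate(Via, Tgt, [Via|Seen], T).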