Pro Prolog is pro.

A lot of computer problems (and some puzzles) simply involve backtracking over the solution space and checking constraints, yet these can be fairly clunky to describe in imperative languages. Compounding this, the high cost of method calls in most VM and dynamic languages makes any reasonably sane-looking implementation (read: recursive) rather slow. A much better idea is to use a language that makes this really easy, like Prolog.

I have a Compiler Design exam in less than a week's time, and a common question involves being given a number of definitions of compilers in terms of their source and target languages, along with the language each compiler itself is implemented in. The task is simply to work out whether it is possible to translate from one specified language to another. This is just two stanzas in Prolog:

can_translate(Src, Tgt, [L])   :- t_diagram(Src, Tgt, _, L).
can_translate(Src, Tgt, [H|T]) :- t_diagram(Src, Via, _, H),
                                  can_translate(Via, Tgt, T).

Compilers can then just be defined with

t_diagram(source, target, implemented_in, name).

and querying the translation pipeline is done with

?- can_translate(source, target, Result).
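
As a concrete illustration (the compiler names below are made up for this sketch, not taken from any real exam question), a toy database and query might look like:

```prolog
% Hypothetical facts: an ML-to-C compiler written in ML,
% and a C-to-x86 compiler written in C.
t_diagram(ml, c,   ml, ml2c).
t_diagram(c,  x86, c,  cc).

% ?- can_translate(ml, x86, Pipeline).
% Pipeline = [ml2c, cc]
```

On backtracking (pressing `;` at the prompt), Prolog enumerates every distinct translation pipeline, not just the first.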

(Modifying it so that it works with cycles isn't very tricky, either.)
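
One possible sketch of that modification (an assumption on my part, not the post's own code): thread an accumulator of languages already visited, and refuse to step into any language twice. This uses the standard `member/2` from library(lists) and negation as failure.

```prolog
% Cycle-safe variant (sketch): Seen accumulates visited languages.
can_translate(Src, Tgt, Path) :-
    can_translate(Src, Tgt, [Src], Path).

can_translate(Src, Tgt, _, [L]) :-
    t_diagram(Src, Tgt, _, L).
can_translate(Src, Tgt, Seen, [H|T]) :-
    t_diagram(Src, Via, _, H),
    \+ member(Via, Seen),          % never revisit a language
    can_translate(Via, Tgt, [Via|Seen], T).
```

This terminates even when the `t_diagram/4` facts form a cycle, since each recursive step must reach a language not yet in `Seen`.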

Comments (2)

mt

Here's a shorter version:
<code>
can_translate(L, L, []).
can_translate(L0, L, [T|Ts]) :- t_diagram(L0, L1, _, T), can_translate(L1, L, Ts).
</code>

June 14, 2007, 1:23 p.m. #

If you still care about this stuff, then check out GHC, and Simon PJ's stuff on compiler optimization. It's all defined in terms of rewriting rules, though it's not as cool as your prolog thingy.

June 19, 2007, 8:43 a.m. #