Please excuse any mis-statements of current theory since I'm primarily applied, previously in Computer Graphics, currently mostly in GIS.
@Article{sutherland-1974-characterization,
author = "I. E. Sutherland and R. F. Sproull and R. A. Schumacker",
title = "A characterization of ten hidden surface algorithms",
journal = "ACM Comput. Surv.",
volume = "6",
year = "1974",
pages = "1--55",
oldlabel = "geom-590",
}
In addition, techniques such as horizon lines date back about as far, e.g.:
@Article{wright-1973-two-space,
author = "Thomas J. Wright",
title = "A Two-Space Solution to the Hidden Line Problem for Plotting Functions of Two Variables",
year = "1973",
month = "January",
journal = "IEEE Trans. Computers",
keywords = "height field",
}
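The horizon-line idea is simple enough to sketch in a few lines. Here is a minimal floating-horizon visibility test, my own illustrative Python under assumed conventions (slices of y = f(x, z) processed front to back, a fixed set of x samples), not a transcription of Wright's algorithm:

```python
def visible_points(f, xs, zs):
    """Floating-horizon hidden-line test for the surface y = f(x, z).

    Process slices front to back (zs[0] is nearest the viewer). A sample
    is visible only if it rises above the running upper horizon or dips
    below the running lower horizon at that x; otherwise an earlier,
    nearer slice already covers it. Returns visible (x, z, y) samples.
    """
    upper = [float("-inf")] * len(xs)  # highest y seen so far at each x
    lower = [float("inf")] * len(xs)   # lowest y seen so far at each x
    out = []
    for z in zs:
        for i, x in enumerate(xs):
            y = f(x, z)
            if y > upper[i] or y < lower[i]:
                out.append((x, z, y))
            upper[i] = max(upper[i], y)
            lower[i] = min(lower[i], y)
    return out
```

Note how cheap this is: one pass over the samples, two arrays of state, no geometric intersection tests at all. That is the flavor of algorithm the theoretical literature later ignored.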
Computational geometers then started publishing papers, as if they'd invented the area: asymptotic analyses of complicated, unimplementable, worst-case algorithms, w/o ever acknowledging that the earlier work even exists. One such paper introduced its literature search approximately as "previous papers in this field", where "previous papers" meant only earlier theoretical papers in Computational Geometry.
Admittedly, some of these old algorithms are only approximate, but so are some of the new papers that ignore them.
A variant of this is the opinion that even implementation papers at good conferences should not mention implementation details, such as the computer system and SW used. Contrast this with experimental physics, where the experimental setup is considered quite relevant.
In addition, applications conferences often have papers not far from "How I designed and implemented a GIS to count bicycle traffic in Nfld w/o ever having to read a GIS or CG book." Those papers have nothing to teach anyone who doesn't want to use that specific system.
Nevertheless, things have now gone too far the other way: simple algorithms get too little respect. They are implementable, which is saying something, and they take a reasonable time on reasonable problems, regardless of their asymptotic complexity.
Also, after getting a simple algorithm implemented correctly, there is time left to play with the program, understand its behavior, and improve it. Simple algorithms often parallelize more easily as well. Finally, simple algorithms tend to run very fast on large datasets, since the constant factors in their running times are often small.
A worthwhile question is this: why do some simple data structures work so well in practice? And what does "in practice" mean? It's unfairly optimistic to assume the data to be uniform and uncorrelated, but unfairly pessimistic to make no assumptions at all.
Consider, for example, finding all intersections between a set of red segments and a set of blue segments. So far as I know, there is no good theoretical, asymptotic algorithm for this. Nevertheless, a bucketing, uniform-grid approach may find them all in O(size_in + size_out) time, w/o finding any red-red or blue-blue intersections, if the input is not too badly distributed.
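The uniform-grid idea can be sketched briefly. This is my own illustrative Python, under assumed simplifications (segments as pairs of points, a conservative bounding-box overlay onto the grid, and a proper-crossing test that ignores collinear touching), not a tuned implementation:

```python
from collections import defaultdict

def seg_intersect(p, q, r, s):
    """True if segments pq and rs properly cross (collinear touches ignored)."""
    def orient(a, b, c):
        v = (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])
        return (v > 0) - (v < 0)
    return (orient(p, q, r) != orient(p, q, s) and
            orient(r, s, p) != orient(r, s, q))

def grid_cells(seg, cell):
    """Grid cells overlapped by the segment's bounding box (conservative)."""
    (x1, y1), (x2, y2) = seg
    ix0, ix1 = sorted((int(x1 // cell), int(x2 // cell)))
    iy0, iy1 = sorted((int(y1 // cell), int(y2 // cell)))
    for ix in range(ix0, ix1 + 1):
        for iy in range(iy0, iy1 + 1):
            yield (ix, iy)

def red_blue_intersections(red, blue, cell=1.0):
    """Bucket blue segments into a uniform grid, then probe with red ones.

    Only red-blue pairs sharing a cell are tested, so no red-red or
    blue-blue test is ever performed. If segments are short relative to
    the cell size and not too badly distributed, each segment lands in
    O(1) cells and the expected time is linear in input + output size.
    """
    buckets = defaultdict(list)
    for b in blue:
        for c in grid_cells(b, cell):
            buckets[c].append(b)
    found = set()  # a pair in several shared cells is reported once
    for r in red:
        for c in grid_cells(r, cell):
            for b in buckets[c]:
                if seg_intersect(r[0], r[1], b[0], b[1]):
                    found.add((r, b))
    return found
```

The grid is the whole data structure: no trees, no sweep status, no event queue. The cell size is the one tuning knob; a common heuristic is to match it to the average segment length.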
Not every anomaly leads to interesting theory: the 19th century had several orbital oddities besides Mercury's perihelion precession, and those with classical explanations are forgotten today. Nevertheless, every so often a general relativity results.
As another example, trying to explain the photoelectric effect motivated quantum mechanics (and got Einstein his Nobel).
One place where knowledge of theory could have helped was Edison's observation of the Edison effect: that a current can flow through a vacuum, from a hot filament to a separate plate. Edison, knowing little theory, didn't realize that he was seeing something significant, and ignored it. Later, Lee de Forest built on this effect to invent the triode, the first electronic amplifier.
Might theoreticians in Computational Geometry likewise both help applications by studying simpler algorithms, and help themselves discover deep new theory by studying real applications?