Ha! There weren't any recognized SEOs to sidekick for back then! Nor any books to read on the subject. And frankly, we were all "dummies" to some degree.
Back then it was done exactly the way it should probably still be done today, but rarely is.
We would each set up hundreds of little tests of different things, trying to determine what had a positive effect and what didn't. Then we would congregate at a forum somewhere (usually several) to share not only the results of our tests, but the methodology we used to construct and conduct them. Others would see if they could repeat these tests in different markets --though honestly, in the very early days there were only one or two markets competitive enough to give decently quick and meaningful results-- sometimes tweaking the methodology a bit, or constructing the test differently, to see if we could produce opposing results. The idea was that if the results of one test directly contradicted the results of another, neither could be trusted to be 100% true.
It was sort of an unspoken but understood rule that we would try to come up with results contrary to the ones someone else had gotten. If we got opposite results but our testing methodology was sound, more testing was definitely needed. Conversely, if we tried our best to get contrary results but ended up with the same results, then the first test gained more acceptance.
In other words, it wasn't a very quick process. It would often take months to compile enough evidence --from multiple tests by multiple people in multiple markets-- for a theory to gain acceptance and move the knowledge bar. Which is why we all shared data: the only way to truly prove or disprove anything was to test it as many ways as could be imagined.