Benchmarking

Good benchmarks are like good laws. They lay the foundation for civilized (fair) competition. If we have good benchmarks, why do we need all the overhead of a process for reviewing and monitoring benchmark results? Similarly, you might ask: if we have good laws, why do we need police, lawyers, and judges? The answer to both questions is the same. Laws and benchmarks are not, in and of themselves, enough.

No matter how clear-cut the rules appear when the benchmark specifications are written, there are always gray areas and loopholes left in the benchmark law. There must be a way of addressing and resolving these gray areas and loopholes in a fair manner. Even good laws, if they are not obeyed, do not constitute good government. Therefore, there must be a means for stopping those who would break or bend the rules.



The Need for Benchmarking

In the early 1980s, the industry began a race that has accelerated over time: the automation of daily end-user business transactions. As opposed to the batch-computing model that dominated the industry in the 1960s and 1970s, this new online model of computing had relatively unsophisticated clerks and consumers directly conducting simple update transactions against on-line database systems. The first application to receive widespread attention was the automated teller machine (ATM) transaction, but we've seen this automation trend ripple through almost every area of business, from grocery stores to gas stations. Thus, the on-line transaction processing (OLTP) industry was born.

Given the stakes over who could claim the best OLTP system, the competition among computer vendors was intense. But how to prove who was the best? The answer, of course, was a test: a benchmark.

Beginning in the mid-1980s, computer system and database vendors began to make performance claims based upon the TP1 benchmark, a benchmark originally developed within IBM that then found its way into the public domain. TP1 purported to measure the performance of a system handling ATM transactions in batch mode, without the network or user-interaction components of the system workload. The benchmark was poorly defined, and there was no supervision or control of the benchmarking process. As a result, the TP1 marketing claims, not surprisingly, had little credibility with the press, market researchers (among them Omri Serlin), or users. The situation also deeply frustrated vendors, who felt that their competitors' marketing claims (based upon flawed benchmark implementations) were ruining every vendor's credibility.

Better Benchmarking

In the April 1, 1985 issue of Datamation, Jim Gray, in collaboration with 24 others from academia and industry, published (anonymously) an article titled "A Measure of Transaction Processing Power". This article outlined a test for on-line transaction processing which was given the title "DebitCredit". Unlike the TP1 benchmark, Gray's DebitCredit specified a true system-level benchmark in which the network and user-interaction components were included.
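To make concrete what the database side of a DebitCredit-style transaction looks like, here is a minimal sketch in Python with SQLite: update the account, teller, and branch balances and append a history record, all committed as one transaction. The schema, column names, and the `setup` and `debit_credit` helpers are illustrative assumptions for this example, not text from any benchmark specification, and the real benchmark also drove terminals over a network.

```python
import sqlite3

# Illustrative sketch only: the database portion of a DebitCredit-style
# transaction. Table layout and names are assumptions for this example.

def setup(conn):
    """Create a toy schema with one branch, one teller, and one account."""
    conn.executescript("""
        CREATE TABLE accounts (id INTEGER PRIMARY KEY, branch_id INTEGER, balance INTEGER);
        CREATE TABLE tellers  (id INTEGER PRIMARY KEY, branch_id INTEGER, balance INTEGER);
        CREATE TABLE branches (id INTEGER PRIMARY KEY, balance INTEGER);
        CREATE TABLE history  (account_id INTEGER, teller_id INTEGER,
                               branch_id INTEGER, amount INTEGER);
        INSERT INTO branches VALUES (1, 0);
        INSERT INTO tellers  VALUES (1, 1, 0);
        INSERT INTO accounts VALUES (1, 1, 0);
    """)

def debit_credit(conn, account_id, teller_id, branch_id, delta):
    """Apply one deposit/withdrawal and return the new account balance."""
    cur = conn.cursor()
    cur.execute("UPDATE accounts SET balance = balance + ? WHERE id = ?", (delta, account_id))
    cur.execute("UPDATE tellers  SET balance = balance + ? WHERE id = ?", (delta, teller_id))
    cur.execute("UPDATE branches SET balance = balance + ? WHERE id = ?", (delta, branch_id))
    cur.execute("INSERT INTO history VALUES (?, ?, ?, ?)",
                (account_id, teller_id, branch_id, delta))
    new_balance = cur.execute("SELECT balance FROM accounts WHERE id = ?",
                              (account_id,)).fetchone()[0]
    conn.commit()  # the whole update sequence commits as a single transaction
    return new_balance

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    setup(conn)
    print(debit_credit(conn, account_id=1, teller_id=1, branch_id=1, delta=100))  # -> 100
```

The point of the sketch is only the shape of the work: a handful of single-row updates plus an append, repeated at high volume, which is the kind of simple update workload the article describes clerks and ATMs generating.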

Benchmarking Supervision System

While Gray's DebitCredit ideas were widely praised by industry opinion makers, the DebitCredit benchmark had about the same success in curbing bad benchmarking as Prohibition did in stopping excessive drinking. In fact, the situation only got worse. Without a standards body to supervise testing and publication, vendors began to publish extraordinary marketing claims based on both TP1 and DebitCredit, often deleting key requirements of DebitCredit to improve their performance results. From 1985 through 1988, vendors used TP1 and DebitCredit (or their own interpretations of these benchmarks) to muddy the already murky performance waters.

Omri Serlin had had enough. He spearheaded a campaign to see if this mess could be straightened out. By August 10, 1988, Serlin had convinced eight companies to form the Transaction Processing Performance Council (TPC).

The TPC published its first benchmark, TPC Benchmark A (TPC-A), within one year (November 1989). TPC-A specified that all benchmark testing data must be publicly disclosed in a Full Disclosure Report. The first TPC-A results were announced in July 1990. Four years later, at the peak of its popularity, about 300 TPC-A benchmark results had been published in total.

Technical Advisory Board

As soon as vendors began to publish TPC results, complaints from rival vendors began to surface. Every TPC result had to be accompanied by a Full Disclosure Report (FDR). But what happened when people reviewed an FDR and didn't like what they read? How could a protest be registered, and how would it be adjudicated? Even if a vendor representative were, so to speak, to make a citizen's arrest of a benchmark violator, there was no police force or court system to turn the perpetrator over to for further investigation or, if need be, prosecution. It became apparent to the Council that without an active process for reviewing and challenging benchmark compliance, there was no way the TPC could guarantee the level playing field it had promised the industry.

Throughout 1990 and 1991, the TPC embarked on a political journey to fix this hole in its process. The Technical Advisory Board (TAB) became the arm of the TPC through which the public could challenge published TPC benchmarks. Once the TAB has thoroughly researched and reviewed a challenge, it makes a recommendation to the full Council. If the Council finds that the result is non-compliant in a significant or major way, the result is immediately removed as an official TPC result.

Fair Use Policies

By the spring of 1991, the TPC was clearly a success. Dozens of companies were publishing multiple TPC-A and TPC-B results. Not surprisingly, these companies wanted to capitalize on the TPC's prestige and leverage the investment they had made in TPC benchmarking. Several companies launched aggressive advertising and public relations campaigns built around their TPC results.

In many ways, this was exactly why the TPC was created: to provide objective measures of performance. What was wrong with companies wanting to brag about their good results? What was wrong was that there was often a large gap between the objective benchmark results and the marketing claims built on them.

The TPC had poured an enormous amount of time and energy into creating good benchmarks and a good benchmark review process. However, the TPC had no means to control how those results were used once they were approved. The resulting problems generated intense debates within the TPC. Out of these Council debates emerged the TPC's Fair Use policies, adopted in June 1991.

Have the TPC's Fair Use policies worked? By and large, they have been effective in stopping blatant misuse or misappropriation of the TPC's trademark and good name. At times, the TPC has acted strongly, requiring violators to cease and retract offending claims, or levying fines for major violations.

Auditors

Between September and December 1993, the TPC created a group of TPC-certified auditors who would review and approve every TPC benchmark test and result before it was even submitted to the TPC. The TPC auditing system has been very effective in preventing most of the bad horses from ever leaving the barn.

