Department of Computer Science McGill University Montreal, Canada
Systems Group Department of Computer Science, ETH Zurich Zurich, Switzerland
Replication is a key mechanism to achieve scalability and fault-tolerance in databases. Its importance has recently been further increased because of the role it plays in achieving elasticity at the database layer. In database replication, the biggest challenge lies in the trade-off between performance and consistency. A decade ago, performance could only be achieved through lazy replication at the expense of transactional guarantees. The strong consistency of eager approaches came with a high cost in terms of reduced performance and limited scalability. Postgres-R combined results from distributed systems and databases to develop a replication solution that provided both scalability and strong consistency. The use of group communication primitives with strong ordering and delivery guarantees together with optimized transaction handling (tailored locking, transferring logs instead of re-executing updates, keeping the message overhead per transaction constant) were a drastic departure from the state-of-the-art at the time. Ten years later, these techniques are widely used in a variety of contexts but particularly in cloud computing scenarios. In this paper we review the original motivation for Postgres-R and discuss how the ideas behind the design have evolved over the years.
The importance of replication has recently increased due to the role it plays in achieving elasticity when combined with virtualization in cloud computing environments (the most recent examples being the architecture of SQL Azure or proposals to support multi-tenancy in databases). In this paper we review the original results of the VLDB 2000 paper that presented Postgres-R. Postgres-R was the first database replication system that was both scalable and provided strong consistency guarantees. It fulfilled these two design goals by taking a completely different approach to implementing replication than the commercial systems and research proposals available at the time.

The main contributions of Postgres-R covered a wide range of aspects but complemented each other very well: dealing with consistency outside the database; considering alternative correctness criteria that reduced overhead but still guaranteed consistency; and showing how to make these new ideas work within a database engine by providing a full implementation. These innovations were possible only by stepping out of the constrained thinking of databases and distributed systems at the time. Postgres-R emerged from the combination of results from both areas, and the ideas and concepts from Postgres-R that have survived the test of time are mainly those that resulted from such synergy.

The most important innovation in Postgres-R was the use of ideas from distributed computing (group communication) to solve the problem of coordinating updates at several copies while maintaining overall consistency. Nowadays, when protocols like Paxos are a core component of many distributed systems, the idea may seem obvious. At the time, it was quite a counterintuitive design choice in both the database and the distributed systems communities. On the one hand, group communication had been developed almost exclusively for fault-tolerance purposes, whereas Postgres-R used it mainly for performance reasons (as a way to reduce the cost of achieving consistency).
On the other hand, understanding how group communication interacted with transaction management in a database required moving away from the fully synchronous model of replicated databases (based on distributed locking and two-phase commit) and adopting a less tightly coupled approach based on ordering guarantees. Today, group communication (or some form of agreement protocol) is widely...
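The core idea described above can be illustrated with a minimal sketch (this is an illustration of the general technique, not Postgres-R's actual implementation): a transaction executes locally at one replica, only its writeset is broadcast through a group communication layer with total-order delivery, and every replica applies delivered writesets in that same global order. Determinism of application then makes all copies converge without distributed locking or two-phase commit. The class and method names below are invented for the sketch; a toy in-process sequencer stands in for a real group communication system.

```python
class TotalOrderBroadcast:
    """Toy stand-in for a group communication layer: a single global
    log guarantees every replica sees messages in the same order."""
    def __init__(self):
        self.log = []          # globally ordered writesets
        self.replicas = []

    def broadcast(self, writeset):
        self.log.append(writeset)        # assign the next global position
        for r in self.replicas:
            r.deliver(writeset)          # deliver to all, in total order

class Replica:
    def __init__(self, tob):
        self.db = {}                     # key -> value, this copy's state
        self.tob = tob
        tob.replicas.append(self)

    def execute_update(self, writeset):
        # Execute locally, then ship only the writeset (the "log"),
        # instead of re-executing the whole transaction at every copy.
        self.tob.broadcast(writeset)

    def deliver(self, writeset):
        # Apply in delivery order; deterministic application at every
        # replica is what makes the copies converge.
        for key, value in writeset.items():
            self.db[key] = value

tob = TotalOrderBroadcast()
r1, r2, r3 = Replica(tob), Replica(tob), Replica(tob)
r1.execute_update({"x": 1})
r2.execute_update({"x": 2, "y": 7})  # ordered after r1's update, so it wins
assert r1.db == r2.db == r3.db == {"x": 2, "y": 7}
```

Note that the sketch also fixes one message (the writeset broadcast) per transaction, mirroring the constant per-transaction message overhead mentioned in the abstract; a full protocol would additionally certify conflicting transactions at delivery time.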