
Interview: Peter Alvaro, UC Berkeley, on Consistency Challenge in Distributed Systems


We discuss the performance limitations of treating the datastore as a black box, consistency as an application-level property, the Dedalus language, and the LDFI approach to testing.



Peter Alvaro is a doctoral candidate at the University of California, Berkeley, where he is working with Joe Hellerstein and the BOOM team. His research interests lie at the intersection of databases, distributed systems and programming languages: in particular, he wants to know how the lessons from the first may be incorporated into the third, and to what degree this mitigates the difficulties of the second. To this end, his team is designing a collection of declarative languages for specifying and implementing complex distributed systems in a data-centric manner. Before joining the PhD program, Peter worked at Ask.com.

Recently, Peter delivered a keynote at RICON 2014 on Outwards from the Middle of the Maze: Using Lineage to find Bugs at Scale (Video, Slides).

Here is my interview with him:

Anmol Rajpurohit: Q1. As per the transaction concept, when application programmers write code to interact with data stores, should they treat the data store as a black box with application-level guarantees, or should they have insight into the data store's internals to develop more effective and robust code?

Peter Alvaro: In an ideal world, yes, ACID transactions should allow the programmer to treat the datastore as a black box.  Constraining the programmer to interact with the underlying data in an abstract, declarative fashion has a variety of benefits, one of the most important of which is *data independence* -- the application code, which is likely to have a much longer lifetime than physical storage layout or even data store implementations, is decoupled and protected from low-level details and changes.  In such an environment the programmer need not worry about making code robust to failure.  Transactions expose a universal and relatively simple failure mode: some transactions may (atomically) abort.  The programmer need not worry about efficiency, since (in principle) an optimizer can adapt the query plan to what is known about the data distribution and environment.
Now, as I indicated in my talk, at large scale this facade begins to weaken.  I relied heavily on optimizer hints when I ran ad-hoc queries against our data warehouse at Ask.com.  I often “chopped” massive updates or deletions into multiple sub-transactions to improve performance, but this violated the atomicity of the updates.  Any weakening of the transactional facade -- whether by applying tricks like these, or running at a lower isolation level, or by abandoning a relational database in favor of a NoSQL store with weak or nonexistent guarantees -- means that programmers are *forced* to peek inside the black box, so that they can anticipate and guard against system-specific semantics (e.g., concurrency and recovery in relational databases, asynchrony and partial failure in distributed NoSQL stores).
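For concreteness, here is a minimal Python sketch of that chopping trick (not from the interview), assuming a DB-API connection and a hypothetical `events` table; the `DELETE ... LIMIT` form is MySQL-style. Each batch commits on its own, which is exactly how atomicity is lost:

```python
# A sketch of "chopping" a big deletion into sub-transactions.
# Assumes a Python DB-API connection and a hypothetical `events` table.
def chopped_delete(conn, cutoff, batch_size=10000):
    """Delete old rows in small batches to keep transactions and locks short."""
    while True:
        cur = conn.cursor()
        cur.execute(
            "DELETE FROM events WHERE ts < %s LIMIT %s",
            (cutoff, batch_size),
        )
        conn.commit()  # each batch is atomic; the deletion as a whole is not --
                       # a concurrent reader can observe a partially deleted state
        if cur.rowcount < batch_size:
            break  # short final batch: nothing left to delete
```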

I have mostly talked about what programmers are forced to do in practice.  As for what programmers “should” do, I believe we don’t have the right tools yet to give a satisfactory answer to that question.  I’ll discuss some of my thoughts on this later in the interview.

AR: Q2. It is generally believed that Consistency is a required property of database systems (along with Atomicity and Durability, as defined for the Transaction concept), whereas Correctness is something that an application programmer needs to ensure. However, you insist that "Consistency is an application-level property". Why so? And why should we manage consistency across multiple layers: object-level, flow-level and language-level?

PA: The ACID acronym, while a fine mnemonic, is not rigorously defined -- the “C” is perhaps the most ambiguous term. The usual definition is the following: a database is consistent if none of its constraints are violated. We then assume that each transaction, if run in isolation, takes the database from one consistent state to another (otherwise the constraint violations would cause it to abort).  Serializable execution assures that interleaved execution of multiple transactions produces the same effects as a serial execution.  It is often possible to directly encode application-level correctness properties as database constraints.  Even if we cannot do so directly, the transaction abstraction goes a long way towards bridging application- and storage-level guarantees.
Consider my example from the talk: an application programmer wishes to ensure that a particular bank account balance is non-negative.  This is an application-specific property, but it can be expressed via a check constraint.  Then serializable transactions *automatically* guarantee that the application-level property is upheld despite failure and non-determinism in the schedule.  Even if the database cannot single-handedly uphold the correctness property (perhaps due to expressivity issues in the constraint system), it nevertheless makes it trivial for the application programmer to do so: they need merely ensure that their transaction, if run in isolation, leaves the database in a consistent state (one in which the application-level correctness properties hold), and abort otherwise.
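To make that concrete, here is a small, runnable illustration using Python's built-in sqlite3 (the schema and amounts are hypothetical): the invariant lives in a CHECK constraint, and a transaction that would violate it simply aborts.

```python
# The application-level invariant (non-negative balance) is encoded as a
# CHECK constraint, so a violating transaction aborts automatically.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE accounts (
        id      INTEGER PRIMARY KEY,
        balance INTEGER NOT NULL CHECK (balance >= 0)
    )
""")
conn.execute("INSERT INTO accounts VALUES (1, 100)")
conn.commit()

try:
    with conn:  # transaction: commits on success, rolls back on exception
        conn.execute("UPDATE accounts SET balance = balance - 150 WHERE id = 1")
except sqlite3.IntegrityError:
    print("overdraft rejected; consistent state preserved")
```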

In the absence of transactions, by contrast, the programmer must manually ensure that the invariant is upheld by anticipating and remediating the various failures and schedules that can occur.  This not only adds complexity to the application programmer's task, but also requires them to reason about the failure modes of the *particular* data store, which may change over time.
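By way of contrast, here is a hedged sketch of the manual version, with `kv` standing in for a hypothetical versioned key-value client (not any particular store's API). The programmer now owns the read-check-write cycle, the invariant check, and the retry logic:

```python
# Without transactions, the non-negative-balance invariant is enforced by
# hand with an optimistic compare-and-swap loop. `kv` is hypothetical.
def withdraw(kv, account, amount, retries=5):
    for _ in range(retries):
        balance, version = kv.get(account)   # current value plus its version
        if balance < amount:
            raise ValueError("insufficient funds")
        # Conditional put: succeeds only if no one wrote a newer version
        # in the meantime; on a lost race we must re-read and re-check.
        if kv.put(account, balance - amount, expected_version=version):
            return balance - amount
    raise RuntimeError("too much contention; giving up")
```

And this sketch is optimistic: it says nothing about what happens if the client crashes mid-loop, or if the store's conditional put has weaker semantics than assumed here.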

I am not arguing that application programmers should reason about consistency simultaneously at the levels of objects, flow and language!  Rather, I argue that each of these various levels represents an improvement over the current state of the art -- storage-level weak consistency models -- by reducing the impedance mismatch between application-level correctness properties and system semantics.  Enforcing consistency at a particular level requires navigating other tradeoffs: for example, language-level consistency is closest to the application, but requires systems to be rewritten in new languages; object-level consistency provides an incremental path to adoption, but composition remains a challenge (see my answer to question #6). Flow-level consistency -- which enables us to combine object- and language-level approaches -- presents an interesting middle ground.  I discuss these various levels in detail in our SOCC vision paper (http://www.cs.berkeley.edu/~palvaro/a23-alvaro.pdf).

AR: Q3. What inspired you to work on Dedalus, a language for programming and reasoning about distributed systems? What makes Dedalus so different from Overlog and Datalog?

PA: I have been interested in using query languages to do general-purpose programming for some time.  When I joined the BOOM team at UC Berkeley, the Declarative Networking project begun by Boon Thau Loo (now at Penn) and carried on by senior graduate student Tyson Condie (now at UCLA) had already demonstrated that NDLog and Overlog (Datalog-based languages enhanced with communication primitives) could concisely express networking protocols.  My colleagues and I then set out to implement a large-scale distributed application -- the Hadoop/HDFS stack -- in Overlog.

The project validated our hypothesis that data-centric languages were a good fit for implementing data-intensive systems (http://db.cs.berkeley.edu/papers/eurosys10-boom.pdf), but Overlog presented significant semantic challenges when we began using it to implement complex protocols, including atomic commit and consensus (http://netdb09.cis.upenn.edu/netdb09papers/netdb09-final17.pdf).  Overlog (like Datalog) is a set-oriented language: rules express relationships among records in the “database,” but lack the ability to express fine-grained relationships *between states* in time -- relationships such as mutation, atomicity, sequentiality and mutual exclusion.  Worse still, Overlog’s semantics failed to account for uncertainty in distributed executions arising from nondeterministic message ordering, delay and component failure.

What was missing, it seemed to me, was a notion of time.  Dedalus extends Datalog with a notion of process-local logical time, and with a minimal set of temporal constructs that allow programmers to precisely express (when necessary) distributed systems details such as time-varying state and uncertain communication (http://www.eecs.berkeley.edu/Pubs/TechRpts/2009/EECS-2009-173.html).  Dedalus allows us to associate distributed executions with a clean, model-theoretic semantics that simplifies formal analysis (LDFI is an example).
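To give a flavor of the idea, here is an illustrative Python toy (not Dedalus itself): every fact carries a process-local logical timestamp, and an inductive rule in the spirit of `counter(N+1)@next :- counter(N)` derives state at the *next* timestep, so mutation becomes explicit in time.

```python
# Facts are (relation, value, logical time) triples; the single inductive
# rule carries mutated state from each timestep to the next.
facts = {("counter", 0, 0)}

def step(facts, t):
    """Apply the inductive rule to all facts visible at logical time t."""
    derived = {
        ("counter", n + 1, t + 1)
        for rel, n, time in facts
        if rel == "counter" and time == t
    }
    return facts | derived

for t in range(3):
    facts = step(facts, t)
print(sorted(facts))  # the counter's history, one value per logical timestep
```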

AR: Q4. What are the current challenges of top-down testing approaches? How does Lineage-Driven Fault Injection (LDFI) help improve the testing?

PA: In my RICON talk, I contrasted “bottom-up verification” approaches (e.g., using a model checker to verify individual components such as protocols) with “top-down testing” approaches (e.g., using a combination of integration testing and fault injection, as systems like Chaos Monkey, and the other monkeys, at Netflix do).  In practice, the latter approach seems far more common, for a variety of reasons that I discuss in the talk.  Top-down approaches can be very effective in finding bugs in complex, large-scale systems that might be intractable to verify formally.  Unfortunately, while such approaches are trivially sound (they only report “true” bugs, since they actually *run* the system rather than reason about its executions in the abstract), they cannot guarantee completeness (finding *all* fault-tolerance violations, or, most desirably, certifying that no violations exist).
LDFI is a top-down approach that provides a valuable completeness guarantee: if some combination of faults can prevent a known good outcome, LDFI identifies that collection of faults and produces a representative execution in which the violation occurs.  It does so by reasoning formally about the lineage (the “why”) of correct outcomes, and using it to inject only those failures that it can prove might have prevented a known good outcome.
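Here is a toy sketch of that reasoning, with hypothetical message names; the real LDFI prototype extracts lineage from actual executions and solves for fault hypotheses far more cleverly than this brute-force enumeration.

```python
# Lineage of a good outcome as alternative derivations: the outcome holds
# if EITHER redundant derivation survives, and each derivation needs all
# of its messages delivered. A fault hypothesis worth injecting is a set
# of dropped messages that breaks *every* derivation.
from itertools import combinations

derivations = [
    {"client_to_replicaA", "ackA"},
    {"client_to_replicaB", "ackB"},
]

def fault_hypotheses(derivations, max_faults=2):
    """Yield sets of dropped messages that intersect every derivation."""
    events = sorted(set().union(*derivations))
    for k in range(1, max_faults + 1):
        for faults in combinations(events, k):
            if all(set(faults) & d for d in derivations):
                yield set(faults)  # candidate faults to inject in a replay

for hypothesis in fault_hypotheses(derivations):
    print("inject:", hypothesis)
```

Note that no single dropped message breaks this outcome; only combined faults do, which is the redundancy the lineage makes visible. If a replay under a hypothesis still produces the good outcome, the new execution reveals additional lineage and the loop continues.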

The second and last part of the interview is here.
