DataStax DSE, Spark, and HA

DataStax DSE integrates Spark in such a way that it is highly available, meaning (for example) that if the Spark Master goes down, jobs can continue. However, you need to use the right options to take advantage of this.

DSE in an analytics DC keeps information in memory about the current Spark Master. If you’re running spark-submit on a node in the analytics DC, it will connect and determine the current Master. If you want to submit from a remote host, you need to export the cluster configuration and import it on the remote host. Details are at https://docs.datastax.com/en/dse/5.1/dse-admin/datastax_enterprise/spark/sparkRemoteCommands.html .
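For illustration only, the export/import flow might look roughly like the sketch below. The exact client-tool invocations are the ones documented at the link above, so treat the commands here as an assumption to verify against that page; the file name is just a placeholder.

    # On a node in the analytics DC: export the cluster connection settings
    dse client-tool configuration export dse-config.jar

    # Copy the exported file to the remote host
    scp dse-config.jar user@remote-host:

    # On the remote host: import it before running dse spark-submit
    dse client-tool configuration import dse-config.jar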

For HA, don’t specify --master. DSE will determine the current Master for you when you submit. (That way, if the Master has failed over, the job still finds the current one; if you specify --master explicitly, it won’t.)

And, in case the Master goes down while the job is running, specify --supervise. This tells Spark to supervise the driver and restart it if it fails in flight.

There is no cross-DC HA. Normally, there is a dedicated Analytics (Spark) DC on which jobs are run. You could have two of these, but jobs would have to be pointed at the second DC if the first failed; we don’t have a way to do this automatically. From the Spark perspective, an analytics DC is a separate cluster, and the Master is not DC-aware.

Also, as an aside while we’re at it, you probably want to specify --deploy-mode cluster. This runs the driver on the cluster rather than on the host you submit from (client mode, which is the default).
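Putting those options together, a submission might look something like the sketch below; the class and jar names are placeholders, and the rest are standard dse spark-submit / Spark flags.

    # No --master flag: DSE resolves the current Spark Master itself.
    # --deploy-mode cluster: run the driver on the cluster, not on the submitting host.
    # --supervise: restart the driver if it fails while the job is running.
    # (com.example.MyAnalyticsJob and my-analytics-job.jar are placeholder names.)
    dse spark-submit \
        --deploy-mode cluster \
        --supervise \
        --class com.example.MyAnalyticsJob \
        my-analytics-job.jar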

Docs are a bit limited on this, from what I can see, so hopefully this is helpful!

Bootstrapping and consistent range movement in Cassandra

If you’re adding multiple nodes to a cluster, DataStax docs tell you to “Make sure you start each node with consistent.rangemovement property turned off”.   What is “consistent range movement”, why does Cassandra have it, and why should it be turned off?

Consistent range movement means that, when a node bootstraps, it must stream its data from the replica it is taking each token range over from, not from any other replica that happens to have it.  Why would we normally want this?  Because without it, consistency guarantees can be broken.

Take a worst-case scenario: nodes (a, b, c) have data for a token, and you add 3 new nodes that take that token range over. If you wrote at QUORUM, and nodes (b, c) have the data but (a) does not, the 3 new nodes could theoretically all stream from (a), and you’ve now lost the data. This is, of course, a worst-case scenario and fairly unlikely, but if you write at CL ONE or ANY, for example, similar things can happen much more easily.  A less extreme version: 2 out of 3 nodes originally had the data, say (a, b), so reads at QUORUM would see the update, but during the range movement 2 of the new nodes stream from (c), so now only one node has the update.  QUORUM reads could then miss it.  (In this case, a repair will fix the issue.)

So, consistent range movement says to stream each token range from the node you’re taking it over from.  In that case, if we added 3 new nodes and they replaced (a, b, c), they’d have to stream from (a), (b), and (c) respectively, preserving the data’s replication.

Why disable it if you add multiple nodes at once, then?  To avoid failed bootstraps due to timeouts.  If you add three nodes, and one of them takes a token range over from (a), but the next one is assigned to take that range over from the first new node, it will try to stream from the (still bootstrapping) new node, find it unavailable, and may error out and fail to bootstrap.

In summary, disabling consistent range movement allows bootstrapping nodes to get their data from any available node, so they won’t fail if they are trying to take over data from an unavailable (possibly bootstrapping) node.
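The switch itself is the cassandra.consistent.rangemovement JVM system property, which defaults to true. As a sketch (adjust to however you start your nodes), you can pass it on the command line or set it in cassandra-env.sh for the duration of the bootstrap:

    # Option 1: pass the property directly when starting the node
    cassandra -Dcassandra.consistent.rangemovement=false

    # Option 2: add this line to cassandra-env.sh before starting the node,
    # and remove it again once the bootstrap has completed
    JVM_OPTS="$JVM_OPTS -Dcassandra.consistent.rangemovement=false"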

The safest way to add new nodes is to bootstrap one (with consistent range movement enabled, the default), allow the bootstrap to fully complete, including all streaming, then bootstrap the next, and so on. This takes longer and requires a bit more attention, but it is safer. In practice, you might want to save the time: bootstrap with the two-minute pause between nodes, then repair, possibly with a full (cross-DC) repair to resolve any data issues that have arisen.  (And why the two-minute-pause-between-nodes rule?  To allow each node to start up, feed gossip, and negotiate and announce which tokens it is now responsible for, avoiding race conditions.)
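As a rough sketch of the one-at-a-time approach (the keyspace name is a placeholder; the commands are standard nodetool):

    # After starting the new node with consistent range movement at its default (true),
    # wait for the bootstrap to fully complete before starting the next node:
    nodetool netstats    # shows active streams; wait until nothing is streaming
    nodetool status      # the new node should report UN (Up/Normal)

    # If you instead added several nodes quickly with the property disabled,
    # follow up with a repair afterwards, for example:
    nodetool repair -pr my_keyspace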

Blocking and Non-Blocking Read Repair in Apache Cassandra

When are read repairs blocking or non-blocking in Cassandra?  There’s a lot of confusion and misinformation about this, even in the best sources.  It’s actually pretty simple.

Read repairs due to read_repair_chance are non-blocking.  They are done in the background after results are returned to the client.
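For reference, this probabilistic background repair is configured per table via the read_repair_chance and dclocal_read_repair_chance options (present in the Cassandra/DSE versions this post covers; they were removed in Cassandra 4.0). The keyspace and table below are placeholders:

    cqlsh -e "ALTER TABLE my_keyspace.my_table
              WITH read_repair_chance = 0.1
              AND dclocal_read_repair_chance = 0.1;"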

On any consistency level that involves more than one node (i.e., all except ANY and ONE), if the read digests don’t match up, read repair is done in a blocking fashion before returning results.  (This means that if the repair doesn’t complete in time, the read request can fail.)

Read repairs caused by digest mismatches are blocking in order to ensure monotonic quorum reads.  (See https://issues.apache.org/jira/browse/CASSANDRA-10726 .)  However, this blocking behavior does not apply only when CL=QUORUM or ALL, as some believe: it applies at every consistency level except ANY and ONE, which don’t compare any digests.

One other detail: background read repair operates on all replicas of the data, while blocking read repair only acts on the nodes that were contacted to satisfy the consistency level (i.e., the nodes where the digest mismatches were found, not nodes that weren’t queried).
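If you want to observe this yourself, turn on tracing for a read; when a digest mismatch occurs among the contacted replicas, the trace records it along with the resulting repair before the rows are returned. The keyspace, table, and query below are placeholders:

    cqlsh -e "
      CONSISTENCY QUORUM;
      TRACING ON;
      SELECT * FROM my_keyspace.my_table WHERE id = 42;
    "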

The relevant code is in src/java/org/apache/cassandra/service/StorageProxy.java.