I am running a Scala application that uses the DataStax Java driver to write rows into multiple AWS Keyspaces tables. For one table specifically, however, INSERT and UPDATE operations fail occasionally. My table DDL looks like this:
CREATE TABLE my_keyspace.my_table (
field1 TEXT,
field2 TEXT,
timestamp_field1 TIMESTAMP,
field3 TEXT,
field4 TEXT,
field5 TEXT,
field6 TEXT,
field7 DOUBLE,
field8 DOUBLE,
timestamp_field2 TIMESTAMP,
list_field1 frozen<list<frozen<map<TEXT, TEXT>>>>,
timestamp_field3 TIMESTAMP,
field9 TEXT,
PRIMARY KEY ((field1), timestamp_field1, field2, field3)
) WITH CLUSTERING ORDER BY (timestamp_field1 DESC, field2 DESC, field3 DESC)
The error looks like this:
com.datastax.oss.driver.api.core.NoNodeAvailableException: No node was available to execute the query
at com.datastax.oss.driver.api.core.NoNodeAvailableException.copy(NoNodeAvailableException.java:40)
The rate of written rows is not very high, but there does seem to be a correlation with small increases. In this case the write fails at minute 7, and the table below shows that the number of writes rises in minutes 6 and 7. I am not sure whether a couple of hundred writes per minute should be a problem:
minute, count
0 79
1 170
2 154
3 98
4 105
5 161
6 252
7 264 <- In this time interval the error occurs
8 160
9 143
I have tried strengthening the write retry mechanism by adding an exponential delay, but the error persists. I have also seen that I can tune the connection pool parameters, but I am not sure how that would affect the app's performance, or how to test whether it would make a positive difference. Has anyone faced the same problem?
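For reference, this is roughly the kind of driver tuning I have been looking at in `application.conf`. The values here are illustrative only, not recommendations; the setting paths are from the driver 4.x reference configuration:

```hocon
# Sketch of driver 4.x settings I am considering (illustrative values).
datastax-java-driver {
  basic.request.timeout = 5 seconds
  advanced.connection {
    # Keep idle connections alive between writes
    heartbeat.interval = 30 seconds
    # Number of connections per endpoint
    pool.local.size = 3
  }
  advanced.retry-policy {
    class = DefaultRetryPolicy
  }
}
```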
asked Feb 6 at 23:37 by Ivan Feliciano Avelino, edited Feb 7 at 1:16 by Erick Ramirez

1 Answer
When a node is unresponsive or goes offline for whatever reason, the Java driver marks it as "down".
In situations where the whole cluster is busy, or the client cannot reach any node because of a network partition, the driver runs out of nodes to contact because all of them have been marked as down. At that point the driver throws a NoNodeAvailableException, because there are no nodes left to try.
There is a good chance that your application is exceeding the capacity of your cluster, so I would check your Keyspaces instance metrics for clues. Metrics worth looking at include (1) the connection request rate, (2) throughput capacity, and (3) write capacity for the problematic table. Verify that none of the limits on the Keyspaces instance are being exceeded. Cheers!
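If the spikes turn out to be transient, an application-side retry with capped exponential backoff and jitter around the write can also help smooth them out. A minimal Scala sketch under that assumption (the `session.execute` usage at the bottom is a hypothetical placeholder; delay values and attempt counts are illustrative):

```scala
import scala.util.{Failure, Random, Success, Try}

// Minimal sketch: generic retry with capped exponential backoff and full jitter.
// Delay values and attempt counts are illustrative, not recommendations.
object BackoffRetry {
  // Random delay in [0, min(cap, base * 2^attempt)) milliseconds
  def backoffMillis(attempt: Int, baseMillis: Long = 100L, capMillis: Long = 5000L): Long = {
    val exp = math.min(capMillis, baseMillis * (1L << math.min(attempt, 20)))
    (Random.nextDouble() * exp).toLong
  }

  @annotation.tailrec
  def retry[T](maxAttempts: Int, attempt: Int = 0)(op: () => T): T =
    Try(op()) match {
      case Success(v) => v
      case Failure(_) if attempt + 1 < maxAttempts =>
        Thread.sleep(backoffMillis(attempt))
        retry(maxAttempts, attempt + 1)(op)
      case Failure(e) => throw e
    }
}

// Hypothetical usage around a driver write (session would be your CqlSession):
// BackoffRetry.retry(5)(() => session.execute(boundInsertStatement))
```

Note that the driver's own retry policy does not retry on NoNodeAvailableException (the query never reaches a node), which is why a wrapper like this at the application level can still be useful.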