We develop a backend system using AWS ElastiCache and Lettuce 5.1.6.RELEASE. The ElastiCache cluster configuration is 1 shard with 3 nodes.
We also use connection pooling via GenericObjectPool. We came across a situation where neither Lettuce nor the connection pool
took into account that one of the nodes had become unavailable.
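For context, here is a minimal sketch of the kind of pool setup we use. It relies on Lettuce's `ConnectionPoolSupport` bridge to commons-pool2; the endpoint host and pool sizes are placeholders, not our real configuration:

```java
import io.lettuce.core.RedisURI;
import io.lettuce.core.cluster.RedisClusterClient;
import io.lettuce.core.cluster.api.StatefulRedisClusterConnection;
import io.lettuce.core.support.ConnectionPoolSupport;
import org.apache.commons.pool2.impl.GenericObjectPool;
import org.apache.commons.pool2.impl.GenericObjectPoolConfig;

public class PoolSetup {
    public static void main(String[] args) {
        // Hypothetical ElastiCache configuration endpoint
        RedisClusterClient client = RedisClusterClient.create(
                RedisURI.create("my-cluster.example.clustercfg.cache.amazonaws.com", 6379));

        GenericObjectPoolConfig poolConfig = new GenericObjectPoolConfig();
        poolConfig.setMaxTotal(8);
        // Validate connections on borrow so closed connections are evicted
        // from the pool instead of being handed to callers
        poolConfig.setTestOnBorrow(true);

        GenericObjectPool<StatefulRedisClusterConnection<String, String>> pool =
                ConnectionPoolSupport.createGenericObjectPool(client::connect, poolConfig);

        try (StatefulRedisClusterConnection<String, String> connection = pool.borrowObject()) {
            connection.sync().set("key", "value");
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
```

Note that `testOnBorrow` only catches connections Lettuce itself considers closed; it does not detect a node that accepts packets silently dropped by iptables, which is exactly the failure mode below.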
Test case 1: On the host running our service, one of the ElastiCache nodes was isolated using iptables. As expected, a lot of RedisCommandTimeoutExceptions occurred. What was not expected is that Lettuce continued opening connections to the unavailable node, and no TopologyChangeEvent was received.
Test case 2: One Redis node was deleted via the AWS console. TopologyChangeEvents were received, Lettuce didn't throw any exceptions, and everything worked fine.
It looks like Lettuce uses the cluster view as seen from the cluster side, not from the client side.
So the question is: what is the Lettuce best practice for mitigating the situation described in test case 1?
Or does anybody have experience mitigating such situations? Any suggestions are welcome!
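One mitigation we are considering is Lettuce's adaptive and periodic topology refresh (`ClusterTopologyRefreshOptions`), which makes the client re-evaluate the topology based on client-side signals such as persistent reconnect attempts, rather than waiting for a server-side topology change. A sketch of what that configuration could look like, assuming a placeholder endpoint and illustrative timeouts:

```java
import java.time.Duration;

import io.lettuce.core.cluster.ClusterClientOptions;
import io.lettuce.core.cluster.ClusterTopologyRefreshOptions;
import io.lettuce.core.cluster.RedisClusterClient;

public class TopologyRefreshSetup {
    public static void main(String[] args) {
        ClusterTopologyRefreshOptions refreshOptions = ClusterTopologyRefreshOptions.builder()
                // Re-fetch CLUSTER NODES on a timer so the client-side view
                // eventually converges even without server-side events
                .enablePeriodicRefresh(Duration.ofSeconds(30))
                // React to client-side signals: persistent reconnects,
                // MOVED/ASK redirects, and UNKNOWN_NODE situations
                .enableAllAdaptiveRefreshTriggers()
                // Rate-limit adaptive refreshes triggered by those signals
                .adaptiveRefreshTriggersTimeout(Duration.ofSeconds(30))
                .build();

        // Hypothetical endpoint
        RedisClusterClient client = RedisClusterClient.create("redis://my-cluster.example:6379");
        client.setOptions(ClusterClientOptions.builder()
                .topologyRefreshOptions(refreshOptions)
                .build());
    }
}
```

The `PERSISTENT_RECONNECTS` adaptive trigger in particular looks relevant to test case 1: when connections to a node repeatedly fail, it should prompt a topology refresh from the client's perspective instead of relying on the cluster's own view. Whether this actually routes traffic away from an iptables-isolated node is something we have not yet verified.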