Spring / Spring Retry Interview Questions
Spring Retry is a framework module that provides declarative and programmatic support for retrying failed operations in Spring-based applications. It is used to automatically re-execute a block of code when a transient failure occurs — such as a network timeout, a temporary database connection drop, or an intermittent third-party API error — without requiring the developer to write manual retry loops.
At its core, Spring Retry intercepts method calls (via AOP when using annotations) and applies a configurable retry policy that defines how many times to retry, how long to wait between attempts, and which exceptions should trigger a retry versus which should be skipped entirely.
The primary motivations for using Spring Retry are:
- Resilience: Transient errors are common in distributed systems. Retry logic prevents a momentary failure from causing a full operation failure.
- Clean code: Instead of embedding try-catch-loop boilerplate, developers declare retry behavior through annotations or simple API calls.
- Configurability: Retry policies, backoff strategies, and recovery handlers can be tuned per method without changing business logic.
Spring Retry integrates naturally with Spring Boot and Spring Batch. It is commonly used when calling REST endpoints, message brokers, or databases that can occasionally be temporarily unavailable.
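To see what the declarative approach saves you, here is a sketch of the hand-rolled loop it replaces; callFlaky and its fail-twice-then-succeed behavior are invented for illustration:

```java
public class ManualRetryDemo {

    int attempts = 0;

    // Simulated flaky operation: fails twice, then succeeds (invented for illustration)
    String callFlaky() {
        attempts++;
        if (attempts < 3) {
            throw new IllegalStateException("transient failure");
        }
        return "OK";
    }

    // The boilerplate that @Retryable removes from business code
    String fetchWithRetry(int maxAttempts) {
        for (int i = 1; ; i++) {
            try {
                return callFlaky();
            } catch (IllegalStateException e) { // only "transient" errors are retried
                if (i == maxAttempts) {
                    throw e; // attempts exhausted — propagate the last failure
                }
            }
        }
    }

    public static void main(String[] args) {
        ManualRetryDemo demo = new ManualRetryDemo();
        System.out.println(demo.fetchWithRetry(3)); // prints OK on the third attempt
    }
}
```

With Spring Retry, the loop, the attempt counter, and the exception filtering all move into the framework, leaving only the business call in the method body.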
To add Spring Retry to a Spring Boot project, you include the spring-retry dependency in your build file and enable retry support with an annotation on your configuration class.
Maven dependency:
```xml
<dependency>
    <groupId>org.springframework.retry</groupId>
    <artifactId>spring-retry</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-aspects</artifactId>
</dependency>
```

The spring-aspects dependency is required because Spring Retry uses AOP proxies under the hood when annotation-based retrying is enabled.
Enable retry in configuration:
```java
@Configuration
@EnableRetry
public class AppConfig {
}
```

Once @EnableRetry is placed on a configuration class, Spring scans for @Retryable and @Recover annotations in your beans and wraps them automatically. Without this annotation, the retry annotations are silently ignored.
In Spring Boot, it is common to place @EnableRetry directly on the main application class alongside @SpringBootApplication for simplicity.
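For example, a minimal Spring Boot entry point (class and package names are illustrative):

```java
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.retry.annotation.EnableRetry;

@SpringBootApplication
@EnableRetry // enables @Retryable/@Recover processing application-wide
public class DemoApplication {
    public static void main(String[] args) {
        SpringApplication.run(DemoApplication.class, args);
    }
}
```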
The @Retryable annotation is placed on a Spring-managed bean method to declare that it should be retried automatically when it throws a specified exception. Spring Retry wraps the method in an AOP proxy and applies the configured retry policy transparently to the caller.
Key attributes of @Retryable:
| Attribute | Default | Description |
|---|---|---|
| value / retryFor | {} (empty = all exceptions) | Exception types that trigger a retry |
| noRetryFor | – | Exception types that must NOT trigger a retry |
| maxAttempts | 3 | Total number of attempts including the first try |
| backoff | @Backoff() (1s fixed delay) | A nested @Backoff for delay between retries |
| listeners | – | Names of RetryListener beans to notify |
| recover | – | Name of a specific @Recover method to use |
Example:
```java
@Retryable(
    retryFor = { HttpServerErrorException.class },
    maxAttempts = 4,
    backoff = @Backoff(delay = 1000, multiplier = 2)
)
public String callExternalService() {
    return restTemplate.getForObject(url, String.class);
}
```

This makes up to 4 total attempts (the initial call plus 3 retries) on HttpServerErrorException, with an exponential backoff starting at 1 second.
The @Recover annotation marks a fallback method that Spring Retry calls when all retry attempts for a @Retryable method have been exhausted without success. It provides a graceful degradation path instead of allowing the exception to propagate to the caller.
Rules for a valid @Recover method:
- It must be in the same Spring bean as the @Retryable method.
- Its return type must match the @Retryable method's return type.
- Its first parameter must be the exception type being recovered from.
- It can optionally accept the same additional parameters as the @Retryable method.
Example:
```java
@Retryable(retryFor = RuntimeException.class, maxAttempts = 3)
public String fetchData(String id) {
    return externalService.get(id);
}

@Recover
public String recoverFetchData(RuntimeException ex, String id) {
    log.error("All retries failed for id: {}", id, ex);
    return "default-value";
}
```

If fetchData fails all 3 attempts, Spring Retry automatically routes execution to recoverFetchData. The exception is passed as the first argument so the recovery method can log or react to the specific failure. If no matching @Recover method is found, the final exception is re-thrown.
The @Backoff annotation is a nested annotation used within @Retryable to define the waiting strategy between consecutive retry attempts. Without a backoff, retries happen immediately one after another, which can overwhelm a struggling resource. @Backoff introduces controlled delays to give the failing resource time to recover.
Key attributes:
| Attribute | Default | Description |
|---|---|---|
| delay | 1000ms | Initial delay in milliseconds before the first retry |
| maxDelay | 0 (no cap) | Maximum allowed delay in milliseconds |
| multiplier | 0 (no multiplier) | Multiplies the delay after each attempt (exponential) |
| random | false | Adds jitter to the delay to avoid thundering herd |
Example — exponential backoff with a cap:
```java
@Retryable(
    retryFor = TimeoutException.class,
    maxAttempts = 5,
    backoff = @Backoff(delay = 500, multiplier = 2, maxDelay = 8000)
)
public void syncData() { ... }
```

With this configuration, the waits between the five attempts are 500ms → 1000ms → 2000ms → 4000ms; any further wait would be capped at 8000ms. Setting random = true adds slight variation to each interval, which is important in high-concurrency scenarios to prevent all retry clients from hitting the same resource simultaneously.
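The delay arithmetic itself can be checked in plain Java. This sketch mirrors the formula delay * multiplier^(n-1) capped at maxDelay; it is not Spring Retry's internal implementation:

```java
public class BackoffMath {

    // Compute the wait before each retry: delay grows by multiplier, capped at maxDelay.
    // There is one wait between each pair of attempts, so maxAttempts - 1 waits in total.
    static long[] delays(long delay, double multiplier, long maxDelay, int maxAttempts) {
        long[] waits = new long[maxAttempts - 1];
        double next = delay;
        for (int i = 0; i < waits.length; i++) {
            waits[i] = Math.min((long) next, maxDelay);
            next *= multiplier;
        }
        return waits;
    }

    public static void main(String[] args) {
        // delay=500, multiplier=2, maxDelay=8000, maxAttempts=6
        System.out.println(java.util.Arrays.toString(delays(500, 2.0, 8000, 6)));
        // → [500, 1000, 2000, 4000, 8000]  (the fifth wait hits the cap)
    }
}
```

With maxAttempts = 5 as in the annotation above, only four waits occur (500, 1000, 2000, 4000) and the cap is never reached; a sixth attempt is what first hits the 8000ms ceiling.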
RetryTemplate is the programmatic API for Spring Retry. It allows you to define retry behavior as code rather than annotations, giving you full control over when, how, and around which blocks of code retry logic is applied — including lambda expressions or any non-Spring-managed code.
When using @Retryable, the retry logic is applied by AOP proxies at method call interception time. This means the annotated method must be called through a Spring bean proxy — calling it from within the same class bypasses the proxy and the retry logic entirely. RetryTemplate has no such limitation because it wraps an explicit callback, not a method invocation through a proxy.
Example:
```java
RetryTemplate retryTemplate = RetryTemplate.builder()
    .maxAttempts(3)
    .fixedBackoff(2000)
    .retryOn(HttpServerErrorException.class)
    .build();

String result = retryTemplate.execute(context -> {
    return restTemplate.getForObject(url, String.class);
}, context -> {
    return "fallback-response"; // recovery callback
});
```

The first lambda is the retryable operation; the second is the recovery callback invoked if all attempts fail. This pattern is useful in service classes where you want retry logic around a specific block that is not easily isolated into its own method.
Spring Retry ships with several built-in RetryPolicy implementations. Each controls the conditions under which a retry is attempted. Choosing the right one depends on whether you want to limit by count, time, exception type, or a combination of these.
| Policy | Description |
|---|---|
| SimpleRetryPolicy | Retries up to a fixed maximum number of attempts. Default policy used by @Retryable (maxAttempts=3). |
| TimeoutRetryPolicy | Keeps retrying until a wall-clock timeout expires regardless of attempt count. |
| ExceptionClassifierRetryPolicy | Maps specific exception types to different sub-policies, allowing fine-grained control. |
| CircuitBreakerRetryPolicy | Wraps another policy and opens a circuit after repeated failures, temporarily blocking retries. |
| NeverRetryPolicy | Never retries; used as a no-op placeholder or in testing. |
| AlwaysRetryPolicy | Always retries indefinitely; use only with a separate termination condition. |
| CompositeRetryPolicy | Combines multiple policies using optimistic (any allows) or pessimistic (all must allow) logic. |
In programmatic use, you set the policy on a RetryTemplate:
```java
SimpleRetryPolicy policy = new SimpleRetryPolicy(5,
    Collections.singletonMap(IOException.class, true));
retryTemplate.setRetryPolicy(policy);
```
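As a sketch of combining policies, a CompositeRetryPolicy in pessimistic mode requires every delegate policy to allow the next attempt; here a count limit and a wall-clock limit must both agree (the retryTemplate variable is assumed to exist):

```java
// Give up after 30 seconds overall, regardless of attempt count
TimeoutRetryPolicy timeLimit = new TimeoutRetryPolicy();
timeLimit.setTimeout(30000);

CompositeRetryPolicy composite = new CompositeRetryPolicy();
composite.setPolicies(new RetryPolicy[] {
    new SimpleRetryPolicy(5), // at most 5 attempts
    timeLimit
});
composite.setOptimistic(false); // pessimistic: ALL policies must allow the retry

retryTemplate.setRetryPolicy(composite);
```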
A BackOffPolicy defines how long Spring Retry waits between consecutive retry attempts. It is applied by the RetryTemplate after each failed attempt and before the next. The goal is to give the failing resource time to recover and to avoid hammering it with rapid successive requests.
Available BackOffPolicy implementations:
| Implementation | Behavior |
|---|---|
| NoBackOffPolicy | No delay between retries (immediate) |
| FixedBackOffPolicy | A constant delay between every retry attempt |
| UniformRandomBackOffPolicy | Random delay within a min/max range |
| ExponentialBackOffPolicy | Delay doubles (or multiplies) after each attempt |
| ExponentialRandomBackOffPolicy | Exponential delay with added random jitter |
Programmatic example using ExponentialBackOffPolicy:
```java
ExponentialBackOffPolicy backOff = new ExponentialBackOffPolicy();
backOff.setInitialInterval(500);
backOff.setMultiplier(2.0);
backOff.setMaxInterval(10000);

RetryTemplate template = new RetryTemplate();
template.setBackOffPolicy(backOff);
```

The @Backoff annotation is essentially declarative sugar on top of these implementations. When you set multiplier in @Backoff, Spring Retry creates an ExponentialBackOffPolicy under the hood.
RetryContext is an object created by Spring Retry at the start of each retry sequence and passed through every retry attempt. It acts as a stateful record of the current retry operation, storing information that policies, listeners, and recovery callbacks can inspect or modify.
Key information available in RetryContext:
- Retry count: context.getRetryCount() returns the number of attempts completed so far (0 on the first try).
- Last exception: context.getLastThrowable() returns the exception that caused the most recent failure.
- Exhausted flag: context.isExhaustedOnly() indicates that retries were exhausted by external signal rather than policy.
- Attribute store: context.setAttribute(key, value) / context.getAttribute(key) let you attach custom data for use across attempts or in listeners.
Accessing RetryContext in a recovery callback:
```java
retryTemplate.execute(context -> {
    log.info("Attempt: " + context.getRetryCount());
    return callService();
}, context -> {
    Throwable ex = context.getLastThrowable();
    log.error("Giving up after " + context.getRetryCount() + " attempts", ex);
    return "fallback";
});
```

Custom retry listeners also receive the RetryContext on open, close, and onError callbacks, making it the central coordination object for cross-cutting retry concerns.
RetryListener is a callback interface that lets you hook into retry lifecycle events without modifying the retried method or recovery logic. It is useful for metrics collection, structured logging, and alerting.
The RetryListener interface defines three methods:
```java
public interface RetryListener {
    <T, E extends Throwable> boolean open(RetryContext context, RetryCallback<T, E> callback);
    <T, E extends Throwable> void onError(RetryContext context, RetryCallback<T, E> callback, Throwable throwable);
    <T, E extends Throwable> void close(RetryContext context, RetryCallback<T, E> callback, Throwable throwable);
}
```

- open: Called before the first attempt. Returning false suppresses all retries for this operation.
- onError: Called after each failed attempt with the thrown exception and current RetryContext.
- close: Called after the retry sequence ends — whether by success, exhaustion, or exception.
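A minimal listener implementation might look like the following sketch; MyMetricsListener is a made-up name, and the println calls stand in for real metrics or logging code:

```java
public class MyMetricsListener implements RetryListener {

    @Override
    public <T, E extends Throwable> boolean open(RetryContext context, RetryCallback<T, E> callback) {
        return true; // true allows the retry sequence to proceed
    }

    @Override
    public <T, E extends Throwable> void onError(RetryContext context, RetryCallback<T, E> callback,
                                                 Throwable throwable) {
        System.out.println("Attempt " + context.getRetryCount() + " failed: " + throwable.getMessage());
    }

    @Override
    public <T, E extends Throwable> void close(RetryContext context, RetryCallback<T, E> callback,
                                               Throwable throwable) {
        System.out.println("Retry sequence finished after " + context.getRetryCount() + " attempt(s)");
    }
}
```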
Registering a listener on RetryTemplate:
```java
retryTemplate.setListeners(new RetryListener[] { new MyMetricsListener() });
```

For annotation-based retry, register the listener bean by name in the listeners attribute of @Retryable:

```java
@Retryable(listeners = {"metricsRetryListener"})
```
Spring Retry includes a CircuitBreakerRetryPolicy that implements the circuit breaker pattern on top of its standard retry infrastructure. The circuit acts as a gate that opens when failure rates exceed a threshold, temporarily blocking retries to prevent further load on a failing downstream system.
Circuit states:
- Closed: Normal operation. Requests flow through and failures are counted.
- Open: The failure threshold has been reached. All calls fail immediately without attempting the operation.
- Half-Open: After a reset timeout, one trial request is allowed through to test if recovery has occurred.
Configuration via RetryTemplate:
```java
CircuitBreakerRetryPolicy policy = new CircuitBreakerRetryPolicy(
    new SimpleRetryPolicy(3));
policy.setOpenTimeout(5000);   // failures within this 5s window trip the circuit open
policy.setResetTimeout(20000); // after 20s the circuit allows a trial call again
RetryTemplate template = new RetryTemplate();
template.setRetryPolicy(policy);
```

Spring Retry's circuit breaker is stateful — it requires a stateful RetryTemplate (using RetryState keys per operation) so that circuit state persists across multiple independent calls to the same operation. This is different from stateless retry where each call sequence is independent.
Stateful retry in Spring Retry means that the retry context and failure count are preserved across separate, independent method invocations for the same logical operation — typically identified by a key. This is in contrast to stateless retry, where each call to execute() begins a fresh retry sequence.
Stateful retry is needed when the retried operation has side effects or when its transactional boundary means a failed attempt cannot be reattempted within the same thread or method call. The most common use case is Spring Batch item processing: if a batch item fails, the transaction is rolled back, and the next call to process that item should count as a retry — not a fresh attempt.
How it works:
- A RetryState object with a unique key (typically the item or operation identity) is passed to retryTemplate.execute().
- Spring Retry stores the retry count and context externally (in a RetryContextCache).
- On subsequent calls with the same key, the stored state is retrieved and updated.

```java
DefaultRetryState state = new DefaultRetryState(itemKey);
retryTemplate.execute(ctx -> processItem(item), ctx -> recover(item), state);
```

The default RetryContextCache is a thread-safe in-memory map. For clustered environments, a custom distributed cache implementation can be plugged in.
Spring Retry itself does not directly read application.properties — its configuration is annotation- or bean-driven. However, the expression-based annotation attributes (maxAttemptsExpression, delayExpression, and so on) accept property placeholders and Spring Expression Language (SpEL), allowing you to externalize retry parameters.
Externalizing retry parameters via properties:
```properties
# application.properties
retry.maxAttempts=5
retry.delay=2000
retry.multiplier=1.5
```

```java
@Retryable(
    retryFor = Exception.class,
    maxAttemptsExpression = "${retry.maxAttempts}",
    backoff = @Backoff(
        delayExpression = "${retry.delay}",
        multiplierExpression = "${retry.multiplier}"
    )
)
public void callApi() { ... }
```

This approach lets you change retry behavior per environment without recompiling. The attributes that accept SpEL expressions are: maxAttemptsExpression, delayExpression, maxDelayExpression, multiplierExpression, and randomExpression.
Note that when externalizing a value you must use the annotation attribute ending in Expression, not the plain numeric one. Avoid setting both forms of the same attribute (e.g., both maxAttempts and maxAttemptsExpression): the result is confusing, as one value silently takes precedence over the other.
In older versions of Spring Retry, @Retryable had include and exclude attributes to specify which exception classes should or should not trigger a retry. In newer versions (Spring Retry 2.0+), these were renamed to retryFor and noRetryFor respectively for clarity; the old names were deprecated in favor of the new ones.
| Attribute | New Name | Effect |
|---|---|---|
| include | retryFor | Retry ONLY when this exception type is thrown |
| exclude | noRetryFor | Do NOT retry when this exception type is thrown |
Priority and interaction rules:
- If both retryFor and noRetryFor are specified, the exception must match retryFor AND not match noRetryFor for a retry to occur.
- If neither is specified, all exceptions trigger a retry (defaulting to Exception.class).
- If only noRetryFor is specified, all exceptions except those listed are retried.
```java
@Retryable(
    retryFor = { IOException.class },
    noRetryFor = { FileNotFoundException.class }
)
public void readFile(String path) { ... }
```

This retries IOException but not the subclass FileNotFoundException, which is useful when a missing file is a permanent error and retrying it is pointless.
Spring Batch has first-class integration with Spring Retry at the step level. When configuring a Step, you can specify retry behavior on the StepBuilder so that item processing failures trigger automatic retries before the item is sent to the skip list or causes the step to fail.
Step-level retry configuration:
```java
@Bean
public Step processStep(JobRepository jobRepository,
                        PlatformTransactionManager txManager,
                        ItemReader<Order> reader,
                        ItemProcessor<Order, Invoice> processor,
                        ItemWriter<Invoice> writer) {
    return new StepBuilder("processStep", jobRepository)
        .<Order, Invoice>chunk(10, txManager)
        .reader(reader)
        .processor(processor)
        .writer(writer)
        .faultTolerant()
        .retry(TransientDataAccessException.class)
        .retryLimit(3)
        .build();
}
```

Key points about Spring Batch + Spring Retry integration:
- .faultTolerant() must be called to enable retry and skip features on the step.
- Retry in Spring Batch is stateful: each chunk transaction rolls back on failure, and the failed item is retried in a subsequent transaction.
- You can combine retry and skip: after retryLimit failures on one item, it moves to the skip list (if skip is also configured).
- Spring Batch uses RetryTemplate internally and exposes configuration through the fluent StepBuilder API.
Several mistakes are frequently made when developers first start using @Retryable. Being aware of these pitfalls saves significant debugging time.
1. Self-invocation problem
Calling a @Retryable method from within the same class bypasses the AOP proxy, and the retry logic is silently ignored:

```java
// WRONG — retry will not fire
public void doSomething() {
    this.fetchData(); // same class, proxy is bypassed
}

@Retryable
public String fetchData() { ... }
```

Fix: call the method through the proxied bean (for example via self-injection or an ApplicationContext lookup), or move the retryable method to a separate Spring bean.
2. Retrying non-transient failures
Retrying a 400 Bad Request or a constraint violation is pointless because the same input will always produce the same error. Only transient exceptions warrant retry.
3. Not tuning maxAttempts and backoff explicitly
The defaults are 3 total attempts with a fixed 1-second delay. In production, set these deliberately per dependency: overly aggressive retries against a struggling service can worsen the situation.
4. Missing @EnableRetry
Without @EnableRetry on a configuration class, all @Retryable annotations are ignored and no error is thrown — the method simply executes once and propagates the exception.
5. @Recover method signature mismatch
If the return type or exception parameter type does not match, Spring Retry cannot find the recovery method and will rethrow the exception instead of recovering.
Testing a @Retryable method requires a Spring context because retry logic is applied by the AOP proxy. A plain unit test that calls the method on a new instance will not trigger retries. The recommended approach uses @SpringBootTest or a focused @SpringJUnitConfig with @EnableRetry.
Test setup:
```java
@SpringBootTest
class OrderServiceRetryTest {

    @Autowired
    private OrderService orderService;

    @MockBean
    private ExternalApi externalApi;

    @Test
    void shouldRetryThreeTimesOnFailure() {
        when(externalApi.fetch()).thenThrow(new RuntimeException("down"));
        assertThrows(RuntimeException.class, () -> orderService.getOrder());
        verify(externalApi, times(3)).fetch(); // maxAttempts = 3
    }

    @Test
    void shouldSucceedOnSecondAttempt() {
        when(externalApi.fetch())
            .thenThrow(new RuntimeException())
            .thenReturn("OK");
        String result = orderService.getOrder();
        assertEquals("OK", result);
        verify(externalApi, times(2)).fetch();
    }
}
```

To avoid slow tests due to backoff delays during testing, override the backoff with a zero delay in a test profile:

```java
@Retryable(backoff = @Backoff(delayExpression = "${retry.delay:0}"))
```

Setting retry.delay=0 in application-test.properties eliminates wait time during test runs.
When a method throws an exception type that is not covered by retryFor (or include in older versions), Spring Retry treats it as a non-recoverable failure and immediately rethrows it without retrying, regardless of how many attempts remain.
This behavior is intentional: Spring Retry assumes that if a specific exception type is thrown and it was not declared as retryable, it likely represents a permanent error (like a validation failure or a programming bug) where retrying would be pointless or harmful.
Example:
```java
@Retryable(retryFor = { IOException.class }, maxAttempts = 5)
public void processFile(String path) throws IOException {
    // If IllegalArgumentException is thrown here,
    // Spring Retry will NOT retry — it rethrows immediately
}
```

If you want to retry all exceptions, you can explicitly declare:

```java
@Retryable(retryFor = Exception.class)
```

Or use the noRetryFor attribute to block only specific types while letting all others retry. The exception hierarchy matters: if retryFor = IOException.class, then subclasses like SocketTimeoutException are also retried since they extend IOException.
RetryOperationsInterceptor is an AOP method interceptor provided by Spring Retry that allows you to apply retry behavior to beans programmatically without using @Retryable annotations. It bridges Spring AOP (MethodInterceptor) with Spring Retry (RetryOperations/RetryTemplate).
This is useful when you want to apply retry to third-party beans that you cannot annotate, or when you need more dynamic retry configuration than annotations allow.
Programmatic setup with an interceptor and an auto-proxy creator:

```java
@Bean
public RetryOperationsInterceptor retryInterceptor() {
    return RetryInterceptorBuilder.stateless()
        .maxAttempts(4)
        .backOffOptions(1000, 2.0, 10000) // initial interval, multiplier, max interval
        .recoverer(new ItemRecoverer())   // a MethodInvocationRecoverer implementation
        .build();
}

@Bean
public static BeanNameAutoProxyCreator retryProxyCreator() {
    // One way to apply the interceptor to beans you cannot annotate
    // ("thirdPartyClient" is an illustrative bean name)
    BeanNameAutoProxyCreator creator = new BeanNameAutoProxyCreator();
    creator.setBeanNames("thirdPartyClient");
    creator.setInterceptorNames("retryInterceptor");
    return creator;
}
```

Spring Retry also provides StatefulRetryOperationsInterceptor for stateful retry scenarios (common in Spring Batch). Both interceptors can be built using the RetryInterceptorBuilder factory:
- RetryInterceptorBuilder.stateless() — creates a stateless interceptor
- RetryInterceptorBuilder.stateful() — creates a stateful interceptor with key generator support
Spring Retry can wrap calls made through RestTemplate or WebClient to handle transient HTTP failures automatically. The approach differs slightly between the two due to their synchronous vs reactive nature.
With RestTemplate (synchronous) — using @Retryable:
```java
@Service
public class OrderClient {

    @Retryable(
        retryFor = { HttpServerErrorException.class, ResourceAccessException.class },
        maxAttempts = 3,
        backoff = @Backoff(delay = 1000, multiplier = 2)
    )
    public OrderDto getOrder(Long id) {
        return restTemplate.getForObject("/orders/" + id, OrderDto.class);
    }

    @Recover
    public OrderDto recover(HttpServerErrorException ex, Long id) {
        return OrderDto.fallback(id);
    }
}
```

With WebClient (reactive) — using Reactor's retryWhen:
```java
webClient.get()
    .uri("/orders/" + id)
    .retrieve()
    .bodyToMono(OrderDto.class)
    .retryWhen(Retry.backoff(3, Duration.ofMillis(500))
        .filter(ex -> ex instanceof WebClientResponseException.ServiceUnavailable));
```

Note: @Retryable does not integrate naturally with reactive streams because it is designed for synchronous, blocking method calls. For reactive code, use Project Reactor's native Mono.retryWhen or Flux.retryWhen with reactor.util.retry.Retry.
ExceptionClassifierRetryPolicy allows you to map different exception types to different RetryPolicy instances. This is valuable when your retryable operation can fail with multiple exception types that require different retry strategies — for example, transient network errors should be retried several times, while resource not found errors should fail fast.
Configuration example:
```java
Map<Class<? extends Throwable>, RetryPolicy> policyMap = new HashMap<>();
policyMap.put(IOException.class, new SimpleRetryPolicy(5));
policyMap.put(HttpServerErrorException.class, new SimpleRetryPolicy(3));
policyMap.put(IllegalArgumentException.class, new NeverRetryPolicy());

ExceptionClassifierRetryPolicy policy = new ExceptionClassifierRetryPolicy();
policy.setPolicyMap(policyMap);

RetryTemplate template = new RetryTemplate();
template.setRetryPolicy(policy);
```

The classifier checks the thrown exception type and routes it to the matching sub-policy. If no exact match is found, it walks up the exception class hierarchy to find the nearest matching ancestor. If no match is found at all, the default policy (usually NeverRetryPolicy) is applied.
This policy is ideal when integrating with multiple downstream services in a single method, where the caller needs fine-grained control over which failures are transient and which are permanent.
RetryCallback is a functional interface that wraps the operation to be retried when using RetryTemplate programmatically. It represents the unit of work that may fail and should be retried.
```java
@FunctionalInterface
public interface RetryCallback<T, E extends Throwable> {
    T doWithRetry(RetryContext context) throws E;
}
```

The generic type T is the return type of the operation, and E is the exception type it may throw. The context parameter gives access to retry count, last exception, and custom attributes.
Usage with lambda:
```java
String result = retryTemplate.execute((RetryCallback<String, IOException>) context -> {
    log.info("Attempt #" + (context.getRetryCount() + 1));
    return remoteService.fetchData();
});
```

When using the two-argument form of execute(), the second argument is a RecoveryCallback — a companion interface that defines the fallback to invoke when all retry attempts are exhausted:
```java
String result = retryTemplate.execute(
    ctx -> remoteService.fetchData(), // RetryCallback
    ctx -> "default-value"            // RecoveryCallback
);
```

The RecoveryCallback receives the same RetryContext, which includes getLastThrowable() so you can log or inspect the final failure reason.
Both Spring Retry and Resilience4j Retry solve the same problem — retrying failed operations — but they differ significantly in design philosophy, feature set, and integration style.
| Feature | Spring Retry | Resilience4j Retry |
|---|---|---|
| Design focus | Spring-native, AOP-first | Functional, library-first, framework-agnostic |
| Circuit Breaker | Basic CircuitBreakerRetryPolicy | Full-featured, event-driven circuit breaker |
| Metrics | Manual via RetryListener | Built-in Micrometer integration |
| Reactive support | None (blocking only) | First-class Reactor/RxJava support |
| Rate Limiter | Not included | Built-in |
| Bulkhead | Not included | Built-in |
| Configuration | Annotations + Java config | Java config + application.properties (Spring Boot starter) |
| Best for | Spring Batch, simpler services | Reactive apps, microservices needing full resilience suite |
The choice depends on your stack: if you are building reactive microservices and need circuit breaking, rate limiting, and bulkheads, Resilience4j is the stronger fit. For traditional Spring MVC apps or Spring Batch processing, Spring Retry is simpler and better integrated.
RetryContextCache is the storage mechanism used by stateful retry to persist RetryContext objects between separate method invocations. In stateful retry, when a retryable operation fails and a transaction is rolled back, the retry state (attempt count, last exception) must survive the rollback so the next invocation can continue the retry sequence rather than starting fresh.
Spring Retry's default implementation is MapRetryContextCache, which stores retry contexts in a thread-safe in-memory map:

```java
RetryTemplate template = new RetryTemplate();
template.setRetryContextCache(new MapRetryContextCache(1000)); // capacity 1000
```

When you need a custom RetryContextCache:
- Clustered/distributed environments: If your application runs on multiple nodes and a retryable job item can be picked up by a different node on the next attempt, the in-memory cache will be empty on the new node. A Redis- or database-backed cache solves this.
- Cache eviction concerns: The default map has no TTL. Items for permanently failed operations that were never cleaned up accumulate over time. A custom cache with TTL prevents memory leaks.
- Very large workloads: Spring Batch jobs processing millions of items may exhaust the default map capacity.

Implement the RetryContextCache interface (put, get, containsKey, and remove) to provide a distributed or TTL-aware alternative.
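A TTL-aware implementation could follow this shape (a deliberately simplistic sketch; a Redis-backed variant would implement the same methods against a remote store):

```java
public class TtlRetryContextCache implements RetryContextCache {

    private record Entry(RetryContext context, long expiresAt) {}

    private final Map<Object, Entry> map = new ConcurrentHashMap<>();
    private final long ttlMillis;

    public TtlRetryContextCache(long ttlMillis) {
        this.ttlMillis = ttlMillis;
    }

    @Override
    public RetryContext get(Object key) {
        Entry e = map.get(key);
        if (e == null || e.expiresAt() < System.currentTimeMillis()) {
            map.remove(key); // lazily evict expired entries
            return null;
        }
        return e.context();
    }

    @Override
    public void put(Object key, RetryContext context) {
        map.put(key, new Entry(context, System.currentTimeMillis() + ttlMillis));
    }

    @Override
    public void remove(Object key) {
        map.remove(key);
    }

    @Override
    public boolean containsKey(Object key) {
        return get(key) != null;
    }
}
```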
Yes, Spring Retry can be combined with Spring Cloud OpenFeign to add retry capability to Feign client calls. You can either configure retry through Feign's own Retryer mechanism or wrap the Feign client calls with Spring Retry.
Approach 1 — Feign native Retryer bean:
```java
@Bean
public Retryer feignRetryer() {
    return new Retryer.Default(
        100,  // period (ms)
        1000, // maxPeriod (ms)
        3     // maxAttempts
    );
}
```

Approach 2 — Wrapping the Feign interface with @Retryable:
```java
@Service
public class OrderService {

    @Autowired
    private OrderFeignClient feignClient;

    @Retryable(retryFor = FeignException.ServiceUnavailable.class, maxAttempts = 3)
    public OrderDto getOrder(Long id) {
        return feignClient.getOrder(id);
    }
}
```

The second approach is more flexible because you can mix @Recover and backoff strategies from Spring Retry while Feign handles HTTP communication. Note: Feign's built-in Retryer and Spring Retry should not both be active for the same operation to avoid double-retry issues. Disable the default Feign retryer (Retryer.NEVER_RETRY) if using Spring Retry.
You can implement a custom RetryPolicy by implementing the RetryPolicy interface, which gives you full control over whether a retry should be allowed based on any criteria — not just exception type or count. Custom policies are useful when retry decisions depend on application-specific state, such as response content, quota availability, or external configuration.
RetryPolicy interface contract:
```java
public interface RetryPolicy {
    boolean canRetry(RetryContext context);
    RetryContext open(RetryContext parent);
    void close(RetryContext context);
    void registerThrowable(RetryContext context, Throwable throwable);
}
```

Custom policy example — retry on specific error codes in the exception message:
```java
public class ErrorCodeRetryPolicy implements RetryPolicy {

    private final int maxAttempts;

    public ErrorCodeRetryPolicy(int maxAttempts) {
        this.maxAttempts = maxAttempts;
    }

    @Override
    public boolean canRetry(RetryContext context) {
        Throwable last = context.getLastThrowable();
        if (last == null) return true; // first attempt
        boolean isRetryableCode = last.getMessage() != null
            && last.getMessage().contains("RETRY_CODE");
        return isRetryableCode && context.getRetryCount() < maxAttempts;
    }

    @Override
    public RetryContext open(RetryContext parent) {
        // RetryContextSupport is the public base class for custom contexts
        return new RetryContextSupport(parent);
    }

    @Override
    public void close(RetryContext context) {}

    @Override
    public void registerThrowable(RetryContext context, Throwable throwable) {
        ((RetryContextSupport) context).registerThrowable(throwable);
    }
}
```

Register it on a RetryTemplate with template.setRetryPolicy(new ErrorCodeRetryPolicy(4)).
Spring Kafka provides its own retry and error-handling abstractions. In modern versions, listener retry is configured through DefaultErrorHandler (the successor to SeekToCurrentErrorHandler) combined with a BackOff implementation such as FixedBackOff or an exponential one.
Configuring retry in a Kafka listener container:
@Bean
public DefaultErrorHandler kafkaErrorHandler() {
// ExponentialBackOffWithMaxRetries (from spring-kafka) caps the retry count;
// plain ExponentialBackOff has no attempt limit in older Spring Framework versions.
ExponentialBackOffWithMaxRetries backOff = new ExponentialBackOffWithMaxRetries(3); // 3 retries after the first attempt
backOff.setInitialInterval(500L);
backOff.setMultiplier(2.0);
DefaultErrorHandler handler = new DefaultErrorHandler(backOff);
handler.addNotRetryableExceptions(SerializationException.class);
return handler;
}
@Bean
public ConcurrentKafkaListenerContainerFactory<Object, Object> kafkaListenerContainerFactory(
ConsumerFactory<Object, Object> cf, DefaultErrorHandler errorHandler) {
var factory = new ConcurrentKafkaListenerContainerFactory<Object, Object>();
factory.setConsumerFactory(cf);
factory.setCommonErrorHandler(errorHandler);
return factory;
}
Spring Kafka's ExponentialBackOff is from the core Spring Framework utilities, not Spring Retry, but the retry behavior is conceptually the same. For more complex scenarios, you can publish failed messages to a dead-letter topic (DLT) using DeadLetterPublishingRecoverer as the recovery action after all retries are exhausted.
The proxyTargetClass attribute of @EnableRetry controls whether Spring Retry uses subclass-based CGLIB proxies or interface-based JDK dynamic proxies to intercept methods annotated with @Retryable.
Default behavior:
@EnableRetry // proxyTargetClass = false by default
public class AppConfig { }
When proxyTargetClass = false (default), Spring uses JDK dynamic proxies. This requires the bean to implement at least one interface. The @Retryable method must be declared on the interface for the proxy to intercept it.
When proxyTargetClass = true, Spring uses CGLIB to create a subclass proxy of the concrete class, so no interface is needed:
@EnableRetry(proxyTargetClass = true)
public class AppConfig { }
When to use proxyTargetClass = true:
- Your service classes do not implement interfaces
- You want retry on methods that are not declared on any interface
- You encounter proxy-related class cast exceptions at runtime
In Spring Boot applications that already use CGLIB for @Configuration classes, setting proxyTargetClass = true is consistent and avoids mixed proxy types. The CGLIB approach has a slight startup cost for proxy generation but no runtime overhead difference.
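The interface requirement behind JDK dynamic proxies can be demonstrated with java.lang.reflect.Proxy directly. A minimal sketch (the invocation handler just counts calls; it stands in for where Spring's retry advice would sit, and all names are illustrative):

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;
import java.util.concurrent.atomic.AtomicInteger;

public class JdkProxyDemo {
    public interface GreetingService { String greet(String name); }

    public static class GreetingServiceImpl implements GreetingService {
        public String greet(String name) { return "Hello, " + name; }
    }

    public static final AtomicInteger invocations = new AtomicInteger();

    public static GreetingService proxy(GreetingService target) {
        InvocationHandler handler = (p, method, args) -> {
            invocations.incrementAndGet(); // interception point (retry advice would run here)
            return method.invoke(target, args);
        };
        // The proxy implements only the interface; it can never be cast to
        // GreetingServiceImpl, which is why injecting by concrete class fails
        // unless proxyTargetClass = true switches to CGLIB subclassing.
        return (GreetingService) Proxy.newProxyInstance(
                GreetingService.class.getClassLoader(),
                new Class<?>[]{ GreetingService.class },
                handler);
    }
}
```

Injecting the bean by its interface works with either proxy mode, which is why coding to interfaces sidesteps the whole issue.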
CompositeRetryPolicy combines multiple RetryPolicy instances into one. It supports two composition modes: optimistic and pessimistic. This is useful when you need retry behavior that satisfies multiple independent conditions simultaneously.
Pessimistic mode (the default): a retry is allowed only if all of the composed policies permit it. This is a logical AND across policies.
Optimistic mode: a retry is allowed if any of the composed policies permits it. This is a logical OR; enable it with setOptimistic(true).
SimpleRetryPolicy byCount = new SimpleRetryPolicy(5);
TimeoutRetryPolicy byTime = new TimeoutRetryPolicy();
byTime.setTimeout(10000); // max 10 seconds total
CompositeRetryPolicy composite = new CompositeRetryPolicy();
composite.setPolicies(new RetryPolicy[]{ byCount, byTime });
composite.setOptimistic(false); // pessimistic (the default): both policies must agree
RetryTemplate template = new RetryTemplate();
template.setRetryPolicy(composite);
In this example (pessimistic), both the count limit (5 attempts) and the time limit (10 seconds) must permit the retry, so retrying stops as soon as either limit is reached. This is a clean way to add a time-bound safety net on top of a count-based policy for long-running retries.
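The pessimistic count-plus-timeout composition is equivalent to a loop that checks both budgets before each attempt. A plain-Java sketch under that assumption (names are illustrative, not Spring Retry API):

```java
import java.util.function.Supplier;

public class BoundedRetryDemo {
    // Retry only while BOTH the attempt budget AND the time budget permit it,
    // mirroring a pessimistic CompositeRetryPolicy of count + timeout policies.
    static <T> T execute(Supplier<T> op, int maxAttempts, long timeoutMillis) {
        long start = System.currentTimeMillis();
        RuntimeException last = null;
        for (int attempt = 0;
             attempt < maxAttempts && System.currentTimeMillis() - start < timeoutMillis;
             attempt++) {
            try {
                return op.get();
            } catch (RuntimeException e) {
                last = e; // remember the failure; loop re-checks both budgets
            }
        }
        throw new IllegalStateException("Count or time budget exhausted", last);
    }
}
```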
Retry and idempotency are closely related but distinct concepts that must be understood together when designing resilient systems.
Retry is the mechanism of re-executing a failed operation. Idempotency is the property of an operation that guarantees the same result regardless of how many times it is executed with the same input.
The relationship matters because retrying a non-idempotent operation can cause unintended side effects:
- Double charges: Retrying a payment API call that already succeeded on the server but timed out before returning a response will charge the customer twice.
- Duplicate records: Retrying a database insert may create duplicate rows if the original insert succeeded but the acknowledgment was lost.
- Double emails: Retrying a notification service can send the same email multiple times.
Safe operations to retry (naturally idempotent): HTTP GET, HTTP PUT with full resource replacement, database SELECT, idempotent queue dequeues with unique processing keys.
Risky to retry without idempotency guards: HTTP POST, payment debits, external notifications, non-idempotent database writes.
Solutions: Use idempotency keys (a unique request ID sent with each call so the server can detect and deduplicate retries), optimistic locking, or at-least-once processing with deduplication at the consumer side. When using Spring Retry, limit retries to operations that are safe to re-execute, or ensure the downstream system supports idempotent requests.
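A server-side idempotency-key guard can be sketched in plain Java (an in-memory map stands in for a persistent key store with a TTL; all names are illustrative):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// The first request with a given key performs the charge; any retry with the
// same key returns the stored result instead of charging again.
public class IdempotentChargeService {
    private final Map<String, String> processed = new ConcurrentHashMap<>();
    private int chargesExecuted = 0;

    public synchronized String charge(String idempotencyKey, long amountCents) {
        return processed.computeIfAbsent(idempotencyKey, key -> {
            chargesExecuted++; // the side effect runs at most once per key
            return "charged:" + amountCents;
        });
    }

    public synchronized int chargesExecuted() { return chargesExecuted; }
}
```

With this guard in place, a client-side retry after a lost acknowledgment is harmless: the server replays the recorded response rather than re-executing the debit.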
A fixed backoff means the same delay is applied between every retry attempt, regardless of how many attempts have occurred. This is appropriate when the failure cause is likely to resolve in a predictable timeframe and exponential growth in delay is not needed.
Using RetryTemplate.builder() (modern fluent API):
RetryTemplate retryTemplate = RetryTemplate.builder()
.maxAttempts(4)
.fixedBackoff(2000) // 2 seconds between each retry
.retryOn(IOException.class)
.build();
Using legacy explicit configuration:
FixedBackOffPolicy backOffPolicy = new FixedBackOffPolicy();
backOffPolicy.setBackOffPeriod(2000L); // milliseconds
SimpleRetryPolicy retryPolicy = new SimpleRetryPolicy(4);
RetryTemplate template = new RetryTemplate();
template.setBackOffPolicy(backOffPolicy);
template.setRetryPolicy(retryPolicy);
The fluent builder API (available since Spring Retry 1.3) is more readable and less error-prone. It also supports chaining multiple configuration steps and inline recovery:
String result = RetryTemplate.builder()
.maxAttempts(3)
.fixedBackoff(1500)
.retryOn(HttpServerErrorException.class)
.build()
.execute(ctx -> callApi(), ctx -> "fallback");
Both maxAttempts and maxAttemptsExpression in @Retryable control how many total attempts (including the first try) are made. The difference is that maxAttempts accepts a hardcoded integer literal, while maxAttemptsExpression accepts a Spring Expression Language (SpEL) string that is evaluated at runtime.
| Attribute | Type | Value source | Evaluation time |
|---|---|---|---|
| maxAttempts | int | Compile-time constant | Annotation parsing |
| maxAttemptsExpression | String (SpEL) | Properties, beans, or expressions | Runtime per method call |
Using maxAttempts (static):
@Retryable(maxAttempts = 3)
Using maxAttemptsExpression (dynamic from property):
@Retryable(maxAttemptsExpression = "${app.retry.maxAttempts:3}")
Using maxAttemptsExpression with a bean reference:
@Retryable(maxAttemptsExpression = "@retryConfig.getMaxAttempts()")
Do not set both maxAttempts and maxAttemptsExpression on the same @Retryable annotation; configure exactly one of them.
The interaction between @Retryable and @Transactional depends critically on the order of proxy application. If a method is annotated with both, the retry proxy must wrap the transaction proxy — not the other way around. This ensures that when a failure occurs, the transaction is fully rolled back before the next retry attempt begins in a fresh transaction.
Correct ordering (retry wraps transaction):
@Retryable(retryFor = TransientDataAccessException.class, maxAttempts = 3)
@Transactional
public void saveOrder(Order order) {
orderRepository.save(order);
inventoryService.deduct(order.getItemId());
}
When a TransientDataAccessException is thrown inside the transaction, the transaction rolls back, control returns to the retry proxy, which waits (per backoff policy) and re-invokes the method — starting a new transaction from scratch. This is correct behavior.
Wrong ordering (transaction wraps retry): If the transaction is the outer proxy and retry is inner, a failure inside the transaction puts it into a rollback-only state. The retry then re-invokes the inner method still inside the same poisoned transaction, which will fail again even if the underlying cause is resolved.
To ensure correct ordering, control advisor precedence explicitly rather than relying on the textual order of the annotations on the configuration class (annotation declaration order has no effect): keep the transaction advisor at the lowest precedence, for example @EnableTransactionManagement(order = Ordered.LOWEST_PRECEDENCE), so that transaction advice is applied innermost and the retry advice wraps it.
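A configuration sketch of this arrangement (a config fragment, assuming the default retry advisor precedence; Ordered is org.springframework.core.Ordered):

```java
@Configuration
@EnableRetry
// Transaction advice gets the lowest precedence, so it sits innermost and
// the retry advice wraps it: each attempt starts a fresh transaction.
@EnableTransactionManagement(order = Ordered.LOWEST_PRECEDENCE)
public class RetryTxConfig { }
```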
The recover attribute in @Retryable allows you to explicitly name the recovery method that should be invoked when all retry attempts are exhausted. Without this attribute, Spring Retry uses automatic method matching — it looks for a @Recover method in the same bean whose return type and exception parameter type match the failed retryable method.
The automatic matching works in most cases, but can be ambiguous when:
- Multiple @Recover methods exist with the same return type
- You want a specific recovery method for a specific @Retryable method without sharing recovery logic
Using the recover attribute to be explicit:
@Retryable(
retryFor = IOException.class,
maxAttempts = 3,
recover = "recoverFromIOError"
)
public String fetchDocument(String id) throws IOException {
return documentService.get(id);
}
@Recover
public String recoverFromIOError(IOException ex, String id) {
return "Document " + id + " unavailable";
}
@Recover
public String recoverFromGenericError(Exception ex, String id) {
return "Generic fallback for " + id;
}
By setting recover = "recoverFromIOError", you ensure that fetchDocument always routes to the intended recovery method, even though recoverFromGenericError would otherwise also match by signature.
Correctly classifying failures as transient or permanent is the foundation of any effective retry strategy. Retrying permanent failures wastes resources and delays the surfacing of bugs; failing to retry transient failures reduces system resilience unnecessarily.
Transient failures are temporary, self-healing conditions where the same operation, retried after a brief delay, has a realistic chance of succeeding:
- Network timeouts and connection resets
- Service temporarily unavailable (HTTP 503)
- Database deadlocks or lock timeouts
- Throttling responses (HTTP 429)
- Brief cloud resource contention
Permanent failures are errors where retrying the identical request will always produce the same result:
- Invalid request (HTTP 400 Bad Request)
- Unauthorized (HTTP 401) — credentials won't change between retries
- Resource not found (HTTP 404) — the resource won't appear on retry
- Data validation constraint violations
- Programming errors like NullPointerException
In Spring Retry, you encode this distinction through retryFor and noRetryFor:
@Retryable(
retryFor = { SocketTimeoutException.class, HttpServerErrorException.class },
noRetryFor = { HttpClientErrorException.class, IllegalArgumentException.class }
)
A well-designed retry policy only activates on exceptions that signal transient conditions, leaving permanent errors to surface immediately.
Sometimes the retry decision cannot be based on exception type alone — it needs to inspect the exception message, error code inside the exception, or a response payload embedded in a custom exception. In this case, you implement a custom RetryPolicy or use SimpleRetryPolicy with a map and a subclass override.
Custom policy inspecting exception detail:
public class HttpStatusRetryPolicy extends SimpleRetryPolicy {
public HttpStatusRetryPolicy(int maxAttempts) {
super(maxAttempts, Collections.singletonMap(MyApiException.class, true));
}
@Override
public boolean canRetry(RetryContext context) {
Throwable t = context.getLastThrowable();
if (t instanceof MyApiException ex) {
// Only retry on 503 or 429, not on 4xx
int status = ex.getStatusCode();
boolean retryable = (status == 503 || status == 429);
return retryable && context.getRetryCount() < getMaxAttempts();
}
return super.canRetry(context);
}
}
Register it on RetryTemplate:
RetryTemplate template = new RetryTemplate();
template.setRetryPolicy(new HttpStatusRetryPolicy(4));
template.setBackOffPolicy(new ExponentialBackOffPolicy());
This pattern is valuable when integrating with APIs that return domain-specific error codes inside the exception, where a 400 with error code RATE_LIMITED should be retried but a 400 with INVALID_INPUT should not.
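The error-code distinction can be captured in a small classifier, sketched here in plain Java (the status codes and the RATE_LIMITED/INVALID_INPUT codes mirror the example above; names are illustrative):

```java
import java.util.Set;

public class RetryClassifier {
    // Server-side and throttling statuses that typically signal transient failures.
    private static final Set<Integer> RETRYABLE_STATUSES = Set.of(429, 500, 502, 503, 504);

    // Domain-specific twist: a 400 is normally permanent, but a 400 carrying
    // the RATE_LIMITED error code is transient and worth retrying.
    static boolean isRetryable(int status, String errorCode) {
        if (RETRYABLE_STATUSES.contains(status)) return true;
        return status == 400 && "RATE_LIMITED".equals(errorCode);
    }
}
```

A custom RetryPolicy's canRetry would delegate to exactly this kind of predicate after extracting the status and code from the exception.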
A @Recover method can return void if and only if the corresponding @Retryable method also returns void. Spring Retry's recovery method matching requires that the return types align. If there is a mismatch, Spring Retry will not find the recovery method and will rethrow the final exception.
Correct void recover example:
@Retryable(retryFor = JmsException.class, maxAttempts = 3)
public void sendMessage(String payload) {
jmsTemplate.convertAndSend("queue.orders", payload);
}
@Recover
public void recoverSendMessage(JmsException ex, String payload) {
log.error("Failed to send message after retries: {}", payload, ex);
deadLetterStore.save(payload); // persist for manual reprocessing
}
In this case, since sendMessage returns void, the recover method also returns void. When recovery is invoked, Spring simply calls the method for its side effects (logging, storing) and then returns normally to the caller — the caller sees no exception.
If sendMessage returned a value (e.g., String) and the recover method was declared as void, the recovery would not be matched and the exception would propagate. Always align return types precisely, including generic type parameters.
Exponential backoff increases the delay between retries by multiplying the previous delay by a factor (e.g., 2). While this prevents relentless hammering, it creates a new problem in high-concurrency systems: all clients that started failing at the same time will retry at exactly the same intervals — 1s, 2s, 4s, 8s — causing synchronized waves of load on the recovering service, known as the thundering herd problem.
Jitter adds randomization to the delay, spreading retries across a time window and breaking the synchronization.
Without jitter (synchronized retries):
// 100 clients all retry at exactly t=1s, t=2s, t=4s, t=8s
With jitter (spread retries):
// Client 1 retries at t=0.7s, t=1.9s, t=3.2s
// Client 2 retries at t=1.2s, t=2.7s, t=4.8s
// Retries are spread across the window
In Spring Retry, jitter is enabled via:
@Backoff(delay = 500, multiplier = 2, maxDelay = 8000, random = true)
Or programmatically:
ExponentialRandomBackOffPolicy backOff = new ExponentialRandomBackOffPolicy();
backOff.setInitialInterval(500);
backOff.setMultiplier(2);
backOff.setMaxInterval(8000);
AWS recommends exponential backoff with full jitter as the default retry strategy for most distributed systems because it results in the best overall system throughput during recovery events.
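The full-jitter variant can be sketched in a few lines: the delay is drawn uniformly from zero up to the capped exponential ceiling (constants mirror the @Backoff example above; this is an illustration, not Spring Retry's implementation):

```java
import java.util.concurrent.ThreadLocalRandom;

public class FullJitterBackoff {
    // Ceiling for a given attempt: min(cap, base * 2^attempt).
    static long ceilingMillis(long baseMillis, long capMillis, int attempt) {
        long exp = baseMillis * (1L << attempt);
        return Math.min(capMillis, exp);
    }

    // Full jitter: sleep = uniform random in [0, ceiling].
    static long nextDelayMillis(long baseMillis, long capMillis, int attempt) {
        return ThreadLocalRandom.current()
                .nextLong(ceilingMillis(baseMillis, capMillis, attempt) + 1);
    }
}
```

Because each client draws its own random delay, retries that started simultaneously are scattered across the window instead of arriving in synchronized waves.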
Observing retry behavior in production is essential for detecting systemic issues with downstream dependencies. Spring Retry provides hooks through RetryListener, and for Spring Boot applications, you can wire this into Micrometer for metrics or into logging frameworks for audit trails.
Approach 1 — Logging RetryListener:
@Component
public class RetryLoggingListener implements RetryListener {
private static final Logger log = LoggerFactory.getLogger(RetryLoggingListener.class);
@Override
public <T, E extends Throwable> void onError(RetryContext context,
RetryCallback<T, E> callback, Throwable throwable) {
log.warn("Retry attempt {} failed: {}",
context.getRetryCount(), throwable.getMessage());
}
@Override
public <T, E extends Throwable> void close(RetryContext context,
RetryCallback<T, E> callback, Throwable throwable) {
if (throwable != null) {
log.error("All retry attempts exhausted after {} tries",
context.getRetryCount(), throwable);
}
}
}
Approach 2 — Micrometer metrics counter:
@Override
public <T, E extends Throwable> void onError(RetryContext ctx,
RetryCallback<T, E> callback, Throwable throwable) {
meterRegistry.counter("app.retry.attempt",
"method", String.valueOf(ctx.getAttribute(RetryContext.NAME))).increment();
}
Register the listener as a bean and it will be auto-detected by Spring Retry when annotation-based retry is enabled. For programmatic retry, pass it explicitly to retryTemplate.setListeners(). Dashboard alerts on high retry rates are an early indicator of upstream service degradation.
RetryTemplate.execute() is the standard method for running a retryable operation. It takes a RetryCallback and optionally a RecoveryCallback, then applies the configured retry policy and backoff until the operation succeeds or retries are exhausted.
There is no method named executeWithLoadBalancer() in the standard Spring Retry library. This is a common interview trap. The term is sometimes confused with:
- Spring Cloud LoadBalancer + Retry: Spring Cloud has its own retry integration through spring-cloud-starter-loadbalancer, which retries HTTP requests on different service instances using a load balancer. This is not part of Spring Retry itself but uses Spring Retry under the hood.
- RetryTemplate with stateful execute: retryTemplate.execute(RetryCallback, RecoveryCallback, RetryState) — the three-argument form that uses stateful retry with a RetryState key, which is the closest concept but not load balancing.
The actual execute overloads in RetryTemplate:
// Stateless
<T, E extends Throwable> T execute(RetryCallback<T, E> retryCallback) throws E;
<T, E extends Throwable> T execute(RetryCallback<T, E> retryCallback,
RecoveryCallback<T> recoveryCallback) throws E;
// Stateful
<T, E extends Throwable> T execute(RetryCallback<T, E> retryCallback,
RecoveryCallback<T> recoveryCallback,
RetryState retryState) throws E;
If an interviewer asks about this, the correct answer is to clarify that load-balancer retry is a Spring Cloud feature layered on top of Spring Retry, not a method on RetryTemplate.
