
What is the difference between synchronous and asynchronous execution in LangChain4j?

LangChain4j supports both synchronous and asynchronous execution models for LLM calls. The choice affects how your application thread behaves while waiting for the (potentially slow) LLM response.

Synchronous — The calling thread blocks until the complete response is received. This is the default and simplest mode, appropriate for batch jobs, background tasks, and thread-per-request servers where thread blocking is acceptable.

// Sync: thread blocks until response arrives (may take 5-30 seconds)
String answer = assistant.chat("What is quantum computing?");

Asynchronous (CompletableFuture) — Declare the return type as CompletableFuture<String> (or a CompletableFuture of any other supported response type) in your AI Services interface. LangChain4j submits the call on a separate thread and immediately returns a future:

interface AsyncAssistant {
    CompletableFuture<String> chat(String message);
    CompletableFuture<ProductReview> analyze(String review); // works with POJOs too
}

// Non-blocking: returns immediately, response arrives later
CompletableFuture<String> future = assistant.chat("Explain blockchain");
future.thenAccept(answer -> System.out.println("Got answer: " + answer));
// ... continue doing other work ...
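
Because the result is a standard CompletableFuture, it composes with the usual JDK operators. A minimal sketch of timeout and error handling (assuming Java 9+ for orTimeout; the fallback text is illustrative):

assistant.chat("Explain blockchain")
        .orTimeout(30, TimeUnit.SECONDS)      // fail the future if the LLM is too slow
        .thenApply(String::trim)              // post-process the raw answer
        .exceptionally(ex -> "LLM call failed: " + ex.getMessage()) // recover with a fallback
        .thenAccept(System.out::println);     // consume the final result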

Streaming (TokenStream) — Token-by-token delivery. It is neither synchronous nor truly asynchronous: it is event-driven, pushing partial output as it is generated rather than blocking for the full response or receiving it all at once later. Best for UI responsiveness, such as rendering an answer while the model is still producing it.
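
A usage sketch, assuming an AI Services interface declared with a TokenStream return type (the callback names onPartialResponse/onCompleteResponse match recent LangChain4j releases; older versions use onNext/onComplete, so treat the exact signatures as version-dependent):

interface StreamingAssistant {
    TokenStream chat(String message);
}

TokenStream stream = streamingAssistant.chat("Explain quantum computing");
stream
    .onPartialResponse(token -> System.out.print(token))            // invoked for each token as it arrives
    .onCompleteResponse(response -> System.out.println("\n[done]")) // invoked once with the full response
    .onError(Throwable::printStackTrace)                            // invoked if the call fails
    .start();                                                       // nothing happens until start()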

A plain CompletableFuture fits non-streaming scenarios such as Spring MVC async (DeferredResult) endpoints or WebFlux handlers that return a single value. For streaming in Spring WebFlux, the recommended pattern is to return Flux<String> by bridging LangChain4j's TokenStream to a reactive publisher via Sinks.Many or a FluxSink, as sketched below.
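
A minimal sketch of that bridge, reusing the hypothetical StreamingAssistant above and assuming Project Reactor on the classpath (same caveat about version-dependent TokenStream callback names):

import reactor.core.publisher.Flux;
import reactor.core.publisher.Sinks;

Flux<String> chatStream(String message) {
    Sinks.Many<String> sink = Sinks.many().unicast().onBackpressureBuffer();
    streamingAssistant.chat(message)
            .onPartialResponse(sink::tryEmitNext)            // forward each token to subscribers
            .onCompleteResponse(r -> sink.tryEmitComplete()) // complete the Flux when the response ends
            .onError(sink::tryEmitError)                     // propagate failures downstream
            .start();
    return sink.asFlux();
}

Returning this Flux<String> from a controller method with produces = MediaType.TEXT_EVENT_STREAM_VALUE streams tokens to the client as server-sent events. Recent LangChain4j versions also ship a langchain4j-reactor module that lets AI Services return Flux<String> directly, removing the need for a manual bridge.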

What return type do you declare in a LangChain4j AI Services method to make it non-blocking?
What is the key difference between CompletableFuture and TokenStream as return types in LangChain4j?


More related questions...

What is LangChain4j and what problem does it solve for Java developers?
What are the core modules of LangChain4j?
What is the AI Services feature in LangChain4j and how do you define one?
How does ChatMemory work in LangChain4j and what types are available?
What is Retrieval-Augmented Generation (RAG) in LangChain4j and how do you build a pipeline?
What are Tools in LangChain4j and how does tool calling work?
How do you integrate LangChain4j with Spring Boot?
What is the EmbeddingModel in LangChain4j and which providers are supported?
What EmbeddingStores does LangChain4j support and how do you choose one?
What is document splitting in LangChain4j and why is it necessary?
What is the @SystemMessage and @UserMessage annotation in LangChain4j AI Services?
How does streaming work in LangChain4j and when should you use it?
What is the ContentRetriever and RetrievalAugmentor in LangChain4j advanced RAG?
How does LangChain4j handle structured output from LLMs?
What is the PromptTemplate in LangChain4j and how does it differ from @UserMessage?
What LLM providers does LangChain4j support and how do you switch between them?
What is an Agent in LangChain4j and how does it differ from a simple AI Services call?
How do you implement multi-turn conversation with memory per user in a Spring REST API using LangChain4j?
What is the ImageModel in LangChain4j and which providers support image generation?
How do you handle errors and retries in LangChain4j?
How do you test LangChain4j AI Services without making real LLM API calls?
What is the DocumentLoader API in LangChain4j and what sources does it support?
What is the @Moderate annotation in LangChain4j and how does content moderation work?
How does LangChain4j support vision (multi-modal) LLMs that accept images as input?
What is the difference between synchronous and asynchronous execution in LangChain4j?
What is LangChain4j's support for Quarkus and how does it differ from Spring Boot integration?
How does LangChain4j implement the ReAct agent pattern and what are its limitations?
What is the ModerationModel interface in LangChain4j and how can you implement a custom one?
What is the Tokenizer interface in LangChain4j and why does it matter for memory management?
How do you persist ChatMemory across application restarts in LangChain4j?
What are the best practices for prompt engineering within LangChain4j AI Services?
How does LangChain4j integrate with observability tools like OpenTelemetry?
What is the InMemoryEmbeddingStore and when should you migrate to a real vector database?
What are common LangChain4j anti-patterns to avoid in production applications?
How does LangChain4j support multi-modal input processing for audio or documents beyond text and images?
How do you implement a custom Tool with complex parameter types in LangChain4j?
What is the HypotheticalDocumentEmbedder (HyDE) technique and how does LangChain4j support it?
How do you handle LLM output parsing failures gracefully in LangChain4j?
What is LangChain4j's support for graph-based RAG or knowledge graph integration?
What is the LangChain4j EvaluationResult API and how do you measure RAG pipeline quality?

