Spring Boot Microservices Interview Questions

Q.1) What is the difference between Web Services and Micro Services?
  • Web Services are about exposing functionalities over a network, often as part of a monolithic or layered application.

  • Microservices are about designing a system as a suite of small services, each running independently and focused on a single business capability.


Q.2) What are microservices?
Microservices are a software development technique and a variant of the Service-Oriented Architecture (SOA) style that structures an application as a collection of loosely coupled, fine-grained services communicating over lightweight protocols.

Key benefits:

  • Improved modularity: Decomposing a large application into smaller, manageable services makes it easier to understand, develop, and test.

  • Independent teams: Small autonomous teams can develop, deploy, and scale their services independently, speeding up development cycles.

  • Resilience to architecture erosion: Continuous refactoring of individual services helps prevent architectural decay.

  • Enables continuous delivery and deployment: The decoupled nature of microservices supports faster and more reliable releases.

  • Evolutionary architecture: Each service's design can evolve independently through continuous refactoring.

Another Answer
Ans) Microservices is an architectural style in which a large application is decomposed into smaller services that are autonomous, self-contained, loosely coupled, and independently deployable; each microservice has its own presentation layer, service layer, and model layer.

Q.3) How to design Micro Service project?
Ans)
i) Identify microservice boundaries
ii) Define the endpoints of every microservice
iii) Decide how the microservices communicate, either synchronously or asynchronously
iv) Use a separate schema (or database) per microservice
v) …


Q.4) What are the Twelve-Factor App Principles?
The Twelve-Factor App is a methodology for building modern, cloud-native, scalable, and maintainable applications—especially SaaS and microservices.

The 12 Factors (with explanation):
1. Codebase: One codebase tracked in version control (e.g., Git), many deploys (dev, QA, prod).
2. Dependencies: Explicitly declare and isolate dependencies via build tools (Maven, Gradle).
3. Config: Store config in the environment, not in the code. Use env vars rather than hard-coded values in application.properties.
4. Backing Services: Treat services like databases, queues, and SMTP as attached resources, configured via URLs or credentials.
5. Build, Release, Run: Strictly separate the build (jar), release (build + config), and run (execution) stages.
6. Processes: Run the app as stateless processes. No session/state stored in memory or on disk.
7. Port Binding: Export services via port binding (e.g., embedded Tomcat runs on port 8080).
8. Concurrency: Scale out via processes/threads, not by increasing the size of a single instance.
9. Disposability: Fast startup and graceful shutdown for robust deployments.
10. Dev/Prod Parity: Keep development, staging, and production as similar as possible.
11. Logs: Treat logs as event streams; don't manage log files. Pipe logs to ELK, Loki, or another log system.
12. Admin Processes: Run admin/maintenance tasks as one-off processes (e.g., via CLI or a job runner).

✅ Why is this important in Spring Boot and Microservices?
Stateless services = easy scaling in Kubernetes/Docker

Externalized config = 12-factor #3, handled via Spring Cloud Config, Vault, etc.

Port binding = embedded servers like Tomcat/Jetty/Netty in Spring Boot

Logs as streams = integrates with ELK, Graylog, Loki

📌 Real Example in Spring Boot:
Use @Value or @ConfigurationProperties to inject environment variables.

Use spring-boot-starter-logging to log to stdout.

Use Dockerfile and JAR build for build–release–run separation.
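For illustration, here is a minimal @ConfigurationProperties sketch for factor #3 (externalized config); the payment prefix and fields are hypothetical:

import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.context.annotation.Configuration;

@Configuration
@ConfigurationProperties(prefix = "payment") // binds payment.url / payment.timeout (e.g., PAYMENT_URL, PAYMENT_TIMEOUT env vars)
public class PaymentProperties {

    private String url;     // externalized endpoint, never hard-coded
    private int timeout;    // externalized timeout in seconds

    public String getUrl() { return url; }
    public void setUrl(String url) { this.url = url; }
    public int getTimeout() { return timeout; }
    public void setTimeout(int timeout) { this.timeout = timeout; }
}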

Q.5) How did you use Microservice Design Patterns like Aggregator and Proxy?
1. Aggregator Pattern
Use Case:
In one of my Spring Boot-based banking projects, we had a requirement to show the customer dashboard with a consolidated view of:

Savings Account details

Fixed Deposit status

Loan summary

UPI transactions

Instead of exposing multiple APIs to the frontend, we implemented the Aggregator pattern using a composite service.

Implementation:

Created a REST controller in an aggregator-service.

Used RestTemplate / WebClient to call downstream microservices in parallel (with CompletableFuture or Mono.zip()).

Combined all the responses into a single JSON response and returned it to the frontend.
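A minimal sketch of the parallel-call step with WebClient and Mono.zip; the downstream URLs and DTO types (AccountSummary, LoanSummary, DashboardResponse) are illustrative:

import org.springframework.stereotype.Service;
import org.springframework.web.reactive.function.client.WebClient;
import reactor.core.publisher.Mono;

@Service
public class DashboardAggregatorService {

    private final WebClient webClient;

    public DashboardAggregatorService(WebClient.Builder builder) {
        this.webClient = builder.build();
    }

    public Mono<DashboardResponse> getDashboard(String customerId) {
        Mono<AccountSummary> account = webClient.get()
                .uri("http://account-service/api/accounts/{id}", customerId)
                .retrieve()
                .bodyToMono(AccountSummary.class);

        Mono<LoanSummary> loan = webClient.get()
                .uri("http://loan-service/api/loans/{id}", customerId)
                .retrieve()
                .bodyToMono(LoanSummary.class);

        // Both calls run concurrently; zip combines the results into one response
        return Mono.zip(account, loan)
                .map(tuple -> new DashboardResponse(tuple.getT1(), tuple.getT2()));
    }
}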

Tech Stack:

Spring Boot + WebFlux + Project Reactor

Load-balanced WebClient via Spring Cloud LoadBalancer

Circuit Breaker with Resilience4j

Q.6) How did you use the Proxy Pattern (API Gateway)?
Use Case:
We used the Proxy pattern to centralize routing and handle cross-cutting concerns like authentication, logging, and throttling.

Implementation:

Used Spring Cloud Gateway as the API Gateway.

Defined routes for each microservice (/accounts/**, /loans/**, /transactions/**).

Enabled request header manipulation, authentication token propagation, and circuit breaker using Resilience4j.

Features handled via the Gateway:

Authentication (with Keycloak/JWT)

Rate limiting (via Redis)

Logging and tracing (Spring Cloud Sleuth + Zipkin)

Retry logic
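Routes like /accounts/** can be declared in Java with Spring Cloud Gateway's RouteLocatorBuilder; a minimal sketch, with illustrative route IDs and lb:// service names:

import org.springframework.cloud.gateway.route.RouteLocator;
import org.springframework.cloud.gateway.route.builder.RouteLocatorBuilder;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class GatewayRouteConfig {

    @Bean
    public RouteLocator routes(RouteLocatorBuilder builder) {
        return builder.routes()
                // /accounts/** is load-balanced to account-service instances
                .route("accounts", r -> r.path("/accounts/**")
                        .filters(f -> f.addRequestHeader("X-Gateway", "spring-cloud-gateway"))
                        .uri("lb://ACCOUNT-SERVICE"))
                .route("loans", r -> r.path("/loans/**").uri("lb://LOAN-SERVICE"))
                .route("transactions", r -> r.path("/transactions/**").uri("lb://TRANSACTION-SERVICE"))
                .build();
    }
}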

Q.7) How do you test Microservices?
This answer covers concepts like bucket testing, split testing, Selenium, etc.

✅ 1. Unit Testing
Purpose: Test individual classes or methods in isolation (e.g., services, controllers).

Tools:

JUnit 5, Mockito, AssertJ

Use @WebFluxTest, @DataJpaTest, or @SpringBootTest as needed.

Example: Mocking repository/service methods and asserting business logic.
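A minimal JUnit 5 + Mockito sketch; AccountService and AccountRepository are hypothetical classes:

import static org.assertj.core.api.Assertions.assertThat;
import static org.mockito.Mockito.when;

import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.extension.ExtendWith;
import org.mockito.InjectMocks;
import org.mockito.Mock;
import org.mockito.junit.jupiter.MockitoExtension;

@ExtendWith(MockitoExtension.class)
class AccountServiceTest {

    @Mock
    private AccountRepository accountRepository;   // mocked dependency

    @InjectMocks
    private AccountService accountService;         // class under test

    @Test
    void returnsBalanceForExistingAccount() {
        when(accountRepository.findBalance("ACC-1")).thenReturn(500.0);

        double balance = accountService.getBalance("ACC-1");

        assertThat(balance).isEqualTo(500.0);
    }
}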

✅ 2. Integration Testing
Purpose: Test integration between microservices components (REST, DB, Kafka, etc.).

Tools:

Testcontainers (for spinning up DB, Redis, Kafka, etc.)

WebTestClient or RestAssured for HTTP calls

Example: Start full Spring context and test APIs end-to-end with real DB/Kafka in containers.
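A minimal Testcontainers sketch, assuming Docker is available and spring-webflux is on the classpath for WebTestClient; the image tag and endpoint are illustrative:

import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.test.context.DynamicPropertyRegistry;
import org.springframework.test.context.DynamicPropertySource;
import org.springframework.test.web.reactive.server.WebTestClient;
import org.testcontainers.containers.PostgreSQLContainer;
import org.testcontainers.junit.jupiter.Container;
import org.testcontainers.junit.jupiter.Testcontainers;

@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT)
@Testcontainers
class AccountApiIntegrationTest {

    // Real PostgreSQL started in Docker for the duration of the test class
    @Container
    static PostgreSQLContainer<?> postgres = new PostgreSQLContainer<>("postgres:16");

    // Point the Spring datasource at the container
    @DynamicPropertySource
    static void datasourceProps(DynamicPropertyRegistry registry) {
        registry.add("spring.datasource.url", postgres::getJdbcUrl);
        registry.add("spring.datasource.username", postgres::getUsername);
        registry.add("spring.datasource.password", postgres::getPassword);
    }

    @Autowired
    private WebTestClient webTestClient;

    @Test
    void fetchesAccount() {
        webTestClient.get().uri("/api/accounts/ACC-1")   // hypothetical endpoint
                .exchange()
                .expectStatus().isOk();
    }
}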

✅ 3. Contract Testing
Purpose: Validate that provider and consumer microservices agree on the API contract.

Tools:

Spring Cloud Contract

Pact

Use Case: If order-service provides an API consumed by payment-service, contract tests ensure both stay in sync even when deployed separately.

✅ 4. End-to-End (E2E) / UI Testing
Purpose: Validate the complete user journey across microservices via the UI.

Tools:

Selenium WebDriver

Cypress (for modern frontend apps)

Example: Automate the VKYC flow from login → OTP → document upload using Selenium.

✅ 5. Split Testing / A/B Testing
Purpose: Serve different versions of a feature to different user groups to compare performance/UX.

How:

Use feature flags with tools like Unleash, LaunchDarkly, or Spring Cloud Feature Toggle

Split traffic (e.g., 50% get old journey, 50% get new)

Use Case: Test a new KYC journey only for 20% of customers.

✅ 6. Bucket Testing (Canary Testing)
Purpose: Gradually roll out a new microservice version to a subset of users.

How:

Deploy the new version to a small number of pods (a canary release) behind the load balancer.

Route a small percentage of traffic to the canary.

Monitor for errors/latency before full rollout.

Tools:

Kubernetes + Istio (or Linkerd) for traffic routing

Prometheus + Grafana for monitoring

✅ 7. Performance Testing
Purpose: Measure system behavior under load and stress.

Tools:

JMeter, Gatling, k6

Use Case: Simulate 1000 users hitting /api/verifyOtp and measure response time, error rate.

✅ 8. Resilience and Chaos Testing
Purpose: Validate microservices behavior during failures (e.g., timeouts, exceptions).

Tools:

Chaos Monkey for Spring Boot

Gremlin, LitmusChaos

Example: Simulate downstream service failure and ensure circuit breaker fallback logic works.


Q.8) What is an API Gateway in Microservices Architecture?
✅ Definition:
An API Gateway is a single entry point for all client requests in a microservices architecture. It handles routing, authentication, rate limiting, load balancing, and more.

✅ Why is it needed?
In microservices:

Clients would otherwise need to call each service individually.

You want to abstract internal microservice URLs.

You need to centralize cross-cutting concerns (like security, logging, throttling, CORS).

✅ Key Features of an API Gateway:
Request routing: Directs the request to the correct microservice.
Load balancing: Distributes requests among service instances.
Authentication/Authorization: Validates tokens (e.g., JWT) and enforces access rules.
Rate limiting / Throttling: Protects services from overload.
Response aggregation: Combines results from multiple services (aggregator pattern).
Circuit breaking: Falls back gracefully if a downstream service fails (Resilience4j, etc.).
Logging and metrics: Centralized logging and metrics collection (e.g., with Sleuth/Zipkin).
CORS handling: Manages Cross-Origin Resource Sharing.

✅ Popular API Gateway Tools in Spring Ecosystem:
Spring Cloud Gateway (Recommended for Spring Boot)

Fully reactive, built on Project Reactor.

Supports routing, filters, circuit breakers, rate limiters.

Netflix Zuul (Deprecated in favor of Spring Cloud Gateway)

Older servlet-based gateway, not ideal for modern reactive apps.

Third-party options:

Kong, NGINX, Traefik, Istio Gateway

✅ Spring Cloud Gateway Setup (Maven):
 
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-gateway</artifactId>
</dependency>
✅ Example Route Configuration (YAML):
spring:
  cloud:
    gateway:
      routes:
        - id: account-service
          uri: lb://ACCOUNT-SERVICE
          predicates:
            - Path=/accounts/**
          filters:
            - AddRequestHeader=X-Request-ID, ${random.uuid}
lb:// = uses Spring Cloud LoadBalancer

Routes /accounts/** to the account-service

Adds custom request headers

✅ Diagram:
Client
  ↓
API Gateway
  ├──> Account Service
  ├──> Loan Service
  ├──> Transaction Service
✅ Summary:
Framework: Spring Cloud Gateway
Use Case: Central entry point for routing & security
Patterns Supported: Proxy, Aggregator, Authentication, Rate Limiting
Built-in Support: Filters, Circuit Breakers, LoadBalancer

Q.9) How do you connect multiple databases using Spring Boot?
✅ Use Case:
In microservices or modular monoliths, sometimes a Spring Boot application needs to connect to multiple databases, such as:

One database for user data (e.g., PostgreSQL)

Another for audit logs or reports (e.g., MySQL or Oracle)

✅ Approach: Use Multiple DataSource, EntityManagerFactory, and TransactionManager
✅ Step-by-Step Configuration (for JPA-based setup):
🔹 1. Define properties for both databases in application.yml:
 
spring:
  datasource:
    primary:
      url: jdbc:mysql://localhost:3306/userdb
      username: user
      password: pass
      driver-class-name: com.mysql.cj.jdbc.Driver
    secondary:
      url: jdbc:postgresql://localhost:5432/auditdb
      username: audit
      password: pass
      driver-class-name: org.postgresql.Driver
🔹 2. Configure the Primary Database:
 
@Configuration
@EnableTransactionManagement
@EnableJpaRepositories(
    basePackages = "com.example.repo.user",
    entityManagerFactoryRef = "primaryEntityManagerFactory",
    transactionManagerRef = "primaryTransactionManager"
)
public class PrimaryDbConfig {

    @Bean
    @Primary
    @ConfigurationProperties("spring.datasource.primary")
    public DataSourceProperties primaryDataSourceProperties() {
        return new DataSourceProperties();
    }

    @Bean
    @Primary
    public DataSource primaryDataSource() {
        return primaryDataSourceProperties().initializeDataSourceBuilder().build();
    }

    @Bean
    @Primary
    public LocalContainerEntityManagerFactoryBean primaryEntityManagerFactory(
            EntityManagerFactoryBuilder builder) {
        return builder
                .dataSource(primaryDataSource())
                .packages("com.example.entity.user")
                .persistenceUnit("primary")
                .build();
    }

    @Bean
    @Primary
    public PlatformTransactionManager primaryTransactionManager(
            @Qualifier("primaryEntityManagerFactory") EntityManagerFactory emf) {
        return new JpaTransactionManager(emf);
    }
}
🔹 3. Configure the Secondary Database:
 
@Configuration
@EnableJpaRepositories(
    basePackages = "com.example.repo.audit",
    entityManagerFactoryRef = "secondaryEntityManagerFactory",
    transactionManagerRef = "secondaryTransactionManager"
)
public class SecondaryDbConfig {

    @Bean
    @ConfigurationProperties("spring.datasource.secondary")
    public DataSourceProperties secondaryDataSourceProperties() {
        return new DataSourceProperties();
    }

    @Bean
    public DataSource secondaryDataSource() {
        return secondaryDataSourceProperties().initializeDataSourceBuilder().build();
    }

    @Bean
    public LocalContainerEntityManagerFactoryBean secondaryEntityManagerFactory(
            EntityManagerFactoryBuilder builder) {
        return builder
                .dataSource(secondaryDataSource())
                .packages("com.example.entity.audit")
                .persistenceUnit("secondary")
                .build();
    }

    @Bean
    public PlatformTransactionManager secondaryTransactionManager(
            @Qualifier("secondaryEntityManagerFactory") EntityManagerFactory emf) {
        return new JpaTransactionManager(emf);
    }
}
✅ Best Practices:
Use @Primary: Mark one datasource as the default.
Use @Qualifier: Avoid ambiguity between beans of the same type.
Logical package separation: Keep entities, repositories, and configs in clearly separated packages per database.
Use EntityManagerFactoryBuilder: For consistent JPA setup across both databases.

✅ Non-JPA Alternative:
If you're using JdbcTemplate or Spring Data R2DBC, the pattern is similar—define multiple DataSource or ConnectionFactory beans and wire them into your repositories or DAOs.
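A minimal sketch of the JdbcTemplate variant, reusing the two DataSource beans defined in the JPA configuration above:

import javax.sql.DataSource;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.jdbc.core.JdbcTemplate;

@Configuration
public class JdbcTemplateConfig {

    // JdbcTemplate backed by the primary (user) database
    @Bean
    public JdbcTemplate primaryJdbcTemplate(@Qualifier("primaryDataSource") DataSource dataSource) {
        return new JdbcTemplate(dataSource);
    }

    // JdbcTemplate backed by the secondary (audit) database
    @Bean
    public JdbcTemplate secondaryJdbcTemplate(@Qualifier("secondaryDataSource") DataSource dataSource) {
        return new JdbcTemplate(dataSource);
    }
}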

✅ Summary:
@EnableJpaRepositories: Activates Spring Data JPA per database
DataSourceProperties: Loads DB config from application.yml
EntityManagerFactory: Manages the persistence context per database
TransactionManager: Manages transactions per database
 

Q.10) How did you write Logs in microservices?

In microservices, logs are typically written using SLF4J + Logback and then centralized using tools like:

ELK Stack (Elasticsearch + Logstash + Kibana)

Graylog

Splunk

Fluentd + Grafana + Loki

OpenTelemetry (OTel) with Grafana

✅ Detailed Logging Strategy in Microservices:
1. Use SLF4J + Logback for Application Logging
In each microservice:
 
private static final Logger logger = LoggerFactory.getLogger(MyService.class);

logger.info("User {} logged in at {}", userId, timestamp);
logger.error("Payment failed for user {}", userId, ex);
Log format in logback-spring.xml:
 
<encoder>
  <pattern>%d{yyyy-MM-dd HH:mm:ss} [%thread] %-5level %logger{36} - %msg%n</pattern>
</encoder>
2. Include Essential Metadata in Logs
Trace ID / Span ID (for distributed tracing)

Service name

Environment (dev/stage/prod)

User ID / IP / Request ID

Use Spring Cloud Sleuth (or Micrometer Tracing in Spring Boot 3+) to auto-inject trace IDs into logs.
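Custom metadata such as a request ID can be pushed into every log line via SLF4J's MDC. A minimal sketch for a servlet-based Spring Boot 3 service (the filter and key names are illustrative); the value becomes available in the log pattern as %X{requestId}:

import jakarta.servlet.Filter;
import jakarta.servlet.FilterChain;
import jakarta.servlet.ServletException;
import jakarta.servlet.ServletRequest;
import jakarta.servlet.ServletResponse;
import java.io.IOException;
import java.util.UUID;
import org.slf4j.MDC;
import org.springframework.stereotype.Component;

@Component
public class RequestIdFilter implements Filter {

    @Override
    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
            throws IOException, ServletException {
        MDC.put("requestId", UUID.randomUUID().toString());  // tag all logs for this request
        try {
            chain.doFilter(request, response);
        } finally {
            MDC.clear();  // avoid leaking values across pooled threads
        }
    }
}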

3. Log Aggregation with ELK Stack (Example)
Logstash: Collects and processes log files
Elasticsearch: Stores structured log data
Kibana: UI for log search & analysis

Configure microservices to write logs to file or push to Logstash via TCP/HTTP.

4. Graylog or Splunk
Graylog: Open-source alternative to ELK, easier setup for small teams.

Splunk: Enterprise-grade, powerful analytics + alerting.

Spring Boot logs → Filebeat/Fluentd → Graylog/Splunk

5. Modern Approach: Grafana Loki
Use Grafana + Loki + Promtail to collect logs

Native integration with Grafana dashboards

Lightweight and Kubernetes-native

✅ Logging in Kubernetes / Cloud:
Use Fluent Bit, Promtail, or Filebeat to collect logs from containers.

Enrich logs with Kubernetes metadata (pod name, namespace, etc.)

Forward to Loki, Elasticsearch, or a managed cloud logging service (e.g., GCP Cloud Logging).

✅ Bonus: Structured + JSON Logging
Configure logs as JSON to make parsing easier for ELK/Graylog:
 
<encoder class="net.logstash.logback.encoder.LogstashEncoder" />
✅ Summary:
SLF4J + Logback: Standard logging API and backend
Sleuth / Micrometer Tracing: Adds trace & span IDs for log correlation
ELK / Graylog: Centralized logging platforms
Loki + Grafana: Lightweight, Kubernetes-native log monitoring
Splunk: Enterprise-grade search, alerting, and dashboards
 

Q.11) How is a session maintained or validated in a Microservices architecture?
✅ Short Answer:
In microservices, traditional HTTP sessions (server-side) are not used. Instead, stateless authentication using tokens like JWT (JSON Web Tokens) or opaque access tokens is preferred.

This enables:

Scalability (no session stickiness)

Stateless APIs

Cross-service authentication & authorization

✅ Options to Maintain Session-like State in Microservices:
🔹 1. JWT Token (Stateless & Recommended)
After login, the client receives a signed JWT token.

The token contains user info (claims) like user ID, roles, expiry.

The token is sent with every request via HTTP Authorization: Bearer <token>.

Each microservice validates the JWT locally (using shared secret or public key).

Example:
 
Authorization: Bearer eyJhbGciOiJIUzI1NiIsInR...
Benefits:

No shared session storage required.

Easy to scale horizontally.

Supports stateless services.

🔹 2. Session Store (Stateful Alternative)
If you must store session:

Use external session store: Redis, Memcached, etc.

Store session ID in a cookie: SESSION=abc123.

Each service checks Redis for user/session info.

Use case: SSO, legacy systems, logout control.

🔹 3. API Gateway + OAuth2
Centralized login handled by an Authentication Server (e.g., Keycloak, Auth0, Spring Authorization Server).

Gateway validates token and routes to downstream services.

Internal microservices trust the gateway or introspect the token.

Spring Security + Spring Cloud Gateway:
 
spring:
  security:
    oauth2:
      resourceserver:
        jwt:
          issuer-uri: https://auth.example.com
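In a downstream, servlet-based microservice acting as a resource server, the JWT validation can be wired with Spring Security 6 roughly like this (a sketch; the paths are illustrative):

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.security.config.Customizer;
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.web.SecurityFilterChain;

@Configuration
public class ResourceServerSecurityConfig {

    @Bean
    public SecurityFilterChain securityFilterChain(HttpSecurity http) throws Exception {
        http
            .csrf(csrf -> csrf.disable())                       // stateless APIs typically disable CSRF
            .authorizeHttpRequests(auth -> auth
                .requestMatchers("/actuator/health").permitAll()
                .anyRequest().authenticated())
            // Validates the incoming JWT against the issuer configured in application.yml
            .oauth2ResourceServer(oauth2 -> oauth2.jwt(Customizer.withDefaults()));
        return http.build();
    }
}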
✅ Validation in Microservices:
Token validation: Each microservice or the gateway
User role checks: Via JWT claims or token introspection
Token expiration: Verified via the exp claim in the JWT
Revocation: Requires an external cache or DB check (optional)

✅ Best Practices:
Prefer stateless authentication: Avoids scaling and session-replication issues
Use short-lived access tokens: Reduces risk if a token is leaked
Add a refresh-token mechanism: Renews sessions without re-authentication
Validate the JWT signature: Ensures the token comes from a trusted issuer
Use HTTPS only: Prevents token interception

✅ Summary:
JWT Token: stateless and scalable; recommended for modern microservices
Session Cookie: stateful; needs Redis or a similar external store
OAuth2 with Gateway: stateless; good for SSO and centralized auth

Q.12) How to achieve service-to-service communication in Microservices?
🔹 A. Synchronous Communication
This involves a direct request-response interaction between services using protocols like HTTP/REST or gRPC.

📌 Example: Using RestTemplate (legacy, in maintenance mode) or WebClient (preferred in Spring Boot 3+)
 
WebClient webClient = WebClient.create();
String response = webClient.get()
    .uri("http://order-service/api/orders/123")
    .retrieve()
    .bodyToMono(String.class)
    .block();  // Blocking call
✅ Pros:
Simple to implement

Easy to debug and trace

Real-time response

⚠️ Cons:
Tight coupling

Higher latency

Service availability dependency

🔹 B. Asynchronous Communication
Involves message brokers for decoupled communication. Services publish events to queues or topics and subscribe to them.

📌 Common tools:
RabbitMQ (AMQP)

Apache Kafka

ActiveMQ

AWS SNS/SQS

📌 Example: RabbitMQ (Spring Boot)
Publisher:

 
rabbitTemplate.convertAndSend("user.exchange", "user.created", userPayload);
Consumer:

 
@RabbitListener(queues = "user.queue")
public void handleUserCreated(UserEvent event) {
    // handle event
}
✅ Pros:
Loose coupling

High scalability and reliability

Supports event-driven architecture

⚠️ Cons:
Eventual consistency

Harder to debug

Retry and error handling needed

✅ Bonus: Service Discovery for Dynamic Communication
Use Spring Cloud Eureka or Consul for dynamically resolving service URLs:

 
order-service:
  url: http://ORDER-SERVICE/api/orders
Spring Cloud LoadBalancer + OpenFeign can simplify service-to-service calls:
 
@FeignClient(name = "order-service")
public interface OrderClient {
    @GetMapping("/api/orders/{id}")
    OrderDto getOrder(@PathVariable String id);
}
✅ Summary:
Synchronous: REST, gRPC, Feign (direct, blocking, immediate response)
Asynchronous: Kafka, RabbitMQ (event-driven, loosely coupled)


Q.13) What is a Circuit Breaker in Microservices?
🔹 Correct Answer:
A Circuit Breaker is a resilience pattern used in microservices to prevent cascading failures when a dependent service is down, slow, or unresponsive.

🛑 Why is it needed?
Without a circuit breaker:

If Service A calls Service B and B is down, A will keep retrying.

This causes resource exhaustion, high latency, and system-wide failure.

With a circuit breaker:

Service A stops calling Service B temporarily if failures exceed a threshold.

Automatically resumes when Service B recovers.

🔌 How does it work?
Closed: All requests are allowed; failures are counted.
Open: Requests are blocked immediately; a wait/timeout period starts.
Half-Open: A few test requests are allowed; if they succeed, the circuit goes back to Closed.

⚙️ How to implement in Spring Boot (latest versions)
✅ Spring Cloud Circuit Breaker (Recommended)
Hystrix is deprecated and no longer supported on Spring Boot 3.x. Instead, use:

Resilience4j (lightweight and modular)

Spring Cloud Circuit Breaker abstraction

📦 Maven Dependency (Resilience4j + Spring Boot)
 
<dependency>
  <groupId>io.github.resilience4j</groupId>
  <artifactId>resilience4j-spring-boot3</artifactId>
</dependency>
🔧 Example using @CircuitBreaker annotation
 
@CircuitBreaker(name = "myServiceCB", fallbackMethod = "fallbackMethod")
public String callRemoteService() {
    return restTemplate.getForObject("http://remote-service/api", String.class);
}

public String fallbackMethod(Throwable t) {
    return "Remote service unavailable. Please try later.";
}
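The thresholds for the myServiceCB instance are typically configured in application.yml; a minimal sketch with illustrative values:

resilience4j:
  circuitbreaker:
    instances:
      myServiceCB:
        slidingWindowSize: 10                        # evaluate the last 10 calls
        failureRateThreshold: 50                     # open the circuit at 50% failures
        waitDurationInOpenState: 10s                 # stay open for 10s before going half-open
        permittedNumberOfCallsInHalfOpenState: 3     # test calls allowed in half-open state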
πŸ” Comparison: Hystrix vs Resilience4j
Feature Hystrix (Deprecated) Resilience4j (Recommended)
Spring Boot 3.x ❌ Not supported ✅ Yes
Modularity Monolithic Modular
Lightweight
Future-proof

Q.24) What did you do for distributed tracing in microservices?
For distributed tracing across microservices, I used:

✅ Spring Cloud Sleuth + Zipkin or OpenTelemetry in latest Spring Boot 3+

πŸ” What is Distributed Tracing?
In microservices, a single request often flows through multiple services. Distributed tracing helps you:

Track the entire flow of a request across services.

Identify latency bottlenecks.

Debug issues like where failures occur or how much time each service takes.

⚙️ Tools Used:
Spring Cloud Sleuth: Automatically adds trace IDs and span IDs to logs
Zipkin or Jaeger: Collects and visualizes traces
OpenTelemetry: Vendor-neutral tracing standard (recommended in the latest versions)

📦 Sleuth + Zipkin Setup (Spring Boot ≤ 2.x)
 
<dependency>
  <groupId>org.springframework.cloud</groupId>
  <artifactId>spring-cloud-starter-sleuth</artifactId>
</dependency>
<dependency>
  <groupId>org.springframework.cloud</groupId>
  <artifactId>spring-cloud-starter-zipkin</artifactId>
</dependency>
In application.properties:

 
spring.zipkin.base-url=http://localhost:9411
spring.sleuth.sampler.probability=1.0

Each log line will have:
 
traceId=4b3e5f10b4942b1a, spanId=4b3e5f10b4942b1a
🧭 Spring Boot 3+ Update:
🚫 Spring Cloud Sleuth is deprecated.
✅ Use Micrometer Tracing + Brave / OpenTelemetry:

 
<dependency>
  <groupId>io.micrometer</groupId>
  <artifactId>micrometer-tracing-bridge-brave</artifactId>
</dependency>
<dependency>
  <groupId>io.zipkin.reporter2</groupId>
  <artifactId>zipkin-reporter-brave</artifactId>
</dependency>
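With Micrometer Tracing, the equivalent application.properties entries look roughly like this (values are illustrative):

management.tracing.sampling.probability=1.0
management.zipkin.tracing.endpoint=http://localhost:9411/api/v2/spans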
📊 Trace Visualization:
You can view traces on:

Zipkin UI: http://localhost:9411

Jaeger UI

Grafana Tempo (with OpenTelemetry)

✅ Summary:
Trace propagation: Sleuth (Boot ≤ 2.x), Micrometer Tracing (Boot 3.x)
Trace collection/storage: Zipkin, Jaeger, Tempo
Visualization: Zipkin UI, Grafana, Jaeger UI
Logs with trace ID: Yes; links logs across services
 

Q.14) How to create a Fat Jar in Spring Boot?
Ans:
Spring Boot's Maven plugin automatically creates a fat jar (also called an executable jar) that contains all dependencies bundled together.

To enable this, add the following plugin in your pom.xml inside the <build> section:
 
<build>
  <plugins>
    <plugin>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-maven-plugin</artifactId>
      <version><!-- specify your Spring Boot version or omit for default --></version>
      <executions>
        <execution>
          <goals>
            <goal>repackage</goal>
          </goals>
        </execution>
      </executions>
    </plugin>
  </plugins>
</build>
How it works:
When you run mvn package or mvn install, this plugin repackages your jar by including all dependencies into a single executable jar.

You can run the resulting jar with:
 
java -jar target/your-app-name.jar
No additional configuration is usually needed if you use the standard Spring Boot starter parent.

Note:
If you use Spring Boot starter parent in your POM, the plugin is already included and configured by default.

So in most cases, just running mvn clean package is enough to create a fat jar.
In your IDE's Run configuration, you can likewise set the Maven goal to "package".
