๐ŸŒ Service Discovery & Load Balancing in Distributed Systems

Welcome back to The Code Hut Distributed Systems series! In this post, we’ll explore how distributed services find each other and balance load efficiently. 🔍⚖️

๐ŸŒ Why Service Discovery Matters

In dynamic distributed systems, services scale up and down and move across nodes, so hardcoding endpoints isn’t feasible. Service discovery lets services locate each other automatically.

1. 🗂️ Registry-Based Discovery

Services register themselves in a central registry:

  • 📌 Examples: Consul, Eureka, ZooKeeper
  • 🔎 Clients query the registry to find service instances
  • ⚡ Supports dynamic scaling and fault tolerance

// Spring Cloud Eureka client example: query the registry for an instance
@Service
public class OrderClient {
    @Autowired
    private DiscoveryClient discoveryClient;

    @Autowired
    private RestTemplate restTemplate;

    public Order getOrder(Long id) {
        // Ask the registry for live instances of ORDER-SERVICE
        // (a plain RestTemplate cannot resolve logical service names by itself)
        ServiceInstance instance = discoveryClient.getInstances("ORDER-SERVICE").get(0);
        return restTemplate.getForObject(instance.getUri() + "/orders/" + id, Order.class);
    }
}

2. ⚖️ Client-Side Load Balancing

The client chooses a service instance, often with a load balancing strategy:

  • 🔄 Round-robin, random, or weighted strategies
  • 📚 Libraries: Ribbon (now in maintenance mode), Spring Cloud LoadBalancer (its successor)

// Using Spring Cloud LoadBalancer
@Configuration
public class ClientConfig {
    // @LoadBalanced belongs on the RestTemplate bean definition:
    // it makes logical names like "order-service" resolve to real instances
    @Bean
    @LoadBalanced
    public RestTemplate restTemplate() {
        return new RestTemplate();
    }
}

@Service
public class OrderClient {
    @Autowired
    private RestTemplate restTemplate;

    public Order getOrder(Long id) {
        return restTemplate.getForObject("http://order-service/orders/" + id, Order.class);
    }
}
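To make the round-robin strategy concrete, here is a minimal, framework-free sketch of what a client-side balancer does internally (the class and instance URLs are illustrative, not part of any library):

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Minimal round-robin chooser: cycles through a fixed list of instance URLs.
public class RoundRobinBalancer {
    private final List<String> instances;
    private final AtomicInteger counter = new AtomicInteger();

    public RoundRobinBalancer(List<String> instances) {
        this.instances = List.copyOf(instances);
    }

    // Thread-safe: each call advances the counter and wraps around the list.
    public String next() {
        int i = Math.floorMod(counter.getAndIncrement(), instances.size());
        return instances.get(i);
    }
}
```

In a real system the instance list would come from the registry and be refreshed as instances register and deregister; weighted strategies simply bias which index gets picked.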

3. ๐Ÿ›️ Server-Side Load Balancing

The load balancer sits in front of services and distributes requests:

  • ⚙️ Examples: Nginx, HAProxy, Envoy
  • 🙈 Clients don’t need to know about multiple instances
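As an illustration, a minimal Nginx configuration (hostnames and ports are placeholders) that spreads requests across three backend instances, using the default round-robin strategy plus one weighted server:

```nginx
# Round-robin (the default) across three upstream instances
upstream order_service {
    server order-1.internal:8080;
    server order-2.internal:8080;
    server order-3.internal:8080 weight=2;  # weighted: receives roughly twice the traffic
}

server {
    listen 80;
    location /orders/ {
        proxy_pass http://order_service;  # clients only ever see this one endpoint
    }
}
```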

4. 🔗 Sticky Connections (Session Affinity)

Sticky connections ensure that a client is consistently routed to the same service instance, which can be important for stateful services or caching scenarios.

  • ⚡ Useful for session-based apps or services that store temporary state in memory
  • 📚 Examples: Nginx `ip_hash`, HAProxy `stick-table`, Kubernetes `sessionAffinity` in Services
  • 🔄 Be cautious: can reduce load distribution efficiency if one instance becomes overloaded

# Kubernetes Service with sessionAffinity
apiVersion: v1
kind: Service
metadata:
  name: order-service
spec:
  selector:
    app: order-service
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
  sessionAffinity: ClientIP
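For comparison, the Nginx `ip_hash` directive mentioned above achieves a similar effect at the reverse-proxy layer, hashing the client IP to pick a consistent upstream (hostnames are placeholders):

```nginx
upstream order_service {
    ip_hash;  # route each client IP to the same upstream instance
    server order-1.internal:8080;
    server order-2.internal:8080;
}
```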

5. ☁️ Exposing Apps & Load Balancer Types

In cloud environments, services are exposed via different load balancers or ingress controllers:

  • 🔹 Azure Load Balancer (Layer 4): TCP/UDP traffic distribution for VMs or services
  • 🔹 Azure Application Gateway (Layer 7): HTTP/HTTPS traffic with routing rules, SSL termination, WAF
  • 🔹 Kubernetes Ingress: Manages external HTTP/S access to services in the cluster
  • 🔹 Public vs Private: Decide if a service is externally accessible (public) or internal to the VNet (private)
  • 🔹 Cloud-native vs Cloud-agnostic: Use native Azure features for efficiency, or abstraction layers like Terraform/Helm for portability

# Example: Kubernetes Ingress for public HTTP access
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: order-service-ingress
spec:
  rules:
    - host: orders.example.com
      http:
        paths:
        - path: /
          pathType: Prefix
          backend:
            service:
              name: order-service
              port:
                number: 80

Next in the Series

In the next post, we’ll explore Consistency Models ⚖️ in distributed systems, including strong vs eventual consistency and trade-offs.

