Service Discovery & Load Balancing in Distributed Systems
Welcome back to The Code Hut Distributed Systems series! In this post, we’ll explore how distributed services find each other and balance load efficiently. ⚖️
Why Service Discovery Matters
In dynamic distributed systems, services scale up and down and move across nodes, so hardcoding endpoints isn’t feasible. Service discovery lets services locate each other automatically at runtime.
1. Registry-Based Discovery
Services register themselves in a central registry:
- Examples: Consul, Eureka, Zookeeper
- Clients query the registry to find service instances
- ⚡ Supports dynamic scaling and fault tolerance
```java
// Spring Cloud Eureka client example: query the registry for an instance
@Service
public class OrderClient {
    @Autowired
    private DiscoveryClient discoveryClient;

    public Order getOrder(Long id) {
        // Look up registered instances of ORDER-SERVICE from the registry
        ServiceInstance instance = discoveryClient.getInstances("ORDER-SERVICE").get(0);
        return new RestTemplate().getForObject(instance.getUri() + "/orders/" + id, Order.class);
    }
}
```
2. ⚖️ Client-Side Load Balancing
The client chooses a service instance, often with a load balancing strategy:
- Round-robin, random, or weighted
- Libraries: Ribbon, Spring Cloud LoadBalancer
```java
// Using Spring Cloud LoadBalancer: the RestTemplate bean is marked @LoadBalanced
@Configuration
public class ClientConfig {
    @Bean
    @LoadBalanced
    public RestTemplate restTemplate() {
        return new RestTemplate();
    }
}

@Service
public class OrderClient {
    @Autowired
    private RestTemplate restTemplate;

    public Order getOrder(Long id) {
        // "order-service" is a logical name, resolved to a real instance per request
        return restTemplate.getForObject("http://order-service/orders/" + id, Order.class);
    }
}
```
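Conceptually, the round-robin strategy mentioned above can be sketched as a tiny standalone helper (a hypothetical class for illustration, not Spring Cloud’s actual implementation; real balancers also track instance health and weights):

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Minimal round-robin chooser over a fixed list of instance addresses
public class RoundRobinBalancer {
    private final List<String> instances;
    private final AtomicInteger counter = new AtomicInteger();

    public RoundRobinBalancer(List<String> instances) {
        this.instances = instances;
    }

    public String next() {
        // Atomically advance the counter and wrap around the instance list
        int idx = Math.floorMod(counter.getAndIncrement(), instances.size());
        return instances.get(idx);
    }
}
```

Each call to `next()` returns the following instance in order, cycling back to the first after the last.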
3. Server-Side Load Balancing
The load balancer sits in front of services and distributes requests:
- ⚙️ Examples: Nginx, HAProxy, Envoy
- Clients don’t need to know about multiple instances
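With Nginx, for example, this looks roughly like the following (a minimal sketch; the addresses and weight are hypothetical):

```nginx
# Distribute requests across three backend instances (round-robin by default)
upstream order_service {
    server 10.0.0.11:8080;
    server 10.0.0.12:8080;
    server 10.0.0.13:8080 weight=2;  # weighted: receives roughly twice the traffic
}

server {
    listen 80;
    location / {
        proxy_pass http://order_service;
    }
}
```

The client only ever talks to the proxy on port 80; the set of backends can change without clients noticing.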
4. Sticky Connections (Session Affinity)
Sticky connections ensure that a client is consistently routed to the same service instance, which can be important for stateful services or caching scenarios.
- ⚡ Useful for session-based apps or services that store temporary state in memory
- Examples: Nginx `ip_hash`, HAProxy `stick-table`, Kubernetes `sessionAffinity` in Services
- Be cautious: can reduce load distribution efficiency if one instance becomes overloaded
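In Nginx, for instance, adding `ip_hash` to an upstream is enough to pin clients by source IP (a minimal sketch with hypothetical addresses):

```nginx
upstream order_service {
    ip_hash;  # requests from the same client IP always go to the same backend
    server 10.0.0.11:8080;
    server 10.0.0.12:8080;
}
```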
```yaml
# Kubernetes Service with sessionAffinity
apiVersion: v1
kind: Service
metadata:
  name: order-service
spec:
  selector:
    app: order-service
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
  sessionAffinity: ClientIP
```
5. ☁️ Exposing Apps & Load Balancer Types
In cloud environments, services are exposed via different load balancers or ingress controllers:
- Azure Load Balancer (Layer 4): TCP/UDP traffic distribution for VMs or services
- Azure Application Gateway (Layer 7): HTTP/HTTPS traffic with routing rules, SSL termination, WAF
- Kubernetes Ingress: Manages external HTTP/S access to services in the cluster
- Public vs Private: Decide if a service is externally accessible (public) or internal to the VNet (private)
- Cloud-native vs Cloud-agnostic: Use native Azure features for efficiency, or abstraction layers like Terraform/Helm for portability
```yaml
# Example: Kubernetes Ingress for public HTTP access
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: order-service-ingress
spec:
  rules:
    - host: orders.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: order-service
                port:
                  number: 80
```
Next in the Series
In the next post, we’ll explore Consistency Models ⚖️ in distributed systems, including strong vs eventual consistency and trade-offs.
Label for this post: Distributed Systems