Weeks 6 and 7

Amy Ma
Jul 21, 2021

Configured the Jetstack certificate manager (cert-manager) for HPCC, which allows TLS certificates for HPCC to be provisioned independently and automatically. Then I tested different use cases to see how the system would respond to requests under different circumstances.

  • Using the ingress-nginx controller: added the ssl-redirect option to the ConfigMap to redirect traffic to HTTPS when TLS is enabled
  • Used the backend-protocol annotation to indicate how NGINX should communicate with the backend service:
nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
  • Added a basic Ingress route to the ECL Watch port in front of HPCC TLS, so the eclwatch service can be accessed through the Ingress controller’s external IP
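As a sketch, an Ingress combining these pieces might look like the following (the service name `eclwatch`, the port number, and the layout are assumptions for illustration, not HPCC’s exact values):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: eclwatch-ingress
  annotations:
    # Tell NGINX to speak HTTPS to the backend, since HPCC TLS is enabled
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
spec:
  ingressClassName: nginx
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: eclwatch   # placeholder service name
                port:
                  number: 8010   # placeholder ECL Watch port
```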

Then, I added the ingress-nginx controller to HPCC TLS with multiple services. The controller is used to configure routes and rules for the services running in Kubernetes. The NGINX controller routes multiple services under one Fully Qualified Domain Name (FQDN), and the certificate manager, cert-manager, automatically generates and configures certificates for these services so they are secure.
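A sketch of how multiple services can share one FQDN with cert-manager issuing the certificate (the issuer name, hostname, paths, and service names are illustrative assumptions):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hpcc-ingress
  annotations:
    # Ask cert-manager to issue a certificate through the named issuer (placeholder)
    cert-manager.io/cluster-issuer: letsencrypt-prod
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - hpcc.example.com
      secretName: hpcc-example-tls   # cert-manager stores the certificate here
  rules:
    - host: hpcc.example.com
      http:
        paths:
          - path: /eclwatch
            pathType: Prefix
            backend:
              service:
                name: eclwatch       # placeholder service
                port:
                  number: 8010
          - path: /esp
            pathType: Prefix
            backend:
              service:
                name: eclservices    # placeholder service
                port:
                  number: 8010
```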

Then, I revised and finalized my documentation for the ingress-nginx controller features I tested.

Next, I began looking at a third-party ingress controller, HAProxy. HAProxy offers load balancing and proxying for Transmission Control Protocol (TCP) and HTTP-based applications. I also implemented HAProxy’s basic authentication feature, used when traffic is HTTP: it displays a login prompt and asks users for credentials before giving access to the server’s content. It has a drawback, though; the user’s credentials are transmitted over plain HTTP, which is not protected. This is why enabling TLS (as seen above) is necessary: the traffic will be encrypted, mitigating the risk.
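In plain HAProxy configuration, basic authentication can be sketched like this (the userlist name, user, password, and server address are placeholders):

```
# Define the allowed users (insecure-password stores the password in clear text)
userlist admins
    user admin insecure-password changeme

frontend www
    bind *:80
    # Prompt for credentials unless the request already carries valid ones
    http-request auth realm Restricted unless { http_auth(admins) }
    default_backend servers

backend servers
    server web1 127.0.0.1:8080
```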

Load balancing refers to the process of distributing a set of tasks over a set of resources, with the aim of making their overall processing more efficient. It can optimize response time and avoid unevenly overloading some compute nodes while other compute nodes are left idle. I deployed the HAProxy Ingress controller, then tested its basic Layer 7 (HTTP) load-balancing functionality: I defined different path names mapped to different backend servers, and the load balancer forwarded each request to the matching backend based on its path.

No Load Balancing: In a web application environment with no load balancing, the user connects directly to a web server. If the web server goes down, the user will no longer be able to access the web server. If many users are trying to access the server at the same time, and it is unable to handle the load, they will have a slow experience or they may not be able to connect at all.

With Load Balancing: One way to load balance network traffic to multiple servers is to use Layer 4 (TCP) load balancing. This forwards user traffic based on IP address and port. If a request comes in for ‘http://example.com/anything’, the traffic will be forwarded to the backend that handles all requests for example.com on that port.

The user accesses the load balancer, which forwards the user’s request to the web-backend group of backend servers. The backend server which is selected will respond directly to the user’s request.
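The Layer 4 setup described above can be sketched in HAProxy configuration (backend names and server addresses are placeholders):

```
frontend www
    bind *:80
    mode tcp
    # Every request on this port goes to the same backend group
    default_backend web-backend

backend web-backend
    mode tcp
    balance roundrobin
    server web1 10.0.0.11:80 check
    server web2 10.0.0.12:80 check
```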

The other way to load balance network traffic is to use Layer 7 (HTTP) load balancing. This approach allows the load balancer to forward requests to different backend servers based on the content of the user’s request, which lets multiple web servers run under the same domain name and port.

If a request comes in for example.com/pathname, it is forwarded to that path’s backend, which is a set of servers that run a blog application. Other requests are forwarded to the web-backend, which might be running another application.
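The path-based routing above can be sketched as HAProxy configuration (the /blog path, backend names, and addresses are placeholders):

```
frontend www
    bind *:80
    mode http
    # Route requests whose path begins with /blog to the blog servers
    acl is_blog path_beg /blog
    use_backend blog-backend if is_blog
    default_backend web-backend

backend blog-backend
    server blog1 10.0.0.21:80 check

backend web-backend
    server web1 10.0.0.11:80 check
```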

Example of an HAProxy Ingress file that exposes the echoserver service. The hostname resolves to an ingress controller node, which handles all traffic on port 8080.

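A minimal sketch of such an Ingress, assuming a Service named echoserver exposing port 8080 and an HAProxy ingress class (the hostname is illustrative):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: echoserver-ingress
spec:
  ingressClassName: haproxy
  rules:
    - host: echoserver.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: echoserver
                port:
                  number: 8080
```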