sks-devel

SKS Recon/Gossip operational functionality


From: Jeremy T. Bouse
Subject: SKS Recon/Gossip operational functionality
Date: Thu, 29 Oct 2020 00:22:54 -0400

Okay, so now that my move has settled down and I've been working on getting my keyserver back online, I think I've run into a functionality issue that I'm trying to figure out how to work around.

So I'm trying to deploy within my AWS environment using ECS Fargate with an EFS volume for persistent storage. I have created a Docker image that mounts an EFS Access Point and downloads a recent key dump onto it. I then have my Docker image for SKS itself, whose entrypoint script checks for the existence of the KDB and PTree directories; if they don't exist but the key dump files are available, it runs the import before starting up. I have a third Docker image that is my NGINX image with the appropriate configuration.
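
For reference, the entrypoint logic amounts to something like the sketch below. The paths, the dump location, and the exact build options are placeholders, not necessarily what my image actually uses:

    #!/bin/sh
    # Sketch of the entrypoint check described above (paths are illustrative).
    BASEDIR=/var/lib/sks          # assumed SKS base directory on the EFS volume
    DUMPDIR=/var/lib/sks/dump     # assumed location of the downloaded key dump

    if [ ! -d "$BASEDIR/KDB" ] || [ ! -d "$BASEDIR/PTree" ]; then
        if ls "$DUMPDIR"/*.pgp >/dev/null 2>&1; then
            # Build the databases from the dump before first start
            sks build "$DUMPDIR"/*.pgp -n 10 -cache 100
            sks pbuild -cache 20 -ptree_cache 70
        fi
    fi

    # Hand off to the SKS database process (recon runs as its own process
    # alongside it in this sketch).
    exec sks db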

All Docker images have worked fine thus far in my testing. I have 2 SKS nodes launched with a key dump imported from October 26th, and they show around 604k keys. I have ECS using Cloud Map service discovery for the SKS nodes, which is used both to build each node's membership file and for the NGINX HTTP upstream configuration used for the proxy_pass back to SKS.
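
To illustrate what that wiring looks like (the hostnames here are just placeholders for the Cloud Map names, not my real ones), each node's membership file ends up as simple "host port" lines:

    # membership
    sks-1.sks.local 11370
    sks-2.sks.local 11370

and the NGINX side is roughly:

    # nginx.conf fragment (sketch)
    upstream sks_servers {
        server sks-1.sks.local:11371;
        server sks-2.sks.local:11371;
    }
    server {
        listen 11371;
        location / {
            proxy_pass http://sks_servers;
            proxy_set_header Host $host;
        }
    }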

Up to this point, everything appears to be functioning fine. I can hit the server URL (http://sks.undergrid.services) and reach NGINX. I can successfully search for keys, and when I check the statistics page and refresh, I can see it bouncing between the two SKS nodes. I've not yet opened the recon port outside my environment and don't currently have any external peers to begin gossiping and syncing with. This is where I'm not sure whether I'm starting to overthink the solution or have actually run into an issue.
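
For anyone who wants to poke at it, the checks I'm doing are just the standard HKP endpoints through NGINX (the search term below is only an example):

    # statistics page; the node name shown alternates between the two SKS containers
    curl "http://sks.undergrid.services/pks/lookup?op=stats"
    # key search
    curl "http://sks.undergrid.services/pks/lookup?op=index&search=test@example.com"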

From monitoring SKS I know it resolves the gossip peer hostnames into IP addresses. My intention was to stand up a network load balancer with listeners on 80, 11370 and 11371, where the 80/tcp and 11371/tcp HTTP listeners would forward to the target group with the NGINX containers and the 11370/tcp TCP listener would forward to the target group with the SKS containers. The hostname would then point to the NLB; for now I have it pointing to the IP of an NGINX container that is running just for testing. The issue I foresee is that while the SKS containers would have public IP addresses of their own, those addresses are dynamic and obviously wouldn't match the NLB IP address when they initiate a recon run.
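
In other words, the NLB wiring I had in mind is roughly the following (names, VPC IDs and ARNs are placeholders, not my actual resources):

    # TCP listener for recon -> SKS containers (target-type ip for Fargate tasks)
    aws elbv2 create-target-group --name sks-recon --protocol TCP --port 11370 \
        --vpc-id vpc-PLACEHOLDER --target-type ip
    aws elbv2 create-listener --load-balancer-arn "$NLB_ARN" --protocol TCP --port 11370 \
        --default-actions Type=forward,TargetGroupArn="$RECON_TG_ARN"

    # HTTP traffic on 80 and 11371 -> NGINX containers
    aws elbv2 create-target-group --name sks-http --protocol TCP --port 11371 \
        --vpc-id vpc-PLACEHOLDER --target-type ip
    aws elbv2 create-listener --load-balancer-arn "$NLB_ARN" --protocol TCP --port 11371 \
        --default-actions Type=forward,TargetGroupArn="$HTTP_TG_ARN"
    aws elbv2 create-listener --load-balancer-arn "$NLB_ARN" --protocol TCP --port 80 \
        --default-actions Type=forward,TargetGroupArn="$HTTP_TG_ARN"

The catch is that this only covers inbound traffic; outbound recon connections from the SKS tasks would still leave from each task's own dynamic public IP rather than from the NLB address that peers have in their membership files.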

So thoughts?
