Re: [Sks-devel] SKS scaling configuration
From: Michiel van Baak
Subject: Re: [Sks-devel] SKS scaling configuration
Date: Mon, 18 Feb 2019 11:12:02 +0100
User-agent: NeoMutt/20180716
On Sun, Feb 17, 2019 at 09:18:11AM -0800, Todd Fleisher wrote:
> The setup uses a caching NGINX server to reduce load on the backend nodes
> running SKS.
> His recommendation is to run at least 3 SKS instances in the backend (I’m
> running 4).
> Only one of the backend SKS nodes is configured to gossip with the outside
> world on the WAN; it also gossips with the other backend SKS nodes on the LAN.
> The NGINX proxy is configured to prefer that node (the one gossiping with the
> outside world - let’s call it the "primary") for stats requests by giving it a
> much higher weight.
> As a quick aside, I’ve observed issues in my setup where the stats requests
> are often directed to the other, internal SKS backend nodes - presumably
> because the primary node times out under the higher load it sees while gossiping.
> That response then gets cached by the NGINX proxy and continues to be served,
> so my stats page reports only the internal gossip peer’s IP address instead of
> all of my external peers.
> If Kristian or anyone else has ideas on how to mitigate/minimize this, please
> do share.
> Whenever I check his SKS node @
> http://keys2.kfwebs.net:11371/pks/lookup?op=stats
> I always find it reporting his primary node eta_sks1 with external & internal
> peers listed.
>
> Here are the relevant NGINX configuration options. Obviously you need to
> change the server IP addresses & the hostname returned in the headers:
>
> # General pool for normal requests; the gossiping primary (.55) carries a
> # lower weight so most lookup traffic goes to the other two backends.
> upstream sks_servers
> {
>     server 192.168.0.55:11372 weight=5;
>     server 192.168.0.61:11371 weight=10;
>     server 192.168.0.36:11371 weight=10;
> }
>
> # Stats pool; the primary (.55) is weighted so heavily that it receives
> # nearly all stats requests.
> upstream sks_servers_primary
> {
>     server 192.168.0.55:11372 weight=9999;
>     server 192.168.0.61:11371 weight=1;
>     server 192.168.0.36:11371 weight=1;
> }
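
For context, the quoted snippet does not show the server/location wiring that
sends op=stats lookups to sks_servers_primary and everything else to the cached
sks_servers pool. A rough sketch of one way to do that is below; the cache zone
name, listen port, hostname and cache times are placeholders, not taken from
Todd's or Kristian's actual setup:

    # Route by the "op" query argument: stats goes to the primary-weighted
    # pool, everything else to the general pool.
    map $arg_op $sks_upstream {
        default sks_servers;
        stats   sks_servers_primary;
    }

    proxy_cache_path /var/cache/nginx/sks keys_zone=sks_cache:10m inactive=10m;

    server {
        listen 11371;
        server_name keys.example.net;   # placeholder hostname

        location /pks {
            # With a variable in proxy_pass, nginx looks the name up among the
            # defined upstream groups first, so no resolver is needed here.
            proxy_pass http://$sks_upstream;
            proxy_cache sks_cache;
            proxy_cache_valid 200 5m;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $remote_addr;
        }
    }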
I would only put the .55 server in the 'upstream sks_servers_primary' block, so
that upstream does not know about the others.
That way the stats call will only ever go to the primary.
The downside is that it won't fail over when the primary times out, but maybe
that is exactly what you want for this specific call.
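
A minimal sketch of that change, reusing the address from the quoted config:

    # Stats pool containing only the gossiping primary; no fail-over for
    # stats requests by design.
    upstream sks_servers_primary
    {
        server 192.168.0.55:11372;
    }

If some fail-over is still wanted, the other two servers could instead stay in
the block marked with nginx's 'backup' parameter, at the cost of occasionally
caching a non-primary stats page again.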
--
Michiel van Baak
address@hidden
GPG key: http://pgp.mit.edu/pks/lookup?op=get&search=0x6FFC75A2679ED069
NB: I have a new GPG key. Old one revoked and revoked key updated on keyservers.