
Re: [grouper-users] Any tips for k8s ingress configuration


  • From: Alex Poulos <>
  • To: Darren Boss <>
  • Cc: "Hyzer, Chris" <>, Christopher Bongaarts <>, "" <>
  • Subject: Re: [grouper-users] Any tips for k8s ingress configuration
  • Date: Wed, 29 Apr 2020 11:29:08 -0400

I think the standard advice (from somewhere on the wiki) is 2-3g of RAM for the WS and UI, and 12g for the daemon. FWIW we only give the daemon 3g and generally don't have issues (we do run 2 instances). I guess it depends on how big your group registry is and how many loader jobs there are.
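For what it's worth, here's a rough sketch of how those numbers could translate into Kubernetes resource settings. The container name, image tag, and exact values are placeholders I'm assuming for illustration, not anything from the wiki; tune them to your registry size and loader job count.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: grouper-daemon
spec:
  replicas: 2                        # two daemon instances, as mentioned above
  selector:
    matchLabels:
      app: grouper-daemon
  template:
    metadata:
      labels:
        app: grouper-daemon
    spec:
      containers:
        - name: daemon
          image: i2incommon/grouper:latest   # placeholder image/tag
          resources:
            requests:
              memory: "3Gi"          # 3g per daemon has worked for us
              cpu: "1"
            limits:
              memory: "12Gi"         # the wiki's suggested ceiling for the daemon
              cpu: "2"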

On Wed, Apr 29, 2020 at 9:14 AM Darren Boss <> wrote:
I figured out my issue.

TL;DR: I had configured a 3 second proxy_timeout on the LB, and that was what was causing the issue.

The longer version is that I started looking at the nginx ingress controller logs and would see status 499 errors just before the AJAX error. I was not familiar with error 499, and for good reason: it's Nginx-specific and means "client closed the connection", which hinted at more issues with the LB. From my searching, AWS ELB and Google's LB both seem to default to 60s, so that's what I changed my proxy_timeout to. I did see one error bubble up to the client after making the change, but didn't get the AJAX error. Knowing that the occasional request was taking longer than 3 seconds, I also bumped up the resources on the aio container to 2 CPUs.
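In case it helps anyone else, the relevant part of the LB stream config now looks roughly like this; the upstream name, node addresses, and ports are made up for the example:

stream {
    upstream k8s_ingress_https {
        hash $remote_addr;            # pin each client IP to one ingress node
        server 10.0.0.11:443;
        server 10.0.0.12:443;
    }

    server {
        listen 443;
        proxy_timeout 60s;            # was 3s, which caused the 499s and the AJAX error
        proxy_pass k8s_ingress_https;
    }
}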

This leads me into another question. As I transition the setup to use multiple containers, are there any suggestions for setting resource limits for the various containers?


On Tue, Apr 28, 2020 at 9:07 PM Hyzer, Chris <> wrote:

Ok, if it's sticky by source IP and that goes to one pod/node, then you are all set with that.

 

If you look in developer tools in the browser, you should see the network tab and the requests with an error, along with an HTTP error code. That might help and let you know what's going on (e.g. a redirect to authn, a CSRF error, or something else). See if you can tie that error to the web server logs or the Tomcat log.


From: Darren Boss
Sent: Tuesday, April 28, 2020 9:04 PM
To: Hyzer, Chris <>
Cc: Christopher Bongaarts <>; Alex Poulos <>;
Subject: Re: [grouper-users] Any tips for k8s ingress configuration

 

For the LB proxy I'm using stream blocks; there are no sticky options for TCP/UDP-based proxies. I've added the "hash $remote_addr;" option to the stream backend section, so once I've started a session I'm always going to the same node in the cluster, and I've confirmed this by logging %h (remote hostname) rather than %a (client IP) and making sure the IP wasn't changing. From there the connection is handled by the nginx ingress, where a sticky cookie is an option, but at this point in the connection I don't think it really matters since there is only one Grouper pod; I'm not running multiple replicas. I'll try sticky on the ingress and on the service as well just to make sure.
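To be specific, by sticky on the ingress I mean the usual ingress-nginx cookie affinity annotations, something along these lines (the cookie name is arbitrary):

metadata:
  annotations:
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-name: "grouper-route"
    nginx.ingress.kubernetes.io/session-cookie-max-age: "172800"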

 

It seems like the proxy-body-size change has improved things, but perhaps it's just a placebo. After a while I did get the AJAX error message, and I searched the logs for 413 Request Entity Too Large statuses but didn't see any.
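To clarify, the proxy-body-size change is just the standard ingress-nginx annotation; the value here is only an example:

    nginx.ingress.kubernetes.io/proxy-body-size: "64m"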



--
Darren Boss
Senior Programmer/Analyst
Programmeur-analyste principal


