
Re: [grouper-users] Any tips for k8s ingress configuration


  • From: Darren Boss <>
  • To: "Hyzer, Chris" <>
  • Cc: Christopher Bongaarts <>, Alex Poulos <>, "" <>
  • Subject: Re: [grouper-users] Any tips for k8s ingress configuration
  • Date: Tue, 28 Apr 2020 17:48:55 -0400

It's not conclusive yet, but this annotation on the Ingress seems to have made the biggest difference so far:
nginx.ingress.kubernetes.io/proxy-body-size: 8m
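
For reference, this is roughly how that annotation sits on the Ingress resource (a sketch only; the host, names, ports, and the v1beta1 API version here are placeholders, not our exact manifest):

  apiVersion: networking.k8s.io/v1beta1
  kind: Ingress
  metadata:
    name: grouper-ui
    annotations:
      # raise the request body limit from the ingress-nginx default of 1m
      nginx.ingress.kubernetes.io/proxy-body-size: "8m"
  spec:
    tls:
      - hosts: [grouper.example.org]
        secretName: grouper-ui-tls
    rules:
      - host: grouper.example.org
        http:
          paths:
            - path: /
              backend:
                serviceName: grouper-ui
                servicePort: 80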

On Tue, Apr 28, 2020 at 5:21 PM Darren Boss <> wrote:
It made it worse, unfortunately.

On Tue, Apr 28, 2020 at 5:20 PM Darren Boss <> wrote:
OK, I added "hash $remote_addr;" to my backend_servers_https section, so we will see if that solves the issue.
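
For anyone following along, the relevant bit of the upfront L4 proxy config now looks something like this (a sketch; node names are placeholders and the surrounding stream config is trimmed):

  stream {
      upstream backend_servers_https {
          # hash on the client address so the same client keeps hitting the same ingress node
          hash $remote_addr;
          server node1.example.org:443;
          server node2.example.org:443;
          server node3.example.org:443;
          server node4.example.org:443;
      }

      server {
          listen 443;
          proxy_pass backend_servers_https;
          # forward the original client address via proxy protocol
          proxy_protocol on;
      }
  }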

On Tue, Apr 28, 2020 at 5:05 PM Hyzer, Chris <> wrote:

Sticky load balancing is a requirement.
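
If you end up running more than one UI container behind the Service, ingress-nginx can also pin sessions itself with cookie-based affinity annotations on the Ingress; roughly like this (names and values are just examples):

  metadata:
    annotations:
      nginx.ingress.kubernetes.io/affinity: "cookie"
      nginx.ingress.kubernetes.io/session-cookie-name: "grouper-route"
      nginx.ingress.kubernetes.io/session-cookie-max-age: "3600"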

 

From: Darren Boss
Sent: Tuesday, April 28, 2020 5:03 PM
To: Hyzer, Chris <>
Cc: Christopher Bongaarts <>; Alex Poulos <>;
Subject: Re: [grouper-users] Any tips for k8s ingress configuration

 

I don't have sticky sessions on the upfront L4 proxy, so I am definitely going through different nodes en route to the container. Logs are written to stdout because they get forwarded on to our ELK stack. Almost all the logs I see are from Apache; there is nothing from shibd around the time of the error, only at login. I should probably go back to the original log format so it's more obvious which node my connection is being routed through, because right now I'm logging the client IP address:

 

httpd;access_log;-;-;x.x.x.x - spb-800 [28/Apr/2020:20:49:44 +0000] "GET /grouper/grouperUi/app/UiV2Group.groupCompositeFactorFilter?name=cc%3Abasis*&start=0&count=Infinity HTTP/1.1" 200 607 "https://xxx.xxx.c3.ca/grouper/grouperUi/app/UiV2Main.index?operation=UiV2Group.groupEditComposite&groupId=52a5fe03bb034f6e8650ceb697409350" "Mozilla/5.0 (X11; Fedora; Linux x86_64; rv:75.0) Gecko/20100101 Firefox/75.0"
httpd;access_log;-;-;x.x.x.x - spb-800 [28/Apr/2020:20:49:51 +0000] "GET /grouper/grouperExternal/public/UiV2Public.index?operation=UiV2Public.postIndex&function=UiV2Public.error&code=ajaxError HTTP/1.1" 200 6505 "https://xxx.xxx.c3.ca/grouper/grouperUi/app/UiV2Main.index?operation=UiV2Group.groupEditComposite&groupId=52a5fe03bb034f6e8650ceb697409350" "Mozilla/5.0 (X11; Fedora; Linux x86_64; rv:75.0) Gecko/20100101 Firefox/75.0"

 

I am definitely not getting redirected back to the IdP when this occurs. The error in the browser is `There was an error with your request.` It took me a few dozen clicks this time to trigger an error.
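
On the log format point: rather than reverting outright, I could probably keep the client address and just add the serving container's hostname to each line, something like this in the Apache config (a sketch assuming the stock mod_log_config/mod_env directives; not what's in the image today):

  # make the container's HOSTNAME environment variable visible to the logger
  PassEnv HOSTNAME
  # %a is the client IP as corrected by mod_remoteip; %{HOSTNAME}e is the container name
  LogFormat "%{HOSTNAME}e %a %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined_node
  CustomLog /dev/stdout combined_node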

 

On Tue, Apr 28, 2020 at 4:43 PM Hyzer, Chris <> wrote:

You should see logs either way. Are requests going to the same server? Can you see what error the browser is getting and where it is getting it from? (A redirect error might be Shib; a 500 might be CSRF.)

 

From: On Behalf Of Christopher Bongaarts
Sent: Tuesday, April 28, 2020 4:41 PM
To: Alex Poulos <>; Darren Boss <>
Cc:
Subject: Re: [grouper-users] Any tips for k8s ingress configuration

 

I could see similar stickiness issues if you are using Shib for authentication with non-clustered storage services: if you get flipped to a different node on the browser side, it's an extra trip through the IdP, but if it happens in an AJAX call there's no UI to redirect...

On 4/28/2020 3:36 PM, Alex Poulos wrote:

Do you have sticky sessions on your load balancers? I would suspect that the OWASP CSRF protections are mucking things up. 

 

On Tue, Apr 28, 2020 at 4:21 PM Darren Boss <> wrote:

This is the first time I have the TAP images running under Kubernetes. The way I've got things set up is as follows:

Nginx L4 rev proxy using proxy protocol -> Nginx Ingress Controller (NodePort) -> Grouper UI service -> Grouper UI Pod (Apache -> Tomee)

 

TLS is terminated at the Nginx Ingress Controller.

 

There is the issue with underscores in headers, which you have to allow via a ConfigMap for Nginx Ingress. I solved that one in the past, so I knew about it before bringing up this deployment.
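
For anyone hitting the same thing, the keys end up in the ingress-nginx ConfigMap, something like this (the ConfigMap name and namespace depend on how the controller was installed; use-proxy-protocol is only there because our upfront proxy speaks proxy protocol):

  apiVersion: v1
  kind: ConfigMap
  metadata:
    name: ingress-nginx-controller
    namespace: ingress-nginx
  data:
    enable-underscores-in-headers: "true"
    use-proxy-protocol: "true"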

 

I've enabled mod_remoteip in the Grouper container, which may have helped, and at the very least it is allowing me to write logs with the client IPs instead of the IP addresses of the proxies. I believe I was frequently getting booted back to the IdP (invalid session?) before I enabled this tweak. Before the change I was seeing 4 different IP addresses for all clients: 3 were CNI private addresses and one was the private IP of the VM where the pod was running. I've got four nodes in my cluster, so this makes sense. The VM IP address confused me at first, but I think that's because the connection doesn't go through kube-proxy when the LB forwards the connection to the node where the pod is running.
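
The mod_remoteip change itself is only a couple of directives in the Apache config, roughly like this (the CIDRs here are illustrative; they'd be your CNI pod range and node network):

  LoadModule remoteip_module modules/mod_remoteip.so
  # take the real client address from X-Forwarded-For...
  RemoteIPHeader X-Forwarded-For
  # ...but only when the connection comes from one of our proxies
  RemoteIPInternalProxy 10.42.0.0/16
  RemoteIPInternalProxy 192.168.100.0/24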

 

My problem now is that I'm still getting frequent AJAX errors and having to start over in the UI. I'm wondering if anyone else has more tweaks, either for the Nginx ConfigMap, for the ingress controller annotations, or in the image itself, that might improve things for the UI.

 

--

Darren Boss
Senior Programmer/Analyst
Programmeur-analyste principal

-- 
%%  Christopher A. Bongaarts   %%            %%
%%  OIT - Identity Management  %%  http://umn.edu/~cab  %%
%%  University of Minnesota    %%  +1 (612) 625-1809    %%



--

Darren Boss
Senior Programmer/Analyst
Programmeur-analyste principal



