
Re: [grouper-users] Any tips for k8s ingress configuration


  • From: Christopher Bongaarts <>
  • To: Alex Poulos <>, Darren Boss <>
  • Cc:
  • Subject: Re: [grouper-users] Any tips for k8s ingress configuration
  • Date: Tue, 28 Apr 2020 15:41:13 -0500

I could see similar stickiness issues if you are using Shib for authentication with non-clustered storage services: if you get flipped to a different node on the browser side, it's an extra trip through the IdP, but if it happens during an AJAX call there's no UI to redirect...

On 4/28/2020 3:36 PM, Alex Poulos wrote:
Do you have sticky sessions on your load balancers? I would suspect that the OWASP CSRF protections are mucking things up. 
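A minimal sketch of what that looks like with the ingress-nginx controller (the annotation names are upstream ingress-nginx; the cookie name "grouper-route" is an arbitrary placeholder):

    # Ingress annotations for cookie-based session affinity (ingress-nginx)
    metadata:
      annotations:
        nginx.ingress.kubernetes.io/affinity: "cookie"
        nginx.ingress.kubernetes.io/session-cookie-name: "grouper-route"  # arbitrary cookie name
        nginx.ingress.kubernetes.io/session-cookie-max-age: "172800"      # seconds; tune to your session length

Note this only pins the controller-to-pod hop; the L4 proxy in front also has to route a given client consistently for the affinity cookie to help.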

On Tue, Apr 28, 2020 at 4:21 PM Darren Boss <> wrote:
This is the first time I've had the TAP images running under Kubernetes. The way I've got things set up is as follows:
Nginx L4 rev proxy using proxy protocol -> Nginx Ingress Controller (NodePort) -> Grouper UI service -> Grouper UI Pod (Apache -> Tomee)

TLS is terminated at the Nginx Ingress Controller.
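For reference, a minimal sketch of an Ingress that terminates TLS at the controller like this; the hostname, Secret name, and Service name are placeholders:

    apiVersion: networking.k8s.io/v1beta1
    kind: Ingress
    metadata:
      name: grouper-ui
      annotations:
        kubernetes.io/ingress.class: "nginx"
    spec:
      tls:
        - hosts:
            - grouper.example.edu      # placeholder hostname
          secretName: grouper-ui-tls   # Secret holding the cert/key pair
      rules:
        - host: grouper.example.edu
          http:
            paths:
              - path: /
                backend:
                  serviceName: grouper-ui   # the Grouper UI Service
                  servicePort: 80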

There is the issue with underscores in headers, which you have to allow via a ConfigMap for Nginx Ingress. I solved that one in the past, so I knew about it before bringing up this deployment.
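The underscore setting, and the PROXY protocol expectation from the L4 hop, both live in the controller's ConfigMap; a sketch, assuming a stock ingress-nginx install (the ConfigMap name and namespace depend on how you deployed the controller):

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: nginx-configuration   # name/namespace vary by install
      namespace: ingress-nginx
    data:
      enable-underscores-in-headers: "true"   # allow headers containing underscores (nginx ignores them by default)
      use-proxy-protocol: "true"              # the L4 proxy in front speaks PROXY protocol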

I've enabled mod_remoteip in the Grouper container, which may have helped and at the very least is allowing me to write logs with the client IPs instead of the IP addresses of the proxies. I believe I was frequently getting booted back to the IdP (invalid session?) before I enabled this tweak. Before I enabled it, I was seeing 4 different IP addresses across all clients, 3 being CNI private addresses and one being the private IP of the VM where the pod was running. I've got four nodes in my cluster, so this makes sense. The VM IP address confused me at first, but I think that's because the connection doesn't go through kube-proxy when the LB forwards the connection to the node where the pod is running.
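For anyone hitting the same thing, a sketch of the Apache side of that tweak; the module path varies by distro, and the RemoteIPInternalProxy subnets are placeholders to replace with your actual CNI pod range and node network:

    # Trust X-Forwarded-For from the internal proxies so logs show the real client
    LoadModule remoteip_module modules/mod_remoteip.so   # path varies by distro
    RemoteIPHeader X-Forwarded-For
    RemoteIPInternalProxy 10.42.0.0/16     # placeholder: your CNI pod subnet
    RemoteIPInternalProxy 192.168.1.0/24   # placeholder: your node/VM subnet

With that in place, %a in the LogFormat picks up the client address mod_remoteip resolved rather than the proxy's.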

My problem now is that I'm still getting frequent AJAX errors and having to start over in the UI. I'm wondering if anyone else has more tweaks, either for the Nginx ConfigMap or for the ingress controller annotations, that might improve things for the UI, or more tweaks in the image.

--
Darren Boss
Senior Programmer/Analyst
Programmeur-analyste principal
-- 
%%  Christopher A. Bongaarts   %%            %%
%%  OIT - Identity Management  %%  http://umn.edu/~cab  %%
%%  University of Minnesota    %%  +1 (612) 625-1809    %%


