
grouper-users - RE: [grouper-users] Any tips for k8s ingress configuration

Subject: Grouper Users - Open Discussion List


RE: [grouper-users] Any tips for k8s ingress configuration


  • From: "Hyzer, Chris" <>
  • To: Darren Boss <>
  • Cc: Christopher Bongaarts <>, Alex Poulos <>, "" <>
  • Subject: RE: [grouper-users] Any tips for k8s ingress configuration
  • Date: Wed, 29 Apr 2020 00:37:12 +0000

Do you have a sticky cookie set?

 

https://nginx.org/en/docs/http/ngx_http_upstream_module.html#sticky_cookie
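Note that the `sticky cookie` directive in the linked docs is an NGINX Plus (commercial) feature. With the open-source ingress-nginx controller, the usual equivalent is cookie-based session affinity via annotations. A minimal sketch (the cookie name is a placeholder):

```yaml
metadata:
  annotations:
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-name: "grouper-route"   # placeholder name
    nginx.ingress.kubernetes.io/session-cookie-max-age: "3600"         # seconds
```

With these set, the controller issues a cookie that pins each browser session to one UI pod.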

 

 

From: Darren Boss
Sent: Tuesday, April 28, 2020 5:49 PM
To: Hyzer, Chris <>
Cc: Christopher Bongaarts <>; Alex Poulos <>;
Subject: Re: [grouper-users] Any tips for k8s ingress configuration

 

It's not conclusive yet, but this annotation on the ingress seems to have made the biggest difference so far:
nginx.ingress.kubernetes.io/proxy-body-size: 8m
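For context, this annotation sits in the Ingress metadata and raises the controller's default 1m client body limit; a sketch (the resource name is a placeholder):

```yaml
apiVersion: networking.k8s.io/v1beta1   # Ingress API version current in early 2020
kind: Ingress
metadata:
  name: grouper-ui   # placeholder
  annotations:
    nginx.ingress.kubernetes.io/proxy-body-size: 8m   # default is 1m; oversized bodies return 413
```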

 

On Tue, Apr 28, 2020 at 5:21 PM Darren Boss <> wrote:

It made it worse unfortunately.

 

On Tue, Apr 28, 2020 at 5:20 PM Darren Boss <> wrote:

OK, I added "hash $remote_addr;" to my backend_servers_https section, so we will see if that solves the issue.
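For reference, a sketch of where that directive goes in the upstream L4 proxy config, assuming the `backend_servers_https` upstream named in the message (server addresses are placeholders):

```nginx
stream {
    upstream backend_servers_https {
        hash $remote_addr;       # same client IP -> same node; append "consistent" for ketama hashing
        server 10.0.0.11:443;    # placeholder node addresses
        server 10.0.0.12:443;
    }
    server {
        listen 443;
        proxy_pass backend_servers_https;
        proxy_protocol on;       # the thread mentions proxy protocol toward the ingress
    }
}
```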

 

On Tue, Apr 28, 2020 at 5:05 PM Hyzer, Chris <> wrote:

Sticky load balancing is a requirement

 

From: Darren Boss
Sent: Tuesday, April 28, 2020 5:03 PM
To: Hyzer, Chris <>
Cc: Christopher Bongaarts <>; Alex Poulos <>;
Subject: Re: [grouper-users] Any tips for k8s ingress configuration

 

I don't have sticky sessions on the upfront L4 proxy, so I am definitely going through different nodes en route to the container. I'm having logs written to stdout because they are then forwarded to our ELK stack. Almost all the logs I see are from Apache; there is nothing from shibd around the time of the error, only at login. I should probably go back to the original log format so it's more obvious which node my connection is being routed through, because right now I'm logging the client IP address:

 

httpd;access_log;-;-;x.x.x.x - spb-800 [28/Apr/2020:20:49:44 +0000] "GET /grouper/grouperUi/app/UiV2Group.groupCompositeFactorFilter?name=cc%3Abasis*&start=0&count=Infinity HTTP/1.1" 200 607 "https://xxx.xxx.c3.ca/grouper/grouperUi/app/UiV2Main.index?operation=UiV2Group.groupEditComposite&groupId=52a5fe03bb034f6e8650ceb697409350" "Mozilla/5.0 (X11; Fedora; Linux x86_64; rv:75.0) Gecko/20100101 Firefox/75.0"
httpd;access_log;-;-;x.x.x.x - spb-800 [28/Apr/2020:20:49:51 +0000] "GET /grouper/grouperExternal/public/UiV2Public.index?operation=UiV2Public.postIndex&function=UiV2Public.error&code=ajaxError HTTP/1.1" 200 6505 "https://xxx.xxx.c3.ca/grouper/grouperUi/app/UiV2Main.index?operation=UiV2Group.groupEditComposite&groupId=52a5fe03bb034f6e8650ceb697409350" "Mozilla/5.0 (X11; Fedora; Linux x86_64; rv:75.0) Gecko/20100101 Firefox/75.0"

 

I am definitely not getting redirected back to the IdP when this occurs. The error in the browser is `There was an error with your request.` It took me a few dozen clicks this time to trigger an error.

 

On Tue, Apr 28, 2020 at 4:43 PM Hyzer, Chris <> wrote:

You should see logs either way. Are requests going to the same server? Can you see what error the browser is getting and where it is coming from? (A redirect error might be Shib; a 500 might be CSRF.)

 

From: On Behalf Of Christopher Bongaarts
Sent: Tuesday, April 28, 2020 4:41 PM
To: Alex Poulos <>; Darren Boss <>
Cc:
Subject: Re: [grouper-users] Any tips for k8s ingress configuration

 

I could see similar stickiness issues if you are using Shib for authentication with non-clustered storage services: if you get flipped to a different node on the browser side, it's an extra trip through the IdP, but if it happens in an AJAX call there's no UI to redirect...

On 4/28/2020 3:36 PM, Alex Poulos wrote:

Do you have sticky sessions on your load balancers? I would suspect that the OWASP CSRF protections are mucking things up. 

 

On Tue, Apr 28, 2020 at 4:21 PM Darren Boss <> wrote:

This is the first time I have the TAP images running under Kubernetes. The way I've got things set up is as follows:

Nginx L4 rev proxy using proxy protocol -> Nginx Ingress Controller (NodePort) -> Grouper UI service -> Grouper UI Pod (Apache -> Tomee)

 

TLS is terminated at the Nginx Ingress Controller.

 

There is the known issue with underscores in headers, which you have to allow via a ConfigMap for Nginx Ingress. I solved that one in the past, so I knew about it before bringing up this deployment.
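For anyone hitting the same thing, a sketch of that ConfigMap setting; the ConfigMap name and namespace are placeholders and must match what the controller was started with (its `--configmap` flag):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller   # placeholder; must match the controller's --configmap
  namespace: ingress-nginx         # placeholder
data:
  enable-underscores-in-headers: "true"   # nginx drops headers with underscores by default
```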

 

I've enabled mod_remoteip in the Grouper container, which may have helped and at the very least is allowing me to write logs with the client IPs instead of the IP addresses of the proxies. I believe I was frequently getting booted back to the IdP (invalid session?) before I enabled this tweak. Before that, I was seeing 4 different IP addresses for all clients: 3 were CNI private addresses and one was the private IP of the VM where the pod was running. I've got four nodes in my cluster, so this makes sense. The VM IP address confused me at first, but I think that's because the connection doesn't go through kube-proxy when the LB forwards the connection to the node where the pod is running.
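A sketch of the mod_remoteip setup being described, with placeholder CIDRs standing in for the CNI pod network and node subnet:

```apache
LoadModule remoteip_module modules/mod_remoteip.so

# Take the real client address from the header the ingress sets
RemoteIPHeader X-Forwarded-For
# Trust the proxy/CNI hops so their IPs are stripped from the chain (placeholder ranges)
RemoteIPInternalProxy 10.42.0.0/16 10.0.0.0/24

# %a now logs the client address mod_remoteip resolved, not the last proxy (%h)
LogFormat "%a %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined
```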

 

My problem now is that I'm still getting frequent AJAX errors and having to start over in the UI. I'm wondering if anyone else has more tweaks, either for the Nginx ConfigMap or for the ingress controller annotations, that might improve things for the UI, or more tweaks in the image.

 

--

Darren Boss
Senior Programmer/Analyst
Programmeur-analyste principal

-- 
%%  Christopher A. Bongaarts   %%            %%
%%  OIT - Identity Management  %%  http://umn.edu/~cab  %%
%%  University of Minnesota    %%  +1 (612) 625-1809    %%







Archive powered by MHonArc 2.6.19.
