
RE: [grouper-users] RE: Status Monitoring - Two Errors


  • From: "Black, Carey M." <>
  • To: Chris Hyzer <>
  • Cc: Ryan Rumbaugh <>, "" <>, "Gettes, Michael" <>
  • Subject: RE: [grouper-users] RE: Status Monitoring - Two Errors
  • Date: Fri, 7 Sep 2018 17:58:07 +0000

Chris,

 

RE: “If we wait until work finishes, how do you define work, and will it ever really finish?”

 

The “loader” is a big topic: ( AKA: What does a Loader process do?)

                Background processes for grouper

                                Daily report

                                Rules engine

                                Attestation

                                PSP ( PSPNG?)

                                Find Bad Memberships

                                TIER Instrumentation

                Loader jobs ( pull data into grouper)

                                Ldap sources

                                RDBMS sources

                ChangeLogConsumers ( send data out of grouper )

                                Custom code and a host of “send data out of grouper” type of things

                Others?...?

 

                And then there are the conditions/interactions around running N loader processes too.

                                They internally make sure they are not running the same job on N loaders.

                                They “skip” jobs that are already running if those jobs come due again.

                                So currently I don’t think it is possible to know in advance which of the loaders a job will “decide to run” on.

 

 

My thoughts about the loader “quiesce” mode would be to:

1)      No longer start any new jobs on that instance.

                                Essentially nullify all schedules, and do not check for changed schedules until after restart.

                                This would include all of the “internal jobs” like Daily reports, Rules engine, etc…

2)      Let the running jobs run to completion or to a “failed to complete” state.

3)      Then exit.

 

                This would allow a host to be “quiesced” and its workload rolled off to other nodes in a controlled way, without requiring “rework” or disrupting the current work and causing undesired delays for those jobs. ( A rough sketch of how that might look with Quartz is below. )
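
                For what it’s worth, stock Quartz seems to have the right primitives for steps 1 and 2 above. A minimal sketch, assuming we could get at the loader’s Scheduler instance ( the class and method names below are illustrative, not actual Grouper internals ):

                import org.quartz.Scheduler;
                import org.quartz.SchedulerException;

                public class LoaderQuiesce {

                    // Quiesce this loader instance: stop starting new jobs,
                    // wait for running ones, then exit.
                    public static void quiesce(Scheduler scheduler) throws SchedulerException {
                        // 1) Stop firing triggers on this instance only; the jobs stay
                        //    scheduled, so other nodes can still pick them up.
                        scheduler.standby();

                        // 2) shutdown(true) blocks until currently executing jobs run
                        //    to completion (or failure) before tearing the scheduler down.
                        scheduler.shutdown(true);

                        // 3) Exit once Quartz reports it is fully shut down.
                        if (scheduler.isShutdown()) {
                            System.exit(0);
                        }
                    }
                }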

               

I am not sure how processes would “not finish”. Can you explain that part of your response?

 

 

 

 

However, maybe it would be helpful to take a single specific example and walk through it in detail? ( Basically a “Long running/big process” condition. )

 

I have “LDAP Loader jobs” ( mostly “LDAP_GROUPS_FROM_ATTRIBUTES”, but there are other styles of loader jobs too, including some SQL jobs ). Some of them can pull in “large numbers of groups and/or members”.

                In fact, I have had to “break down a single ldap search condition” into many narrower searches to reduce the size and number of the groups returned, so that the RAM/CPU load is manageable over time. Well, and so the job would actually finish.

                Just for the record, I have done things like dump millions of ldap objects with standard LDAP command line tools from this source, and it normally took between 30 minutes and about 2 hours depending on the complexity of the search and how well indexes support it. So the LDAP source can support the work. And the search that I am using is well indexed, so we should be on the low end of that range. ( Yet the loader job takes about 2 hours to complete, when it does not error out and fail. But that is a different topic…. )

 

 

                So as a “simple example” ( that I think most universities could relate to ), let us talk about the largest cohort a university has: their alumni.

                We try to provide some services to our alumni, so the University needs to know who is an alum in order to feed authorization data to applications. For our current numbers we are talking about a single group on the order of 500K members. I have isolated that loader job to just load that one group, and, well, it does not behave very well. It takes a lot of RAM when it runs, and I think I have even observed CPU spikes while it is running. So much so that I have disabled the job and am looking for a “better way” to deal with the large group “issue” that I see. ( I did not break this “one group” down into 26 sub groups ( by first letter of last name ) and then have a group that has those sub groups as members. But I may need to go there…. I just don’t want to. :( ) However, in fairness, grouper 2.4 moved to Ldaptive ( instead of vt-ldap ), and that may change this in some helpful ways. I still think this is a good example for many reasons. ( And no, this set does not just change at the end of terms. It is a continuous flow, with very large spikes of change at the end of term. Believe it or not, we even try to know when our alumni change state to “deceased” as well, which accounts for most of the continuous membership changes for this group. ) This job can take 2 hours to complete to “success”.
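
                For illustration only, that 26-way split would just be narrower ldap filters per letter, each feeding its own sub group. Something like this ( eduPersonAffiliation is a standard attribute with an “alum” value; the real filter would be site-specific ):

                                (&(eduPersonAffiliation=alum)(sn=a*))
                                (&(eduPersonAffiliation=alum)(sn=b*))
                                … and so on through (sn=z*).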

 

 

So I will continue with this example.

Just talking about the run time of this one loader job:

                Obviously this loader job takes time to search the ( ldap ) source for 500k entries (members).  ( And the data can be changing while the “pull of data” is going on too.  But I leave that as a “source” issue to deal with.) From previous experience I expect that to be about 20-40 minutes from “search” to “results”.

                So if that job is running and the loader is killed, then a lot of work ( time / CPU cycles ) may be “lost”. And it will take time for the next loader to “start again” and get back to the relative point in the job that was killed.

 

                Questions about what happens when the loader job is abruptly stopped:

                                In the middle of the query(s)? How would you “pick up where you left off”? Maybe just start again?

                                While loading the results into the grouper staging table(?) How do you know it was done loading the data? Is there a “total count” recorded before the first record is loaded?

                                While converting the temp data into memberships? ( Maybe you could continue from here… maybe….)

                                Am I describing the internal process of the loader job poorly? <-- If so, then it could be that I just don’t understand the phases of the job well enough to see the features.

                                                Maybe there are “gates” that are recoverable points where the next loader could “pick up and keep going”?
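
                                                For what it is worth, I believe grouper_loader_log does record total/insert/update/delete counts per job run, though I do not know whether they are written early enough in the run to serve as one of those recoverable “gates”. ( That is an assumption worth verifying against the schema. )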

 

--

Carey Matthew

 

From: Gettes, Michael <>
Sent: Friday, September 7, 2018 10:02 AM
To: Chris Hyzer <>
Cc: Black, Carey M. <>; Ryan Rumbaugh <>;
Subject: Re: [grouper-users] RE: Status Monitoring - Two Errors

 

Well, that’s cool if we can restart midway.  BUT, if grouper is down for an hour or twelve, I don’t think I would want to restart.  Maybe it is configurable?   The default being something like a restart within 20 minutes causes grouperus loaderus interruptus to be continued.  Longer than that and we continue with the normal schedule???

 

(It’s Friday.  I’m punchy).

 

/mrg



On Sep 7, 2018, at 9:38 AM, Hyzer, Chris <> wrote:

 

I don’t think it is bad to stop loader jobs abruptly, but I agree that when it starts again it should continue with in-progress jobs.  Right?  If we wait until work finishes, how do you define work, and will it ever really finish?  If it picks back up where it left off, it should be fine since things are transactional and not marked as complete until complete…  thoughts?

 

Thanks

Chris

 

From:  [] On Behalf Of Gettes, Michael
Sent: Monday, August 27, 2018 12:00 PM
To: Black, Carey M. <>
Cc: Ryan Rumbaugh <>;
Subject: Re: [grouper-users] RE: Status Monitoring - Two Errors

 

I’ve always wanted a quiesce capability.  Something that lets all the current work complete but the current loader instance won’t start any new jobs.  This would be needed for all loader daemons or just specific ones so we can safely take instances down.  I have no idea if this is possible with Quartz and haven’t had a chance to look into it.

 

/mrg




On Aug 27, 2018, at 11:20 AM, Black, Carey M. <> wrote:

 

Ryan,

 

RE: “I had been restarting the API daemon” …  ( due to docker use )

                I have often wondered how the “shutdown process” works for the daemon. Is it “graceful” ( and lets all running jobs complete before shutdown) or does it just “pull the plug”? 

                                I think it just pulls the plug.

                                Which “leaves” running jobs as “in progress” ( in the DB status table ), and they refuse to immediately start when the loader restarts. Well, until the “in progress” record(s) get old enough that they are assumed to be dead. Then the jobs will no longer refuse to start.

 

                I say that to say this:

                                If the loader is restarted repeatedly, quickly, and/or often, you may be interrupting the running jobs, leaving them “in progress” ( in the DB ), and adding more delay before those jobs start again. But it all depends on how fast/often those things are spinning up and down.
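
                                If you want to check whether that is happening, you could look for rows stuck in a running state in the loader status table. A rough sketch ( I believe the table is grouper_loader_log with status values like STARTED/RUNNING, but verify the table and column names against your schema; the JDBC URL and credentials are placeholders ):

                                import java.sql.Connection;
                                import java.sql.DriverManager;
                                import java.sql.PreparedStatement;
                                import java.sql.ResultSet;
                                import java.sql.SQLException;

                                public class InProgressLoaderJobs {
                                    public static void main(String[] args) throws SQLException {
                                        // Placeholder connection details; point at the Grouper registry DB.
                                        try (Connection conn = DriverManager.getConnection(
                                                "jdbc:postgresql://localhost/grouper", "grouper", "secret");
                                             PreparedStatement ps = conn.prepareStatement(
                                                "select job_name, status, started_time from grouper_loader_log "
                                                + "where status in ('STARTED', 'RUNNING') order by started_time")) {
                                            try (ResultSet rs = ps.executeQuery()) {
                                                while (rs.next()) {
                                                    // Anything old in this list is likely a job that was
                                                    // interrupted and left "in progress".
                                                    System.out.printf("%s  %s  %s%n",
                                                        rs.getString("job_name"),
                                                        rs.getString("status"),
                                                        rs.getTimestamp("started_time"));
                                                }
                                            }
                                        }
                                    }
                                }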

 

                                However, maybe if you are always spinning up new instances ( and letting the old ones run for a bit ), you may be able to “wait till a good time” to turn them off.

                                Maybe if you cycle out the old instances gracefully by timing it with these settings?

                                “

                                ##################################

                                ## enabled / disabled cron

                                ##################################

                                

                                #quartz cron-like schedule for enabled/disabled daemon.  Note, this has nothing to do with the changelog

                                #leave blank to disable this, the default is 12:01am, 11:01am, 3:01pm every day: 0 1 0,11,15 * * ?

                                changeLog.enabledDisabled.quartz.cron = 0 1 0,11,15 * * ?

                                “
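
                                ( For reference, the Quartz cron fields there are: seconds minutes hours day-of-month month day-of-week, so “0 1 0,11,15 * * ?” fires at 12:01am, 11:01am, and 3:01pm each day, matching the comment. )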

 

 

RE: how to schedule the “deprovisioningDaemon”

 

                Verify that your grouper-loader.base.properties has this block: ( or you can add it to your grouper-loader.properties )

                NOTE: it was added to the default base as of GRP-1623 ( which maps to grouper_v2_3_0_api_patch_107, and for the UI grouper_v2_3_0_ui_patch_44 ). You likely are past those patches… but just saying. :)

                “

                #####################################

                ## Deprovisioning Job

                #####################################

                otherJob.deprovisioningDaemon.class = edu.internet2.middleware.grouper.app.deprovisioning.GrouperDeprovisioningJob

                otherJob.deprovisioningDaemon.quartzCron = 0 0 2 * * ?

                “
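
                Once that block is in place and the loader has restarted, I would guess you could also kick the job once from gsh, the same way you ran the changelog job ( assuming loaderRunOneJob accepts OTHER_JOB names ):

                                loaderRunOneJob("OTHER_JOB_deprovisioningDaemon")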

 

HTH.

 

-- 

Carey Matthew 

 

From:  <> On Behalf Of Ryan Rumbaugh
Sent: Monday, August 27, 2018 10:12 AM
To: 
Subject: [grouper-users] RE: Status Monitoring - Two Errors

 

An update to this issue that may be helpful to others…

 

Before I left the office on Friday I ran the gsh command loaderRunOneJob("CHANGE_LOG_changeLogTempToChangeLog"), and now the number of rows in the grouper_change_log_entry_temp table is zero! I tried running that before, but really didn’t see much of anything happening. Maybe I was just too impatient.

 

Now when accessing grouper/status?diagnosticType=all the only error is related to “OTHER_JOB_deprovisioningDaemon”. If anyone has any tips on how to get that kick-started, it would be greatly appreciated.

 

 

--

Ryan Rumbaugh

 

From:  <> On Behalf Of Ryan Rumbaugh
Sent: Friday, August 24, 2018 9:15 AM
To: 
Subject: [grouper-users] Status Monitoring - Two Errors

 

Good morning,

 

We would like to begin monitoring the status of grouper by using the diagnostic pages at grouper/status?diagnosticType=all, but before doing so I would like to take care of the two issues shown below.

 

Can anyone provide tips/suggestions on how to fix the two failures for CHANGE_LOG_changeLogTempToChangeLog and OTHER_JOB_deprovisioningDaemon?

 

We had a Java heap issue late last week, which I believe caused the “grouper_change_log_entry_temp” table to keep growing. It is currently at 69,886 rows, while earlier this week it was at 50k. Thanks for any insight.

 

 

 

2 errors in the diagnostic tasks:

 

DiagnosticLoaderJobTest, Loader job CHANGE_LOG_changeLogTempToChangeLog

 

DiagnosticLoaderJobTest, Loader job OTHER_JOB_deprovisioningDaemon

 

 

 

Error stack for: loader_CHANGE_LOG_changeLogTempToChangeLog

java.lang.RuntimeException: Cant find a success in job CHANGE_LOG_changeLogTempToChangeLog since: 2018/08/16 14:19:22.000, expecting one in the last 30 minutes

                at edu.internet2.middleware.grouper.j2ee.status.DiagnosticLoaderJobTest.doTask(DiagnosticLoaderJobTest.java:175)

                at edu.internet2.middleware.grouper.j2ee.status.DiagnosticTask.executeTask(DiagnosticTask.java:78)

                at edu.internet2.middleware.grouper.j2ee.status.GrouperStatusServlet.doGet(GrouperStatusServlet.java:180)

                at javax.servlet.http.HttpServlet.service(HttpServlet.java:635)

                at javax.servlet.http.HttpServlet.service(HttpServlet.java:742)

                at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:230)

                at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:165)

                at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:52)

                at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:192)

                at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:165)

                at org.owasp.csrfguard.CsrfGuardFilter.doFilter(CsrfGuardFilter.java:110)

                at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:192)

                at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:165)

                at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:198)

                at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:96)

                at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:478)

                at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:140)

                at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:80)

                at org.apache.catalina.valves.AbstractAccessLogValve.invoke(AbstractAccessLogValve.java:624)

                at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:87)

                at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:341)

                at org.apache.coyote.ajp.AjpProcessor.service(AjpProcessor.java:478)

                at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:66)

                at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:798)

                at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1441)

                at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:49)

                at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)

                at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)

                at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)

                at java.lang.Thread.run(Thread.java:748)

 

 

Error stack for: loader_OTHER_JOB_deprovisioningDaemon

java.lang.RuntimeException: Cant find a success in job OTHER_JOB_deprovisioningDaemon, expecting one in the last 3120 minutes

                at edu.internet2.middleware.grouper.j2ee.status.DiagnosticLoaderJobTest.doTask(DiagnosticLoaderJobTest.java:173)

                at edu.internet2.middleware.grouper.j2ee.status.DiagnosticTask.executeTask(DiagnosticTask.java:78)

                at edu.internet2.middleware.grouper.j2ee.status.GrouperStatusServlet.doGet(GrouperStatusServlet.java:180)

                at javax.servlet.http.HttpServlet.service(HttpServlet.java:635)

                at javax.servlet.http.HttpServlet.service(HttpServlet.java:742)

                at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:230)

                at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:165)

                at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:52)

                at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:192)

                at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:165)

                at org.owasp.csrfguard.CsrfGuardFilter.doFilter(CsrfGuardFilter.java:110)

                at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:192)

                at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:165)

                at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:198)

                at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:96)

                at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:478)

                at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:140)

                at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:80)

                at org.apache.catalina.valves.AbstractAccessLogValve.invoke(AbstractAccessLogValve.java:624)

                at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:87)

                at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:341)

                at org.apache.coyote.ajp.AjpProcessor.service(AjpProcessor.java:478)

                at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:66)

                at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:798)

                at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1441)

                at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:49)

                at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)

                at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)

                at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)

                at java.lang.Thread.run(Thread.java:748)

 

--

Ryan Rumbaugh

 



