

RE: [grouper-users] RE: Status Monitoring - Two Errors


Chronological Thread 
  • From: "Black, Carey M." <>
  • To: "Gettes, Michael" <>
  • Cc: Chris Hyzer <>, Ryan Rumbaugh <>, "" <>
  • Subject: RE: [grouper-users] RE: Status Monitoring - Two Errors
  • Date: Fri, 7 Sep 2018 20:21:20 +0000

Michael,

 

I admit it. I am a bit of a control freak (CF) / worry wart (WW). :)

I like _solved_ problems, not “Meh, close enough.”

 

 

Q: “why wouldn’t it be fire up the new loaders and at some point you shoot the old ones”?

 

My short reply is: 

                It depends on the details.

 

My CF/WW answer:

                I can only speak for myself and some of the cases that I can quickly see.  ( Other CF/WW users likely have other concerns too.)

 

                However, being able to control where a job runs seems like a fundamental feature for managing the system.

                Being able to choose when to interrupt a process, instead of having no choice but to “shoot it dead”, is preferable and, I suspect, less likely to produce negative side effects.

 

                In the current design you cannot control which instance picks up a job. So by adding another loader process, some new jobs MAY be picked up, or not. You need to look (via SQL, across jobs and/or by loader host) to see where the job(s) are running, then decide when it is “OK” to “shoot that one dead”. And you also have to hope you can do it fast enough that the instance does not start another job between when you looked, decided, and hit return.
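                For example, a query along these lines against the grouper_loader_log table shows which loader host is currently running which job ( a sketch only; the job_name / host / status / started_time columns and the STARTED status value are my assumption and may differ by version ):

                                -- which jobs are currently marked as running, and on which loader host
                                SELECT job_name, host, status, started_time
                                  FROM grouper_loader_log
                                 WHERE status = 'STARTED'
                                 ORDER BY host, started_time;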

 

                What if the job takes hours to complete? Can you wait N*2 hours for the next feed? ( OK, don’t do that.. but you get the idea.)

                What if the quality/timeliness is really important?

                                Like start/end of term processing?

                                It takes N hours, and we start it at T-(N+0.5 hours) so that it is fresh and ready at the moment we need it to be right.

                What if the CPU/RAM load is “suddenly” too much and you need to spread the jobs across more hosts so that existing processes can finish?

 

 

The “quiesce” idea allows for a controlled “move” instead of a more chaotic one.

                So maybe it is just me who sees this as a valuable feature?? ( Maybe I need to let go of the WW? :) )

                I guess a similar idea could also be achieved if jobs could be “pinned” (whitelisted) at startup to run on a particular loader. ( Which might provide a few more features/options too: separation of jobs, CPU, RAM, network firewalls, etc… :) )

 

--

Carey Matthew

 

 

P.S. Good life lesson: when you ask Santa for the big shiny toy, ask for the one you really, really want. ( You just might get it. :) )

P.P.S. Not everyone agrees on what shiny looks like.

P.P.P.S. Sometimes you play the role of Santa and need to make your own toy. ( <- DOH! )

 

 

From: Gettes, Michael <>
Sent: Friday, September 7, 2018 3:25 PM
To: Black, Carey M. <>
Cc: Chris Hyzer <>; Ryan Rumbaugh <>;
Subject: Re: [grouper-users] RE: Status Monitoring - Two Errors

 

Ok, I need some education then. In the scenario you describe (and I agree with the value of what you describe) why wouldn’t it be fire up the new loaders and at some point you shoot the old ones. If a loader was mid-job, and you shoot it, the next scheduled time the job would re-run, all is well. Is it not acceptable to skip a cycle in the less frequent scenario you describe?

 

/mrg



On Sep 7, 2018, at 3:07 PM, Black, Carey M. <> wrote:

 

Chris,

 

I think we are not yet seeing the same picture. I am thinking about a use case more along the lines of “docker spin up / spin down” type cycles.

                Which means patches, upgrades, icon changes, days of the week, moving load between data centers, moving to or from the cloud, etc…

                Basically when the wheels are going “round and round”. :)

 

A Grouper shop could spin up a “new loader”.

                It would happily start processing jobs etc… (that are not already running on other loaders.)

Then go to the “old loader(s)” and say “Hey.. you have been replaced. Finish your work and die.”

 

I see no “gap in things running” in that process.

                Start a “new home” for the jobs to move to as they can. ( by schedule and/or run time for the job)

                Wait for them to finish, then exit.

 

 

( Yes, I think it is generally a bad idea to have long running jobs. But sometimes that is what it takes to do the job. Larger data sets take more time.)

 

--

Carey Matthew 

 

From: Hyzer, Chris <> 
Sent: Friday, September 7, 2018 2:24 PM
To: Black, Carey M. <>
Cc: Ryan Rumbaugh <>; ; Gettes, Michael <>
Subject: RE: [grouper-users] RE: Status Monitoring - Two Errors

 

First off, the loader process is also the Grouper daemon; there’s more there than just the loader. There are long-running daemon jobs and there are short-running daemon jobs. I can’t imagine someone would want a quiesce where it takes a couple of hours to stop the loader and, in the meantime, no jobs run, including jobs that move change log temp to change log, send out messages, do provisioning, etc. Is this only for upgrades? You want it to stop, do whatever you had to do, turn it back on quickly, and any jobs that didn’t finish, it should try them again (and they will continue where they left off, not including the initial query/filter). We have discussed this and have a Jira on it.

 

 

If you want a quiesce, and a timeout of a minute or 5 or whatever, then I would think each daemon job type would need to check whether it is quiescing and return gracefully from where it is (since I assume it’s just a transaction-level thing, not the entire job). I think the above Jira would be higher priority…

 

Anyways, if I’m off base please correct me.

 

Thanks

Chris

 

From:  [] On Behalf Of Black, Carey M.
Sent: Friday, September 07, 2018 1:58 PM
To: Hyzer, Chris <>
Cc: Ryan Rumbaugh <>; ; Gettes, Michael <>
Subject: RE: [grouper-users] RE: Status Monitoring - Two Errors

 

Chris,

 

RE: “If we wait until work finishes, how do you define work, and will it ever really finish?”

 

The “loader” is a big topic: ( AKA: What does a Loader process do?)

                Background processes for grouper

                                Daily report

                                Rules engine

                                Attestation

                                PSP ( PSPNG?)

                                Find Bad Memberships

                                TIER Instrumentation

                Loader jobs ( pull data into grouper)

                                Ldap sources

                                RDBMS sources

                ChangeLogConsumers ( send data out of grouper )

                                Custom code and a host of “send data out of grouper” type of things

                Others?...?

 

                And then there are the conditions/interactions around running N loader processes too.

                                They internally make sure they are not running the same job on N loaders.

                                They “skip” jobs that are already running if those jobs come due again.

                                So currently I don’t think it is possible to predict which one of the loaders a job will “decide to run” on.

 

 

My thoughts about the loader “quiesce” mode would be to:

1)      No longer start any new jobs on that instance.

                                Essentially nullify all schedules, and do not check for changed schedules until after restart.

                                This would include all of the “internal jobs” like Daily reports, Rules engine, etc…

2)      Let the running jobs run until completion or a “failed to complete” state.

3)      Then exit.

 

                This would allow a host to be “quiesced” and the workload to be “rolled off to other nodes” in a controlled way, without requiring “rework” or disrupting the current work and causing undesired delays for those jobs.
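                For what it is worth, Quartz itself seems to have the primitives for that 1-2-3 sequence: standby() stops a scheduler instance from firing new triggers (in a clustered setup the other nodes keep going), and getCurrentlyExecutingJobs() reports what is still running on that instance. A minimal sketch, assuming you can somehow get a handle on the loader’s Quartz Scheduler ( how the daemon would expose that handle is the open question ):

                                import java.util.List;
                                import org.quartz.JobExecutionContext;
                                import org.quartz.Scheduler;
                                import org.quartz.SchedulerException;

                                public class LoaderQuiesceSketch {

                                  public static void quiesceAndExit(Scheduler scheduler)
                                      throws SchedulerException, InterruptedException {
                                    // 1) Stop firing new triggers on this instance; jobs already running are
                                    //    unaffected, and other clustered nodes keep picking up the schedules.
                                    scheduler.standby();

                                    // 2) Wait for the jobs already executing on this scheduler to finish (or fail).
                                    List<JobExecutionContext> running = scheduler.getCurrentlyExecutingJobs();
                                    while (!running.isEmpty()) {
                                      Thread.sleep(30 * 1000L);
                                      running = scheduler.getCurrentlyExecutingJobs();
                                    }

                                    // 3) Then exit.
                                    scheduler.shutdown(true);
                                    System.exit(0);
                                  }
                                }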

               

I am not sure how processes would “not finish”. Can you explain that part of your response?

 

 

 

 

However, maybe it would be helpful to take a single specific example and walk through it in detail? ( Basically a “Long running/big process” condition. )

 

I have “LDAP Loader jobs” ( mostly “LDAP_GROUPS_FROM_ATTRIBUTES”, but there are other styles of loader jobs too including some SQL jobs.) Some of them can pull in “large numbers of groups and/or members”. 

                In fact, I have had to “break down a single ldap search condition” into many narrower searches to reduce the size and amount of the data ( groups ) returned, so that the RAM/CPU load is manageable over time. Well, and so the job would actually finish.

                ( Just for the record, I have done things like dump millions of ldap objects from this source with standard LDAP command line tools, and it normally took between 30 minutes and about 2 hours depending on the complexity of the search and how well indexes support it. ) So the LDAP source can support the work. And the search that I am using is well indexed, so we should be on the low end of that range. (Yet the loader job takes about 2 hours to complete, when it does not error out and fail. But that is a different topic….)

 

 

                So as a “simple example” ( that I think most universities could relate to ), let us talk about the largest cohort that a university has: its alumni.

                We try to provide some services to our alumni, so the University needs to know who is an alum for authorization data to applications. For our current numbers we are talking about a single group on the order of 500K members. I have isolated that group loader job to just load that one group. And, well, it does not behave very well. It takes a lot of RAM when it runs, and I think I have even observed CPU spikes while it is running. So much so that I have disabled the job and am looking for a “better way” to deal with the large-group “issue” that I see. ( I did not break this “one group” down into “load 26 sub groups” ( by first letter of last name ) and then have a group that has those sub groups as members. But I may need to go there…. I just don’t want to. :( )

                However, in fairness, Grouper 2.4 moved to Ldaptive ( instead of vt-ldap ) and that may change this in some helpful ways. I still think this is a good example for many reasons. ( And no, this set does not only change at the end of terms. It is a continuous flow, with very large spikes of change at the end of term. Believe it or not, we even try to know when our alumni change state to “deceased” as well, and that accounts for most of the continuous membership changes for this group. ) This job can take 2 hours to complete to “success”.

 

 

So I will continue with this example.

Just talking about the run time of this one loader job:

                Obviously this loader job takes time to search the ( ldap ) source for 500k entries (members).  ( And the data can be changing while the “pull of data” is going on too.  But I leave that as a “source” issue to deal with.) From previous experience I expect that to be about 20-40 minutes from “search” to “results”.

                So if that job is running and the loader is killed, then a lot of work (time/CPU cycles) may be “lost”. And it will take time for the next loader to “start again” and get back to the relative point in the job that was killed.

 

                Questions about what happens when the loader job is abruptly stopped:

                                In the middle of the query(s)? How would you “pick up where you left off”? Maybe just start again?

                                While loading the results into the grouper staging table(?) How do you know it was done loading the data? Is there a “total count” recorded before the first record is loaded?

                                While converting the temp data into memberships? ( Maybe you could continue from here… maybe….)

                                Am I describing the internal process of the loader job poorly? <- If so, then it could be that I just don’t understand the phases of the job well enough to see the features.

                                                Maybe there are “gates” that are recoverable points where the next loader could “pick up and keep going”?

 

--

Carey Matthew 

 

From: Gettes, Michael <> 
Sent: Friday, September 7, 2018 10:02 AM
To: Chris Hyzer <>
Cc: Black, Carey M. <>; Ryan Rumbaugh <>; 
Subject: Re: [grouper-users] RE: Status Monitoring - Two Errors

 

Well, that’s cool if we can restart midway.  BUT, if grouper is down for an hour or twelve, I don’t think I would want to restart.  Maybe it is configurable?   The default being something like a restart within 20 minutes causes grouperus loaderus interruptus to be continued.  Longer than that and we continue with the normal schedule???

 

(It’s Friday.  I’m punchy).

 

/mrg

 

On Sep 7, 2018, at 9:38 AM, Hyzer, Chris <> wrote:

 

I don’t think it is bad to stop loader jobs abruptly, but I agree that when it starts again it should continue with in-progress jobs. Right? If we wait until work finishes, how do you define work, and will it ever really finish? If it picks back up where it left off, it should be fine since things are transactional and not marked as complete until complete… Thoughts?

 

Thanks

Chris

 

From:  [] On Behalf Of Gettes, Michael
Sent: Monday, August 27, 2018 12:00 PM
To: Black, Carey M. <>
Cc: Ryan Rumbaugh <>;
Subject: Re: [grouper-users] RE: Status Monitoring - Two Errors

 

I’ve always wanted a quiesce capability.  Something that lets all the current work complete but the current loader instance won’t start any new jobs.  This would be needed for all loader daemons or just specific ones so we can safely take instances down.  I have no idea if this is possible with Quartz and haven’t had a chance to look into it.

 

/mrg

 

On Aug 27, 2018, at 11:20 AM, Black, Carey M. <> wrote:

 

Ryan,

 

RE: “I had been restarting the API daemon” …  ( due to docker use )

                I have often wondered how the “shutdown process” works for the daemon. Is it “graceful” ( and lets all running jobs complete before shutdown) or does it just “pull the plug”? 

                                I think it just pulls the plug.

                                Which “leaves” running jobs as “in progress” (in the DB status table), and they refuse to immediately start when the loader restarts. Well, until the “in progress” record(s) get old enough that they are assumed to be dead. Then the jobs will no longer refuse to start.

 

                I say that to say this:

                                If the loader is restarted repeatedly, quickly, and/or often, you may be interrupting the running jobs, leaving them “in progress” (in the DB), and producing more delay before those jobs restart again. But it all depends on how fast/often those things are spinning up and down.
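                                ( A sketch of how to spot those leftover “in progress” rows, again assuming the grouper_loader_log table and a STARTED status value; adjust names to your schema/version: )

                                        -- loader jobs still marked as started with no end time,
                                        -- e.g. rows left behind by an abrupt restart
                                        SELECT job_name, host, started_time
                                          FROM grouper_loader_log
                                         WHERE status = 'STARTED'
                                           AND ended_time IS NULL
                                         ORDER BY started_time;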

 

                                However, maybe if you are always spinning up instances (and let the old ones run for a bit), you may be able to “wait until a good time” to turn them off.

                                Maybe you could cycle out the old instances gracefully by timing it with these settings?

                                “

                                ##################################

                                ## enabled / disabled cron

                                ##################################

                                

                                #quartz cron-like schedule for enabled/disabled daemon.  Note, this has nothing to do with the changelog

                                #leave blank to disable this, the default is 12:01am, 11:01am, 3:01pm every day: 0 1 0,11,15 * * ?

                                changeLog.enabledDisabled.quartz.cron = 0 1 0,11,15 * * ?

                                “

 

 

RE: how to schedule the “deprovisioningDaemon”

 

                Verify that your grouper-loader.base.properties has this block: ( or you can add it to your grouper-loader.properties )

                NOTE: it was added to the default base as of GRP-1623 ( which maps to grouper_v2_3_0_api_patch_107, and for the UI grouper_v2_3_0_ui_patch_44 ). You are likely past those patches… but just saying. :)

                “

                #####################################

                ## Deprovisioning Job

                #####################################

                otherJob.deprovisioningDaemon.class = edu.internet2.middleware.grouper.app.deprovisioning.GrouperDeprovisioningJob

                otherJob.deprovisioningDaemon.quartzCron = 0 0 2 * * ?

                “
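                Once that block is in place, you can also kick the job off once by hand from gsh to confirm it runs ( the same loaderRunOneJob pattern you used for the change log job ):

                                loaderRunOneJob("OTHER_JOB_deprovisioningDaemon")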

 

HTH.

 

-- 

Carey Matthew 

 

From:  <> On Behalf Of Ryan Rumbaugh
Sent: Monday, August 27, 2018 10:12 AM
To:
Subject: [grouper-users] RE: Status Monitoring - Two Errors

 

An update to this issue that may be helpful to others…

 

Before I left the office on Friday I ran the gsh command loaderRunOneJob("CHANGE_LOG_changeLogTempToChangeLog"), and now the number of rows in the grouper_change_log_entry_temp table is zero! I tried running that before, but really didn’t see much of anything happening. Maybe I was just too impatient.
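( For anyone following along, an easy way to watch the backlog drain while that job runs is to count the temp table directly, e.g.: )

                SELECT COUNT(*) FROM grouper_change_log_entry_temp;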

 

Now when accessing grouper/status?diagnosticType=all the only error is related to “OTHER_JOB_deprovisioningDaemon”. If anyone has any tips on how to get that kick-started, it would be greatly appreciated.

 

 

--

Ryan Rumbaugh

 

From:  <> On Behalf Of Ryan Rumbaugh
Sent: Friday, August 24, 2018 9:15 AM
To:
Subject: [grouper-users] Status Monitoring - Two Errors

 

Good morning,

 

We would like to begin monitoring the status of grouper by using the diagnostic pages at grouper/status?diagnosticType=all, but before doing so I would like to take care of the two issues shown below.

 

Can anyone provide tips/suggestions on how to fix the two failures for CHANGE_LOG_changeLogTempToChangeLog and OTHER_JOB_deprovisioningDaemon?

 

We had a Java heap issue late last week which I believe caused the “grouper_change_log_entry_temp” table to keep growing. It’s at 69,886 rows currently while earlier this week it was at 50k. Thanks for any insight.

 

 

 

2 errors in the diagnostic tasks:

 

DiagnosticLoaderJobTest, Loader job CHANGE_LOG_changeLogTempToChangeLog

 

DiagnosticLoaderJobTest, Loader job OTHER_JOB_deprovisioningDaemon

 

 

 

Error stack for: loader_CHANGE_LOG_changeLogTempToChangeLog

java.lang.RuntimeException: Cant find a success in job CHANGE_LOG_changeLogTempToChangeLog since: 2018/08/16 14:19:22.000, expecting one in the last 30 minutes

                at edu.internet2.middleware.grouper.j2ee.status.DiagnosticLoaderJobTest.doTask(DiagnosticLoaderJobTest.java:175)

                at edu.internet2.middleware.grouper.j2ee.status.DiagnosticTask.executeTask(DiagnosticTask.java:78)

                at edu.internet2.middleware.grouper.j2ee.status.GrouperStatusServlet.doGet(GrouperStatusServlet.java:180)

                at javax.servlet.http.HttpServlet.service(HttpServlet.java:635)

                at javax.servlet.http.HttpServlet.service(HttpServlet.java:742)

                at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:230)

                at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:165)

                at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:52)

                at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:192)

                at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:165)

                at org.owasp.csrfguard.CsrfGuardFilter.doFilter(CsrfGuardFilter.java:110)

                at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:192)

                at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:165)

                at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:198)

                at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:96)

                at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:478)

                at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:140)

                at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:80)

                at org.apache.catalina.valves.AbstractAccessLogValve.invoke(AbstractAccessLogValve.java:624)

                at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:87)

                at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:341)

                at org.apache.coyote.ajp.AjpProcessor.service(AjpProcessor.java:478)

                at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:66)

                at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:798)

                at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1441)

                at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:49)

                at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)

                at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)

                at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)

                at java.lang.Thread.run(Thread.java:748)

 

 

Error stack for: loader_OTHER_JOB_deprovisioningDaemon

java.lang.RuntimeException: Cant find a success in job OTHER_JOB_deprovisioningDaemon, expecting one in the last 3120 minutes

                at edu.internet2.middleware.grouper.j2ee.status.DiagnosticLoaderJobTest.doTask(DiagnosticLoaderJobTest.java:173)

                at edu.internet2.middleware.grouper.j2ee.status.DiagnosticTask.executeTask(DiagnosticTask.java:78)

                at edu.internet2.middleware.grouper.j2ee.status.GrouperStatusServlet.doGet(GrouperStatusServlet.java:180)

                at javax.servlet.http.HttpServlet.service(HttpServlet.java:635)

                at javax.servlet.http.HttpServlet.service(HttpServlet.java:742)

                at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:230)

                at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:165)

                at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:52)

                at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:192)

                at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:165)

                at org.owasp.csrfguard.CsrfGuardFilter.doFilter(CsrfGuardFilter.java:110)

                at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:192)

                at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:165)

                at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:198)

                at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:96)

                at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:478)

                at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:140)

                at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:80)

                at org.apache.catalina.valves.AbstractAccessLogValve.invoke(AbstractAccessLogValve.java:624)

                at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:87)

                at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:341)

                at org.apache.coyote.ajp.AjpProcessor.service(AjpProcessor.java:478)

                at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:66)

                at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:798)

                at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1441)

                at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:49)

                at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)

                at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)

                at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)

                at java.lang.Thread.run(Thread.java:748)

 

--

Ryan Rumbaugh

 



