
shibboleth-dev - RE: Mockrunner 0.3.6

  • From: "Howard Gilbert" <>
  • To: <>
  • Subject: RE: Mockrunner 0.3.6
  • Date: Wed, 16 Nov 2005 23:13:01 -0500

> Not that I'm against testing, but before anything else gets checked in
> I'd like us to figure out exactly what we're testing, how we're testing,
> and how we're implementing the tests. We now have three separate testing
> sets: the JUnit tests, the tests that Will wrote, and the stuff that
> Howard recently checked in. I don't think anyone has a good idea of
> what each one covers, or is meant to cover.

A Unit Test tests a class or group of classes that implement an interface.
The interface has some well-defined specification, and the Unit Tests should
exercise each aspect of that specification, verifying that correct requests
are processed correctly and that erroneous requests are rejected properly.
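As an illustration, a unit test of a well-specified interface checks both the correct and the erroneous path. The interface and names below are hypothetical stand-ins, not the actual Shibboleth plug-in interfaces:

```java
// Hypothetical example: a minimal "validator" interface in the spirit of the
// plug-in points discussed below. The names are illustrative, not Shibboleth's.
interface RequestValidator {
    // Returns true for a well-formed request, false otherwise.
    boolean validate(String request);
}

public class RequestValidatorTest {
    // A trivial implementation standing in for a configured component.
    static final RequestValidator validator =
        request -> request != null && request.startsWith("urn:");

    public static void main(String[] args) {
        // Correct requests must be processed correctly...
        if (!validator.validate("urn:example:resource"))
            throw new AssertionError("valid request rejected");
        // ...and erroneous requests must be rejected properly.
        if (validator.validate(null) || validator.validate("bogus"))
            throw new AssertionError("invalid request accepted");
        System.out.println("all checks passed");
    }
}
```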

In the existing code, the obvious use for Unit Tests is at each of the
well-defined plug-in interface points: Trust, Metadata, Credentials,
RequestMap, AAP, ARP, etc. A test creates a class implementing the interface
and then verifies that the responses to requests correspond to the
configuration of the component.

When the component is called through the Servlet API, the Unit Test must
create the necessary control blocks to simulate a Servlet Container. That is
the function of Mockrunner.
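Mockrunner supplies ready-made mock implementations of the Servlet API; the technique it embodies can be sketched with a hand-rolled stand-in. The interface below is a deliberately simplified hypothetical, not the real HttpServletRequest:

```java
import java.util.HashMap;
import java.util.Map;

// Simplified stand-in for the Servlet API (the real javax.servlet interfaces
// have many more methods; this only illustrates the mock technique).
interface SimpleRequest {
    String getParameter(String name);
    String getRequestURL();
}

// A mock "control block" the test populates, playing the container's role.
class MockRequest implements SimpleRequest {
    private final Map<String, String> params = new HashMap<>();
    private String url;

    MockRequest setURL(String url) { this.url = url; return this; }
    MockRequest setParameter(String name, String value) {
        params.put(name, value); return this;
    }
    public String getParameter(String name) { return params.get(name); }
    public String getRequestURL() { return url; }
}

public class MockRequestDemo {
    public static void main(String[] args) {
        // The test scripts the request the container would have delivered.
        SimpleRequest req = new MockRequest()
            .setURL("https://sp.example.org/secure/data")
            .setParameter("target", "https://sp.example.org/secure/data");
        // A component under test can now be driven without a real container.
        if (!"https://sp.example.org/secure/data".equals(req.getRequestURL()))
            throw new AssertionError("mock did not record URL");
        System.out.println("mock request ready: " + req.getRequestURL());
    }
}
```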

An Integration Test combines two or more unit classes into part or all of
the final application. It verifies that pieces that work separately also
work together.

Will has a set of Integration Tests that exercise the IdP. This works
because the IdP can be tested on its own. Mockrunner is used to generate
Servlet-based Requests and to check Servlet-based Responses.

My tests are one step higher than that. Each Test class can create more than
one instance of the IdP, SP, or Resource Filter context. It can then test
protocol sequences that involve more complicated interactions. These tests
completely exercise all processing of the tested components for one or more
protocol scenarios.

In the most complete case, my Integration Test first goes to the Resource
context and runs the Filter with a data URL that requires session
attributes. If it works correctly, the Filter should generate a redirect to
the configured WAYF. However, the handler (was SHIRE) URL can imply POST or
Artifact protocol and it can send the IdP response to either the SP server
context or back to the Filter (Resource) context. All four possibilities
have to be tested. So various test cases take the data from the Filter and
plug it into the SSO function of the IdP, then they take the POST from the
IdP and plug it back into the AssertionConsumerServlet or the Resource
Filter.
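The four possibilities are the cross product of the binding implied by the handler URL and the place the IdP response is consumed. A sketch with hypothetical names:

```java
// Sketch of the four test-case combinations described above: the handler URL
// can imply POST or Artifact, and the IdP response can go to the SP server
// context or back to the Resource Filter. Names are illustrative.
public class ProtocolCombinations {
    enum Binding  { POST, ARTIFACT }
    enum Consumer { SP_SERVLET, RESOURCE_FILTER }

    public static void main(String[] args) {
        int cases = 0;
        for (Binding b : Binding.values())
            for (Consumer c : Consumer.values()) {
                // Each pair corresponds to one end-to-end test case.
                System.out.println(b + " -> " + c);
                cases++;
            }
        if (cases != 4)
            throw new AssertionError("expected four combinations");
    }
}
```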

Under the covers there is either an Attribute or Artifact Query to create
the session. Then a Redirect back to the Resource Filter is simulated and
the attributes should be loaded and mapped to all the appropriate aliases
and Header names.

What this test tends to shake out are sequencing problems. For example, in
one of these tests, the Resource Filter was called three times:

1. The first time with a raw Resource URL, to produce a Redirect to the WAYF.
2. The second time from the IdP POST, to consume the Assertion.
3. The third time with a redirect back to the original raw Resource URL, but
   this time there should be a session with attributes.
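The three calls above can be sketched as a single decision function. This is a hypothetical simplification of what the Filter decides, not the actual SP code:

```java
// Hypothetical sketch of the decision the Filter makes on each call, driven
// by whether a session already exists and whether the request carries an
// assertion POST. An illustration only, not the real Filter logic.
public class FilterSequenceSketch {
    enum Action { REDIRECT_TO_WAYF, CONSUME_ASSERTION, SERVE_WITH_ATTRIBUTES }

    static Action decide(boolean hasSession, boolean isAssertionPost) {
        if (isAssertionPost) return Action.CONSUME_ASSERTION;   // second call
        if (!hasSession)     return Action.REDIRECT_TO_WAYF;    // first call
        return Action.SERVE_WITH_ATTRIBUTES;                    // third call
    }

    public static void main(String[] args) {
        // First call: raw Resource URL, no session yet.
        if (decide(false, false) != Action.REDIRECT_TO_WAYF)
            throw new AssertionError("expected redirect to WAYF");
        // Second call: the POST from the IdP arrives.
        if (decide(false, true) != Action.CONSUME_ASSERTION)
            throw new AssertionError("expected assertion consumption");
        // Third call: redirected back, session with attributes now exists.
        if (decide(true, false) != Action.SERVE_WITH_ATTRIBUTES)
            throw new AssertionError("expected attribute-backed session");
        System.out.println("sequence checks passed");
    }
}
```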

The tests verify that the same Filter code selects the right condition and
performs the correct function based on the Request, Cookie, and Session
state. I suppose it would have been possible to do this without the IdP, and
just create the Session attributes manually, but that is harder.

One of the tests of correct operation is that the Redirect URL generated by
the SP AssertionConsumerServlet or by the Filter at the end of the
processing of the POST is the same URL presented to the Filter at the
beginning of the first step. For that to happen, several dozen intermediate
steps must all complete correctly through the sequence.
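The invariant being checked reduces to a single comparison at the end of the sequence. A minimal sketch, with hypothetical names and URLs:

```java
// Hypothetical sketch of the end-to-end invariant: the Redirect produced at
// the end of the POST processing must equal the URL originally presented to
// the Filter at the first step. Names and URLs are illustrative.
public class RoundTripCheck {
    static boolean roundTripPreserved(String originalTarget, String finalRedirect) {
        return originalTarget.equals(finalRedirect);
    }

    public static void main(String[] args) {
        String original = "https://sp.example.org/secure/page";
        // In the real test this value emerges only after the whole sequence:
        // WAYF redirect, SSO, POST, assertion consumption, session creation.
        String redirectAfterPost = "https://sp.example.org/secure/page";
        if (!roundTripPreserved(original, redirectAfterPost))
            throw new AssertionError("target URL lost during protocol sequence");
        System.out.println("round trip preserved");
    }
}
```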

It is very hard to test whether the SP generates a correct Artifact Query
unless you have a real Artifact and a real IdP to process it. For all this
to work, the SP configuration file has to match the Metadata that the IdP is
using, and vice versa. The tests certainly point out where the
AssertionConsumer URLs in the SP configuration files don't match the IdP
metadata, or where there is an audience or Entity/providerId conflict, and I
use this to get the configuration files right. This is not a good way to test
the response to single configuration parameter mismatches, but I don't know
of a better way to do it.

There may be a way to create real Unit Tests of some of these issues, but I
don't know how to do it. What I do have is a test framework where you can
create any Principal (or group of Principals) with any attributes, then have
them access any resources through the SSO and SP functions.




