Implement integration tests #32

Open
fredreichbier opened this issue Jul 3, 2017 · 6 comments

@fredreichbier (Contributor)

It would be nice to have "integration tests": we could fake user input and actions with mocks and check that they result in the correct configurations.

@cornelinux (Member)

Is there a good way to mock the usage of the dialogs? To me this sounds good. We could run the redundant appliance snapshots and start the integration tests with common tasks.

@fredreichbier (Contributor, Author)

I hope so :) I think this may be possible using the mock module: we could replace dialog's methods like inputbox(...) with mock functions that return a result that we have predefined in our integration test, e.g.:

# test creation of admin
user_actions = MockUser()
user_actions.add_inputbox_answer('admin')
user_actions.add_inputbox_answer('password')
with activate_mock(user_actions):
    ...  # run the appliance tool here
# afterwards, check that admin/password exists in the database

In other words, we would predefine a "screenplay" of the user's actions, then run the appliance tool in which we read the user's reactions to dialogs from the screenplay.
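
A minimal sketch of how that could be wired up with unittest.mock, assuming the appliance drives its dialogs through pythondialog's Dialog class (MockUser and activate_mock are just the hypothetical names from the snippet above):

from contextlib import contextmanager
from unittest import mock

from dialog import Dialog


class MockUser:
    """Holds a 'screenplay' of predefined inputbox answers."""

    def __init__(self):
        self._answers = []

    def add_inputbox_answer(self, answer):
        self._answers.append(answer)

    def next_inputbox_answer(self, *args, **kwargs):
        if self._answers:
            # simulate the user typing the next answer and pressing OK
            return Dialog.OK, self._answers.pop(0)
        # screenplay exhausted: simulate the user pressing Cancel
        return Dialog.CANCEL, ""


@contextmanager
def activate_mock(user_actions):
    """Replace Dialog.inputbox with a mock that replays the screenplay."""
    with mock.patch.object(Dialog, "inputbox",
                           side_effect=user_actions.next_inputbox_answer):
        yield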

@cornelinux (Member)

awesome.

fredreichbier pushed a commit that referenced this issue Jul 10, 2017
This is necessary in order to import the dialogs
for integration testing.

Working on #32
fredreichbier pushed a commit that referenced this issue Jul 10, 2017
The API uses mocks to simulate user behavior.

Working on #32
@fredreichbier (Contributor, Author)

The 32/integration-tests branch now has a prototype API for implementing integration tests. Here is an example test:

def test_set_admin_realms(self):
    # user1 sets admin realms
    user1 = ApplianceBehavior()
    user1.navigate('privacyIDEA', 'admin realms')
    user1.answer_inputbox('super1,super2,super3')
    self.simulate_run(user1)
    p_config1 = PrivacyIDEAConfig(DEFAULT_CONFIG)
    self.assertEqual(p_config1.get_superusers(), ['super1', 'super2', 'super3'])
    # user2 sets other admin realms
    user2 = ApplianceBehavior()
    user2.navigate('privacyIDEA', 'admin realms')
    user2.answer_inputbox('nix')\
         .check(initial('^super1,super2,super3$'))
    self.simulate_run(user2)
    p_config2 = PrivacyIDEAConfig(DEFAULT_CONFIG)
    self.assertEqual(p_config2.get_superusers(), ["nix"])

I think it's pretty nice so far, although I'm not 100% sure the answer_* functions are a nice interface yet :) What do you think?

@cornelinux (Member)

Looks cool to me.
What are your concerns with answer_inputbox?

One other question: how would

  .check(initial('^super1,super2,super3$'))

work? Does it always belong to an answer_* call, or can we call it directly after user2.navigate(), like

user2.navigate("privacyIDEA", "admin realms")
user2.check(initial("^super1,super2,super3$"))

?

@fredreichbier (Contributor, Author)

Reading answer_inputbox('nix').check(...) seems backwards to me: "Answer the inputbox with nix, but oh, please check if the initial value is such-and-such before you do that!" 😄
But I guess I'll keep that API for now, as I cannot think of a cleaner alternative that doesn't overcomplicate everything :-)

Currently, the check always belongs to an answer_* call. Always providing an answer has the nice effect that we end up with complete appliance tool runs: under the hood, simulate_run simulates all provided answers and then simulates a user that chooses Cancel until the appliance tool has exited.
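
Just to illustrate the idea (the real implementation lives on the branch, so this is only a guess at what initial could look like): a check can simply be a callable that simulate_run applies to the inputbox's initial value before entering the scripted answer.

# A sketch only: the actual initial() is defined on the
# 32/integration-tests branch; here it is guessed as a regex
# assertion against the dialog's initial value.
import re


def initial(pattern):
    """Build a check that asserts the inputbox's initial value matches."""
    def check(init_value):
        assert re.search(pattern, init_value), \
            "unexpected initial value: %r" % (init_value,)
    return check

With checks attached to answer_* calls, the mocked inputbox can run the check on its init argument, return OK with the scripted answer, and fall back to Cancel once the screenplay is exhausted, which is what makes every simulated run a complete appliance tool run.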

fredreichbier pushed a commit that referenced this issue Jan 9, 2018
The API uses mocks to simulate user behavior.

Working on #32
cornelinux pushed a commit that referenced this issue Jan 9, 2018
The API uses mocks to simulate user behavior.

Working on #32