[Solved] How to measure the time profile of each Django test?

9👍

Update:
Original post was in 2019 and it seems that the package discussed below (django-slowtests) is no longer maintained. As of now I don't see any maintained forks either.

Old question, but I came across it and noticed that django-slowtests is not mentioned, which I think is absolutely worth mentioning. I've been using it for quite a while now and I'm very satisfied with the results. The only downside is that it does not yet officially state support for Django >= 2, but I have not encountered any issues, and since I use it only in development I feel safe using it.

  • works properly with --parallel
  • is configurable in how many slow tests you want to show after running tests
  • is configurable in the threshold at which it marks a test as slow
  • has report (export) functionality
  • still maintained as of 2019-02

https://github.com/realpython/django-slow-tests

From the docs:
Install:

$ pip install django-slowtests

Add the following settings:

TEST_RUNNER = 'django_slowtests.testrunner.DiscoverSlowestTestsRunner'
NUM_SLOW_TESTS = 10

# (Optional)
SLOW_TEST_THRESHOLD_MS = 200  # Only show tests slower than 200ms

# (Optional)
ALWAYS_GENERATE_SLOW_REPORT = False  # Generate report only when requested using --slowreport flag
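
To generate the slow-test report only on demand (with ALWAYS_GENERATE_SLOW_REPORT = False), pass the flag mentioned in the comment above:

$ python manage.py test --slowreport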

Finally, run your tests:

$ python manage.py test
Creating test database for alias 'default'...
..........
----------------------------------------------------------------------
Ran 10 tests in 0.413s

OK
Destroying test database for alias 'default'...

Ten slowest tests:
0.3597s test_detail_view_with_a_future_poll (polls.tests.PollIndexDetailTests)
0.0284s test_detail_view_with_a_past_poll (polls.tests.PollIndexDetailTests)
0.0068s test_index_view_with_a_future_poll (polls.tests.PollViewTests)
0.0047s test_index_view_with_a_past_poll (polls.tests.PollViewTests)
0.0045s test_index_view_with_two_past_polls (polls.tests.PollViewTests)
0.0041s test_index_view_with_future_poll_and_past_poll (polls.tests.PollViewTests)
0.0036s test_index_view_with_no_polls (polls.tests.PollViewTests)
0.0003s test_was_published_recently_with_future_poll (polls.tests.PollMethodTests)
0.0002s test_was_published_recently_with_recent_poll (polls.tests.PollMethodTests)
0.0002s test_was_published_recently_with_old_poll (polls.tests.PollMethodTests)

7👍

This does enough to get me by (relative timings), but it does not work for tests run in parallel, and there is certainly scope to improve the accuracy of the timings. Nonetheless:

Override the default runner via settings.py:

TEST_RUNNER = 'myapp.test_runner.MyTestRunner'

Then create your own test runner in myapp/test_runner.py:

from django.test.runner import DiscoverRunner

class MyTestRunner(DiscoverRunner):
    # TimedTextTestRunner must be defined (or imported) before this class,
    # since the attribute is evaluated when the class body runs
    test_runner = TimedTextTestRunner

And, in turn, override the resultclass in that:

from unittest.runner import TextTestRunner, TextTestResult

class TimedTextTestRunner(TextTestRunner):
    # TimedTextTestResult (defined next) must likewise be available here
    resultclass = TimedTextTestResult

The result object handles many tests, not just one, so we need a collection of clocks, keyed by test. Then capture the time each test starts and print the elapsed time when the success string is printed:

from time import time


class TimedTextTestResult(TextTestResult):

    def __init__(self, *args, **kwargs):
        super(TimedTextTestResult, self).__init__(*args, **kwargs)
        self.clocks = dict()

    def startTest(self, test):
        self.clocks[test] = time()
        # Skip TextTestResult.startTest and call the grandparent (TestResult)
        # directly, then write the description ourselves so it isn't printed twice
        super(TextTestResult, self).startTest(test)
        if self.showAll:
            self.stream.write(self.getDescription(test))
            self.stream.write(" ... ")
            self.stream.flush()

    def addSuccess(self, test):
        # Likewise bypass TextTestResult.addSuccess so we can print a timed "ok"
        super(TextTestResult, self).addSuccess(test)
        if self.showAll:
            self.stream.writeln("ok-dokey (%.6fs)" % (time() - self.clocks[test]))
        elif self.dots:
            self.stream.write('.')
            self.stream.flush()

Which gave me test reports looking like this:

test_price_impact (api.tests.TestGroupViews) ... ok-dokey (3.123600s)
test_realised_spread (api.tests.TestGroupViews) ... ok-dokey (6.894571s)
test_sqrt_trade_value (api.tests.TestGroupViews) ... ok-dokey (0.147969s)
test_trade_count_share (api.tests.TestGroupViews) ... ok-dokey (3.124844s)
test_trade_size (api.tests.TestGroupViews) ... ok-dokey (3.134234s)
test_value_share (api.tests.TestGroupViews) ... ok-dokey (2.939364s)
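
Note: the per-test "ok-dokey" lines only appear when the result's showAll flag is set, which unittest enables at verbosity 2 or higher, so run the suite with increased verbosity, e.g.:

$ python manage.py test -v 2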

3👍

nose has a timer plugin, which records the wall time of each single test execution.

https://github.com/mahmoudimus/nose-timer/tree/master/nosetimer
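
A minimal sketch of using it with plain nose (the plugin is enabled via its --with-timer flag):

$ pip install nose-timer
$ nosetests --with-timer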

Nose’s Cobertura XML report also shows the time spent in each test by default.


For Django-specific issues there are a number of easy optimizations that can be made to reduce the run time of tests (a sketch of a test-settings override follows this list):

  • use SQLite, if you’re not using DB-specific functionality
  • use the MD5 password hasher (or no password hasher at all)
  • disable migrations
  • remove I/O and isolate logic as much as possible, so that complex model dependencies don’t have to be created
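
Here is a minimal sketch of what such a test-settings override might look like; the module name (myproject/test_settings.py) and the "myapp" app label are placeholders, adjust them for your project:

# myproject/test_settings.py -- hypothetical module; run with:
#   python manage.py test --settings=myproject.test_settings
from myproject.settings import *  # noqa: F401,F403  (reuse the normal settings)

# In-memory SQLite instead of the production database
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.sqlite3",
        "NAME": ":memory:",
    }
}

# Fast (insecure) password hashing for test users
PASSWORD_HASHERS = ["django.contrib.auth.hashers.MD5PasswordHasher"]

# Skip migrations and create tables directly from the current models
# (per-app None values are supported in Django 1.9+)
MIGRATION_MODULES = {"myapp": None}  # "myapp" is a placeholder app label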

How many tests are in test_dependencies and test_metrics? What sorts of tests are in them?
