Testing (Microblog)


Hi, welcome to Getting Started with Django lesson 3. We were going to jump straight to the next project, but I really think we need to take one more episode, a short one this time, and cover testing. Testing is a very important part of being a developer and one that we definitely shouldn't skip. I don't tend to do test-driven development, where you would write the tests first, but I do try and make sure to write tests before anything goes live.

Packages we're going to use

The first package we're going to talk about and use is django-discover-runner and the second is coverage.py. I've already done a vagrant up to get my VM up and running so I'll do vagrant ssh to SSH into the machine.

After that, I'll, of course, do source ~/blog-venv/bin/activate and then cd /vagrant/projects/microblog.


Once I'm sourced into the virtualenv, I'll go ahead and run pip install django-discover-runner coverage. If you haven't found this out already, you can name multiple packages that you want Pip to install by separating them with spaces.


Django-discover-runner is basically just a smarter version of Django's default test runner. Instead of looking in your INSTALLED_APPS tuple and then finding tests in those applications, it uses your source tree to find files whose names start with the string "test". This has the added benefit of avoiding running tests for third-party apps and Django itself.

There are a couple of settings that we're going to specify later that control where it looks for test files and what string it uses to match them.


Coverage.py keeps track of which lines in your codebase are and are not executed by your tests and gives you a rating so you know how much more you need to test and where. It can print out a guide in the terminal or you can have it generate HTML that you can click through and get a better visual on.

Remove old tests

In our blog/ directory, we have a file named tests.py. This would be fine if we were testing just minor functionality or didn't actually want to test our code. Since we'd like to write more serious tests, we're going to trash this file.

In its place, we'll create a directory named tests/ and then touch tests/__init__.py. Since we'll be using django-discover-runner, we could actually place the tests/ directory wherever we wanted, and you'd probably want to do this if you were actually creating a distributable app. Since we're not, we'll just keep the tests with the app.
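In shell terms, from the project root, that swap looks roughly like this (assuming the blog/ app directory from the earlier lessons):

```shell
# Replace the single tests.py module with a tests/ package
rm blog/tests.py
mkdir blog/tests
touch blog/tests/__init__.py
```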


If we run python manage.py test right now, it'll run all of our tests, which we don't want. So the first thing we need to do is create a set of settings for tests. In microblog/settings/, you should have base.py and local.py. I'll create a new one, for testing, which I'll name testing.py. Now let's open it up for editing.

On Snipt, I've uploaded a snippet that works really well for our purposes here. It's tagged with "gswd", so it should be easy to find. Copy and paste the lines into your test settings.

The first line imports our root() lambda function from our base settings. This'll let us define paths more easily. (NOTE: I fixed the snippet but got this wrong in the video. The first line should import * so that we get everything)

The last four settings control what class we use for running our tests (one setting), where to look for tests (two settings), and what pattern we want to use for matching test filenames (one setting). We want our test runner to start looking for tests one level above where root() would place it, so both of the location settings are set to root('..').

The DATABASES dict creates a different default database for our tests to use. Normally, testing uses the same database settings as your actual site, but spawns a new database whose name is prefixed with test_. We're actually changing the database engine from Postgres to SQLite3 and, thanks to the NAME being :memory:, it lives entirely in memory instead of being written to disk.
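Put together, the testing settings end up looking roughly like this. This is a sketch, not the exact Snipt snippet; the TEST_* setting names come from django-discover-runner:

```python
# microblog/settings/testing.py -- a sketch of the test settings;
# the canonical version is the Snipt snippet tagged "gswd"
from .base import *  # noqa: bring in everything from the base settings

# In-memory SQLite keeps test runs fast and leaves Postgres alone
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.sqlite3',
        'NAME': ':memory:',
    }
}

# django-discover-runner configuration: the runner class, the two
# locations to search, and the filename pattern to match
TEST_RUNNER = 'discover_runner.DiscoverRunner'
TEST_DISCOVER_TOP_LEVEL = root('..')
TEST_DISCOVER_ROOT = root('..')
TEST_DISCOVER_PATTERN = 'test*.py'
```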

I know, normally, I've said not to use SQLite. You shouldn't use it for development. I think it works well as a testing database, though, mostly because of speed. If you need to test complex, database-controlled logic, don't use a different engine than what you use in production.

Running tests

(There's a mistake in the video that's just too hard to fix right now)

If we go back to the directory that contains manage.py, we can run python manage.py test --settings=microblog.settings.testing and it'll use our test runner, our database, and run zero tests, because we haven't written any yet. We'll be running tests this same way the entire time.

If we run coverage run manage.py test --settings=microblog.settings.testing, we still get 0 tests run, but now we can run coverage report and we get a list of all of the modules that coverage has paid attention to. At the bottom, though, we see our own code listed and we can see what coverage we get even without tests. That doesn't mean we don't need to write them, though!

Writing our tests

Before we start writing tests, we should think about what we want to test. To start that, let's open up blog/views.py, blog/models.py, and microblog/views.py. These three files represent almost all of the code we wrote for the project so far. We can test each view that we wrote (all three of them) and we can test our model and model manager.


Let's start with testing the models. First, we need to create blog/tests/test_models.py and then open it up for editing. To start creating a basic Django test case, we first need to import a few things. We need to bring in TestCase from django.test. This is the class that all of our test classes will start from.

We also need our model, so we'll import Post from ..models; again, we want to use relative paths. The .. works just like it would in a UNIX file system, moving up to the parent package to find the desired module.

We'll create a new class named BlogTests and it'll extend TestCase. We'll create a simple test inside here named test_model_creation. All test functions will begin with the word "test". Assign Blog.objects.create() to a variable named blog and then let's check out our model to see exactly what we need to provide and what would be good to test explicitly.

Our model requires a title and an author. Since we have to have an author every time, we need to make sure one is available in every test. First, we need to import User from django.contrib.auth.models. Then we can add a setUp method to our class. Methods named setUp in a TestCase are run, automatically, before every test method. Conversely, you can create a method named tearDown which will be run after every test method.
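The setUp/tearDown lifecycle comes from Python's unittest, which Django's TestCase builds on. A minimal plain-unittest illustration of the "runs before every test" behavior:

```python
import unittest

class LifecycleDemo(unittest.TestCase):
    def setUp(self):
        # Runs automatically before *every* test method
        self.items = []

    def tearDown(self):
        # Runs automatically after every test method
        self.items = None

    def test_append(self):
        self.items.append(1)
        self.assertEqual(self.items, [1])

    def test_fresh_fixture(self):
        # setUp ran again, so test_append's change is gone
        self.assertEqual(self.items, [])
```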

In setUp, we can create a new User with create() and assign it to self.user so it's available in every test. Since we're not using this user for anything other than a foreign key in tests, we don't have to worry about providing a password or anything else.

Now back to our test_model_creation method. In the create() call, we'll set the title to "Test Blog Post", the slug to "test-blog-post", the author to self.user. Since we're testing that it creates the model instance correctly, we'll assert that isinstance(blog, Post) returns True. Actually, we should change all of our uses of "blog" to "post" since that's our actual model name.

Let's actually make the test a bit more useful. First, copy and paste the import for slugify from models.py to test_models.py. Since our model has a custom save() which provides a slug for us, we should test that that actually works as desired.
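For intuition, slugify turns "Test Blog Post" into "test-blog-post". Here's a rough pure-Python approximation of what it does (a sketch, not Django's actual implementation):

```python
import re

def rough_slugify(value):
    # Approximation of Django's slugify: drop non-word characters,
    # lowercase, and collapse whitespace/hyphens into single hyphens
    value = re.sub(r'[^\w\s-]', '', value).strip().lower()
    return re.sub(r'[-\s]+', '-', value)

print(rough_slugify('Test Blog Post'))  # test-blog-post
```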

Back in test_models, let's take the slug assignment out of the create() so that our model will do it for us. Now, we'll add two more assertions. The first asserts that our instance's __unicode__() method returns our instance's title. The second, that our saved instance's .slug is equal to the output of the slugify() function on our instance's title.

If we run this test, we get that one test ran, which is what we want since we only wrote one test, and it came back with "OK", so the test was successful. If we turn up the verbosity of the test runner (python manage.py test --settings=microblog.settings.testing -v 2), we can see exactly which test or tests were run. If we run it with coverage and then check the coverage report, we can see that blog/models.py has 25 statements and two have been missed. Not bad for only writing one test.

Next model tests

So what else can and should we test? Back in our model, we see that it has get_absolute_url(). This isn't a bad thing to test, so let's do that. First, in our test file, we need to import reverse from django.core.urlresolvers. Then let's make a new test method, this time named test_model_url.

Well, now we need to create a Post more than once. This is a great time and place to make a new method to help us. Let's create a method named create_post. You can cut and paste the lines for creating a post from the test_model_creation test method. Change the line that assigned the instance to a variable so that the method just returns the created instance. In test_model_creation, add a line assigning the post variable to the result of calling create_post. In our new test, do the same thing: post = self.create_post().

Since we're checking the URL, we can go ahead and assert that post.get_absolute_url() is equal to the return value of reverse('blog:detail', kwargs={'slug': post.slug}).

If we run the tests again, they both pass.

Let's expand our create_post method to make it a bit more useful. Add a new argument to the method for setting the title. Give it a default of "Test Blog Post" like before. Our model also has a field named published that controls whether or not an instance is visible; we should make that part of our creation method, too. So add another argument for published and default it to True. You also need to change the arguments to the create method, so set title=title and published=published so both of those attributes are controlled by how our method is called. A great side effect is, with how we've built this method, we don't have to change either of our existing calls to it.

Let's make a new test to test our model manager. We'll give it the name test_model_manager and, within it, we need to create two different posts. The first one, live_post will just call our create_post method with no special arguments. The second, draft_post, will call create_post but give it a custom title and set the published argument to False. Now we want to check the queryset that our model manager's custom method sends back. We'll do that with assertIn.

assertIn checks that the first item is in the second item. So we'll assert that live_post is in Post.objects.live(). We also want to assert that draft_post is not in that same queryset. If either of these assertions fail, we'll know that our model manager isn't working correctly. If we run the tests, though, we see that they all pass.
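Conceptually, the manager's live() is just a filter on the published flag. In plain-Python terms (a sketch of the idea, not the actual manager code):

```python
# Plain-Python analogue of Post.objects.live()
posts = [
    {'title': 'Test Blog Post', 'published': True},
    {'title': 'Draft Post', 'published': False},
]

def live(posts):
    # The queryset equivalent would be self.filter(published=True)
    return [p for p in posts if p['published']]

live_titles = [p['title'] for p in live(posts)]
print(live_titles)  # ['Test Blog Post']
```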

Making a test fail

Let's copy and paste our last assertion and change draft_post to live_post. Run the tests again and you'll see an error come up, rather loudly, telling us that our assertNotIn has failed. Now, we could come back to the tests and fix this line if it were a real test case that failed. Since we're just doing this for illustration, delete the line.

Last test on models

Looking back at our model and manager, we've covered pretty much everything. We've checked that the title is returned for calls to __unicode__(), we've tested that the title is slugged appropriately, that the correct URL is returned, and we've checked that our querysets are filtered correctly. Our save() method, however, is supposed to leave existing or provided slugs alone. This is another area we should test.

We'll make one more test method, test_custom_slug. This time, instead of calling our create_post method, we'll just call create() on the Post model and pass in all of the information we want. We could update create_post to take either a set slug or a boolean that would control whether or not it created a slug, if we wanted to.

So set the slug argument to something different from the title argument; I chose the word fizzbuzz. Then we'll assert that our instance's slug is not equal to the slugify()'d version of our instance's title attribute. For a little extra assurance, we'll assert that post.slug and "fizzbuzz" are equal.

Running the tests again (especially after providing the missing author foreign key) shows that all of our tests pass. I think the models are pretty well covered.

More failures

We can be fairly sure now that our model and manager code works exactly as we want, but just to illustrate again how the tests help us catch these things, let's change our model manager to just return the unfiltered queryset. If we run our tests now, we'll get a failure in our test_model_manager test case. Again, these tests give us a good sanity check, especially when doing rewrites and overhauls of somewhat complex functionality.


Now, to test views, we want to start by creating blog/tests/test_views.py and opening it up in our editor.

We need to import reverse and TestCase again. We'll use reverse to generate URLs for our tests to fetch and this actually tests that our urls.py is correctly configured, getting us two test birds with one test stone. We also know that we're going to need users, so bring in the User model, too. We can actually copy our Post import, BlogTests class, and the setUp and create_post methods from our other test file to save a little time. These things could, obviously, be refactored into other modules, but that's not really the best use of our time right now.

We could import our views, too, but I tend to think it's a bit of an anti-pattern, and slower, to create instances of your view classes for tests. As I mentioned before, fetching the views through their URLs gives us a bonus and accomplishes the same goal.

We can rename the class to BlogViewTests just to give them some distinction. We'll leave the user creation alone, we still need that, but let's also add a couple of Post instance creations to setUp so we have posts to work with right away. Again, we'll have live_post and draft_post, both of which follow along with their counterparts in test_models.py.

List view test

Create a new test case named test_list_view. The way you would manually test a list view is to create a post, then load the list view's URL in your browser and make sure the new post is there. We're basically going to do the same thing in this test.

First, we'll assign a url variable to the return value of reverse('blog:list'), which is our list view's route name. If you forget your url route names, you can check back to blog/urls.py. TestCase has a member named client which lets us make requests against our project. We'll make a new variable, named req for request, that holds on to self.client.get(url), which is the response provided by the client when it makes a GET request against that URL. This is what we'll make assertions against.

We'll assert that req's status_code is equal to the number 200. This indicates that it was a good request with no redirections or errors. We'll also assert that our blog/post_list.html template was used. Since our views don't explicitly name their template files, we can check the file system to make certain of the defaults.

We can also assert that our live_post's title appears somewhere in the req.

Running this test gets us a failure on the last assertion. req is just a TemplateResponse object so it doesn't, by itself, have the post's title. What we actually want to check is the rendered_content attribute of the req object, so change the test to assert against that and run the tests again. They all pass.

Detail view test

Our new test method is named test_detail_view and it starts off much like the list view test. We'll have a url variable, but this time it holds onto the reverse of "blog:detail" with keyword arguments where the slug is equal to self.live_post.slug.

We can copy and paste the assertions and the req line down to this test since they're almost the same. We need to change our template name from "post_list" to "post_detail", though.

We can run the tests again and they all pass. We can add one more assertion, though, as a small test against our template structure. We can assert that reverse('blog:list')'s output appears somewhere in the rendered_content. This is our link back to the list of posts. We can do the same in our previous test, by looking for the URL to our live_post in req.rendered_content. This can be done by assigning the reverse()'d URL to a variable and looking for that, or, more easily, by checking for live_post.get_absolute_url()'s value in the rendered_content.

This isn't necessarily a great idea, though, as you have to update the tests when you change your templates. It would be better to test this kind of content with something like Selenium, but that's beyond the scope of this tutorial.

Draft view test and 404s

We need to test one more aspect of our views, and that's that requesting a non-published post should give us a 404. Up at the top of the file, we need to import Http404 from django.http and then we'll start another new test case, this time named test_draft_view. We're basically expecting this one to fail.

While I normally prefer to write my test cases with similar patterns, this time we'll set our url variable to draft_post.get_absolute_url(). Then we'll assert that we expect Http404 to be raised by our following code. Inside the with, we'll fetch the URL.
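For reference, the context-manager form of assertRaises looks like this in plain unittest (ZeroDivisionError stands in for Http404 here, so the example runs outside Django):

```python
import unittest

class RaisesDemo(unittest.TestCase):
    def test_raises(self):
        # The with-block must raise the named exception for the test to pass
        with self.assertRaises(ZeroDivisionError):
            1 / 0
```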

Running this test fails and complains about us missing our 404.html template. Why would it give us that error? Because tests run with DEBUG equal to False, so it expects to find and show a template for 404s instead of just showing Django's standard error page.

Looking back at our test, though, we're doing this the wrong way. When our code raises Http404, that actually just causes Django to render the 404 template and provide a status_code of 404. It doesn't, however, bubble the Http404 error all the way up to our test runner or the requesting user. So we don't need the import at the top or the assertRaises.

Let's do our normal request for the URL and assert that req.status_code is equal to the number 404. This test fails, too, though, because we still haven't created the needed template. So let's create microblog/templates/404.html and try the test again.

All of the tests now pass.

Homework and Coverage

One other thing that should really be tested from our existing code is the HomepageView from microblog.views. I'll leave that as a homework exercise, though. You could also test that the created_at and updated_at dates on the Post model are being set correctly, but that almost smells like testing Django's functionality and can easily be ignored.

If you run coverage html after running your tests, you'll notice a new directory named htmlcov. Open up that directory's index.html in your browser and you'll see similar output to coverage report. Down near the bottom, you'll see your models.py and views.py and everything else. Click on a file and on the lefthand side you'll see green bars. These indicate lines that have been called during testing; lines that are covered by a test. If you open another file at random, you're likely to see red lines; these are lines that have not been called by a test and should have tests written for them.


Hopefully this gets you comfortable with writing basic tests and you'll go ahead and write a test for the HomepageView that checks that the page loads and, perhaps, that it has a link to your list page.

That's all for this time. We'll be back soon with a longer episode getting into the next project. Thanks for watching!

Code Snippets

At the end of everything, here's how the two test files stand. First, blog/tests/test_models.py:


from django.contrib.auth.models import User
from django.core.urlresolvers import reverse
from django.template.defaultfilters import slugify
from django.test import TestCase

from ..models import Post

class BlogTests(TestCase):

    def setUp(self):
        self.user = User.objects.create(username='test')

    def create_post(self, title='Test Blog Post', published=True):
        return Post.objects.create(
            title=title,
            author=self.user,
            published=published)

    def test_model_creation(self):
        post = self.create_post()
        self.assertTrue(isinstance(post, Post))
        self.assertEqual(post.__unicode__(), post.title)
        self.assertEqual(post.slug, slugify(post.title))

    def test_model_url(self):
        post = self.create_post()
        self.assertEqual(post.get_absolute_url(),
            reverse('blog:detail', kwargs={'slug': post.slug}))

    def test_model_manager(self):
        live_post = self.create_post()
        draft_post = self.create_post(title='Draft Post',
            published=False)
        self.assertIn(live_post, Post.objects.live())
        self.assertNotIn(draft_post, Post.objects.live())

    def test_custom_slug(self):
        post = Post.objects.create(
            title='A Post with a Custom Slug',
            slug='fizzbuzz',
            author=self.user)
        self.assertNotEqual(post.slug, slugify(post.title))
        self.assertEqual(post.slug, 'fizzbuzz')

And blog/tests/test_views.py:

from django.contrib.auth.models import User
from django.core.urlresolvers import reverse
from django.test import TestCase

from ..models import Post

class BlogViewTests(TestCase):

    def setUp(self):
        self.user = User.objects.create(username='test')
        self.live_post = self.create_post()
        self.draft_post = self.create_post(title='Draft Post',
            published=False)

    def create_post(self, title='Test Blog Post', published=True):
        return Post.objects.create(
            title=title,
            author=self.user,
            published=published)

    def test_list_view(self):
        url = reverse('blog:list')
        req = self.client.get(url)
        self.assertEqual(req.status_code, 200)
        self.assertTemplateUsed(req, 'blog/post_list.html')
        self.assertIn(self.live_post.title, req.rendered_content)

    def test_detail_view(self):
        url = reverse('blog:detail',
            kwargs={'slug': self.live_post.slug})
        req = self.client.get(url)
        self.assertEqual(req.status_code, 200)
        self.assertTemplateUsed(req, 'blog/post_detail.html')
        self.assertIn(self.live_post.title, req.rendered_content)
        self.assertIn(reverse('blog:list'), req.rendered_content)

    def test_draft_view(self):
        url = self.draft_post.get_absolute_url()
        req = self.client.get(url)
        self.assertEqual(req.status_code, 404)
