What To Do When A Testing Suite Isn’t Feasible

There are times when we – programmers and/or our clients – have limited resources with which to write both the expected deliverable and the automated tests for that deliverable. When the application is small enough, you can cut corners and skip tests because you remember (mostly) what happens elsewhere in the code when you add a feature, fix a bug, or refactor. That said, we won’t always work with small applications, and they tend to get bigger and more complex over time. This makes manual testing difficult and super annoying.

For my last few projects, I was forced to work without automated testing and honestly, it was embarrassing to have the client email me after a code push to say that the application was breaking in places where I hadn’t even touched the code.

So, in cases where my client either had no budget for, or no intention of, adding an automated test framework, I started testing the whole website’s basic functionality by sending an HTTP request to each individual page, parsing the response headers, and looking for the ‘200’ response code. It sounds plain and simple, but there is a lot you can do to ensure fidelity without actually having to write any tests: unit, functional, or integration.


Tests don’t get written in a lot of contract projects for a variety of reasons, so what can you do?

Automated Testing

In web development, automated tests comprise three major test types: unit tests, functional tests, and integration tests. We often combine unit tests with functional and integration tests to make sure everything runs smoothly as a whole application. When these tests are run in unison or sequentially (preferably with a single command or click), we call them automated tests, unit or not.

Largely, the purpose of these tests (at least in web development) is to make sure all application pages are rendered without trouble, free from fatal (application-halting) errors or bugs.

Unit Testing

Unit testing is a software development process in which the smallest parts of code – units – are independently tested for correct operation. Here’s an example in Ruby:

test "should return active users" do
    active_user = create(:user, active: true)
    non_active_user = create(:user, active: false)
    result = User.active

    assert_equal [active_user], result
end

Functional Testing

Functional testing is a technique used to check the features and functionality of the system or software, designed to cover all user interaction scenarios, including failure paths and boundary cases.

Note: all our examples are in Ruby.

test "should get index" do
    get :index
    assert_response :success
    assert_not_nil assigns(:object)
end

Integration Testing

Once the modules are unit tested, they are integrated one by one, sequentially, to check the combinational behavior, and to validate that the requirements are implemented correctly.

test "login and browse site" do
    # login via https
    get "/login"
    assert_response :success
    post_via_redirect "/login", username: users(:david).username, password: users(:david).password
    assert_equal '/welcome', path
    assert_equal 'Welcome david!', flash[:notice]
    get "/articles/all"
    assert_response :success
    assert assigns(:articles)
end

Tests in an Ideal World

Testing is widely accepted in the industry and it makes sense; good tests let you:

  • Quality-assure your whole application with the least human effort
  • Identify bugs more easily because you know exactly where your code is breaking from test failures
  • Create automatic documentation for your code
  • Avoid ‘coding constipation’, which, according to some dude on Stack Overflow, is a humorous way of saying, “when you don’t know what to write next, or you have a daunting task in front of you, start by writing small.”

I could go on and on about how awesome tests are, and how they changed the world and yada yada yada, but you get the point. Conceptually, tests are awesome.

Tests in the Real World

While there are merits to all three types of testing, they don’t get written in most projects. Why? Well, let me break it down:


Deadlines

Everyone has deadlines, and writing fresh tests can get in the way of meeting one. It can take time and a half (or more) to write an application and its respective tests. Now, some of you will not agree with this, citing the time ultimately saved, but I don’t think this is the case, and I’ll explain why in ‘Difference in Opinion’.

Client Issues

Often, the client doesn’t really understand what testing is, or why it has value for the application. Clients tend to be more concerned with rapid product delivery and therefore see programmatic testing as counterproductive.

Or, it may be as simple as the client not having the budget to pay for the extra time needed to implement these tests.

Lack of Knowledge

There is a sizeable tribe of developers in the real world that doesn’t know testing exists. At every conference, meetup, concert (even in my dreams), I meet developers who don’t know how to write tests, don’t know what to test, don’t know how to set up a testing framework, and so on. Testing isn’t exactly taught in schools, and it can be a hassle to set up and learn the framework needed to get tests running. So yes, there’s a definite barrier to entry.

‘It’s a Lot of Work’

Writing tests can be overwhelming for both new and experienced programmers, even for those world-changer genius types, and to top it off, writing tests isn’t exciting. One may think, “Why should I engage in unexciting busywork when I could be implementing a major feature with results that will impress my client?” It’s a tough argument.

Last, but not least, it is hard to write tests and computer-science students are not trained for it.

Oh, and refactoring with unit tests is no fun.

Difference in Opinion

In my opinion, unit testing makes sense for algorithmic logic but not so much for coordinating living code.

People claim that even though you’re investing extra time up front in writing tests, it saves you hours later when debugging or changing code. I beg to differ and offer one question: Is your code static, or ever changing?

For most of us, it’s ever changing. If you are writing successful software, you’re always adding features, changing existing ones, removing them, eating them, whatever, and to accommodate these changes, you must keep changing your tests, and changing your tests takes time.

But, You Need Some Kind Of Testing

No one will argue that lacking any sort of testing is the worst possible case. After making changes in your code, you need to confirm that it actually works. A lot of programmers try to manually test the basics: Is the page rendering in the browser? Is the form being submitted? Is the correct content being displayed? And so on, but in my opinion, this is barbaric, inefficient and labour intensive.


What I Use Instead

The purpose of testing a web app, be it manually or automatically, is to confirm that any given page is rendered in the user’s browser without any fatal errors, and that it shows its content correctly. One way (and in most cases, an easier way) to achieve this is by sending HTTP requests to the endpoints of the app and parsing the response. The response code tells you whether the page was delivered successfully. It’s easy to test for content by parsing the response body of the HTTP request and searching for specific text string matches, or you can be one step fancier and use a web scraping library such as Nokogiri.
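To make this concrete, here is a minimal sketch of that idea using only Ruby’s standard library. The helper names (`page_errors`, `check_page`), the URL, and the expected text are placeholders of my own, not code from any framework; adapt them to your app.

```ruby
require 'net/http'
require 'uri'

# Pure helper: inspect a status code and a response body, and return a
# list of human-readable failures (empty means the page looks fine).
def page_errors(url, code, body, expected_text)
  errors = []
  errors << "#{url} returned HTTP #{code}" unless code == '200'
  errors << "'#{expected_text}' not found on #{url}" unless body.include?(expected_text)
  errors
end

# Thin wrapper that performs the real request against a running app.
def check_page(url, expected_text)
  response = Net::HTTP.get_response(URI(url))
  page_errors(url, response.code, response.body, expected_text)
end

# check_page('http://localhost:3000/articles', 'My first article')
# returns [] when the page renders and shows the expected content
```

Keeping the checking logic separate from the request makes it reusable across pages, and it doubles as a readable log of what each page is supposed to show.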

If some endpoints require a user login, you can use libraries designed for automating interactions (ideal when doing integration tests), such as Mechanize, to log in or click on certain links. Really, in the big picture of automated testing, this looks a lot like integration or functional testing (depending on how you use them), but it’s a lot quicker to write and can be included in an existing project, or added to a new one, with less effort than setting up a whole testing framework.
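If even Mechanize feels like too much setup, a basic login can be scripted with the standard library alone. This is a hedged sketch: the `/login` path, the parameter names, and the helper names are assumptions about a typical app, not a specific API.

```ruby
require 'net/http'
require 'uri'

# Trim cookie attributes (path, expiry, flags) so the session cookie
# returned by the login response can be replayed on later requests.
def session_cookie(set_cookie_header)
  set_cookie_header && set_cookie_header.split(';').first
end

# Log in via a form POST, then fetch a protected path with the session
# cookie attached. Assumed endpoint and parameter names.
def fetch_logged_in(base, path, username, password)
  login = Net::HTTP.post_form(URI("#{base}/login"),
                              'username' => username, 'password' => password)
  request = Net::HTTP::Get.new(URI("#{base}#{path}"))
  cookie = session_cookie(login['Set-Cookie'])
  request['Cookie'] = cookie if cookie
  Net::HTTP.start(request.uri.host, request.uri.port) { |http| http.request(request) }
end

# fetch_logged_in('http://localhost:3000', '/articles/all', 'david', 'secret')
```

For anything involving JavaScript, redirects with CSRF tokens, or multi-step flows, Mechanize (or a browser driver) is the better fit; this sketch only covers plain cookie-based sessions.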

Edge cases present another problem when dealing with large databases with a wide range of values; testing whether our application is working smoothly across all anticipated datasets can be daunting.

One way to go about it is to anticipate all the edge cases (which is not merely difficult, it’s often impossible) and write a test for each one. This could easily become hundreds of lines of code (imagine the horror) and cumbersome to maintain. Yet, with HTTP requests and just one line of code, you can test such edge cases directly on the data from production, downloaded locally to your development machine or to a staging server.

Now of course, this testing technique is not a silver bullet and has lots of shortcomings, the same as any other method, but I find these types of tests faster, and easier, to write and modify.


In Practice: Testing with HTTP requests

Since we’ve already established that writing code without any kind of accompanying tests isn’t a good idea, my very basic go-to test for an entire application is to send HTTP requests to all of its pages locally and parse the response headers for a 200 (or desired) code.

For example, if we were to write the above tests (the ones looking for specific content and a fatal error) with HTTP requests instead (in Ruby), it would look something like this:

# testing for fatal error
http_code = `curl -X #{route[:method]} -s -o /dev/null -w "%{http_code}" #{Rails.application.routes.url_helpers.articles_url(host: 'localhost', port: 3000)}`
if http_code !~ /200/
    return "articles_url returned with #{http_code} http code."
end

# testing for content
active_user = create(:user, name: "user1", active: true)
non_active_user = create(:user, name: "user2", active: false)
content = `curl #{Rails.application.routes.url_helpers.active_user_url(host: 'localhost', port: 3000)}`
if content !~ /#{active_user.name}/
    return "Content mismatch: active user #{active_user.name} not found in response body" # You can customise these messages to your liking
end
if content =~ /#{non_active_user.name}/
    return "Content mismatch: non-active user #{non_active_user.name} found in response body"
end

The line curl -X #{route[:method]} -s -o /dev/null -w "%{http_code}" #{Rails.application.routes.url_helpers.articles_url(host: 'localhost', port: 3000)} covers a lot of test cases; any method raising an error on the articles page will be caught here, so it effectively covers hundreds of lines of code in one test.

The second part, which catches content errors specifically, can be used multiple times to check the content on a page. (More complex requests can be handled using Mechanize, but that’s beyond the scope of this blog.)

Now, in cases where you want to test whether a specific page works on a large, varied set of database values (for example, that your article page template works for all the articles in the production database), you could do:

ids = Article.all.select { |post|
    `curl -s -o /dev/null -w "%{http_code}" #{Rails.application.routes.url_helpers.article_url(post, host: 'localhost', port: 3000)}`.to_i != 200
}.map(&:id)
return ids

This will return an array of IDs of all the articles in the database that were not rendered, so now you can manually go to the specific article page and check out the problem.

Now, I understand that this way of testing might not work in certain cases, such as testing a standalone script or sending an email, and it is undeniably slower than unit tests because we are making direct calls to an endpoint for each test, but when you can’t have unit tests, or functional tests, or both, this is better than nothing.

How would you go about structuring these tests? With small, non-complex projects, you can write all your tests in one file and run that file each time before you commit your changes, but most projects will require a suite of tests.

I usually write two to three tests per endpoint, depending on what I’m testing. You can also try testing individual pieces of content (similar to unit testing), but I think that would be redundant and slow, since you would be making an HTTP call for every unit. On the other hand, such tests would be cleaner and easier to understand.

I recommend putting these tests in your regular test folder, with each major endpoint having its own file (in Rails, for example, each model/controller would have one file), and this file can be divided into three parts according to what we are testing. I often have at least three tests:

Test One

Check that the page returns without any fatal errors.

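The original post showed this test’s code only as a screenshot, so here is a sketch of what Test One can look like. The endpoint list, paths, and helper name are my assumptions; the output format mirrors the article’s terminal output.

```ruby
# A hand-maintained list of the endpoints for Post (assumed paths).
POST_ENDPOINTS = [
  { url: 'posts_url',    path: '/posts',     params: [], method: 'GET' },
  { url: 'new_post_url', path: '/posts/new', params: [], method: 'GET' }
]

# Pure helper: keep only the endpoints whose recorded code is not 200.
def failed_urls(results)
  results.reject { |result| result[:http_code] == '200' }
end

# Live run (requires the app on localhost:3000):
# results = POST_ENDPOINTS.map do |route|
#   code = `curl -X #{route[:method]} -s -o /dev/null -w "%{http_code}" http://localhost:3000#{route[:path]}`
#   route.merge(http_code: code)
# end
# puts "List of failed url(s) -- #{failed_urls(results)}"
```

Separating the scan from the curl call keeps the failure-reporting logic trivial to check, and adding a new page to the test is one more line in the list.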

Note how I made a list of all the endpoints for Post and iterated over it to check that each page is rendered without any error. Assuming everything went well, and all the pages were rendered, you will see something like this in the terminal:

➜ sample_app git:(master) ✗ ruby test/http_request/post_test.rb
List of failed url(s) -- []

If any page is not rendered, you will see something like this (in this example, the posts/index page has an error and hence is not rendered):

➜ sample_app git:(master) ✗ ruby test/http_request/post_test.rb
List of failed url(s) -- [{:url=>"posts_url", :params=>[], :method=>"GET", :http_code=>"500"}]

Test Two

Confirm that all the expected content is there:

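This test also appeared only as a screenshot in the original, so here is a hedged sketch of Test Two. The helper name and the expected fields are assumptions.

```ruby
# Pure helper: list every expected string that is missing from a page body.
def missing_contents(body, expected_texts)
  expected_texts.reject { |text| body.include?(text) }
end

# Live run (assumed route, app on localhost:3000):
# body = `curl -s http://localhost:3000/posts/1`
# puts "List of content(s) not found on Post#show page with post id: 1 -- " \
#      "#{missing_contents(body, [post.title, post.description, 'Active'])}"
```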

If all the content we expect is found on the page, the result looks like this (in this example, we make sure posts/:id has a post title, description, and status):

➜ sample_app git:(master) ✗ ruby test/http_request/post_test.rb
List of content(s) not found on Post#show page with post id: 1 -- []

If any expected content is not found on the page (here we expect the page to show the status of the post: ‘Active’ if the post is active, ‘Disabled’ if it is disabled), the result looks like this:

➜ sample_app git:(master) ✗ ruby test/http_request/post_test.rb
List of content(s) not found on Post#show page with post id: 1 -- ["Active"]

Test Three

Check that the page renders across all datasets (if any):

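Again, the original code was a screenshot; here is a sketch of Test Three. The method yields each record’s HTTP status code, so the scanning logic stays testable without a running server; the route and helper name are assumptions.

```ruby
# Fetch every record's page (via the block) and collect the ids of the
# records whose pages did not come back with HTTP 200.
def unrendered_ids(records)
  records.select { |record| yield(record) != 200 }.map { |record| record[:id] }
end

# Live run (assumed route, app on localhost:3000):
# ids = unrendered_ids(Post.all.map { |p| { id: p.id } }) do |record|
#   `curl -s -o /dev/null -w "%{http_code}" http://localhost:3000/posts/#{record[:id]}`.to_i
# end
# puts "List of post(s) with error in rendering -- #{ids}"
```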

If all the pages are rendered without any error, we will get an empty list:

➜ sample_app git:(master) ✗ ruby test/http_request/post_test.rb
List of post(s) with error in rendering -- []

If some of the records have a problem rendering (in this example, the pages with IDs 2 and 5 are giving an error), the result looks like this:

➜ sample_app git:(master) ✗ ruby test/http_request/post_test.rb
List of post(s) with error in rendering -- [2, 5]

If you want to fiddle around with the above demonstration code, here’s my GitHub project.

So Which Is Better? It Depends…

HTTP Request testing might be your best bet if:

  • You’re working with a web app
  • You’re in a time crunch and want to write something fast
  • You’re working with a big, pre-existing project where tests were not written, but you still want some way to check the code
  • Your code involves simple request and response
  • You don’t want to spend a large portion of your time maintaining tests (I read somewhere that unit tests = maintenance hell, and I partially agree)
  • You want to test if an application works across all the values in an existing database

Traditional testing is ideal when:

  • You’re dealing with something other than a web application, such as scripts
  • You’re writing complex, algorithmic code
  • You have time and budget to dedicate to writing tests
  • The business requires bug-free operation or a low error rate (finance, a large user base)

Thanks for reading the article; you should now have a method for testing you can default to, one you can count on when you’re pressed for time.

About the author

Bhushan Lodha, Germany
member since July 15, 2014
Bhushan is a Hacker School alum and a developer proficient in Ruby, Rails, and Backbone.js. He has a knack for design and follows a minimalist approach to make interfaces with excellent UX. He has worked at two startups and has co-founded one as well.
Comments


lol writing a bunch of HTTP tests to check endpoints for a response? Might as well do something like a protractor or selenium test and just test that way. If you're in test maintenance hell that means you either: 1. Have a shitty architecture that isn't modular at all. 2. You're actually changing logic/workflow that should be represented in your unit tests. I get that you're probably just bad at writing tests so you wrote an article about doing HTTP(Acceptance) tests. If you're in a time crunch and need something fast you still write tests. It's called TDD. If you're bad at it, there are plenty of online courses that teach you how to do it well. Sadly these days programmers type so slow that they try to find all sorts of hacks and copy paste solutions to their work. Learn to type faster and learn to refactor. Tools like Resharper help along with good editors(let's hope you're not using notepad).
Christian Pemberton
Hi Bhushan, FYI: with regards to your "Testing isn’t exactly taught in schools" assertion, perhaps this was the case in the past, but now a lot of software engineering programmes teach testing. For example, this particular course is held in July at the University of Oxford: http://www.cs.ox.ac.uk/softeng/subjects/APE.html
Alex Zherebtsov
I totally agree with this opinion. Good tests that check the content and also the behaviour of page components are hard to write, but they are a one-time job with not-so-complicated maintenance, and such tests are much more valuable to me because I can use them to identify regressions. Modern pages are full of JS components whose behaviour is a crucial part of the GUI; the author has described tests that were acceptable 10 years ago, in other words "better than nothing", but nowadays they are a waste of time.
Thiago Negri
Good post! Never thought about doing it (HTTP tests). It seems a huge win for such a low cost. Thanks for sharing!
When the time crunch passes? You should never compromise on code quality, EVER. If you are working on a project where you can't add "legitimate tests", it's a project that's either a prototype or a bunch of throw-away work. I would never release anything to production without "legitimate tests". I'm sorry if I have such high respect for my clients and employers that I don't instantly resort to band-aid solutions. That said, if you feel like you can't write tests because the deadline is so tight, then I would consider looking at some of the other blog posts on this site and getting familiar with test-driven development or with how to productively add unit tests to your projects. It's REALLY not that hard. I swear every person that claims it's hard and claims that it takes a lot of time is just used to writing awful unmaintainable code. Start learning how to write testable code. Once you start writing more and more tests you realize what is considered quality code. Many people read this blog, and having awful advice is unacceptable when the industry has already moved on from old-school hacker applications and developers who like to compromise with band-aids.
Most modern universities these days teach software testing. I don't know where the author went, but my alma mater has plenty of courses on it. It's 2016, not 2005.
Exactly, cheers mate.
Joshua Plicque
You are being a total jerk. This post presents a good band-aid solution for when a project is out of control and has no tests. When the time crunch passes, of course you should add legitimate tests. Your comment is unfairly negative.
I thought the same at first sight, but this idea actually falls quite short. You must remember that most of the site testing you should do relates to its UI, not only the static content it returns. For instance, a test that tells you each page of a search result returns correct values is of little use if the paging buttons do not work, or if the search form does not respond. HTTP testing is not suitable for website testing, only for API testing. For real website testing you should use Selenium or similar suites instead; they're also a low-cost tool that would provide you much more consistent and usable results.
Thiago Negri
There are always better ways to do anything "for real". Given the context of the post (projects with an absolute lack of automated tests), doing HTTP testing like he proposes is easily feasible and has a super low maintenance cost. It does not guarantee that the entire page works, but it guarantees that no one is going to see a page with "oops, something bad happened". At least the bare minimum (e.g. post content) will be available. Yes, testing with Selenium would assert more about the usability of the web site, but it comes with increased project and maintenance costs. There are more refactorings that could break Selenium tests than refactorings that could break HTTP tests. Considering that the project is big enough and has no test base, convincing the client and the team to embrace a tool like Selenium is not a trivial task. Spinning up an idea like HTTP testing seems pretty straightforward and fast enough to deliver without additional costs to the client, and as these tests have a low maintenance cost, I don't see why a client/team wouldn't appreciate having them instead of going blind on a deploy.
Alex Zherebtsov
When a project has no tests, you generally have no constraints on which test methodology to use, and if you know your users are happy with what they have, you don't have to spend time testing legacy code. What you have to do is choose an appropriate test methodology and tools, focus on testing new (recently added) code, and treat legacy code as working well enough. Also, when you have to change old code, you should create tests for that particular update or, if possible, for the whole old module/function/script. In general, the tests described here could be replaced with a trivial grep of the HTTP log, checking that all (or some) URLs were loaded with status 200 and a non-empty response; this work is not really testing. You said "project is out of control"; for me that is possible in two cases: either the owner of the project doesn't care what's going on (in which case the described tests are also out of control and add no value or knowledge about project quality), or you have the task of testing some third-party project (which is a very complicated task and should be done in quite a different way).
You can add Protractor (Selenium under the hood) tests in a matter of 15 minutes with a few configuration files. It is only a high cost if the developer isn't willing to do it.
Mrunal Khatri
Excellent information here. Keep sharing such nice updates. It helps us a lot in understanding testing and the latest trends going on.
Joseph S.
IMHO, with this kind of testing you are only checking whether an HTML page is shown or not (and not even whether it is shown correctly); you can have a lot of mistakes in your business logic, and you cannot become aware of them.
Maati Sk
About that quote: "It's called TDD. If you're bad at it, there are plenty of online courses that teach you how to do it well." Do you recommend some particular websites or courses? Thanks.