Code coverage from manual/automated testing of websites and applications

This article explains how to get code coverage from a website, web application, web service, standalone executable, and so on.

Table of contents

    • Introduction
    • Code coverage steps
      • compiling website
      • instrumenting all dlls or executable
      • start vsperfcmd
      • run application
      • stop application
      • stop vsperfcmd
      • convert code coverage file to xml
      • use report generator to get html formatted report
    • Automating the code coverage process via batch script
    • Conclusion
    • History


Here is the definition of code coverage (courtesy of Wikipedia):
In computer science, code coverage is a measure used to describe the degree to which the source code of a program is executed when a particular test suite runs. A program with high code coverage, measured as a percentage, has had more of its source code executed during testing, which suggests it has a lower chance of containing undetected software bugs compared to a program with low code coverage. Many different metrics can be used to calculate code coverage; some of the most basic are the percent of program subroutines and the percent of program statements called during execution of the test suite.

In my earlier project, we used to write unit test cases and get code coverage out of them. But for a legacy project, we found it hard to write unit test cases because we would have to refactor the code, which is a time-consuming task. So we decided to set up a process that gets us code coverage from manual or automated testing of the website.

Here is the detailed explanation of steps.

Code coverage implementation

Here, I will explain how to get code coverage for a website and a standalone executable. I have chosen a website instead of a web application, because

  • in the earlier project we had a website, and we have not migrated it to a web application.
  • in the case of a web application, we already have DLLs, so it is easier to get code coverage (it saves one step of the whole process).

For a website, we will only have .aspx and .aspx.cs (or .aspx.vb) files along with a .config file. If you have any project references, you will also have a bin folder with DLLs inside.

For the sake of understanding, I have taken a simple calculator as the example, implemented both as a website and as a standalone executable. It takes three inputs (an operator and two operands) and calculates the result.

It has three projects.

  • "CalcDLL", a library project that contains the logic for calculating the value from the inputs. It has one public function, "Calculate", which takes an operator; its constructor takes the two operands. "Calculate" returns the result.
  • A website ("CalculateWebsite"), which has 2 textboxes and a dropdownlist. It takes input from the user and calculates the value. It has a project reference to CalcDLL. It has a publishing profile, which simply copies the files to a separate location; here it points to "C:\demo\".
  • A console project ("CalcConsole"), which takes 3 command-line arguments. E.g. "CalcConsole.exe + 1 1" outputs "Result = 2". This is self-explanatory.



  • This project could also be tested via unit testing. I have skipped that because this article demonstrates code coverage via automated or manual testing of the app (a sort of functional testing).
  • I have added the path of every required tool to my PATH environment variable, which is why I can execute those commands directly.

Pre-steps for getting DLLs and EXEs


For the website, we first need to publish it to a local directory. Since publishing will not copy the ".pdb" files of dependent project references, we need to manually copy that file from the bin folder of the website project (location: \CalculateWebsite\bin\Calc.pdb) to the published directory (C:\demo\bin).

Here is the screenshot of the website.

Compiling website:

For a website, we will have only ".aspx.cs" and ".aspx" files, and since instrumentation works only on DLLs or EXEs, we need to convert the website to a web application or precompile the project. Here is the command to do so.

aspnet_compiler -p %original_dir% -v / %compiled_dir% -d
  • -p is the physical path of the source
  • -v is the virtual directory
  • -d builds the debug version
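Using the example locations from this article (the site published to C:\demo, precompiled output in C:\demo\compiled, as the later vsinstr commands indicate), the concrete invocation might look like this; the exact paths are assumptions:

```bat
REM Precompile the published website so the .aspx.cs code-behind
REM is compiled into DLLs that vsinstr can instrument.
REM Paths are assumptions based on the examples later in this article.
aspnet_compiler -p C:\demo -v / C:\demo\compiled -d
```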

Here is the screenshot of website, after compiling.

This command will generate a DLL with a name similar to App_Web_m55qrtmf.dll, and the output will contain only the ".aspx" files and the config file, because the ".aspx.cs" files have been compiled into that DLL.

Here is the screenshot .


For the console project
For the console project, we need to build it in debug mode.

Note: When we go for code coverage, we always need the debug version of the project. The instrumentation process needs the DLL and PDB files.

Instrumenting DLLs:

Here is the command to instrument.

vsinstr /coverage <dll-or-exe>

The location of vsinstr.exe is "C:\Program Files (x86)\Microsoft Visual Studio 12.0\Team Tools\Performance Tools". You can add it to your system's PATH variable to run this command directly. "vsinstr" stands for the Visual Studio instrumentation tool.

Since I have VS 2013, the path above uses 12.0; you can replace 12.0 with your version.

For a website or app, we may have multiple DLLs, so we need to run this for every DLL.

In the case of the console project, the exact commands are as follows.

vsinstr /coverage Calc.dll
vsinstr /coverage CalcConsole.exe

In the case of the website, we need to execute the following commands.

vsinstr /coverage C:\demo\compiled\bin\Calc.dll
vsinstr /coverage C:\demo\compiled\bin\App_Web_m55qrtmf.dll

Here is the screenshot of vsinstr output.


When you run vsinstr, it will add hooks to functions, conditional blocks, etc., which is why the DLL and EXE sizes increase. It will keep a backup of those files with a ".orig" extension.

Here is the screenshot of the files after running vsinstr. If you compare with the previous one, you can see that the DLL and EXE sizes have grown.


Start vsperfcmd:

Next, we will start vsperfcmd. "vsperfcmd" stands for the Visual Studio performance command-line tool. Following is the command to run vsperfcmd. The location of vsperfcmd is the same as vsinstr. If you set the path in an environment variable, it will be easier to execute them.

vsperfcmd /start:coverage /output:C:\demo\compiled\demo.coverage
  • /start:coverage – tells vsperfcmd to start in coverage mode.
  • /output – location of the .coverage file.

Here is the screenshot of output.


Running the application:

For the console project, you can start testing directly from the command line. E.g., say we want to test "+" and "-". Then use the following command to test.

CalcConsole.exe + 2 5

Here "CalcConsole.exe" is the output of the CalcConsole console project. You need to launch the exe from this location – "\CalcConsole\bin\debug\CalcConsole.exe".
For the website project, you need to start IIS Express, run the website from there, and start testing manually. I have added the path of IIS Express (C:\Program Files\IIS Express\) to my environment variables. Following is the command to run IIS Express.

iisexpress /path:C:\demo\compiled\ /port:8888 /clr:v4.0
  • /path – physical path to the website location
  • /port – port to listen on
  • /clr – CLR version to use

Here is the output of the command. To stop IIS Express, you need to press "q".


As you can see, I have tested only the "+" operator. There is only a POST request.

Stop application:

For website, press “q” to stop web server.

For the console project, just stop testing 🙂.

Stop vsperfcmd:

Now, we need to stop vsperfcmd. Here is the command to do that.

vsperfcmd /shutdown

This will collect the statistics about the testing and generate the "demo.coverage" file, using the name we gave at the start of the coverage process.

Here is the screenshot for this.


Convert code coverage file to XML:

This coverage file can be opened in the Ultimate or Premium edition of Visual Studio. Here is the screenshot for this.


The coverage file is binary, so it needs to be converted to a readable format to get a nice-looking HTML report. If you have one of the required editions of VS, you can open the file there and export it to XML.

Or you can use the following code snippet to convert it to an XML file.

 // args[0] - input path of the demo.coverage file
 // args[1] - output directory for the converted xml file
 string wspath = Path.Combine(args[1], "coverage.xml");
 CoverageInfo coverage = CoverageInfo.CreateFromFile(args[0]);
 DataSet data = coverage.BuildDataSet(null);
 data.WriteXml(wspath);

Here you need a dependency on "Microsoft.VisualStudio.Coverage.Analysis.dll", which is part of Visual Studio*. The location of the dependent DLLs is "C:\Program Files (x86)\Microsoft Visual Studio 12.0\Common7\IDE\PrivateAssemblies".

* – again, these are part of the required VS versions stated above.

I have written a simple utility using the above code, which converts the coverage file to XML. Here is the command to execute it.

coveragereport C:\demo\compiled\demo.coverage C:\demo\compiled\

This will create “coverage.xml” file there.

HTML report generator:

For this you need the open source tool "ReportGenerator", which will convert the XML report to a nice-looking HTML report. Here is the command to execute it.

reportgenerator -reports:C:\demo\compiled\coverage.xml -targetdir:C:\demo\compiled\report

Here is the screenshot for this.


If you check the directory, you will see it has created the HTML report files in "C:\demo\compiled\report". You can check the report by opening the HTML file (index.htm).

Now, you can see “SumEvaluator” has 100% coverage.

Here is the screenshot of html file.


Automating the code coverage process via batch script:

To automate this process, I have created a batch file, "start-coverage.bat", which takes one input: the path of the website. It will take the website location and calculate the code coverage. You just have to execute the batch file like this.

Assumption: The required paths have been set in the executing system's environment variables. The paths of "vsinstr", "vsperfcmd", the coverage-to-XML tool, and "reportgenerator" need to be set in the environment variables.

start-coverage.bat C:\demo
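The batch file itself is not listed in the article. A minimal sketch of what start-coverage.bat might contain, assuming the tools are on PATH and the site uses the same layout as the example (including the coveragereport utility name used above), could look like this:

```bat
@echo off
REM start-coverage.bat <website_path> -- a sketch, not the author's actual script.
set SITE=%1
set OUT=%SITE%\compiled

REM 1. Precompile the website into DLLs.
aspnet_compiler -p %SITE% -v / %OUT% -d

REM 2. Instrument every DLL in the compiled bin folder.
for %%f in (%OUT%\bin\*.dll) do vsinstr /coverage "%%f"

REM 3. Start the coverage monitor.
vsperfcmd /start:coverage /output:%OUT%\demo.coverage

REM 4. Run the site; test manually, then press 'q' to stop IIS Express.
iisexpress /path:%OUT% /port:8888 /clr:v4.0

REM 5. Stop the monitor, convert to XML, and build the HTML report.
vsperfcmd /shutdown
coveragereport %OUT%\demo.coverage %OUT%\
reportgenerator -reports:%OUT%\coverage.xml -targetdir:%OUT%\report
```

The steps simply chain the commands shown earlier in this article, in the same order.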


This process will simplify getting a coverage report from functional testing, be it manual or automated. The batch file makes it easier to get done. The only manual step here is copying the PDB file to the published location.

Here is the link to the code location. It contains the source code for the 3 projects + 1 more project (the tool that converts the coverage file to an XML file).


I have tried many helpful suggestions from Microsoft MSDN links to achieve this. Following are a few of them.

  • deploying the website to an IIS server and using "vsperfclrenv /globaltraceon" to start collecting coverage.
  • attaching w3wp to vsperfmon.
  • using vsperfreport to get a .vsp file and then converting it.

But unfortunately, none of them worked for me. That encouraged me to write this blog and help others who are looking for it.



SEO Friendly URL Routing in ASP.Net

I stumbled upon this while answering this question on Stack Overflow, so I thought of sharing how to do this in a web project.

Expectation / End result :

  1. If you have a Search.aspx file in the project, it will automatically become SEO friendly, i.e. you can browse it as "/Search" instead of "/Search.aspx".
  2. If you need to pass parameters to that page, say a product name, you can do that using "/Search/Kindle" instead of "/Search.aspx?productname=Kindle".

Steps to achieve this:


Install “Microsoft.AspNet.FriendlyUrls” from nuget package.

Open the Package Manager Console and type the following –

Install-Package Microsoft.AspNet.FriendlyUrls


Then it will automatically add the following to RouteConfig.cs.

public static class RouteConfig
{
   public static void RegisterRoutes(RouteCollection routes)
   {
      var settings = new FriendlyUrlSettings();
      settings.AutoRedirectMode = RedirectMode.Permanent;
      routes.EnableFriendlyUrls(settings);
   }
}


Add a webform named, say, "Search.aspx". Now if you browse to "/Search", it will hit "Search.aspx".

Now you are done with making SEO friendly URLS.

More Customization

Part – 1

If you want Search.aspx to be reachable as "Search-Product", you can do that using the following.

routes.MapPageRoute("", "Search-Product", "~/Search.aspx");

You need to add this to RouteConfig.cs, just after “routes.Enable…”

Now, if you hit this URL, it will hit Search.aspx.

Part – 2

Now, you may need to pass parameters to Search.aspx. You can do that; use the following line instead of the one above.

  routes.MapPageRoute("Find", "Search-product/{productname}", "~/Search.aspx");

To get the value of productname in Search.aspx, use Page.RouteData.Values["productname"] in Page_Load or any other event in Search.aspx.
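As a sketch, reading the route value in the code-behind of Search.aspx might look like the following; the Label control lblResult is a hypothetical name used only for illustration:

```csharp
protected void Page_Load(object sender, EventArgs e)
{
    // Read the {productname} segment captured by the friendly-URL route.
    string productName = Page.RouteData.Values["productname"] as string;

    // lblResult is an assumed Label control on the page, not from the article.
    lblResult.Text = string.IsNullOrEmpty(productName)
        ? "No product specified"
        : "Searching for: " + productName;
}
```

A request to "/Search-product/Kindle" would then populate productName with "Kindle".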


I have created an example using the code suggested above. Hit the following URL.

Output in code

Hope, it works for all those following this blog.


Enabling CORS in WCF


This is an intermediate example of WCF as a REST-based solution with CORS access enabled, so that the WCF service can be consumed from other domains without cross-domain issues. I will explain more about CORS in a later section, so hold on and read through the problem and solution. While developing a similar solution, I faced issues and did not find any helpful working article/blog, so I am posting this. Hope this will be helpful.


We develop a WCF service as a REST service and consume it using JavaScript and jQuery calls. This works well for a single-page application or a purely JavaScript-based application. You will never face any issue as long as the domain hosting the WCF service remains the same as the domain of the consumer. The issue arises when you start allowing other companies to consume the WCF service as a REST service. For example, you have some reporting service exposed as a REST service, and a web portal where it is consumed. Since it is purely REST based, you want to allow third-party companies to consume the REST service and show the same reporting on their websites. NOTE: In this case, the JS used to consume the WCF service will be sitting on the client's domain, but the WCF service will be on your domain, and this difference in domains will cause a cross-domain issue, i.e. the WCF invocation will throw an error.

Using the code

Before jumping straight into the code, I want to formally introduce REST and the CORS issue.

Representational state transfer (REST) is an abstraction of the architecture of the World Wide Web; more precisely, REST is an architectural style consisting of a coordinated set of architectural constraints applied to components, connectors, and data elements, within a distributed hypermedia system. REST ignores the details of component implementation and protocol syntax in order to focus on the roles of components, the constraints upon their interaction with other components, and their interpretation of significant data elements.–

Cross-origin resource sharing(CORS) User agents commonly apply same-origin restrictions to network requests. These restrictions prevent a client-side Web application running from one origin from obtaining data retrieved from another origin, and also limit unsafe HTTP requests that can be automatically launched toward destinations that differ from the running application’s origin. –

In this example, I will use the sample WCF service that Visual Studio provides. First, we will create a WCF REST service that can accept a POST request with an object as the parameter, and write a simple JS-based app to consume it. The WCF service will simply return a prefix + the received object value. As we are mainly focusing on enabling CORS, I have kept this very basic.

Then I will show you where exactly the error happens, and after that, the solution for overcoming the CORS issue.

Step#1. Let's create a WCF service project and define the service contract and operation contract as shown below.

[ServiceContract]
public interface IService1
{
    [OperationContract]
    [WebInvoke(UriTemplate = "/TestMethod", Method = "POST",
        BodyStyle = WebMessageBodyStyle.Bare, RequestFormat = WebMessageFormat.Json)]
    string TestMethod(CompositeType value);
}


Step#2 Definition of CompositeType is –

[DataContract]
public class CompositeType
{
    bool boolValue = true;
    string stringValue = "Hello ";

    [DataMember]
    public bool BoolValue
    {
        get { return boolValue; }
        set { boolValue = value; }
    }

    [DataMember]
    public string StringValue
    {
        get { return stringValue; }
        set { stringValue = value; }
    }
}
Step#3 Then, create service class. Following is the code for this.

public class Service1 : IService1
{
    public string TestMethod(CompositeType value)
    {
        return string.Format("You entered: {0}", value.StringValue);
    }
}
Step#4 Assume it is hosted somewhere and test with Fiddler whether it works. Following is the result.


Hurray!, it is working fine, see Result – 200 status.

Step#5 I have a simple JavaScript (this will be in an HTML file) to invoke this REST-based method. The HTML file is hosted at http://localhost. Source code for the JavaScript part of the HTML file:

$(document).ready(function () {
	$("button").click(function () {
		var data = $("#txt").val();
		var postdata = {};
		var data_obj = {"BoolValue" : "true", "StringValue": data};
		postdata["value"] = data_obj;

		var url = ""; // the WCF service endpoint URL (elided in the original)
		$.ajax({
			type: "POST",
			url: url,
			contentType: "application/json; charset=utf-8",
			data: JSON.stringify(postdata),
			dataType: "json",
			success: function (data) { console.log(data); },
			error: function (a, b, c) { console.log(a); }
		});
	});
});

—————–HTML Part————-

Enter something <input id="txt" type="text" /><button>Get WCF data</button>

Now, when I execute this JavaScript, it throws an error. Following is the error message from the browser console.

OPTIONS   test1.html:1
XMLHttpRequest cannot load No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin 'http://localhost' is therefore not allowed access. The response had HTTP status code 405.

Following is the browser request payload info.


And it does not work with a JavaScript call from another domain :(

If you look closely, we are invoking WCF with a "POST" request, but it shows the request method as "OPTIONS". This is because POST, PUT, and DELETE are unsafe methods, so a cross-domain request first makes a preflight request, i.e. an OPTIONS request. Only if that succeeds, meaning the server responds with an OK signal, will the browser make the actual POST request.

Also, note that it sends various request headers such as “Access-Control-Request-Headers”, “Access-Control-Request-Method”.

What does it mean? – We, as the WCF service developers, need to respond to that OPTIONS HTTP request.
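To illustrate, the preflight exchange the server must satisfy looks roughly like the following; the host name and service path are placeholders, and the exact request headers depend on the browser:

```http
OPTIONS /Service1.svc/TestMethod HTTP/1.1
Host: yourdomain.example
Origin: http://localhost
Access-Control-Request-Method: POST
Access-Control-Request-Headers: content-type

HTTP/1.1 200 OK
Access-Control-Allow-Origin: http://localhost
Access-Control-Allow-Methods: POST, PUT, DELETE
Access-Control-Allow-Headers: Content-Type, Accept
Access-Control-Max-Age: 1728000
```

Only after receiving a response with these Access-Control-* headers will the browser send the real POST.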

How to do that? – Add a global.asax file and add the following code to Application_BeginRequest. Following is the code snippet.

protected void Application_BeginRequest(object sender, EventArgs e)
{
    HttpContext.Current.Response.AddHeader("Access-Control-Allow-Origin", "http://localhost");
    if (HttpContext.Current.Request.HttpMethod == "OPTIONS")
    {
        HttpContext.Current.Response.AddHeader("Access-Control-Allow-Methods", "POST, PUT, DELETE");
        HttpContext.Current.Response.AddHeader("Access-Control-Allow-Headers", "Content-Type, Accept");
        HttpContext.Current.Response.AddHeader("Access-Control-Max-Age", "1728000");
        // End the preflight response here so the OPTIONS request does not
        // fall through to the WCF pipeline (which would reject it with 405).
        HttpContext.Current.Response.End();
    }
}

As you can see from the above, I am allowing the origin "http://localhost", so that if the JavaScript is placed on this domain and makes a call to the WCF service, it will be allowed. I have also added the response headers that we should send as part of the response to the OPTIONS request.

This is an extremely important decision: you can always use "*" for Access-Control-Allow-Origin, but for security reasons that is discouraged, because you are opening access for anyone to invoke your WCF service as a REST service from anywhere. Instead, you should know to whom you are providing CORS access and put only those domains here.

This is the basic approach; you can make these things configurable.
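For example, a configurable variant might read the allowed origins from appSettings and echo back only a matching origin. This is a sketch, not the article's code; the appSettings key "AllowedCorsOrigins" is an assumed name:

```csharp
protected void Application_BeginRequest(object sender, EventArgs e)
{
    // "AllowedCorsOrigins" is a hypothetical appSettings key, e.g.
    // <add key="AllowedCorsOrigins" value="http://localhost,http://partner.example" />
    string[] allowed = (ConfigurationManager.AppSettings["AllowedCorsOrigins"] ?? "")
        .Split(new[] { ',' }, StringSplitOptions.RemoveEmptyEntries);

    string origin = HttpContext.Current.Request.Headers["Origin"];
    if (origin != null && Array.IndexOf(allowed, origin) >= 0)
    {
        // Echo back only the matching origin instead of using "*".
        HttpContext.Current.Response.AddHeader("Access-Control-Allow-Origin", origin);
    }
}
```

Echoing back a whitelisted origin keeps the service closed to everyone except the domains you explicitly trust.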

So we are done with this setup; I am going to deploy this solution and see if it helps.

Conclusion :

Now, I am using the same JavaScript as above and have just hosted the changed WCF code in another virtual directory (testwcf). So when I issue the ajax request, see that it has made 2 requests: OPTIONS and POST. Refer to the screenshot below.


We will analyse the details of both requests. First, see what the OPTIONS request's response is and how it differs from the first attempt with the non-CORS WCF service.


As you can see, our WCF service now responds with all the required response headers, such as "Access-Control-Allow-*". Note: we set these in global.asax.

When this request succeeds, the browser makes the 2nd request, i.e. the actual POST. Let's check the details of that.


Now you can see that it actually sent the request payload, and from the response headers (see status code 200 OK) that it succeeded and has some content length.

Points of Interest

If you find it interesting and have some suggestions, put a comment; I am ready to interact with you.

Download source code here.
