This is the second part of a multi-part post series I’m going to do about Code Generation using C#. In the previous part I’ve shown how to invoke C# Scripts (and CSX files) from PowerShell and Visual Studio. In this part, I’ll create a simple extractor to read the physical schema of a SQL Server Database and save it as a JSON file.

Creating the Project

The first step is creating a .NET project.

Running CSX scripts does not require a csproj (or even Visual Studio), but having a csproj is very helpful for a few reasons:

  • You get IntelliSense (mostly in .cs files, not so much in .csx - so try to keep as much code as possible in .cs files)
  • You can check for compilation-errors
  • You can configure a Console App (or any other entry point) to launch and debug your scripts
  • csproj is required for automatically restoring NuGet packages (although if you already have the NuGet packages you may just use them without csproj)

All libraries which I’ll use in my generator (Dapper and Newtonsoft.Json) are compatible with netstandard2.0.
This means that the project can target either .NET Framework (net472, which used to be called “full framework”) or .NET Core (netcoreapp2.1 or netcoreapp3.1) - both should work.

NuGet Packages

I’ll use two third-party libraries: Dapper (a micro-ORM, for loading objects from SQL queries) and Newtonsoft.Json (a JSON library, used here just for serialization). All we need is their DLLs, but the easiest way to get them is to install their NuGet packages into our csproj (which means anyone who builds the csproj gets the packages restored automatically - hassle-free collaboration among multiple developers).
If you’re not using the Visual Studio IDE you can install those packages by running the commands below in the same folder as your csproj:

dotnet add package Dapper
dotnet add package Newtonsoft.Json

If you’re using Visual Studio you can use the Package Manager Console and run this:

Install-Package Dapper
Install-Package Newtonsoft.Json
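
For reference, a minimal csproj for this setup might look like the sketch below. This is just one option (I’m assuming a .NET Core 3.1 console project here; net472 would work as well, as discussed above), and the package versions are the same ones referenced later by the CSX script:

<Project Sdk="Microsoft.NET.Sdk">

  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <TargetFramework>netcoreapp3.1</TargetFramework>
  </PropertyGroup>

  <ItemGroup>
    <PackageReference Include="Dapper" Version="2.0.35" />
    <PackageReference Include="Newtonsoft.Json" Version="12.0.3" />
  </ItemGroup>

</Project>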

SQL Server Schema Objects

Next, I’ll create the classes that represent the physical SQL database. One important thing to remember is that C# scripting engines do NOT allow the use of namespaces, so these classes are declared without one.

SqlServerTable.cs:

using System;
using System.Collections.Generic;

public class SqlServerTable
{
    public string Database { get; set; }
    public string TableSchema { get; set; }
    public string TableName { get; set; }

    /// <summary>
    /// Can be "TABLE" or "VIEW"
    /// </summary>
    public string TableType { get; set; }

    public string TableDescription { get; set; }

    public List<SqlServerColumn> Columns { get; set; } = new List<SqlServerColumn>();

    /// <summary>
    /// FKs which point from THIS (Child) table to the primary key of OTHER (Parent) tables
    /// </summary>
    public List<SqlServerForeignKey> ForeignKeys { get; set; } = new List<SqlServerForeignKey>();

    /// <summary>
    /// FKs which point from OTHER (Child) tables to the primary key of THIS (Parent) table
    /// </summary>
    public List<SqlServerForeignKey> ChildForeignKeys { get; set; } = new List<SqlServerForeignKey>();

}

I’ll omit the other classes for brevity, but you can refer to the full code here (classes SqlServerColumn.cs, SqlServerForeignKey.cs, SqlServerForeignKeyMember.cs, SqlServerDatabaseSchema.cs).
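
To give an idea of their shape, here’s a rough sketch of SqlServerColumn. The property names below are inferred from how SqlServerSchemaReader uses the class further down, so treat this as an illustration rather than the exact class from the repository:

public class SqlServerColumn
{
    public string Database { get; set; }
    public string TableSchema { get; set; }
    public string TableName { get; set; }
    public string ColumnName { get; set; }

    /// <summary>
    /// SQL type as reported by the catalog views (e.g. "varchar", "int")
    /// </summary>
    public string SqlDataType { get; set; }

    /// <summary>
    /// CLR type name, filled in by SqlServerSchemaReader.GetClrType()
    /// </summary>
    public string ClrType { get; set; }
}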

Schema Reader

Finally, I’ll create the class which will read SQL metadata - SqlServerSchemaReader.cs:

using Dapper;
using Newtonsoft.Json;
using System;
using System.Data;
using System.IO;
using System.Linq;

public class SqlServerSchemaReader
{
  public Func<IDbConnection> CreateDbConnection { get; set; }

  public SqlServerSchemaReader(Func<IDbConnection> createDbConnection)
  {
    CreateDbConnection = createDbConnection;
  }

  public void ExportSchemaToJSON(string outputJsonSchema)
  {
    Console.WriteLine("Reading Database...");

    using (var cn = CreateDbConnection())
    {
      var tables = cn.Query<SqlServerTable>(@"
        SELECT 
          t.TABLE_CATALOG as [Database], 
          t.TABLE_SCHEMA as [TableSchema], 
          t.TABLE_NAME as [TableName], 
          CASE WHEN t.TABLE_TYPE='VIEW' THEN 'VIEW' ELSE 'TABLE' END as [TableType],
          ep.value as [TableDescription]
        FROM  INFORMATION_SCHEMA.TABLES t
        INNER JOIN sys.schemas sc ON t.TABLE_SCHEMA = sc.[name]
          ... full code omitted for brevity - please refer to: 
          ... https://github.com/Drizin/CodegenCS/tree/master/src/CodegenCS.SqlServer
      ").AsList();

      var allColumns = cn.Query<SqlServerColumn>(@"
          ... full code omitted for brevity... 
      ").AsList();

      var fks = cn.Query<SqlServerForeignKey>(@"
          ... full code omitted for brevity... 
      ").AsList();

      var fkCols = cn.Query<SqlServerForeignKeyMember>(@"
          ... full code omitted for brevity... 
      ").AsList();
      foreach (var fk in fks)
      {
        fk.Columns = fkCols.Where(c => 
            c.ForeignKeyConstraintName == fk.ForeignKeyConstraintName && 
            c.FKTableSchema == fk.FKTableSchema
        ).OrderBy(c => c.PKColumnOrdinalPosition).ToList();
      }

      foreach (var table in tables)
      {
        table.Columns = allColumns.Where(c => c.TableSchema == table.TableSchema && c.TableName == table.TableName).ToList();
        foreach(var column in table.Columns)
          column.ClrType = GetClrType(table, column);
        table.Columns.ForEach(c => { c.Database = null; c.TableSchema = null; c.TableName = null; });

        // We copy FKs and remove redundant properties of the parent object (table) which we're attaching this FK into
        table.ForeignKeys = Clone(fks.Where(fk => fk.FKTableSchema == table.TableSchema && fk.FKTableName == table.TableName).ToList());
        table.ForeignKeys.ForEach(fk => { fk.FKTableSchema = null; fk.FKTableName = null; });

        // We copy FKs and remove redundant properties of the parent object (table) which we're attaching this FK into
        table.ChildForeignKeys = Clone(fks.Where(fk => fk.PKTableSchema == table.TableSchema && fk.PKTableName == table.TableName).ToList());
        table.ChildForeignKeys.ForEach(fk => { fk.PKTableSchema = null; fk.PKTableName = null; });

      }

      SqlServerDatabaseSchema schema = new SqlServerDatabaseSchema()
      {
        LastRefreshed = DateTimeOffset.Now,
        Tables = tables,
      };

      Console.WriteLine($"Saving into {outputJsonSchema}...");
      File.WriteAllText(outputJsonSchema, Newtonsoft.Json.JsonConvert.SerializeObject(schema, Newtonsoft.Json.Formatting.Indented));
    }

    Console.WriteLine("Success!");
  }

  string GetClrType(SqlServerTable table, SqlServerColumn column)
  {
    string sqlDataType = column.SqlDataType;
    switch (sqlDataType)
    {
      case "bigint":
        return typeof(long).FullName;
      case "smallint":
        return typeof(short).FullName;
      case "int":
        return typeof(int).FullName;
      case "uniqueidentifier":
        return typeof(Guid).FullName;
      case "smalldatetime":
      case "datetime":
      case "datetime2":
      case "date":
      case "time":
        return typeof(DateTime).FullName;
      case "datetimeoffset":
        return typeof(DateTimeOffset).FullName;
      case "float":
        return typeof(double).FullName;
      case "real":
        return typeof(float).FullName;
      case "numeric":
      case "smallmoney":
      case "decimal":
      case "money":
        return typeof(decimal).FullName;
      case "tinyint":
        return typeof(byte).FullName;
      case "bit":
        return typeof(bool).FullName;
      case "image":
      case "binary":
      case "varbinary":
      case "timestamp":
        return typeof(byte[]).FullName;
      case "nvarchar":
      case "varchar":
      case "nchar":
      case "char":
      case "text":
      case "ntext":
      case "xml":
        return typeof(string).FullName;
      // Vendor-specific types
      case "hierarchyid":
        return "Microsoft.SqlServer.Types.SqlHierarchyId"; // requires Microsoft.SqlServer.Types.dll (EF or Dapper 1.34+)
      case "geography":
        return "Microsoft.SqlServer.Types.SqlGeography";  // requires Microsoft.SqlServer.Types.dll (EF or Dapper 1.32+)
      case "geometry":
        return "Microsoft.SqlServer.Types.SqlGeometry";  // requires Microsoft.SqlServer.Types.dll (EF or Dapper 1.33+)

      default:
        Console.WriteLine($"Unknown sqlDataType for {table.TableName}.{column.ColumnName}: {sqlDataType}");
        return null;
    }
  }

  public static T Clone<T>(T source)
  {
    var serialized = JsonConvert.SerializeObject(source);
    return JsonConvert.DeserializeObject<T>(serialized);
  }

}

CSX

The idea is that we should put as much code as possible into .cs files (and as little as possible into the CSX script), since IntelliSense and compile-time checks work much better in .cs than in CSX. The script file RefreshDatabaseSchema.csx will basically load libraries, define connection strings and paths, and invoke SqlServerSchemaReader.cs:

/// <summary>
/// This CSX Script will invoke SqlServerSchemaReader, which extracts the schema of SQL database and saves into a JSON file.
/// The easiest way to launch csi.exe (which is shipped with Visual Studio) to run this script is by using PowerShell script RefreshDatabaseSchema.ps1
/// You can do that from Visual Studio (see instructions in RefreshDatabaseSchema.ps1) or you can just execute "Powershell RefreshDatabaseSchema.ps1"
/// </summary>

// System libraries
#r "System.Data.dll"

// Load 3rd-party libraries by their relative paths, relative to "$Env:userprofile\.nuget\packages\"
#r "dapper\2.0.35\lib\netstandard2.0\Dapper.dll"
#r "newtonsoft.json\12.0.3\lib\netstandard2.0\Newtonsoft.Json.dll"

// CS files are better than CSX because IntelliSense and compile-time checks work better there.
#load "SqlServerTable.cs"
#load "SqlServerColumn.cs"
#load "SqlServerForeignKey.cs"
#load "SqlServerForeignKeyMember.cs"
#load "SqlServerDatabaseSchema.cs"
#load "SqlServerSchemaReader.cs"

using System;
using System.IO;
using System.Runtime.CompilerServices;
using System.Data;
using System.Data.SqlClient;

// Helpers to get the location of the current CSX script
public static string GetScriptPath([CallerFilePath] string path = null) => path;
public static string GetScriptFolder([CallerFilePath] string path = null) => Path.GetDirectoryName(path);


// location relative to the CSX script
string outputJsonSchema = Path.GetFullPath(Path.Combine(GetScriptFolder(), "AdventureWorksSchema.json")); 
string connectionString = @"Data Source=MYWORKSTATION\SQLEXPRESS;
                            Initial Catalog=AdventureWorks;
                            Integrated Security=True;";

Func<IDbConnection> connectionFactory = () => new SqlConnection(connectionString);
var reader = new SqlServerSchemaReader(connectionFactory);
reader.ExportSchemaToJSON(outputJsonSchema);

Powershell

Last, I’ll create a PowerShell script to invoke the CSX file. This is useful because the script can locate csi.exe across multiple possible install locations, and because it can pass csi.exe the location of the per-user NuGet packages, so the CSX can load libraries by relative paths without having to hard-code user-specific folders. RefreshDatabaseSchema.ps1:

# To Execute Powershell Scripts from Visual Studio:
# 1) Right-click the PS1 file - "Open With..."
# 2) Configure:
#      Program: Powershell.exe
#      Arguments: -noexit -File %1
#      Friendly Name: Execute PowerShell Script

# To execute CSX scripts you'll need CSI.EXE (C# REPL) which is shipped with Visual Studio
# but can also be installed by using the NuGet package Microsoft.Net.Compilers.Toolset - https://www.nuget.org/packages/Microsoft.Net.Compilers.Toolset/

# For more info about launching CSX scripts from PowerShell or from Visual Studio, check https://drizin.io/code-generation-csx-scripts-part1/

$dir = Split-Path $MyInvocation.MyCommand.Path 
$script = Join-Path $dir ".\RefreshDatabaseSchema.csx"


# Locate CSI.EXE by searching common paths
$csi = ( 
    "$Env:userprofile\.nuget\packages\microsoft.net.compilers.toolset\3.6.0\tasks\net472\csi.exe",
    "$Env:programfiles (x86)\Microsoft Visual Studio\2019\Enterprise\MSBuild\Current\Bin\Roslyn\csi.exe",
    "$Env:programfiles (x86)\Microsoft Visual Studio\2019\Professional\MSBuild\Current\Bin\Roslyn\csi.exe",
    "$Env:programfiles (x86)\Microsoft Visual Studio\2019\Community\MSBuild\Current\Bin\Roslyn\csi.exe",
    "$Env:programfiles (x86)\Microsoft Visual Studio\2017\Enterprise\MSBuild\15.0\Bin\Roslyn\csi.exe",
    "$Env:programfiles (x86)\Microsoft Visual Studio\2017\Professional\MSBuild\15.0\Bin\Roslyn\csi.exe",
    "$Env:programfiles (x86)\Microsoft Visual Studio\2017\Community\MSBuild\15.0\Bin\Roslyn\csi.exe"
) | Where-Object { Test-Path $_ } | Select-Object -first 1

if (!$csi)
{
    Write-Host "---------------------------------------" -for red
    Write-Host "Can't find csi.exe" -for red
    Write-Host "Please fix search paths above, or install NuGet Microsoft.Net.Compilers.Toolset" -for red
    Write-Host "---------------------------------------" -for red
    Exit 1
}


$stopwatch = [System.Diagnostics.Stopwatch]::StartNew()

Write-host "Starting csi.exe $script ..." -for yellow
& $csi /lib:"$Env:userprofile\.nuget\packages\" $script

Write-Host "Finished in $($stopwatch.Elapsed.TotalMilliSeconds) milliseconds"

# Since I configured "-noexit" parameter in Visual Studio I don't need this
#if ($host.Name -notmatch 'ISE') { Write-Host -NoNewLine "(Just press Enter to exit)" -for cyan; read-host; }  

Running Powershell:
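
The console output should look roughly like this (the messages come from the PowerShell script and from the Console.WriteLine calls in SqlServerSchemaReader; paths and timing will obviously vary):

Starting csi.exe ...\RefreshDatabaseSchema.csx ...
Reading Database...
Saving into ...\AdventureWorksSchema.json...
Success!
Finished in ... milliseconds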

Result file AdventureWorksSchema.json:
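
The exact contents depend on your database; the JSON follows the shape of the classes above (a SqlServerDatabaseSchema with a list of tables), roughly like this illustrative excerpt (table names are just AdventureWorks examples):

{
  "LastRefreshed": "...",
  "Tables": [
    {
      "Database": "AdventureWorks",
      "TableSchema": "Person",
      "TableName": "Person",
      "TableType": "TABLE",
      "TableDescription": "...",
      "Columns": [ ... ],
      "ForeignKeys": [ ... ],
      "ChildForeignKeys": [ ... ]
    },
    ...
  ]
}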

Full source code is available here: https://github.com/Drizin/CodegenCS/tree/master/src/CodegenCS.SqlServer


This is the first part of a multi-part post series I’m going to do about Code Generation using C#.

I’ve mentioned before that I’ve been a fan of code generation for a long time. I have used CodeSmith Generator, MyGeneration Code Generator, and in the past few years I’ve been using T4 templates.

The main advantage of code generation (which includes DB scaffolding, also called “database first” by some ORMs) is that it does the work for you, is repeatable, and is less error-prone than writing everything by hand. So it’s mostly about productivity and consistency. If you’re writing repetitive code by hand, you’re stealing from your employer or clients. And if you have a data-oriented application, you almost certainly have repetitive code to some degree.

The right tool for the job

Until recently the gold standard for code generation in the Microsoft stack was T4 templates, the out-of-the-box tool provided by Microsoft and shipped with Visual Studio. The major problem with T4 is that it has terrible syntax, terrible tooling and debugging support, and can get as ugly as this:

C# Script Files (CSX files)

In the search for a modern code-generation tool I’ve explored Razor templates, Mustache and Mustache-like templates (like DotLiquid and Handlebars), and others. I’ve noticed a growing popularity of doing code generation in pure C# (Roslyn-based or not) - which makes sense, since you get strong typing, compile-time checking, full IDE support (with debugging), cross-platform support (.NET Core), and full access to the .NET ecosystem (SQL Server clients, Dapper, Newtonsoft.Json, etc.). You get a full-fledged language instead of a templating engine which only offers a subset of the underlying language’s features.

C# Script Files (CSX) were introduced with Roslyn and can be executed by Roslyn or by other compatible cross-platform scripting engines like dotnet-script, or even with the C# REPL csi.exe. Those scripting engines have some limitations (like not supporting namespaces), but they let us invoke virtually any C# code, with essential features like loading external assemblies, loading code organized across multiple source files, etc. And it’s much easier than using PowerShell to invoke C#.

C# REPL (CSI.EXE)

Visual Studio ships with a command line REPL called CSI that can be used to run .csx scripts.

A sample CSX script (MyScript.csx) can be as simple as this:

// CSI.EXE requires absolute paths for loading external assemblies:
#r "C:\Users\drizin\.nuget\packages\newtonsoft.json\12.0.3\lib\netstandard2.0\Newtonsoft.Json.dll"
#r "C:\Users\drizin\.nuget\packages\dapper\2.0.35\lib\netstandard2.0\Dapper.dll"

#load "File1.cs"
#load "File2.cs"
#load "MyProgram.cs"

new MyProgram().MyMethod();
Console.WriteLine("Hello Code-Generation!");

You can run CSI.EXE from Visual Studio Developer Command Prompt (csi MyScript.csx):

Invoking C# REPL (CSI running CSX scripts) in PowerShell

To execute using PowerShell you just have to know the location of csi.exe, which comes with Visual Studio by default but can also be installed through the NuGet package Microsoft.Net.Compilers.Toolset (warning: as described here, the installation script for the package changes the .csproj to use the compiler provided by the package - this is likely not what you want, so make sure to roll back such changes if they are made during installation).

Sample Powershell RunMyScript.ps1:

# Locate CSI.EXE by searching common paths
$csi = ( 
    "$Env:userprofile\.nuget\packages\microsoft.net.compilers.toolset\3.6.0\tasks\net472\csi.exe",
    "$Env:programfiles (x86)\Microsoft Visual Studio\2019\Enterprise\MSBuild\Current\Bin\Roslyn\csi.exe",
    "$Env:programfiles (x86)\Microsoft Visual Studio\2019\Professional\MSBuild\Current\Bin\Roslyn\csi.exe",
    "$Env:programfiles (x86)\Microsoft Visual Studio\2019\Community\MSBuild\Current\Bin\Roslyn\csi.exe",
    "$Env:programfiles (x86)\Microsoft Visual Studio\2017\Enterprise\MSBuild\15.0\Bin\Roslyn\csi.exe",
    "$Env:programfiles (x86)\Microsoft Visual Studio\2017\Professional\MSBuild\15.0\Bin\Roslyn\csi.exe",
    "$Env:programfiles (x86)\Microsoft Visual Studio\2017\Community\MSBuild\15.0\Bin\Roslyn\csi.exe"
) | Where-Object { Test-Path $_ } | Select-Object -first 1

$dir = Split-Path $MyInvocation.MyCommand.Path 
$script = Join-Path $dir "MyScript.csx"

& $csi $script

And you can also configure PowerShell to be executed directly from Visual Studio:

PS: If you haven’t run unsigned PowerShell scripts before, you’ll have to run PowerShell as Administrator and execute Set-ExecutionPolicy -ExecutionPolicy Unrestricted. Don’t forget to do this both for Windows PowerShell (C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe) and for Windows PowerShell (x86) (C:\Windows\SysWOW64\WindowsPowerShell\v1.0\powershell.exe), which is the one invoked from inside the Visual Studio IDE.

Full-path Assembly References

One of the problems with CSI is that the #r directive (for loading assembly references) doesn’t accept NuGet-like references or environment variables, so all assembly references must be specified with full paths. This is not a showstopper, but it’s a little annoying since it makes it harder to share code among multiple developers. If you’re using PowerShell (as described above), you can pass the per-user NuGet folder to csi.exe and use the #r directive with relative paths, like this:

# PowerShell: call csi.exe, telling it that assemblies referenced by the #r directive should be searched in the local (per-user) NuGet folder
& $csi /lib:"$Env:userprofile\.nuget\packages\" $script

// CSX: libraries can now be loaded by their relative paths under "$Env:userprofile\.nuget\packages\"
#r "dapper\2.0.35\lib\netstandard2.0\Dapper.dll"
//...
new MyProgram().MyMethod();
Console.WriteLine("Hello Code-Generation!");

Assuming the CSX file and the associated CS files are all in a Visual Studio project, you can rely on NuGet package references (PackageReference) in the project itself: when each developer builds the project, the NuGet packages are restored automatically, so your script will find the required libraries in the per-user NuGet location (e.g. C:\Users\drizin\.nuget\packages\dapper\etc..)
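
As a sketch, the project would carry package references like the ones below (package names and versions taken from the example above; your versions may differ):

<ItemGroup>
  <PackageReference Include="Newtonsoft.Json" Version="12.0.3" />
  <PackageReference Include="Dapper" Version="2.0.35" />
</ItemGroup>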

dotnet-script

dotnet-script is a nice alternative to CSI because it makes it easier to reference NuGet packages by name/version. Additionally, in my tests the dotnet-script engine was 5-10x faster than csi.exe.

A sample dotnet-script CSX script (MyScript2.csx) can be as simple as this:

// dotnet-script is much faster and allows us to reference nuget packages without specifying full path:
#r "nuget: Newtonsoft.Json, 12.0.3"
#r "nuget: Dapper, 2.0.35"

#load "File1.cs"
#load "File2.cs"
#load "MyProgram.cs"

new MyProgram().MyMethod();
Console.WriteLine("Hello Code-Generation!");

The easiest way of using dotnet-script is installing it as a global tool
(dotnet tool install -g dotnet-script), and then you can launch CSX scripts by running:
dotnet script MyScript2.csx

You may also launch dotnet-script using Powershell:

RunMyScript.ps1:

$dir = Split-Path $MyInvocation.MyCommand.Path 
$script = Join-Path $dir "MyScript2.csx"

& dotnet script $script

# if you don't have dotnet-script installed globally (as a dotnet tool) but you have the local NuGet package, you may prefer this:
# $dotnetscript = "$Env:userprofile\.nuget\packages\dotnet-script\0.53.0\tools\netcoreapp3.1\any\dotnet-script.dll"
# & dotnet $dotnetscript $script

Or, if you don’t have dotnet-script installed globally (as a tool) and don’t want to use PowerShell, you can just launch dotnet-script from the Command Prompt:

So cool and so easy, isn’t it?

In my next post, I’ll create a SQL schema extractor, and this schema will be used in the next posts to build code generators for POCOs, EF Core, and other related services.

Enjoy!


ASP.NET Web API by default does not allow two controllers with the same name, even if they are in different namespaces.

Basically, this is what happens:

  • IHttpControllerSelector is resolved (by default it’s DefaultHttpControllerSelector) to call IHttpControllerSelector.GetControllerMapping()
  • IHttpControllerSelector.GetControllerMapping() will resolve (through a HttpControllerTypeCache cache manager) IHttpControllerTypeResolver to call IHttpControllerTypeResolver.GetControllerTypes().
  • By default IHttpControllerTypeResolver is DefaultHttpControllerTypeResolver, in which GetControllerTypes() will loop through all loaded assemblies and will return all types which implement IHttpController and which end with "Controller".
  • Going back… IHttpControllerSelector.GetControllerMapping() will return a map of all controllers, but it will ignore the namespace, strip the "Controller" suffix, and also ignore duplicates (duplicates are moved into another dictionary which is used only to throw an exception saying there are multiple matching controllers).
  • IHttpControllerSelector.SelectController is invoked on each request and also uses a cached map of controllers (cached by that same cache manager HttpControllerTypeCache); it looks up the controller name from the route (like {controller}) and chooses the correct controller.

I’ve found some solutions to the problem of duplicated names, but most sounded too complex to me, since they were based on tagging the namespace of each controller as a DataToken applied to the registered routes.

Since I’m using attribute-based routes, but also have (for legacy reasons) some non-attribute-based routes (regular, explicitly defined routes), I ended up developing my own solution.

The NamespaceHttpControllerSelector below allows the use of multiple controllers with the same name (obviously in different namespaces) by prefixing the controller identifier with its namespace (if the controller has a [RoutePrefix] attribute). Controllers without [RoutePrefix] are registered as usual, without their namespace, and are matched by the {controller} route token as usual.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Net;
using System.Net.Http;
using System.Web.Http;
using System.Web.Http.Controllers;
using System.Web.Http.Dispatcher;
using System.Web.Http.Routing;

/// <summary>
/// Allows the use of multiple controllers with same name (obviously in different namespaces) 
/// by prepending controller identifier with their namespaces (if they have [RoutePrefix] attribute).
/// Allows attribute-based controllers to be mixed with explicit-routes controllers without conflicts.
/// </summary>
public class NamespaceHttpControllerSelector : DefaultHttpControllerSelector
{
    // Some sample references: 
    // https://www.strathweb.com/2013/08/customizing-controller-discovery-in-asp-net-web-api/
    // https://github.com/WebApiContrib/WebAPIContrib/blob/master/src/WebApiContrib/Selectors/NamespaceHttpControllerSelector.cs 
    // https://github.com/umbraco/Umbraco-CMS/blob/7.2.0/src/Umbraco.Web/WebApi/NamespaceHttpControllerSelector.cs
    // https://github.com/mono/aspnetwebstack/blob/master/src/System.Web.Http/Dispatcher/DefaultHttpControllerSelector.cs
    // https://github.com/mono/aspnetwebstack/blob/master/src/System.Web.Http/Dispatcher/HttpControllerTypeCache.cs

    private HttpConfiguration _configuration;
    private IHttpControllerSelector _previousSelector;
    private readonly Lazy<Dictionary<string, HttpControllerDescriptor>> _controllers;

    public NamespaceHttpControllerSelector(HttpConfiguration httpConfiguration) : base(httpConfiguration)
    {
        _configuration = httpConfiguration;
        _previousSelector = _configuration.Services.GetService(typeof(IHttpControllerSelector)) as IHttpControllerSelector;
        _controllers = new Lazy<Dictionary<string, HttpControllerDescriptor>>(InitializeControllerDictionary);
    }

    public override IDictionary<string, HttpControllerDescriptor> GetControllerMapping()
    {
        return _controllers.Value; // just cache the list of controllers, so we load only once at first use
    }

    /// <summary>
    /// The regular DefaultHttpControllerSelector.InitializeControllerDictionary() does not 
    ///  allow 2 controller types to have same name even if they are in different namespaces (they are ignored!)
    /// 
    /// This method will map ALL controllers, even if they have same name, 
    /// by prepending controller names with their namespaces if they have [RoutePrefix] attribute
    /// </summary>
    /// <returns></returns>
    private Dictionary<string, HttpControllerDescriptor> InitializeControllerDictionary()
    {
        IAssembliesResolver assembliesResolver = _configuration.Services.GetAssembliesResolver();
        IHttpControllerTypeResolver controllersResolver = _configuration.Services.GetHttpControllerTypeResolver(); 
        ICollection<Type> controllerTypes = controllersResolver.GetControllerTypes(assembliesResolver); 

        // simple alternative? in case you want to map maybe "UserAPI" instead of "UserController"
        // var controllerTypes = System.Reflection.Assembly.GetExecutingAssembly().GetTypes()
        // .Where(t => t.IsClass && t.IsVisible && !t.IsAbstract && typeof(IHttpController).IsAssignableFrom(t));

        var controllers = new Dictionary<string, HttpControllerDescriptor>(StringComparer.OrdinalIgnoreCase);
        foreach (Type t in controllerTypes)
        {
            var controllerName = t.Name;

            // ASP.NET by default removes "Controller" suffix, let's keep that convention
            if (controllerName.EndsWith(ControllerSuffix))
                controllerName = controllerName.Remove(controllerName.Length - ControllerSuffix.Length);

            // For controllers with [RoutePrefix] we'll register full name (namespace+name). 
            // Those routes when matched they provide the full type name, so we can match exact controller type.
            // For other controllers we'll register as usual
            bool hasroutePrefixAttribute = t.GetCustomAttributes(typeof(RoutePrefixAttribute), false).Any();
            if (hasroutePrefixAttribute)
                controllerName = t.Namespace + "." + controllerName;

            if (!controllers.Keys.Contains(controllerName))
                controllers[controllerName] = new HttpControllerDescriptor(_configuration, controllerName, t);
        }
        return controllers;
    }

    /// <summary>
    /// For "regular" MVC routes we will receive the "{controller}" value in route, and we lookup for the controller as usual.
    /// For attribute-based routes we receive the ControllerDescriptor which gives us 
    /// the full name of the controller as registered (with namespace), so we can version our APIs
    /// </summary>
    /// <param name="request"></param>
    /// <returns></returns>
    public override HttpControllerDescriptor SelectController(HttpRequestMessage request)
    {
        HttpControllerDescriptor controller;
        IDictionary<string, HttpControllerDescriptor> controllers = GetControllerMapping();
        IDictionary<string, HttpControllerDescriptor> controllersWithoutAttributeBasedRouting =
            GetControllerMapping().Where(kv => !kv.Value.ControllerType
               .GetCustomAttributes(typeof(RoutePrefixAttribute), false).Any())
            .ToDictionary(kv => kv.Key, kv => kv.Value);

        var route = request.GetRouteData();

        // regular routes are registered explicitly using {controller} route - and in case we'll match by the controller name,
        // as usual ("CourseController" is looked up in dictionary as "Course").
        if (route.Values != null && route.Values.ContainsKey("controller"))
        {
            string controllerName = (string)route.Values["controller"];
            if (controllersWithoutAttributeBasedRouting.TryGetValue(controllerName, out controller))
                return controller;
        }

        // For attribute-based routes, the matched route has subroutes, 
        // and we can get the ControllerDescriptor (with the exact name that we defined - with namespace) associated, to return correct controller
        if (route.GetSubRoutes() != null)
        {
            route = route.GetSubRoutes().First(); // any sample route, we're just looking for the controller

            // Attribute Routing registers a single route with many subroutes, and we need to inspect any action of the route to get the controller
            if (route.Route != null && route.Route.DataTokens != null && route.Route.DataTokens["actions"] != null)
            {
                // if it wasn't for attribute-based routes which give us the ControllerDescriptor for each route, 
                // we could pick the correct controller version by inspecting version in accepted mime types in request.Headers.Accept
                string controllerTypeFullName = ((HttpActionDescriptor[])route.Route.DataTokens["actions"])[0].ControllerDescriptor.ControllerName;
                if (controllers.TryGetValue(controllerTypeFullName, out controller))
                    return controller;
            }
        }

        throw new HttpResponseException(HttpStatusCode.NotFound);
    }

}

To register this, replace the IHttpControllerSelector service before auto-mapping your routes (in your Startup.cs, RouteConfig.cs, Global.asax.cs, or similar):

config.Services.Replace(typeof(IHttpControllerSelector), new NamespaceHttpControllerSelector(config));
config.MapHttpAttributeRoutes();
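
For example (hypothetical controllers, just to illustrate the idea - the names and route prefixes below are mine, not from the original code), two classes named UsersController in different namespaces can now coexist, because each one has a [RoutePrefix] and is therefore registered under its namespace-qualified name:

using System.Web.Http;

namespace MyApp.Controllers.V1
{
    [RoutePrefix("api/v1/users")]
    public class UsersController : ApiController
    {
        // Matched through the attribute route; the route carries the controller descriptor,
        // so SelectController resolves it by its registered full name "MyApp.Controllers.V1.Users"
        [Route("")]
        public IHttpActionResult Get()
        {
            return Ok("users v1");
        }
    }
}

namespace MyApp.Controllers.V2
{
    [RoutePrefix("api/v2/users")]
    public class UsersController : ApiController
    {
        [Route("")]
        public IHttpActionResult Get()
        {
            return Ok("users v2");
        }
    }
}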