Windows Subsystem for Linux (WSL)

If you’re a developer who needs to compile Linux binaries, then the Windows Subsystem for Linux (WSL) is for you. WSL is a tool for developers and sysadmins who need Linux interoperability on Windows.

The most important thing to understand about WSL is that it isn’t based on virtual machines or containers. When you download a supported Linux distro from the Microsoft Store, no virtual machine is created. Instead, WSL provides a layer that maps Linux kernel system calls onto Windows kernel system calls, which allows unmodified Linux binaries to run on Windows. WSL also exposes Windows services, like the filesystem and networking, as devices that Linux can access.

Instead of using a VM or container, WSL virtualizes a Linux kernel interface on top of the Windows kernel. This means that running WSL requires only a minimal amount of RAM, and when not in use, the WSL driver isn’t loaded into memory, making it much more efficient than a VM- or container-based solution.

Installing WSL and Ubuntu on Windows Server

Run this in an elevated PowerShell prompt (you’ll have to reboot afterwards):

Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Windows-Subsystem-Linux

Then run this in an elevated PowerShell prompt to download and install Ubuntu 18.04:

curl.exe -L -o ubuntu-1804.appx https://aka.ms/wsl-ubuntu-1804
Rename-Item ubuntu-1804.appx ubuntu-1804.zip
Expand-Archive ubuntu-1804.zip ubuntu1804

cd ubuntu1804
.\ubuntu1804.exe 

You’ll be asked to choose a Linux username and password.
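Once the installer finishes and you’re at the new Ubuntu prompt, you can sanity-check that you really are talking to a Linux userland (a quick, optional check; /etc/os-release is present on any modern distro):

```shell
# Confirm the shell sees a Linux kernel personality and a Linux userland,
# even though the host OS is Windows.
uname -s                  # prints: Linux
head -1 /etc/os-release   # shows the distro name, e.g. NAME="Ubuntu"
```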

Installing Redis on Ubuntu (under Windows Server WSL)

Invoke a bash shell as superuser (the next commands require root):

sudo bash

(you’ll be asked for the password of the Linux user you created earlier)

Update the package lists, upgrade installed packages, and install Redis:

apt-get update && apt-get upgrade
apt-get install redis-server

Configuring Redis for external access (Optional)

If you’ll only use Redis inside your internal protected network, you don’t need this.

WSL only provides a translation layer between Linux apps and the Windows kernel, so some core parts of a regular Ubuntu system (including networking) are just not there. Because WSL translates Linux system calls into Windows ones, Ubuntu network traffic flows through the exact same TCP/IP stack as Windows traffic.

This means that to open your Redis server to other machines (or to the public internet) you just have to configure Redis to listen on the correct interfaces and open the appropriate firewall ports (there’s no need for “port forwarding”, since this is not a VM with its own network interfaces).

By default Redis binds only to the loopback interfaces. If you open /etc/redis/redis.conf (e.g. by running nano /etc/redis/redis.conf) you’ll find a line like bind 127.0.0.1 ::1, which means Redis listens only on the IPv4 loopback (127.0.0.1) and the IPv6 loopback (::1). Change it to bind 0.0.0.0 ::1 to make Redis listen on all interfaces (including public IPs), or, if it’s only for an internal network, list the internal IPs on which this server should listen.
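If you’d rather script this change than edit the file by hand, a simple sed substitution works. The sketch below demonstrates it on a sample file; for the real config, point the same sed line at /etc/redis/redis.conf (with sudo):

```shell
# Demonstrate the substitution on a sample copy of the stock bind line;
# run the same sed against /etc/redis/redis.conf (with sudo) for real.
printf 'bind 127.0.0.1 ::1\n' > redis.conf.sample
sed -i 's/^bind 127\.0\.0\.1 ::1$/bind 0.0.0.0 ::1/' redis.conf.sample
grep '^bind' redis.conf.sample   # prints: bind 0.0.0.0 ::1
```

Restart the service afterwards (sudo service redis-server restart) so the new bind address takes effect.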

And in case you’re exposing it to the internet you’ll also have to open the port in the Windows Firewall:

netsh advfirewall firewall add rule name="Redis Port 6379" dir=in action=allow protocol=TCP localport=6379

Authentication

If you’re exposing your server to the public internet you’ll have to configure a password, because by default Redis refuses external connections when no password is set.
In /etc/redis/redis.conf you just have to uncomment the requirepass line and set a very strong password next to it (Redis is so fast that brute-force attacks against weak passwords run very quickly).
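This edit can be scripted the same way. The sketch below shows the substitution on a sample line (the stock config usually ships with the line commented out as # requirepass foobared; the password below is a placeholder, choose your own):

```shell
# Demonstrate uncommenting requirepass on a sample line; run the same
# sed against /etc/redis/redis.conf (with sudo) for the real file.
printf '# requirepass foobared\n' > redis.conf.sample
sed -i 's/^# requirepass .*/requirepass Use-Your-0wn-Very-Strong-Password/' redis.conf.sample
grep '^requirepass' redis.conf.sample   # prints: requirepass Use-Your-0wn-Very-Strong-Password
```

As before, restart the service so the password takes effect.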

Start and test server

Start (or restart) the service:

sudo service redis-server restart

To test the service, run redis-cli and use the following commands (set should reply OK, and get should return "Oscar"):

AUTH yourstrongpassword  (if you have enabled authentication, above)
set user:1 "Oscar"
get user:1

To stop your server you can use:

sudo service redis-server stop


We all love the Dapper micro-ORM.

Many developers have realized that although Entity Framework offers some features that Dapper doesn’t, Dapper is still more flexible (and much, much faster) than EF. Many have also realized that picking the right tool for the job sometimes involves using EF for some things and Dapper for others. And one usage where Dapper is king is building dynamic queries.

Dynamic SQL

Old-school developers (those who have been coding since the late 90s or early 2000s, probably in VB6, ASP 3.0, or PHP) will probably remember code like this:

string sql = "SELECT * FROM [Product] WHERE 1=1";

if (!string.IsNullOrEmpty(Request["ProductName"]))
   sql += " AND Name LIKE '" + Request["ProductName"].Replace("'", "''") + "'"; // this replace!
if (!string.IsNullOrEmpty(Request["SubCategoryId"]))
    sql += " AND ProductSubcategoryID = " + Request["SubCategoryId"].Replace("'", "''");  // this replace!

// here is where you pray that you've correctly sanitized inputs against sql-injection
var products = cn.Query<Product>(sql);

Basically we were all sending dynamic SQL statements directly to our databases. And since user-input can be malicious (see below), we had to manually sanitize user-input to avoid SQL-Injection attacks.

Hello Bobby Tables

If you weren’t sanitizing your input correctly, you were at serious risk. SQL injection is probably the most popular web-hacking technique. If you forgot to sanitize something like an “id” querystring, you could be opening a huge vulnerability, possibly exposing your whole database not only for reading but also for modification.

Parameterized SQL

Even if you’re old school (hey, VB6 guy, I’m looking at you again) you probably know that dynamically building SQL like that is not only error-prone (vulnerable to SQL injection) but also doesn’t benefit from SQL Server’s cached execution plans. You probably used some kind of code-generation tool which helped you build code like this (maybe to invoke a stored procedure, since 3-tier architecture and “Windows DNA” were all the hype in the early 2000s):

Dim strSQL As String
Dim cmd As New ADODB.Command

strSQL = "UPDATE MyTable SET " & vbNewLine _
    & " NEEDS_ID     = @NEEDS_ID, " & vbNewLine _
    & " OBJ_ID       = @OBJ_ID, " & vbNewLine _
    & " OBJ_COMMENTS = @OBJ_COMMENTS, " & vbNewLine _
    & " TIME21_ID    = @TIME21_ID, " & vbNewLine _
    & " WHERE ID     = @WHEREID"

With cmd
    .ActiveConnection = Cn
    .CommandText = strSQL
    .Parameters.Append .CreateParameter("@NEEDS_ID", adInteger, adParamInput, 2, 12)
    .Parameters.Append .CreateParameter("@OBJ_ID", adInteger, adParamInput, 2, 23)
    .Parameters.Append .CreateParameter("@OBJ_COMMENTS", adVarChar, adParamInput, 250, "Some text")
    .Parameters.Append .CreateParameter("@TIME21_ID", adInteger, adParamInput, 2, 34)
    .Parameters.Append .CreateParameter("@ID", adInteger, adParamInput, 18, 456)
    .Execute
End With 

I hope millennial developers are not too surprised to learn that parameterized SQL already existed in the past century.

Back to modern software

Time flew by, Java and .NET emerged (and maybe submerged a little?), Reflection, Bytecode emission, Full-ORMs, Micro-ORMs, 3-tier was deprecated in favor of dozens of modern architectures, and now database access is much easier, right?

Now we don’t have to manually describe each one of our parameters to SQL. Our favorite ORMs will do that for us.

Dapper Example:

var dynamicParams = new DynamicParameters();

string sql = "SELECT * FROM [Product] WHERE 1=1";

if (productName != null)
{
    sql += " AND Name LIKE @productName"; 
    dynamicParams.Add("productName", productName);
}


if (subCategoryId != null)
{
    sql += " AND ProductSubcategoryID = @subCategoryId"; 
    dynamicParams.Add("subCategoryId", subCategoryId);
}

var products = cn.Query<Product>(sql, dynamicParams);

Don’t Repeat Yourself

The example above makes me a little upset for two reasons:

  • I have to pass productName twice, both to the sql string and to the dynamicParams dictionary, and their names must match.
  • I have to keep those two variables (the SQL statement and the list of parameters) separated, even though they are closely related to each other.

String Interpolation Internals

String interpolation was introduced in C# 6, back in 2015.

Instead of doing code like this:

string name = "Rick";
int accesses = 10;
string output = string.Format("{0}, you've been here {1:n0} times.",
                              name, accesses);

You could do like this:

string name = "Rick";
int accesses = 10;
string output = $"{name}, you've been here {accesses:n0} times.";

Internally, when you write an interpolated string (starting with $) and the target type is FormattableString, the compiler creates a FormattableString instance, which contains both the format template (as if it were {0}, you've been here {1:n0} times.) and the list of arguments (string name and int accesses).

If your method expects a regular string, the interpolated string is simply converted into a regular string, and you get the same behavior as if you had passed a string.Format result to your method. However, if your method expects a FormattableString, then you have access to the format and the arguments isolated from each other.

FormattableString can be useful for example if we wanted to build parameterized SQL statements while letting users build their strings as if it was just regular string concatenation:

QueryBuilder ParseSQL(FormattableString query)
{
    QueryBuilder myWrapper = new QueryBuilder();

    string dapperSql = query.Format;
    // replace {0} by "@p0", 
    // replace {1} by "@p1", etc..
    myWrapper.SQL = dapperSql;

    var dapperArgs = new Dapper.DynamicParameters();
    var args = query.GetArguments();
    // dapperArgs.Add("p0", args[0]);
    // dapperArgs.Add("p1", args[1]); ...
    myWrapper.Arguments = dapperArgs;

    return myWrapper;
    // now anyone can use Dapper like  
    // var pocos = connection.Query<POCO>(myWrapper.SQL, myWrapper.Arguments);
}

DapperQueryBuilder

Based on the idea above, I’ve created DapperQueryBuilder - a simple wrapper around Dapper which allows us to pass SQL parameters using string interpolation.

You can code like this:

var query = cn.QueryBuilder($"SELECT * FROM [Product] WHERE 1=1");

if (productName != null)
    query.Append($"AND Name LIKE {productName}"); 

if (subCategoryId != null)
    query.Append($"AND ProductSubcategoryID = {subCategoryId}"); 

var products = query.Query<Product>(); 

Although it looks like you’re just building a dynamic SQL (with values as inline literals), actually what you get is parameterized SQL.

In this case, query will have this underlying statement: SELECT * FROM [Product] WHERE 1=1 AND Name LIKE @p0 AND ProductSubcategoryID = @p1, and it will also hold the parameters @p0 = productName and @p1 = subCategoryId.

To sum up: instead of using Dapper’s .Query<T> extension (which extends IDbConnection and accepts a SQL string and a list of parameters), you use the QueryBuilder() extension, which creates a QueryBuilder where you can dynamically (and in a single statement) add new parameters along with the associated SQL clause.

Quickstart

If you liked it and want to start using it right now:

  1. Install the NuGet package Dapper-QueryBuilder
  2. Start using it like this:
using DapperQueryBuilder;
// ...

var cn = new SqlConnection(connectionString);

// If you have all your parameters at once and just want to benefit from string interpolation:
var products = cn.QueryBuilder($@"
    SELECT ProductId, Name, ListPrice, Weight
    FROM [Product]
    WHERE [ListPrice] <= {maxPrice}
    AND [Weight] <= {maxWeight}
    AND [Name] LIKE {search}
    ORDER BY ProductId").Query<Product>();

Or building dynamic conditions like this:

using DapperQueryBuilder;
// ...

var cn = new SqlConnection(connectionString);

// If you want to dynamically add conditions
var q = cn.QueryBuilder($@"
    SELECT ProductId, Name, ListPrice, Weight
    FROM [Product]
    WHERE 1=1 ");

q.AppendLine($"AND [ListPrice] <= {maxPrice}");
q.AppendLine($"AND [Weight] <= {maxWeight}");
q.AppendLine($"AND [Name] LIKE {search}");
q.AppendLine($"ORDER BY ProductId");

var products = q.Query<Product>();

Filters list

Some people incorrectly believe that WHERE 1=1 causes a performance hit. It doesn’t. So this dummy condition is a great trick that lets you simply append additional conditions (if any), as shown in the example above.

Another option is to build your whole query but leave the filters to be dynamically defined (and rendered) later. DapperQueryBuilder has a special Where() method which saves the filters internally and later renders them in place of the /**where**/ placeholder.

int maxPrice = 1000;
int maxWeight = 15;
string search = "%Mountain%";

var cn = new SqlConnection(connectionString);

// You can build the query manually and just use QueryBuilder to replace "where" filters (if any)
var q = cn.QueryBuilder($@"SELECT ProductId, Name, ListPrice, Weight
    FROM [Product]
    /**where**/
    ORDER BY ProductId
    ");
    
// You just pass the parameters as if it was an interpolated string, 
// and QueryBuilder will automatically convert them to Dapper parameters (injection-safe)
q.Where($"[ListPrice] <= {maxPrice}");
q.Where($"[Weight] <= {maxWeight}");
q.Where($"[Name] LIKE {search}");

// Query() will automatically build your query and replace your /**where**/ (if any filter was added)
var products = q.Query<Product>();

You can also create OR filters like this:

q.Where(new Filters(Filters.FiltersType.OR)
{
    new Filter($"[Weight] <= {maxWeight}"),
    new Filter($"[Name] LIKE {search}")
});

Or you can mix and match OR/AND like this:

q.Where(new Filters(Filters.FiltersType.OR)
{
    new Filters(Filters.FiltersType.AND)
    {
        $"[Weight] <= {maxWeight}",
        $"[Weight] >= {minWeight}",
    },
    new Filter($"[Name] LIKE {search}")
});

And even arrays can be used as interpolated values:

var categories = new string[] { "Components", "Clothing", "Accessories" };
q.Append($"WHERE c.[Name] IN {categories}");

Fluent API (Chained-methods)

For those who like method-chaining guidance (or for those who allow end-users to build their own queries), there’s a Fluent API that allows you to build queries step-by-step mimicking dynamic SQL concatenation.

So, basically, instead of starting with a full query and just appending new filters (.Where()), the QueryBuilder will build the whole query for you:

var q = cn.QueryBuilder()
    .Select($"ProductId")
    .Select($"Name")
    .Select($"ListPrice")
    .Select($"Weight")
    .From($"[Product]")
    .Where($"[ListPrice] <= {maxPrice}")
    .Where($"[Weight] <= {maxWeight}")
    .Where($"[Name] LIKE {search}")
    .OrderBy($"ProductId");
    
var products = q.Query<Product>();

You would get this query:

SELECT ProductId, Name, ListPrice, Weight
FROM [Product]
WHERE [ListPrice] <= @p0 AND [Weight] <= @p1 AND [Name] LIKE @p2
ORDER BY ProductId

Hope you enjoyed this post as much as I did when I wrote this library!

Full source code here.

This is the second part of a multi-part post series I’m writing about Code Generation using C#. In the previous part I showed how to invoke C# scripts (and CSX files) from PowerShell and Visual Studio. In this part, I’ll create a simple extractor that reads the physical schema of a SQL Server database and saves it as a JSON file.

Creating the Project

The first step is creating a .NET project.

Running CSX scripts doesn’t require a csproj (or Visual Studio), but having a csproj is very helpful for many reasons:

  • You get IntelliSense (mostly in .cs files, not so much in .csx - so try to use CSX as little as possible)
  • You can check for compilation-errors
  • You can configure a Console App (or any other entry point) to launch and debug your scripts
  • csproj is required for automatically restoring NuGet packages (although if you already have the NuGet packages you may just use them without csproj)

All libraries which I’ll use in my generator (Dapper and Newtonsoft.Json) are compatible with netstandard2.0.
This means that the project type can be either .NET Framework (net472, which used to be called “full framework”) or .NET Core (netcoreapp2.1 or netcoreapp3.1) - both should work.

NuGet Packages

I’ll use two third-party libraries: Dapper (micro-ORM, for loading objects from SQL queries) and Newtonsoft.Json (JSON library, just for serialization). All we need is their DLLs, but the easiest way of getting those is by installing their NuGet packages into our csproj (which will automatically restore those packages for anyone who builds the csproj, so it’s hassle-free collaboration among multiple developers).
If you’re not using the Visual Studio IDE you can install those packages by running the following in the same folder as your csproj:

dotnet add package Dapper
dotnet add package Newtonsoft.Json
If you’re using Visual Studio you can use the Package Manager Console and run this:

Install-Package Dapper
Install-Package Newtonsoft.Json
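For reference, after installing both packages the csproj should contain PackageReference entries similar to the sketch below (target framework and project layout are illustrative; the versions shown are the ones the CSX script loads later):

```xml
<Project Sdk="Microsoft.NET.Sdk">

  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <TargetFramework>netcoreapp3.1</TargetFramework>
  </PropertyGroup>

  <ItemGroup>
    <PackageReference Include="Dapper" Version="2.0.35" />
    <PackageReference Include="Newtonsoft.Json" Version="12.0.3" />
  </ItemGroup>

</Project>
```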

SQL Server Schema Objects

Next, I’ll create my objects that will represent my physical SQL database. One important thing to remember is that C# scripting engines do NOT allow the use of namespaces.

SqlServerTable.cs:

using System;
using System.Collections.Generic;

public class SqlServerTable
{
    public string Database { get; set; }
    public string TableSchema { get; set; }
    public string TableName { get; set; }

    /// <summary>
    /// Can be "TABLE" or "VIEW"
    /// </summary>
    public string TableType { get; set; }

    public string TableDescription { get; set; }

    public List<SqlServerColumn> Columns { get; set; } = new List<SqlServerColumn>();

    /// <summary>
    /// FKs which point from THIS (Child) table to the primary key of OTHER (Parent) tables
    /// </summary>
    public List<SqlServerForeignKey> ForeignKeys { get; set; } = new List<SqlServerForeignKey>();

    /// <summary>
    /// FKs which point from OTHER (Child) tables to the primary key of THIS (Parent) table
    /// </summary>
    public List<SqlServerForeignKey> ChildForeignKeys { get; set; } = new List<SqlServerForeignKey>();

}

I’ll omit other classes for brevity, but you can refer to all code here (classes SqlServerColumn.cs, SqlServerForeignKey.cs, SqlServerForeignKeyMember.cs ).

Schema Reader

Finally, I’ll create the class which will read SQL metadata - SqlServerSchemaReader.cs:

using Dapper;
using Newtonsoft.Json;
using System;
using System.Data;
using System.IO;
using System.Linq;

public class SqlServerSchemaReader
{
  public Func<IDbConnection> CreateDbConnection { get; set; }

  public SqlServerSchemaReader(Func<IDbConnection> createDbConnection)
  {
    CreateDbConnection = createDbConnection;
  }

  public void ExportSchemaToJSON(string outputJsonSchema)
  {
    Console.WriteLine("Reading Database...");

    using (var cn = CreateDbConnection())
    {
      var tables = cn.Query<SqlServerTable>(@"
        SELECT 
          t.TABLE_CATALOG as [Database], 
          t.TABLE_SCHEMA as [TableSchema], 
          t.TABLE_NAME as [TableName], 
          CASE WHEN t.TABLE_TYPE='VIEW' THEN 'VIEW' ELSE 'TABLE' END as [TableType],
          ep.value as [TableDescription]
        FROM INFORMATION_SCHEMA.TABLES t
        INNER JOIN sys.schemas sc ON t.TABLE_SCHEMA = sc.[name]
          ... full code omitted for brevity - please refer to: 
          ... https://github.com/Drizin/CodegenCS/tree/master/src/CodegenCS.SqlServer
      ").AsList();

      var allColumns = cn.Query<SqlServerColumn>(@"
          ... full code omitted for brevity... 
      ").AsList();

      var fks = cn.Query<SqlServerForeignKey>(@"
          ... full code omitted for brevity... 
      ").AsList();

      var fkCols = cn.Query<SqlServerForeignKeyMember>(@"
          ... full code omitted for brevity... 
      ").AsList();
      foreach (var fk in fks)
      {
        fk.Columns = fkCols.Where(c => 
            c.ForeignKeyConstraintName == fk.ForeignKeyConstraintName && 
            c.FKTableSchema == fk.FKTableSchema
        ).OrderBy(c => c.PKColumnOrdinalPosition).ToList();
      }

      foreach (var table in tables)
      {
        table.Columns = allColumns.Where(c => c.TableSchema == table.TableSchema && c.TableName == table.TableName).ToList();
        foreach(var column in table.Columns)
          column.ClrType = GetClrType(table, column);
        table.Columns.ForEach(c => { c.Database = null; c.TableSchema = null; c.TableName = null; });

        // We copy FKs and remove redundant properties of the parent object (table) which we're attaching this FK into
        table.ForeignKeys = Clone(fks.Where(fk => fk.FKTableSchema == table.TableSchema && fk.FKTableName == table.TableName).ToList());
        table.ForeignKeys.ForEach(fk => { fk.FKTableSchema = null; fk.FKTableName = null; });

        // Same for child FKs: we copy them and remove redundant properties of the parent object (table) which we're attaching them into
        table.ChildForeignKeys = Clone(fks.Where(fk => fk.PKTableSchema == table.TableSchema && fk.PKTableName == table.TableName).ToList());
        table.ChildForeignKeys.ForEach(fk => { fk.PKTableSchema = null; fk.PKTableName = null; });

      }

      SqlServerDatabaseSchema schema = new SqlServerDatabaseSchema()
      {
        LastRefreshed = DateTimeOffset.Now,
        Tables = tables,
      };

      Console.WriteLine($"Saving into {outputJsonSchema}...");
      File.WriteAllText(outputJsonSchema, Newtonsoft.Json.JsonConvert.SerializeObject(schema, Newtonsoft.Json.Formatting.Indented));
    }

    Console.WriteLine("Success!");
  }

  string GetClrType(SqlServerTable table, SqlServerColumn column)
  {
    string sqlDataType = column.SqlDataType;
    switch (sqlDataType)
    {
      case "bigint":
        return typeof(long).FullName;
      case "smallint":
        return typeof(short).FullName;
      case "int":
        return typeof(int).FullName;
      case "uniqueidentifier":
        return typeof(Guid).FullName;
      case "smalldatetime":
      case "datetime":
      case "datetime2":
      case "date":
      case "time":
        return typeof(DateTime).FullName;
      case "datetimeoffset":
        return typeof(DateTimeOffset).FullName;
      case "float":
        return typeof(double).FullName;
      case "real":
        return typeof(float).FullName;
      case "numeric":
      case "smallmoney":
      case "decimal":
      case "money":
        return typeof(decimal).FullName;
      case "tinyint":
        return typeof(byte).FullName;
      case "bit":
        return typeof(bool).FullName;
      case "image":
      case "binary":
      case "varbinary":
      case "timestamp":
        return typeof(byte[]).FullName;
      case "nvarchar":
      case "varchar":
      case "nchar":
      case "char":
      case "text":
      case "ntext":
      case "xml":
        return typeof(string).FullName;
      default:
        Console.WriteLine($"Unknown sqlDataType for {table.TableName}.{column.ColumnName}: {sqlDataType}");
        return null;

      // Vendor-specific types
      case "hierarchyid":
        return "Microsoft.SqlServer.Types.SqlHierarchyId"; // requires Microsoft.SqlServer.Types.dll (EF or Dapper 1.34+)
      case "geography":
        return "Microsoft.SqlServer.Types.SqlGeography";  // requires Microsoft.SqlServer.Types.dll (EF or Dapper 1.32+)
      case "geometry":
        return "Microsoft.SqlServer.Types.SqlGeometry";  // requires Microsoft.SqlServer.Types.dll (EF or Dapper 1.33)+
    }
  }

  public static T Clone<T>(T source)
  {
    var serialized = JsonConvert.SerializeObject(source);
    return JsonConvert.DeserializeObject<T>(serialized);
  }

}

CSX

The idea is that we should put as much as possible into .cs files (and as little as possible in the .csx script), since IntelliSense and compile-time checks work better there than in CSX. The script file RefreshDatabaseSchema.csx will basically load libraries, define connection strings and paths, and invoke SqlServerSchemaReader.cs:

/// <summary>
/// This CSX Script will invoke SqlServerSchemaReader, which extracts the schema of SQL database and saves into a JSON file.
/// The easiest way to launch csi.exe (which is shipped with Visual Studio) to run this script is by using PowerShell script RefreshDatabaseSchema.ps1
/// You can do that from Visual Studio (see instructions in RefreshDatabaseSchema.ps1) or you can just execute "Powershell RefreshDatabaseSchema.ps1"
/// </summary>

// System libraries
#r "System.Data.dll"

// Load 3rd-party libraries by their relative paths, relative to "$Env:userprofile\.nuget\packages\"
#r "dapper\2.0.35\lib\netstandard2.0\Dapper.dll"
#r "newtonsoft.json\12.0.3\lib\netstandard2.0\Newtonsoft.Json.dll"

// CS files are better than CSX because Intellisense and Compile-time checks works better. 
#load "SqlServerTable.cs"
#load "SqlServerColumn.cs"
#load "SqlServerForeignKey.cs"
#load "SqlServerForeignKeyMember.cs"
#load "SqlServerDatabaseSchema.cs"
#load "SqlServerSchemaReader.cs"

using System;
using System.IO;
using System.Runtime.CompilerServices;
using System.Data;
using System.Data.SqlClient;

// Helpers to get the location of the current CSX script
public static string GetScriptPath([CallerFilePath] string path = null) => path;
public static string GetScriptFolder([CallerFilePath] string path = null) => Path.GetDirectoryName(path);


// location relative to the CSX script
string outputJsonSchema = Path.GetFullPath(Path.Combine(GetScriptFolder(), "AdventureWorksSchema.json")); 
string connectionString = @"Data Source=MYWORKSTATION\SQLEXPRESS;
                            Initial Catalog=AdventureWorks;
                            Integrated Security=True;";

Func<IDbConnection> connectionFactory = () => new SqlConnection(connectionString);
var reader = new SqlServerSchemaReader(connectionFactory);
reader.ExportSchemaToJSON(outputJsonSchema);

PowerShell

Last, I’ll create a PowerShell script to invoke the CSX file. This is useful because the script can locate csi.exe in several possible locations, and because it can pass csi.exe the location of the per-user NuGet packages, so the CSX can load libraries by their relative locations without having to hard-code user-specific folders. RefreshDatabaseSchema.ps1:

# To Execute Powershell Scripts from Visual Studio:
# 1) Right-button PS1 file - "Open With..."
# 2) Configure:
#      Program: Powershell.exe
#      Arguments: -noexit -File %1
#      Friendly Name: Execute PowerShell Script

# To execute CSX scripts you'll need CSI.EXE (C# REPL) which is shipped with Visual Studio
# but can also be installed by using the NuGet package Microsoft.Net.Compilers.Toolset - https://www.nuget.org/packages/Microsoft.Net.Compilers.Toolset/

# For more info about launching CSX scripts from PowerShell or from Visual Studio, check https://drizin.io/code-generation-csx-scripts-part1/

$dir = Split-Path $MyInvocation.MyCommand.Path 
$script = Join-Path $dir ".\RefreshDatabaseSchema.csx"


# Locate CSI.EXE by searching common paths
$csi = ( 
    "$Env:userprofile\.nuget\packages\microsoft.net.compilers.toolset\3.6.0\tasks\net472\csi.exe",
    "$Env:programfiles (x86)\Microsoft Visual Studio\2019\Enterprise\MSBuild\Current\Bin\Roslyn\csi.exe",
    "$Env:programfiles (x86)\Microsoft Visual Studio\2019\Professional\MSBuild\Current\Bin\Roslyn\csi.exe",
    "$Env:programfiles (x86)\Microsoft Visual Studio\2019\Community\MSBuild\Current\Bin\Roslyn\csi.exe",
    "$Env:programfiles (x86)\Microsoft Visual Studio\2017\Enterprise\MSBuild\15.0\Bin\Roslyn\csi.exe",
    "$Env:programfiles (x86)\Microsoft Visual Studio\2017\Professional\MSBuild\15.0\Bin\Roslyn\csi.exe",
    "$Env:programfiles (x86)\Microsoft Visual Studio\2017\Community\MSBuild\15.0\Bin\Roslyn\csi.exe"
) | Where-Object { Test-Path $_ } | Select-Object -first 1

if (!$csi)
{
    Write-Host "---------------------------------------" -for red
    Write-Host "Can't find csi.exe" -for red
    Write-Host "Please fix search paths above, or install NuGet Microsoft.Net.Compilers.Toolset" -for red
    Write-Host "---------------------------------------" -for red
    Exit 1
}


$stopwatch = [System.Diagnostics.Stopwatch]::StartNew()

Write-host "Starting csi.exe $script ..." -for yellow
& $csi /lib:"$Env:userprofile\.nuget\packages\" $script

Write-Host "Finished in $($stopwatch.Elapsed.TotalMilliSeconds) milliseconds"

# Since I configured "-noexit" parameter in Visual Studio I don't need this
#if ($host.Name -notmatch 'ISE') { Write-Host -NoNewLine "(Just press Enter to exit)" -for cyan; read-host; }  

Running the PowerShell script reads the database schema and generates the result file AdventureWorksSchema.json.

Full Source code available here
