This is the first in a series of posts looking at Product Extensibility in .NET. In this post, I’m going to talk about business models for IT vendors, and how you can run a profitable SaaS business without sticking to the traditional models, by making some allowances in your architecture that will keep your customers happy while keeping maintenance costs low.

Enterprise Software, Custom Development, and SaaS

Enterprise Software companies (like Oracle, Microsoft, SAP, etc.) have their value proposition centered on the products they own, but they usually sell professional services attached to those products, aimed at training, support, product configuration, and some degree of customization. They will charge tons of money for those professional services, but their core value still resides in their products. You can purchase Oracle Database or Siebel CRM, and they will gladly customize that for you for a few hundred thousand dollars, but they surely won’t modify anything in the core of their product, no matter how much you beg (or pay) for it.

At the opposite end, Custom Development companies (including consultancies that offer custom development services, like Accenture or Tata Consultancy Services) provide their value by doing exactly what their customers want, usually backed by a team of senior consultants/engineers and some well-established processes. Their value proposition is that they have a large network of consultants who can help with any problem you may have, and you also have someone to blame in case things go wrong.

Custom Development companies don’t really care about how much customization you ask for. Whatever you ask for, they will overcharge you and happily develop it. For those companies, body shopping is exactly what they sell, and their technology or product (if any) is definitely not relevant to their business. If by any chance they already have a codebase they can reuse(*), they won’t mind completely modifying (or even rewriting) it if you ask, because that’s what they are there for. Your codebase won’t be reused for any other customer because it’s probably so tied to your business that it won’t make sense to anyone else. Like Enterprise vendors, Custom Development companies will also cost you an arm and a leg. Similarly, they also have large costs for each new customer.

(*) I won’t call it a product - custom development companies and consultancies do not have products, even if they can reuse their past projects to some extent.

Finally, there are the Software-as-a-Service companies, which are based on economies of scale: each new customer should increase revenue at a very low marginal cost. The value proposition of those companies is usually their technology, combined with low adoption barriers, the lack of vendor lock-in, and the aforementioned scalability. For a “pure” SaaS company, the more customers you have the better - but only as long as they don’t bother you too much or give you too much work, because the major income source is NOT body shopping, but subscription fees (recurring revenue).

Although SaaS vendors have tried hard to commoditize application functionalities for their customers, each customer is different and is not happy with out-of-the-box solutions - they need customizations and extensions. The problem is that SaaS sits between Custom Development and Enterprise Software, in the sense that SaaS vendors can modify their core product for their customers, but they must also maintain a stable product codebase which evolves for all customers, since they are responsible for hosting and maintaining their customers’ instances. Unfortunately, most SaaS companies can’t find the right balance between maintaining customizations and managing the product.

Let’s Play Microsoft

Developing a highly configurable and truly extensible product is very difficult and expensive. Making something configurable has a fixed cost that only pays off when you have a large number of customers who need that configuration. Making everything configurable not only has a large cost, but is also pointless, since you don’t want to sell a framework (much less a compiler), but a stable and well-defined product. Making something extensible to end users requires a well-designed plugin architecture (or a well-designed API), and that comes with a price tag which usually is only compensated when you have a huge number of customers. Large companies like Microsoft and Oracle have teams with hundreds of engineers working exclusively on Product Development (or R&D, as you learn in an MBA), and they still have limitations. A SaaS company cannot afford that model, unless it has tens of thousands of customers willing to pay for non-commodity SaaS.

I’ve worked for a few years in a SaaS company that followed that Enterprise model, and it was very unproductive. When a customer requested something from the Projects/Consultancy department, the request was reviewed by a consultant, who usually had to discuss it with the Product Development team, which would evaluate whether that request made sense to be incorporated into the product. If it didn’t fit into the product, they would (in the best case) provide a new extension point so that the Projects team could develop a new extension and plug it into the product as a DLL. Does this sound reasonable? If it sounds reasonable to you, it’s because you didn’t picture that most of the requests were cosmetic changes: remove this button, rename that label, change that color, make this field read-only, etc. All those small changes had to pass through many areas. THAT was costly.

Even for non-trivial things (like pricing rules and discount rules), each customer had rules so specific that the product configurations were not enough for most of them. Obviously, some changes were possible without a product modification (after all, there was a whole product development team for that), but that was the exception, not the rule. As a consequence of that structure, we could only reach enterprise customers (which are usually the ones with more money than sense).

At some point the product team also decided to develop their own scripting language (VBA-like). Yes, that’s right - a SaaS company, with no more than a hundred customers, decided to develop its own scripting language, for internal use, because no one could touch the holy product, which was supposed to be so well designed that no educated developer would be able to modify the source code without breaking everything.

If your company operates like that, remember: you are not Microsoft. In other words, don’t develop a plugin architecture or a scripting language or a complex API unless you are either a Platform-as-a-Service company (like Salesforce) or you have tens of thousands of customers who will extend your product on their own. If you are developing developer tools for internal use, not only are you not adding value to the company, but your product managers should also leave their passion for technology aside and start thinking about business. Microsoft cannot develop automation for the worksheets of all their customers, SAP cannot develop ABAP customizations for all their customers, but your SaaS company can customize each customer’s instance, as long as you are not trying to sell commodity SaaS.

There is a similar mistake, which I’ve seen a few times, that is to completely ignore that your product should solve some very clear business needs, and to turn your product into a general tool for creating any program, like a 4th-generation programming language. Unless you are competing with Microsoft (Visual Studio) or Borland, and unless you are trying to make your own GeneXus / FoxPro / Informix / Clipper / Progress / etc., please focus on your product, with some clear business value, and not on applications for making applications.

If you are a SaaS company and your engineers love building compilers and development tools, you should either teach them Business 101 or hire a results-oriented team.

Independent Codebases for each Customer

When you run a multi-tenant SaaS application, all your customers run on the same instance (and consequently on the same codebase). A multi-tenant application must surely provide some level of configuration: a customer can probably add their name, their logo, their colors and visual identity, and probably set up users, roles and permissions. But the codebase is shared among all customers, so unless you have a quite elaborate plugin architecture (which has a price tag), the customizations are usually expensive and come with lots of limitations. Remember: you are not Microsoft.

On the other side, when you run single-tenant applications (each customer with a dedicated instance - not necessarily its own server), you can have an independent codebase for each customer, and that provides an unlimited level of customization.

Perfect. So I’ll run all my customers in single-tenant instances, and each one will have their own codebase repository, totally disconnected from my baseline product. If that works for Accenture, it can work for me, right?

Not really. As I explained before, consultancy body shops focus on selling man-hour projects. They don’t have a core product, and they don’t care about your source code. Also, they do not host your software. A SaaS company has a core product, which is its most valuable asset; it hosts its customers’ applications, and it should provide them with updates and fixes. It cannot just copy the codebase for each customer, customize it, and keep that code running unattended. Well, actually it can, but that’s not scalable, and it would be leaving money on the table: customers need upgrades, customers need bug fixes, customers don’t want to depend on a single engineer who understands their customizations. When you have a SaaS application with more than a few dozen customers, meeting all those requirements with independent codebases is either impossible or very expensive and naive.

Supposing you are brave enough to run your customers on independent codebases: as soon as you reach a few hundred customers, every bug that is discovered and fixed will be difficult to apply to each of those hundreds of customers, not only because you would have to modify a hundred different places, but also because by then those projects are probably very different from each other. Each new feature developed for one customer (or for the product) won’t be easily available to existing customers. Each new engineer who joins your team will have a learning curve trying to understand the customizations for each client. Every time some engineer goes on vacation, the person who covers for them will have a hard time understanding the new project. As time goes by, it becomes almost impossible to distinguish between what was part of your original product and what was a client customization. So you’ll end up with hundreds of different loosely related applications, and you lose your most valuable asset: having a coherent product that is mastered by your engineers.

Remember, you don’t want to be as huge as Microsoft - you want to scale to as many customers as possible with as few engineers as possible.

Bottom line is: the major problem in managing a SaaS Product is the tradeoff between Extensibility vs Maintenance.

Extensibility for SaaS Companies who don’t want to be Microsoft

In my next posts I’ll describe some ideas for developing an architecture that strikes a balance between the “Holy Product” model and the “Repository per Client” model. You’ll be able to offer product enhancements and upgrades to your customers, while being able to completely customize the product programmatically for each customer. Customizing the application for each customer will be easier than ever, and that translates into lower costs for your customers and more revenue for you.

I’ll use my favorite technology stack for those posts (C#, SQL Server and Entity Framework), but you should be able to adapt most ideas to any other language/framework. I’ll explore how some tools/technologies/paradigms can be used, and how some of them don’t make sense or are not practical. Off the top of my head I can think of topics like POCO inheritance, code generation, branches per customer, the publish-subscribe pattern, Dependency Injection, and partial classes, but I’ll probably add more topics as I write the next articles and develop my code samples.


When I was a kid, most computers here in Brazil were using US QWERTY keyboards, and with some help from this nice teacher (on my MS-DOS 5.0) I taught myself to type on that keyboard layout.

One day, someone who was probably a strong nationalist (or maybe a lobbyist, or maybe just someone suffering from NIH syndrome) invented the Brazilian Portuguese keyboard, and all retailers and companies started using that new standard. I resisted for many years and kept buying only QWERTY keyboards, until I finally gave up and embraced the Brazilian keyboard, since it was getting harder to buy a QWERTY keyboard in Brazil.

The problem is that although we have created our own power plug standard, our own digital TV standard, our own TV middleware, and even our own analog color TV standard back in the 70s, we still have to communicate and interact with other countries that follow other standards.
(If you don’t know Brazil, I must explain that all those standards are NOT about nationalism, patriotism or about Not-Invented-Here syndrome - it’s all about spending lots of public funds so that our politicians can divert huge amounts of money into their own pockets)

But jokes aside (no, those standards were not a joke), sometimes we use laptops purchased abroad (which don’t have our keyboard layout), and we share servers that are hosted or used by people in other countries. That keyboard mismatch usually happens when you use one keyboard to control a computer that is also controlled by people with different keyboards.

Windows allows you to set up multiple languages and multiple keyboards for each language, and you can toggle between keyboards/languages with a hotkey (Alt+Shift by default). The problem is that if you type as fast as Mavis taught me to, you’ll probably toggle keyboards by mistake as frequently as I do.

To solve that, I decided to disable the hotkey for toggling keyboards, and more than that, I decided to create a PowerShell script that would completely remove one keyboard/language and configure another one.

The script below (just save it with a .ps1 extension) will toggle (uninstalling one keyboard and reinstalling the other) between the Portuguese Brazil ABNT2 keyboard (00010416) and US Qwerty (00000409), while always keeping the English language (0409).

# This script toggles keyboards between "Portuguese Brazil ABNT2" and "US Qwerty",
# always keeping the English (United States) language (0409).

$currentLangAndKeyboard = (Get-WinUserLanguageList).InputMethodTips

if ($currentLangAndKeyboard -eq "0409:00000409") {
    # Currently US Qwerty - replace it with the ABNT2 keyboard
    $langList = New-WinUserLanguageList en-US
    $langList[0].InputMethodTips.Clear()
    $langList[0].InputMethodTips.Add('0409:00010416') # English (United States) - Keyboard Portuguese Brazil ABNT2
    Set-WinUserLanguageList $langList -Force
} else {
    # Currently ABNT2 (or anything else) - replace it with US Qwerty
    $langList = New-WinUserLanguageList en-US
    $langList[0].InputMethodTips.Clear()
    $langList[0].InputMethodTips.Add('0409:00000409') # English (United States) - US Qwerty
    Set-WinUserLanguageList $langList -Force
}

# If you want to have two languages (or keyboards) at the same time, just .Add() both,
# but disable the toggle-language hotkeys (value 3 disables the hotkey):
# $HKCUKeyboardLayoutToggle = 'HKCU:\Keyboard Layout\Toggle\'
# Set-ItemProperty -Path $HKCUKeyboardLayoutToggle -Name 'Language Hotkey' -Value 3
# Set-ItemProperty -Path $HKCUKeyboardLayoutToggle -Name 'Layout Hotkey' -Value 3
# Set-ItemProperty -Path $HKCUKeyboardLayoutToggle -Name 'Hotkey' -Value 3

As I said, I wanted to have only one language at a time to avoid the language hotkey, but later I realized that I could have just disabled that annoying hotkey. If you only want to disable the hotkey, just uncomment the final block in the script, which adds some keys to your registry. And if you want to use a PowerShell script to configure multiple keyboards simultaneously, just call InputMethodTips.Add for each language/keyboard you want to add.

Auditing Tables

Auditing tables are used to track transactions against a particular table or tables, and can sometimes even be used to audit read-only queries (SELECTs) on your tables (but that is not the subject of this post). SQL Server has an out-of-the-box audit feature, and there are other alternatives, but you may prefer a custom solution that gives you more control and a better understanding of the audit tables.

One popular trigger-based solution to this problem is described in this article, with SQL scripts for generating the audit tables (also called shadow tables) and triggers. That solution creates one record for each operation (insert, update, delete), with some added columns like the date of the operation and the user who performed it. The problem with that design is that it’s difficult to find the state of a particular record at a given time - you can use TOP 1 and ORDER BY to find the state of a single record at any point in time, but that gets difficult when you have to join versioned tables, or when you need to find a whole set of records that existed at a given time. So it’s not a good solution for versioning data.
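To see what I mean, here is the kind of query that a one-row-per-operation design forces on you (this is only an illustration - the Shadow_Products table and the AuditDate column are hypothetical names standing in for whatever shadow table and "operation date" column that design uses):

```sql
-- Point-in-time lookup under the one-row-per-operation design (illustrative).
-- Finding ONE record's state at @SomeDate needs TOP 1 / ORDER BY:
SELECT TOP 1 *
FROM [Shadow_Products]
WHERE ProductID = @ProductID
  AND AuditDate <= @SomeDate
ORDER BY AuditDate DESC

-- And reconstructing the WHOLE table as of @SomeDate (e.g. to join it against
-- other versioned tables) needs a correlated subquery per record:
SELECT *
FROM [Shadow_Products] a
WHERE a.AuditDate = (SELECT MAX(a2.AuditDate)
                     FROM [Shadow_Products] a2
                     WHERE a2.ProductID = a.ProductID
                       AND a2.AuditDate <= @SomeDate)
```

Doable for one table, but clumsy and expensive as soon as several versioned tables are involved in the same query.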

Another problem is that unless your application uses Windows Authentication (or you are still in the 90s, when it was common for each user of a client-server application to have a dedicated connection to the database), logging the database user that performed the operation is useless - you probably want to know which application user performed it.

There are other solutions that may (or may not) save some space by tracking only the modified columns, but they also face the same problems that I’ve mentioned earlier.

Personally, I believe it’s much better to waste some disk space in favor of something that gives me better performance and makes daily development easier, because a developer’s time is usually much more expensive than disk space.

Data Versioning

When people think about versioning data, they usually think of storing the versioned records in the same table as the main table (where your active records live). Please don’t. That would radically increase the complexity of your queries, which would have to ensure that only the latest version of each record is used and that past records are not duplicating your results. That’s very error-prone, and it will also hurt database performance. The most common versioning error in database design is keeping past prices in the same table as current prices. The best place to store past versions of your data is a separate table.

Similarly, using soft deletes (that famous IsDeleted flag) is a bad idea for the same reasons. The correct place for your historical data is a separate report database, not inside your transactional application. If you understand this design and follow this rule, be careful about what you consider a deleted record: what should be deleted (and possibly moved into your audit/historical table) are records that shouldn’t exist and were created by mistake or similar acts. A duplicate customer is a good example of something that should be deleted, especially because it forces you to either merge the related entities into the correct record or cascade-delete them. A batch of accounting entries (posts) that was incorrectly calculated (and not yet used in your monthly balance or taxes) should be deleted (and probably logged into your audit table). A student who is inactive (because he is not enrolled in any course) should NOT be deleted.
In general, if an entity can come back to life (like the student), it shouldn’t be deleted (flagging it as inactive is perfectly correct and should not be confused with a soft delete); but if the entity was simply wrong and could be recreated somehow (like the accounting entries), or already exists elsewhere (like versioned records, or the duplicate customer), then it should be deleted (not soft-deleted, which will cause you more harm than good). A good smell that you are incorrectly using soft deletes is when your application button says “Delete” but you are not really deleting. If you are just inactivating the record, the button should probably reflect that.

In summary, your transactional tables should keep only active data - no deleted records and no past revisions. Don’t be lazy: create new tables (and new CRUDs) for historical data. It will take you a few minutes but will save you countless hours later, so I’m sure it’s a good investment.

Let’s kill two birds with one stone, and use audit tables for versioning data too

Since versioning tables and audit tables have much in common, I decided to use a single structure for both. That led me to make a few changes to the triggers/audit tables from this article.

This is my proposed design for audit-tables:

  • Like other solutions, each audit table has the same columns as the audited table, plus a new identity primary key.
  • For tracking the user who performed an operation, I keep both ID (int) and Username (varchar) columns. When I can identify the application user, I store both their ID and their name or login. When I can’t identify the user performing the transaction (when something happens outside the application), I track the SQL user, hostname and IP instead.
  • Each audit row has columns for tracking when that record revision started existing (either as a new record, or modified from a previous state), and columns for tracking when that record revision stopped existing (either because it was deleted, or modified into a newer state).
  • AuditStartDate tracks the starting moment of the record revision; AuditEndDate tracks the ending moment of that revision.
  • AuditStartUserID, AuditStartUserName, AuditEndUserID and AuditEndUserName are the user ID and user name that respectively put the record into that state and removed the record from that state.
  • AuditStartOperation is I (INSERTED) if the tracked record is new (first revision) or U (UPDATED) if the tracked record already existed and was just updated.
  • AuditEndOperation is D (DELETED) if the tracked record ceased to exist because it was deleted, or U if it was just updated into a new state.
  • AuditStartTransactionGUID and AuditEndTransactionGUID are unique identifiers that I use to know which operations happened in the same transaction, and mostly for connecting the previous state of a record to the next state (more on that later).
  • As you may have noticed, I don’t have a RevisionID for numbering the revisions of each record. That would force the trigger to refer to the audit table itself, and might even generate deadlocks. I decided I just don’t need it: I can renumber my audit records whenever necessary.
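To make the design concrete, here is a rough, illustrative sketch of what an UPDATE trigger could look like under this scheme. It is not a drop-in script: the trigger name is hypothetical, the column list follows the Audit_dboProducts sample at the end of this post, and it relies on the sp_GetContextInfo procedure described later:

```sql
-- Hypothetical UPDATE trigger sketch for the design above (illustrative only).
CREATE TRIGGER [dbo].[trg_Products_Update] ON [dbo].[Products]
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    DECLARE @UserID int, @Username varchar(128), @TransactionGUID uniqueidentifier;
    DECLARE @Now datetime = GETDATE();
    EXEC [dbo].[sp_GetContextInfo] @UserID OUTPUT, @Username OUTPUT, @TransactionGUID OUTPUT;

    -- Close the currently active revision of each updated record...
    UPDATE a SET
        AuditEndDate = @Now,
        AuditEndUserID = @UserID,
        AuditEndUserName = @Username,
        AuditEndOperation = 'U',
        AuditEndTransactionGUID = @TransactionGUID
    FROM [audit].[Audit_dboProducts] a
    JOIN inserted i ON a.ProductID = i.ProductID
    WHERE a.AuditEndOperation IS NULL;

    -- ...and open a new revision with the new state, using the SAME date and GUID,
    -- so the time periods stay contiguous and the two revisions stay linked.
    INSERT INTO [audit].[Audit_dboProducts]
        (ProductID, CategoryID, Discontinued, ProductName, QuantityPerUnit, ReorderLevel,
         SupplierID, UnitPrice, UnitsInStock, UnitsOnOrder,
         AuditStartDate, AuditEndDate, AuditStartUserID, AuditStartUserName,
         AuditStartOperation, AuditStartTransactionGUID)
    SELECT ProductID, CategoryID, Discontinued, ProductName, QuantityPerUnit, ReorderLevel,
         SupplierID, UnitPrice, UnitsInStock, UnitsOnOrder,
         @Now, '9999-12-31', @UserID, @Username, 'U', @TransactionGUID
    FROM inserted;
END
```

The insert and delete triggers follow the same idea: the insert trigger only opens a revision (with AuditStartOperation = 'I'), and the delete trigger only closes the active one (with AuditEndOperation = 'D').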

A visual example to make things clear

Richard Drizin created the product.
When a record is created, a record is created in the audit table tracking all the information that was inserted into the audited table (highlighted fields in the screenshot), plus some tracking information (non-highlighted fields): the operation (Insert), the date when it was inserted, and the user who inserted it.

Mickey Mouse updated the product (changed the unit price).
When a record is updated, a new record is created in the audit table tracking the new state, and the previous revision is marked as no longer valid. The highlighted fields on the top right are the tracking information for the new revision, and the same tracking information is used to update the end of life of the previous revision (highlighted on the bottom left). Please note that the date and the transaction used for the new revision are exactly the same as those used for marking the end of the previous revision’s lifetime - this gives you an easy and elegant way to link the previous state to the new state, and using the exact same datetime is important for keeping the time periods contiguous. Also note that the AuditEndOperation of revision 1 was marked as “U”, since that revision was not deleted, but updated into a new state.

Donald Duck deleted the product.
When a record is deleted, no new revision is created, but the previously active revision must be marked as no longer valid. The highlighted fields are the ones updated in the previous revision, showing the user who deleted the record and the deletion date.

Queries will be as simple as this:

-- To find the ACTIVE version
SELECT * FROM [Audit_Products] WHERE GETDATE() BETWEEN AuditStartDate AND AuditEndDate

-- To find the version that existed at any given time
SELECT * FROM [Audit_Products] WHERE @SomeDate BETWEEN AuditStartDate AND AuditEndDate
-- AuditEndOperation indicates whether that version is still active (NULL), was DELETED ('D'), or was UPDATED ('U')

-- To find the first version
SELECT * FROM [Audit_Products] WHERE AuditStartOperation='I'

-- To find the last version (even if deleted)
SELECT * FROM [Audit_Products] WHERE AuditEndDate='9999-12-31' OR AuditEndOperation='D'

Please note that some of those queries assume you are using surrogate keys, which guarantee that under normal conditions each key will have only one insert and at most one delete. And after all, using surrogate keys is almost always (if not always) a good choice.

Please also note that SQL BETWEEN is INCLUSIVE, so if you use BETWEEN it is possible (although very unlikely, because you would have to search for the exact moment some record was updated) to get 2 different revisions. I used BETWEEN to make the article easier to understand, but when searching for the state of records at a past point in time it’s safer to use the half-open interval [AuditStartDate, AuditEndDate): @SomeDate >= AuditStartDate AND @SomeDate < AuditEndDate.

Also note that if you run GETDATE() >= AuditStartDate AND GETDATE() < AuditEndDate you could still have problems, because each evaluation of GETDATE() (even within a single statement) could return a different value. If you want the current records, instead of using GETDATE() you can simply check AuditEndOperation.

So, in summary, these are foolproof queries:

-- To find the ACTIVE version
SELECT * FROM [Audit_Products] WHERE AuditEndOperation IS NULL

-- To find the version that existed at any given time
SELECT * FROM [Audit_Products] WHERE @SomeDate>=AuditStartDate AND @SomeDate<AuditEndDate
-- AuditEndOperation indicates whether that version is still active (NULL), was DELETED ('D'), or was UPDATED ('U')

-- To find the first version
SELECT * FROM [Audit_Products] WHERE AuditStartOperation='I'

-- To find the last version (even if deleted)
SELECT * FROM [Audit_Products] WHERE AuditEndDate='9999-12-31' OR AuditEndOperation='D'
-- or even
SELECT * FROM [Audit_Products] WHERE AuditEndOperation IS NULL OR AuditEndOperation='D'

The implementation

For tracking which application user (and not database user) performed an operation, you must somehow pass that information from your application to your database connection. At first I was using SQL CONTEXT_INFO to pass information about the currently logged user, but then I decided to use temporary tables instead, to avoid the complexity of binary serialization. This is how I pass information to my triggers:

CREATE PROCEDURE [dbo].[sp_SetContextInfo]
	@UserID INT,
	@Username varchar(128) = NULL
AS
BEGIN
	IF OBJECT_ID('tempdb..#session') IS NOT NULL
		DROP TABLE #session
	CREATE TABLE #session ([Username] varchar(128), [UserID] int NOT NULL, [TransactionGUID] UNIQUEIDENTIFIER NOT NULL)
	INSERT INTO #session VALUES (@Username, @UserID, NEWID())
END

And this is how my triggers read that information back:

CREATE PROCEDURE [dbo].[sp_GetContextInfo]
	@UserID INT OUTPUT,
	@Username varchar(128) OUTPUT,
	@TransactionGUID UNIQUEIDENTIFIER OUTPUT
AS
BEGIN
	SET @UserID = 0
	SET @Username = NULL
	SET @TransactionGUID = NEWID()
	IF OBJECT_ID('tempdb..#session') IS NOT NULL BEGIN -- Get @Username and @UserID given by the application
		SELECT @Username = Username, @UserID = COALESCE(UserID, 0), @TransactionGUID = COALESCE(TransactionGUID, NEWID())
		FROM #session
	END
	IF (@Username IS NULL) -- if no application user was given, get SQL user, hostname and IP
		SELECT @Username = '[' + SYSTEM_USER + '] ' + RTRIM(CAST(hostname AS VARCHAR))
			+ ' (' + RTRIM(CAST(CONNECTIONPROPERTY('client_net_address') AS VARCHAR)) + ')'
		FROM master..sysprocesses WHERE spid = @@SPID
END

This is how I pass information about the current user to the database connections (using C# and Entity Framework 6), so that every change can be tracked down to the correct user:

namespace NorthwindAudit
{
    using System;
    using System.Data;
    using System.Data.SqlClient;

    partial class NorthwindAuditDB
    {
        /// <summary>
        /// Name of the currently logged user that is using the connection. For auditing purposes.
        /// </summary>
        public string Username { get; set; }

        /// <summary>
        /// ID of the currently logged user that is using the connection. For auditing purposes.
        /// </summary>
        public int UserID { get; set; }

        // Modify your constructor to force the developer to pass the username and user ID.
        public NorthwindAuditDB(string Username, int UserID) : this()
        {
            this.Username = Username;
            this.UserID = UserID;
            this.Configuration.LazyLoadingEnabled = true;

            // You may want to disable this check if you have batch jobs that don't run in a
            // user's context... but I like to enforce that the caller always provides some user.
            if (this.UserID == 0 || this.Username == null)
                throw new ArgumentNullException("You must provide the application user, for auditing purposes");

            this.Database.Connection.StateChange += new System.Data.StateChangeEventHandler(Connection_StateChange);
        }

        // Pass the application user to SQL when the connection opens
        // (because the connection could already have been used by another DbContext).
        void Connection_StateChange(object sender, System.Data.StateChangeEventArgs e)
        {
            if (e.CurrentState == ConnectionState.Open && e.OriginalState != ConnectionState.Open)
                SetConnectionUser(this.UserID, this.Username);
        }

        void SetConnectionUser(int userID, string username)
        {
            // Create the local temporary context table
            var cmd = this.Database.Connection.CreateCommand();
            cmd.CommandText = "IF OBJECT_ID('tempdb..#session') IS NOT NULL DROP TABLE #session";
            cmd.ExecuteNonQuery();

            if (userID != 0 && username != null)
            {
                cmd.CommandText = "CREATE TABLE #session ([Username] varchar(128), [UserID] int NOT NULL, [TransactionGUID] UNIQUEIDENTIFIER NOT NULL); "
                                + "INSERT INTO #session ([Username], [UserID], [TransactionGUID]) VALUES (@Username, @UserID, NEWID())";
                cmd.Parameters.Add(new SqlParameter("@UserID", userID));
                cmd.Parameters.Add(new SqlParameter("@Username", username ?? ""));
                cmd.ExecuteNonQuery();
            }
        }

        // This probably is not necessary, but I like to check that the session table matches the provided user.
        // I haven't made stress tests for concurrency issues, so better safe than sorry.
        public override int SaveChanges()
        {
            if (this.UserID == 0 || this.Username == null)
                throw new ArgumentNullException("You must provide a user for the connection, for auditing purposes");

            #region Just in case! Double-checking that table #session was created and that it matches the user for the context
            bool wasClosed = false;
            if (this.Database.Connection.State == ConnectionState.Closed)
            {
                wasClosed = true;
                this.Database.Connection.Open(); // ExecuteNonQuery requires an open and available Connection
            }

            var cmd = this.Database.Connection.CreateCommand();
            cmd.CommandText = "EXEC [dbo].[sp_GetContextInfo] @UserID OUTPUT, @Username OUTPUT, @TransactionGUID OUTPUT";
            var parm1 = new SqlParameter("@UserID", SqlDbType.Int); parm1.Direction = ParameterDirection.Output; cmd.Parameters.Add(parm1);
            var parm2 = new SqlParameter("@Username", SqlDbType.VarChar, 128); parm2.Direction = ParameterDirection.Output; cmd.Parameters.Add(parm2);
            var parm3 = new SqlParameter("@TransactionGUID", SqlDbType.UniqueIdentifier); parm3.Direction = ParameterDirection.Output; cmd.Parameters.Add(parm3);
            cmd.ExecuteNonQuery();

            if (wasClosed)
                this.Database.Connection.Close();

            if (parm1.Value == null || ((int)parm1.Value) == 0 || parm2.Value == null || string.IsNullOrEmpty((string)parm2.Value))
                throw new ArgumentNullException("You must provide a user for the connection, for auditing purposes");
            if (((int)parm1.Value) != this.UserID || ((string)parm2.Value) != this.Username)
                throw new InvalidOperationException("The user provided in the #session table does not match the user provided to the DbContext");
            #endregion

            return base.SaveChanges();
        }
    }
}

This is a sample of the Audit Table for the Northwind Products table:

CREATE TABLE [audit].[Audit_dboProducts](
	[Audit_dboProductsID] [int] IDENTITY(1,1) NOT NULL,
	[ProductID] [int] NOT NULL,
	[CategoryID] [int] NULL,
	[Discontinued] [bit] NOT NULL,
	[ProductName] [nvarchar](40) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL,
	[QuantityPerUnit] [nvarchar](20) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
	[ReorderLevel] [smallint] NULL,
	[SupplierID] [int] NULL,
	[UnitPrice] [money] NULL,
	[UnitsInStock] [smallint] NULL,
	[UnitsOnOrder] [smallint] NULL,
	[AuditStartDate] [datetime] NOT NULL,
	[AuditEndDate] [datetime] NOT NULL,
	[AuditStartOperation] [char](1) COLLATE Latin1_General_CI_AS NOT NULL,
	[AuditEndOperation] [char](1) COLLATE Latin1_General_CI_AS NULL,
	[AuditStartUserID] [int] NOT NULL,
	[AuditStartUsername] [varchar](128) COLLATE Latin1_General_CI_AS NOT NULL,
	[AuditEndUserID] [int] NULL,
	[AuditEndUsername] [varchar](128) COLLATE Latin1_General_CI_AS NULL,
	[AuditStartTransactionGUID] [uniqueidentifier] NOT NULL,
	[AuditEndTransactionGUID] [uniqueidentifier] NULL,
	CONSTRAINT [PK_Audit_dboProducts] PRIMARY KEY CLUSTERED ([Audit_dboProductsID] ASC)
)

This is a sample of the Audit Trigger for the Northwind Products table:

ALTER TRIGGER [dbo].[trAuditProducts] ON [dbo].[Products]
WITH EXECUTE AS 'audituser'
AFTER INSERT, UPDATE, DELETE
AS
BEGIN
	SET NOCOUNT ON -- Trigger cannot affect the "rows affected" counter, or else it would break Entity Framework

	-- Logged User
	DECLARE @UserID int
	DECLARE @Username varchar(128)
	DECLARE @TransactionGUID uniqueidentifier
	DECLARE @Now datetime
	DECLARE @infinite datetime
	EXEC [dbo].[sp_GetContextInfo] @UserID OUTPUT, @Username OUTPUT, @TransactionGUID OUTPUT
	SET @Now = GETDATE()
	SET @infinite = '9999-12-31'

	DECLARE @Action varchar(1)
	SET @Action = 'D'

	-- Defining if it's an UPDATE (U), INSERT (I), or DELETE (D)
	IF (SELECT COUNT(*) FROM inserted) > 0 BEGIN
		IF (SELECT COUNT(*) FROM deleted) > 0
			SET @Action = 'U'
		ELSE
			SET @Action = 'I'
	END

	-- Closing the lifetime (AuditEndDate was infinite, now becomes @Now) of the current revisions
	-- for records which were updated or deleted
	IF (@Action = 'D' OR @Action = 'U') BEGIN
		UPDATE aud
		SET [AuditEndDate] = @Now,
			[AuditEndUserID] = @UserID,
			[AuditEndUsername] = @Username,
			[AuditEndTransactionGUID] = @TransactionGUID,
			[AuditEndOperation] = @Action
		FROM [audit].[Audit_dboProducts] aud
		INNER JOIN deleted tab
			ON [tab].[ProductID] = [aud].[ProductID]
			AND aud.[AuditEndDate] = @infinite
	END

	-- Creating new revisions for records which were inserted or updated
	IF (@Action = 'I' OR @Action = 'U') BEGIN
		INSERT INTO [audit].[Audit_dboProducts] ([ProductID], [ProductName], [SupplierID], [CategoryID], [QuantityPerUnit], [UnitPrice], [UnitsInStock], [UnitsOnOrder], [ReorderLevel], [Discontinued], [AuditStartDate], [AuditEndDate], [AuditStartOperation], [AuditStartUserID], [AuditStartUsername], [AuditStartTransactionGUID])
		SELECT [inserted].[ProductID], [inserted].[ProductName], [inserted].[SupplierID], [inserted].[CategoryID], [inserted].[QuantityPerUnit], [inserted].[UnitPrice], [inserted].[UnitsInStock], [inserted].[UnitsOnOrder], [inserted].[ReorderLevel], [inserted].[Discontinued],
			@Now, @infinite, @Action, @UserID, @Username, @TransactionGUID
		FROM inserted
	END
END


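With this design, reconstructing a point-in-time snapshot becomes a single range query against the audit table. As a sketch (the `@AsOf` date is illustrative; '9999-12-31' is the "infinite" sentinel used by the trigger):

```
-- Hypothetical "as of" query: the state of every product at a given moment
DECLARE @AsOf datetime
SET @AsOf = '2013-06-01'

SELECT *
FROM [audit].[Audit_dboProducts]
WHERE @AsOf BETWEEN [AuditStartDate] AND [AuditEndDate]
-- Note: since a closed revision ends exactly when the next one starts,
-- a @AsOf that hits that exact instant will match both revisions.
```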
Let’s try another test on the Northwind database:

        static void Main(string[] args)
        {
            // creating product, order and orderitem
            var db = new NorthwindAuditDB("Richard Drizin", 27);
            var product = new Product()
            {
                ProductName = "3/4 inches screw",
                UnitPrice = 9.99m,
                UnitsInStock = 23
            };
            var order = new Order()
            {
                CustomerID = "FRANK", // Customers PK is varchar in Northwind ... yeah I know
                EmployeeID = 1,
                OrderDate = DateTime.Now,
            };
            order.Order_Details.Add(new Order_Detail()
            {
                Product = product,
                UnitPrice = product.UnitPrice.Value,
                Quantity = 3,
            });
            db.Orders.Add(order);
            db.SaveChanges();

            // updating quantity of items
            db = new NorthwindAuditDB("Mickey Mouse", 31);
            var lastOrder = db.Orders.Where(x => x.CustomerID == "FRANK").OrderByDescending(x => x.OrderID).First();
            lastOrder.Order_Details.First().Quantity = 5;
            db.SaveChanges();

            // deleting order and orderitem
            db = new NorthwindAuditDB("Donald Duck", 33);
            var lastOrder2 = db.Orders.Where(x => x.CustomerID == "FRANK").OrderByDescending(x => x.OrderID).First();
            foreach (var detail in lastOrder2.Order_Details.ToList())
                db.Order_Details.Remove(detail);
            db.Orders.Remove(lastOrder2);
            db.SaveChanges();
        }


Product was created once, never modified or deleted. (I’ve hidden null columns so the screenshot could fit the article):

Order was inserted, and later deleted (it’s a single row, but I made it vertical so the screenshot could fit the article):

Order item was inserted, updated, and later deleted. (there are 2 rows, but I made it vertical so the screenshot could fit the article):

If I manually update the table (on SQL Management Studio), it will track the SQL user, Hostname and IP:
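The stored procedure [dbo].[sp_GetContextInfo] isn’t shown above, but based on how it’s used, a minimal sketch could look like the following. The fallback behavior (and the exact formatting of the manual-update identity) is an assumption:

```
-- Hypothetical sketch of [dbo].[sp_GetContextInfo]:
-- reads the application user from #session when present, otherwise falls
-- back to the SQL login / hostname / IP (e.g. ad-hoc updates in Management Studio)
CREATE PROCEDURE [dbo].[sp_GetContextInfo]
	@UserID int OUTPUT,
	@Username varchar(128) OUTPUT,
	@TransactionGUID uniqueidentifier OUTPUT
AS
BEGIN
	SET NOCOUNT ON
	IF OBJECT_ID('tempdb..#session') IS NOT NULL
		SELECT TOP 1 @UserID = [UserID], @Username = [Username], @TransactionGUID = [TransactionGUID]
		FROM #session
	ELSE
	BEGIN
		-- No application context: record the SQL login, hostname and client IP instead
		SET @UserID = 0
		SET @Username = SUSER_SNAME() + ' (' + HOST_NAME() + ' '
			+ ISNULL(CAST(CONNECTIONPROPERTY('client_net_address') AS varchar(48)), '?') + ')'
		SET @TransactionGUID = NEWID()
	END
END
```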

Last comments and some advanced techniques:

  • You can create the AUDIT tables on a separate filegroup (e.g. to manage growth). However, that also gives you the freedom to restore the audit tables individually, which I think is more dangerous than helpful.
  • I created the AUDIT tables under a different schema, so the triggers must run “WITH EXECUTE AS” some user who has permissions on that schema. The regular database user for my application cannot access the auditing tables.
  • Instead of using “infinite” you could use NULL. I prefer infinite so that my queries can use BETWEEN instead of checking for NULLs or using COALESCE.
  • You cannot use text, ntext, or image columns in the ‘inserted’ and ‘deleted’ pseudo-tables. You can circumvent that by reading the data from the real table, since the trigger runs after the insert/update happens: just join ‘inserted’ with the real table, and refer to those columns on the real table.
  • For updates I’m not checking whether something really changed. I don’t need to, because Entity Framework only sends updates when something really changed. If you need to check for modifications (with a small performance penalty), you can join the ‘inserted’ table with the ‘deleted’ table and only insert a new revision when some column was modified.
  • For updates you could also track only the changed columns, keeping NULLs in every column that a given UPDATE didn’t modify; but then, for a nullable column, you couldn’t tell a real NULL from a “not modified”. Even for non-nullable columns I still don’t think it’s worth it: I prefer a simple and homogeneous design, where the audit tables reflect the exact same state as my transactional tables. If I need a human-readable log of what changed, that’s the responsibility of another algorithm, not of the table.
  • I’m tracking modifications on every column. Again, I prefer a simple and homogeneous solution over saving some disk space.
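The workarounds described in the bullets above can be sketched together: reading columns from the base table instead of ‘inserted’, and skipping rows where nothing actually changed. This is a hypothetical fragment for the Products trigger; the compared columns are illustrative:

```
-- Hypothetical trigger fragment: read large (text/ntext/image) columns from the
-- base table p instead of 'inserted', and only pick rows where something changed
SELECT p.[ProductID], p.[ProductName] -- large columns would be read from p, not i
FROM inserted i
INNER JOIN [dbo].[Products] p
	ON p.[ProductID] = i.[ProductID]
WHERE EXISTS (
	-- EXCEPT handles NULLs correctly, unlike a chain of <> comparisons
	SELECT i.[ProductName], i.[UnitPrice], i.[UnitsInStock]
	EXCEPT
	SELECT d.[ProductName], d.[UnitPrice], d.[UnitsInStock]
	FROM deleted d
	WHERE d.[ProductID] = i.[ProductID]
)
```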