Entity Framework Core 3.1 Bug vs 2.2, Speed and Memory During Streaming

Full source code available here.

A while ago I wrote a blog post about the advantages of streaming results from Entity Framework Core, as opposed to materializing them inside a controller and then returning them. I saw very significant improvements in memory consumption when streaming.

For this post I updated the code to use Entity Framework Core 3.1.8 on .NET Core 3.1 and was very surprised to see that streaming data did not work as before. Memory usage was high and response times were much worse than I expected.

I tried Entity Framework Core 2.2.6 on .NET Core 3.1, and it behaved as I expected: fast responses and low memory usage.

Below is how I carried out my comparison.

The Setup
A simple Web API application using .NET Core 3.1.

A local db seeded with 10,000 rows of data using AutoFixture.
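
A minimal sketch of how that seeding might look with AutoFixture (the seeder class name and its exact shape are assumptions, not the code from the repo) –

using System.Linq;
using AutoFixture;

public static class DataSeeder // hypothetical helper class
{
    public static void Seed(SalesContext salesContext)
    {
        if (salesContext.Products.Any())
        {
            return; // already seeded
        }

        var fixture = new Fixture();
        // build 10,000 products, letting the database assign the ProductId key
        var products = fixture.Build<Product>()
            .Without(p => p.ProductId)
            .CreateMany(10000);

        salesContext.Products.AddRange(products);
        salesContext.SaveChanges();
    }
}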

An API controller with a single action method that returns data from the database. Tracking of Entity Framework entities is turned off.

[Route("api/[controller]")]
[ApiController]
public class ProductsController : ControllerBase
{
    private readonly SalesContext _salesContext;

    public ProductsController(SalesContext salesContext)
    {
        _salesContext = salesContext;
    }

    [HttpGet("streaming/{count}")]
    public ActionResult GetStreamingFromService(int count)
    {
        IQueryable<Product> products = _salesContext.Products.OrderBy(o => o.ProductId).Take(count).AsNoTracking();
        return Ok(products);
    }
}

It’s a very simple application.

The Test
I “warm up” the application by making a few requests.

Then I use Fiddler to hit the products endpoint 20 times, each request returns 8,000 products.

I do the same for both Entity Framework Core 3.1.8 and Entity Framework Core 2.2.6.

The Results
Entity Framework Core 2.2.6 outperforms Entity Framework Core 3.1.8 in both response speed and memory consumption.

Here is the comparison of how quickly they both load the full set of results.

Time to First Byte (TTFB) –

  • EF 2.2.6 returns data before we even reach 10ms in all cases and before 2ms in most cases.
  • EF 3.1.8 is never faster than 80ms and six requests take more than 100ms.

Overall Elapsed –

  • EF 2.2.6 returns most requests in the 26ms to 39ms range, with only two taking more than 40ms.
  • EF 3.1.8 returns four requests in less than 100ms. The rest are in the 100ms to 115ms range.

The speed difference between EF Core 2.2.6 and EF Core 3.1.8 is significant, to say the least.

Memory Usage –

Here is the graph of memory usage during the test.

  • EF Core 2.2.6 maintains low memory usage.
  • EF Core 3.1.8 consumes significantly more memory to do the same job.

Remember, both runs use the same application code; only the version of Entity Framework differs.

I also performed tests with other versions of Entity Framework Core 3.1.* and Entity Framework Core 2.*, and saw very similar results. It seems something in the 3.1 stack works very differently than in the 2.* stack.

As always, you can try it for yourself with the attached zip.

Full source code available here.

Getting Started with ElasticSearch, Part 2 – Searching with a HttpClient

Full source code available here.

In the previous blog post I showed how to set up ElasticSearch, create an index and seed the index with some sample documents. That is not a lot of use without the ability to search it.

In this post I will show how to use a typed HttpClient to perform searches. I’ve chosen not to use the two libraries provided by the Elasticsearch company because I want to stick with JSON requests that I can write and test with any tool like Fiddler, Postman or Rest Client for Visual Studio Code.

If you haven’t worked with HttpClientFactory you can check out my posts on it or the Microsoft docs page.

The Typed HttpClient
A typed HttpClient lets you, in a sense, hide away that you are using a HttpClient at all. The methods the typed client exposes are more business related than technical – the type of request, the body of the request, and how the response is handled are all hidden away from the consumer. Using a typed client feels like using any other class with exposed methods.

This typed client will expose three methods: one to search by company name, one to search by customer name and state, and one to return all results in a paged manner.

Start with an interface that specifies the methods to expose –

public interface ISearchService
{
    Task<string> CompanyName(string companyName);
    Task<string> NameAndState(string name, string state);
    Task<string> All(int skip, int take, string orderBy, string direction);
}

Create the class that implements that interface and takes a HttpClient as a constructor parameter –

public class SearchService : ISearchService
{
    private readonly HttpClient _httpClient;
    public SearchService(HttpClient httpClient)
    {
        _httpClient = httpClient;
    }
    //snip...

Implement the search functionality (and yes, I don’t like the amount of escaping I need to send a simple request with a JSON body) –

public async Task<string> CompanyName(string companyName)
{
    string query = @"{{
                        ""query"": {{
                            ""match_phrase_prefix"": {{ ""companyName"" : ""{0}"" }} 
                        }}
                    }}";
    string queryWithParams = string.Format(query, companyName);
    return await SendRequest(queryWithParams);
}

public async Task<string> NameAndState(string name, string state)
{
    string query = @"{{
                        ""query"": {{
                            ""bool"": {{
                                ""must"": [
                                    {{""match_phrase_prefix"": {{ ""fullName"" : ""{0}"" }} }}
                                    ,{{""match"": {{ ""address.state"" : ""{1}""}}}} 
                                ]
                            }}
                        }}
                    }}";
    string queryWithParams = string.Format(query, name, state);
    return await SendRequest(queryWithParams);
}

public async Task<string> All(int skip, int take, string orderBy, string direction)
{
    string query = @"{{
                    ""sort"":{{""{2}"": {{""order"":""{3}""}}}},
                    ""from"": {0},
                    ""size"": {1}
                    }}";

    string queryWithParams = string.Format(query, skip, take, orderBy, direction);
    return await SendRequest(queryWithParams);
}

And finally send the requests to the ElasticSearch server –

private async Task<string> SendRequest(string queryWithParams)
{
    var request = new HttpRequestMessage()
    {
        Method = HttpMethod.Get,
        Content = new StringContent(queryWithParams, Encoding.UTF8, "application/json")
    };
    var response = await _httpClient.SendAsync(request);
    var content = await response.Content.ReadAsStringAsync();
    return content;
}

The Setup
That’s the typed client taken care of, but it has to be registered with the HttpClientFactory; that is done in Startup.cs.

In the ConfigureServices(..) method add this –

services.AddHttpClient<ISearchService, SearchService>(client =>
{
    client.BaseAddress = new Uri("http://localhost:9200/customers/_search");
});

I am running ElasticSearch on localhost:9200.

That’s all there is to registering the HttpClient with the factory. Now all that is left is to use the typed client in the controller.

Searching
The typed client is passed to the controller via constructor injection –

[ApiController]
[Route("[controller]")]
public class SearchController : ControllerBase
{
    private readonly ISearchService _searchService;
    public SearchController(ISearchService searchService)
    {
        _searchService = searchService;
    }
    //snip...

Add some action methods to respond to API requests and call the methods on the typed client.

[HttpGet("company/{companyName}")]
public async Task<ActionResult> GetCompany(string companyName)
{
    var result = await _searchService.CompanyName(companyName);
    return Ok(result);
}

[HttpGet("nameAndState/")]
public async Task<ActionResult> GetNameAndState(string name, string state)
{
    var result = await _searchService.NameAndState(name, state);
    return Ok(result);
}

[HttpGet("all/")]
public async Task<ActionResult> GetAll(int skip = 0, int take = 10, string orderBy = "dateOfBirth", string direction = "asc")
{
    var result = await _searchService.All(skip, take, orderBy, direction);
    return Ok(result);
}
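
If you are using the REST Client extension for Visual Studio Code (or Fiddler/Postman), requests like these will exercise the three endpoints – the host and port and the sample search values are assumptions –

GET http://localhost:5000/search/company/Acme HTTP/1.1

GET http://localhost:5000/search/nameAndState/?name=Alice&state=MA HTTP/1.1

GET http://localhost:5000/search/all/?skip=0&take=10&orderBy=dateOfBirth&direction=asc HTTP/1.1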

That’s it. You are up and running with ElasticSearch, a seeded index, and an API to perform searches.

In the next post I’ll show you how to deploy ElasticSearch to AWS with some nice infrastructure as code.

Full source code available here.

DynamoDb, Reading and Writing Data with .Net Core – Part 2

Full source code available here.

A few weeks ago I posted about reading and writing data to DynamoDb. I gave instructions on how to create tables on localstack and how to use the AWS Document Model approach. I also pointed out that I was not a big fan of that approach; reading data looked like this –

[HttpGet("{personId}")]
public async Task<IActionResult> Get(int personId)
{
    Table people = Table.LoadTable(_amazonDynamoDbClient, "People");
    var person = JsonSerializer.Deserialize<Person> ((await people.GetItemAsync(personId)).ToJson());
    return Ok(person);
}

You have to cast to JSON, then deserialize. I think you should be able to do something more like people.GetItemAsync(personId), but you can’t.

And writing data looked like –

[HttpPost]
public async Task<IActionResult> Post(Person person)
{
    Table people = Table.LoadTable(_amazonDynamoDbClient, "People");
    
    var document = new Document();
    document["PersonId"] = person.PersonId;
    document["State"] = person.State;
    document["FirstName"] = person.FirstName;
    document["LastName"] = person.LastName;
    await people.PutItemAsync(document);
    
    return Created("", document.ToJson());
}

For me this feels even worse; having to name the keys in the document by hand is very error prone.

Luckily there is another approach that is a little better. You have to create a class with attributes that indicate what table the class represents and what properties represent the keys in the table.

using Amazon.DynamoDBv2.DataModel;

namespace DynamoDbApiObjectApproach.Models
{
    
    [DynamoDBTable("People")]
    public class PersonDynamoDb
    {

        [DynamoDBHashKey]
        public int PersonId {get;set;}
        public string State {get;set;}
        public string FirstName {get;set;}
        public string LastName {get;set;}
    }
}

Because of these attributes I don’t want to expose this class too broadly, so I create a simple POCO to represent a person.

public class Person
{
    public int PersonId {get;set;}
    public string State {get;set;}
    public string FirstName {get;set;}
    public string LastName {get;set;}
}

I use AutoMapper to map between the two classes, never exposing the PersonDynamoDb to the outside world. If you need help getting started with AutoMapper I wrote a couple of posts recently on this.
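
The profile for that mapping is tiny; here is a minimal sketch (the profile class name is my own) –

using AutoMapper;

public class PersonProfile : Profile // hypothetical profile name
{
    public PersonProfile()
    {
        // the properties match by name, so no member configuration is needed
        CreateMap<Person, PersonDynamoDb>();
        CreateMap<PersonDynamoDb, Person>();
    }
}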

Here’s how reading and writing looks now –

[HttpGet("{personId}")]
public async Task<IActionResult> Get(int personId)
{
    var personDynamoDb = await _dynamoDBContext.LoadAsync<PersonDynamoDb>(personId);
    var person = _mapper.Map<Person>(personDynamoDb);
    return Ok(person);
}

[HttpPost]
public async Task<IActionResult> Post(Person person)
{
    var personDynamoDb = _mapper.Map<PersonDynamoDb>(person);
    await _dynamoDBContext.SaveAsync(personDynamoDb);
    return Created("", person.PersonId);
}
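
For this to work the controller needs an IDynamoDBContext and an IMapper injected into it; a minimal ConfigureServices sketch, assuming the AutoMapper.Extensions.Microsoft.DependencyInjection package is installed, could look like this –

public void ConfigureServices(IServiceCollection services)
{
    services.AddDefaultAWSOptions(Configuration.GetAWSOptions());
    services.AddAWSService<IAmazonDynamoDB>();

    // DynamoDBContext wraps the low-level client and provides LoadAsync/SaveAsync
    services.AddScoped<IDynamoDBContext, DynamoDBContext>();

    services.AddAutoMapper(typeof(Startup)); // scans this assembly for profiles
    services.AddControllers();
}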

This is an improvement, but I’m still not thrilled with the .NET SDK for DynamoDb.

Full source code available here.

DynamoDb, Reading and Writing Data with .Net Core – Part 1

Full source code available here.

A few weeks ago I started playing with DynamoDb in a .NET application. I read through the AWS documentation but felt it was incomplete and a little out of date. This made it quite hard to figure out the “right” way of using the AWS DynamoDb libraries. For example, there is no good example of using dependency injection to pass a DynamoDb client into a controller, and no guidance on whether that client should be request, transient or singleton scoped.

In this and the following posts I’m going to describe approaches that work; they may not be “right” either.
This post will show how to read and write from DynamoDb using the document interface, and how to pass a DynamoDb client into a controller.

Getting Started

Getting started with AWS can feel like a bit of a hurdle (especially due to credential complications), and a good way to get your feet wet is to use localstack – https://github.com/localstack/localstack, consider installing https://github.com/localstack/awscli-local too.

I’m not going to describe how to get localstack running, I’m going to assume you have done that or you have an AWS account and you know how to set the required credentials.

Once you have localstack installed or your AWS account working, run the following to create the DynamoDB table.

aws --endpoint-url=http://localhost:4569 dynamodb create-table --table-name People  --attribute-definitions AttributeName=PersonId,AttributeType=N --key-schema AttributeName=PersonId,KeyType=HASH --provisioned-throughput ReadCapacityUnits=1,WriteCapacityUnits=1

You can add data to the table with the following –

aws --endpoint-url=http://localhost:4569 dynamodb put-item --table-name People  --item '{"PersonId":{"N":"1"},"State":{"S":"MA"}, "FirstName": {"S":"Alice"}, "LastName": {"S":"Andrews"}}'
aws --endpoint-url=http://localhost:4569 dynamodb put-item --table-name People  --item '{"PersonId":{"N":"2"},"State":{"S":"MA"}, "FirstName": {"S":"Ben"}, "LastName": {"S":"Bradley"}}'
aws --endpoint-url=http://localhost:4569 dynamodb put-item --table-name People  --item '{"PersonId":{"N":"3"},"State":{"S":"MA"}, "FirstName": {"S":"Colin"}, "LastName": {"S":"Connor"}}'

To see all the items you just stored –

aws --endpoint-url=http://localhost:4569 dynamodb scan --table-name People

The Web API application

I’m going to show you how to retrieve an item by its id and how to store a new item using the document interface; in the next post I’ll show how to do the same with the object interface.

But first things first, configuration.

Open the appsettings.json file and replace what’s there with –

{
  "AWS": {
    "Profile": "localstack-test-profile",
    "Region": "us-east-1",
    "ServiceURL": "http://localhost:4566"
  }
}

Add the AWSSDK.DynamoDBv2 nuget package to the project.

In Startup.cs add a using Amazon.DynamoDBv2;

In ConfigureServices() add the following –

public void ConfigureServices(IServiceCollection services)
{
    var awsOptions = Configuration.GetAWSOptions();

    services.AddDefaultAWSOptions(awsOptions);
    services.AddAWSService<IAmazonDynamoDB>();
    services.AddControllers();
}

Inside the controller I’m going to use an AmazonDynamoDBClient to execute calls against DynamoDb, but the SDK doesn’t seem to provide a way to inject one directly, so I came up with a workaround.

//snip...
using Amazon.DynamoDBv2;
using Amazon.DynamoDBv2.DocumentModel;
//snip.. 

public class PersonController : ControllerBase
{
    private readonly AmazonDynamoDBClient _amazonDynamoDbClient;
    public PersonController(IAmazonDynamoDB amazonDynamoDB)
    {
        _amazonDynamoDbClient = (AmazonDynamoDBClient) amazonDynamoDB;
    }
    //snip..

The controller declares an AmazonDynamoDBClient field, the constructor takes an IAmazonDynamoDB as a parameter, and that parameter is cast to an AmazonDynamoDBClient and stored in the field.
Again, I don’t know if this is the best approach, but it works fine.
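
The Person class used by the action methods below is just a plain POCO matching the attributes in the table –

public class Person
{
    public int PersonId {get;set;}
    public string State {get;set;}
    public string FirstName {get;set;}
    public string LastName {get;set;}
}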

Here is how to get an item from the people table.

[HttpGet("{personId}")]
public async Task<IActionResult> Get(int personId)
{
    Table people = Table.LoadTable(_amazonDynamoDbClient, "People");
    var person =  JsonSerializer.Deserialize<Person> ((await people.GetItemAsync(personId)).ToJson());
    return Ok(person);
}

This is how you add an item to the People table.

[HttpPost]
public async Task<IActionResult> Post(Person person)
{
    Table people = Table.LoadTable(_amazonDynamoDbClient, "People");
    
    var document = new Document();
    document["PersonId"] = person.PersonId;
    document["State"] = person.State;
    document["FirstName"] = person.FirstName;
    document["LastName"] = person.LastName;
    await people.PutItemAsync(document);
    
    return Created("", document.ToJson());
}

I’m not a fan of all the hardcoded key names for the properties of the person, but this is one way of doing it.
Another approach is to use the object model, and I’ll show that in a later blog post.

To exercise the GET method open your browser and go to http://localhost:5000/people/1.

To exercise the POST you can use Fiddler, Curl, Postman or the very nice REST Client extension for Visual Studio Code.

If you are using that extension, here is the request –

POST http://localhost:5000/person HTTP/1.1
content-type: application/json

{
    "personId": 6,
    "state": "MA",
    "firstName": "Frank",
    "lastName" : "Freeley"
}

Full source code available here.

Streaming Results from Entity Framework Core and Web API Core – Part 2

Full source code available here.

Some time ago I wrote a post showing how to stream results from Entity Framework over Web API. This approach has a few benefits – the results are not materialized in the API code, a small amount of memory is used irrespective of the size of the data returned, the results begin streaming as soon as possible, and the request is faster than doing something like .ToList() or .ToListAsync().

In the example code I accessed the DbContext directly from the API controller; a reader of the blog got in touch to ask if the database access code could be kept out of the API controller. This blog post shows a simple way of achieving that.

In the related post I had a method like this –

[HttpGet("streaming/")]
public IActionResult GetStreaming()
{
    IQueryable<Product> products = _salesContext.Products.AsNoTracking();
    return Ok(products);
}

The _salesContext was passed into the constructor of the ProductsController class using dependency injection and accessed directly.

In this post I’m going to pass a ProductsService into the ProductsController and use it to make the request to the database.

Here is how the controller now looks –

    [Route("api/[controller]")]
    [ApiController]
    public class ProductsController : ControllerBase
    {
        private readonly ProductsService _productsService;

        public ProductsController(ProductsService productsService)
        {
            _productsService = productsService;
        }

        [HttpGet("streamingFromService/{count}")]
        public ActionResult GetStreamingFromService(int count)
        {
            return Ok(_productsService.GetStreaming(count));
        }

        [HttpGet("streamingFromServiceWithProjection/{count}")]
        public ActionResult GetStreamingFromServiceWithProjection(int count)
        {
            return Ok(_productsService.GetStreamingWithProjection(count));
        }

        [HttpGet("nonStreamingFromService/{count}")]
        public async Task<ActionResult> GetNonStreamingFromService(int count)
        {
            return Ok(await _productsService.GetNonStreaming(count));
        }
    }

The ProductsService is very simple; it makes the relevant calls to the database and returns the Products or ProductModels. For good measure I’ve added AutoMapper to project the Product to a ProductModel, separating the underlying data type from the one presented to the caller.

    public class ProductsService
    {
        private readonly SalesContext _salesContext;
        private readonly IMapper _mapper;

        public ProductsService(SalesContext salesContext, IMapper mapper)
        {
            _salesContext = salesContext;
            _mapper = mapper;
        }

        public IQueryable<Product> GetStreaming(int count)
        {
            IQueryable<Product> products = _salesContext.Products.OrderBy(o => o.ProductId).Take(count).AsNoTracking();
            return products;
        }

        public IQueryable<ProductModel> GetStreamingWithProjection(int count)
        {
            var productModels = _mapper.ProjectTo<ProductModel>(_salesContext.Products.OrderBy(o => o.ProductId).Take(count).AsNoTracking());
            return productModels;
        }

        public async Task<IList<Product>> GetNonStreaming(int count)
        {
            IList<Product> products = await _salesContext.Products.OrderBy(o => o.ProductId).Take(count).ToListAsync();
            return products;
        }
    }
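
The ProductModel and the mapping profile behind ProjectTo are not shown above; here is a minimal sketch of what they could look like, along with the registrations the controller and service rely on (the extra ProductModel property and the profile name are assumptions) –

using AutoMapper;

public class ProductModel
{
    public int ProductId { get; set; }
    public string Name { get; set; } // assumed property, the real entity may differ
}

public class ProductProfile : Profile // hypothetical profile name
{
    public ProductProfile()
    {
        // ProjectTo<ProductModel> needs a map from the entity to the model
        CreateMap<Product, ProductModel>();
    }
}

// in ConfigureServices
services.AddAutoMapper(typeof(Startup));
services.AddScoped<ProductsService>();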

Now, some people might be concerned about returning an IQueryable; if you are, see this post by Mark Seemann https://blog.ploeh.dk/2012/03/26/IQueryableTisTightCoupling/ to start digging into the topic.

While writing this I came across what seems like a severe performance bug in Entity Framework Core 3.x; I’m working on another post which will cover this in detail.

Full source code available here.

Fluent Validation in ASP.NET Core 3.1

Full source code available here.

This is an update to a post I wrote in 2017 talking about Fluent Validation in ASP.NET Core.

The example is the same but there have been a few updates. The first is how you set up FluentValidation in Startup.cs, and the second is that you don’t need an ActionFilterAttribute anymore. I have included an example of how to call the action method using Fiddler.

Step 1

Firstly add the package to your Web Api project.
Install-Package FluentValidation.AspNetCore.
If you are going to keep your validators in the Web Api project that is all you need to add.

But if you put the validators in a different project you need to add the FluentValidation package to that project
Install-Package FluentValidation.

Step 2

In startup.cs add –

public void ConfigureServices(IServiceCollection services)
{
     services.AddControllers()
        .AddFluentValidation(fv => fv.RegisterValidatorsFromAssemblyContaining<PersonModelValidator>());
}

Step 3

Add a model and a validator.

public class PersonModel
{
    public string FirstName { get; set; }
    public string LastName { get; set; }
}

public class PersonModelValidator : AbstractValidator<PersonModel>
{
    public PersonModelValidator()
    {
        RuleFor(p => p.FirstName).NotEmpty();
        RuleFor(p => p.LastName).Length(5);
    }
}
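
The controller receiving the request isn’t shown above; a minimal sketch that matches the /person route used below (the action just echoes the model back) –

[ApiController]
[Route("[controller]")]
public class PersonController : ControllerBase
{
    [HttpPost]
    public ActionResult Post(PersonModel personModel)
    {
        // with [ApiController], an invalid model is turned into a 400 response
        // by the framework before this method is reached
        return Ok(personModel);
    }
}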

Example Usage

In Fiddler, Postman or any other tool you POST to localhost:5000/person with the following header –

Content-Type: application/json

And body –

{
    "firstName": "",
    "lastName": "This is too long"
}

The response will look like this –

{
    "type": "https://tools.ietf.org/html/rfc7231#section-6.5.1",
    "title": "One or more validation errors occurred.",
    "status": 400,
    "traceId": "|84df05e2-41e0d4841bb61293.",
    "errors": {
        "LastName": [
            "'Last Name' must be 5 characters in length. You entered 16 characters."
        ],
        "FirstName": [
            "'First Name' must not be empty."
        ]
    }
}

It’s easy to see that there are two errors and what they are.

Full source code available here.

Passing Configuration Options Into Middleware, Services and Controllers in ASP.NET Core 3.1

Full source code here.

I recently hit a problem where I needed to reload configuration settings as they changed; fortunately this is relatively straightforward when using IOptionsMonitor in .NET Core.

Add a few lines to ConfigureServices, pass the configuration options as a scoped service to the controller or into another service and everything works.

Here is all you need in the ConfigureServices method –

public void ConfigureServices(IServiceCollection services)
{
	services.AddOptions();
	services.Configure<WeatherOptions>(Configuration.GetSection("WeatherOptions"));
	services.AddScoped<ISomeService, SomeService>();
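
WeatherOptions itself is just a POCO bound to the matching section of appsettings.json; a minimal sketch (the property names are assumptions) –

public class WeatherOptions
{
    // bound from the "WeatherOptions" section of appsettings.json
    public string Unit { get; set; }       // assumed property
    public int CacheSeconds { get; set; }  // assumed property
}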

Here’s the service that takes the IOptionsMonitor

public class SomeService : ISomeService
{
	private readonly WeatherOptions _weatherOptions;

	public SomeService(IOptionsMonitor<WeatherOptions> weatherOptions)
	{
		_weatherOptions = weatherOptions.CurrentValue;
	}
// snip..

And here is the controller taking both the IOptionsMonitor and the service that also uses the options. This is of course contrived; you almost certainly would not want to take both constructor parameters –

public class WeatherForecastController : ControllerBase
{
	private readonly WeatherOptions _weatherOptions;
	private readonly ISomeService _someService;
	public WeatherForecastController(IOptionsMonitor<WeatherOptions> weatherOptions, ISomeService someService)
	{
		_someService = someService;
		_weatherOptions = weatherOptions.CurrentValue;
	}

But what if you also need to pass those configuration settings into a piece of middleware that executes on each request, and have any change in the configuration values reflected in the execution of the middleware? This is not so easy, because the constructor of a middleware class can only have singletons injected into it; the same applies to the Configure method of Startup.

Fortunately, the Invoke method can have a scoped service injected into it, see here for more.

public class SomeMiddleware
{
	private readonly RequestDelegate _next;

	public SomeMiddleware(RequestDelegate next)
	{
		_next = next;
	}

	public async Task InvokeAsync(HttpContext context, ISomeService someService)
	{
		someService.DoSomething("Middleware - ");
		await _next(context);
	}
}

To use the middleware, add a single line to the Configure method –

public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
{
	app.UseMiddleware<SomeMiddleware>();
	//snip..

That’s it.

Full source code here.

Simmy Chaos Engine for .NET – Part 5, Breaking Your Own Code

Full source code here.

The blog posts I have written so far about Simmy all deal with the scenario where you don’t have control over the source code you want to break; in those cases the Simmy policies are applied to the calling code.

In this post I’m going to show a very simple way of generating faults inside your own code. (For those of you familiar with Polly, you could use a policy registry, but I’m not going to in this example).

I have a Web Api application that has a very contrived RandomValueGenerator to return numbers and strings.

The RandomValueGenerator has a fault policy that throws exceptions and a behavior policy that replaces good data with bad data.

The fault policy looks like this –

InjectOutcomePolicy _faultPolicy = MonkeyPolicy.InjectFault( 
    new Exception("Simmy threw an exception"),
    injectionRate: .5,
    enabled: () => true
);

50% of the time it will throw an exception.

The behavior policy looks like this –

InjectOutcomePolicy<NameAndLength> _corruptDataPolicy = MonkeyPolicy.InjectFault<NameAndLength>(
    new NameAndLength { Name = "xxxxx", Lenght = -100 },
    injectionRate: .5,
    enabled: () => true
);

50% of the time it will change the name to “xxxxx” and the length to -100.

Here’s how to use these policies inside the RandomValueGenerator

public string GetRandomName()
{
    return _faultPolicy.Execute(() => names[_random.Next(0, 5)]);
}

public int GetRandomNumber()
{
    return _faultPolicy.Execute(() => _random.Next(0, 100));
}

public NameAndLength GetNameAndLength()
{
    string name = names[_random.Next(0, 5)];
    NameAndLength nameAndLength = new NameAndLength { Name = name, Lenght = name.Length };
    return _corruptDataPolicy.Execute(() => nameAndLength);
}

As mentioned earlier, a policy registry might be a better way of passing and reusing the Simmy policies and this will be covered in the next post on Simmy.

Full source code here.

Registering Multiple Implementations of an Interface in ASP.NET Core with Autofac

Full source code here.

A few weeks ago I wrote a post about using dependency injection to pick between two implementations of an interface. It was a solution I was not very happy with, because it meant I had to new up the implementations inside a factory, or I had to use the service collection to instantiate all implementations of the interface and then use a piece of code to return the one that was wanted.

As of this writing the service collection does not natively support choosing between two implementations of an interface, but Autofac does, and it can be used in place of the service collection or alongside it.

In this post I’m going to show its use alongside the service collection, with Autofac handling just the dependencies that have multiple implementations and the service collection handling dependencies that have a single implementation. The examples are for ASP.NET Web Api Core 2.x and Core 3; there are small variations in how to use Autofac between those two versions of Core.

The Scenario
I have three controllers, the NegativeValuesController and PositiveValuesController take the same IValuesService and the third controller ProductsController takes an IProductsService.

The NegativeValuesController should get a NegativeValuesService and the PositiveValuesController should get a PositiveValuesService.

I’m going to use Autofac to provide the dependency injection for these controllers.

The ProductsController will use the built in ServiceCollection to fulfill its dependency.
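
The services themselves are deliberately trivial; a minimal sketch of the shapes assumed in this post (the method names are my own) –

using System.Collections.Generic;

public interface IValuesService
{
    IEnumerable<int> GetValues(); // assumed method, the real interface may differ
}

public class NegativeValuesService : IValuesService
{
    public IEnumerable<int> GetValues() => new[] { -1, -2, -3 };
}

public class PositiveValuesService : IValuesService
{
    public IEnumerable<int> GetValues() => new[] { 1, 2, 3 };
}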

Core 2.x
Start by adding the Autofac.Extensions.DependencyInjection package to the application.

Inside Program.cs add .ConfigureServices(services => services.AddAutofac()) to the CreateWebHostBuilder statement.

public static IWebHostBuilder CreateWebHostBuilder(string[] args) =>
	WebHost.CreateDefaultBuilder(args)
	.ConfigureServices(services => services.AddAutofac())
	.UseStartup<Startup>();

In Startup.cs add this to the ConfigureServices(..) method –

services.AddMvc().SetCompatibilityVersion(CompatibilityVersion.Version_2_1)
	 .AddControllersAsServices(); // these two lines have to come before
services.AddScoped<IProductsService, ProductsService>(); // you populate the builder below

var builder = new ContainerBuilder();
builder.Populate(services);

builder.RegisterType<NegativeValuesService>().As<IValuesService>().AsSelf();
builder.RegisterType<PositiveValuesService>().As<IValuesService>().AsSelf();
builder.Register(ctx => new NegativeValuesController(ctx.Resolve<NegativeValuesService>()));
builder.Register(ctx => new PositiveValuesController(ctx.Resolve<PositiveValuesService>()));

AutofacContainer = builder.Build();

return new AutofacServiceProvider(AutofacContainer);

The controllers work in the normal, familiar way –

public NegativeValuesController(IValuesService valuesService)
{
    _valuesService = valuesService;
}
public PositiveValuesController(IValuesService valuesService)
{
    _valuesService = valuesService;
}

That’s it, you can now choose the implementation you want for a controller.

Core 3
If you want to do the same in Core 3, there are a few small differences.

As before, start by adding the Autofac.Extensions.DependencyInjection package to the application.

In Program.cs add .UseServiceProviderFactory(new AutofacServiceProviderFactory()) to the CreateWebHostBuilder statement.

Back in Startup.cs add this method –

public void ConfigureContainer(ContainerBuilder builder)
{
    builder.RegisterType<NegativeValuesService>().As<IValuesService>().AsSelf();
    builder.RegisterType<PositiveValuesService>().As<IValuesService>().AsSelf();

    builder.Register(ctx => new NegativeValuesController(ctx.Resolve<NegativeValuesService>()));
    builder.Register(ctx => new PositiveValuesController(ctx.Resolve<PositiveValuesService>()));
}

Edit the ConfigureServices method to include the following –

public void ConfigureServices(IServiceCollection services)
{
	//snip..
	services.AddScoped<IProductsService, ProductsService>();

	services.AddControllers();
	services.AddMvc().AddControllersAsServices();
}

The usage inside the controllers does not change.

There you go, injecting multiple implementations of an interface in ASP.NET Core 2 and 3.

Full source code here.

Simmy Chaos Engine for .NET – Part 4, Doing Some Real Damage, Dropping a Table

Full source code here.

Up until now the Simmy examples I’ve written have thrown exceptions, changed successes to failures or slowed down a request. None of these did any real damage to your system, and your application would probably have recovered when the next request came along.

But what about dropping a table? Poof, all the data is gone. Now what does your application do?

Doing this is very easy with the Behavior clause. Like the other clauses it takes a probability, an enabled flag, and an action or func that executes any piece of code.

The Scenario
I have a Web API application with a products controller that queries the database for a list of products.

During application start up the database is created and populated in the Configure method of Startup.cs, using the EnsureCreated() method and a custom seed class.
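
A minimal sketch of that startup code (the seed class name is an assumption) –

public void Configure(IApplicationBuilder app)
{
    // requires using Microsoft.Extensions.DependencyInjection;
    using (var scope = app.ApplicationServices.CreateScope())
    {
        var salesContext = scope.ServiceProvider.GetRequiredService<SalesContext>();
        salesContext.Database.EnsureCreated(); // creates the database and schema if missing
        DataSeeder.Seed(salesContext);         // hypothetical seed class
    }
    //snip..
}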

Inside the controller the action method uses the SalesContext to query the database.

Inside the controller’s constructor I have the chaos policy; it is set to drop the Products table. The drop action of the behavior policy will execute 25% of the time the policy is run. If you get an error when you run this saying that the table is absent, it means the chaos policy dropped the table, and you’ll have to restart the application to run through the database creation and seeding again.

The policy looks like this –

public ProductsController(SalesContext salesContext)
{
	_behaviorPolicy = MonkeyPolicy.InjectBehaviourAsync(
		 async () =>
		 {
			 await _salesContext.Database.ExecuteSqlCommandAsync("DROP TABLE Products");
			 Console.WriteLine("Dropped the Products table.");
		 },
		 0.25, // 25% of the time this policy will execute
		async () => await Task.FromResult(true));
	_salesContext = salesContext;
}

The request to the _salesContext is made inside the policy, but the policy executes before the call to the db is made, and in 25% of those calls the table will be dropped.

[HttpGet]
public async Task<ActionResult> Get()
{
	List<Product> products =  await _behaviorPolicy.ExecuteAsync(async () => await _salesContext.Products.Take(10).ToListAsync());

	return Ok(products);
}

You can of course execute any code from inside the policy – wipe a cache, kill a service, delete a file, etc.

This example might be a little excessive in the damage it does; you can decide whether it is reasonable to expect your application to continue working even when the database is unavailable.

Full source code here.