Monthly Archives: December 2014

What's new in C# 6.0

  • Exception Filters : Exception filters are a CLR capability that is exposed in Visual Basic and F#, but hasn't been in C# – until now. An exception filter lets us attach a conditional clause to a catch block: if the parenthesized expression after 'if' evaluates to true, the catch block runs; otherwise the exception keeps propagating. In other words, we can now write a catch block that handles an exception of a specific type only when the condition written in its filter clause is true. First a code snippet of an exception filter, then we will learn more about it.
        static void Main(string[] args)
        {
            try
            {
                throw new ArgumentNullException("Arg1");
                //throw new ArgumentNullException("Arg2");
            }
            catch (ArgumentNullException ex) if (ex.Message.Contains("Arg1"))
            {
                Console.WriteLine("Error Message : " + ex);
            }
            catch (ArgumentNullException ex) if (ex.Message.Contains("Arg2"))
            {
                Console.WriteLine("Error Message : " + ex);
            }
        }
  • Await in catch and finally blocks : Using await inside catch and finally blocks has been disallowed until now; with C# 6.0 we can.
        public async void Process()
        {
            ILogger logger = new Logger();
            try
            {
     
            }
            catch (ArgumentNullException ex)
            {
                Console.WriteLine("Error Message : " + ex);
                await logger.LogError(ex);
            }
            catch (Exception ex)
            {
                Console.WriteLine("Error Message : " + ex);
                await logger.LogError(ex);
            }
            finally
            {
                await logger.Log("Method Execution Process() Completed");    
            }
        }

    The Logger implementation looks something like this

    public interface ILogger
     {
         Task<bool> LogError(Exception ex);
     
         Task<bool> Log(string message);
     }
     public class Logger : ILogger
     {
         public async Task<bool> LogError(Exception ex)
         {
             await Task.Run(() => {
                 // Log to Windows Log or do custom logging
             });
             return true;
         }
     
         public async Task<bool> Log(string message)
         {
             await Task.Run(() => {
                 // Log to Windows Log or do custom logging
             });
             return true;
         }
     }
  • Auto-property initializers: These are similar to initializers on fields. Initializing a property has been a repetitive task that could not be done on the same line as the declaration, the way it can for a field: a property could be initialized only in the constructor, whereas a field can be initialized on the same line where it is declared.
     public class Employee
     {
         private string _firstName = "Default Name";
         public string FirstName { get; set; }
         public Employee()
         {
             FirstName = _firstName;
         }
     
     }

    The new feature in C# 6.0 defines auto-property initializers, allowing properties to be initialized like fields. The following code snippet shows auto-property initializers:

    public class Employee
     {
         private static string _lastName = "Gote";
         public string FirstName { get; set; } = "Aamol";
     
         public string LastName { get; set; } = "Gote";
     
         //Getter-only auto-property
         public string Email { get; } = _lastName;
     }
  • Expression-bodied function members: These allow methods, properties, operators, and other kinds of function members to have bodies that are expressions instead of statement blocks, just like lambda expressions, reducing lines of code and giving a clearer view of the expression. In C# 6.0, instead of writing a whole property body with getters/setters, you can just use the lambda arrow ("=>") to return a value. You can also write expressions for methods/functions which return a value to the caller, as in the example below.
    public string GetFullName(string firstName, string lastname, string middleInitial) => string.Concat(firstName, " ", lastname, " ", middleInitial);
    

    The same can be applied to properties as well

    public string StreetAddress => "1 Microsoft Way Redmond";
    
  • Import Features : using static is a new kind of using clause that lets you import static members of types directly into scope.
    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Text;
    using System.Threading.Tasks;
    using static System.Console;
     
    namespace WhatNewInCharp6
    {
        class Program
        {
            static void Main(string[] args)
            {
                WriteLine("Inside Main");
            }
        }
    }
  • Null-Conditional Operator: The NullReferenceException is a nightmare for any developer. Almost every object must be checked against null before we call one of its members. C# 6.0 introduces the null-conditional operator, which lets developers check for null within an object reference chain. The null-conditional operator (?.) returns null if anything in the object reference chain is null. This avoids a separate null check for each nested object in the chain.
    For E.g. Consider below classes

    public class Person
    {
        public Address Address { get; set; }
    }
     
    public class Address
    {
        public Address HomeAddress { get; set; }
        public Address OfficeAddress { get; set; }
        public string StreetAddress { get; set; }
        public string City { get; set; }
        public string State { get; set; }
        public string Zip { get; set; }
    }

    I need to access the Home Address of the Person object passed in and then print it; with the older approach I had to do all the nested null checks shown below

    public void PrintHomeAddress(Person person)
    {
        if (person != null && person.Address != null && person.Address.HomeAddress != null)
        {
            Console.WriteLine(string.Concat(person.Address.HomeAddress.StreetAddress, Environment.NewLine, person.Address.HomeAddress.State, person.Address.HomeAddress.Zip));
        }
    }

    But with the C# 6.0 null-conditional operator the same can be achieved as shown below

    public void PrintHomeAddressNewWay(Person person)
    {
        if (person?.Address?.HomeAddress != null)
        {
            Console.WriteLine(string.Concat(person.Address.HomeAddress.StreetAddress, Environment.NewLine, person.Address.HomeAddress.State, person.Address.HomeAddress.Zip));
        }
    }

    Instead of checking each individual object, using ?. we can check the entire chain of references together; whenever there is a null value, it returns null.
    The "?." operator basically says: if the object to the left is not null, fetch what is to the right; otherwise return null and halt the access chain.
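The null-conditional operator also pairs naturally with the null-coalescing operator (??) to supply a default when any link in the chain is null. A minimal sketch, using trimmed-down versions of the Person and Address classes above (the "Unknown" fallback and the AddressHelper name are made up for illustration):

```csharp
using System;

public class Person { public Address Address { get; set; } }
public class Address { public string City { get; set; } }

public static class AddressHelper
{
    public static string CityOrDefault(Person person)
    {
        // If person or person.Address is null, the whole chain yields null
        // and ?? substitutes the fallback string.
        return person?.Address?.City ?? "Unknown";
    }
}
```

Without ?., retrieving the city safely would take two explicit null checks.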

  • nameof Expressions: nameof expressions are a new form of string literal that require more syntax and are more restricted than the existing kinds. Oftentimes you need to provide a string that names some program element: when throwing an ArgumentNullException you want to name the guilty argument; when raising a PropertyChanged event you want to name the property that changed; and so on. Using ordinary string literals for this purpose is simple, but error prone: you may spell the name wrong, or a refactoring may leave it stale. nameof expressions are essentially a fancy kind of string literal where the compiler checks that something of the given name exists, and Visual Studio knows what it refers to, so navigation and refactoring will work.
    Earlier, for an argument-null exception, you would do something like shown below. But what if the parameter name person changes? Then you have to change the exception message as well.

    public void PrintOfficeAddress(Person person)
     {
        if (person == null) throw new ArgumentNullException("person is null");
     }

    With C# 6.0 you use the nameof operator, which will automatically pick up the new name in case the code gets refactored or the parameter name changes

    public void PrintOfficeAddressNewWay(Person person)
    {
        if (person == null) throw new ArgumentNullException(nameof(person) + " is null");
    }
    
  • String Interpolation: We regularly use string.Format, string.Concat, or the "+" operator for all kinds of string manipulation, as shown below
    public void PrintPersonName(Person person)
    {
        Console.WriteLine(string.Format("Full Name: {0} {1} {2}", person.FirstName, person.LastName, person.MiddleInitial));
    }

    String interpolation lets you format strings more easily. String.Format and its cousins are very versatile, but their use is somewhat clunky and error prone. Particularly unfortunate is the use of numbered placeholders like {0} in the format string, which must line up with separately supplied arguments.

    With C# 6.0 the above can be achieved more easily, as shown below

    public void PrintPersonNameNewWay(Person person)
    {
        Console.WriteLine("Full Name: \{person.FirstName} \{person.LastName} \{person.MiddleInitial}");
    }
  • Dictionary Initializer – With the new index initializer syntax, you can initialize a dictionary in the following manner
    Dictionary<string, Person> persons = new Dictionary<string, Person>()
     {
         ["EMPID1"] = new Person() { FirstName = "Aamol", LastName = "Gote" },
         ["EMPID2"] = new Person() { FirstName = "John", LastName = "Doe" },
         ["EMPID3"] = new Person() { FirstName = "Mary", LastName = "Lamb" }
     };

    If the key is not a string but some object, the syntax would be something like below

    Dictionary<Identifier, Person> personsWithIdentifiers = new Dictionary<Identifier, Person>()
    {
        [new Identifier() { Id = Guid.NewGuid(), SSN = "111-222-2345" }] = new Person() { FirstName = "Aamol", LastName = "Gote" },
        [new Identifier() { Id = Guid.NewGuid(), SSN = "345-222-2345" }] = new Person() { FirstName = "John", LastName = "Doe" },
        [new Identifier() { Id = Guid.NewGuid(), SSN = "999-222-2345" }] = new Person() { FirstName = "Mary", LastName = "Lamb" }
    };

The following features have not made it into the VS2015 Preview

  • Primary constructors in C# (along with initializers in structs)
  • Declaration expressions in C# / Out parameters in VB

Source code can be downloaded from here

Repository + Unit Of Work Pattern Demystified

The repository and unit of work patterns are intended to create an abstraction layer between the data access layer and the business logic layer of an application. Implementing these patterns can help insulate your application from changes in the data store and can facilitate automated unit testing or test-driven development (TDD).

Creating a repository class for each entity type could result in a lot of redundant code, and it could result in partial updates. For example, suppose you have to update two different entity types as part of the same transaction. If each uses a separate database context instance, one might succeed and the other might fail. One way to minimize redundant code is to use a generic repository, and one way to ensure that all repositories use the same database context (and thus coordinate all updates) is to use a unit of work class.

A repository is nothing but a class defined for an entity, with all the operations possible on that specific entity. For example, a repository for a Customer entity will have the basic CRUD operations and any other operations related to it. The Repository pattern can be implemented in the following ways:

  • One repository per entity (non-generic) : This type of implementation involves the use of one repository class for each entity. For example, if you have two entities Order and Customer, each entity will have its own repository.
  • Generic repository : A generic repository is one that can be used for all the entities; in other words, it can be used for Order, Customer, or any other entity.

Unit of Work in the Repository Pattern

Unit of Work refers to a single transaction that involves multiple operations such as inserts, updates, and deletes. To say it in simple words, it means that for a specific user action, all the operations (insert/update/delete and so on) are done in one single database transaction, rather than in multiple separate transactions.

The Repository pattern without the Unit of Work pattern presents a lot of challenges and anti-patterns:

  • Each repository requires one interface and one concrete class per “provider” (“entity framework” or “in memory”). If you are starting to have quite some repositories, then interfaces and implementation grows. If you have “custom” specialized methods for each repository, it also means implementing them for all the providers.
  • One abstract type per repository equals one parameter in your controller constructor per repository that the controller needs to access. If a controller performs operations on multiple repositories, this can quickly become a mess with a lot of parameters in the constructor (even if the constructor will never be called explicitly by your code but by the DI container via constructor injection, it still looks sloppy). Moreover, if you need to add access to a new repository in a controller, it means adding a new parameter to the constructor and a new binding in your DI container.
  • By injecting repositories individually in each controller, the true power of the unit of work pattern is completely bypassed.
    Indeed, through this basic design, to illustrate via EF Code First, a DbContext (EF UnitOfWork) is usually instantiated per repository and “at best” a Commit method is present in each repository to call SaveChanges on the DbContext (and apply all modifications to DB) once all operations have been done on the repository. At worse the call to SaveChanges on the DbContext is done in each repository method performing modifications in the repository. You are using a nice loose coupled design, playing nicely with dependency injection and unit testing but you are shooting yourself a bullet in the head by not correctly using the UnitOfWork pattern.The main problem with this incorrect use of the unit of work pattern is that for a specific user request, triggering call to action method and potentially accessing multiple repositories, doing work on them, you are creating multiple unit of works whereas a single unit of work should be used !The definition of the Unit Of Work pattern is rather clear : “Maintains a list of objects affected by a business transaction and coordinates the writing out of changes and the resolution of concurrency problems.” (http://martinfowler.com/eaaCatalog/unitOfWork.html).The business transaction here is typically triggered by the end user, indirectly calling an action method. It starts when the end-user triggers the operation, and ends when the operation is completed, whatever the number of repositories accessed and the number of CRUD operations performed on them. 
This means that a single unit of work should be used in the context of the operation/transaction (the request) and not many different ones.Typically, to use Entity Framework as a provider example, following this bad design would result in calling SaveChanges multiple times, meaning multiple round trips to DB through multiple transactions which is typically not the behavior wanted (and absolutely not the Unit Of Work philosophy).Apart from the performance aspect, it also leads to a problem when an error/exception happens in the middle of an operation in an action method. If you already made some changes in some repositories and commited the changes, but the global operation is not complete (potentially other repositories should have been updated as well but have not been), it will leave your persisted data part of the operation in an incoherent state (I wish you good luck to rollback each changes). Whereas if you only use a single UnitOfWork for the operation, if it fails before completing (before reaching the end of the action method), then no data is updated at all part of the operation, your data store stays clean (and it also does a single round trip to the DB, in a single transaction for changes done accross all repositories).
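The partial-update problem can be made concrete with a small in-memory sketch. The Store, SelfCommittingRepo, and SimpleUnitOfWork types below are hypothetical stand-ins, not EF types: two self-committing repositories can leave the store half-updated when a failure occurs between their commits, while a single unit of work either flushes everything or nothing.

```csharp
using System;
using System.Collections.Generic;

// Hypothetical stand-in for the underlying data store.
public class Store { public List<string> Rows = new List<string>(); }

// Anti-pattern: each repository owns its own commit (its own "SaveChanges").
public class SelfCommittingRepo
{
    private readonly Store _store;
    private readonly List<string> _pending = new List<string>();
    public SelfCommittingRepo(Store store) { _store = store; }
    public void Add(string row) { _pending.Add(row); }
    public void Commit() { _store.Rows.AddRange(_pending); _pending.Clear(); }
}

// Unit of work: changes from the whole operation are flushed together, once.
public class SimpleUnitOfWork
{
    private readonly Store _store;
    private readonly List<string> _pending = new List<string>();
    public SimpleUnitOfWork(Store store) { _store = store; }
    public void Register(string row) { _pending.Add(row); }
    public void Commit() { _store.Rows.AddRange(_pending); _pending.Clear(); }
}
```

If an exception is thrown after the first repository's Commit but before the second's, the store is left with half the operation's rows; with the unit of work, a failure before the single Commit leaves the store untouched.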

Design and Approach

Define a generic repository interface, containing very basic atomic operations

public interface IRepository<T> where T : class
{
    IQueryable<T> AsQueryable();
 
    IEnumerable<T> GetAll();
    IEnumerable<T> Find(Expression<Func<T, bool>> predicate);
    T Single(Expression<Func<T, bool>> predicate);
    T SingleOrDefault(Expression<Func<T, bool>> predicate);
    T First(Expression<Func<T, bool>> predicate);
    T GetById(int id);
 
    void Add(T entity);
    void Delete(T entity);
    void Attach(T entity);
}

Define a unit of work interface, containing all the generic repositories that are part of the unit of work, along with a single Commit() method used to persist all changes made through the repositories to the underlying data store

public interface IUnitOfWork
{
    IRepository<Organization> OrganizationRepository { get; }
    IRepository<Employee> EmployeeRepository { get; }
    
    void Commit();
}

Employee and Organization are pure POCO classes, typical Entity Framework entities

Add a class implementing the generic repository interface, which just delegates all calls to the associated Entity Framework DbSet

public class EntityFrameworkRepository<T> : IRepository<T>
                                   where T : class
 {
     private readonly DbSet<T> _dbSet;
 
     public EntityFrameworkRepository(DbSet<T> dbSet)
     {
         _dbSet = dbSet;
     }
 
        #region IRepository<T> implementation
 
     public virtual IQueryable<T> AsQueryable()
     {
         return _dbSet.AsQueryable();
     }
 
     public IEnumerable<T> GetAll()
     {
         return _dbSet;
     }
 
     public IEnumerable<T> Find(Expression<Func<T, bool>> predicate)
     {
         return _dbSet.Where(predicate);
     }
 
     public T Single(Expression<Func<T, bool>> predicate)
     {
         //TODO: To Be Implemented
         throw new NotImplementedException();
     }
 
     public T SingleOrDefault(Expression<Func<T, bool>> predicate)
     {
         //TODO: To Be Implemented
         throw new NotImplementedException();
     }
 
     public T First(Expression<Func<T, bool>> predicate)
     {
         //TODO: To Be Implemented
         throw new NotImplementedException();
     }
 
     public T GetById(int id)
     {
         //TODO: To Be Implemented
         throw new NotImplementedException();
     }
 
     public void Add(T entity)
     {
         //TODO: To Be Implemented
         throw new NotImplementedException();
     }
 
     public void Delete(T entity)
     {
         //TODO: To Be Implemented
         throw new NotImplementedException();
     }
 
     public void Attach(T entity)
     {
         //TODO: To Be Implemented
         throw new NotImplementedException();
     }
        #endregion
 }

Add a class implementing IUnitOfWork that also inherits from DbContext (the EF Code First unit of work).
It contains DbSets, which can be seen as repositories from the EF point of view (in fact, in EF Code First you can substitute in your mind the words "Unit of Work" with "DbContext" and "Repository" with "DbSet").
The constructor just instantiates all the repositories, passing each one the corresponding DbSet. This could be improved by instantiating repositories only when they are accessed: if your unit of work contains 20 repositories and your controller is only going to use one, that is a lot of useless instantiation.

public class EntityFrameworkUnitOfWork : DbContext, IUnitOfWork
 {
     private readonly EntityFrameworkRepository<Organization> _organizationRepo;
     private readonly EntityFrameworkRepository<Employee> _employeeRepo;
 
     public DbSet<Organization> Organizations { get; set; }
     public DbSet<Employee> Employees { get; set; }
 
     public EntityFrameworkUnitOfWork()
     {
         _organizationRepo = new EntityFrameworkRepository<Organization>(Organizations);
         _employeeRepo = new EntityFrameworkRepository<Employee>(Employees);
     }
 
        #region IUnitOfWork Implementation
 
     public IRepository<Organization> OrganizationRepository
     {
         get { return _organizationRepo; }
     }
 
     public IRepository<Employee> EmployeeRepository
     {
         get { return _employeeRepo; }
     }
 
     public void Commit()
     {
         this.SaveChanges();
     }
 
        #endregion
 }

Now if you need to implement another provider, say "InMemory" (useful for unit testing), all you have to do is create two other classes, InMemoryRepository and InMemoryUnitOfWork.
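A minimal sketch of what the in-memory repository might look like, backed by a List&lt;T&gt;. It is trimmed to a few of the IRepository&lt;T&gt; members and deliberately does not implement the full interface here:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Linq.Expressions;

public class InMemoryRepository<T> where T : class
{
    private readonly List<T> _items = new List<T>();

    public IQueryable<T> AsQueryable() { return _items.AsQueryable(); }
    public IEnumerable<T> GetAll() { return _items; }

    public IEnumerable<T> Find(Expression<Func<T, bool>> predicate)
    {
        // Compile the expression tree so it can run against the in-memory list.
        return _items.Where(predicate.Compile());
    }

    public void Add(T entity) { _items.Add(entity); }
    public void Delete(T entity) { _items.Remove(entity); }
}
```

An InMemoryUnitOfWork would then expose these repositories and make Commit a no-op (or a snapshot point), since there is no external store to flush to.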

Let’s say your HomeController needs to access multiple repositories.
With the old design, your HomeController constructor signature would have looked like this

public HomeController(IEmployeeRepository empRepo, IOrganizationRepository orgRepo)
        { 
            
        }

And of course, a new repository to be used by HomeController means a new parameter + a new private field + a new binding in the DI container.

With the improved design, the signature now looks much cleaner

public HomeController(IUnitOfWork unitOfWork)
        { 
        
        }

If we add new repositories that we want to use in the controller, we don’t have to change anything in the controller class, nor any bindings in the DI container; we can just use the new repository directly from the controller through the unit of work.
No more thinking about which repositories the controller should have access to and customizing the constructor accordingly: all controller constructors that need to access repositories just take a single unitOfWork parameter.

On the DI container side, all that is needed is to bind the abstract IUnitOfWork to the desired provider implementation (EF, InMemory, other …). You’ll also want to make sure the dependency is created in a "per request" scope (assuming your DI container allows this), meaning that a single unit of work is instantiated per request, not each time the dependency is required.

Let’s say you need, in multiple places in your client code, to run a complex query on a specific repository, say for employees. With the old design it’s relatively straightforward: you would just define a method on IEmployeeRepository and implement it for all the concrete providers (what a pain!).
However, with the improved design we can’t add this method to the generic IRepository<T>.

Extension methods can help in this scenario
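For example, a specialized employee query can be written once as an extension method over the abstract repository, so every provider gets it for free. This is a sketch: the trimmed IRepository&lt;T&gt;, Employee, and ListRepository&lt;T&gt; types below are stand-ins for the ones in the article, and the YearsOfService property is made up for illustration.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Trimmed stand-ins for the article's types (assumptions for this sketch).
public interface IRepository<T> where T : class { IQueryable<T> AsQueryable(); }

public class Employee
{
    public string FirstName { get; set; }
    public int YearsOfService { get; set; }
}

// A tiny in-memory IRepository<T> so the extension can be exercised.
public class ListRepository<T> : IRepository<T> where T : class
{
    private readonly List<T> _items;
    public ListRepository(List<T> items) { _items = items; }
    public IQueryable<T> AsQueryable() { return _items.AsQueryable(); }
}

public static class EmployeeRepositoryExtensions
{
    // The specialized query is written once against the abstract repository,
    // so every provider (EF, in-memory, ...) gets it without re-implementation.
    public static IQueryable<Employee> WithMinimumService(
        this IRepository<Employee> repository, int minYears)
    {
        return repository.AsQueryable().Where(e => e.YearsOfService >= minYears);
    }
}
```

The extension only needs AsQueryable(), so the concrete provider never has to know the query exists.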

O Notation with C# (CSharp) – My Take

Big O notation is used in Computer Science to describe the performance or complexity of an algorithm. Big O specifically describes the worst-case scenario, and can be used to describe the execution time required or the space used (e.g. in memory or on disk) by an algorithm. It is a measure of the complexity of your program in terms of the size of the input, which is usually denoted as ‘n’. There are usually 2 ways to measure the complexity:

  • Time: The number of calculations your program must perform on the input data to get the output. For example, finding the biggest number in a list of n numbers requires you to check every number against your current maximum and see if the new number is higher. Thus for n numbers you need n checks, or calculations. This has linear time complexity because there is a 1-to-1 relationship between the input size and the number of calculations needed to obtain the output. Linear time complexity is denoted as O(n).
  • Space: The amount of storage space needed by the program to run its calculations. Using the above example of finding the biggest number in a list of n numbers: we need one variable to store the current maximum value. If we have 10 numbers, we need 1 variable; if we have a million numbers, we still need 1 variable. The number of variables we need is constant with respect to the input size, so the program has constant space complexity, denoted as O(1).
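The two bullets above can be sketched in one method. A minimal example (the Max helper below is illustrative, not from the original post): one comparison per element gives O(n) time, and a single tracking variable gives O(1) space.

```csharp
using System;

public static class BigODemo
{
    // O(n) time: one comparison per element.
    // O(1) space: a single currentMax variable, regardless of n.
    public static int Max(int[] numbers)
    {
        if (numbers == null || numbers.Length == 0)
            throw new ArgumentException("numbers must be non-empty");

        int currentMax = numbers[0];
        foreach (int n in numbers)
        {
            if (n > currentMax)
                currentMax = n;
        }
        return currentMax;
    }
}
```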

O(1)

O(1) describes an algorithm that will always execute in the same time (or space) regardless of the size of the input data set.

private static bool IsFirstElementNullOrEmpty(List<string> elements)
{
    if (elements == null)
        throw new ArgumentNullException("elements");
 
    if (elements.Count > 0)
    {
        if (string.IsNullOrEmpty(elements[0]))
        {
            return true;
        }
    }
    return false;
}

O(n)

O(N) describes an algorithm whose performance will grow linearly and in direct proportion to the size of the input data set. The example below also demonstrates how Big O favours the worst-case performance scenario; a matching string could be found during any iteration of the for loop and the function would return early, but Big O notation will always assume the upper limit where the algorithm will perform the maximum number of iterations.

private static bool ContainsValue(List<string> elements, string elementToBeFound)
{
    if (elements == null)
        throw new ArgumentNullException("elements");
 
    if (elements.Count > 0)
    {
        for (int count = 0; count < elements.Count; count++)
        {
            if (elements[count].Equals(elementToBeFound))
            {
                return true;
            }
        }
    }
    return false;
}

O(N²) (N squared)

O(N²) represents an algorithm whose performance is directly proportional to the square of the size of the input data set. This is common with algorithms that involve nested iterations over the data set. Deeper nested iterations will result in O(N³), O(N⁴), etc.

private static bool ContainsDuplicate(List<string> elements)
{
    if (elements == null)
        throw new ArgumentNullException("elements");
 
    if (elements.Count > 0)
    {
        for (int count = 0; count < elements.Count; count++)
        {
            for (int innerCount = 0; innerCount < elements.Count; innerCount++)
            {
                if (count == innerCount)
                {
                    continue;
                }
                if (elements[count].Equals(elements[innerCount]))
                {
                    return true;
                }
            }
        }
    }
    return false;
}

The above program can be made more efficient at finding duplicates, as shown below

private static bool ContainsDuplicateEfficient(List<string> elements)
{
    if (elements == null)
        throw new ArgumentNullException("elements");
 
    if (elements.Count > 0)
    {
        for (int count = 0; count < elements.Count; count++)
        {
            for (int innerCount = count + 1; innerCount < elements.Count; innerCount++)
            {
                if (elements[count].Equals(elements[innerCount]))
                {
                    return true;
                }
            }
        }
    }
    return false;
}

Another classic example of the same complexity, O(N²), is bubble sort.
The algorithm works by comparing each item in the list with the item next to it, and swapping them if required; after one full pass, the largest element has bubbled to the top of the array. The algorithm repeats this process until it makes a pass all the way through the list without swapping any items. The worst-case runtime complexity is O(n²).

/// <summary>
/// Bubble sort
/// </summary>
/// <param name="scrambledArray">Array to be sorted in place.</param>
/// <returns>The sorted array.</returns>
private static int[] BubbleSort(int[] scrambledArray)
{
    for (int count = scrambledArray.Length - 1; count >= 0; count--)
    {
        for (int innercount = 1; innercount <= count; innercount++)
        {
            if (scrambledArray[innercount - 1] > scrambledArray[innercount])
            {
                int temp = scrambledArray[innercount - 1];
                scrambledArray[innercount - 1] = scrambledArray[innercount];
                scrambledArray[innercount] = temp;
            }
        }
    }
    return scrambledArray;
}

The worst-case runtime complexity is O(n²). See the explanation in the figure below.

[Figure: bubbleSort]

O(log n)

An algorithm is said to take logarithmic time if T(n) = O(log n). Algorithms taking logarithmic time are commonly found in operations on binary trees or when using binary search. An O(log n) algorithm is considered highly efficient, because the work required grows very slowly as the input grows. The most common attributes of a logarithmic running-time function are that:

  • the choice of the next element on which to perform some action is one of several possibilities, and
  • only one will need to be chosen.

or

  • the elements on which the action is performed are digits of n

This is why, for example, looking up people in a phone book is O(log n). You don’t need to check every person in the phone book to find the right one; instead, you can simply divide and conquer, and you only need to explore a tiny fraction of the entire space before you eventually find someone’s phone number. Of course, a bigger phone book will still take you longer, but the time won’t grow as quickly as the proportional increase in size.
What does it mean to say that the height of a complete binary tree is O(log n)? The following drawing depicts a binary tree. Notice how each level contains double the number of nodes compared to the level above (hence binary):
[Figure: BinaryTree]

Binary search is an example with complexity O(log n). Let’s say that the nodes in the bottom level of the tree in the figure above represent items in some sorted collection. Binary search is a divide-and-conquer algorithm, and the drawing shows how we will need (at most) 4 comparisons to find the record we are searching for in this 16-item dataset.

Assume we had instead a dataset with 32 elements. Continuing the drawing above, we would now need 5 comparisons to find what we are searching for, as the tree has grown only one level deeper even though the amount of data doubled. As a result, the complexity of the algorithm can be described as logarithmic.

Plotting log(n) on a plain piece of paper results in a graph where the rise of the curve decelerates as n increases:

[Figure: BinaryTreeGraph]
Algorithms based on binary trees are often O(log n). This is because a perfectly balanced binary search tree has log n layers, and searching for any element requires traversing a single node on each layer.

The binary search algorithm is another example of an O(log n) algorithm. In a binary search, one searches an ordered array by beginning in the middle of the remaining space to be searched and deciding whether to continue in the top or the bottom half. You can divide the array in half only log n times before you reach a single element, which is the element being searched for, assuming it is in the array.

The .NET BinarySearch methods on List<T> and arrays implement exactly this search over sorted collections. With O(log n) complexity, binary search is far more efficient than a linear scan.

Binary search is one of the most basic yet very useful algorithms. It can operate on sorted arrays or ranges of values. Some people consider it a divide-and-conquer algorithm, others don’t, but it does not really matter. The goal of binary search is to find a specified value (or its index) within a sorted array or range of values; it can also be used to search for an unknown value which must meet certain known conditions. It defines start, end, and middle points for the sorted array or range being searched. On each iteration, depending on how the value at the middle point compares to the value we are searching for, it redefines the start or end point and subsequently the middle point. This way the size of the array or range being searched is effectively halved every iteration, until we either find what we are looking for or the exit condition is met first (we have found nothing). The exit condition is start point > end point.

/// <summary>
/// Find Value Position Using Binary Search.
/// Binary search can only be performed on sorted elements.
/// Assumption: no duplicate elements.
/// </summary>
/// <param name="numberElements">This parameter could be an array as well,
/// but for simplicity I have just taken a list. Note that List&lt;T&gt; already has
/// a built-in binary search, e.g. numberElements.BinarySearch(55);</param>
/// <param name="valueToBeSearched">The value to locate.</param>
/// <returns>The index of the value, or -1 if it is not present.</returns>
private static int FindValuePositionUsingBinarySearch(List<int> numberElements, int valueToBeSearched)
{
    if (numberElements == null)
        throw new ArgumentNullException("numberElements");

    if (numberElements.Count == 0 || numberElements[0] > valueToBeSearched)
        return -1;

    if (valueToBeSearched > numberElements[numberElements.Count - 1])
        return -1;

    int upperBound = numberElements.Count; // exclusive
    int lowerBound = 0;                    // inclusive

    while (lowerBound < upperBound)
    {
        // Written this way to avoid integer overflow on very large collections.
        int mid = lowerBound + (upperBound - lowerBound) / 2;
        if (numberElements[mid] < valueToBeSearched)
        {
            // mid has already been inspected, so exclude it; lowerBound = mid
            // would loop forever once the range shrinks to two elements.
            lowerBound = mid + 1;
        }
        else if (numberElements[mid] > valueToBeSearched)
        {
            upperBound = mid;
        }
        else
        {
            return mid;
        }
    }
    return -1;
}
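A quick usage sketch; the helper below is a compact restatement of the method above so the example compiles on its own:

```csharp
using System;
using System.Collections.Generic;

class BinarySearchUsage
{
    // Compact restatement of FindValuePositionUsingBinarySearch above.
    public static int Find(List<int> items, int value)
    {
        int lo = 0, hi = items.Count; // lo inclusive, hi exclusive
        while (lo < hi)
        {
            int mid = lo + (hi - lo) / 2;
            if (items[mid] < value) lo = mid + 1;
            else if (items[mid] > value) hi = mid;
            else return mid;
        }
        return -1; // not present
    }

    static void Main()
    {
        List<int> sorted = new List<int> { 5, 13, 21, 34, 55, 89 };
        Console.WriteLine(Find(sorted, 55)); // 4
        Console.WriteLine(Find(sorted, 7));  // -1
    }
}
```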

O(n log n)

Often, good sorting algorithms are roughly O(n log n). An example of an algorithm with this efficiency is merge sort, which breaks an array into two halves, sorts those halves by recursively calling itself on them, and then merges the results back into a single array. Because it splits the array in half each time, the recursion has log n levels, and at each "level" of the split (when the array is in two halves, then in quarters, and so forth) it must merge together all n elements, an operation that is O(n).
The complexity can be pictured as a recursion tree with log n levels, each level doing O(n) merge work:

[Figure: merge sort complexity]

/// <summary>
/// Merge Sort
/// </summary>
/// <param name="inputItems">Array to be Sorted</param>
/// <param name="lowerBound">Lower Bound</param>
/// <param name="upperBound">Upper Bound</param>
/// <returns>Sorted Array</returns>
public static int[] MergeSort(int[] inputItems, int lowerBound, int upperBound)
{
    if (lowerBound < upperBound)
    {
        int middle = (lowerBound + upperBound) / 2;
 
        MergeSort(inputItems, lowerBound, middle);
        MergeSort(inputItems, middle + 1, upperBound);
 
        //Merge
        int[] leftArray = new int[middle - lowerBound + 1];
        int[] rightArray = new int[upperBound - middle];
 
        Array.Copy(inputItems, lowerBound, leftArray, 0, middle - lowerBound + 1);
        Array.Copy(inputItems, middle + 1, rightArray, 0, upperBound - middle);
 
        int i = 0;
        int j = 0;
        for (int count = lowerBound; count < upperBound + 1; count++)
        {
            if (i == leftArray.Length)
            {
                inputItems[count] = rightArray[j];
                j++;
            }
            else if (j == rightArray.Length)
            {
                inputItems[count] = leftArray[i];
                i++;
            }
            else if (leftArray[i] <= rightArray[j])
            {
                inputItems[count] = leftArray[i];
                i++;
            }
            else
            {
                inputItems[count] = rightArray[j];
                j++;
            }
        }
    }
    return inputItems;
}
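A short, self-contained usage sketch; the method restates the MergeSort above so it compiles on its own (the input values are arbitrary sample data):

```csharp
using System;

class MergeSortDemo
{
    // Restatement of the MergeSort method above.
    public static int[] MergeSort(int[] a, int lo, int hi)
    {
        if (lo < hi)
        {
            int mid = (lo + hi) / 2;
            MergeSort(a, lo, mid);
            MergeSort(a, mid + 1, hi);

            // Merge the two sorted halves back into a.
            int[] left = new int[mid - lo + 1];
            int[] right = new int[hi - mid];
            Array.Copy(a, lo, left, 0, left.Length);
            Array.Copy(a, mid + 1, right, 0, right.Length);

            int i = 0, j = 0;
            for (int k = lo; k <= hi; k++)
            {
                if (i == left.Length) a[k] = right[j++];
                else if (j == right.Length) a[k] = left[i++];
                else if (left[i] <= right[j]) a[k] = left[i++];
                else a[k] = right[j++];
            }
        }
        return a;
    }

    static void Main()
    {
        int[] items = { 38, 27, 43, 3, 9, 82, 10 };
        MergeSort(items, 0, items.Length - 1);
        Console.WriteLine(string.Join(", ", items)); // 3, 9, 10, 27, 38, 43, 82
    }
}
```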

Attached is the source code here.

LRU Cache Implementation – Algorithm – Data Structure.

Least Recently Used (LRU) is a family of caching algorithms that discards the least recently used items first. The algorithm requires keeping track of when each item was used, which is expensive if one wants to guarantee that the least recently used item is always the one discarded. An LRU cache is a container that ensures its maximum capacity is never exceeded, discarding elements with a least-recently-used strategy. It keeps track of the order in which the cache elements are accessed so that it knows which ones to discard when the container is full.

public interface ICacheRepository
{
    bool Add(string key, object cacheItem);
    object GetCacheItem(string key);
    bool Remove(string key);
}
 
public class LRUCacheNode
{
    public string Key { get; set; }
    public object Value { get; set; }
    public LRUCacheNode Next { get; set; }
    public LRUCacheNode Previous { get; set; }
}
 
public class LRUCacheRepository : ICacheRepository
{
    public LRUCacheRepository(int numberOfCacheItems)
    {
        this.capacity = numberOfCacheItems;
        this.cacheMap = new ConcurrentDictionary<string, LRUCacheNode>();
    }
    private ConcurrentDictionary<string, LRUCacheNode> cacheMap;
    private int capacity { get; set; }
    private LRUCacheNode head;
    private LRUCacheNode tail;
 
    public bool Add(string key, object cacheItem)
    {
        LRUCacheNode cacheNode = null;
        if (cacheMap.TryGetValue(key, out cacheNode))
        {
            return false;
        }
        cacheNode = new LRUCacheNode()
        {
            Key = key,
            Value = cacheItem
        };
        if (head == null)
        {
            head = cacheNode;
            tail = cacheNode;
        }
        else
        {
            if (cacheMap.Count >= capacity)
            {
                this.RemoveTailNode();
            }
            cacheNode.Next = head;
            // Guard: with a capacity of 1 the eviction above empties the list.
            if (head != null)
                head.Previous = cacheNode;
            head = cacheNode;
            if (tail == null)
                tail = cacheNode;
        }
        return cacheMap.TryAdd(key, cacheNode);
    }
 
    public object GetCacheItem(string key)
    {
        LRUCacheNode cacheNode;
        if (this.cacheMap.TryGetValue(key, out cacheNode))
        {
            // Move the accessed node to the head of the list; if it is
            // already the head, relinking would make it point at itself.
            if (cacheNode != head)
            {
                if (cacheNode == tail)
                    tail = cacheNode.Previous;

                if (cacheNode.Previous != null)
                    cacheNode.Previous.Next = cacheNode.Next;

                if (cacheNode.Next != null)
                    cacheNode.Next.Previous = cacheNode.Previous;

                cacheNode.Previous = null;
                cacheNode.Next = head;
                head.Previous = cacheNode;
                head = cacheNode;
            }
            return cacheNode.Value;
        }
        return null;
    }
 
    public bool Remove(string key)
    {
        LRUCacheNode cacheNode;
        if (this.cacheMap.TryGetValue(key, out cacheNode))
        {
            if (cacheNode.Previous != null)
                cacheNode.Previous.Next = cacheNode.Next;
            else
                head = cacheNode.Next;
 
            if (cacheNode.Next != null)
                cacheNode.Next.Previous = cacheNode.Previous;
            else
                tail = cacheNode.Previous;
 
            cacheNode.Next = null;
            cacheNode.Previous = null;
            return this.cacheMap.TryRemove(key, out cacheNode);
        }
        return false;
    }
 
    private void RemoveTailNode()
    {
        LRUCacheNode cacheNode;
        if (this.cacheMap.TryRemove(tail.Key, out cacheNode))
        {
            tail = tail.Previous;
            if (tail != null)
                tail.Next = null;
            else
                head = null; // the cache held a single item
        }
    }
 
    public void DisplayCacheMap()
    {
        StringBuilder sb = new StringBuilder();
        LRUCacheNode cacheNode = head;
        while (cacheNode != null)
        {
            sb.Append(cacheNode.Key);
            cacheNode = cacheNode.Next;
            if (cacheNode != null)
                sb.Append("==>");
        }
        Console.WriteLine(sb.ToString());
    }
}

Below is a code snippet for adding items to and verifying the cache repository.

class Program
{
    static void Main(string[] args)
    {
        LRUCacheRepository cacheRepository = new LRUCacheRepository(5);
        cacheRepository.Add("A", "A");
        cacheRepository.DisplayCacheMap();
        cacheRepository.Add("B", "B");
        cacheRepository.DisplayCacheMap();
        cacheRepository.Add("C", "C");
        cacheRepository.DisplayCacheMap();
        cacheRepository.Add("D", "D");
        cacheRepository.DisplayCacheMap();
        cacheRepository.Add("E", "E");
        cacheRepository.DisplayCacheMap();
        cacheRepository.Add("F", "F");
        cacheRepository.DisplayCacheMap();
        cacheRepository.Add("G", "G");
        cacheRepository.DisplayCacheMap();
        cacheRepository.Add("H", "H");
        cacheRepository.DisplayCacheMap();
        object obj = cacheRepository.GetCacheItem("E");
        cacheRepository.DisplayCacheMap();
        obj = cacheRepository.GetCacheItem("F");
        cacheRepository.DisplayCacheMap();
        bool result = cacheRepository.Remove("G");
        cacheRepository.DisplayCacheMap();
        cacheRepository.Add("A", "A");
        cacheRepository.DisplayCacheMap();
        cacheRepository.Add("B", "B");
        cacheRepository.DisplayCacheMap();
        cacheRepository.Add("C", "C");
        Console.ReadLine();
    }
}

Attached is the Source Code

Async and Await Sample

If you specify that a method is an async method by using an Async or async modifier, you enable the following two capabilities. The marked async method can use Await or await to designate suspension points. The await operator tells the compiler that the async method can’t continue past that point until the awaited asynchronous process is complete. In the meantime, control returns to the caller of the async method. The suspension of an async method at an await expression doesn’t constitute an exit from the method, and finally blocks don’t run. The marked async method can itself be awaited by methods that call it.
An async method typically contains one or more occurrences of an await operator, but the absence of await expressions doesn't cause a compiler error. If an async method doesn't use an await operator to mark a suspension point, the method executes as a synchronous method does, despite the async modifier; the compiler issues a warning for such methods. The async and await keywords in C# are intended to keep long-running I/O operations from blocking the calling (often UI) thread.

public class Customer
{
    public string FirstName { get; set; }

    public string LastName { get; set; }
    public async void GetCustomer()
    {
        Customer cust = await this.GetCustomerFromService();
        Console.WriteLine(cust.FirstName);
        Console.WriteLine(cust.LastName);
    }
 
    private Task<Customer> GetCustomerFromService()
    {
        Task<Customer> taskGetCustomer = new Task<Customer>(() =>
        {
            Console.WriteLine("Calling Customer Service");
            Customer cust = new Customer()
            {
                FirstName = "Aamol",
                LastName = "Gote"
            };
            System.Threading.Thread.Sleep(10000);
            Console.WriteLine("Customer Service returned Customer");
            return cust;
        });
        taskGetCustomer.Start();
        return taskGetCustomer;
    }
}
 
class Program
{
    static void Main(string[] args)
    {
        Console.WriteLine("Main UI Thread");
        Customer cust = new Customer();
        cust.GetCustomer();
        Console.WriteLine("Main UI Thread - Post Get Customer Call");
        Console.ReadLine();
    }
}
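One caveat about the sample above: GetCustomer is async void, so the caller can neither await it nor observe its exceptions, and new Task(...) followed by Start() is rarely needed. Below is a minimal sketch of the generally preferred shape, using async Task&lt;T&gt; and Task.Run (the names and data here are illustrative, not from the original post):

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

class CustomerDemo
{
    // async Task<T> (rather than async void) lets callers await completion
    // and observe any exception thrown inside the method.
    public static async Task<string> GetCustomerNameAsync()
    {
        // Task.Run is the idiomatic replacement for new Task(...) + Start().
        return await Task.Run(() =>
        {
            Thread.Sleep(100); // simulate a slow service call
            return "Aamol Gote";
        });
    }

    static void Main()
    {
        Console.WriteLine("Main Thread");
        // Blocking with .Result is acceptable only in a console demo like
        // this; real UI code should await instead.
        string name = GetCustomerNameAsync().Result;
        Console.WriteLine(name); // Aamol Gote
    }
}
```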

Factory and Abstract Factory

  • A factory method is a single method; an abstract factory is an object.
  • The Factory Method pattern uses inheritance and relies on a subclass to handle the desired object instantiation. In the Abstract Factory pattern, a class delegates the responsibility of object instantiation to another object via composition.
  • Factory Method is used to create one product only, whereas Abstract Factory is about creating families of related or dependent products.
  • The Factory Method pattern exposes a method to the client for creating the object, whereas an Abstract Factory exposes a family of related objects, which may itself consist of factory methods.
  • The Factory Method pattern hides the construction of a single object, whereas Abstract Factory hides the construction of a family of related objects. Abstract factories are usually implemented using (a set of) factory methods.
  • The Abstract Factory pattern uses composition to delegate the responsibility of creating objects to another class, while the Factory Method pattern uses inheritance and relies on a derived class or subclass to create the object.
  • The idea behind the Factory Method pattern is that it allows for the case where a client doesn't know what concrete classes it will be required to create at runtime, but just wants a class that will do the job. The Abstract Factory pattern is best utilized when your system has to create multiple families of products, or when you want to provide a library of products without exposing the implementation details.

Factory Pattern Example

public interface IVehicle
 {
     void Start();
     void Drive();
     void Stop();
 }
 
 public class Car : IVehicle
 {
 
     public void Start()
     {
         Console.WriteLine("Car Started");
     }
 
     public void Drive()
     {
         Console.WriteLine("Car Driving");
     }
 
     public void Stop()
     {
         Console.WriteLine("Car Stopped");
     }
 }
 
 public class Suv : IVehicle
 {
 
     public void Start()
     {
         Console.WriteLine("Suv Started");
     }
 
     public void Drive()
     {
         Console.WriteLine("Suv Driving");
     }
 
     public void Stop()
     {
         Console.WriteLine("Suv Stopped");
     }
 }
 
 public enum VehicleType
 { 
     Car,
     Suv
 }
 public class VehicleFactory
 {
     public static IVehicle GetVehicle(VehicleType vehicleType)
     {
         IVehicle vehicle;
         switch (vehicleType)
         {
             case VehicleType.Car:
                 vehicle = new Car();
                 break;
             case VehicleType.Suv:
                 vehicle = new Suv();
                 break;
             default:
                 return null;
         }
         return vehicle;
     }
 }

The client consuming code looks like this:

class Program
{
    static void Main(string[] args)
    {
        IVehicle vehicle = VehicleFactory.GetVehicle(VehicleType.Car);
        vehicle.Start();
        vehicle.Drive();
        vehicle.Stop();
        Console.ReadLine();
    }
}

Source Code can be downloaded from Here

Abstract Factory Example

public class Car : IVehicle
{
 
    public virtual void Start()
    {
        Console.WriteLine("Car Started");
    }
 
    public virtual void Drive()
    {
        Console.WriteLine("Car Driving");
    }
 
    public virtual void Stop()
    {
        Console.WriteLine("Car Stopped");
    }
}
 
public class HondaAccord : Car
{
    public override void Start()
    {
        Console.WriteLine("Honda Accord Started");
    }
 
    public override void Drive()
    {
        Console.WriteLine("Honda Accord Driving");
    }
 
    public override void Stop()
    {
        Console.WriteLine("Honda Accord Stopped");
    }
}
 
public class HondaCrv : Suv
{
    public override void Start()
    {
        Console.WriteLine("Honda Crv Started");
    }
 
    public override void Drive()
    {
        Console.WriteLine("Honda Crv Driving");
    }
 
    public override void Stop()
    {
        Console.WriteLine("Honda Crv Stopped");
    }
}
 
public class ToyotaCorolla : Car
{
    public override void Start()
    {
        Console.WriteLine("Toyota Corolla Started");
    }
 
    public override void Drive()
    {
        Console.WriteLine("Toyota Corolla Driving");
    }
 
    public override void Stop()
    {
        Console.WriteLine("Toyota Corolla Stopped");
    }
}
 
public class ToyotaRav4 : Suv
{
    public override void Start()
    {
        Console.WriteLine("Toyota Rav4 Started");
    }
 
    public override void Drive()
    {
        Console.WriteLine("Toyota Rav4 Driving");
    }
 
    public override void Stop()
    {
        Console.WriteLine("Toyota Rav4 Stopped");
    }
}
 
public class Suv : IVehicle
{
 
    public virtual void Start()
    {
        Console.WriteLine("Suv Started");
    }
 
    public virtual void Drive()
    {
        Console.WriteLine("Suv Driving");
    }
 
    public virtual void Stop()
    {
        Console.WriteLine("Suv Stopped");
    }
}
public abstract class VehicleFactory
{
    public abstract IVehicle GetVehicle(VehicleType vehicleType);
 
    public static VehicleFactory GetFactory(MakeType make)
    {
        VehicleFactory vehicleFactory;
        switch (make)
        {
            case MakeType.Honda:
                vehicleFactory = new HondaFactory();
                break;
            case MakeType.Toyota:
                vehicleFactory = new ToyotaFactory();
                break;
            default:
                return null;
        }
        return vehicleFactory;
    }
    
}
 
public class ToyotaFactory : VehicleFactory
{
    public override IVehicle GetVehicle(VehicleType vehicleType)
    {
        IVehicle vehicle;
        switch (vehicleType)
        {
            case VehicleType.Car:
                vehicle = new ToyotaCorolla();
                break;
            case VehicleType.Suv:
                vehicle = new ToyotaRav4();
                break;
            default:
                return null;
        }
        return vehicle;
    }
}
 
public class HondaFactory : VehicleFactory
{
    public override IVehicle GetVehicle(VehicleType vehicleType)
    {
        IVehicle vehicle;
        switch (vehicleType)
        {
            case VehicleType.Car:
                vehicle = new HondaAccord();
                break;
            case VehicleType.Suv:
                vehicle = new HondaCrv();
                break;
            default:
                return null;
        }
        return vehicle;
    }
}
 
public enum VehicleType
{
    Car,
    Suv
}
 
public enum MakeType
{
    Toyota,
    Honda
}

The client consuming code looks like this:

class Program
{
    static void Main(string[] args)
    {
        VehicleFactory factory = VehicleFactory.GetFactory(MakeType.Honda);
        IVehicle vehicle = factory.GetVehicle(VehicleType.Car);
        vehicle.Start();
        vehicle.Drive();
        vehicle.Stop();
        Console.ReadLine();
    }
}

Source Can be downloaded from Here

Strategy Pattern with example

The Strategy pattern defines a family of algorithms, encapsulates each one of them, and makes them interchangeable.

  • Family of algorithms – the pattern defines a family of algorithms; that is, we have functionality (the algorithms) that does the same conceptual job for our object, but in different ways.
  • Encapsulate each one of them – the pattern forces you to place your algorithms in different classes (to encapsulate them). Doing so helps us select the appropriate algorithm for our object.
  • Make them interchangeable – the beauty of the Strategy pattern is that we can select at run time which algorithm to apply to our object, and we can replace one strategy with another.

For example, suppose we need to develop a simple shipping cost calculation service where the calculation depends on the carrier: FedEx, UPS, DHL or USPS.

public class Address
{
    public string ContactName { get; set; }
    public string AddressLine1 { get; set; }
    public string AddressLine2 { get; set; }
    public string AddressLine3 { get; set; }
    public string City { get; set; }
    public string Region { get; set; }
    public string Country { get; set; }
    public string PostalCode { get; set; }
}
 
 
public enum ShippingOptions
{
    UPS = 10,
    FedEx = 20,
    USPS = 30,
    DHL = 40
}
 
public class Order
{
    public ShippingOptions ShippingMethod { get; set; }
    public Address Destination { get; set; }
    public Address Origin { get; set; }
}
 
public class ShippingCostCalculatorService
{
    public double CalculateShippingCost(Order order)
    {
        switch (order.ShippingMethod)
        {
            case ShippingOptions.FedEx:
                return CalculateForFedEx(order);
            case ShippingOptions.UPS:
                return CalculateForUPS(order);
            case ShippingOptions.USPS:
                return CalculateForUSPS(order);
            case ShippingOptions.DHL:
                return CalculateForDHL(order);
            default:
                throw new Exception("Unknown carrier");
 
        }
    }
 
    double CalculateForDHL(Order order)
    {
        return 4.00d;
    }
 
    double CalculateForUSPS(Order order)
    {
        return 3.00d;
    }
 
    double CalculateForUPS(Order order)
    {
        return 4.25d;
    }
 
    double CalculateForFedEx(Order order)
    {
        return 5.00d;
    }
}

It is perfectly reasonable that we may introduce a new carrier in the future, say XYZ. If we pass an order with this shipping method to the CalculateShippingCost method, we'll get an exception, and we'd have to manually extend the switch statement to account for the new shipment type. For every new carrier we'd have to come back to this domain service and modify it. That breaks the Open/Closed Principle of SOLID: a class should be open for extension but closed for modification. In addition, if there's a change in the implementation of one of the calculation algorithms, then again we'd have to come back to this method and modify it. That's generally not good practice: if you make a change to one of your classes, you should not have to go and modify other classes and public methods just to accommodate that change.

The methods that calculate the costs are of course ridiculously simple in this demo. In reality there may well be calls to other services, the weight of the package may be checked, and so on, so the ShippingCostCalculatorService class could grow very large and difficult to maintain. The calculator class becomes bloated with logic belonging to UPS, FedEx, DHL etc., violating the Single Responsibility Principle: the service class is trying to take care of too much.

The solution is to create a class for each calculation; we can call each implemented calculation a strategy. Each class will need to implement the same interface. If you look at the calculation methods in the service class, the following interface will fit our needs:

public interface IShippingStrategy
{
    double Calculate(Order order);
}

Next we will implement strategies

public class USPSShippingStrategy : IShippingStrategy
{
    public double Calculate(Order order)
    {
        return 3.00d;
    }
}
 
public class UpsShippingStrategy : IShippingStrategy
{
    public double Calculate(Order order)
    {
        return 4.25d;
    }
}
 
 
public class FedexShippingStrategy : IShippingStrategy
{
    public double Calculate(Order order)
    {
        return 5.00d;
    }
}
 
public class DHLShippingStrategy : IShippingStrategy
{
    public double Calculate(Order order)
    {
        return 4.00d;
    }
}

The cost calculation service is now ready to accept the strategy from the outside. The new and improved service looks as follows:

public class ShippingCostCalculatorServiceWithStrategy
{
    private readonly IShippingStrategy _shippingStrategy;
 
    public ShippingCostCalculatorServiceWithStrategy(IShippingStrategy shippingStrategy)
    {
        _shippingStrategy = shippingStrategy;
    }
 
    public double CalculateShippingCost(Order order)
    {
        return _shippingStrategy.Calculate(order);
    }
}

You can now implement IShippingStrategy as new carriers come into the picture, and the calculation service continues to function without knowing anything about the concrete strategy classes. The concrete strategy classes are self-contained: they can be tested individually and they can be mocked.
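Putting it together, the consuming code picks a strategy at run time and injects it. The sketch below restates the relevant types from above in compact form so it compiles on its own:

```csharp
using System;

// Compact restatement of the types defined above.
public class Order { }

public interface IShippingStrategy
{
    double Calculate(Order order);
}

public class UpsShippingStrategy : IShippingStrategy
{
    public double Calculate(Order order) { return 4.25d; }
}

public class ShippingCostCalculatorServiceWithStrategy
{
    private readonly IShippingStrategy _shippingStrategy;

    public ShippingCostCalculatorServiceWithStrategy(IShippingStrategy shippingStrategy)
    {
        _shippingStrategy = shippingStrategy;
    }

    public double CalculateShippingCost(Order order)
    {
        return _shippingStrategy.Calculate(order);
    }
}

class Program
{
    static void Main()
    {
        // The strategy is chosen at run time and injected; the service
        // never needs to know about the concrete carrier class.
        var service = new ShippingCostCalculatorServiceWithStrategy(new UpsShippingStrategy());
        Console.WriteLine(service.CalculateShippingCost(new Order())); // 4.25
    }
}
```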

Attached is the source code over here