Monthly Archives: September 2014

Azure Web Role vs. Azure Web Sites

With Windows Azure Websites, you don’t have control over IIS or the web server: your site runs on a slice of resources on a machine shared with hundreds of other websites. Because you share those resources with every other tenant, there is no control over IIS.

The big difference between a shared Website and an Azure Web Role is that a Website is considered process bound, while a role is VM bound.

Websites are stored on a content share that is accessible from all the “web servers” in the farm, so no content replication is required.

Web Roles give you several features beyond Web Sites:

  • Ability to run elevated startup scripts to install apps, modify registry settings, install performance counters, fine-tune IIS, etc.
  • Ability to split an app up into tiers (maybe Web Role for front end, Worker Role for backend processing) and scale independently
  • Ability to RDP into your VM for debugging purposes
  • Network isolation
  • Support for Virtual Networks
  • Dedicated virtual IP address, which allows web role instances in a cloud service to access IP-restricted Virtual Machines
  • ACL-restricted endpoints (added in Azure SDK 2.3, April 2014)
  • Support for any TCP/UDP ports (Web Sites are restricted to TCP 80/443)

Web Sites have advantages over Web Roles though:

  • Near-instant deployment with deployment history / rollbacks
  • Deployment support for Visual Studio Online, GitHub, local Git, FTP, CodePlex, Dropbox and Bitbucket
  • Ability to roll out any of numerous CMSs and frameworks (like WordPress, Joomla, Django, MediaWiki, etc.)
  • Use of SQL Database or MySQL
  • Simple and fast to scale from free tier to shared tier to dedicated tier
  • Web Jobs
  • Backups of Web Site content
  • Built-in web-based debugging tools (simple cmd/powershell debug console, process explorer, diagnostic tools like log streaming, etc.)

[Figure: When to Use Azure Web Sites]

[Figure: When to Use Azure Web Roles]

[Figure: Azure Web Role vs. Web Sites]


C# Dynamic Keyword

dynamic is a static type, introduced in C# 4, that acts as a placeholder for a type that is not known until runtime. Once a dynamic object is declared, you can call operations on it, get and set its properties, and even pass the instance around, pretty much as if it were of any normal type.

The dynamic keyword influences compilation. A dynamic variable, parameter or field can have any type. Its type can change during runtime. The downside is that performance suffers and you lose compile-time checking.

dynamic is advanced functionality: it can be useful, but it should usually be avoided, because it erases many of the benefits of the C# language.
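
As a minimal, hedged sketch of what runtime binding means in practice (member calls on a dynamic variable compile regardless of the runtime type and only fail when they are actually dispatched):

using System;

class DynamicDemo
{
    static void Main()
    {
        dynamic value = "hello";

        // Resolved at runtime: string has ToUpper(), so this works.
        Console.WriteLine(value.ToUpper());

        value = 42;                   // the runtime type changes to int
        Console.WriteLine(value + 8); // 50

        // The next line would still compile, but would throw a
        // Microsoft.CSharp.RuntimeBinder.RuntimeBinderException at runtime,
        // because int has no ToUpper() method.
        // Console.WriteLine(value.ToUpper());
    }
}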

Comparison with var
Both dynamic and var can hold a value of any type, but a var variable must be initialized at the time of declaration, and its type is then fixed.

The compiler has no information about the type of a dynamic variable. var, by contrast, is compiler safe: the compiler knows everything about the stored value, so it cannot cause type issues at runtime.

A dynamic type can be passed as a function argument and a function can also return it. var cannot be used for parameters or return types; a var variable only works within the scope where it is defined.

dynamic is useful when coding with reflection, dynamic language support or COM objects, because it lets you write noticeably less code.
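
To make the comparison concrete, here is a small sketch (the Echo method is hypothetical) contrasting var and dynamic:

using System;

class VarVsDynamicDemo
{
    // dynamic is allowed as a parameter and return type...
    static dynamic Echo(dynamic input)
    {
        return input;
    }

    // ...whereas var is only valid for initialized local variables:
    // static var Echo2(var input) { return input; }   // does not compile

    static void Main()
    {
        var name = "sample";              // must be initialized; type fixed to string at compile time
        // name = 10;                     // compile-time error: cannot convert int to string
        Console.WriteLine(name.Length);   // member access checked by the compiler

        dynamic anything = "sample";      // checked only at runtime
        anything = 10;                    // allowed: the runtime type changes to int
        Console.WriteLine(Echo(anything) + 5);   // 15, resolved at runtime
    }
}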

“throw” vs. “throw ex”

Is there a difference between “throw” and “throw ex”?

Yes. throw re-throws the exception that was caught and preserves the stack trace; throw ex throws the same exception but resets the stack trace to that method. Unless you want to reset the stack trace (for example, to shield public callers from the internal workings of your library), throw is generally the better choice, since you can see where the exception originated.

With “throw ex”, the stack trace is truncated: when you look at it, it will appear as if the exception originated in your code. That isn’t always the case, particularly if you are bubbling up a CLR-generated exception (like a SqlException). This problem is known as “breaking the stack”, because you no longer have the full stack trace information. It happens because re-throwing with “throw ex” resets the exception’s stack trace as if it had been thrown fresh from your method.

By using “throw” by itself, you preserve the stack trace information. You can confirm this by looking at the generated IL: “throw ex” compiles to the throw instruction, while a bare “throw” compiles to the rethrow instruction.
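
A small self-contained sketch (the method names are hypothetical) that makes the difference visible by printing the stack trace produced by each form:

using System;

class RethrowDemo
{
    static void Fail()
    {
        throw new InvalidOperationException("something broke");
    }

    static void BubbleWithThrow()
    {
        try { Fail(); }
        catch (Exception)
        {
            // Bare "throw": the original stack trace, pointing at Fail(), is preserved.
            throw;
        }
    }

    static void BubbleWithThrowEx()
    {
        try { Fail(); }
        catch (Exception ex)
        {
            // "throw ex": the same exception object, but the stack trace is reset
            // to this method ("breaking the stack").
            throw ex;
        }
    }

    static void Main()
    {
        try { BubbleWithThrow(); }
        catch (Exception ex) { Console.WriteLine(ex.StackTrace); }   // trace starts at Fail()

        try { BubbleWithThrowEx(); }
        catch (Exception ex) { Console.WriteLine(ex.StackTrace); }   // trace starts at BubbleWithThrowEx()
    }
}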

Before you run off and change all of your code, note that there are still places where “throw ex” is appropriate: there are times when you want to add information to the exception that was caught, or change it into a more meaningful exception.
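
For that repackaging case, here is a hedged sketch (the SettingsLoader and SettingsLoadException names are hypothetical) that passes the original exception as the inner exception:

using System;
using System.IO;

// Hypothetical example of repackaging: callers get a domain-specific exception,
// while the original exception (and its stack trace) is kept as InnerException.
public class SettingsLoadException : Exception
{
    public SettingsLoadException(string message, Exception inner)
        : base(message, inner)
    {
    }
}

public static class SettingsLoader
{
    public static string Load(string path)
    {
        try
        {
            return File.ReadAllText(path);
        }
        catch (IOException ex)
        {
            // Wrap rather than "throw ex": the new exception is more meaningful,
            // and the original one is still available via InnerException.
            throw new SettingsLoadException("Failed to load settings from " + path, ex);
        }
    }
}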

To Summarize

  • Only catch exceptions if they are important to you and you need to do some sort of cleanup as a result.
  • If you need to bubble an exception up the chain, use “throw” by itself.
  • If you need to add information to the exception or repackage it, always pass the original exception as the inner exception.

IEnumerable and IQueryable + Entity Framework

LINQ to SQL and LINQ to Objects queries are not the same.

LINQ to Objects queries operate on IEnumerable collections. The query iterates through the collection and executes a sequence of methods (for example, Contains, Where etc) against the items in the collection.

LINQ to SQL queries operate on IQueryable collections. The query is converted into an expression tree by the compiler and that expression tree is then translated into SQL and passed to the database.

IQueryable<T> inherits from IEnumerable<T>.

All LINQ to Objects queries return IEnumerable<T> or a derivative of it; all IEnumerable<T> expressions are executed in memory against the full dataset.

IQueryable<T> uses a query provider (IQueryProvider; DbQueryProvider in Entity Framework) to translate the expression tree (the chained extension methods) into a single database query (in this case, the T-SQL that runs against the database). Once the query is invoked (by, say, enumerating it), it is executed against the database and the results are returned to be consumed.

All of your queries for data when using Entity Framework are written against DbSet<TEntity>:
public class DbSet<TEntity> : DbQuery<TEntity>, IDbSet<TEntity>, IQueryable<TEntity>, IEnumerable<TEntity>, IQueryable, IEnumerable
    where TEntity : class
{
}
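
As an illustration, here is a hedged sketch (the ShopContext and Customer types are hypothetical, and it assumes Entity Framework 6 with a valid connection string) contrasting where the filtering happens:

using System.Collections.Generic;
using System.Data.Entity;
using System.Linq;

public class Customer
{
    public int Id { get; set; }
    public string City { get; set; }
}

public class ShopContext : DbContext
{
    public DbSet<Customer> Customers { get; set; }
}

public static class QueryDemo
{
    public static void Run()
    {
        using (var db = new ShopContext())
        {
            // IQueryable<T>: the Where call is captured in an expression tree and
            // translated to SQL by the provider, so the filter runs in the database.
            IQueryable<Customer> queryable = db.Customers.Where(c => c.City == "London");
            List<Customer> filteredInDatabase = queryable.ToList();
            // Roughly: SELECT ... FROM Customers WHERE City = 'London'

            // IEnumerable<T>: AsEnumerable() switches to LINQ to Objects, so every
            // customer is loaded into memory first and the Where runs on the client.
            IEnumerable<Customer> enumerable = db.Customers.AsEnumerable().Where(c => c.City == "London");
            List<Customer> filteredInMemory = enumerable.ToList();
            // Roughly: SELECT ... FROM Customers   (no WHERE clause)
        }
    }
}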

Optimistic and Pessimistic Concurrency

A concurrency conflict occurs when one user displays an entity’s data in order to edit it, and then another user updates the same entity’s data before the first user’s change is written to the database. If you don’t enable the detection of such conflicts, whoever updates the database last overwrites the other user’s changes. In many applications, this risk is acceptable: if there are few users, or few updates, or if it isn’t really critical when some changes are overwritten, the cost of programming for concurrency might outweigh the benefit. In that case, you don’t have to configure the application to handle concurrency conflicts.

Pessimistic Concurrency (Locking)

If your application does need to prevent accidental data loss in concurrency scenarios, one way to do that is to use database locks. This is called pessimistic concurrency. For example, before you read a row from a database, you request a lock for read-only or for update access. If you lock a row for update access, no other users are allowed to lock the row either for read-only or update access, because they would get a copy of data that’s in the process of being changed. If you lock a row for read-only access, others can also lock it for read-only access but not for update.

Managing locks has disadvantages. It can be complex to program. It requires significant database management resources, and it can cause performance problems as the number of users of an application increases. For these reasons, not all database management systems support pessimistic concurrency. The Entity Framework provides no built-in support for it, and this tutorial doesn’t show you how to implement it.

Optimistic Concurrency

The alternative to pessimistic concurrency is optimistic concurrency. Optimistic concurrency means allowing concurrency conflicts to happen, and then reacting appropriately if they do.

Detecting Concurrency Conflicts

You can resolve conflicts by handling OptimisticConcurrencyException exceptions that the Entity Framework throws. In order to know when to throw these exceptions, the Entity Framework must be able to detect conflicts. Therefore, you must configure the database and the data model appropriately. Some options for enabling conflict detection include the following:

In the database table, include a tracking column that can be used to determine when a row has been changed. You can then configure the Entity Framework to include that column in the Where clause of SQL Update or Delete commands.

The data type of the tracking column is typically rowversion. The rowversion value is a sequential number that’s incremented each time the row is updated. In an Update or Delete command, the Where clause includes the original value of the tracking column (the original row version). If the row being updated has been changed by another user, the value in the rowversion column is different from the original value, so the Update or Delete statement can’t find the row to update because of the Where clause. When the Entity Framework finds that no rows have been updated by the Update or Delete command (that is, when the number of affected rows is zero), it interprets that as a concurrency conflict.
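
As a hedged sketch of this first option (the Department entity is hypothetical), an EF Code First entity can expose the tracking column through the [Timestamp] attribute, which maps to a SQL Server rowversion column:

using System.ComponentModel.DataAnnotations;

// Hypothetical EF Code First entity: the [Timestamp] byte[] property maps to a
// SQL Server rowversion column, and Entity Framework includes its original value
// in the WHERE clause of the UPDATE and DELETE statements it generates.
public class Department
{
    public int DepartmentID { get; set; }

    public string Name { get; set; }

    [Timestamp]
    public byte[] RowVersion { get; set; }
}

// Generated UPDATE, roughly:
//   UPDATE Department SET Name = @p0
//   WHERE DepartmentID = @p1 AND RowVersion = @originalRowVersion
// If another user has changed the row, RowVersion no longer matches, zero rows
// are affected, and EF reports a concurrency conflict.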

Configure the Entity Framework to include the original values of every column in the table in the Where clause of Update and Delete commands.

As in the first option, if anything in the row has changed since the row was first read, the Where clause won’t return a row to update, which the Entity Framework interprets as a concurrency conflict. For database tables that have many columns, this approach can result in very large Where clauses, and can require that you maintain large amounts of state. As noted earlier, maintaining large amounts of state can affect application performance. Therefore this approach is generally not recommended, and it isn’t the method used in this tutorial.

If you do want to implement this approach to concurrency, you have to mark all non-primary-key properties in the entity you want to track concurrency for by adding the ConcurrencyCheck attribute to them. That change enables the Entity Framework to include all columns in the SQL WHERE clause of UPDATE statements.
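
A hedged sketch of this approach (the Product entity is hypothetical), marking every non-key property with the ConcurrencyCheck attribute:

using System.ComponentModel.DataAnnotations;

// Hypothetical entity: every non-key property carries [ConcurrencyCheck], so the
// original value of each column is added to the WHERE clause of generated UPDATEs.
public class Product
{
    [Key]
    public int ProductID { get; set; }

    [ConcurrencyCheck]
    public string Name { get; set; }

    [ConcurrencyCheck]
    public decimal Price { get; set; }
}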

Dependency Injection and Lazy Loading

When using dependency injection, you might get to a point where you need to supply a dependency on something that is expensive to create, for example:

  • A database connection
  • An object that takes a long time to create

This would be fine if you were definitely going to use the dependency, but there are scenarios where the dependency is only used some of the time.

Even though you can inject an abstract factory or a lambda function to evaluate the dependency, that never felt very clean, and it also felt like extra effort to define the factory and to manage the instance variable.

But it turns out that this problem can be solved elegantly in .NET 4 with a new class:

Lazy<T>

Lazy<T> solves this problem in two ways:
  • It lazily creates the instance, activating it only when it is actually used.
  • It takes care of reusing the same instance once it has been created.

Below is a code sample:

using System;

public interface ILogger
{
    void Log();
}

public class FileLogger : ILogger
{
    public FileLogger()
    {
        Console.WriteLine("Inside FileLogger constructor");
    }

    public void Log()
    {
        Console.WriteLine("Inside FileLogger.Log method");
    }
}

public class Customer
{
    private readonly Lazy<ILogger> logger;

    public Customer()
    {
        // Fall back to a default logger; the factory runs only when the
        // logger is first used.
        this.logger = new Lazy<ILogger>(() => new FileLogger());
    }

    public Customer(Lazy<ILogger> log)
    {
        Console.WriteLine("Inside Customer constructor");
        this.logger = log;
    }

    public void PlaceOrder()
    {
        Console.WriteLine("Inside PlaceOrder");

        // First access to .Value runs the factory and constructs the FileLogger;
        // subsequent accesses reuse the same instance.
        this.logger.Value.Log();
    }
}

class Program
{
    static void Main(string[] args)
    {
        Console.WriteLine("Instantiating Lazy<ILogger>");

        // No FileLogger is constructed here; only the factory is stored.
        Lazy<ILogger> logger = new Lazy<ILogger>(() => new FileLogger());

        Customer cust = new Customer(logger);

        // FileLogger's constructor runs now, on first use inside PlaceOrder.
        cust.PlaceOrder();

        Console.ReadLine();
    }
}
