Monthly Archives: September 2014

Azure Web Role V/S Azure Web Sites

With Windows Azure Websites you don’t have control over IIS or the web server, because your site runs on a slice of resources shared with hundreds of other websites on the same machine. Since you share resources like everyone else, there is no control over IIS.

The big difference between a shared Web Site and an Azure Web Role is that a Web Site is considered process-bound, while roles are VM-bound.

Websites are stored on a content share which is accessible from all the “web servers” in the farm, so no replication or anything like that is required.

Web Roles give you several features beyond Web Sites:

  • Ability to run elevated startup scripts to install apps, modify registry settings, install performance counters, fine-tune IIS, etc.
  • Ability to split an app up into tiers (maybe Web Role for front end, Worker Role for backend processing) and scale independently
  • Ability to RDP into your VM for debugging purposes
  • Network isolation
  • Support for Virtual Networks
  • Dedicated virtual IP address, which allows web role instances in a cloud service to access IP-restricted Virtual Machines
  • ACL-restricted endpoints (added in Azure SDK 2.3, April 2014)
  • Support for any TCP/UDP ports (Web Sites are restricted to TCP 80/443)

Web Sites have advantages over Web Roles though:

  • Near-instant deployment with deployment history / rollbacks
  • Visual Studio Online, github, local git, ftp, CodePlex, DropBox, BitBucket deployment support
  • Ability to roll out one of numerous CMSs and frameworks (like WordPress, Joomla, Django, MediaWiki, etc.)
  • Use of SQL Database or MySQL
  • Simple and fast to scale from free tier to shared tier to dedicated tier
  • Web Jobs
  • Backups of Web Site content
  • Built-in web-based debugging tools (simple cmd/powershell debug console, process explorer, diagnostic tools like log streaming, etc.)

When to Use Azure Web Sites

When to Use Azure Web Roles

Azure Web Role VS Web Sites

C# Dynamic Keyword

dynamic is a static type that acts as a placeholder for a type not known until runtime. Once a dynamic object is declared, you can call operations on it, get and set its properties, and even pass the instance around pretty much as if it were any normal type.

The dynamic keyword influences compilation: a dynamic variable, parameter or field can hold a value of any type, and its type can change at runtime, because the compiler defers all binding to runtime. The downside is that performance suffers and you lose compile-time checking.

dynamic is advanced functionality. It can be useful, but usually it should be avoided, since it erases many of the benefits of the C# language.

Comparison with var
Both a dynamic variable and a var variable can store any type of value, but ‘var’ must be initialized at the time of declaration.

The compiler has no information about a ‘dynamic’ variable’s type. ‘var’ is compiler-safe, i.e. the compiler has full information about the stored value, so it cannot cause a type issue at runtime.

A dynamic value can be passed as a function argument, and a function can return one. ‘var’ cannot appear in a parameter list or as a return type; a var variable only works in the scope where it is defined.

dynamic: useful when coding with reflection, dynamic language interop, or COM objects, because it requires much less code.
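As a quick sketch of the difference (the variable names here are just for illustration), compare how the compiler treats var and dynamic:

```csharp
using System;

// 'var' is resolved at compile time: v is a string from this line onward.
var v = "compile-time typed";
// v = 5;                      // would NOT compile: v is a string

// 'dynamic' defers all binding to runtime.
dynamic d = "runtime typed";
Console.WriteLine(d.Length);   // binds string.Length at runtime: 13
d = 5;                         // fine: a dynamic variable can change type
Console.WriteLine(d + 1);      // binds int addition at runtime: 6

try
{
    d.NoSuchMethod();          // compiles, but fails when executed
}
catch (Microsoft.CSharp.RuntimeBinder.RuntimeBinderException)
{
    Console.WriteLine("caught at runtime, not at compile time");
}
```

The commented-out assignment is exactly the compile-time checking you give up: with dynamic, the bad call above still compiles and only fails when it runs.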

“throw” V/S “throw ex”

Is there a difference between “throw” and “throw ex”?

Yes – throw re-throws the exception that was caught, and preserves the stack trace. throw ex throws the same exception, but resets the stack trace to that method. Unless you want to reset the stack trace (i.e. to shield public callers from the internal workings of your library), throw is generally the better choice, since you can see where the exception originated.

In the “throw ex”, the stack trace is truncated, what this means is that when you look at the stack trace, it will look as if the exception originated in your code. This isn’t always the case, particularly if you are bubbling up a CLR generated exception (like a SqlException). This is a problem known as “breaking the stack”, because you no longer have the full stack trace information. This happens because you are in essence creating a new exception to throw.

By using “throw” by itself, you preserve the stack trace information. You can confirm this by looking at the IL generated for the two code blocks, which makes the difference very obvious: “throw ex” compiles to the IL instruction “throw”, while “throw” by itself compiles to “rethrow”.

Before you run and change all of your code, there are still places where “throw ex” is appropriate. There are times when you want to add information to the exception that was caught or change it into a more meaningful exception.

To Summarize

  • Only catch exceptions if they are important to you and you need to do some sort of cleanup as a result.
  • If you need to bubble an exception up the chain, use “throw” by itself.
  • If you need to add information to the exception or repackage it, always pass the original exception as the inner exception.
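The points above can be made visible in a small self-contained sketch (the method names are made up for illustration):

```csharp
using System;
using System.Runtime.CompilerServices;

[MethodImpl(MethodImplOptions.NoInlining)]
void Inner() => throw new InvalidOperationException("original failure");

void RethrowPreserving()
{
    try { Inner(); }
    catch { throw; }                     // IL "rethrow": original trace kept
}

void RethrowResetting()
{
    try { Inner(); }
    catch (Exception ex) { throw ex; }   // IL "throw": trace reset to here
}

try { RethrowPreserving(); }
catch (Exception ex)
{
    // The trace still shows the Inner() frame where the exception originated
    Console.WriteLine(ex.StackTrace.Contains("Inner")); // True
}

try { RethrowResetting(); }
catch (Exception ex)
{
    // The Inner() frame is gone: the trace now starts at RethrowResetting()
    Console.WriteLine(ex.StackTrace.Contains("Inner")); // False
}
```

When you do need to repackage, pass the original as the inner exception, e.g. `throw new InvalidOperationException("context", ex);`, so no information is lost.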

IEnumerable and IQueryable + Entity Framework

Linq to SQL and Linq to Objects queries are not the same.

LINQ to Objects queries operate on IEnumerable collections. The query iterates through the collection and executes a sequence of methods (for example, Contains, Where, etc.) against the items in the collection.

LINQ to SQL queries operate on IQueryable collections. The query is converted into an expression tree by the compiler and that expression tree is then translated into SQL and passed to the database.

IQueryable inherits from IEnumerable

All LINQ to Objects queries return IEnumerable or a derivative of IEnumerable; all IEnumerable expressions are executed in memory against the full dataset.

IQueryable uses a DbQueryProvider (IQueryProvider) to translate the expression (the chained extension methods) into a single database query – in this case, it generates T-SQL to run against the database. Once the query is invoked (by, say, enumerating it), it is executed against the database and the results are returned to be consumed.
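You can see this difference without a database by calling AsQueryable() on an in-memory array (a sketch, not EF itself): the IQueryable version captures the Where call as an expression tree that a provider could translate, and nothing runs until the query is enumerated:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

var numbers = new[] { 1, 2, 3, 4, 5 };

// LINQ to Objects: Where takes a compiled delegate and runs in memory
IEnumerable<int> inMemory = numbers.Where(n => n > 2);

// IQueryable: Where takes an Expression<Func<int, bool>>, so the provider
// sees the whole query as data (EF's provider would turn this into T-SQL)
IQueryable<int> queryable = numbers.AsQueryable().Where(n => n > 2);
Console.WriteLine(queryable.Expression);   // prints the captured expression tree

// Deferred execution: the query only runs when it is enumerated
Console.WriteLine(string.Join(",", queryable)); // 3,4,5
```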

All of your queries for data when using Entity Framework are written against DbSet<TEntity>
public class DbSet<TEntity> : DbQuery<TEntity>, IDbSet<TEntity>, IQueryable<TEntity>, IEnumerable<TEntity>, IQueryable, IEnumerable
    where TEntity : class
{

}

Optimistic and Pessimistic Concurrency

A concurrency conflict occurs when one user displays an entity’s data in order to edit it, and then another user updates the same entity’s data before the first user’s change is written to the database. If you don’t enable the detection of such conflicts, whoever updates the database last overwrites the other user’s changes. In many applications this risk is acceptable: if there are few users, or few updates, or if it isn’t really critical when some changes are overwritten, the cost of programming for concurrency might outweigh the benefit. In that case, you don’t have to configure the application to handle concurrency conflicts.

Pessimistic Concurrency (Locking)

If your application does need to prevent accidental data loss in concurrency scenarios, one way to do that is to use database locks. This is called pessimistic concurrency. For example, before you read a row from a database, you request a lock for read-only or for update access. If you lock a row for update access, no other users are allowed to lock the row either for read-only or update access, because they would get a copy of data that’s in the process of being changed. If you lock a row for read-only access, others can also lock it for read-only access but not for update.

Managing locks has disadvantages. It can be complex to program. It requires significant database management resources, and it can cause performance problems as the number of users of an application increases. For these reasons, not all database management systems support pessimistic concurrency. The Entity Framework provides no built-in support for it, and this tutorial doesn’t show you how to implement it.

Optimistic Concurrency

The alternative to pessimistic concurrency is optimistic concurrency. Optimistic concurrency means allowing concurrency conflicts to happen, and then reacting appropriately if they do.

Detecting Concurrency Conflicts

You can resolve conflicts by handling OptimisticConcurrencyException exceptions that the Entity Framework throws. In order to know when to throw these exceptions, the Entity Framework must be able to detect conflicts. Therefore, you must configure the database and the data model appropriately. Some options for enabling conflict detection include the following:

In the database table, include a tracking column that can be used to determine when a row has been changed. You can then configure the Entity Framework to include that column in the Where clause of SQL Update or Delete commands.

The data type of the tracking column is typically rowversion. The rowversion value is a sequential number that’s incremented each time the row is updated. In an Update or Delete command, the Where clause includes the original value of the tracking column (the original row version). If the row being updated has been changed by another user, the value in the rowversion column is different from the original value, so the Update or Delete statement can’t find the row to update because of the Where clause. When the Entity Framework finds that no rows have been updated by the Update or Delete command (that is, when the number of affected rows is zero), it interprets that as a concurrency conflict.
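The zero-affected-rows check can be sketched in plain C# (an in-memory stand-in, not EF or real SQL), mimicking UPDATE ... WHERE Id = @id AND RowVersion = @originalVersion:

```csharp
using System;
using System.Collections.Generic;

// In-memory sketch of the rowversion check: the UPDATE's WHERE clause
// includes the original row version, so a row changed by someone else
// matches zero rows, which signals a concurrency conflict.
var table = new List<(int Id, string Name, int RowVersion)>
{
    (1, "Alice", 7)
};

// Returns the "affected rows" count of the simulated UPDATE
int TryUpdate(int id, int originalVersion, string newName)
{
    var index = table.FindIndex(r => r.Id == id && r.RowVersion == originalVersion);
    if (index < 0) return 0;                            // zero rows: conflict
    table[index] = (id, newName, originalVersion + 1);  // version bumps on update
    return 1;
}

Console.WriteLine(TryUpdate(1, 7, "Bob"));   // 1: versions matched, update wins
Console.WriteLine(TryUpdate(1, 7, "Carol")); // 0: row is now version 8, so conflict
```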

Configure the Entity Framework to include the original values of every column in the table in the Where clause of Update and Delete commands.

As in the first option, if anything in the row has changed since the row was first read, the Where clause won’t return a row to update, which the Entity Framework interprets as a concurrency conflict. For database tables that have many columns, this approach can result in very large Where clauses, and can require that you maintain large amounts of state. As noted earlier, maintaining large amounts of state can affect application performance. Therefore this approach is generally not recommended, and it isn’t the method used in this tutorial.

If you do want to implement this approach to concurrency, you have to mark all non-primary-key properties in the entity you want to track concurrency for by adding the ConcurrencyCheck attribute to them. That change enables the Entity Framework to include all columns in the SQL WHERE clause of UPDATE statements.

Dependency Injection and Lazy Loading

When using dependency injection you might get to a point where you need to supply a dependency that is expensive to create, for example:

A database connection
An object that takes a long time to create.
This would be fine if you were definitely going to use the dependency, but there are scenarios where the dependency is only sometimes used.

You could inject an abstract factory or a lambda function that evaluates the dependency on demand, but that doesn’t feel so nice, and it is extra effort to define and to manage the instance variable.

It turns out that this problem is solved elegantly in .NET 4 by a new class:

Lazy<T>

Lazy<T> solves this problem in two ways:
It lazily loads the instance, activating it automatically only when it is first used.
It takes care of returning the same instance on every subsequent access.

Below is the code sample:

public interface ILogger
{
    void Log();
}
 
public class FileLogger : ILogger
{
    public FileLogger()
    {
        Console.WriteLine("Inside File Logger Constructor");
    }
    public void Log()
    {
        Console.WriteLine("In File Logger Log Method..");
    }
}
 
public class Customer
{
    Lazy<ILogger> logger;
 
    public Customer()
    {
        // Lazy<ILogger>() alone would throw on first access, because the
        // interface ILogger has no parameterless constructor to activate;
        // default to a lazily created FileLogger instead.
        this.logger = new Lazy<ILogger>(() => new FileLogger());
    }
 
    public Customer(Lazy<ILogger> log)
    {
        Console.WriteLine("Inside Customer Constructor");
        this.logger = log;
    }
 
    public void PlaceOrder()
    {
        try
        {
            Console.WriteLine("Inside Place Order");
            this.logger.Value.Log();
        }
        catch
        {
            this.logger.Value.Log();
        }
    }
}
class Program
{
    static void Main(string[] args)
    {
        Console.WriteLine("Instantiating Lazy<ILogger>");
        Lazy<ILogger> logger = new Lazy<ILogger>(() => new FileLogger());
        Customer cust = new Customer(logger);
        cust.PlaceOrder();
        Console.ReadLine();
    }
}


Viewbag, ViewData and Tempdata

ViewBag & ViewData
Help to maintain data when you move from a controller to a view.
Used to pass data from a controller to its corresponding view.
They are short-lived: the value becomes null when a redirection occurs. This is because their goal is to provide a way to communicate between controllers and views; it is a communication mechanism within a single server call.

Difference between ViewBag & ViewData:
ViewData is a dictionary of objects, derived from the ViewDataDictionary class and accessible using strings as keys.
ViewBag is a dynamic property that takes advantage of the dynamic features introduced in C# 4.0.
ViewData requires typecasting for complex data types, and you must check for null values to avoid errors.
ViewBag doesn’t require typecasting for complex data types.
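The typecasting difference can be illustrated without MVC at all — a plain dictionary stands in for ViewData and an ExpandoObject for ViewBag (illustration only, not the real MVC types):

```csharp
using System;
using System.Collections.Generic;
using System.Dynamic;

// ViewData behaves like a string-keyed object dictionary
var viewData = new Dictionary<string, object>();
viewData["Count"] = 42;

// ViewData-style access: the value comes back as object, so a cast is needed
int fromViewData = (int)viewData["Count"];

// ViewBag-style access: dynamic binding, no explicit cast in the source
dynamic viewBag = new ExpandoObject();
viewBag.Count = 42;
int fromViewBag = viewBag.Count;

Console.WriteLine(fromViewData + fromViewBag); // 84
```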

TempData

TempData is also a dictionary, derived from the TempDataDictionary class and stored in a short-lived session, with string keys and object values. The difference is the life cycle of the object: TempData keeps the information for the duration of an HTTP request and the one that follows, that is, only from one page to the next. It also works across a 302/303 redirection, because that is part of the same logical request.

TempData helps to maintain data when you move from one controller to another controller, or from one action to another action. In other words, when you redirect, TempData helps to maintain data between those redirects. Internally it uses session variables.

Use TempData only for the current and subsequent request, i.e. when you are sure the next request will be a redirect to the next view. It requires typecasting for complex data types, and you must check for null values to avoid errors. It is generally used to store one-time messages, such as error or validation messages.
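The read-once life cycle can be sketched with a plain dictionary (a hypothetical helper, not the real TempDataDictionary): a value written before a redirect survives into the next request and disappears once it has been read there:

```csharp
using System;
using System.Collections.Generic;

var store = new Dictionary<string, object>();

void Put(string key, object value) => store[key] = value;

object Take(string key)
{
    if (!store.TryGetValue(key, out var value)) return null;
    store.Remove(key);   // consumed: gone for any later request
    return value;
}

Put("Message", "Order saved");                // set in the action that redirects
Console.WriteLine(Take("Message"));           // read in the next request: Order saved
Console.WriteLine(Take("Message") ?? "null"); // already consumed: null
```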

Html.Partial and Html.RenderPartial, Html.Action and Html.RenderAction

Difference Html.Partial and Html.RenderPartial

While one can store the output of Html.Partial in a variable or return it from a method, one cannot do this with Html.RenderPartial. The result will be written to the Response stream during execution/evaluation.

Difference is Html.Partial returns an MvcHtmlString and  Html.RenderPartial outputs straight to the response.

Html.RenderPartial: writes directly to the output stream and returns void; it is very fast in comparison to Html.Partial.


Difference Html.Action and Html.RenderAction

The same is true for Html.Action and Html.RenderAction.

The return type of Html.RenderAction is void, which means it renders the response directly in the view, whereas Html.Action returns an MvcHtmlString, so you can capture the rendered view in the controller and modify it.
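By analogy (hypothetical helpers, not the real MVC API), the return-a-value vs write-to-the-stream distinction looks like this; the same distinction applies to Html.Action vs Html.RenderAction:

```csharp
using System;
using System.IO;

// Stand-in for Html.Partial: returns the markup as a value
string Partial(string name) => $"<div>{name}</div>";

// Stand-in for Html.RenderPartial: void, writes straight to the output
void RenderPartial(TextWriter output, string name) =>
    output.Write($"<div>{name}</div>");

// Partial's result can be stored or post-processed before output
string html = Partial("menu").Replace("div", "nav");
Console.WriteLine(html); // <nav>menu</nav>

// RenderPartial goes directly to the response writer
var response = new StringWriter();
RenderPartial(response, "menu");
Console.WriteLine(response.ToString()); // <div>menu</div>
```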

REST Principles

REST (REpresentational State Transfer) is a set of principles that define how Web standards, such as HTTP and URIs, are supposed to be used.

There are five important REST principles, as mentioned below –

  • Addressable Resources – Everything is a resource, and each resource should be identified by a URI (unique identifier)
  • Simple and Uniform Interfaces – REST is based on the HTTP protocol, so it uses the HTTP GET, POST, PUT and DELETE methods to perform actions. This makes REST simple and uniform.
  • Representation Oriented – Communication is done through representations: representations of resources are exchanged. GET returns a representation, while PUT and POST pass a representation to the server so that the underlying resource may change. Representations can be in many formats, such as XML or JSON.
  • Communicate Statelessly – An application may have state, but no client session data is stored on the server. Any session-specific data should be held and maintained by the client and transferred to the server with each request as needed.
  • Cacheable – Clients should be able to cache responses for later use.
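These principles can be sketched in C# by building requests without sending them (the URIs are made up for illustration; nothing goes over the network): the verb is the action, the URI names the resource, and the body carries a representation:

```csharp
using System;
using System.Net.Http;
using System.Text;

// Uniform interface: GET returns a representation of the resource
var get = new HttpRequestMessage(HttpMethod.Get, "https://api.example.com/orders/42");

// PUT passes a representation (JSON here) so the underlying resource may change
var put = new HttpRequestMessage(HttpMethod.Put, "https://api.example.com/orders/42")
{
    Content = new StringContent("{\"status\":\"shipped\"}", Encoding.UTF8, "application/json")
};

Console.WriteLine($"{get.Method} {get.RequestUri}"); // GET https://api.example.com/orders/42
Console.WriteLine($"{put.Method} {put.RequestUri}"); // PUT https://api.example.com/orders/42

// Stateless: each request carries everything the server needs, e.g. credentials
get.Headers.Add("Authorization", "Bearer abc123");
```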