
Why is composition favored over inheritance

Object-Oriented Programming (OOP) has two well-known candidates for the reuse of functionality: Inheritance (white-box reuse) and Composition (black-box reuse). To revise: composition and inheritance are both ways to reuse code to get additional functionality. In inheritance, a new class that wants to reuse code inherits from an existing class, known as the super class; the new class is then known as the sub class. In composition, a class that wants to use the functionality of an existing class doesn't inherit from it; instead, it holds a reference to that class in a member variable, hence the name composition. Inheritance and composition relationships are also referred to as IS-A and HAS-A relationships. Because of the IS-A relationship, an instance of a sub class can be passed to a method that accepts an instance of the super class. This is a kind of polymorphism achieved through inheritance: a super class reference variable can refer to an instance of a sub class. Composition doesn't give you this behavior, but it still offers a lot to tilt the balance in its favor.

One reason for favoring composition over inheritance in Java is the fact that Java doesn't support multiple inheritance. You can only extend one class in Java, so if you need multiple pieces of functionality, e.g. both Reader and Writer functionality for reading and writing character data to a file, holding them as private members makes your job easy. That's composition.
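As a sketch of that idea (the FileCopier class below is hypothetical, not from this article), a class can compose both a reader and a writer, which no single inheritance chain in C# allows:

```csharp
using System.IO;

// Hypothetical class for illustration: it composes reading AND writing,
// which single inheritance (extend either TextReader or TextWriter) cannot give you.
public class FileCopier
{
    private readonly TextReader reader; // composed, not inherited
    private readonly TextWriter writer; // composed, not inherited

    public FileCopier(TextReader reader, TextWriter writer)
    {
        this.reader = reader;
        this.writer = writer;
    }

    public void Copy()
    {
        string line;
        while ((line = reader.ReadLine()) != null)
        {
            writer.WriteLine(line);
        }
    }
}
```

Because the members are abstractions (TextReader/TextWriter), the same class copies between files, strings, or network streams without any change.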

Composition offers better testability of a class than inheritance. If one class is composed of another class, you can easily create a mock object representing the composed class for the sake of testing. Inheritance doesn't provide this luxury: in order to test a derived class, you need its super class. Since unit testing is one of the most important things to consider during software development, especially in test-driven development, composition wins over inheritance.
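A minimal sketch of that testability win, with hypothetical IMessageSender / NotificationService types:

```csharp
using System.Collections.Generic;

// Hypothetical types for illustration.
public interface IMessageSender
{
    void Send(string message);
}

public class NotificationService
{
    private readonly IMessageSender sender; // composed dependency

    public NotificationService(IMessageSender sender)
    {
        this.sender = sender;
    }

    public void Notify(string user)
    {
        sender.Send("Hello " + user);
    }
}

// A hand-rolled mock: records calls instead of sending anything real.
public class FakeSender : IMessageSender
{
    public List<string> Sent = new List<string>();
    public void Send(string message) { Sent.Add(message); }
}
```

A test constructs NotificationService with a FakeSender and asserts on Sent. Had NotificationService inherited from a concrete sender, every test would drag the real Send along.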

Though both composition and inheritance allow you to reuse code, one of the disadvantages of inheritance is that it breaks encapsulation. If a sub class depends on super class behavior for its operation, it suddenly becomes fragile. When the behavior of the super class changes, functionality in the sub class may break without any change on its part.

Example
Let's say we are writing simulation software for rocket launching systems, which are to be supplied to different countries. These countries can then use the systems as they wish. The code for our launching system is below:

public class Launcher 
{ 
    public bool LaunchMissile() 
    { 
        Console.WriteLine("Missile launched"); 
        return true; 
    }
 
}
public class SufraceToAirMissileLauncher : Launcher
{
 
}

Now, country A uses this code to launch a missile as follows:

static void Main(string[] args) 
{ 
    SufraceToAirMissileLauncher staLauncher = new SufraceToAirMissileLauncher(); 
    bool isLaunched = staLauncher.LaunchMissile();
    Console.ReadLine();
}

This is how inheritance is used: the various launchers can reuse the base Launcher class code to launch a missile.

The same thing can be achieved by using composition, where the base class functionality is encapsulated inside the main concrete class. The code for that is below:

public class SufraceToAirMissileLauncher 
{ 
    private Launcher launcher = new Launcher(); 
    public bool LaunchMissile() 
    { 
        return launcher.LaunchMissile(); 
    } 
}

The client UI code remains the same.

Now, thanks to our superb code, our patented launching software has become famous and another country B wants to use it. But they have a condition: instead of launching the missile through the base class, they want to get an instance of a missile. It's then up to them what they do with it. They might add some nuclear material to it, modify it to increase its range, or do whatever they like. So another Missile object comes into the picture.

public class Missile 
{ 
    private bool isLaunched; 
    public bool IsLaunched 
    { 
        get
        {
            return isLaunched; 
        } 
        set 
        { 
            isLaunched = value; 
        } 
    } 
    public Missile(bool isLaunched) 
    { 
        IsLaunched = isLaunched;
    } 
}

And the base class function has changed to:

public class Launcher
{
    public Missile LaunchMissile() 
    { 
        Console.WriteLine("Missile returned"); 
        return new Missile(true); 
    }
 
}

Now it returns a missile instead of launching it. If we rely on inheritance, the client code of country A breaks, since the method signature has changed from what its UI uses. If country A had used composition instead, the client code would not break; only the derived class would need to accommodate the changed behavior of the base class. To do so, we change the derived class's "LaunchMissile" function as follows:

public class SufraceToAirMissileLauncher
{
    private Launcher launcher = new Launcher(); 
    public bool LaunchMissile() 
    { 
        Missile missile = launcher.LaunchMissile(); 
        return missile.IsLaunched; 
    } 
}

Hence, the client code of country A would still work:

static void Main(string[] args) 
{ 
    SufraceToAirMissileLauncher staLauncher = new SufraceToAirMissileLauncher(); 
    bool isLaunched = staLauncher.LaunchMissile();
    Console.ReadLine();
}

On the other hand, country B, which was insisting on getting a missile, would still get one from the base class.
Through this simple example, we see how composition is favored over inheritance to maintain compatibility when there is a possibility that the functionality might change in the future.



SSL – Concepts and How It Works

SSL Certificates have a key pair: a public and a private key. These keys work together to establish an encrypted connection. The certificate also contains what is called the “subject,” which is the identity of the certificate/website owner.

The Public Key is what its name suggests – Public. It is made available to everyone via a publicly accessible repository or directory. On the other hand, the Private Key must remain confidential to its respective owner. Because the key pair is mathematically related, whatever is encrypted with a Public Key may only be decrypted by its corresponding Private Key and vice versa.

To get a certificate, you must create a Certificate Signing Request (CSR) on your server. This process creates a private key and public key on your server. The CSR data file that you send to the SSL Certificate issuer (called a Certificate Authority or CA) contains the public key. The CA uses the CSR data file to create a data structure to match your private key without compromising the key itself. The CA never sees the private key.

Once you receive the SSL Certificate, you install it on your server. You also install a pair of intermediate certificates that establish the credibility of your SSL Certificate by tying it to your CA’s root certificate.

When a browser attempts to access a website that is secured by SSL, the browser and the web server establish an SSL connection using a process called an “SSL Handshake”. Note that the SSL Handshake is invisible to the user and happens instantaneously.

Essentially, three keys are used to set up the SSL connection: the public, private, and session keys. Anything encrypted with the public key can only be decrypted with the private key, and vice versa.

Because encrypting and decrypting with the private and public keys takes a lot of processing power, they are only used during the SSL Handshake to create a symmetric session key. After the secure connection is made, the session key is used to encrypt all transmitted data.
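The division of labor between the two kinds of keys can be sketched with .NET's own crypto primitives (modern .NET APIs assumed; this is a simplification of the idea, not the real TLS handshake):

```csharp
using System;
using System.Security.Cryptography;

// Simplified sketch of the key-exchange idea (NOT real TLS):
// an RSA pair protects a freshly generated AES session key,
// and the symmetric key then encrypts bulk data cheaply.
public static class SessionKeySketch
{
    public static bool Demo()
    {
        using (RSA rsa = RSA.Create(2048))   // stands in for the server's key pair
        using (Aes aes = Aes.Create())       // stands in for the session key
        {
            // "Client": encrypt the session key with the server's public key.
            byte[] wrapped = rsa.Encrypt(aes.Key, RSAEncryptionPadding.OaepSHA256);

            // "Server": recover the session key with its private key.
            byte[] recovered = rsa.Decrypt(wrapped, RSAEncryptionPadding.OaepSHA256);

            // Both sides now share the same cheap symmetric key.
            return Convert.ToBase64String(recovered) == Convert.ToBase64String(aes.Key);
        }
    }

    public static void Main()
    {
        Console.WriteLine(Demo() ? "session key exchanged" : "mismatch");
    }
}
```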
How Does the SSL Certificate Create a Secure Connection?

1. Browser connects to a web server (website) secured with SSL (https). The browser requests that the server identify itself.

2. Server sends a copy of its SSL Certificate, including the server’s public key.

3. Browser checks the certificate root against a list of trusted CAs and that the certificate is unexpired, unrevoked, and that its common name is valid for the website that it is connecting to. If the browser trusts the certificate, it creates, encrypts, and sends back a symmetric session key using the server’s public key.

4. Server decrypts the symmetric session key using its private key and sends back an acknowledgement encrypted with the session key to start the encrypted session.

5. Server and Browser now encrypt all transmitted data with the session key.
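From C#, all five steps happen inside a single call when you use SslStream; the host name below is a placeholder and the sketch assumes outbound network access:

```csharp
using System;
using System.Net.Security;
using System.Net.Sockets;

class SslHandshakeDemo
{
    static void Main()
    {
        string host = "www.example.com"; // placeholder host

        using (var tcp = new TcpClient(host, 443))
        using (var ssl = new SslStream(tcp.GetStream()))
        {
            // Runs the whole SSL/TLS handshake: certificate validation,
            // key exchange and session key setup.
            ssl.AuthenticateAsClient(host);

            Console.WriteLine("Cipher: " + ssl.CipherAlgorithm);
            Console.WriteLine("Encrypted: " + ssl.IsEncrypted);
        }
    }
}
```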

SOLID – Application Development Principles


SOLID is a set of five basic principles which help to create good software architecture. SOLID is an acronym where:

  • S stands for SRP (Single responsibility principle)
  • O stands for OCP (Open closed principle)
  • L stands for LSP (Liskov substitution principle)
  • I stands for ISP (Interface segregation principle)
  • D stands for DIP (Dependency inversion principle)

Single Responsibility

The Single Responsibility Principle states that an object should have a single responsibility and all of its behaviors should focus on that one responsibility.

    class Customer
    {
        public void Add()
        {
            try
            {
                // Database code goes here
            }
            catch (Exception ex)
            {
                System.IO.File.WriteAllText(@"c:\Error.txt", ex.ToString());
            }
        }
 
    }

The above Customer class is doing things WHICH IT IS NOT SUPPOSED TO DO. The Customer class should do customer data validations, call the customer data access layer, etc., but if you look at the catch block closely, it is also doing LOGGING activity. In simple words, it's overloaded with too many responsibilities.

 

With the Single Responsibility Principle, move that logging activity to some other class which will only look after logging activities.

class FileLogger
{
    public void Handle(string error)
    {
        System.IO.File.WriteAllText(@"c:\Error.txt", error);
    }
}
 
class Customer
{
    private FileLogger obj = new FileLogger();
    public virtual void Add()
    {
        try
        {
            // Database code goes here
        }
        catch (Exception ex)
        {
            obj.Handle(ex.ToString());
        }
    }
}

 

Open/Closed Principle

Open and Closed principle encourages components that are open for extension, but closed for modification.

class Customer
{
    private int _CustType;
    public int CustType
    {
        get { return _CustType; }
        set { _CustType = value; }
    }
 
    public double getDiscount(double TotalSales)
    {
 
        if (_CustType == 1)
        {
            return TotalSales - 100;
        }
        else
        {
            return TotalSales - 50;
        }
    }
}

The problem is that if we add a new customer type, we need to add one more "IF" condition in the "getDiscount" function; in other words, we need to change the Customer class. If we change the Customer class again and again, we need to ensure that the previous conditions are retested along with the new ones, and that existing clients referencing this class keep working as before. In other words, we are "MODIFYING" the current customer code for every change, and every time we modify it we need to ensure that all previous functionality and connected clients work as before. How about, rather than "MODIFYING", we go for "EXTENSION"? Every time a new customer type needs to be added, we create a new class, as shown below. The current code stays untouched, and we just need to test and check the new classes.

class Customer
{
    private FileLogger obj = new FileLogger();
    public virtual void Add()
    {
        try
        {
            // Database code goes here
        }
        catch (Exception ex)
        {
            obj.Handle(ex.ToString());
        }
    }
 
    public virtual double getDiscount(double TotalSales)
    {
        return TotalSales;
    }
}
 
class VIPCustomer : Customer
{
    public override double getDiscount(double TotalSales)
    {
        return base.getDiscount(TotalSales) - 50;
    }
}

 

Liskov Substitution Principle

This principle states that objects should be easily replaceable by instances of their subtypes without influencing the behavior and rules of the objects. Let's continue with the same Customer class. Say our system wants to calculate discounts for enquiries. Enquiries are not actual customers, they are just leads, and because they are just leads we do not want to save them to the database for now. So we create a new class called Enquiry which inherits from the "Customer" class. We provide some discount to the enquiry so that it can be converted to an actual customer, and we override the "Add" method to throw an exception so that no one can add an Enquiry to the database.

 

class Enquiry : Customer
{
    public override double getDiscount(double TotalSales)
    {
 
        return base.getDiscount(TotalSales) - 5;
    }
 
 
    public override void Add()
    {
        throw new Exception("Not allowed");
    }
}

As per polymorphism rules, my parent "Customer" class reference can point to any of its child class objects, i.e. "VIPCustomer" or "Enquiry", during runtime without any issues.

For instance, in the code below I have created a list collection of "Customer", and thanks to polymorphism I can add "VIPCustomer" and "Enquiry" objects to the "Customer" collection without any issues.

List<Customer> customers = new List<Customer>();
customers.Add(new VIPCustomer());
customers.Add(new Enquiry());
foreach (Customer customer in customers)
{
    customer.Add();
}

As per the inheritance hierarchy, the "Customer" reference can point to any one of its child objects, and we do not expect any unusual behavior. But when the "Add" method of the "Enquiry" object is invoked, it throws an exception, because our "Enquiry" object does not save enquiries to the database; they are not actual customers. The LISKOV principle says a child object should be substitutable wherever the parent is expected. So to implement LISKOV we need to create two interfaces, one for discounts and the other for database operations, as shown below.

interface IDiscount
{
    double getDiscount(double TotalSales);
}
 
interface IDatabase
{
    void Add();
}

Now the "Enquiry" class will only implement "IDiscount", as it is not interested in the "Add" method.

class Enquiry : IDiscount
{
    public double getDiscount(double TotalSales)
    {
        return TotalSales - 5;
    }
}

While the “Customer” class will implement both “IDiscount” as well as “IDatabase” as it also wants to persist the customer to the database.

class Customer : IDiscount, IDatabase
{
    private FileLogger obj = new FileLogger();
    public virtual void Add()
    {
        try
        {
            // Database code goes here
        }
        catch (Exception ex)
        {
            obj.Handle(ex.ToString());
        }
    }
 
    public virtual double getDiscount(double TotalSales)
    {
        return TotalSales;
    }
}

Now there is no confusion: we can create a list of the "IDatabase" interface and add the relevant classes to it. If we make the mistake of adding the "Enquiry" class to the list, the compiler will complain.
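To make that concrete, here is a self-contained sketch (repeating minimal versions of the interfaces and classes above) where the compiler, not the runtime, rejects the bad combination:

```csharp
using System;
using System.Collections.Generic;

interface IDiscount { double getDiscount(double TotalSales); }
interface IDatabase { void Add(); }

class Customer : IDiscount, IDatabase
{
    public void Add() { Console.WriteLine("customer saved"); }
    public double getDiscount(double TotalSales) { return TotalSales; }
}

class Enquiry : IDiscount
{
    public double getDiscount(double TotalSales) { return TotalSales - 5; }
}

class LspDemo
{
    static void Main()
    {
        var records = new List<IDatabase>();
        records.Add(new Customer());   // fine: Customer implements IDatabase
        // records.Add(new Enquiry()); // compile error: Enquiry is not an IDatabase

        foreach (IDatabase record in records)
            record.Add();              // no runtime surprise possible
    }
}
```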

Interface Segregation Principle

It encourages the use of interfaces but limits the size of each interface. Instead of one large interface that contains all the behavior for an object, there should exist multiple smaller, more specific interfaces.

e.g. .NET has separate interfaces for serialization and disposing (ISerializable and IDisposable).

Now assume that our Customer class has become a SUPER HIT component, it's consumed by 1000 clients, and they are very happy using it. Now some new clients come up with a demand: they also want a method which will help them to "Read" customer data. So enthusiastic developers might change the "IDatabase" interface as shown below.

But by doing so we have done something terrible, can you guess?

interface IDatabase
{
    void Add(); // old client are happy with these.
    void Read(); // Added for new clients.
}

 

If you visualize the new requirement which has come up, you have two kinds of clients:

those who just want to use the "Add" method, and those who want to use "Add" + "Read". By changing the current interface you are doing an awful thing: disturbing the 1000 satisfied current clients. Even though they are not interested in the "Read" method, you are forcing them to implement it. A better approach is to keep the existing clients in their own sweet world and serve the new clients separately. So the better solution is to create a new interface rather than updating the current one. We keep the current "IDatabase" interface as it is and add a new interface "IDatabaseV1" with the "Read" method ("V1" stands for version 1).

interface IDatabaseV1 : IDatabase // Gets the Add method
{
    void Read();
}

You can now create fresh classes which implement “Read” method and satisfy demands of your new clients and your old clients stay untouched and happy with the old interface which does not have “Read” method.

 

class CustomerWithRead : IDatabaseV1 // IDatabaseV1 already includes IDatabase
{
    public void Add()
    {
        Customer obj = new Customer();
        obj.Add();
    }
    public void Read()
    {
        // Implements logic for read
    }
}

So the old clients will continue using the “IDatabase” interface while new client can use “IDatabaseV1” interface.

IDatabase i = new Customer(); // 1000 happy old clients not touched
i.Add();
IDatabaseV1 iv1 = new CustomerWithRead(); // new clients
iv1.Read();

Dependency Inversion Principle

Components that depend on each other should interact via an abstraction and not directly with a concrete implementation. Inversion of Control (IoC) glues all these principles together. Two common implementations of IoC are:

  • Dependency Injection
  • Service Locator

 

The major difference between the two implementations revolves around how the dependencies are accessed:

Service Locator relies on the caller to invoke it and ask for the dependency.

Dependency Injection relies on injecting the dependency into the class through its constructor, by setting one of its properties, or by executing one of its methods.
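The contrast can be sketched in a few lines; the Locator class below is a toy stand-in for a real container, and the other names are hypothetical:

```csharp
using System;
using System.Collections.Generic;

interface ILogger { void Handle(string error); }

class RecordingLogger : ILogger
{
    public int Calls;
    public void Handle(string error) { Calls++; }
}

// Toy service locator, for illustration only.
static class Locator
{
    static readonly Dictionary<Type, object> services = new Dictionary<Type, object>();
    public static void Register<T>(T service) { services[typeof(T)] = service; }
    public static T Resolve<T>() { return (T)services[typeof(T)]; }
}

// Service Locator style: the class reaches out and ASKS for its dependency.
class OrderServiceWithLocator
{
    public void Fail() { Locator.Resolve<ILogger>().Handle("boom"); }
}

// Dependency Injection style: the dependency is PUSHED in via the constructor.
class OrderServiceWithInjection
{
    private readonly ILogger logger;
    public OrderServiceWithInjection(ILogger logger) { this.logger = logger; }
    public void Fail() { logger.Handle("boom"); }
}
```

With injection, the dependency is visible in the constructor signature; with the locator, it is hidden inside the method body, which is one reason injection is usually preferred.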

 

In our Customer class, if you remember, we had created a logger class to satisfy SRP. Down the line, let's say new logger flavor classes are created.

 

class Customer : IDiscount, IDatabase
{
    private FileLogger obj = new FileLogger();
    public virtual void Add()
    {
        try
        {
            // Database code goes here
        }
        catch (Exception ex)
        {
            obj.Handle(ex.ToString());
        }
    }
    public virtual double getDiscount(double TotalSales)
    {
        return TotalSales;
    }
}

Just to keep things under control, we create a common interface, and new logger flavors will be created using this common interface.

interface ILogger
{
    void Handle(string error);
 
}

Below are three logger flavors and more can be added down the line.

class FileLogger : ILogger
{
    public void Handle(string error)
    {
        System.IO.File.WriteAllText(@"c:\Error.txt", error);
    }
 
}
class EventViewerLogger : ILogger
{
    public void Handle(string error)
    {
        // log errors to event viewer
    }
}
class EmailLogger : ILogger
{
    public void Handle(string error)
    {
        // send errors in email
    }
 
}

Now, depending on configuration settings, different logger classes will be used at a given moment. To achieve this we have kept a simple IF condition which decides which logger class is to be used; see the code below.

class Customer : IDiscount, IDatabase
{
    private ILogger obj;
    public virtual void Add(int exhandle)
    {
        try
        {
            // Database code goes here
        }
        catch (Exception ex)
        {
            if (exhandle == 1)
            {
                obj = new FileLogger();
            }
            else
            {
                obj = new EmailLogger();
            }
            obj.Handle(ex.ToString());
        }
    }
 
    public virtual double getDiscount(double TotalSales)
    {
        return TotalSales;
    }
}

The above code again violates SRP, but this time the aspect is different: it is about deciding which objects should be created. It is not the Customer object's job to decide which instances to create; it should concentrate only on customer-related functionality. If you watch closely, the biggest problem is the "new" keyword: the class is taking on the extra responsibility of deciding which object needs to be created. If we INVERT / DELEGATE this responsibility to someone else rather than the Customer class doing it, that solves the problem to a certain extent. So here's the modified code with inversion implemented. We have opened up the constructor and we expect someone else to pass in the object rather than the Customer class creating it. Now it's the responsibility of the client consuming the Customer object to decide which logger class to inject.

class Customer : IDiscount, IDatabase
{
    private ILogger logger;
    public Customer(ILogger logger)
    {
        this.logger = logger;
    }
 
    public virtual void Add()
    {
        try
        {
            // Database code goes here
        }
        catch (Exception ex)
        {
            logger.Handle(ex.ToString());
        }
    }
    public virtual double getDiscount(double TotalSales)
    {
        return TotalSales;
    }
}

Now the client injects the logger object, and the Customer class is free of the IF conditions that decided which logger class to use. This is the last principle in SOLID, the Dependency Inversion Principle: the Customer class has delegated the creation of its dependent objects to the client consuming it, letting the Customer class concentrate on its own work.

IDatabase i = new Customer(new EmailLogger());

ASP.NET Web API Request Batching and Service Call aggregation

What is HTTP Batching?

  • Group multiple HTTP requests into a single HTTP call. It defines a way to represent a complete HTTP request (headers and all) as a section in a single HTTP POST body.
  • Batch HTTP API calls together to reduce the number of HTTP connections the client has to make; every HTTP connection a client makes carries a certain amount of overhead.
  • Batch requests allow grouping multiple operations into a single HTTP request payload.
  • Minimize the number of messages passed between the client and the server, reduce network traffic, provide a smoother experience, and promote less chattiness.
  • Avoid redundant HTTP API calls.
  • ASP.NET Web API provides out-of-the-box support for HTTP batching through HttpBatchHandler. To enable batching, Web API provides the custom message handler DefaultHttpBatchHandler, which you can register per-route to handle batch requests.
  • The key thing in HTTP batching is the Content-Type: multipart/mixed; boundary=12345665431 header, which informs the receiver that the content of the POST request is made up of multiple parts separated by the boundary indicator. The boundary indicator can be any unique identifier.
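On the wire, a batch request looks roughly like this (boundary value and URLs illustrative):

```http
POST /api/batch HTTP/1.1
Host: localhost:8080
Content-Type: multipart/mixed; boundary=12345665431

--12345665431
Content-Type: application/http; msgtype=request

GET /api/books HTTP/1.1
Host: localhost:8080

--12345665431
Content-Type: application/http; msgtype=request

GET /api/books/1 HTTP/1.1
Host: localhost:8080

--12345665431--
```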

Registering HTTP batch endpoint

You can use MapHttpBatchRoute, which is an HttpRouteCollection extension method, to create a batch endpoint. For example, the following creates a batch endpoint at "api/batch" in the App_Start WebApiConfig.cs file:

public static class WebApiConfig
{
    public static void Register(HttpConfiguration config)
    {
        config.Routes.MapHttpBatchRoute(
            routeName: "WebApiBatch",
            routeTemplate: "api/batch",
            batchHandler: new DefaultHttpBatchHandler(GlobalConfiguration.DefaultServer));

        config.Routes.MapHttpRoute(
            name: "DefaultApi",
            routeTemplate: "api/{controller}/{id}",
            defaults: new { id = RouteParameter.Optional }
        );
    }
}
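A non-Angular client can build the same multipart payload with HttpClient and HttpMessageContent from the Microsoft.AspNet.WebApi.Client package (URLs illustrative):

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;

class BatchClient
{
    // Wrap each inner request as an application/http part of a multipart/mixed body.
    public static MultipartContent BuildBatch()
    {
        var batchContent = new MultipartContent("mixed");
        batchContent.Add(new HttpMessageContent(
            new HttpRequestMessage(HttpMethod.Get, "https://localhost:8080/books")));
        batchContent.Add(new HttpMessageContent(
            new HttpRequestMessage(HttpMethod.Get, "https://localhost:8080/books/1")));
        return batchContent;
    }

    static async Task Main()
    {
        using (var client = new HttpClient())
        {
            var batchRequest = new HttpRequestMessage(HttpMethod.Post, "https://localhost:8080/api/batch")
            {
                Content = BuildBatch()
            };

            HttpResponseMessage response = await client.SendAsync(batchRequest);

            // The response is itself multipart/mixed; each part is one inner response.
            MultipartMemoryStreamProvider parts = await response.Content.ReadAsMultipartAsync();
            foreach (HttpContent part in parts.Contents)
            {
                HttpResponseMessage inner = await part.ReadAsHttpResponseMessageAsync();
                Console.WriteLine(inner.StatusCode);
            }
        }
    }
}
```

Each part of the multipart response is one inner HTTP response, read back with ReadAsHttpResponseMessageAsync.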

Now, on the client side you can use the angular-http-batcher module to make batch requests. This module hooks into the $http service and creates batch requests to batchable endpoints. Therefore, once the module is included as a dependency in your application, it will automatically batch all batchable HTTP requests.

Set up the angular-http-batcher dependency in your angular module, which can be the app, a controller or a service, preferably a common service.

var batchedServiceClientModule = angular.module('batchedServiceClientModule', ['jcs.angular-http-batch']);
batchedServiceClientModule.config([
    'httpBatchConfigProvider',
    function (httpBatchConfigProvider) {
        httpBatchConfigProvider.setAllowedBatchEndpoint(
            'https://localhost:8080',
            'https://localhost:8080/api/batch');
    }
]);

batchedServiceClientModule.factory('batchedServiceClient', [
    '$http',
    function ($http) {
        // These calls go out close together, so the batcher can combine
        // them into a single POST to the batch endpoint.
        function loadBooks() {
            $http.get('https://localhost:8080/books').then(function (response) {
                console.log('success Books - ' + response.data);
            }, function (err) {
                console.log('error Books - ' + err);
            });

            $http.get('https://localhost:8080/books/1').then(function (response) {
                console.log('success books/1 - ' + response.data);
            }, function (err) {
                console.log('error books/1 - ' + err);
            });

            $http.post('https://localhost:8080/books', {
                Name: 'Harry Potter',
                Author: 'J K Rowling'
            }).then(function (response) {
                console.log('success Post Books - ' + response.data);
            }, function (err) {
                console.log('error Post Books - ' + angular.fromJson(err));
            });
        }

        return { loadBooks: loadBooks };
    }]);

Register 'batchedServiceClientModule' with your main ng-app.