
Something @ Microservices…

Application architectures have evolved over the years in various ways, as shown below.

[Figure: Evolution of application architectures toward microservices]

As you can see, the idea behind microservices is to divide a large monolithic web application into smaller, independently deployable pieces.

[Figure: Monolithic application vs. microservices]

So why Microservices?

  • Build and operate services at scale
  • Improved resource utilization to reduce cost (especially considering cloud deployments)
  • Fault Isolation
  • Continuous Innovation
  • Small, focused teams (e.g. the ‘two-pizza team’: if you can’t feed a team with two pizzas, it’s too large)
  • Services can be written in any language and framework.

What are Microservices?

Microservices are…

  • Autonomous – A microservice is a self-contained unit of functionality with loosely coupled dependencies on other services.
  • Isolated – A microservice is a unit of deployment that can be modified, tested and deployed as a unit without impacting other areas of a solution
  • Elastic – A microservice can be stateful or stateless and can be scaled independently of other services
  • Resilient – A microservice is fault tolerant and highly available
  • Responsive – A microservice responds to requests in a reasonable amount of time
  • Intelligent – The intelligence in a system is found in the endpoints, not on the wire. An ESB is an anti-pattern for microservices.
  • Message Oriented – Microservices rely on asynchronous message passing to establish boundaries between components; applications are composed of multiple microservices
  • Programmable – Microservices provide APIs for access by developers and administrators
  • Configurable – Microservices provide an API and/or a console that provides access to administrative operations
  • Automated – The lifecycle of a microservice is managed through automation that includes dev, build, test, staging, production and distribution

Benefits

  • Evolutionary – Can be developed alongside existing monolithic applications providing a bridge to a future state
  • Open – Language agnostic APIs, Highly decoupled
  • Resilient – No monolith to fall over, designed for failure
  • Speed of Development – Adding, updating, and maintaining services can be done at velocity
  • Reuse – Reusable and Composable
  • Deployment Governance – Services are deployed independently
  • Scale Governance – On-demand scaling of smaller services leads to better cost control
  • Replaceable – Services can be rewritten and replaced with minimal downstream impact
  • Versioned – New APIs can be released without impacting clients that use previous APIs
  • Owned – Microservices are typically owned by one team from development through deployment

Challenges with a Microservices Approach

  • Communication is Key – Communication across teams becomes critical
  • Automation is not an Option – Speed of change requires investment in automation
  • Platform Matters – Your platform must support elastic scale and resilience.
  • Versioning must be Supported – Composability requires versioning
  • Testing – Unit, inter-service, extra-service, composition testing
  • Discoverability – The ability to locate services in a distributed environment without being tightly coupled

Twelve Factor Apps…

Microservices are influenced by the Twelve-Factor App principles.

  • Codebase – One codebase tracked in revision control, many deploys. A codebase is any single repo or any set of repos that share a root commit (in a decentralized revision control system like Git). One codebase maps to many deploys. There is always a one-to-one correlation between the codebase and the app:
      • If there are multiple codebases, it’s not an app – it’s a distributed system. Each component in a distributed system is an app, and each can individually comply with twelve-factor.
      • Multiple apps sharing the same code is a violation of twelve-factor. The solution here is to factor shared code into libraries which can be included through the dependency manager.

    There is only one codebase per app, but there will be many deploys of the app. A deploy is a running instance of the app. This is typically a production site, and one or more staging sites. Additionally, every developer has a copy of the app running in their local development environment, each of which also qualifies as a deploy.

  • Config – Store config in the environment. An app’s config is everything that is likely to vary between deploys (staging, production, developer environments, etc.). This includes connection strings, resource handles to the database, Memcached, and other backing services, and credentials to external services such as Amazon S3 or Twitter. The twelve-factor app stores config in environment variables (often shortened to env vars or env). Env vars are easy to change between deploys without changing any code; unlike config files, there is little chance of them being checked into the code repo accidentally; and unlike custom config files, or other config mechanisms, they are a language- and OS-agnostic standard.
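
    As a minimal sketch of this idea (the variable names DATABASE_URL, CACHE_URL, and SMTP_HOST are illustrative, not part of the original post), a Python process would read everything that varies between deploys from its environment:

        import os

        # Anything that differs between deploys comes from the environment,
        # never from a file checked into the repo.
        DATABASE_URL = os.environ["DATABASE_URL"]          # e.g. a SQL Server connection string
        CACHE_URL = os.environ.get("CACHE_URL", "redis://localhost:6379/0")
        SMTP_HOST = os.environ.get("SMTP_HOST", "localhost")
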
  • Backing services – Treat backing services as attached resources. A backing service is any service the app consumes over the network as part of its normal operation. Examples include datastores (such as SQL Server), messaging/queueing systems, SMTP services for outbound email, and caching systems (such as Redis). The code for a twelve-factor app makes no distinction between local and third-party services. To the app, both are attached resources, accessed via a URL or other locator/credentials stored in the config. A deploy of the twelve-factor app should be able to swap out a local SQL Server database with one managed by a third party (such as Azure) without any changes to the app’s code. Likewise, a local SMTP server could be swapped with a third-party SMTP service without code changes. In both cases, only the resource handle in the config needs to change. Resources can be attached to and detached from deploys at will. For example, if the app’s database is misbehaving due to a hardware issue, the app’s administrator might spin up a new database server restored from a recent backup. The current production database could be detached, and the new database attached – all without any code changes.
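
    A hedged sketch of the same point (DATABASE_URL is again an illustrative name): because the resource locator lives only in config, detaching one database and attaching another is an environment change, not a code change.

        import os
        from urllib.parse import urlparse

        def attach_database():
            """Resolve the attached database purely from its locator in config."""
            url = urlparse(os.environ["DATABASE_URL"])
            # Identical code whether the host is localhost or a managed service;
            # swapping the resource only means changing the environment variable.
            return {
                "host": url.hostname,
                "port": url.port,
                "user": url.username,
                "database": url.path.lstrip("/"),
            }
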
  • Build, release, run – Strictly separate build and run stages. A codebase is transformed into a (non-development) deploy through three stages:
      • The build stage is a transform which converts a code repo into an executable bundle known as a build. Using a version of the code at a commit specified by the deployment process, the build stage fetches vendor dependencies and compiles binaries and assets.
      • The release stage takes the build produced by the build stage and combines it with the deploy’s current config. The resulting release contains both the build and the config and is ready for immediate execution in the execution environment.
      • The run stage (also known as “runtime”) runs the app in the execution environment, by launching some set of the app’s processes against a selected release.

    The twelve-factor app uses strict separation between the build, release, and run stages. For example, it is impossible to make changes to the code at runtime, since there is no way to propagate those changes back to the build stage. Builds are initiated by the app’s developers whenever new code is deployed. Runtime execution, by contrast, can happen automatically in cases such as a server reboot, or a crashed process being restarted by the process manager. Therefore, the run stage should be kept to as few moving parts as possible, since problems that prevent an app from running can cause it to break in the middle of the night when no developers are on hand. The build stage can be more complex, since errors are always in the foreground for a developer who is driving the deploy.
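
    The following sketch is only illustrative (none of these function names come from the post); it shows the three stages as separate steps, where config enters at release time and the run stage never rebuilds anything:

        import time

        def build(commit_sha):
            """Build stage: turn one specific commit into an executable bundle."""
            # (fetch vendor dependencies, compile binaries and assets, ...)
            return {"commit": commit_sha, "artifact": f"app-{commit_sha}.tar.gz"}

        def release(build_output, config):
            """Release stage: combine an immutable build with the deploy's current config."""
            return {
                "id": f"v{int(time.time())}",  # every release gets a unique identifier
                "build": build_output,
                "config": config,
            }

        def run(release_bundle):
            """Run stage: launch processes against a selected release; nothing is rebuilt here."""
            print("starting", release_bundle["build"]["artifact"], "as release", release_bundle["id"])

        run(release(build("1a2b3c"), {"DATABASE_URL": "..."}))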

  • Processes – Execute the app as one or more stateless processes. The app is executed in the execution environment as one or more processes. Twelve-factor processes are stateless and share-nothing. Any data that needs to persist must be stored in a stateful backing service, typically a database or cache. The memory space or filesystem of the process can be used as a brief, single-transaction cache – for example, downloading a large file, operating on it, and storing the results of the operation in the database. The twelve-factor app never assumes that anything cached in memory or on disk will be available on a future request or job – with many processes of each type running, chances are high that a future request will be served by a different process. Even when running only one process, a restart (triggered by code deploy, config change, or the execution environment relocating the process to a different physical location) will usually wipe out all local (e.g., memory and filesystem) state. Some web systems rely on “sticky sessions” – that is, caching user session data in the memory of the app’s process and expecting future requests from the same visitor to be routed to the same process. Sticky sessions are a violation of twelve-factor and should never be used or relied upon. Session state data is a good candidate for a datastore that offers time-expiration, such as Memcached or Redis.
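
    A hedged sketch of keeping processes stateless (it assumes the third-party redis-py package; REDIS_URL and the session helpers are illustrative): session data goes into a backing service with time-expiration instead of process memory, so no sticky sessions are needed.

        import json
        import os

        import redis  # third-party redis-py client, assumed to be installed

        store = redis.Redis.from_url(os.environ.get("REDIS_URL", "redis://localhost:6379/0"))
        SESSION_TTL = 30 * 60  # seconds; session data expires rather than living in the process

        def save_session(session_id, data):
            # Any process can serve the next request, because the state lives
            # in the backing service, not in this process's memory.
            store.setex(f"session:{session_id}", SESSION_TTL, json.dumps(data))

        def load_session(session_id):
            raw = store.get(f"session:{session_id}")
            return json.loads(raw) if raw else {}
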
  • Port binding – Export services via port binding. Web apps are sometimes executed inside a webserver container. The twelve-factor app is completely self-contained and does not rely on runtime injection of a webserver into the execution environment to create a web-facing service. The web app exports HTTP as a service by binding to a port, and listening to requests coming in on that port. In a local development environment, the developer visits a service URL like http://localhost:5000/ to access the service exported by their app. In deployment, a routing layer handles routing requests from a public-facing hostname to the port-bound web processes. Note also that the port-binding approach means that one app can become the backing service for another app, by providing the URL to the backing app as a resource handle in the config for the consuming app.
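
    A minimal, self-contained sketch using only the Python standard library (the PORT variable and greeting are illustrative): the process creates its own web-facing service by binding to a port taken from config.

        import os
        from http.server import BaseHTTPRequestHandler, HTTPServer

        class Handler(BaseHTTPRequestHandler):
            def do_GET(self):
                self.send_response(200)
                self.send_header("Content-Type", "text/plain")
                self.end_headers()
                self.wfile.write(b"hello from a self-contained, port-bound process\n")

        if __name__ == "__main__":
            # The port comes from config; a routing layer maps the public hostname to it.
            port = int(os.environ.get("PORT", "5000"))
            HTTPServer(("", port), Handler).serve_forever()
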
  • Concurrency – Scale out via the process model. Every process inside your application should be treated as a first-class citizen. That means that each process should be able to scale, restart, or clone itself when needed. This approach will improve the sustainability and scalability of your application as a whole. Using this model, the developer can architect their app to handle diverse workloads by assigning each type of work to a process type. For example, HTTP requests may be handled by a web process, and long-running background tasks handled by a worker process. The process model truly shines when it comes time to scale out. The share-nothing, horizontally partitionable nature of twelve-factor app processes means that adding more concurrency is a simple and reliable operation. The array of process types and number of processes of each type is known as the process formation.
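
    As an illustrative sketch of a process formation (the web and worker bodies here are placeholders): each process type handles one kind of work, and scaling out means changing the count for that type.

        import multiprocessing
        import time

        def web():
            # In a real deploy this would be the port-bound HTTP process.
            while True:
                time.sleep(1)

        def worker():
            # Long-running background work gets its own process type.
            while True:
                time.sleep(5)

        # The process formation: process type -> number of processes of that type.
        FORMATION = {web: 2, worker: 1}

        if __name__ == "__main__":
            procs = [multiprocessing.Process(target=fn)
                     for fn, count in FORMATION.items()
                     for _ in range(count)]
            for p in procs:
                p.start()
            for p in procs:
                p.join()
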
  • Disposability – Maximize robustness with fast startup and graceful shutdown. The twelve-factor app’s processes are disposable, meaning they can be started or stopped at a moment’s notice. This facilitates fast elastic scaling, rapid deployment of code or config changes, and robustness of production deploys. Processes should strive to minimize startup time. Ideally, a process takes a few seconds from the time the launch command is executed until the process is up and ready to receive requests or jobs. Short startup time provides more agility for the release process and scaling up; and it aids robustness, because the process manager can more easily move processes to new physical machines when warranted. Processes shut down gracefully when they receive a signal from the process manager. For a web process, graceful shutdown is achieved by ceasing to listen on the service port (thereby refusing any new requests), allowing any current requests to finish, and then exiting. Implicit in this model is that HTTP requests are short (no more than a few seconds), or in the case of long polling, the client should seamlessly attempt to reconnect when the connection is lost. Processes should also be robust against sudden death, in the case of a failure in the underlying hardware. While this is a much less common occurrence than a graceful shutdown, it can still happen; a robust queueing backend that returns jobs to the queue when a worker disconnects or times out helps here.
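
    A minimal sketch of graceful shutdown for a worker-style process (the shutdown flag and the work loop are illustrative): on SIGTERM the process stops taking new work, finishes the current item, and exits.

        import signal
        import sys
        import time

        shutting_down = False

        def handle_sigterm(signum, frame):
            # Stop taking new work; the loop below finishes its current item and exits.
            global shutting_down
            shutting_down = True

        signal.signal(signal.SIGTERM, handle_sigterm)

        if __name__ == "__main__":
            while not shutting_down:
                # ...process one unit of work, then re-check the flag...
                time.sleep(1)
            sys.exit(0)
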
  • Dev/prod parity – Keep development, staging, and production as similar as possible. Historically, there have been substantial gaps between development (a developer making live edits to a local deploy of the app) and production (a running deploy of the app accessed by end users). These gaps manifest in three areas:
    • The time gap: A developer may work on code that takes days, weeks, or even months to go into production.
    • The personnel gap: Developers write code, ops engineers deploy it.
    • The tools gap: Developers may be using a certain stack while the production deploy uses a different stack.

    The twelve-factor app is designed for continuous deployment by keeping the gap between development and production small. Looking at the three gaps described above:

        • Make the time gap small: a developer may write code and have it deployed hours or even just minutes later.
        • Make the personnel gap small: developers who wrote code are closely involved in deploying it and watching its behavior in production.
        • Make the tools gap small: keep development and production as similar as possible.

    Developers sometimes find great appeal in using a lightweight backing service in their local environments, while a more serious and robust backing service will be used in production – for example, local process memory for caching in development and Memcached in production. The twelve-factor developer resists the urge to use different backing services between development and production, even when adapters theoretically abstract away any differences in backing services. Differences between backing services mean that tiny incompatibilities crop up, causing code that worked and passed tests in development or staging to fail in production. These types of errors create friction that disincentivizes continuous deployment. The cost of this friction and the subsequent dampening of continuous deployment is extremely high when considered in aggregate over the lifetime of an application.

  • Logs – Treat logs as event streams. Logs provide visibility into the behavior of a running app. In server-based environments they are commonly written to a file on disk (a “logfile”); but this is only an output format. Logs are the stream of aggregated, time-ordered events collected from the output streams of all running processes and backing services. Logs in their raw form are typically a text format with one event per line (though backtraces from exceptions may span multiple lines). Logs have no fixed beginning or end, but flow continuously as long as the app is operating. A twelve-factor app never concerns itself with routing or storage of its output stream. It should not attempt to write to or manage logfiles. Instead, each running process writes its event stream, unbuffered, to stdout. During local development, the developer will view this stream in the foreground of their terminal to observe the app’s behavior.

    In staging or production deploys, each process’s stream will be captured by the execution environment, collated together with all other streams from the app, and routed to one or more final destinations for viewing and long-term archival. These archival destinations are not visible to or configurable by the app, and instead are completely managed by the execution environment. Open-source log routers (such as Logplex and Fluentd) are available for this purpose.

    The event stream for an app can be routed to a file, or watched via realtime tail in a terminal. Most significantly, the stream can be sent to a log indexing and analysis system such as Splunk, or a general-purpose data warehousing system such as Hadoop/Hive. These systems allow for great power and flexibility for introspecting an app’s behavior over time, including:

    • Finding specific events in the past.
    • Large-scale graphing of trends (such as requests per minute).
    • Active alerting according to user-defined heuristics (such as an alert when the quantity of errors per minute exceeds a certain threshold).
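
    A minimal sketch using only the standard library: the app writes its event stream, one event per line, to stdout and leaves routing, collation, and storage to the execution environment.

        import logging
        import sys

        # The execution environment, not the app, decides where this stream goes.
        # Run with `python -u` (or PYTHONUNBUFFERED=1) so stdout stays unbuffered when piped.
        logging.basicConfig(
            stream=sys.stdout,
            level=logging.INFO,
            format="%(asctime)s %(levelname)s %(message)s",
        )
        log = logging.getLogger("app")

        log.info("request handled path=/orders status=200 duration_ms=42")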

  • Admin processes – Run admin/management tasks as one-off processes. The process formation is the array of processes that are used to do the app’s regular business (such as handling web requests) as it runs. Separately, developers will often wish to do one-off administrative or maintenance tasks for the app, such as:
    • Running database migrations
    • Running a console to run arbitrary code or inspect the app’s models against the live database.
    • Running one-time scripts committed into the app’s repo

    One-off admin processes should be run in an environment identical to that of the regular long-running processes of the app. They run against a release, using the same codebase and config as any process run against that release. Admin code must ship with application code to avoid synchronization issues.
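
    As a hedged sketch (the path scripts/migrate.py and the migration body are illustrative): a one-off task is committed to the app’s repo and reads the same config as the long-running processes, so it always runs against the same release and the same attached resources.

        # scripts/migrate.py -- committed alongside the application code
        import os

        def migrate():
            # Same resource handle as the web and worker processes, so the
            # one-off task runs against the deploy's attached database.
            database_url = os.environ["DATABASE_URL"]
            print("running migrations against", database_url)
            # ...apply schema changes here...

        if __name__ == "__main__":
            migrate()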