Docker research

January 12, 2017

Docker research dimensions:

  • Organize and manage cluster
    • OpenShift
    • Symphony Swarm
    • Azure Swarm
    • Docker Datacenter Swarm
  • Build application image
    • Jenkins CI
    • TFS CI
    • OpenShift CI
  • Deploy application bundle
    • Java UI – Java Service
    • Node Service – Kafka Producer
    • .NET Core – SQL on Linux (see the compose sketch after this list)
  • Build DevOps image (CI)
    • Linux, JDK 1.8, Node, npm, grunt, JUnit
    • Linux, .NET Core, Node, npm, NUnit
  • Deploy DevOps bundle (CI)
    • Jenkins CI – Linux DevOps image – Java application
    • Jenkins CI – Linux DevOps image – Node application
    • Jenkins CI – Linux DevOps image – .NET Core application
    • TFS CI – Linux DevOps image – Java application
    • TFS CI – Linux DevOps image – Node application
    • TFS CI – Linux DevOps image – .NET Core application
    • OpenShift CI – Linux DevOps image – .NET Core application
  • Develop Docker DevOps pipeline
    • Jenkins
    • TFS
  • Evaluate Docker development environment
    • Windows 10 – Nano docker container
    • Windows 10 – Linux docker container
  • Docker cluster monitoring
    • OpenShift
    • UCP
    • Azure MSOMS
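
To make the deployment dimension concrete, here is a minimal compose sketch of the ".NET Core – SQL on Linux" bundle. The SQL image appears in the image table later in this post; the application image name, port and password are placeholder assumptions:

# docker-compose.yml – hypothetical ".NET Core – SQL on Linux" bundle
version: '2'
services:
  db:
    image: microsoft/mssql-server-linux      # official SQL Server on Linux image (see table below)
    environment:
      ACCEPT_EULA: "Y"
      SA_PASSWORD: "Placeholder_password1"   # placeholder only; inject from a secret store
  web:
    image: registry.example.com/asg/dotnet-core-service   # hypothetical application image
    ports:
      - "5000:5000"
    depends_on:
      - db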

 

Docker use cases

  • As an ASG hosting engineer, I would like to automate Swarm cluster creation so that I can focus on the application deliverable. (Using Docker Engine 1.12)
  • As an ASG Technical lead, I would like to dockerize a Java microservice implementation and publish it to the Artifactory Docker repository.
  • As an ASG Technical lead, I would like to dockerize a .NET Core microservice implementation and publish it to the Artifactory Docker repository.
  • As an ASG Technical lead, I would like to dockerize a Node microservice implementation and publish it to the Artifactory Docker repository.
  • As an ASG DevOps Infrastructure engineer, I would like to dynamically invoke the Docker container specified in docker-compose.ci.build.yml from a Jenkins workflow, so that I can resolve all dependencies required by the application build process and use Docker cluster elasticity to make my DevOps practices more agile and resilient. (Focus on designing and implementing the Jenkins Docker workflow; a sketch of the compose file follows this list.)
  • As an ASG DevOps Infrastructure engineer, I would like to dynamically invoke the Docker container specified in docker-compose.ci.build.yml from a TFS workflow, so that I can resolve all dependencies required by the application build process and use Docker cluster elasticity to make my DevOps practices more agile and resilient. (Focus on designing and implementing the TFS Docker workflow.)
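
A minimal sketch of what such a docker-compose.ci.build.yml could look like, assuming a .NET Core build on the microsoft/aspnetcore-build image listed below; the paths and commands are illustrative:

# docker-compose.ci.build.yml – build container resolving all build-time dependencies
version: '2'
services:
  ci-build:
    image: microsoft/aspnetcore-build   # build image with SDK and tooling (see image table)
    volumes:
      - .:/src                          # mount the checked-out workspace
    working_dir: /src
    command: /bin/bash -c "dotnet restore && dotnet publish -c Release -o ./obj/Docker/publish"

A Jenkins or TFS build step would then run something like docker-compose -f docker-compose.ci.build.yml run ci-build, leaving the published output in the workspace for the subsequent image build.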

 

Images information

  • Microsoft official images – https://hub.docker.com/u/microsoft/ – library of Microsoft images
  • microsoft/aspnetcore-build (Linux) – https://hub.docker.com/r/microsoft/aspnetcore-build/ – docker pull microsoft/aspnetcore-build – official images for building ASP.NET Core applications
  • microsoft/azure-cli (Linux) – https://hub.docker.com/r/microsoft/azure-cli/ – docker run -it microsoft/azure-cli – Docker image for the Microsoft Azure Command Line Interface
  • microsoft/vsts-agent (Linux) – https://hub.docker.com/r/microsoft/vsts-agent/ – docker pull microsoft/vsts-agent – official images for the Visual Studio Team Services (VSTS) agent
  • microsoft/mssql-server-linux (Linux) – https://hub.docker.com/r/microsoft/mssql-server-linux/ – docker pull microsoft/mssql-server-linux – official images for Microsoft SQL Server on Linux for Docker Engine
  • CloudBees official images – https://hub.docker.com/u/cloudbees/ – library of CloudBees images
  • IBM images – https://hub.docker.com/u/ibmcom/ – library of IBM images
Categories: Docker, Uncategorized

Kubernetes terminology and concepts

January 11, 2017

Kubernetes aims to decouple applications from machines by leveraging the foundations of distributed computing and application containers. At a high level, Kubernetes sits on top of a cluster of machines and provides an abstraction of a single machine.

CLUSTERS

Clusters are the set of compute, storage, and network resources where pods are deployed, managed, and scaled. Clusters are made of nodes connected via a “flat” network, in which each node and pod can communicate with each other. A typical Kubernetes cluster size ranges from 1 to 200 nodes, and it’s common to have more than one Kubernetes cluster in a given data center based on node count and service SLAs.

PODS

Pods are a colocated group of application containers that share volumes and a networking stack. Pods are the smallest units that can be deployed within a Kubernetes cluster. They can be deployed individually and are well suited to run-once jobs, but long-running applications, such as web services, should be deployed and managed by a replication controller.
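
For reference, a minimal pod manifest could look like this (the name, label and nginx image are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: web
  labels:
    app: web          # label used later by controllers and services
spec:
  containers:
    - name: web
      image: nginx    # single application container
      ports:
        - containerPort: 80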

REPLICATION CONTROLLERS

Replication Controllers ensure a specific number of pods, based on a template, are running at any given time. Replication Controllers manage pods based on labels and status updates.
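
A minimal replication controller sketch, reusing the illustrative pod template above; the replica count is arbitrary:

apiVersion: v1
kind: ReplicationController
metadata:
  name: web-rc
spec:
  replicas: 3          # keep three pods matching the selector running at all times
  selector:
    app: web
  template:            # pod template used to create replacement pods
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx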

SERVICES

Services deliver cluster-wide service discovery and basic load balancing by providing a persistent name, address, or port for pods with a common set of labels.
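
A minimal service sketch; the selector below is the label mechanism described in the next section:

apiVersion: v1
kind: Service
metadata:
  name: web-svc
spec:
  selector:
    app: web           # route to all pods carrying this label
  ports:
    - port: 80         # persistent service port
      targetPort: 80   # container port on the selected pods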

LABELS

Labels are used to organize and select groups of objects, such as pods, based on key/value pairs.

The Kubernetes Control Plane

The control plane is made up of a collection of components that work together to provide a unified view of the cluster.

ETCD

etcd is a distributed, consistent key-value store for shared configuration and service discovery, with a focus on being simple, secure, fast, and reliable. etcd uses the Raft consensus algorithm to achieve fault tolerance and high availability. etcd provides the ability to “watch” for changes, which allows for fast coordination between Kubernetes components. All persistent cluster state is stored in etcd.

KUBERNETES API SERVER

The apiserver is responsible for serving the Kubernetes API and proxying cluster components such as the Kubernetes web UI. The apiserver exposes a REST interface that processes operations such as creating pods and services, and updating the corresponding objects in etcd. The apiserver is the only Kubernetes component that talks directly to etcd.

SCHEDULER

The scheduler watches the apiserver for unscheduled pods and schedules them onto healthy nodes based on resource requirements.

CONTROLLER MANAGER

Other cluster-level functions include managing service endpoints, handled by the endpoints controller, and node lifecycle management, handled by the node controller. When it comes to pods, replication controllers provide the ability to scale pods across a fleet of machines and ensure the desired number of pods is always running.

Each of these controllers currently lives in a single process called the Controller Manager.

The Kubernetes Node

The Kubernetes node runs all the components necessary for running application containers and load balancing service end-points. Nodes are also responsible for reporting resource utilization and status information to the API server.

DOCKER

Docker, the container runtime engine, runs on every node and handles downloading and running containers. Docker is controlled locally via its API by the Kubelet.

KUBELET

Each node runs the Kubelet, which is responsible for node registration, and management of pods. The Kubelet watches the Kubernetes API server for pods to create as scheduled by the Scheduler, and pods to delete based on cluster events. The Kubelet also handles reporting resource utilization, and health status information for a specific node and the pods it’s running.

PROXY

Each node also runs a simple network proxy with support for TCP and UDP stream forwarding across a set of pods as defined in the Kubernetes API.

Categories: Uncategorized

File based CI trigger

August 15, 2016

GitLab has introduced a file-based CI trigger. The CI uses YAML notation to describe the build pipeline, and the system recognizes a .gitlab-ci.yml file. I think this is a very powerful concept for the following reasons: it brings a text-based DSL (YAML in this case), and the file nature of the build definition allows it to be versioned and branched along with the code. The immediate benefit is that when the code base is branched, the branch comes with a build definition cloned from its parent. Microsoft could improve on this by supporting notations other than YAML, adding run-time inline information such as $agent and $Build, and bringing interactive intelligence through a collaboration (bot) platform.
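
For illustration, a minimal .gitlab-ci.yml sketch; the stage names and build commands below are assumptions for a generic project, not taken from GitLab:

# .gitlab-ci.yml – versioned and branched along with the code
stages:
  - build
  - test

build_job:
  stage: build
  script:
    - ./gradlew assemble    # placeholder build command

test_job:
  stage: test
  script:
    - ./gradlew test        # placeholder test command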

Categories: Uncategorized

Announcing IdentityServer for ASP.NET 5 and .NET Core — leastprivilege.com

April 10, 2016

Over the last couple of years, we’ve been working with the ASP.NET team on the authentication and authorization story for Web API, Katana and ASP.NET 5. This included the design around claims-based identity, authorization and token-based authentication. In the Katana timeframe we also reviewed the OAuth 2.0 authorization server middleware (and the templates around it) […]

via Announcing IdentityServer for ASP.NET 5 and .NET Core — leastprivilege.com

Categories: Uncategorized

Why Git?

January 4, 2016

Technical answer

Very often the answer to this question is given by highlighting the multiple technical advantages of a distributed version control system, such as local repositories, different topologies, out-of-the-box DR, efficient branching and merging, and others.

Sceptic

An experienced developer could argue that modern centralized version control systems offer backup capability, personal branches in the form of shelves, change set management, labeling and other advanced features. The benefits of a DVCS might not outweigh the simplicity and practicality of a centralized version control system, and the transition to Git could result in increased process complexity and a temporary loss of productivity.

Social aspect of Git

Developers engaged in a variety of Open Source projects quickly realized that Git is not just a convenient, geographically accessible repository – it is also a powerful collaboration environment. De facto, Git has become an important social media channel and an idea-exchange host for developers. Git mirrors the branching of the human thought process and provides mechanisms for individual or collaborative memory. “Why Git?” – DevOps is a collaborative culture, and Git fits it perfectly.

Git and Corporation

The transition to Git is very similar to a transition from COTS to DevOps – instead of multiple isolated teams working within closed boundaries, the focus is on the end product and the collaborative contribution to a platform evolution. The goal of adopting the Git culture is to bring Open Source style of development into the corporate environment.

Categories: DevOps

Latest trends in the application test practices

December 22, 2015

Paradigm shift

The new approach to application testing is innovative and will take some time to digest. The biggest paradigm shift is to stop chasing bugs and to focus instead on identifying and changing the processes and practices that yield high frequencies of production problems. According to Microsoft, a practically acceptable threshold of bugs in production can be achieved by tuning the SDLC pipeline alone. The previous statement requires a clarification:

  • A limited number of bugs is allowed in production
  • The major source of bugs is not the implementation but processes and practices
  • If the number of bugs does not exceed the threshold, there is no need for a QA organization. The rationale is that developers writing automated tests, feedback from Insiders/Preview/Production users, blue-green deployment and the implementation of feature toggle practices are enough of a framework to deliver quality at speed.

Functional testing

The message for functional testing is very clear:

  • Tests have to be automated
  • Tests have to be written, not recorded
  • Tests have to be written only by the developer who introduces a change and is fully responsible for the application working in production
  • Tests have to be written before or right after a change.

Non-Functional tests

The value of non-functional tests has to be revisited through the prism of new emerging concepts such as continuous insight, elastic scaling and fabric deployment. Continuous insight has three parts: availability checks, telemetry and usage. An availability check performs intelligent application pings and is a substitute for any type of connectivity test. In the new cloud-born architecture, application telemetry and usage are connected to stream analytics and integrated with elastic services, notifying the self-balancing, self-healing, resilient fabric cluster with provisioning or de-provisioning events. It seems to me that the classic performance testing goals – identifying application breaking or throttling points in a pre-production environment, and resource planning – are becoming obsolete. The recommendation is to identify resource consumption anomalies within the telemetry stream that might be due to poor application design or implementation, and to convert them into technical debt backlog items.

“What types of functional tests are necessary?” is not the right question. The only type of automated test to start with is an isolated, requirement-based test. All other tests should be the result of the application’s evolution – found bugs, common inconsistencies, consumer feedback, etc.

In conclusion, I would suggest eliminating the many obsolete test practices; together with cutting documentation waste and expensive tooling, this makes application lifecycle management much cheaper, much simpler, much cleaner and much faster.

Categories: DevOps, Test

Microsoft cross-platform offerings

December 16, 2015

Run-time Cross-platform

The natural focus of a .NET developer is typically narrowed to a runtime built on top of the CLR. However, there are different programmable runtimes supported by Windows outside of the .NET platform. Among all the platform choices, JavaScript is the most recognized, supported and adopted by developers. Below is an ECMA world diagram for ECMAScript.

[Figure: ECMA world diagram for ECMAScript]

JavaScript is supported by every browser, OS and mobile device, making it a perfect candidate for cross-platform development. JavaScript libraries such as AngularJS, React.js and Express.js have earned recognition within the IT industry and should be part of the mix for any .NET web application development. MEAN (MongoDB, Express, Angular, Node) is an end-to-end JavaScript-based stack and one of the most popular cross-platform development choices.

Hosting Cross-platform

Microsoft and the Java community have been pursuing interoperable hosting platforms for more than a decade. The Java community offers multiple JVM languages such as Java, Scala and Clojure, and JVM-oriented application servers – JBoss, WebLogic, WebSphere. The Microsoft hosting platform is built on top of the CLR and provides COM+, IIS, NT Services and self-hosting. It also supports any .NET language, such as C#, VB.NET, IronPython, ClojureCLR and others.
Lately there have been a couple of other developments that deserve community attention – Microsoft Docker support and the HttpPlatformHandler for IIS hosting.
HttpHandlers and HttpModules are the classic, reliable IIS hosting extension mechanisms. HttpPlatformHandler is a module that redirects HTTP traffic to any HTTP listener – such as Tomcat, Ruby or Kestrel – without losing IIS’s comprehensive process run-time and security management. It has been used by Microsoft Azure cloud services and is now offered as a separate install for IIS 8.
Docker is an emerging technology with the potential to impact all aspects of the SDLC. The Docker architecture allows business logic, runtime and configuration to be packaged and shipped as a single image to a Docker registry for future consumption by any runtime. Microsoft’s offerings in this space are Windows Container Server and Nano Server (a UI-less, scriptable Windows Server with CLI, REST, WinRT and PowerShell DSC access). The .NET community will definitely need to catch up on the Docker infrastructure, PowerShell DSC, SSH, the YAML manifest format and the Docker-related Windows Server architecture.
 
Windows Cross-Platform

The Microsoft OS is going through a dramatic change in order to remove legacy code, legacy architecture and over 30 years of dependencies. Over those years the Microsoft OS supported multiple kernels – Windows, Windows Server, Xbox and others – which proved difficult to manage and secure.

Windows Platform Convergence

With Windows 10 and Windows Server 2016, Microsoft is completing a convergence to a much cleaner model that in many ways resembles a Linux distribution. The Core OS will host the kernel and will be part of every Windows distribution – Client, Server, Mobile, Tablet and Xbox.

As a result, applications written within the scope of the Core OS will be able to run on any Windows device without changes. Such applications are branded as Universal. The Universal Application should be considered the platform of choice for any smart/rich/thick desktop application, as well as for tablet or mobile clients.

[Figure: Windows 10 multi-device convergence]

 

.NET Cross-Platform

The .NET platform, an abstraction layer on top of Windows, is going through a significant re-write. While .NET 4.6 is the new fully featured Windows implementation of the CLR, Microsoft has also put full commercial backing behind the Mono Linux CLR implementation. It has also introduced CoreCLR, a subset of the CLR capable of running cross-platform (Windows, Linux, Mac OS). Both Mono and CoreCLR are open source; however, Mono focuses explicitly on the maturity of the Linux implementation, while CoreCLR focuses on portability.

[Figure: .NET platform landscape]

The .NET community should react to the current Microsoft and industry trends and start extending its Windows-oriented skill set into unfamiliar territories such as Linux and dnx/dnvm. Selecting CoreCLR libraries and tools makes applications portable across multiple platforms and devices, creating a significant return on investment. ASP.NET 5, MVC 6, EF 7 and the Kestrel web server are complete re-writes of the corresponding technologies using only the CoreCLR subset of .NET 4.6.

[Figure: Windows 10 platform and tooling]
 
Cloud Cross-Platform

A Cloud Cross-Platform concept is known as PaaS – Platform as a Service. The idea behind PaaS is to abstract away the technical aspects of the hardware and OS and to focus only on the service offering. Below is a diagram showing the PaaS scope in comparison:

[Figure: separation of responsibilities]

A good example that highlights the PaaS mindset is the implementation of an asynchronous message interchange scenario between B2B or EAI applications. This enterprise pattern usually requires a reliable queue with duplicate message detection and poison message routing. Azure’s “Service Bus” queue and AWS’s “SQS” provide such services, as well as REST or AMQP channels to access them. Artifacts such as hardware, OS and runtime are irrelevant. Moreover, protocol abstraction such as REST over HTTP or AMQP allows switching from one cloud to another without loss of functionality.

Categories: Cloud