Microsoft Connect() 2015: feedback

December 15, 2015

Microsoft

Microsoft continues to surprise the IT industry with bold efforts to re-brand its image. As of now, Microsoft is an open-source, cross-platform, cross-cloud global player, fully committed to end-user experience and satisfaction. A few of the latest announcements that reinforce and emphasize Microsoft's message are the Docker, Red Hat, and Sonatype partnerships, and the open-sourcing of CoreCLR, the .NET libraries, and Visual Studio Code.

 

Note: During Day 2 of the group meeting, I had a chance to speak with the Product Manager of the iOS mobile platform for one of the communications companies… The team he manages has decided to use Azure Mobile Services for push notifications, and Azure Active Directory and Microsoft Intune to implement security policies. He was asked "Why choose the Microsoft cloud platform instead of AWS?" and his response was "capabilities, simplicity, and it's relatively inexpensive". In my opinion, it's quite meaningful and promising to hear such a response from a representative of the once "alien" culture.

  

Microsoft DevOps practices

Microsoft is a huge supporter and practitioner of the Agile project management, Continuous Provisioning, Continuous Integration, Continuous Delivery, and Continuous Insight paradigms. They also mentioned that the VSTS (Visual Studio Team Services) and Mobile Tools units are very close to switching to Continuous Deployment (vs. Continuous Delivery), and that Bing is already a continuously delivered product.
 

Public Cloud adoption

Companies that went through the initial, and usually painful, cultural transition have gained trust in cloud security and in the cloud's ability to support government regulations and privacy requirements. They are now in a position to benefit from the global cloud presence, offload infrastructure concerns, and enjoy the quick rate of innovation that the cloud provides. Cost-wise, many suggest that the initial cost saving is insignificant, around 10% or less. However, understanding how to tune cloud transactions, combined with elastic cloud services, can bring savings up to 25-30%, and a transition to a PaaS architecture might cut spending in half.
 

SAFe

Microsoft definitely provides a set of tools to manage Scaled Agile practices. However, the feeling I got is that Microsoft internally does not use the approach. Instead, they rely completely on consumer feedback. Microsoft speakers actually made a point of working on a feature only if it made it to the top of the stack by end-user or community vote. Community-driven features such as Docker support are considered strategic. The recently acquired HockeyApp platform and the feedback channel in every new Microsoft product provide the ability to collect user-experience data and react to it. The role of the business in this consumer-oriented model is to weigh highly voted features against financial and market-impact analytics and to advertise them as soon as they enter preview mode. Microsoft makes intensive use of Power BI and Azure Machine Learning services for market and financial mining.
 

Test practices

Microsoft's perspective on application testing is innovative and will require some time to digest. In short, the recommendation is to stop chasing bugs and instead focus on identifying and changing the processes and practices that yield high frequencies of production problems. According to Microsoft, it is possible to reach a practically acceptable threshold of bugs in production just by tuning the SDLC pipeline. That statement requires clarification:
  • A limited number of bugs is allowed in production
  • The major source of bugs is not the implementation but processes and practices
  • If the number of bugs does not exceed the threshold, there is no need for QA. The rationale is that developers writing automated tests, feedback from Insiders/Preview/Production users, blue-green deployment, and the feature-toggle practice (sketched below) are enough of a framework to deliver quality at speed.
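As an illustration of the last point, a feature toggle can be as small as a lookup guarding a code path, so unfinished work ships dark and is switched on later without a redeploy. A minimal sketch (the class and flag names are mine, not Microsoft's):

using System.Collections.Generic;

public static class Features
{
    // In a real system these flags would come from configuration or a remote service.
    private static readonly Dictionary<string, bool> Toggles =
        new Dictionary<string, bool> { { "NewCheckoutFlow", false } };

    public static bool IsEnabled(string name)
    {
        bool enabled;
        return Toggles.TryGetValue(name, out enabled) && enabled;
    }
}

// Usage: ship the new path disabled, flip the flag when ready.
// if (Features.IsEnabled("NewCheckoutFlow")) { /* new path */ } else { /* old path */ }
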
Functional testing

The message for functional testing is very clear:

  • Tests have to be automated
  • Tests have to be written, not recorded
  • Tests have to be written only by the developer who introduces a change and who is fully responsible for the application working in production
  • Tests have to be written before or right after a change.

 
Non-Functional tests

The value of non-functional tests has to be revisited through the prism of new, emerging concepts such as Continuous Insight, elastic scaling, and fabric deployment. Continuous Insight has three parts: availability checks, telemetry, and usage. The availability check performs intelligent application pings and is a substitute for any type of connectivity test. In the new cloud-born architecture, application telemetry and usage are connected to stream analytics and integrated with elastic services in order to notify the self-balancing, self-healing, resilient fabric cluster with provisioning or de-provisioning events. It seems to me that the classic performance-testing goals – identifying application breaking or throttling points in a pre-production environment, and resource planning – are becoming obsolete. The recommendation is to identify resource-consumption anomalies within the telemetry stream that might be due to poor application design or implementation, and convert them into technical-debt backlog items.
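For instance, feeding a timing metric into the telemetry stream is a one-liner with the Application Insights SDK; downstream analytics can then flag consumption anomalies. A rough sketch (the service and metric names are mine):

using System.Diagnostics;
using Microsoft.ApplicationInsights;

public class OrderService
{
    private readonly TelemetryClient telemetry = new TelemetryClient();

    public void ProcessOrder(string orderId)
    {
        var timer = Stopwatch.StartNew();
        // ... actual order processing ...
        timer.Stop();

        // The duration lands in the telemetry stream, where stream analytics
        // can detect anomalies and turn them into backlog or scaling signals.
        telemetry.TrackMetric("ProcessOrderDurationMs", timer.ElapsedMilliseconds);
    }
}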

“What types of functional tests are necessary?” is not the right question. The only type of automated test to start with is an isolated, requirement-based test. All other tests should be the result of application evolution – found bugs, common inconsistencies, consumer feedback, etc.
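An isolated, requirement-based test exercises one business rule with no database, network, or UI in the way. A sketch in xUnit (the calculator and the free-shipping rule are invented for the example):

using Xunit;

public class ShippingCalculator
{
    // Business rule under test: orders over $100 ship free.
    public decimal GetShippingCost(decimal orderTotal)
    {
        return orderTotal > 100m ? 0m : 9.95m;
    }
}

public class ShippingCalculatorTests
{
    [Fact]
    public void OrdersOver100DollarsShipFree()
    {
        var calculator = new ShippingCalculator();
        Assert.Equal(0m, calculator.GetShippingCost(101m));
    }
}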

In conclusion, I would suggest eliminating the multiple obsolete test practices; together with cutting documentation waste and expensive tooling, this makes application lifecycle management much cheaper, much simpler, much cleaner, and much faster.

Categories: Cloud, DevOps

Differences between IaaS, PaaS and SaaS cloud design approaches

August 26, 2013

The most popular way to explain the differences between IaaS (Infrastructure as a Service), PaaS (Platform as a Service), and SaaS (Software as a Service) is through a comparison of their stacks. All these models remove hardware concerns from the management picture; PaaS additionally abstracts away the OS, runtime, scaling, and other infrastructure-related concerns. SaaS exposes the service, service integration endpoints, and service-managed APIs.

[Image: SeparationOfResponsibilities – which layers are managed by the provider vs. the customer in IaaS, PaaS, and SaaS]
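In outline, the usual breakdown the diagram depicts runs roughly as follows:

  • IaaS – the provider manages networking, storage, servers, and virtualization; you manage the OS, runtime, data, and application.
  • PaaS – the provider additionally manages the OS, middleware, runtime, and scaling; you manage the data and application.
  • SaaS – the provider manages the entire stack; you manage your configuration and use of the service.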

In my opinion, a shorter definition would be: IaaS is a platform you run on, PaaS is a platform you build on, and SaaS is something you use or integrate with.

Categories: Cloud

Cloud computing in relation to the CAP theorem

April 23, 2013

The CAP theorem states that it is impossible for a distributed computer system to simultaneously provide all three of the following guarantees: consistency, availability, and partition tolerance. However, client interaction with a distributed system also has measurable velocity and latency. My thought is that cloud computing has the potential to bring node-synchronization time below client-interaction time – at which point the inconsistency window becomes invisible to any single client – enabling a new perspective on distributed-systems design.

Categories: Cloud

Cloud computing in relation to code quality

April 23, 2013

Listening to Shy Cohen's presentation on cloud scalability, I noticed that the ability to react to load quickly and cheaply might unintentionally create a situation where the value of high-quality, high-performing code is greatly diminished. My philosophical dilemma, as an architect, is either to prepare for the cultural war of quality vs. speed, or to embrace the new reality and find a new balance more relevant to the company's bottom line. Also, I wonder whether cloud providers are planning mining algorithms to address the most common performance/optimization patterns – for example creating indices, setting fill factor, or running .NET code in parallel where the algorithm finds it acceptable.

Categories: Cloud

RESTifying BizTalk – Part I

August 9, 2011

Recently, Leandro Diaz Guerra and I had a chance to teach BizTalk to speak REST. With the introduction of webHttpBinding, this previously challenging task has become much easier. However, communicating over the HTTP stack doesn't by itself make it RESTful. REST is an architectural style that requires careful design of resources, resource navigation, and their relations. Translating a trivial BizTalk de-batching task into RESTful terms makes the "Batch" a parent resource of the individual messages, with the following addressable schema:
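A template along these lines (my reconstruction – the parameter names follow the BizTalk context properties used later in the post):

http://host/batches/{InterchangeId}
http://host/batches/{InterchangeId}/messages/{MessageId}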

The information in curly braces represents dynamic parameters, which creates a challenge for BizTalk WCF port configuration. The solution to this problem was to modify the WebManualAddressingBehavior.cs WCF custom extension, very well described in Leandro's blog here. But first, a little bit about the BizTalk process, discipline, and the Uri template used to model REST resources.

By the time the batch is picked up by the BizTalk process, the batch resource already exists with a unique {InterchangeId}. The batch gets de-batched within a receive pipeline, producing a bunch of messages, each with a unique {MessageId} and the same {InterchangeId}. The send port picks up these messages and calls the external RESTful service. It is also solely responsible for building the HTTP header, defining the payload, injecting values into the dynamic resource Uri template, and making the POST call. There are three places within the send port configuration where you can specify the URI template:

  • Address (URI) on the General tab
  • The SOAP action header


  • The ManualAddress property of the WebManualAddressing custom WCF extension


The rules of thumb I use are the following:

  • Use the Address URI only to specify the service endpoint. In our case it will always be http://host/batches
  • Use the SOAP action header to specify the location of the resource, reserving the WebManualAddress property to override the SOAP action header in some rare cases (see the example just below).
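For instance, the SOAP action header on the send port would carry the resource template in the %…% macro syntax that the behavior resolves at run time (values illustrative):

http://host/batches/%InterchangeId%/messages/%MessageId%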

Here is a summary of the logical changes I needed to make within the WebManualAddress behavior:

  • Check if the WebManualAddress property is defined and, if not, take it from the SOAP action:

    public object BeforeSendRequest(ref Message request, IClientChannel channel)
    {
        try
        {
            // Fall back to the SOAP action header when no manual address is configured.
            if (String.IsNullOrWhiteSpace(ManualAddress.OriginalString))
            {
                this.ManualAddress = new Uri(request.Headers.Action);
            }

            // Resolve the %...% macros and point the outgoing request at the result.
            request.Headers.To = new Uri(ApplyManualAddressMacro(
                ManualAddress.OriginalString, request.Properties));
        }
        catch (Exception ex)
        {
            Debug.Write(ex.Message, AppDomain.CurrentDomain.FriendlyName);
        }
        return request;
    }
  • Substitute dynamic parameters with their values:

    private string ApplyManualAddressMacro(string manualAddress,
                                           MessageProperties messageProperties)
    {
        // Decode the template and look for %Name% macros.
        var uriAddress = HttpUtility.UrlDecode(manualAddress);
        const string regexExpression = @"%(\w*)%";
        try
        {
            var regex = new Regex(regexExpression);
            var matchCollection = regex.Matches(uriAddress);
            for (var i = 0; i < matchCollection.Count; i++)
            {
                foreach (var capture in matchCollection[i].Captures)
                {
                    // Replace each %Name% macro with the value of the matching
                    // BizTalk context property.
                    var value = capture.ToString();
                    var replacement = GetCachedValue(value.Replace("%", ""),
                        messageProperties).Replace("{", "").Replace("}", "");
                    uriAddress = uriAddress.Replace(value, replacement);
                }
            }
            return uriAddress;
        }
        catch (Exception ex)
        {
            Debug.Write(ex.Message, AppDomain.CurrentDomain.FriendlyName);
            // On failure, fall back to the unresolved template.
            return manualAddress;
        }
    }

GetCachedValue is just an optimization method that caches the relation between a template key such as %InterchangeId% (retrieved from http://host/batches/%InterchangeID% in this case) and the BizTalk fully qualified context property (namespace + name, e.g. http://schemas.microsoft.com/BizTalk/2003/system-properties#InterchangeId). I use the standard .NET MemoryCache and store entries for 10 minutes.

public string GetCachedValue(string key, IEnumerable<KeyValuePair<string, object>> messageProperties)
{
    var cache = MemoryCache.Default;
    var cacheKey = string.Format("WebManualAddressingBehavior_{0}", key);
    if (!cache.Contains(cacheKey))
    {
        // Match the short template key against the fully qualified property names.
        var item = messageProperties.FirstOrDefault(p => p.Key.Contains(key));
        if (item.Value == null)
        {
            // No matching context property: return the key unresolved.
            return key;
        }
        // Cache the mapping for 10 minutes.
        cache.Set(cacheKey, item.Value,
            new DateTimeOffset(DateTime.Now.AddSeconds(60 * 10)));
    }
    return cache[cacheKey].ToString();
}
Categories: BizTalk, REST

Security overview

June 2, 2011

1. Terminology.

In order to describe common security concepts, different companies use various notations. I will use Microsoft definitions in this overview. The best way to bring clarity and understanding to this complicated topic is to follow the evolution of security concepts. Every organization that decides to develop a custom security framework has to carefully review the lessons of the past to scope requirements and plan an implementation.

1.1        Permissions, Roles, Users, Groups and Scope.

The original idea of securing access to a resource can be described as a process of establishing relationships between a Requestor, or Principal, and the Object, or ACE (Access Control Entry). To achieve this, the ACE has to define the set of operations (permissions) that a potential Principal could perform on the object. The subset of operations available to the Principal is called the ACL (Access Control List). A principal with no ACL has no access to the object. The ACL with the full permission set has a special name – Full Access. A good example would be a File object (ACE) with a Read, Write, Delete, Create, Move permission set, and Yuriy G. as a Principal with a Read, Write ACL.

The change to this concept came as an answer to the administrator's question of how to manage ACLs between principals and objects. A simple example with 100 users and 100 objects of the same type, with 6 operations each, creates a 60,000-permutation nightmare. The solution was to group principals and permissions. A group of principals is simply called a "Group", and a group of permissions is named a "Role". This simplification lets the administrator use the group as the principal and map it to a role with already-defined ACL(s). For example, the Administrators, Developers, and Business Analysts groups of users are mapped to the Full Access, Contributors, and Readers roles respectively. Applying this concept to the above example reduces the number of permutations from 60,000 to 9.

 

1.2        Authentication and Authorization

Authentication is the process of identifying the user within the security store. If the user exists, the system issues an authentication ticket. A requestor that does not exist in the security store is mapped to an Anonymous account (ASPNET/IUSR_MACHINENAME for IIS, or Guest for NTLM).

Authorization builds the ACL in the form of a permission set or a list of roles. Authorization happens only for authenticated users. Remember that the Anonymous user, if not disabled, is a valid principal with all of its associated permissions.
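In code, once authentication has produced a principal, the authorization check reduces to interrogating that principal's roles – for example, via the standard IPrincipal interface (the role name is illustrative):

using System.Security.Principal;
using System.Threading;

public static class DocumentActions
{
    public static bool CanEdit()
    {
        // Thread.CurrentPrincipal is whatever the authentication step attached.
        IPrincipal principal = Thread.CurrentPrincipal;

        // Anonymous (unauthenticated) principals never pass this check.
        return principal != null
            && principal.Identity.IsAuthenticated
            && principal.IsInRole("Contributors");
    }
}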

1.3        Single Sign-On

Single Sign-On is a run-time mapping process between two principals located in two different security realms. The user story for SSO would be: log in once to one system and gain access to another system without being prompted to log in again. Behind the scenes, the SSO server process takes the provided credentials as a key to find credentials for the targeted system.

2. Security System design principles.

The security system has to be clear, well understood, and easy to administer and monitor. Different security providers have to expose the same contract in common terms to avoid confusion between security consumers. The implementation has to consider the possibility of reusing existing, mature security frameworks.

2.1 Grouping users and permissions

Grouping users and permissions plays an essential part in security system design. The rule of thumb is: never map (ACL) a user to a resource permission directly. Always create groups and roles and establish the relation between them. There are several standard questions that need to be answered before implementing a security system:

–  Will the user's permission set be inclusive or exclusive if the user is a member of multiple groups?

–  What is the granularity of the permission set?

–  What types of users does the application in question support?

I don't have a strong opinion about what the resulting permission set should be for a user who belongs to multiple groups – either the union of all permissions or the intersection of them. The important part is to stick with the decision made.
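With set semantics the two policies are one call apart; a quick sketch (the permission names are illustrative):

using System.Collections.Generic;
using System.Linq;

public static class PermissionMath
{
    public static void Demo()
    {
        var groupA = new HashSet<string> { "Read", "Write" };
        var groupB = new HashSet<string> { "Read", "Delete" };

        // Inclusive policy: the user receives the union of all group permissions.
        var inclusive = groupA.Union(groupB);     // Read, Write, Delete

        // Exclusive policy: the user receives only the intersection.
        var exclusive = groupA.Intersect(groupB); // Read
    }
}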

The granularity depends very much on the application. Usually, for an ASP.NET application, the Role – a group of permissions – is just an abstraction with no permissions attached, and the authorization step relies on the application logic. For more advanced scenarios, permissions can be grouped into operations, operations into tasks, and tasks into roles. A trivial example would be file permissions. The permission set for a file resource is Read, Write, Create, Delete. The tasks could be: View (Read), Modify (Read, Write), Move (Create, Delete), Full (Read, Write, Create, Delete). Finally, the roles could be Administrator (Full), Contributors (Modify, View), and Readers (View). In the end, what the administrator does is map the Administrators group to the Administrator role, and the group of external users to Readers. The management of users and their membership is completely decoupled from managing roles.
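The file example maps naturally onto a [Flags] enum, with tasks and roles expressed as combinations – a sketch of the grouping only, not a full authorization store:

using System;

[Flags]
public enum FilePermission
{
    None   = 0,
    Read   = 1,
    Write  = 2,
    Create = 4,
    Delete = 8
}

public static class Tasks
{
    public const FilePermission View   = FilePermission.Read;
    public const FilePermission Modify = FilePermission.Read | FilePermission.Write;
    public const FilePermission Move   = FilePermission.Create | FilePermission.Delete;

    // Full covers the complete permission set.
    public const FilePermission Full   = View | Modify | Move;
}

// The administrator then maps groups to roles, e.g. Administrators -> Full,
// Contributors -> Modify | View, Readers -> View.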

The last question is also specific to the application. There are two types of users: application users and network users. Application users are not recognizable by the network, and their identities are stored in a custom security store; network users' identities are stored in the network security store. If the application is meant to deal with both application and network users, it would be a good idea to use a security store that supports both types, such as AzMan.

 

2.2 Zoning 

With the emergence of Enterprise systems with numerous organization topologies and application deployment scenarios, it became obvious that instead of trying to manage the entire enterprise, it would be easier for administrators to define zones within it and manage every zone independently. This process is called scope reduction, or zoning. This paradigm allows using multiple security models and forces the principal to specify a scope prior to the authentication process.

A good example of zoning is the domain we specify during login.

Technically speaking, Domain1\Yuriy and Domain2\Yuriy are two completely different principals coming from two independent security stores, and unauthorized access is prevented at the authentication step. It is very important to understand that, in the case of a single zone/domain/realm, any user located in the security store will be authenticated without establishing an association to a party, company, or application. This association is left to the authorization step and relies completely on the application logic. In some cases, in order to resolve dependencies, an application issues custom queries against a security store outside of the security framework. The design concepts and entity model within the framework then become unclear and prone to security hacks.

 

 

 

3. Microsoft security frameworks.

3.1 Active Directory

Active Directory is a comprehensive security store for internal users. It supports authentication and authorization, user and permission grouping, and zoning. It also has built-in administration and monitoring. AD administrators usually delegate application-specific authorization to the application owners, which allows them to choose the role-definition store technology. It would be advisable to consider a SQL database or AzMan.

3.2 ADAM – Active Directory Application Mode.

ADAM is a light-weight Active Directory with the ability to store external users and references to internal users. It supports authentication, authorization, scope reduction, delegated administration, and monitoring. It is integrated with the AzMan (Authorization Manager) role store.

3.3 ASP.NET framework extensions.

ASP.NET security is built on top of ASP.NET 2.0's pluggable provider model, with a custom Membership provider implemented for user authentication and a custom Role provider for user authorization. It supports user grouping and some advanced scenarios (a group within a group). It has an administration console implemented, but doesn't support delegated administration.
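With the provider model, application code stays provider-agnostic: the same static calls work against whichever membership and role providers web.config wires up (user and role names illustrative):

using System.Web.Security;

public static class LoginHelper
{
    public static bool SignInAndCheck(string user, string password)
    {
        // Authenticate against the configured MembershipProvider...
        bool valid = Membership.ValidateUser(user, password);

        // ...then authorize against the configured RoleProvider.
        return valid && Roles.IsUserInRole(user, "Contributors");
    }
}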

3.4 Microsoft SQL ASP.NET security framework extensions.

This is the built-in Microsoft implementation of the ASP.NET provider model. It supports authentication, authorization, and scope reduction, although the Membership provider does not support user grouping. It has a very simple administration console, allows quick application integration into a security framework, and is well tested and widely understood. This implementation does not support zones: any user located in the security store will be authenticated without establishing an association to a party, company, or application. This association is left to the authorization step and relies completely on the application logic. In some cases, in order to resolve dependencies, an application issues custom queries against a security store outside of the security framework.

3.5 SharePoint security framework.

SharePoint has a comprehensive and flexible security model based on the ASP.NET 2.0 security framework. The ability to specify different providers per web application and zone allows it to support administration, authorization, and scope reduction for internal and external users. SharePoint allows delegating security administration to external users. SharePoint also supports single sign-on, portal configuration, and item-based security.

 

3.6 Windows Identity Foundation (WIF)

WIF is a new Microsoft framework used to build claims-aware and federation-capable applications; it externalizes identity logic from the application, enhancing application security and enabling interoperability.

Categories: Security

Configuring BizTalk 2006 R2 with Enterprise Library

February 5, 2009

BizTalk is a powerful tool with a lot of out-of-the-box features. However, there is a variety of tasks you would prefer to de-couple from BizTalk and delegate to an external framework specifically built for them. When I design software, I always pay attention to the post-deployment configurability of an application. I always choose an approach that lets me easily change connection information, logging, and exception handling, inject some logic, etc. The Microsoft Enterprise Library resolves such issues perfectly. So, how would one configure BizTalk to recognize and use the Enterprise Library? The answer is trivial, and this blog post's purpose is simply to document the steps I performed to make it happen.

The first step was to install the Microsoft Enterprise Library in the GAC. You can find detailed instructions in the help. I would also suggest reading Tom Hollander's blog about managing the Enterprise Library in your organization.

The second step is optional and is related to my habit of keeping a wrapper around any 3rd-party framework or solution. I have two libraries: Common.Application.Exception, with multiple ApplicationException descendants, and Common.EntLib.Logging, which encapsulates some logic.

Here is an example of an ErrorLogEntry with Category and Severity settings.

using System;
using System.Collections.Generic;
using System.Text;
using System.Diagnostics;
using Microsoft.Practices.EnterpriseLibrary.Logging;

namespace Common.Logging
{
    ///<summary>
    /// A log entry of category Error. Default priority is High.
    /// LogPriority and LogCategories are helpers defined elsewhere in this wrapper library.
    ///</summary>
    public class ErrorLogEntry : LogEntry
    {
        public static LogPriority DefaultPriority = LogPriority.High;

        ///<summary>
        /// Default constructor. Default priority is High.
        ///</summary>
        ///<param name="message">Message to be logged.</param>
        public ErrorLogEntry(string message) : this(message, DefaultPriority) { }

        ///<summary>
        /// Constructor with basic log entry information.
        ///</summary>
        ///<param name="message">Message to be logged.</param>
        ///<param name="priority">Priority of the log entry (low, high, etc.).</param>
        public ErrorLogEntry(string message, LogPriority priority)
            : base()
        {
            // Tag the entry with the Error category and severity so that
            // listeners configured for that category pick it up.
            this.Categories.Add(LogCategories.Error);
            Priority = (int)priority;
            Severity = TraceEventType.Error;
            Message = message;
        }
    }
}

Obviously, these libraries have to be signed and deployed to the GAC.

The next step is to configure the BizTalk run-time. Before you do, DO MAKE A COPY of BTSNTSvc.exe.config. Then open the Enterprise Library Configuration console, point it to BTSNTSvc.exe.config under the C:\Program Files\Microsoft BizTalk Server 2006 folder, and configure the application blocks you intend to use within your BizTalk applications.

To test an Enterprise Library call, I created a simple orchestration with a receive shape and a trigger message.

I also added an Expression shape to call the Enterprise Library.
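The Expression shape body was a one-liner along these lines (the message text is illustrative; ErrorLogEntry is the wrapper class shown above):

// Inside the BizTalk Expression shape:
Microsoft.Practices.EnterpriseLibrary.Logging.Logger.Write(
    new Common.Logging.ErrorLogEntry("Trigger message received"));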

Dropping the trigger message into the configured location produces the expected Event Log message.

I then modified the configuration by adding a new flat-file trace listener,

and added the new trace listener to the Debug category.

The result appeared in two different destinations without modifying any BizTalk artifacts.

Enjoy


Categories: BizTalk