Differences between IaaS, PaaS and SaaS cloud design approaches

August 26, 2013

The most popular way to explain the differences between IaaS (Infrastructure as a Service), PaaS
(Platform as a Service) and SaaS (Software as a Service) is through a stack comparison. All these models remove hardware concerns from the management picture; PaaS additionally abstracts the OS, runtime, scaling and other infrastructure-related concerns. SaaS exposes the service, service integration endpoints and service-managed APIs.

[Image: Separation of responsibilities]
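The usual stack comparison behind that diagram can be sketched roughly as follows (layer names vary between vendors; this is my approximation, not the original figure):

```
Layer              On-premises | IaaS     | PaaS     | SaaS
-----------------  ----------- | -------- | -------- | --------
Application        you         | you      | you      | provider
Data               you         | you      | you      | provider
Runtime            you         | you      | provider | provider
Middleware/OS      you         | you      | provider | provider
Virtualization     you         | provider | provider | provider
Servers/Storage    you         | provider | provider | provider
Networking         you         | provider | provider | provider
```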

In my opinion, the shorter definition would be: IaaS is a platform you run on, PaaS is a platform you build on, and SaaS is something you use or integrate with.

Categories: Cloud

Cloud computing in relation to the CAP theorem

April 23, 2013

The CAP theorem states that it is impossible for a distributed computer system to simultaneously provide all three of the following guarantees: consistency, availability and partition tolerance. However, client interaction with a distributed system also has a measurable velocity and latency. My thought is that cloud computing has the potential to bring node synchronization time below client interaction time, enabling a new perspective on distributed systems design.

Categories: Cloud

Cloud computing in relation to the code quality

April 23, 2013

Listening to Shy Cohen's presentation on cloud scalability, I noticed that the ability to react quickly and cheaply to load might unintentionally create a situation where the value of high-quality, high-performing code is greatly diminished. My philosophical dilemma, as an architect, is either to prepare for a cultural war over quality vs. speed, or to embrace the new reality and find a new balance more relevant to the company's bottom line. I am also wondering if cloud providers are planning mining algorithms to address the most common performance/optimization patterns: for example, creating indices, setting fill factor, or running .NET code in parallel where the algorithm finds it acceptable.

Categories: Cloud

RESTifing BizTalk – Part I

August 9, 2011

Recently, Leandro Diaz Guerra and I had a chance to teach BizTalk to speak REST. With the introduction of the webHttpBinding, this previously challenging task has become much easier. However, communicating over the HTTP stack doesn't make it RESTful. REST is an architectural style that requires careful design of resources, resource navigation and their relations. Translating a trivial BizTalk de-batching task into RESTful terms makes the "Batch" a parent resource of the individual messages, with the following addressable schema:
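The original post showed the schema as an image; based on the templates used later in this post, it presumably looked something along these lines (hypothetical reconstruction):

```
http://host/batches                                        -- the batch collection
http://host/batches/{InterchangeId}                        -- a single batch resource
http://host/batches/{InterchangeId}/messages/{MessageId}   -- an individual message
```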

The information in curly braces represents dynamic parameters, which creates a challenge for the BizTalk WCF port configuration. The solution to this problem was to modify the WebManualAddressingBehavior.cs WCF custom extension, very well described in Leandro's blog here. But first, a little bit about the BizTalk process, discipline and the URI template used to model REST resources.

By the time the batch is picked up by the BizTalk process, the batch resource already exists with a unique {InterchangeId}. The batch gets de-batched within a receive pipeline, producing a bunch of messages, each with a unique {MessageId} and the same {InterchangeId}. The send port picks up these messages and calls the external RESTful service. It is also solely responsible for building the HTTP header, defining the payload, injecting values into the dynamic resource URI template and making the POST call. There are three places within the send port configuration where you could specify the URI template:

  • Address (URI) on the General tab
  • SOAP action header


  • Defining the ManualAddress property of the WebManualAddressing custom WCF extension


The rules of thumb I use are the following:

  • Use the Address (URI) only to specify the service endpoint. In our case it will always be http://host/batches
  • Use the SOAP action header to specify the location of the resource, reserving the WebManualAddress property to override the SOAP action header in some rare cases.

Here is a summary of the logical changes I needed to make within the WebManualAddress behavior:

  • Check if the WebManualAddress property is defined and, if not, take it from the SOAP action:

    public object BeforeSendRequest(ref Message request, IClientChannel channel)
    {
        try
        {
            // Fall back to the SOAP action header when no manual address is configured.
            if (String.IsNullOrWhiteSpace(ManualAddress.OriginalString))
            {
                this.ManualAddress = new Uri(request.Headers.Action);
            }
            // Substitute dynamic parameters and set the destination address.
            request.Headers.To = new Uri(ApplyManualAddressMacro(ManualAddress.OriginalString,
                                                                 request.Properties));
        }
        catch (Exception ex)
        {
            Debug.Write(ex.Message, AppDomain.CurrentDomain.FriendlyName);
        }
        return request;
    }
  • Substitute dynamic parameters with their values:

    private string ApplyManualAddressMacro(string manualAddress,
                                           MessageProperties messageProperties)
    {
        var uriAddress = HttpUtility.UrlDecode(manualAddress);
        const string regexExpression = @"%(\w*)%";
        try
        {
            var regex = new Regex(regexExpression);
            var matchCollection = regex.Matches(uriAddress);
            for (var i = 0; i < matchCollection.Count; i++)
            {
                var captures = matchCollection[i].Captures;
                foreach (var capture in captures)
                {
                    var value = capture.ToString();
                    var replacement = GetCachedValue(value.Replace("%", ""),
                        messageProperties).Replace("{", "").Replace("}", "");
                    uriAddress = uriAddress.Replace(value, replacement);
                }
            }
            return uriAddress;
        }
        catch (Exception ex)
        {
            Debug.Write(ex.Message, AppDomain.CurrentDomain.FriendlyName);
            return manualAddress;
        }
    }

GetCachedValue is just an optimization method that caches the relation between a template key (%InterchangeId%, retrieved from http://host/batches/%InterchangeId% in this case) and the BizTalk fully qualified context property name (namespace + name: http://schemas.microsoft.com/BizTalk/2003/system-properties#InterchangeId). I use the standard .NET MemoryCache and store entries for 10 minutes.

    public string GetCachedValue(string key, IEnumerable<KeyValuePair<string, object>> messageProperties)
    {
        var cache = MemoryCache.Default;
        var cacheKey = string.Format("WebManualAddressingBehavior_{0}", key);
        if (!cache.Contains(cacheKey))
        {
            // Match the context property by name suffix (namespace + name).
            var item = messageProperties.FirstOrDefault(p => p.Key.Contains(key));
            if (item.Value != null)
            {
                cache.Set(cacheKey, item.Value,
                    new DateTimeOffset(DateTime.Now.AddSeconds(60 * 10)));
            }
            else
            {
                // Leave unresolved keys as-is so the caller can detect them.
                return key;
            }
        }
        return cache[cacheKey].ToString();
    }
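To illustrate the substitution in isolation, here is a minimal standalone sketch of the same idea, independent of the WCF and BizTalk plumbing (the names and the dictionary-based property lookup are my own simplification, not the post's code):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text.RegularExpressions;

static class UriMacro
{
    // Replace %Key% placeholders with values looked up by key suffix,
    // mimicking how BizTalk context properties are matched by name.
    public static string Apply(string template, IDictionary<string, object> properties)
    {
        return Regex.Replace(template, @"%(\w+)%", match =>
        {
            var key = match.Groups[1].Value;
            var item = properties.FirstOrDefault(p => p.Key.Contains(key));
            // Leave unresolved placeholders as-is, like GetCachedValue does.
            return item.Value?.ToString() ?? match.Value;
        });
    }
}
```

For example, with a property keyed http://schemas.microsoft.com/BizTalk/2003/system-properties#InterchangeId holding "42", the template http://host/batches/%InterchangeId% resolves to http://host/batches/42.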
Categories: BizTalk, REST

Security overview

June 2, 2011

1. Terminology.

In order to describe common security concepts, different companies use various notations. I will use Microsoft definitions in this overview. The best way to bring clarity and understanding to this complicated topic is to follow the evolution of security concepts. Every organization that decides to develop a custom security framework has to carefully review the lessons of the past to scope requirements and plan an implementation.

1.1 Permissions, Roles, Users, Groups and Scope.

The original idea of securing access to a resource can be described as a process of establishing relationships between a requestor, or Principal, and an object. The object defines the set of operations (permissions) that a potential principal could perform on it. The subset of operations granted to a particular principal is recorded as an ACE (Access Control Entry), and the collection of such entries forms the object's ACL (Access Control List). A principal with no entry in the ACL has no access to the object. An entry with the full permission set has a special name: Full Access. A good example would be a File object with a Read, Write, Delete, Create, Move permission set, and Yuriy G. as a principal with Read, Write access.

The change to this concept came as an answer to the administrator's question of how to manage the ACLs between the principals and the objects. A simple example with 100 users and 100 objects of the same type, with 6 operations each, creates a nightmare of 60,000 possible mappings. The solution was to group principals and permissions. A group of principals is simply called a "Group", and a group of permissions is named a "Role". This simplification lets the administrator use the group as the principal and map it to a role with an already defined permission set. For example: Administrators, Developers and Business Analysts groups of users mapped to the Full Access, Contributors and Readers roles respectively. Applying this concept to the above example reduces the number of mappings from 60,000 to 9.
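The arithmetic is easy to check; a quick sketch (the function names are mine):

```csharp
using System;

static class AclMath
{
    // Direct mapping: every user must be mapped to every operation on every object.
    public static int DirectMappings(int users, int objects, int operationsPerObject)
        => users * objects * operationsPerObject;

    // Grouped mapping: the administrator only maps groups to roles.
    public static int GroupedMappings(int groups, int roles)
        => groups * roles;
}
```

DirectMappings(100, 100, 6) gives 60,000, while GroupedMappings(3, 3) gives 9.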

 

1.2 Authentication and Authorization

Authentication is the process of identifying the user within the security store. If the user exists, the system issues an authentication ticket. A requestor that does not exist in the security store is mapped to an anonymous account (ASPNET / IUSR_<machine name> for IIS, or Guest for NTLM).

Authorization builds the ACL in the form of a permission set or a list of roles. Authorization happens only for authenticated users. Remember that the Anonymous user, if not disabled, is a valid principal with all associated permissions.
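In .NET terms, the two steps surface through IIdentity (authentication) and IPrincipal (authorization); a minimal modern sketch using the built-in generic types:

```csharp
using System;
using System.Security.Principal;

static class AuthDemo
{
    public static (bool authenticated, bool isReader, bool isContributor) Check()
    {
        // Authentication produced an identity; a GenericIdentity with a
        // non-empty name reports IsAuthenticated = true.
        var identity = new GenericIdentity("YuriyG");

        // Authorization consults the roles attached to the principal.
        var principal = new GenericPrincipal(identity, new[] { "Readers" });

        return (principal.Identity.IsAuthenticated,
                principal.IsInRole("Readers"),
                principal.IsInRole("Contributors"));
    }
}
```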

1.3 Single Sign-On

Single Sign-On is a run-time mapping process between two principals located in two different security realms. The user story for SSO would be: log in once to one system and gain access to another system without being prompted to log in again. Behind the scenes, the SSO server process takes the provided credentials as a key to find the credentials for the target system.

2. Security System design principles.

A security system has to be clear, well understood, and easy to administer and monitor. Different security providers have to expose the same contract in common terms to avoid confusion between security consumers. The implementation has to consider the possibility of reusing existing, mature security frameworks.

2.1 Grouping users and permissions

Grouping users and permissions plays an essential part in security system design. The rule of thumb: never map (ACL) a user to a resource permission directly. Always create groups and roles and establish relations between them. There are several standard questions that need to be answered before implementing a security system:

- Will a user's permission set be inclusive or exclusive if the user is a member of multiple groups?

- What is the granularity of the permission set?

- What types of users does the application in question support?

I don't have a strong opinion about what the resulting permission-set calculation should be for a user who belongs to multiple groups, whether it is the union of all permissions or the intersection of them. The important part is to stick with the decision made.

The granularity depends very much on the application. Usually, for an ASP.NET application, the Role (a group of permissions) is just an abstraction with no permissions attached; the authorization step relies on the application logic. For more advanced scenarios, permissions can be grouped into operations, operations into tasks, and tasks into roles. A trivial example would be file permissions. The permission set for a file resource is Read, Write, Create, Delete. The tasks could be: View (Read), Modify (Read, Write), Move (Create, Delete), Full (Read, Write, Create, Delete). Finally, the roles could be Administrator (Full), Contributor (Modify, View), Reader (View). In the end, what the administrator does is map the Administrators group to the Administrator role, and the group of external users to Readers. The management of users and their membership is completely decoupled from managing roles.
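The file example maps naturally onto a flags enum; a sketch with hypothetical names:

```csharp
using System;

[Flags]
enum FilePermission
{
    None   = 0,
    Read   = 1,
    Write  = 2,
    Create = 4,
    Delete = 8
}

static class FileTasks
{
    // Tasks group permissions; roles would in turn group tasks.
    public const FilePermission View   = FilePermission.Read;
    public const FilePermission Modify = FilePermission.Read | FilePermission.Write;
    public const FilePermission Move   = FilePermission.Create | FilePermission.Delete;
    public const FilePermission Full   = Modify | Move;

    // True when every bit of the requested permission is present in the role.
    public static bool Allows(FilePermission role, FilePermission permission)
        => (role & permission) == permission;
}
```

A Reader role (View) allows Read but not Write; an Administrator role (Full) allows everything, including Delete.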

The last question is also application-specific. There are two types of users: application users and network users. Application users are not recognizable by the network; their identities are stored in a custom security store. A network user's identity is stored in the network security store. If an application is meant to deal with both application and network users, it is a good idea to use a security store which supports both types, such as AzMan.

 

2.2 Zoning 

With the emergence of enterprise systems with numerous organization topologies and application deployment scenarios, it became obvious that instead of trying to manage an entire enterprise, it would be easier for administrators to define zones within it and manage every zone independently. This process is called scope reduction, or zoning. This paradigm allows using multiple security models and forces the principal to specify a scope prior to the authentication process.

A good example of zoning is the domain we specify during login.

Technically speaking, Domain1\Yuriy and Domain2\Yuriy are two completely different principals coming from two independent security stores, and unauthorized access is prevented at the authentication step. It is very important to understand that, in the case of a single zone/domain/realm, any user located in the security store will be authenticated without establishing an association to a party, company or application. This association is left to the authorization step and completely relies on the application logic. In some cases, in order to resolve dependencies, an application issues custom queries against the security store outside of the security framework. The design concepts and entity model within the framework then become unclear and prone to security hacks.

 

 

 

3.1 Active Directory

Active Directory is a comprehensive security store for internal users. It supports authentication and authorization, user and permission grouping, and zoning. It also has built-in administration and monitoring. AD administrators usually delegate application-specific authorization to the application owners, which allows them to choose the role-definition store technology. It would be advisable to consider a SQL database or AzMan.

3.2 ADAM - Active Directory Application Mode.

ADAM is a lightweight Active Directory with the ability to store external users and references to internal users. It supports authentication, authorization, scope reduction, delegated administration and monitoring. It is integrated with the AzMan (Authorization Manager) role store.

3.3 ASP.NET framework extensions.

ASP.NET security is developed on top of ASP.NET 2.0's pluggable provider model, with a custom membership provider implemented for user authentication and a custom role provider for user authorization. It supports user grouping and some advanced scenarios (a group within a group). It has an administration console implemented, but doesn't support delegated administration.

3.4 Microsoft SQL ASP.NET security framework extensions.

This is the built-in Microsoft implementation of the ASP.NET provider model. It supports authentication, authorization and scope reduction. The membership provider does not support user grouping. It has a very simple administration console. It allows quick integration of an application into a security framework, and it is well tested and widely understood. This implementation does not support zones: as noted in the zoning discussion above, any user located in the security store will be authenticated without establishing an association to a party, company or application, which leaves that association to the authorization step and the application logic.

3.5 SharePoint security framework.

SharePoint has a comprehensive and flexible security model based on the ASP.NET 2.0 security framework. The ability to specify different providers per web application and zone allows it to support administration, authorization and scope reduction for internal and external users. SharePoint allows delegating security administration to external users. SharePoint also supports single sign-on, portal configuration and item-based security.

 

3.6 Windows Identity Foundation (WIF)

WIF is a new Microsoft framework used to build claims-aware and federation-capable applications. It externalizes identity logic from the application, enhancing application security and enabling interoperability.

Categories: Security

Configuring BizTalk 2006 R2 with Enterprise Library

February 5, 2009

BizTalk is a powerful tool with a lot of out-of-the-box features. However, there is a variety of tasks you would prefer to decouple from BizTalk and delegate to an external framework built specifically for them. When I design software, I always pay attention to the post-deployment configurability of an application. I always choose an approach that lets me easily change connection information, logging, exception handling, inject some logic, etc. The Microsoft Enterprise Library resolves such issues perfectly. So, how would one configure BizTalk to recognize and use the Enterprise Library? The answer is trivial, and this blog post's purpose is just to document the steps I performed to make it happen.

The first step was to install the Microsoft Enterprise Library in the GAC. You can find detailed instructions in the help. I would also suggest reading Tom Hollander's blog about managing the Enterprise Library in your organization.

The second step is optional and is related to my habit of having a wrapper around any 3rd-party framework or solution. I have two libraries: Common.Application.Exception, with multiple ApplicationException descendants,

and Common.EntLib.Logging, which encapsulates some logic.

Here is an example of ErrorLogEntry with Category and Severity settings.

using System;
using System.Diagnostics;
using Microsoft.Practices.EnterpriseLibrary.Logging;

namespace Common.Logging
{
    /// <summary>
    /// A log entry of category Error. Default priority is High.
    /// </summary>
    public class ErrorLogEntry : LogEntry
    {
        public static LogPriority DefaultPriority = LogPriority.High;

        /// <summary>
        /// Default constructor. Default priority is High.
        /// </summary>
        /// <param name="message">Message to be logged.</param>
        public ErrorLogEntry(string message) : this(message, DefaultPriority) { }

        /// <summary>
        /// Constructor with basic log entry information.
        /// </summary>
        /// <param name="message">Message to be logged.</param>
        /// <param name="priority">Priority of the log entry (low, high, etc.).</param>
        public ErrorLogEntry(string message, LogPriority priority)
            : base()
        {
            this.Categories.Add(LogCategories.Error);
            Priority = (int)priority;
            Severity = TraceEventType.Error;
            Message = message;
        }
    }
}

Obviously, these libraries have to be signed and deployed to the GAC.

The next step is to configure the BizTalk run-time. Before you do, DO MAKE A COPY of BTSNTSvc.exe.config. Then open the Enterprise Library Configuration console, point it to BTSNTSvc.exe.config under the C:\Program Files\Microsoft BizTalk Server 2006 folder, and configure the application blocks you intend to use within your BizTalk applications.

To test an Enterprise Library call, I created a simple orchestration with a receive shape and a trigger message.

I also added an Expression shape to call the Enterprise Library.
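The original post showed the Expression shape contents as a screenshot; it presumably contained a call along these lines (a guess based on the wrapper library above, not the original code):

```csharp
// Inside the BizTalk Expression shape: log through the EntLib wrapper.
Microsoft.Practices.EnterpriseLibrary.Logging.Logger.Write(
    new Common.Logging.ErrorLogEntry("Trigger message received"));
```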

Dropping the trigger message into the configured location produces the expected Event Log message.

I then modified the configuration by adding a new flat-file trace listener,

and added the new trace listener to the Debug category.

The result appeared in two different destinations without modifying any BizTalk artifacts.

Enjoy!


Categories: BizTalk

Using Stored Procedure with ADO.NET Entity framework

February 4, 2009

I think that the Microsoft ADO.NET Entity Framework (EF) has a big future which goes beyond just the ORM area. However, the first release is not perfect. Erik Kindblad discusses a bunch of shortfalls of the VS 2008 SP1 EF implementation. The biggest one is the inability to map an entity to a stored procedure. My belief is that in a production environment, access to the tables has to be revoked, and the only way to access the data in a table should be through a stored procedure. I found several good articles aiming to resolve the same issue, but most of them require additional coding. Then I looked at the .edmx file to see what mapping XML had been produced by the EF designer.

<?xml version="1.0" encoding="utf-8"?>
<edmx:Edmx Version="1.0" xmlns:edmx="http://schemas.microsoft.com/ado/2007/06/edmx">
  <!-- EF Runtime content -->
  <edmx:Runtime>
    <!-- SSDL content -->
    <edmx:StorageModels>
      <Schema Namespace="ClinetSODAModel.Store" Alias="Self" Provider="System.Data.SqlClient" ProviderManifestToken="2005" xmlns:store="http://schemas.microsoft.com/ado/2007/12/edm/EntityStoreSchemaGenerator" xmlns="http://schemas.microsoft.com/ado/2006/04/edm/ssdl">
        <EntitySet Name="Barrier" EntityType="ClinetSODAModel.Store.Barrier" store:Type="Tables" store:Schema="dbo" store:Name="Barrier">
          <DefiningQuery>
            SELECT
              [Barrier].[Barrier_CD]  AS [Barrier_CD]
             ,[Barrier].[Description] AS [Description]
             ,[Barrier].[Last_Mod_Dt] AS [Last_Mod_Dt]
             ,[Barrier].[Last_Mod_ID] AS [Last_Mod_ID]
            FROM [dbo].[Barrier] AS [Barrier]
          </DefiningQuery>
        </EntitySet>
 

My first reaction was to put a stored procedure call in place of the SELECT statement.

<?xml version="1.0" encoding="utf-8"?>
<edmx:Edmx Version="1.0" xmlns:edmx="http://schemas.microsoft.com/ado/2007/06/edmx">
  <!-- EF Runtime content -->
  <edmx:Runtime>
    <!-- SSDL content -->
    <edmx:StorageModels>
      <Schema Namespace="ClinetSODAModel.Store" Alias="Self" Provider="System.Data.SqlClient" ProviderManifestToken="2005" xmlns:store="http://schemas.microsoft.com/ado/2007/12/edm/EntityStoreSchemaGenerator" xmlns="http://schemas.microsoft.com/ado/2006/04/edm/ssdl">
        <EntitySet Name="Barrier" EntityType="ClinetSODAModel.Store.Barrier" store:Type="Tables" store:Schema="dbo" store:Name="Barrier">
          <DefiningQuery>
            EXEC [dbo].[sp_GetAllBarrier]
          </DefiningQuery>
        </EntitySet>

The T-SQL generated by the EF run-time failed, but it gave me material to play with:

SELECT
[GroupBy1].[A1] AS [C1]
FROM ( SELECT cast(1 as bit) AS X ) AS [SingleRowTable1]
LEFT OUTER JOIN (SELECT
    COUNT(cast(1 as bit)) AS [A1]
    FROM (
EXEC [dbo].[sp_GetAllBarrier]
) AS [Extent1] ) AS [GroupBy1] ON 1 = 1

 

Finally, I found a construct that lets me use the stored procedure.

<?xml version="1.0" encoding="utf-8"?>
<edmx:Edmx Version="1.0" xmlns:edmx="http://schemas.microsoft.com/ado/2007/06/edmx">
  <!-- EF Runtime content -->
  <edmx:Runtime>
    <!-- SSDL content -->
    <edmx:StorageModels>
      <Schema Namespace="ClinetSODAModel.Store" Alias="Self" Provider="System.Data.SqlClient" ProviderManifestToken="2005" xmlns:store="http://schemas.microsoft.com/ado/2007/12/edm/EntityStoreSchemaGenerator" xmlns="http://schemas.microsoft.com/ado/2006/04/edm/ssdl">
        <EntityContainer Name="ClinetSODAModelStoreContainer">
          <EntitySet Name="Barrier" EntityType="ClinetSODAModel.Store.Barrier" store:Type="Tables" store:Schema="dbo" store:Name="Barrier">
            <DefiningQuery>
              SELECT * FROM OPENQUERY(LOCALSERVER, 'EXEC SODA.dbo.sp_GetAllBarrier')
            </DefiningQuery>
          </EntitySet>
 

Yes, it requires me to register the local server as a linked server:

sp_addlinkedserver @server = 'LOCALSERVER', @srvproduct = '',
    @provider = 'SQLOLEDB', @datasrc = @@servername

 

And using the full stored procedure name, including the DB name, is not convenient, but it is a step forward and buys some time before Microsoft issues a new EF release.

Categories: Entity Framework