Azure and a new DR service

Microsoft has been stepping up its game in the cloud lately. It recently released new features for automated replication of virtual machines between customer data centers and Azure. Data is encrypted as it travels between sites, and integration with System Center Virtual Machine Manager is provided. Restoration of services can also be automated with PowerShell, and it is possible to create virtual networks that span your data center and Azure. Take a look here.

Virtual machines can be brought up in an orchestrated fashion to help restore service quickly, even for complex multi-tier workloads. They have some pretty good documentation on this here, but what is also interesting is that VMware infrastructure will be able to replicate to Azure in the near future. What does that mean? Today you can replicate VMware VMs inside Azure to a customer site, but you cannot yet replicate from the customer site to Azure; that is coming soon.
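
To make the orchestration idea concrete, here is a minimal sketch of what a tiered recovery plan looks like in code. This is purely illustrative: the RecoveryClient class, its methods, and the VM names are hypothetical stand-ins, not the actual Azure / Hyper-V Recovery Manager API (which is driven through the portal and PowerShell cmdlets).

```python
# Purely illustrative: RecoveryClient and its methods are hypothetical
# stand-ins, not the real Azure / Hyper-V Recovery Manager API.
class RecoveryClient:
    def start_vm(self, name):
        print(f"starting {name}")
        return name

    def wait_until_running(self, vm):
        print(f"{vm} is running")

# Boot order for a three-tier workload: databases first, then app
# servers, then web front ends.
TIERS = [
    ["sql-01", "sql-02"],
    ["app-01", "app-02"],
    ["web-01", "web-02", "web-03"],
]

def run_recovery_plan(client, tiers):
    """Bring up each tier in order, waiting for it before starting the next."""
    for tier in tiers:
        started = [client.start_vm(name) for name in tier]
        for vm in started:
            client.wait_until_running(vm)

run_recovery_plan(RecoveryClient(), TIERS)
```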

[Screenshot: New vault]

Docker use and challenges


Docker is an open-source engine that automates the deployment of any application as a lightweight, portable, self-sufficient container that will run virtually anywhere. You can encapsulate any payload without worrying about the server type, for the most part. Docker lets infrastructure teams use the same environment that a developer builds and tests on a laptop, then run and scale it as needed: in production, on VMs, bare-metal servers, OpenStack clusters, public cloud instances, or combinations of the above. A typical environment developers are using currently is Vagrant, a tool that allows developers to create a lightweight, reproducible environment that is also considered portable. Vagrant 1.5 was recently released and is an easy install.
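
As a quick illustration of that "build once, run anywhere" idea, here is a minimal sketch using the Docker SDK for Python (the `docker` package, installed with `pip install docker`); it assumes a local Docker daemon is running, and the image tag is just an example.

```python
import docker

# Connect to the local Docker daemon (honors DOCKER_HOST and friends).
client = docker.from_env()

# Run the same self-sufficient container image anywhere a daemon runs:
# a laptop, a VM, a bare-metal server, or a cloud instance.
output = client.containers.run("ubuntu:14.04", "echo hello from a container")
print(output.decode())
```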

So why am I talking about Docker? Since I am not a big Google fan on most things, I will not talk about lmctfy; I don't think there is enough adoption of it to warrant a discussion yet, though in six months that may change. With Docker, a few features jump out and make it compelling: cross-cloud compatibility, along with incremental images and builds. Global image portability and delivery are handled via the Registry. It is also powerful from the standpoint that Docker usually imposes little or no overhead, since programs run in an isolated partition as normal applications and do not need the emulation that traditional virtualization requires. This means you could run a single instance on a physical machine, or deploy the resource seamlessly as PaaS. Horizontal and vertical scale is a big thing here.
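
Incremental builds and registry-based delivery look roughly like this with the same Python SDK (a recent version that returns a build tuple is assumed); the `myorg/myapp` image name and the `./app` build context are placeholders, and pushing assumes you have already authenticated against a registry.

```python
import docker

client = docker.from_env()

# Build an image from the Dockerfile in ./app. Unchanged layers come
# from cache, which is what makes builds incremental.
image, build_log = client.images.build(path="./app", tag="myorg/myapp:latest")

# Push the image to a registry so any other host can pull and run it
# (assumes prior authentication, e.g. `docker login`).
result = client.images.push("myorg/myapp", tag="latest")
print(result)
```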


Hadoop and OpenStack


Recently the name changed from Savanna to Sahara due to possible trademark problems (source). The code is already in Havana, but new things are coming in Icehouse, the next OpenStack release, around April 17th; it is currently in release-candidate mode.

So what is the big deal with Sahara? It honestly ties in well with the environment that the company I work for has deployed, but it is also the next big thing that is already here. The goal is to get to "analytics as a service," where you can run whatever OS you are looking for, Linux or Windows, and leverage multiple scripting languages rather than just the likes of Hive and a few others. That would make it easy to deploy and easier for the mainstream to consume. The integration with OpenStack Swift, where you can cache Swift data on HDFS, is a start (see the sketch below), but it has to get more integrated in order to gain widespread adoption.
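
To give a feel for the Swift integration, here is a rough sketch of registering a Swift object as a data source through Sahara's REST API using `requests`. The endpoint, project ID, token, and `swift://` URL are illustrative placeholders; check the Sahara API docs for your release before relying on the exact payload shape.

```python
import requests

SAHARA = "http://controller:8386/v1.1/<project-id>"   # illustrative endpoint
HEADERS = {"X-Auth-Token": "<keystone-token>",         # from a Keystone auth request
           "Content-Type": "application/json"}

# Register a Swift object as a data source that Hadoop (EDP) jobs can
# read; Sahara can then pull/cache this data onto HDFS for processing.
payload = {
    "name": "raw-logs",
    "type": "swift",
    "url": "swift://logs-container/2014/03/",          # illustrative path
    "credentials": {"user": "<swift-user>", "password": "<swift-password>"},
}
resp = requests.post(SAHARA + "/data-sources", json=payload, headers=HEADERS)
resp.raise_for_status()
print(resp.json())
```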


Why are OpenStack and Hadoop the right mixture?

  1. Hadoop provides a shared platform that can scale out.
  2. OpenStack is operationally agile and also supports scale-out.
  3. Together they combine two of the most active open-source communities.
  4. The pairing is attracting major ecosystem players that can increase adoption.

This new release of Sahara has some welcome features. Templates for both node groups and clusters are awesome, as is scaling of the cluster by adding and removing nodes. There is interoperability with different Hadoop distributions and new plugins for specific distributions (Hortonworks and vanilla Apache). The new release also enhances the API to run MapReduce jobs without exposing details of the infrastructure, and there is new network configuration support with Neutron as well. A sketch of the scaling call follows below.
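
Scaling an existing cluster is a single API call in the same style as the earlier sketch; again, the endpoint, cluster ID, and node-group name are illustrative placeholders rather than values from a real deployment.

```python
import requests

SAHARA = "http://controller:8386/v1.1/<project-id>"   # illustrative endpoint
HEADERS = {"X-Auth-Token": "<keystone-token>",
           "Content-Type": "application/json"}

# Grow the "worker" node group of an existing cluster to 6 instances;
# shrinking works the same way with a smaller count.
scale = {"resize_node_groups": [{"name": "worker", "count": 6}]}
resp = requests.put(SAHARA + "/clusters/<cluster-id>", json=scale, headers=HEADERS)
resp.raise_for_status()
```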

Why am I writing about this? Internap has many deployments for various customers but does not always advertise its capabilities. With some of the new enhancements in OpenStack, as well as the new developer direction internally, this is going to change. A new report from Frost & Sullivan names Internap the bare-metal cloud leader, and bare-metal servers are perfectly suited to address Hadoop and big data needs.

Security in the cloud: oxymoron or just common sense?

Companies are moving toward hosted private clouds, which can range from 100% virtualized to a mixture of virtual and dedicated. Many times the mix is based on how comfortable they are with certain enterprise applications being virtualized. When designing your private cloud environment, build some security principles into the design:

  • Isolate workloads and have basic security best practices defined.
  • Assume attackers are authenticated and authorized, because many times they are.
  • Realize that even with private networks in a private cloud, all data locations are potentially accessible.
  • Automate security practices where possible, with good strong cryptography (see the sketch after this list).
  • Monitor and audit as much as you can, and reduce the attack surface. In many data centers this is backed by SSAE 16 Type II compliance, but you need your customers to review it as well.
  • Review your risks and compliance, along with design assurance.
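
As one concrete example of automated, strong cryptography, here is a minimal sketch of symmetric encryption with the Python `cryptography` package (`pip install cryptography`); key storage and rotation are left out and would need a real key-management practice behind them.

```python
from cryptography.fernet import Fernet

# Generate a key once and keep it in a real key-management system,
# never alongside the data it protects.
key = Fernet.generate_key()
f = Fernet(key)

token = f.encrypt(b"customer record: ...")  # authenticated encryption (AES-CBC + HMAC)
plain = f.decrypt(token)                    # raises InvalidToken if tampered with
print(plain)
```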

One thing that I have not seen talked about much as of late is the use of honeypots. With virtual machines being so easy to deploy these days, why would you not build an environment to trap the bugs before they do any damage? A minimal sketch follows.
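
A honeypot does not have to be elaborate. Here is a minimal Python sketch of a fake service that simply logs whoever connects; the port is an arbitrary decoy choice, and a production honeypot would run isolated from anything real.

```python
import socket
from datetime import datetime

# Minimal honeypot: listen on an otherwise-unused port and log every
# connection attempt. Nothing real runs here, so any traffic is suspect.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("0.0.0.0", 2222))  # arbitrary decoy port
    srv.listen(5)
    while True:
        conn, addr = srv.accept()
        print(f"{datetime.utcnow().isoformat()} connection from {addr[0]}:{addr[1]}")
        conn.close()
```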

Microsoft Licensing and VMware Licensing

So the market says that VMware has 70+% share, Microsoft has 20+%, and the rest is split among Citrix, Red Hat, and a few others. I find it interesting that this is even a discussion now. VMware has certainly upped the ante with vSphere 5 licensing, but it is easy to point out that many large enterprises will always run at least two hypervisors, and it is usually VMware and Microsoft. These days the experts are saying that only about 60% of virtualizable workloads have been virtualized, and I tend to believe that number is lower than most think. VMware is not moving as quickly as it needs to in order to keep the mountain, and the one company it cannot afford to let catch up is Microsoft. Windows Server 2012 with Hyper-V 3.0 has closed the gap with vSphere. Most companies that offer private cloud should have more than one hypervisor offering, because that tends to be the choice at the enterprise as well.

Cloud Decisions

Private cloud is looking at a huge increase in spend over the next few years. With revenue projected to reach $24 billion by 2016, that equates to a 50% increase over the next four years (link). The question becomes: do you rent or buy your cloud?

An apartment complex is similar to public cloud.

  • Renters share infrastructure and pay for what is used.
  • Easy to leave when the contract is done.
  • Nothing custom, but cost-effective.

A house is similar to private cloud.

  • The owner controls the infrastructure, and it is customizable.
  • Security is designed for the property.
  • Dedicated resources, and more of an investment.

Too many customers have the perception that there is only one type of cloud, and many don't really understand the differences out there. Public cloud has been around for years, with Amazon being one of the first to do it. Amazon has the feature functionality many customers want, at least until they figure out what they really want. The framework used at AWS is proprietary, and hardly anyone thinks to ask their developers and architects for their thoughts on direction and where to go. OpenStack has become the alternative to AWS and what they are doing, and many companies like Internap, Rackspace, HP, and IBM have adopted it. You also have Azure from Microsoft, which is proprietary as well, tied to the .NET Framework and specifically to Microsoft. Everyone views AWS as the competition, even VMware. Companies are now covering all the bases with public cloud, so take a look at a Cloud Buyers Guide.

Service Manager Slips to 2010

System Center Service Manager, which is part of the System Center suite from Microsoft, appears to be delayed until 2010. The beta was an interesting release but then almost overnight dropped off the radar. Now it sounds like a beta refresh is tentatively planned for late 2008.

This is not a very good sign for service desk teams who, in the interim, are wanting an alternative like Remedy or Siebel, to name a few. We will see how things work out in the next few years.
