Azure Stack delayed

It appears that Azure Stack has been delayed for a variety of reasons. Microsoft had said they were going to launch with just Dell, HPE, and Lenovo. Does anyone else see something missing here? With hyper-converged taking names at the moment, I would have thought Cisco and Nutanix were obvious inclusions as well. It seems odd to leave them out; maybe the delay will help Microsoft come to their senses and add them. Many of the customers I talk to have API-driven provisioning stories and would say the OEMs Microsoft is starting with are behind on integration of the network, storage, and compute layers. If you want to do something AWS is not doing, and do it well, you don't start with the hardware vendors that are playing catch-up in the space. Again, that is my opinion, but it seems very easy to figure out.

This is also based on the move away from letting customers use their own hardware for deployment, which is a pivot away from the correct direction. Pre-validated hardware is probably a good thing, but if I already have HPE, Dell, or Lenovo gear, I don't want to go out and purchase new hardware just because the OEM says it didn't test that version. This could work really well if Microsoft decides to include recently released hardware that customers are already consuming. If they don't, the adoption rate will be low and slow and will give AWS time to catch up, if they haven't already. Microsoft cannot delay this for very long: AWS is known for being quick to market, and if they deem this a weakness they will address it in a business quarter or two and keep taking over the world. AWS is doing to Microsoft what Microsoft used to do to others. Game on, Microsoft; can you rise to the occasion? I hope so, as I don't see anyone else catching AWS at the moment. Delaying the release of Azure Stack to the middle of 2017 is a HUGE mistake in my view, as it gives AWS way too much time to close the gap on a good differentiator that could help take market share. The technical preview is still out there, but I would say nobody is going to use it now that Microsoft has changed direction again.

Nano Server looks good

Nano Server and Container service

Nano Server is a 64-bit-only, headless installation option that's baked into Windows Server 2016. Nano Server can't run the full .NET Framework; it runs on the CoreCLR runtime environment, which gives you access to most .NET Framework capabilities. CoreCLR is open-sourced as well; take a look at the discussion site, which has great information. Applicable use cases would be a web server, a DNS server, or a container server with Docker, so almost any compatible application you can put into Docker.

Nano Server has a very small footprint, and the first version shows some great improvements versus full Windows Server:

  • 93 percent lower VHD size
  • 92 percent fewer critical bulletins
  • 80 percent fewer reboots

To achieve these results Microsoft removed some parts of Windows Server like:

  • GUI stack
  • 32-bit support (WOW64)
  • MSI support
  • RDP
  • Some default Server Core components

A couple of interesting points that I found: when you boot the VM for the first time, you have to press F11, per the on-screen text, to set the password from the VM console. Then you manage it remotely with tools like PowerShell remoting and the Server management tools in Azure. Take a look at Windows Server Evaluations.
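As a minimal sketch of what that remote management looks like, assuming the Nano Server came up at 192.168.1.50 (a placeholder address) and you are using the Administrator password you set at the console:

    # Trust the Nano Server host for WinRM, then open a remote session.
    # The IP address below is a placeholder for your Nano Server.
    Set-Item WSMan:\localhost\Client\TrustedHosts "192.168.1.50" -Force
    $cred = Get-Credential "Administrator"
    Enter-PSSession -ComputerName "192.168.1.50" -Credential $cred

From there you get a standard PowerShell prompt on the Nano Server and can configure it without any local GUI, which is the whole point of headless.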

Azure and new DR service

As of late, Microsoft has been stepping up their game in the cloud. They recently released some new features around automated replication of virtual machines between customer data centers and Azure. Encryption is applied to data as it travels between sites, and integration with System Center Virtual Machine Manager is provided. Restoration of services can also be automated with PowerShell, and it is possible to create virtual networks that span your data center and Azure. Take a look here.
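Here is a rough sketch of what that PowerShell automation can look like, assuming the classic Azure PowerShell Site Recovery cmdlets of the time; the vault credentials path and recovery plan name are placeholders:

    # Load the vault settings, find a recovery plan, and kick off a test failover.
    # The file path and plan name are placeholders for illustration.
    Import-AzureSiteRecoveryVaultSettingsFile -Path "C:\ASR\MyVault.VaultCredentials"
    $plan = Get-AzureSiteRecoveryRecoveryPlan -Name "MultiTierApp"
    $job = Start-AzureSiteRecoveryTestFailoverJob -RecoveryPlan $plan -Direction PrimaryToRecovery
    Get-AzureSiteRecoveryJob -Job $job   # poll the job for status

A test failover like this lets you prove out your DR story without disturbing the production replication.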

Virtual machines can be brought up in an orchestrated fashion to help restore service quickly, even for complex multi-tier workloads. They have some pretty good documentation on this here, but what is also interesting is that much VMware infrastructure will be able to replicate to Azure in the near future. What does that mean? You can replicate VMware VMs running inside Azure back to the customer site, but you cannot replicate from the customer site to Azure yet; that is coming soon.


Docker use and challenges


Docker is an open-source engine that automates the deployment of any application as a lightweight, portable, self-sufficient container that will run virtually anywhere. You can encapsulate any payload, for the most part without concern for the server type. Using Docker allows infrastructure to use the same environment that a developer builds and tests on a laptop, and then to run and scale it as needed. It can run in production on VMs, bare-metal servers, OpenStack clusters, public instances, or combinations of the above. The typical environment developers are using currently is called Vagrant, a tool that allows developers to create a lightweight, reproducible environment that is also considered portable. Vagrant 1.5 was recently released and is an easy install.
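To make that concrete, here is a minimal sketch of the basic build-and-run workflow; the image name and port are made up for illustration:

    # Build an image from the Dockerfile in the current directory, then run it.
    # "myapp" and port 8080 are placeholder values.
    docker build -t myapp:1.0 .
    docker run -d -p 8080:8080 --name myapp-prod myapp:1.0
    docker ps   # confirm the container is up

The same image the developer built on the laptop is what runs in production, which is what kills the "works on my machine" problem.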

So why am I talking about Docker? As I am not a big Google fan on most things, I will not talk about lmctfy; I don't think there is enough adoption of it to warrant a discussion yet. In six months that may change. With Docker, a few features jump out that make it compelling: cross-cloud compatibility, along with incremental images and builds. Global image portability and delivery are done via the Registry. It is also powerful from the standpoint that Docker usually imposes little or no overhead, since programs use a virtual partition and run as normal applications, without the emulation that traditional virtualization requires. This means you could have a single instance on a physical machine or just deploy the resource seamlessly as PaaS. Horizontal and vertical scale is a big thing here.
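That image portability through the Registry boils down to a tag, a push, and a pull; the registry host below is a placeholder:

    # Tag the local image for a registry, push it, then pull it on any other host.
    # "registry.example.com" is a placeholder registry address.
    docker tag myapp:1.0 registry.example.com/myapp:1.0
    docker push registry.example.com/myapp:1.0
    docker pull registry.example.com/myapp:1.0   # run this from the target host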

Hadoop and OpenStack

Recently the name changed from Savanna to Sahara due to possible copyright problems (source). The code is already in Havana, but new things are coming in Icehouse, the next OpenStack release, due around April 17th; it's currently in Release Candidate mode.

So what is the big deal with Sahara? It honestly ties in well with the environment that the company I work for has deployed, but it is also the next big thing that is already here. The goal is to get to "analytics as a service," where you can connect whatever OS you are looking for, either Linux or Windows, and leverage multiple scripting languages, not just the likes of Hive and a few others. That would make it easy to deploy and then easier for the mainstream to consume. Some of the integration with OpenStack Swift, where you can cache Swift data on HDFS, is a start, but it has to get more integrated in order to see widespread adoption.


Why are OpenStack and Hadoop the right mixture?

  1. Hadoop provides a shared platform that can scale out.
  2. OpenStack is agile on operations and supports scale out.
  3. It combines two of the most active open-source communities.
  4. It attracts major ecosystem players that can increase adoption.

This new release of Sahara has some welcome features. Templates for both node groups and clusters are awesome, as is scaling a cluster by adding and removing nodes. There is interoperability with different Hadoop distributions and new plugins for specific distributions (Hortonworks and vanilla Apache). The new release also enhances the API to run MapReduce jobs without exposing details of the infrastructure, and there is new network configuration support with Neutron as well.
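As a rough sketch of what the template piece looks like against Sahara's REST API (the v1.1 API as best I recall it; the host, tenant ID, token, and field values below are all placeholders):

    # Create a node group template through Sahara's REST API from PowerShell.
    # Endpoint, tenant ID, token, and values are placeholders for illustration.
    $body = @{
        name           = "worker-template"
        plugin_name    = "vanilla"
        hadoop_version = "2.3.0"
        flavor_id      = "3"
        node_processes = @("datanode", "tasktracker")
    } | ConvertTo-Json
    Invoke-RestMethod -Method Post `
        -Uri "http://sahara-host:8386/v1.1/<tenant-id>/node-group-templates" `
        -Headers @{ "X-Auth-Token" = "<keystone-token>" } `
        -ContentType "application/json" -Body $body

Define a couple of node group templates, combine them into a cluster template, and scaling becomes a matter of adding or removing nodes against that template.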

Why am I writing about this? Internap has many deployments for various customers but does not always advertise its capabilities. With some of the new enhancements in OpenStack, as well as some of the new developer direction internally, this is going to change. A new report from Frost & Sullivan names Internap the bare-metal cloud leader, and bare-metal servers are perfectly suited to address Hadoop and big data needs.

Security in the cloud? Oxymoron or just common sense.

Companies are moving towards hosted private clouds, which can range from 100 percent virtualized to a mixture of virtual and dedicated. Many times this is based on comfort with certain enterprise applications being virtualized. When designing your private cloud environment, take some security principles into account in the design:

  • Isolate workloads and have basic security best practices defined.
  • Assume attackers are authenticated and authorized, because many times they are.
  • Realize that even with private networks in a private cloud, all data locations are accessible.
  • Use automated security practices where possible, with good strong cryptography.
  • Monitor and audit as much as you can, and reduce the attack surface. In many data centers this is done with SSAE 16 Type II compliance, but you need your customers to review it as well.
  • Review your risks and compliance along with design assurance.

One thing that I have not seen talked about much as of late is the use of honey pots. With virtual machines being so easy to deploy these days, why would you not build an environment to trap the bugs before they do any damage?

Microsoft Licensing and VMware Licensing

So the market says that VMware has 70+% of the market, Microsoft has 20+%, and the rest is split among Citrix, Red Hat, and a few others. I find it interesting that this is a discussion now. VMware has certainly upped the ante with vSphere 5 licensing, but it is easy to point out that most large enterprises will always have at least two hypervisors, and it is usually VMware and Microsoft. These days the experts are saying that only about 60 percent of virtualizable workloads have been virtualized, and I would tend to believe that number is lower than most think. VMware is not moving as quickly as they need to in order to keep the mountain, and the one company they don't need catching them is Microsoft. Windows Server 2012 has closed the gap with vSphere via Hyper-V 3.0. Most companies that offer private cloud should have more than one hypervisor offering, if only because that tends to be the choice at the enterprise as well.
