Azure Solutions Architect Expert

Starting my journey. I am going to be learning AZ-303 and AZ-304 on my own and will take the tests to earn the Microsoft certification.

Exam AZ-303 is the first test I will be focused on.

Implement and Monitor an Azure Infrastructure (50-55%)

  • Implement cloud infrastructure monitoring
  • Implement storage accounts
  • Implement VMs for Windows and Linux
  • Automate deployment and configuration of resources
  • Implement virtual networking
  • Implement Azure Active Directory
  • Implement and manage hybrid identities

Implement Management and Security Solutions (25-30%)

  • Manage workloads in Azure
  • Implement load balancing and network security
  • Implement and manage Azure governance solutions
  • Manage security for applications   

Implement Solutions for Apps (10-15%)

  • Implement an application infrastructure
  • Implement container-based applications

Implement and Manage Data Platforms (10-15%)

  • Implement NoSQL databases
  • Implement Azure SQL databases

Once I have passed AZ-303, I will work on AZ-304.

Design Monitoring (10-15%)

  • Design for cost optimization 
  • Design a solution for logging and monitoring

Design Identity and Security (25-30%)

  • Design authentication 
  • Design authorization 
  • Design governance 
  • Design security for applications

Design Data Storage (15-20%)

  • Design a solution for databases 
  • Design data integration 
  • Select an appropriate storage account

Design Business Continuity (10-15%)

  • Design a solution for backup and recovery 
  • Design for high availability

Design Infrastructure (25-30%)

  • Design a compute solution 
  • Design a network solution 
  • Design an application architecture 
  • Design migrations

On-Premises vs. Managed Data Center

Many people like the sense of control they feel with an "on-premises" data center (DC). The issue with on-premises is that most of these facilities would be considered Tier I or Tier II by data center standards: purpose-built, but without all the redundancy that today's applications and end-user demand require. That being said, 90+% of managed data centers today design to and/or adhere to the Uptime Institute Tier III standard. That doesn't mean they are certified, but you can check here: https://uptimeinstitute.com/uptime-institute-awards . Tier III means the facility is concurrently maintainable: you can remediate, repair, or apply any other fix needed while maintaining uptime at the data center.

How a DC is accredited is an important area of concern when evaluating it. Many companies will say they follow Uptime Institute guidance in their design and then cut corners for budgetary or timing reasons. The result is that customers think they are getting Tier III when they are really getting Tier II, and can potentially see unplanned downtime.

The first areas to look at are what some call "ping, power and pipe." Whether on-premises or managed, you need these "3 P's" as a baseline, to be honest.

  1. Power is vital to any DC, and it needs to be redundant power. Grid power in many locations provides only a single primary feed; if that is the case, the DC needs an alternative power source plus refueling services to keep that source running. Generators typically cover this, and being able to run two to three days is usually enough unless a larger event makes the refueling capability a requirement. Efficiency is measured with PUE, or Power Usage Effectiveness; DCs these days need to be under 1.5 to be considered efficient (see the quick PUE calculation after this list). The other reason for consistent power is power spikes: DCs condition their power so the infrastructure runs on clean, consistent electricity. An on-premises facility typically cannot do this effectively.
  2. Cooling deals with keeping humidity and temperature consistent in the DC. Using the ASHRAE guidelines as your standard will help. Depending on the DC, cooling can be provided several different ways, from CRAH and CRAC units to newer designs that place the cooling unit outside the data hall. Around 40% relative humidity and a temperature in the 70s (Fahrenheit) are good targets.
  3. Network connectivity: a properly designed data center has two different points of presence (POPs), with multiple carriers coming into both POPs to provide multi-path resilience. Beyond raw connectivity, data centers add value around security (DDoS protection), latency efficiency, and the ability to burst or upgrade bandwidth. Some data centers blend their carriers to offer better SLAs to customers.
  4. Finances: this is where many DCs have more flexibility than an on-premises build. Because of the volume of power, carriers and cooling they buy, the scale of a DC makes it easier to discount below what an on-premises facility can achieve. The carriers bring in multiple 10GbE connections, and the DC can provision the exact bandwidth a customer requires.
  5. Maintenance and security are important and not always simple to staff on a 24x7x365 basis. Security cameras and maintenance of the infrastructure described above need to be handled. These are preventative measures that many on-premises operators forget or lack the staff to take on, which introduces unknown and untested equipment issues.
  6. Hybrid: this is something some companies are starting to adopt in their cloud and DC strategy. The term describes deciding which equipment you keep on premises and which you outsource or move to a data center to be managed. It becomes the walk-first-then-run strategy for those who want to go to the cloud. How you consume can be as little as a cross connect or as much as a dedicated managed private cloud. Many will tell you about all the different clouds out there, from public to private to hosted to multi-tenant. I will tell you that cloud is cloud and it doesn't change very much: it is something that touches the internet and needs to be protected no matter what.
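
To make the PUE figure from the power item above concrete, here is a minimal PowerShell sketch with made-up numbers. The ratio is simply total facility power divided by IT equipment power.

    # Hypothetical numbers for illustration only.
    $totalFacilityKw = 1200   # utility feed: IT load plus cooling, UPS losses, lighting
    $itLoadKw        = 900    # power actually drawn by servers, storage and network gear
    $pue = [math]::Round($totalFacilityKw / $itLoadKw, 2)
    Write-Output "PUE = $pue"  # 1.33 here; under roughly 1.5 is considered efficient

The closer PUE gets to 1.0, the less power is being burned on anything other than the IT load itself.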

Conclusion: just because someone says they are a Tier III DC does not mean the facility was built to the standard. I have seen many a data center cut corners for financial or timing reasons; it happens. Knowing how your data center escalates problems and handles tickets on your behalf can also be a differentiator. Many companies change business focus because they are not sure of their overall direction. Some have done the same thing for years and know it well, while others are constantly changing what they offer.

Some questions to ask when considering a Data Center.

  • Were you designed and built to Tier III standards?
  • Do you exercise the batteries in your UPS?
  • How often do you run your generator?
  • Do you have redundant POPs in your DC?
  • Do you have redundant network connectivity from different vaults?
  • How many layers of security do you have?
  • What sort of Hybrid services do you offer?
  • Are you a REIT or do you own your own DC?
  • What are the SLAs that you provide?
  • How can or should I develop a Cloud Strategy?

There are many more questions you can ask, and you should be sure you are ready before pulling the trigger on a data center provider.

RVTools: a great VMware toolkit item

Installing RVTools

In this day and age, with different types of clouds and different directions, admins, architects and engineers need an easy way to pull information from VMware. RVTools is a free utility that you can get here. As of March 2019, a revised tool with new features has been released as version 3.11.6. The last revision was over a year ago, so it is worth taking the time to look at the new version. Features I found interesting:

  • vInfo tab page new column: Creation date virtual machine
  • vInfo tab page new columns: Primary IP Address and vmx Config Checksum
  • vInfo tab page new columns: log directory, snapshot and suspend directory

Go download it and run RVTools.msi. (It should be a series of left clicks and done, depending on personal requirements.)

When the install is done you should be able to go to Start Menu -> All Programs -> RVTools.

[Screenshot: RVTools login dialog]

The IP address being asked for should be the one where vCenter is running. This gives you everything you need from a single location. If you are not using vCenter you can run against each ESXi host, but you will need to hit each one individually.

How do you use the tool?

Login and authenticate. As soon as you log in, you can see all the information about your virtual environment in the home screen.

[Screenshot: RVTools main window after login]

When you are in the tool you can go to File -> Export to Excel to get all the information in a digestible format. Most people working with a customer environment will only need the vInfo tab, which is the first tab.

vInfo

Gives details about the virtual machines and their health status: name, power state, config status, number of CPUs, memory, storage, and the HW version, which matters at times when using 3rd-party tools.
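
If you would rather slice the export than scroll through Excel, here is a minimal PowerShell sketch that filters an exported vInfo tab for powered-off VMs. It assumes you exported to CSV; the file path is a placeholder, and the column names (VM, Powerstate, CPUs, Memory) and CSV delimiter can vary by RVTools version and regional settings.

    # Filter the exported vInfo data for powered-off VMs.
    $vinfo = Import-Csv -Path 'C:\Exports\RVTools_tabvInfo.csv'   # hypothetical path
    $vinfo |
        Where-Object { $_.Powerstate -eq 'poweredOff' } |
        Select-Object VM, CPUs, Memory |
        Sort-Object VM |
        Format-Table -AutoSize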

Other Tabs

More information can be digested from the other tabs, but it is more of an administration call to know those specifics; if you are trying to diagnose the environment from the export, they can sometimes help too.

  • vCPU – Talks about all things processor from sockets to entitlements.
  • vMemory – Memory utilization and overhead can help in right sizing.
  • vDisk – Talks about capacity and controllers and modes.
  • vPartition – which partitions are active and other disk information
  • vNetwork – Adapters, IP address assignments and connected values.
  • vFloppy – Interesting one if you actually use it.
  • vCD – Like the floppy but geared towards the CDROM.
  • vSnapshot – This one is important in that it can provide the Tools version and upgrade flag, which can be very important when a migration needs that granularity.
  • vRP – Resource pools for VMs, but only if you are interested in the reservations needed.
  • vCluster – Displays information about each cluster, specifically the name and status of the cluster, along with VMs per core. Troubleshooting assistance for sure.
  • vHBA – Again, all about the specifics: name, drive, device type, bus WW name, PCI address.
  • vNIC – Physical network details such as host name, datacenter name, cluster name, network name, driver, device type, switch, speed and duplex.
  • vSwitch – Every virtual switch is located here which can help if you are troubleshooting them or moving applications.
  • vPort – All in the name and what each port does, port group, VLAN ID
  • vLicense – All in the name, information on your licenses. Name of licensed product, key, labels, and expiration date. Which licenses are currently used.
  • There are more tabs, but these are the ones I am familiar with and use.

Azure Availability Zones – Public Preview

Microsoft announced that Availability Zones are now in public preview. It sounds like it will run until the end of the year and possibly expand. You have to go and sign up for the preview. For a preview it is pretty nice, but it is limited in which VM sizes you can use (Av2, Dv2 and DSv2) and the services that are supported (Linux VMs, Windows VMs, managed disks, Load Balancer and zonal virtual machine scale sets).

Once you have signed up for the preview, you have to sign in to your subscription and choose a region that supports it; currently there are only two:

  • East US 2
  • West Europe

Use one of the following links to start using Availability Zones with your service.

They gave some templates to use as well.
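
As a quick illustration of what a zonal deployment looks like, here is a minimal PowerShell sketch that creates a VM pinned to zone 1 in East US 2. The resource names are placeholders, and the cmdlets shown are from the newer Az module (the AzureRM module was current at the time of the preview), so adjust to your tooling.

    # Create a resource group and a zonal VM in East US 2 (placeholder names).
    New-AzResourceGroup -Name 'rg-zones-demo' -Location 'eastus2'

    New-AzVM -ResourceGroupName 'rg-zones-demo' `
             -Name 'vm-zone1' `
             -Location 'eastus2' `
             -Image 'Win2016Datacenter' `
             -Size 'Standard_DS2_v2' `
             -Zone '1' `
             -Credential (Get-Credential)   # prompts for the local admin account

The -Zone parameter is what pins the VM (and its managed disks) to a specific zone; without it you get a regional, non-zonal deployment.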

Azure Stack delayed

It appears that Azure Stack has been delayed for a variety of reasons. Microsoft had said they were going to launch with just Dell, HPE and Lenovo. Does anyone else see something missing here? With hyper-converged taking names at the moment, I would have thought it was a simple inclusion of Cisco and Nutanix as well. It seems odd not to include them; maybe the delay will help them come to their senses. Many of the customers I am talking to have API calls and other provisioning stories, and would say the list of OEMs Microsoft is starting with is behind on integration of the network, storage and compute layers. If you want to do something that AWS is not doing, and do it well, you don't start with the hardware vendors that are playing catch-up in the space. Again, that is my opinion, but it seems very easy to figure out.

This is also based on the move away from letting customers use their own hardware for deployment, which is a pivot away from the correct direction. Pre-validated hardware is probably a good thing, but if I already have HPE, Dell or Lenovo gear, I don't want to go out and purchase new hardware just because the OEM says it didn't test that version. This could work really well if Microsoft includes recently released hardware that customers are already consuming; if they don't, adoption will be low and slow and allow AWS to catch up, if they have not already done something.

Microsoft cannot delay this for very long. AWS is known for being quick to market, and if they deem this a weakness they will address it in a business quarter or two and keep taking over the world. AWS is doing to Microsoft what Microsoft used to do to others. Game on, Microsoft: can you rise to the occasion? I hope so, as I don't see anyone else catching AWS at the moment. Delaying the release of Azure Stack to the middle of 2017 is a HUGE mistake in my view, as it gives AWS way too much time to close the gap on a good differentiator that could help take market share. The technical preview is still out there at https://azure.microsoft.com/en-us/overview/azure-stack/ but I would say that nobody is going to use it now that Microsoft has changed direction again.

Nano Server Looks good

Nano Server and Container service

Nano Server is a 64-bit only, headless installation option that's baked into Windows Server 2016. Nano Server can't run the full .NET Framework; it runs on the CoreCLR runtime environment, which gives you access to a large subset of .NET Framework capabilities. CoreCLR is open-sourced as well; take a look at https://github.com/dotnet/coreclr

The discussion site at http://blogs.technet.com/b/nanoserver/ has great information. Use cases where this would be applicable include a web server, a DNS server, or a container host with Docker, so almost any compatible application you can put into Docker.

Nano Server has a very small footprint, and the first version shows some great improvements versus full Windows Server:

  • 93 percent lower VHD size
  • 92 percent fewer critical bulletins
  • 80 percent fewer reboots

To achieve these results Microsoft removed some parts of Windows Server like:

  • GUI stack
  • 32 bit support (WOW64)
  • MSI support
  • RDP
  • Some default Server Core components

Interesting points that I found: when you boot the VM for the first time, you have to press F11 (per the on-screen text) to set the password from the VM console. Then you manage it remotely with tools like PowerShell and Server Management Tools in Azure. Take a look at the Windows Server evaluations:
https://www.microsoft.com/en-us/evalcenter/evaluate-windows-server-technical-preview
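
Since there is no local GUI, everything after that first password prompt happens over remoting. Here is a minimal sketch of connecting from a management workstation, assuming a workgroup scenario and a placeholder IP address:

    # Placeholder IP for the Nano Server shown on its console after first boot.
    $nano = '192.168.1.50'

    # Trust the host for WinRM from this workstation (workgroup scenario; this overwrites any existing TrustedHosts list).
    Set-Item WSMan:\localhost\Client\TrustedHosts -Value $nano -Force

    # Open a remote session and run a quick check.
    $session = New-PSSession -ComputerName $nano -Credential "$nano\Administrator"
    Invoke-Command -Session $session { Get-Process | Sort-Object CPU -Descending | Select-Object -First 5 }

    # Or drop into an interactive remote prompt.
    Enter-PSSession -Session $session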

Azure and new DR service

Lately Microsoft has been stepping up their game in the cloud. They recently released some new features around automated replication of virtual machines between customer data centers and Azure, along with encryption applied to data as it travels between sites and integration with System Center Virtual Machine Manager. Restoration of services can also be automated with PowerShell, and it is possible to create virtual networks that span your data center and Azure. Take a look here: http://azure.microsoft.com/en-us/services/site-recovery/

Virtual machines can be brought up in an orchestrated fashion to help restore service quickly, even for complex multi-tier workloads. There is some pretty good documentation on this at https://azure.microsoft.com/en-us/documentation/articles/site-recovery-overview/ . What is also interesting is that much of the VMware infrastructure will be able to replicate to Azure in the near future. What does that mean? You can replicate VMware VMs between customer sites, but you cannot replicate from a customer site into Azure yet; that is coming soon.
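
The post mentions automating restoration with PowerShell; here is a rough sketch of kicking off a test failover of a recovery plan. The vault and plan names are placeholders, and the cmdlets shown are from the newer Az.RecoveryServices module (which post-dates this post), so treat the exact names as illustrative.

    # Select the Recovery Services vault that holds the replication configuration.
    $vault = Get-AzRecoveryServicesVault -Name 'dr-vault' -ResourceGroupName 'rg-dr'
    Set-AzRecoveryServicesAsrVaultContext -Vault $vault

    # Grab the recovery plan for the multi-tier workload and start a test failover.
    $plan = Get-AzRecoveryServicesAsrRecoveryPlan -Name 'multi-tier-app-plan'
    $job  = Start-AzRecoveryServicesAsrTestFailoverJob -RecoveryPlan $plan -Direction PrimaryToRecovery

    # Poll the job until the test failover completes.
    Get-AzRecoveryServicesAsrJob -Job $job

A test failover is the safe first step because it brings the plan up in an isolated network without touching the production replicas.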

[Screenshot: New vault]

Docker use and challenges

Docker is an open-source engine that automates the deployment of any application as a lightweight, portable, self-sufficient container that will run virtually anywhere. You can encapsulate any payload without concern for the server type, for the most part. Using Docker lets infrastructure run the same environment a developer builds and tests on a laptop, and then run and scale it as needed. It can run in production on VMs, bare-metal servers, OpenStack clusters, public instances, or combinations of the above. The typical environment developers use currently is called Vagrant (http://www.vagrantup.com), a tool that lets developers create a lightweight, reproducible environment that is also considered portable. Vagrant 1.5 was recently released and is an easy install.

So why am I talking about Docker? As I am not a big Google fan on most things, I will not talk about lmctfy (https://github.com/google/lmctfy); I don't think there is enough adoption of it to warrant a discussion yet, though in six months that may change. With Docker, a few features jump out that make it compelling: cross-cloud compatibility, incremental images and builds, and global image portability and delivery via the Registry. It is also powerful from the standpoint that Docker usually imposes little or no overhead, since programs run in an isolated partition as normal applications and do not need the emulation that traditional virtualization does. This means you could run a single instance on a physical machine or deploy the resource seamlessly as PaaS. Horizontal and vertical scale is a big thing here.
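
To make the build-once-run-anywhere point concrete, here is a minimal sketch of the basic Docker CLI workflow; the image name, registry hostname and port mapping are placeholders.

    # Bake the app and its dependencies into a portable image (Dockerfile in the current directory).
    docker build -t my-web-app .

    # Run it as a container on any host with Docker, mapping host port 8080 to the app's port 80.
    docker run -d -p 8080:80 --name web1 my-web-app

    # Confirm the container is up.
    docker ps

    # Capture changes as an incremental image layer and ship it through a registry.
    docker commit web1 my-web-app:v2
    docker tag my-web-app:v2 my-registry.example.com/my-web-app:v2
    docker push my-registry.example.com/my-web-app:v2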


Hadoop and OpenStack

Hadoop and Open Stack

Recently the name changed from Savanna to Sahara due to possible trademark problems (source). The code is already in Havana, but new things are coming in Icehouse, the next OpenStack release, due around April 17th; it is currently in release-candidate mode.

So what is the big deal with Sahara? It honestly ties in well with the environment the company I work for has deployed, but it is also the next big thing that is already here. The goal is to get to "analytics as a service," where you can connect whichever OS you are looking for, Linux or Windows, and leverage multiple scripting languages rather than just the likes of Hive and a few others. That would make it easily deployed and easier for the mainstream to consume. Some of the integration with OpenStack Swift, where you can cache Swift data on HDFS, is a start, but it has to get more integrated for widespread adoption.

Why is OpenStack and Hadoop the right mixture?

  1. Hadoop provides a shared platform that can scale out.
  2. OpenStack is agile on operations and supports scale out.
  3. Combine 2 of the most active Open Source communities.
  4. Attracting major ecosystem players that can increase adoption.

This new release of Sahara has some welcome features. Templates for both node groups and clusters are great, as is scaling a cluster by adding and removing nodes. There is interoperability with different Hadoop distributions and new plugins for specific distributions (Hortonworks and vanilla Apache). The new release also enhances the API to run MapReduce jobs without exposing details of the infrastructure, and adds new network configuration support with Neutron.

Why am I writing about this? Internap has many deployments for various customers but does not always advertise its capabilities. With some of the new enhancements in OpenStack, as well as some new developer direction internally, this is going to change. A new report from Frost & Sullivan names Internap the bare-metal cloud leader, and bare-metal servers are perfectly suited to address Hadoop and big data needs.

Security in the Cloud? Oxymoron or just common sense?

Companies are moving towards hosted private clouds, which can range from 100% virtualized to a mixture of virtual and dedicated. Many times this is based on comfort with certain enterprise applications being virtualized. When designing your private cloud environment, build some security principles into the design:

  • Isolate and have a basic security best practice defined
  • Assume attackers are authenticated and authorized, because many times they are.
  • Realize that even with private networks in a private cloud, all data locations are accessible.
  • Try to use an automated security practice if possible, with good strong cryptography.
  • Monitor and audit as much as you can and reduce the attack surface. In many data centers this is covered by SSAE 16 Type II compliance, but you need your customers to review it as well.
  • Review your risks and compliance along with design assurance.

One thing I have not seen talked about much lately is the use of honeypots. With virtual machines being so easy to deploy these days, why would you not build an environment to trap attackers before they do any damage?

