Newsfeed

Author's details

Date registered: May 23, 2012

Latest posts

  1. New England VMUG – Summer Slam — July 2, 2012
  2. Vendorwag Offline – Xsigo — June 25, 2012
  3. Deep in the Heart of Texas with Dell – (Day 2 of 2) — June 12, 2012
  4. Creating a nested lab — June 12, 2012
  5. Deep in the Heart of Texas with Dell – (Day 1 of 2) — June 11, 2012

Author's posts listings

Jun 12

Creating a nested lab

I was just building a nested lab to record some demo videos. I find myself googling for this every single time, so I figured I would write it up so I can easily get it off my own website. Many have written about this before, and all credit goes to William Lam and Eric Gray, whose blogs are the two main sources I have used in the past to get this working.

After installing ESXi on my physical box I SSH into it. In order to allow the “nested ESXi” hosts to boot a 64-bit OS you will need to run the following:

echo 'vhv.allow = "TRUE"' >> /etc/vmware/config
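Since I end up re-running this every time I rebuild the lab, a slightly safer variant guards against appending the line twice. This is just a sketch and assumes ESXi 5.0, where the flag lives in /etc/vmware/config:

```shell
# Append the nested-HV flag only if it is not already present,
# then print the matching line so you can confirm the change took.
# (ESXi 5.0; later releases use a per-VM "vhv.enable" setting instead.)
grep -q '^vhv.allow' /etc/vmware/config || echo 'vhv.allow = "TRUE"' >> /etc/vmware/config
grep 'vhv.allow' /etc/vmware/config
```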

After you have done that you will want to make sure you get a network connection. Go to your “VM Network” portgroup – or, if you named it differently, the portgroup used to connect the virtual ESXi hosts. For each of the hosts do the following:

  1. Click on the host
  2. Go to “Configuration”
  3. Click on “Networking”
  4. Click “Properties” on the vSwitch
  5. Select the correct portgroup
  6. Click “Edit”
  7. Click “Security”
  8. Set “Promiscuous Mode” to “Accept”
  9. Click “Ok”
  10. Click “Close”
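If you prefer the shell over the GUI, the same security change can be scripted. This is a sketch using the esxcli network namespace; the sub-commands assume an ESXi 5.x build that has them, and “VM Network”/vSwitch0 are the default names, so substitute your own:

```shell
# Allow promiscuous mode on the portgroup the nested ESXi VMs attach to.
# Portgroup-level setting (overrides the vSwitch policy for that portgroup):
esxcli network vswitch standard portgroup policy security set \
    --portgroup-name="VM Network" --allow-promiscuous=true

# Alternatively, set it at the vSwitch level so every portgroup inherits it:
esxcli network vswitch standard policy security set \
    --vswitch-name=vSwitch0 --allow-promiscuous=true

# Verify the effective policy on the portgroup:
esxcli network vswitch standard portgroup policy security get \
    --portgroup-name="VM Network"
```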

Now for each virtual ESXi host you have created (note there is a guest OS type called “ESXi 5” in there – use it!) do the following:

  1. Right click on the VM
  2. Click “Edit settings”
  3. Click the “Options” tab
  4. Click on “CPU/MMU virtualization”
  5. Select the 4th option: “Use Intel VT-x / AMD-V…”
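The same GUI selection ends up as two lines in the VM’s .vmx file, so if you are templating nested hosts you can set it there directly. These option names are, to the best of my knowledge, what the client writes for the 4th radio button – double-check against a VM you configured by hand:

```shell
# Force hardware-assisted virtualization for both CPU (VT-x/AMD-V)
# and MMU (EPT/RVI) on a powered-off nested ESXi VM.
cat >> nested-esxi.vmx << 'EOF'
monitor.virtual_exec = "hardware"
monitor.virtual_mmu = "hardware"
EOF
```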

I am building this out to record a new demo of “DR of the Cloud”. In other words: 3 virtual clusters + vCloud Director + SRM + vSphere Replication + Virtual Storage Appliances… Cool stuff, right?

“Creating a nested lab” originally appeared on Yellow-Bricks.com. Follow us on Twitter and Facebook.
Available now: vSphere 5 Clustering Deepdive. (paper | e-book)

Permanent link to this article: http://www.startswithv.com/2012/06/12/creating-a-nested-lab/

Jun 11

Deep in the Heart of Texas with Dell – (Day 1 of 2)

A couple of weeks ago I spent some time with Dell at their location in Austin, Texas. It was a roller-coaster ride of two days’ worth of briefings. Firstly, I would like to thank everyone at Dell for sparing their valuable time for me, and I would like to especially thank Lani Dame and Jeff Sullivan, who made it all happen.

 

Much of what was discussed is stuff that is out there in the wild and available now, but there was some content that was embargo’d and NDA’d. In the end it turned out to be better to wait for the embargo to lift – and write one piece about my two days spent with them – than to try and separate the content out…

Dell Equallogic BladeArray and Storage Blades

Caption: This is me adding an additional Storage Blade to a “Dell Equallogic BladeArray”. Over my shoulder is the CMC console, which is used to manage the chassis and all the components. We slid in the new Dell Equallogic Storage Blade, and in less than 5 minutes it was available in Dell Equallogic Group Manager…

For me the stand-out part of the day was looking at what I will call the “Dell Equallogic BladeArray”, which is part of a much larger “convergence” play…

I first saw this system last year when I was with the Dell Equallogic guys in Nashua, New Hampshire. I must admit I was wetting myself with expectation then, and I felt the same again this time around. Colossus was the project name for what Dell are calling “The Datacenter Blade System” – but I imagine it’s going to be called many things by the time word gets around, “Datacenter-in-a-box” probably being one of them. The concept is a relatively simple one: condensing the Dell Equallogic storage and server blades into a single chassis. The enclosure includes all the parts needed for convergence and consolidation – servers, storage and networking. The chassis takes 1/4-, 1/2- and full-height blades alongside Dell Equallogic storage in what’s being dubbed a “Storage Blade”. The storage is available in a number of formats, including SSD and SAS with built-in auto-tiering. As I’m a VMware SRM man, I asked if the chassis could take two Storage Blades – and enable them for replication. The theory I had was that I could have a whole VMware SRM environment – with the appearance of two sites – in one box.

  

Note: The pictures above show the front and rear of the “Dell Equallogic BladeArray” – the front shows 2 Storage Blades (with the pretty green LEDs) and 24 12G PowerEdge ¼-height server blades (M420). The rear pictures show power supplies at the bottom and fans – and, in the middle, the Force10 MXL 10/40 GbE networking switches.

For Dell this system is part of a much bigger “convergence” play that you will hear them talk about increasingly. I think the Dell Equallogic BladeArray is a good example of this – with literally the storage/network/server all residing in one enclosure. The more I looked at it, the more I could see each chassis representing a cluster in vSphere – and a “pod” approach to scalability being adopted. Another view could be different types of Dell Equallogic BladeArray being racked and stacked on each other to represent your Gold/Silver/Bronze classes of infrastructure. Perhaps that’s wrong – am I being a bit “old school” in so directly tying the hardware to the virtual layer? Perhaps so…

There’s significant research to show that in the next few years we will see the demise of the DIY approach, where people design and build their own solution from various components – in favour of either building your own around detailed reference architectures, or buying these off-the-peg solutions. The interesting thing is that folks think these off-the-peg solutions lack flexibility and choice. But if you look at them, there’s a huge amount of variance in server/blade types (1/4, 1/2, full) as well as storage solutions (SSD, SSD+SAS with auto-tiering) and so on. I think the hard thing will be picking the right flavour of off-the-peg to meet your resource needs. For me, I would want to fill out the “Dell Equallogic BladeArray” with two Equallogic Storage Blades – with a combo of SSD and SAS for auto-tiering – and as many 1/4-height server blades as possible.

So I guess the next thing to ask is who Dell is pitching this at. It seems squarely placed at the small/medium-sized company. Of course, that has to be seen through the prism of defining “small”. For many of my customers this system would offer far more compute/storage than they need – although they could start out by only partially populating the chassis. But for them, if they want to go with Dell, the vStart 50/100 might seem more appropriate. That’s an offering that ships as a ready-racked system of storage/network/servers using conventional rack-mount hardware. What attracts me to the Dell Equallogic BladeArray is that it has everything I would need in one tight little enclosure. Its form factor means it could be wheeled into my colo and racked up very easily. The problem I would have with things like vStart & vBlock is that they come ready-racked in the vendor’s cabinet. Sadly, my colo uses its own racks, and whenever anything has been sent in a vendor rack (like my EMC NS-120) I’ve had to totally de-cable/de-rack and then re-cable/re-rack the kit – which largely defeats the object. Personally, my only concern with the Dell Equallogic BladeArray is the power consumption required. I recently scaled back my AMPs commitment and cancelled a second PDU to reduce my colo fees. I doubt very much I could take it in my environment without re-instating that expense. That, however, is a very personal consideration of mine – a customer who already has 24 servers and two arrays, and was looking to consolidate, would most likely find themselves needing less power, not more…

In the morning I spent time with Robert Bradfield, looking at the server blade side of the house. A couple of interesting observations came out of that discussion. The first is that Dell isn’t seeing much large-density server consolidation – that is, folks racking up terrifying amounts of RAM to run lots and lots of VMs. That seems to suggest the influence of the cost of large-RAM systems, but also the licensing implications introduced by VMware vSphere vRAM licensing allocations. It’s a fundamental law that customers will always “game” licensing systems.

So say a vendor licenses a server product at $1 per GB for up to 254GB, but then says you must pay $100 once you have >254GB. Customers will then have a tendency to design their solutions around avoiding the licensing penalty – wherever that bar is set – so in the example above they would try to design systems that do not exceed 254GB. The important thing to say here is that Dell is seeing folks buy large-RAM systems – but generally to run a smaller number of VMs that have large memory requirements. So although the quantity of addressable RAM is going up, the VM consolidation ratio is not (so long as you don’t include the virtual desktop workload, which necessitates packing in as many VMs on the same form factor as possible to lower the per-virtual-desktop cost). Setting aside the licensing issue, it perhaps suggests that many customers have a psychological barrier with consolidation – a certain number of VMs per server that they are comfortable with – and anything above that becomes undesirable. Why? Well, they worry about the impact on HA. The eggs-in-one-basket concern: as the consolidation ratio goes up, the impact of a failed server increases. It also means that you have to reserve a significant amount of compute resources to accommodate the HA restart. Additionally, the time it takes to evacuate a host of all its VMs for maintenance mode activities increases as well.

The other interesting aspect of these new blade architectures is the way Dell have integrated their Force10 acquisition into the platform. I think that’s quite an impressive turnaround. It means all the components – server, storage, network – are interlinked with 10Gbps networking, and there was talk of 40Gbps later down the road. For the moment there’s 40GbE support on uplinks; there’s a maximum of 32 internal server-facing ports – up to a total of 56 10GbE ports (32 internal and 24 external) – and up to 6 external 40GbE ports.

 

Dell Virtual Network Architecture (VNA)

My next discussion was with the Dell Virtual Network Architecture team. This was a more general discussion about the process of convergence – where the LAN Ethernet network meets the storage network. This was a pretty strong theme all week. For years we have been talking about how convergence and a more “services-oriented” view of the world are challenging the traditional silos of expertise. What I’m talking about is the way most corporate IT is still based around having specialists in networking, servers, storage, desktops, security et al. Increasingly, I’m seeing CIOs/CTOs feeling frustrated about this structure – and increasingly they feel these skill groups represent fiefdoms of vested interests. In short, the way we have divided roles and responsibilities is increasingly seen as part of the problem, not part of the solution. I guess if you like you could say that CIOs/CTOs look for “one throat to choke” when it comes to vendor relationships – perhaps they are looking for “one throat to choke” from their internal support teams too.

The analogy I proposed to the VNA group at Dell was this: as the storage vendors like Dell Equallogic, NetApp and EMC have all developed plug-ins to help VMware admins provision new LUNs/volumes, could we be looking at doing the same thing with the network? You might think this is a wacky notion. But how about this: when you create & define a portgroup on a vSwitch, it also creates the VLAN on the physical switch. That’s in marked contrast to how the folks at Cisco & IBM have things – where they are pushing their network management into the virtualization layer. The idea Dell have is to use OpenFlow to turn physical switches into slightly intelligent engines for moving packets about – with the intelligence up inside the virtualization layer. So in this view of the world one administrator is in charge of the server/storage/network layers. As the layers converge, so does the management. Now, of course, there are some folks who might see such a concept as a threat to their job security. I think it would be a mistake to view it that way. If you think your job security depends on provisioning new LUNs/volumes or VLANs, you need to think again. Those skills are increasingly commoditized. The way to view this is as handing off low-skill, low-value tasks to those who make those requests – thus freeing up network and storage professionals to focus on stuff that really justifies their existence. The analogy I would use here is: if you were an Active Directory specialist, would it be a good use of your time to reset end-user passwords?

Anyway, putting this high-brow, high-falutin thinking to one side, there were some interesting practical examples of automating this stuff so the left hand (virtual world) knows what the right hand (physical world) is up to. The team talked about how they’re working with the vCloud Director API to allow for the dynamic creation of VLANs – and for the dynamic creation of VLANs to allow vMotion events to occur. There was also a strong indication that, from a hardware perspective, having top-of-rack switches uplinked to the core network could fade away, with the core network residing inside blade enclosures.

Some of these capabilities were demo’d at Spring InterOp – in total there were 4 demos:

  • Datacenter Infrastructure and Fabric Management demo
  • Automated Workload Mobility
  • Virtual Machine Tunnelling
  • Network Virtualization

vStart Roadmap

The next session I spent with the folks from the vStart group. In case you don’t know, vStart is Dell’s answer to vBlock/Matrix/FlexPod. It’s the stand-alone components ready-racked. They’re keen to emphasise this isn’t just a re-packaging exercise on their part – the thing is shipped as a service, with a management layer placed on top of the conventional management tools (like Dell Equallogic “Group Manager”) to accelerate deployment times.

It’s tricky to write about here because this is outside of the embargo period that I agreed with Dell in June. I guess the most I can say is: expect vStart to be BIGGER in the near future. That which I cannot speak of, I must pass over in silence. What I can say is that Dell are integrating the Dell Compellent storage component into a new SKU called vStart 1000.


Note:  vStart 1000 shown with Compellent controllers and 12G PowerEdge ½ height server blades (M620).

I guess what I can also say is that there is a management process which I can see applying to both vStart & the Dell Equallogic BladeArray in equal measure – and I suspect that Dell want to sharpen up the look and feel of the management interfaces for the chassis, servers, network and storage, to make them feel like an integrated system. That’s a work in progress that comes off the back of having management systems that were originally developed before the acquisitions of Force10 Networks and Equallogic…

I was given a brief tour around the lab environment – and got to take a look at a vStart 50 which had just come back from being on the Solutions Exchange. Much though I love the idea of the Dell Equallogic BladeArray, the reality is that the vStart is probably more within my budget, and power consumption limits… My only problem with these sorts of ready-racked kit is having to de-rack them for my colo provider to fit into the racking…

Dell Management Plug-in for vCenter:

This is something I blogged about last year. My relationship with Dell has improved massively over the last two years – partly spurred by work on Dell Equallogic for the VMware Site Recovery Manager book, but also, I think, by a general urge by the Dell TechCenter guys to engage with folks like me who are firmly in the VMware community side of the house. In case you don’t know, the vCenter plug-in extends information that would normally be found in Dell OpenManage into the vCenter environment. Unlike, say, the Dell Equallogic plug-in, this management plug-in is not “free” – it starts from about $99 per server. The term “free” is a bit of a moot point here. Nothing is really free if you think about it – all the storage vendors have made their management plug-ins free, but let’s face it, without parting with a lot of $$$ for the array they are pretty much useless.

Anyway, putting that hot potato aside, the plug-in allows for a couple of administrative actions – and it essentially acknowledges that deploying a new server isn’t JUST about getting ESX on it. That’s one of the downsides of VMware’s Auto-Deploy. Wonderful though PXE boot of the VMkernel is, it’s only one part of the process when racking and stacking a new server. Dell’s management plug-in can handle firmware updates, configure BIOS settings (such as ensuring the advanced CPU features needed by vSphere are enabled systematically) and iDRAC settings, as well as deploying ESX via the iDRAC. One of the downsides I see of Auto-Deploy is the struggle customers might have getting approval for DHCP on their network, as well as the requirement for Enterprise Plus and host profiles – and there’s a security aspect to consider too. So I can imagine customers will be attracted to installing ESX to disk (SAN, SD card, local). I think customers should be looking to their OEMs to stitch together a deployment process that handles ALL the requirements needed for setting up a new ESX host.

The plug-in’s top jobs are:

  • Monitoring & Alerting
  • Deployment & Provisioning (using the Dell iDRAC with LifeCycle Controller)
  • Firmware Updates (using the Dell iDRAC with LifeCycle Controller)
  • Hardware Management (Warranties and wot not!)

Starting with the 1.5 release of the plug-in, it’s no longer a requirement that OMSA be installed if you’re using the new 12G hardware – and it comes with new features such as support for deploying to the dual SD cards in the servers, as well as lockdown mode. The firmware update feature integrates with vSphere’s maintenance mode to evacuate the host of all VMs before applying the firmware updates. With 1.5, Dell will also bring the box back out of maintenance mode automatically after the update if the customer chooses.

Note: Graphic kindly provided by Dell, click on the thumbnail to see a bigger view!

OpenManage Integration

My last session of the day was with the folks in the OpenManage group. That was quite a scary meeting! There were about 10 folks around the table, it was getting late in the day, and I was reaching “saturation point”. We kicked off by looking at iDRAC7 and how Dell is investing in the “Lifecycle Controller”. The idea is to allow for extended management without the need for the agents you previously required for things like NIC management. However, what really grabbed my attention was their integration with System Center 2012. Again, I can’t say too much about this – but what struck me is how open System Center is to these third-party extensions, in a way that is currently harder for VMware vCenter. Again, the management is delivered in an agentless way using the Lifecycle Controller.

Permanent link to this article: http://www.startswithv.com/2012/06/11/deep-in-the-heart-of-texas-with-dell-day-1-of-2/

May 31

Which isolation response should I use?

I wrote this article about split-brain scenarios for the vSphere Blog. Based on that article I received some questions about which “isolation response” to use. This is not something that can be answered with a simple “recommended practice” and applied to all scenarios out there. Note that the answer has everything to do with your infrastructure. Are you using IP-based storage? Do you have a converged network? All of these impact the decision around the isolation response.

The following table, however, can be used to make a decision:

Likelihood host retains access to VM datastores | Likelihood host retains access to VM network | Recommended isolation policy | Explanation
Likely | Likely | Leave Powered On | The VM is running fine, so why power it off?
Likely | Unlikely | Either Leave Powered On or Shutdown | Choose Shutdown to allow HA to restart the VMs on hosts that are not isolated and hence are likely to have access to storage.
Unlikely | Likely | Power Off | Use Power Off to avoid having two instances of the same VM on the VM network.
Unlikely | Unlikely | Leave Powered On or Power Off | Leave Powered On if the VM can recover from the network/datastore outage without being restarted because of the isolation; Power Off if it likely can’t.

“Which isolation response should I use?” originally appeared on Yellow-Bricks.com. Follow us on Twitter and Facebook.
Available now: vSphere 5 Clustering Deepdive. (paper | e-book)

Permanent link to this article: http://www.startswithv.com/2012/05/31/which-isolation-response-should-i-use/
