
Jun 12

Deep in the Heart of Texas with Dell – (Day 2 of 2)

Note: DTC Poster!

Note: Dennis Smith of Dell TechCenter Team, in his MASSIVE TRUCK!

In the evening I was left to rest and recuperate for about an hour! Later that evening I met up with Justin King of VMware (aka @vCenterGuy).

He’s a Brit who has lived and worked in Austin, Texas for more than a decade, and is currently in the Technical Marketing group at VMware. I first met Justin a year or so ago when I was at VMware for the on-site beta of VMware Site Recovery Manager 5.0, when he was aligned to the vCenter Heartbeat service. Much of what we talked about is off the record, so I can’t share it here. But I was able to have a sneak peek at the new vSphere “Web Client”. As you probably know, the current vSphere Client – a 32-bit C# application – is eventually going away, and will be superseded by this new “Web Client”. It’s there right now in vSphere 5.x, but at the moment it’s more of a “Virtual Machine Administrator” UI than a “vSphere Administrator” UI. I have to say I’m quite impressed with this new direction. Personally, I’m a bit tired of having to upgrade my vSphere Client every time vSphere moves a release. Hopefully, this new direction will stop the Apple Mac people on the VMTN forums bleating on about not having a Mac version of the vSphere Client!

The next day I picked up the thread with Dell…

Dell VIS Creator

VIS is the umbrella group that includes VIS Creator and Dell AIM

Dell VIS Creator is a cloud automation layer that sits at the IaaS end of the market – and seeks to add chargeback, service catalogs, self-service and so on above the infrastructure layer. As with all these solutions, the intention is to hide the “gory details” of the plumbing. What distinguishes it from, say, VMware vCloud Director is its willingness to recognise other virtualization platforms as well as physical servers – which are still an important part of the “provisioning” process. Additionally, it offers an auditable/trackable way of advertising public cloud resources. Despite this it is angled primarily at the private cloud, rather than at service providers.

As you can see from the graphic above, it contains all the stuff you normally expect to see with this sort of cloud automation. Over on the left you have the 4 core business processes that you could call the “lifecycle” – Requisition-to-Provision-to-Manage-to-Retire. Service Blueprints are definitions of the types of resources you can call up from the Service Catalog – such as a Linux VM running Apache or a Windows VM running SQL 2012.

Note: Shows Global Blueprints that have been newly created and are awaiting approval by the Enterprise Administrator

I was also shown how, with a little bit of scripting (PowerCLI/PowerShell), you could add the provisioning of a desktop from either VMware View or XenDesktop. These Service Blueprints can have “Cost Profiles” attached to them – or you can have these “Cost Profiles” attached when resources are reserved during the deployment phase.

“Business Groups” are how VIS Creator handles what people can do and see. They allow you to put your folks into roles assigned to Business Groups – so only the right business units can see the right items in the Service Catalog – and they can also be used to assign policies that control leases, limits and quotas for resources.
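To make the idea concrete, here’s a minimal Python sketch of how business groups could gate catalog visibility and carry lease/quota policy. All the names, fields and values here are my own invention for illustration – VIS Creator’s actual schema will look different:

```python
# Illustrative model only: invented group names, users and policy fields.
business_groups = {
    "Finance": {
        "members": ["alice", "bob"],
        "catalog_items": ["Windows VM + SQL 2012"],
        "policy": {"lease_days": 30, "max_vms": 10, "memory_quota_gb": 64},
    },
    "Engineering": {
        "members": ["carol"],
        "catalog_items": ["Linux VM + Apache"],
        "policy": {"lease_days": 90, "max_vms": 50, "memory_quota_gb": 512},
    },
}

def visible_catalog(user):
    """Catalog items a user can see: only those of their own business groups."""
    items = set()
    for group in business_groups.values():
        if user in group["members"]:
            items.update(group["catalog_items"])
    return items
```

So a Finance user only ever sees Finance’s blueprints, and the per-group policy block is where leases, limits and quotas would hang.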

Resources can be grouped together, so in the Capacity Usage pages you can see what quantities of memory and storage are allocated and unallocated.

Over on the far right are the provider targets that can be used – virtual, physical and public. The interesting thing about VIS Creator is how “agnostic” it is about the source of those resources – it doesn’t give a damn if you use Xen, KVM, VMware, Hyper-V…

Note: Here we have an executive summary of resources available by Virtual providers and Physical providers

What it lacks is something that’s common to many technologies of this type – it doesn’t have its own special methods of handling the network (such as network pools or mac-in-mac abstraction) or storage (datastore clusters, or storage pools). Rather, it piggybacks on whatever the provider targets can accommodate or offer. I guess you could argue that this “lack” of a feature says more about the strategic design behind the technology. It’s not VIS’s job to worry about those underlying technologies – that’s the role of the providers. However, I can see how some of the work within the VNA team could be harnessed here: VIS Creator speaks to VNA, which then speaks agnostically to the underlying physical network. That would be quite interesting.

We also had a brief discussion about Dell AIM (Advanced Infrastructure Manager), which ties into a workload management play. Essentially, AIM allows the portability of workloads – P2V, V2P and P2P. It does this by using SAN LUNs to hold the data, and then pointing either the VM or the physical server to the LUNs in question. Of course, this means kissing goodbye to our beloved virtual disks… But if you think about the idea of VMware Volumes (where VMFS/VMDKs give way to vVols presented directly to the VM) then there might be more legs in this idea than it first seems. The aim of AIM (did you see what I did there…) is to free Dell customers from vendor lock-in, allowing them to flexibly move from one platform to another – when it comes to Physical-to-Physical I’m assuming the direction of travel would be from HP/IBM to Dell. Not the other way round!

There’s also a HA/DR play here. Once the workload is portable you could move it from a failed server to a functioning one – or from one site to another. As a 100% virtualization guy, I don’t see the benefits for myself – but I can imagine that customers who, by the nature of their business, will never be 100% virtual on any platform might…

The Unmentionable

There was something I was shown at the end of the day. A management system. Erm. Anything more would represent a breach of an NDA. But let’s say it ties into the recent blade offering that incorporates the Equallogic Storage Blade.

Time in the Dell TechCenter

To finish off the day I went for a BBQ lunch with the guys from the Dell TechCenter, and was given a little tour of their neck of the woods by Peter Tsai (@supertasi). That’s where the photo at the top of this post was taken.


Permanent link to this article:

Jun 11

Deep in the Heart of Texas with Dell – (Day 1 of 2)

A couple of weeks ago I spent some time with Dell at their location in Austin, Texas. It was a roller-coaster ride of two days’ worth of briefings. Firstly, I would like to thank everyone at Dell for sparing me their valuable time, and I would especially like to thank Lani Dame and Jeff Sullivan, who made it all happen.


Much of what was discussed is stuff that is out there in the wild and available now, but some of the content was embargoed and under NDA. In the end it turned out to be easier to wait for the embargo to lift – and write one piece about my two days spent with them – than to try to separate the content out…

Dell Equallogic BladeArray and Storage Blades

Caption: This is me adding an additional Storage Blade to a “Dell Equallogic BladeArray”. Over my shoulder is the CMC console, which is used to manage the chassis and all the components. We slid in the new Dell Equallogic Storage Blade, and in less than 5 minutes it was available in Dell Equallogic Group Manager…

For me the stand-out part of the day was looking at what I will call the “Dell Equallogic BladeArray”, which is part of a much larger “convergence” play…

I first saw this system last year when I was with the Dell Equallogic guys in Nashua, New Hampshire. I must admit I was wetting myself with expectation then, and I felt the same again this time around. Colossus was the project name for what Dell are calling “The Datacenter Blade System” – but I imagine it’s going to be called many things by the time word gets around – “Datacenter-in-a-box” probably being one of them. The concept is a relatively simple one: condensing Dell Equallogic storage and server blades into a single chassis. The enclosure includes all the parts needed for convergence and consolidation – servers, storage and networking. The chassis takes 1/4, 1/2 and full-height blades alongside Dell Equallogic storage in what’s being dubbed a “Storage Blade”. The storage is available in a number of formats, including SSD and SAS with built-in auto-tiering. As I’m a VMware SRM man, I asked if the chassis could take two Storage Blades – and enable them for replication. The theory I had was that I could have a whole VMware SRM environment – with the appearance of two sites – in one box.


Note: The pictures above show the front and rear of the “Dell Equallogic BladeArray” – the front shows 2 storage blades (with the pretty green LEDs) and 24 12G PowerEdge ¼-height server blades (M420). The rear pictures show power supplies at the bottom and fans – and in the middle the Force10 MXL 10/40GbE networking switches.

For Dell this system is part of a much bigger “convergence” play that you will hear them talk about increasingly. I think the Dell Equallogic BladeArray is a good example of this – with literally the storage/network/server all residing in one enclosure. The more I looked at it, the more I could see each chassis representing a cluster in vSphere – and a “pod” approach to scalability being adopted. Another view could be different types of Dell Equallogic BladeArray being racked and stacked on each other to represent your Gold/Silver/Bronze classes of infrastructure. Perhaps that’s wrong – am I being a bit “old school” in so directly tying the hardware to the virtual layer? Perhaps so…

There’s significant research to show that in the next few years we will see the demise of the DIY approach, where people design and build their own solution from various components – in favour of either building your own around detailed reference architectures, or these off-the-peg solutions. The interesting thing is that folks think these off-the-peg solutions lack flexibility and choice. But if you look at them there’s a huge amount of variance in server/blade types (1/4, 1/2, full) as well as storage solutions (SSD, SSD+SAS with auto-tiering) and so on. I think the hard thing will be picking the right flavour of off-the-peg to meet your resource needs. For me, I would want to fill out the “Dell Equallogic BladeArray” with two Equallogic Storage Blades, with a combo of SSD and SAS for auto-tiering, and as many 1/4-height server blades as possible.

So I guess the next thing to ask is: who is Dell pitching this at? It seems squarely placed at the small/medium-sized company. Of course, that has to be seen through the prism of defining “small”. For many of my customers this system would offer far more compute/storage than they need – although they could start out by only partially populating the chassis. But for them, if they want to go with Dell, the vStart 50/100 might seem more appropriate. That’s an offering that ships a ready-racked system of storage/network/servers using conventional rack-mount hardware.

What attracts me to the Dell Equallogic BladeArray is that it has everything I would need in one tight little enclosure. Its form-factor means it could be wheeled into my colo and racked up very easily. The problem I would have with things like vStart & vBlock is that they come ready-racked in the vendor’s cabinet. Sadly, my colo uses its own racks, and whenever anything has been sent in a vendor rack (like my EMC NS-120) I’ve had to totally de-cable/de-rack and re-cable/re-rack the kit – which largely defeats the object. Personally, my only concern with the Dell Equallogic BladeArray is the power consumption required. I recently scaled back my amps commitment and cancelled my second PDU to reduce my colo fees. I doubt very much that I could take it in my environment without re-instating that expense. That, however, is a very personal consideration of mine – a customer who already has 24 servers and two arrays, and is looking to consolidate them, would most likely find themselves needing less power, not more…

In the morning I spent time with Robert Bradfield, looking at the server blade side of the house. A couple of interesting observations came out of that discussion. The first is that Dell isn’t seeing much large-density server consolidation – that is, folks racking up terrifying amounts of RAM to run lots and lots of VMs. That seems to suggest the influence of the cost of large-RAM systems, but also the licensing implications introduced by VMware vSphere’s vRAM licensing allocations. It’s a fundamental law that customers will always “game” licensing systems.

So say a vendor licenses a server product at $1 per GB up to 254GB, but then says you must pay $100 once you have >254GB. Customers will have a tendency to design their solutions around avoiding licensing penalties – wherever that bar is set – so in the example above they would try to design systems that do not exceed 254GB. The important thing to say here is that Dell is seeing folks buy large-RAM systems – but generally to run a smaller number of VMs that have large memory requirements. So although the quantity of addressable RAM is going up, the VM consolidation ratio is not (so long as you don’t include the virtual desktop workload, which necessitates packing as many VMs as possible onto the same form-factor to lower the per-virtual-desktop cost). Setting aside the licensing issue, it perhaps suggests that many customers have a psychological barrier with consolidation – a certain number of VMs per server that they are comfortable with – and anything above that becomes undesirable. Why? Well, they worry about the impact on HA. The eggs-in-one-basket concern: as the consolidation ratio goes up, the impact of a failed server increases. It also means that you have to reserve a significant amount of compute resources to accommodate the HA restart. Additionally, the time it takes to evacuate a host of all its VMs for maintenance mode activities increases as well.
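The eggs-in-one-basket arithmetic is easy to sketch. Assuming a simple N+1 style of admission control (this is an illustrative back-of-envelope model, not any vendor’s sizing tool), the fraction of cluster capacity you must hold back for HA grows as the host count shrinks, and the number of VMs hit by a single host failure grows with the consolidation ratio:

```python
def ha_reserve_fraction(hosts, host_failures_to_tolerate=1):
    """Fraction of cluster capacity held back so a failed host's VMs can restart."""
    return host_failures_to_tolerate / hosts

def vms_hit_per_failure(total_vms, hosts):
    """VMs impacted when one host dies, assuming an even spread across hosts."""
    return total_vms / hosts

# Two ways to run 192 VMs:
#  4 big hosts:   25% of capacity reserved, 48 VMs restart on a failure
# 16 small hosts: ~6% of capacity reserved, 12 VMs restart on a failure
```

Same workload, very different blast radius – which is exactly the trade-off that keeps consolidation ratios lower than the hardware alone would allow.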

The other interesting aspect of these new blade architectures is the way Dell have integrated their Force10 acquisition into the platform. I think that’s quite an impressive turnaround. It means all the components – server, storage, network – are interlinked with 10Gbps networking, and there was talk of 40Gbps later down the road. For the moment there’s 40GbE support on uplinks; there’s a maximum of 32 internal server-facing ports – up to a total of 56 10GbE ports (32 internal and 24 external) – and up to 6 external 40GbE ports.


Dell Virtual Network Architecture (VNA)

My next discussion was with the Dell Virtual Network Architecture team. This was a more general discussion about the process of convergence – where the LAN Ethernet network meets the storage network. This was a pretty strong theme all week. For years we have been talking about how convergence – and a more “services-oriented” view of the world – is challenging the traditional silos of expertise. What I’m talking about is the way most corporate IT is still based around having specialists in the network, servers, storage, desktops, security et al. Increasingly, I’m seeing CIOs/CTOs feeling frustrated with this structure – and increasingly they feel these skill groups represent fiefdoms of vested interests. In short, the way we have divided roles and responsibilities is increasingly seen as part of the problem, not part of the solution. I guess you could say that CIOs/CTOs look for “one throat to choke” when it comes to vendor relationships – and perhaps they are looking for “one throat to choke” from their internal support teams too.

The analogy I proposed to the VNA group at Dell was this: just as storage vendors like Dell Equallogic, NetApp and EMC have all developed plug-ins to help the VMware admin provision new LUNs/volumes – could we be looking at doing the same thing with the network? You might think this is a wacky notion. But how about this: when you create & define a portgroup on a vSwitch, it also creates the VLAN on the physical switch. That’s in marked contrast to how the folks at Cisco & IBM have things – where they are pushing their network management into the virtualization layer. The idea Dell have is to use OpenFlow to turn physical switches into slightly intelligent engines for moving packets about – with the intelligence up inside the virtualization layer. So in this view of the world one administrator is in charge of the server/storage/network layers. As the layers converge, so does the management. Now, of course, there are some folks who might see such a concept as a threat to their job security. I think it would be a mistake to view it that way. If you think your job security depends on provisioning new LUNs/volumes or VLANs, you need to think again. Those skills are increasingly commoditized. The way to view this is as handing off low-skill, low-value tasks to those who make the requests – thus freeing up network and storage professionals to focus on stuff that really justifies their existence. The analogy I would use here is: if you were an Active Directory specialist, would it be a good use of your time to reset end-user passwords?
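Here’s a toy Python sketch of that portgroup idea. Every class and method name below is my own invention – nothing here is a real Dell, OpenFlow or VMware API – but it shows the shape of the thing: creating the virtual portgroup is the one action the admin takes, and the VLAN plumbing on the physical switch happens as a side effect:

```python
# Toy model only: invented classes, not any real switch or vSphere API.
class PhysicalSwitch:
    """Stand-in for a physical top-of-rack switch."""
    def __init__(self):
        self.vlans = set()

    def create_vlan(self, vlan_id):
        # In the real world this would be an OpenFlow/switch-CLI operation.
        self.vlans.add(vlan_id)

class VSwitch:
    """Stand-in for a virtual switch that knows about its physical upstream."""
    def __init__(self, physical_switch):
        self.portgroups = {}
        self.physical = physical_switch

    def add_portgroup(self, name, vlan_id):
        self.portgroups[name] = vlan_id
        # The key idea: provisioning the virtual portgroup automatically
        # pushes the matching VLAN down to the physical switch.
        self.physical.create_vlan(vlan_id)
```

One admin action, and the left hand (virtual) and right hand (physical) stay in sync – which is the whole point of the analogy.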

Anyway, putting this high-brow, highfalutin thinking to one side, there were some interesting practical examples of automating this stuff so the left hand (the virtual world) knows what the right hand (the physical world) is up to. The team talked about how they’re working with the vCloud Director API to allow for the dynamic creation of VLANs – and for that dynamic creation of VLANs to allow vMotion events to occur. There was also a strong indication that, from a hardware perspective, the model of top-of-rack switches uplinked to the core network could fade away – with the core network residing inside the blade enclosures.

Some of these capabilities were demo’d at the Spring InterOp Demos – in total there were 4 demos:

  • Datacenter Infrastructure and Fabric Management demo
  • Automated Workload Mobility
  • Virtual Machine Tunnelling
  • Network Virtualization

vStart Roadmap

The next session I spent with the folks from the vStart group. In case you don’t know, vStart is Dell’s answer to vBlock/Matrix/FlexPod. It’s the stand-alone components, ready-racked. They’re keen to emphasize that this isn’t just a re-packaging exercise on their part – the thing is shipped as a service, with a management layer placed on top of the conventional management tools (like Dell Equallogic “Group Manager”) to accelerate deployment times.

It’s tricky to write about this here because it sits outside the embargo period that I agreed with Dell in June. I guess the most I can say is: expect vStart to be BIGGER in the near future. That which I cannot speak of, I must pass over in silence. What I can say is that Dell are integrating the Dell Compellent storage component into a new SKU called vStart 1000.

Note:  vStart 1000 shown with Compellent controllers and 12G PowerEdge ½ height server blades (M620).

I guess what I can also say is that there is a management process which I can see applying to both vStart and the Dell Equallogic BladeArray in equal measure – and I suspect Dell want to sharpen up the look and feel of the management interfaces for the chassis, servers, network and storage, to make them feel like an integrated system. That’s a work in progress that comes off the back of having management systems that were originally developed before the acquisitions of Force10 Networks and Equallogic…

I was given a brief tour around the lab environment – and got to take a look at a vStart 50 which had just come back from being on the Solutions Exchange. Much though I love the idea of the Dell Equallogic BladeArray, the reality is that the vStart is probably more within my budget – and my power consumption limits… My only problem with these sorts of ready-racked kit is having to de-rack them to fit into my colo provider’s racking…

Dell Management Plug-in for vCenter:

This is something I blogged about last year. My relationship with Dell has improved massively over the last two years – partly spurred by work on Dell Equallogic for the VMware Site Recovery Manager book, but also, I think, by a general urge by the Dell TechCenter guys to engage with folks like me who are firmly on the VMware community side of the house. In case you don’t know, the vCenter plug-in extends information that would normally be found in Dell OpenManage into the vCenter environment. Unlike, say, the Dell Equallogic plug-in, this management plug-in is not “free” – it starts from about $99 per server. The term “free” is a bit of a moot point here. Nothing is really free if you think about it – all the storage vendors have made their management plug-ins free, but let’s face it, without parting with a lot of $$$ for the array they are pretty much useless.

Anyway, putting that hot potato aside, the plug-in allows for a couple of administrative actions – and it essentially acknowledges that deploying a new server isn’t JUST about getting ESX on it. That’s one of the downsides of VMware’s Auto-Deploy. Wonderful though PXE booting the VMkernel is, it’s only one part of the process when racking and stacking a new server. Dell’s management plug-in can handle firmware updates, configure BIOS settings (such as ensuring advanced CPU features needed by vSphere are enabled systematically) and iDRAC settings, as well as deploying ESX via the iDRAC. The downsides I see with Auto-Deploy are the struggles customers might have getting approval for DHCP on their network, as well as the requirement for Enterprise Plus and Host Profiles – and there’s also a security aspect to consider. So I can imagine customers will be attracted to installing ESX to disk (SAN, SD card, local). I think customers should be looking to their OEMs to stitch together a deployment process that handles ALL the requirements needed for setting up a new ESX host.

The plug-in’s top jobs are:

  • Monitoring & Alerting
  • Deployment & Provisioning (using the Dell iDRAC with LifeCycle Controller)
  • Firmware Updates (using the Dell iDRAC with LifeCycle Controller)
  • Hardware Management (Warranties and wot not!)

Starting with the 1.5 release of the plug-in, it’s no longer a requirement that OMSA be installed if you’re using the new 12G hardware – and it comes with new features such as support for deploying to the dual SD cards in the servers, as well as lockdown mode. The firmware update feature integrates with vSphere’s maintenance mode to evacuate the host of all VMs before applying the firmware updates. With 1.5, Dell will also bring the box back out of maintenance mode automatically after the update, if the customer chooses.

Note: Graphic kindly provided by Dell, click on the thumbnail to see a bigger view!

OpenManage Integration

My last session of the day was with the folks in the OpenManage group. That was quite a scary meeting! There were about 10 folks around the table, it was getting late in the day – and I was reaching “saturation point”. We kicked off by looking at iDRAC7 and how Dell is investing in the “Lifecycle Controller”. The idea is to allow for extended management without the need for agents and the like, which you previously needed for things like NIC management. However, what really grabbed my attention was their integration with System Center 2012. Again, I can’t say too much about this – but I was struck by how open System Center is to these third-party extensions, in a way that is currently harder for VMware vCenter. Again, the management is delivered in an agentless way using the Lifecycle Controller.


May 10

vCenter Infrastructure Navigator throws the error: “an unknown discovery error has occurred”

I was deploying vCenter Infrastructure Navigator (VIN) in my lab today, and the following error came up when I tried to check dependencies for a virtual machine:

Access failed, an unknown discovery error has occurred

I restarted several services, but nothing seemed to solve it. Internally I bumped into a thread which had the fix for this problem: DNS. Yes, I know – it’s always DNS, right? Anyway, I used DHCP for my VIN appliance, and this DHCP server pointed to a DNS server which did not have the IPs/names of my ESXi hosts listed. Because of this the discovery didn’t work, as VIN tries to resolve the names of the hosts as they were added to vCenter Server. I configured VIN with a fixed IP and pointed the VIN appliance to the right DNS server. Problem solved.
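If you hit the same error, a quick way to confirm the DNS theory is to check – from the appliance’s point of view – whether the ESXi host names actually resolve. A small Python sketch (the hostnames in the comment are placeholders for whatever names your hosts were added to vCenter with):

```python
import socket

def unresolvable_hosts(hostnames):
    """Return the hostnames that the current DNS configuration cannot resolve."""
    failures = []
    for name in hostnames:
        try:
            socket.gethostbyname(name)
        except socket.gaierror:
            failures.append(name)
    return failures

# e.g. unresolvable_hosts(["esx01.lab.local", "esx02.lab.local"])
# Anything returned here is a host VIN's discovery will choke on.
```

Run it on the VIN appliance (or any box using the same DNS server) – any name in the returned list points you at the missing DNS records.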

