Archive for October, 2010

Is Price Everything?

Friday, October 29th, 2010

There is scarcely anything in the world that some man cannot make a little worse, and sell a little more cheaply. The person who buys on price alone is this man’s lawful prey.

via John Ruskin Quotes – The Quotations Page.

Is Microsoft ready for post-PC world?

Thursday, October 28th, 2010

I’ve been saying it for a while and now it seems the BBC has cottoned on too. With the advent of Bring Your Own Computer (BYOC) and users wanting to use consumer devices in the workplace, the decline of Microsoft’s operating system dominance is almost guaranteed.

I believe that manufacturers will still ship Windows on new PCs, but far more operating system choices will be available (Citrix has arranged for XenClient to ship on board some models of laptops, widening the choice to load your own). This choice will inevitably lead to lower market share for Microsoft in the consumer operating system world (though I expect them to still dominate the corporate market for some time to come, especially for virtualised desktops). Similarly, with the move to Software as a Service, productivity applications such as Office will be consumed far more online, especially as connectivity increases and becomes more reliable (still no link on the 6pm back from Waterloo though!).

Is this an issue for Microsoft? Not really. Microsoft will be offering Office from the cloud from early next year (http://www.office365.com). This allows them to guarantee they will be paid for the software (they’re offering it free to consumers to ensure that users continue to request Office in the workplace), which will lead to lower prices for customers and higher profits for Microsoft. Conflicting versions will disappear as everyone will be updated together. Bugs will be fixed promptly without needing to be announced, and the platform will be looked after by the people writing the software, making it a more stable solution.

The same will hold true for Exchange, SharePoint and Lync.

So: always available, free upgrades, built-in anti-virus, no upkeep or support costs, no hardware, bug free and cheap. Who wouldn’t want to transfer across?

In my view, it’s a changing world and Microsoft is probably running to catch up, but it will get there, albeit as a smaller company making more money than before.

HP Ships Windows 7 on Slate 500 Tablet PC

Wednesday, October 27th, 2010

Will it be an iPad killer or a damp squib? Read the overview at => HP Slate 500 Tablet PC overview – HP Small & Medium Business products.

The 10 Pains you Solve by Using Application Virtualization

Thursday, October 21st, 2010

This webcast features independent industry analysts Brian Madden and Gabe Knuth as they discuss application virtualization. Learn how this technology works and how it can simplify the role of IT administrators. Gain expert insight into the top ten problems you can solve with application virtualization. 

Click here to view.

System Center Configuration Manager 2007 R3 unleashed!

Tuesday, October 19th, 2010

Below is a quick summary of what’s new with R3. Go here to download the evaluation software.

Centralized Power Management

Configuration Manager R3 lets IT organizations centrally manage the power settings of Windows 7, Vista and XP computers, helping reduce energy consumption and costs. You can plan and apply a power management policy for high and low PC usage periods, monitor user activity to avoid any productivity interruptions and correct non-compliance. Detailed reports of trends and settings help you make smart power management choices, and also validate Green IT projects with summaries of power, money and CO2 savings. Learn more here.

Mobile Device Management

Configuration Manager R3 includes licenses for the popular System Center Mobile Device Manager, so you can run comprehensive asset inventories, deploy software, manage settings and enforce password policies for Windows phones.

Enhanced Scalability and Performance

Configuration Manager R3 is more scalable than ever, increasing the number of supported clients to 300,000 per site. R3 is also more efficient in the way it communicates with Active Directory, helping you discover user or machine changes more quickly and allowing custom queries to define user, system or group attributes.

Read more about SCCM 2007 R3 here.

Backing up in a virtualized environment

Monday, October 18th, 2010

A BrightTALK Channel

XenDesktop, Hyper-V and System Center Resources » ocb – Citrix Community

Tuesday, October 12th, 2010

Want to know how to run XenDesktop under Hyper-V? A full set-up guide for evaluating this fantastic technology is here:

XenDesktop, Hyper-V and System Center Resources » ocb – Citrix Community.

Virtual Server Sprawl

Wednesday, October 6th, 2010

You’re planning on going virtual with your servers. Everything is going well. You have your hypervisor deployed and you’ve converted your servers from physical to virtual. You’ve “lived the dream” and put your dev and test environment on there as soon as you could and allowed other people to create the virtualised servers they needed. Life is good.

But wait (enter the sound of screeching tyres). Is this so good, and what can it mean for the business? After all, this is why you virtualised, right? To be able to save costs and deploy servers quickly. Only, without the financial constraints that stopped additional servers being provisioned in the physical world, there was nothing to hold anyone back in the virtual world. If things have got out of hand then, let’s face it, you trusted these people to retire their servers and they’ve let you down. If this sounds familiar then you are no doubt a victim of “Virtual Server Sprawl“.

One of the benefits of virtualisation was to be able not just to load multiple services onto one instance of an operating system (physical server) but to run multiple instances of the operating system and dedicate each of those instances to specific services. Server virtualisation also promised to make those services more highly available with vMotion, LiveMotion, XenMotion, etc.

Virtual Server Sprawl is not the creation of additional servers; that was always expected and planned for in any virtual server migration. Virtual Server Sprawl results from “the uncontrolled creation, administration and lifecycle management of virtual servers“. The important word here is uncontrolled. Out of hand, virtual server sprawl can become a nightmare for the server team, with issues arising around licensing, maintenance, backups and security as well as environment stability. All of this can translate into cost that erodes the very savings server virtualisation was meant to realise. As Thomas Bittman of Gartner put it, “Virtualisation without good management is more dangerous than not using virtualisation in the first place”.

I use the three-stage definition as it highlights the areas that need to be controlled to prevent Virtual Server Sprawl and, as always, these issues reflect those in the physical server world. It’s just that with physical servers IT departments have had physical constraints placed on them to prevent physical server sprawl (finance, the physical number of servers, limited power and cooling in the data centre, separation of the dev/test LAN from the main network) and frequently these limitations have not been re-imposed in the virtual world.

Let’s look at each of the three areas in turn:

Creation

Uncontrolled creation of servers arises when servers can be deployed at will with no consequences. Generally this can be handled by process and, if an automated provisioning process is used, by assigning users “credits” that reflect the number of machines they can create. This also helps with lifecycle management, in that users will be more willing to retire servers which are no longer needed in order to reclaim credits. It’s not enough just to assign credits to users, though. Internal IT will not be constrained by credits and, as above, will create additional servers to provision additional services. As there is no capital expenditure (CapEx) procedure to follow, they are more likely to add additional servers, and if the virtualised environment supports memory overcommit then “that’s OK, we have almost unlimited memory”. The truth is more likely to be that the virtualised environment will page out RAM more, shared disk sub-systems may hit performance issues and shared network connections may be overwhelmed. By having some sort of process that requests, authorises, provisions and reports on virtual server provisioning, these issues can be minimised.
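The credit idea can be sketched in a few lines of Python. This is a hypothetical illustration only; the class and method names are mine, not any product’s, and a real provisioning tool would track this in its workflow engine.

```python
class ProvisioningQuota:
    """Toy credit ledger for virtual server provisioning.

    Each user gets a fixed number of credits; creating a VM spends
    one and retiring a VM refunds it, nudging owners to clean up
    machines they no longer need.
    """

    def __init__(self, credits):
        self.credits = credits
        self.vms = set()

    def create(self, name):
        if self.credits == 0:
            raise RuntimeError("no credits left - retire a server first")
        self.credits -= 1
        self.vms.add(name)

    def retire(self, name):
        self.vms.remove(name)
        self.credits += 1


quota = ProvisioningQuota(credits=2)
quota.create("test-web-01")
quota.create("test-db-01")      # all credits now spent
quota.retire("test-web-01")     # reclaim a credit...
quota.create("test-web-02")     # ...which funds the next request
```

A third request without a retirement would be refused, which is exactly the back-pressure the physical world used to provide for free.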

Administration

Once a server is provisioned, who maintains it? As the server provisioning process is accelerated, is the server deployed in a secure way, and what effect will it have in a production environment? If server deployment is delegated outside of the core server team, will firewalls be turned off “because it’s easier”? Will the servers be placed in the correct OU in Active Directory to have the right policies applied? If not, will the anti-virus server deploy the engine and patterns, and will appropriate exceptions be applied for the role of the server? Indeed, will anyone have made arrangements to back up the server, or will all VMs be backed up by default even if they hold no data (web servers) or are only used for testing? Who will patch the servers? These issues can be mitigated by the use of virtual machine templates that:

  • include all the latest kernel updates, service packs and patches for the operating system;
  • add the provisioned machine automatically to an OU appropriate to its role (messaging, database, file, web etc) so that the correct GPOs can be applied;
  • assign an appropriate amount of hard disk space, CPU, RAM etc;
  • have the basics of the anti-virus solution correctly pre-installed so that engines and patterns can be downloaded and updated on first boot.
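As a rough illustration, such a template could be modelled as a simple record. The fields mirror the checklist above; the names and values are hypothetical examples, not any vendor’s format.

```python
from dataclasses import dataclass


@dataclass
class VmTemplate:
    """Hypothetical per-role VM template record."""
    role: str             # e.g. "web", "database", "messaging"
    os_baseline: str      # OS with current service packs and patches baked in
    ad_ou: str            # OU placement so the correct GPOs apply on first boot
    vcpus: int            # sized for the role, not "as much as fits"
    ram_gb: int
    disk_gb: int
    av_preinstalled: bool = True   # engine/patterns update on first boot


web_template = VmTemplate(
    role="web",
    os_baseline="Windows Server 2008 R2, current patch baseline",
    ad_ou="OU=Web,OU=Servers,DC=example,DC=com",
    vcpus=2,
    ram_gb=4,
    disk_gb=40,
)
```

Provisioning from records like this, rather than from blank installs, means every delegated deployment starts patched, policied and protected.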

Lifecycle Management

Let’s assume that the server was authorised for creation and provisioned without issue. However, it was only meant to be used for 30 days to test out a scenario. If the server is never retired it continues to consume resources on our physical host: not only memory but possibly also expensive shared storage space. If the virtual server has software installed on it (extremely likely, even if it is just an operating system) then it may be running unlicensed if its licence has been transferred or re-assigned to another machine (either its replacement in production or another test machine). In the physical world, servers would be retired to be repurposed, or retired completely to reclaim space in the server rack, regain network ports or return scarce power sockets to use. These self-correcting constraints on IT teams’ use of physical servers don’t exist in our virtualised environment. Basic change control and reporting processes can limit the effect of virtual machines persisting beyond their useful lifetime.
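A minimal sketch of the reporting side, assuming each VM is tagged at creation with an agreed lifetime (the inventory and field names here are hypothetical):

```python
from datetime import date, timedelta

# Hypothetical inventory: each VM records when it was approved and
# how many days it was meant to live (None = permanent).
vms = [
    {"name": "scenario-test-01", "created": date(2010, 9, 1), "ttl_days": 30},
    {"name": "prod-web-01",      "created": date(2010, 6, 1), "ttl_days": None},
]


def overdue(inventory, today):
    """Return the names of VMs that have outlived their agreed lifetime."""
    return [
        vm["name"]
        for vm in inventory
        if vm["ttl_days"] is not None
        and today > vm["created"] + timedelta(days=vm["ttl_days"])
    ]


print(overdue(vms, date(2010, 10, 18)))  # ['scenario-test-01']
```

A weekly report from a query like this, sent to each server’s owner, is often all the change control needed to keep 30-day test machines from becoming permanent residents.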

The issues around Virtual Server Sprawl are readily identifiable and easy to anticipate.

  • Increased paging of RAM due to overcommitment / over usage of physical RAM in host
  • Reduction in available storage space which may be expensive in shared storage environments
  • Additional network traffic and possibly incorrect assignment of servers to appropriate VLANs
  • Security vulnerabilities with machines incorrectly configured or patched
  • Incorrect policies assigned due to mis-placement in AD
  • Lack of backup, DR or business continuity
  • Possibility of licensing issues with applications on redundant servers

There are many software packages you can buy to help, but they are only as good as their configuration and how rigorously they are used. For those on a budget, Virtual Server Sprawl is reasonably easy to control with forward planning:

  • Process & Authorisation
  • Configuration & Templates
  • Monitoring & Reporting

Should Virtual Server Sprawl stop you from virtualising your environment? In my opinion, absolutely not, but you definitely need to be aware of its existence and plan accordingly.

Backup throughput metrics

Monday, October 4th, 2010

When planning your backup strategy you need to consider the path that data takes to your backup servers and from there to your tape drives or disks. The slowest point in that chain determines just how fast your backups can run.

Below are some metrics giving rough throughputs for networks, backup devices and so on, to help you plan your backups and know in advance what sort of throughput you are likely to get. All you then need to do is multiply the rate of the slowest link in your chain by the number of hours in your backup window, and you know just how much data you will be able to back up. Remember that bonding cards together, or adding backup devices to a tape library, will increase your data throughput (simply multiply the figures below).
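That arithmetic can be sketched in a few lines; the function name is mine and the figures are illustrative values taken from the tables that follow (using 1 GB = 1000 MB, as the tables do).

```python
def backup_capacity_gb(slowest_rate_mb_per_sec, window_hours, streams=1):
    """Estimate how much data fits in a backup window.

    The chain is only as fast as its slowest component, so multiply
    that rate by the window length (and by the number of bonded
    links or parallel drives, if any).
    """
    gb_per_hour = slowest_rate_mb_per_sec * 3600 / 1000
    return gb_per_hour * window_hours * streams


# Gigabit Ethernet at a realistic 87.5 MB/sec (315 GB/hr) over an
# 8-hour backup window:
print(backup_capacity_gb(87.5, 8))  # 2520.0 GB
```

If the data you need to protect exceeds that figure, either the window grows, the link gets faster, or you add parallel streams.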

Network Transfer Rates

Network Type   Theoretical Rate         Realistic Throughput   Realistic Rate
10 Base-T      10 Mbps (1.25 MB/sec)    40–50 percent          500 KB/sec (1.8 GB/hr)
100 Base-T     100 Mbps (12.5 MB/sec)   80 percent             10 MB/sec (36 GB/hr)
1 Gigabit      1000 Mbps (125 MB/sec)   70 percent             87.5 MB/sec (315 GB/hr)

SCSI Transfer Rates

Version            Bus Width   Approximate Maximum Data-Transfer Rate
Wide Ultra SCSI    16 bits     40 MB/sec (144 GB/hour)
Ultra2 SCSI        8 bits      40 MB/sec (144 GB/hour)
Wide Ultra2 SCSI   16 bits     80 MB/sec (288 GB/hour)
Ultra160 SCSI      16 bits     160 MB/sec (576 GB/hour)
Ultra320 SCSI      16 bits     320 MB/sec (1,152 GB/hour)

Fibre Channel Transfer Rates

Version         Bus Width   Approximate Maximum Data-Transfer Rate
Fibre Channel   1 Gbps      100 MB/sec (360 GB/hour)
Fibre Channel   2 Gbps      200 MB/sec (720 GB/hour)
Fibre Channel   4 Gbps      400 MB/sec (1,440 GB/hour)
Fibre Channel   8 Gbps      800 MB/sec (2,880 GB/hour)

Tape Drives

Device Type Approximate Transfer Rate Maximum Capacity
DDS-4 6.0 MB/sec or 21.6 GB/hour 40GB
AIT-2 12.0 MB/sec or 43.2 GB/hour 100 GB
AIT-3 31.2 MB/sec or 112.3 GB/hour 260 GB
DLT 7000 10.0 MB/sec or 36.0 GB/hour 70 GB
DLT 8000 12.0 MB/sec or 43.2 GB/hour 80 GB
Super DLT 24.0 MB/sec or 86.4 GB/hour 220 GB
Mammoth-2 24.0 MB/sec or 86.4 GB/hour 160 GB
Ultrium (LTO) 30.0 MB/sec or 108.0 GB/hour 200 GB
IBM 9890 20.0 MB/sec or 72.0 GB/hour 40 GB
IBM 3590E 15.0 MB/sec or 54.0 GB/hour 60 GB
LTO2 68.0 MB/sec or 245.0 GB/hour 400 GB
LTO3 160.0 MB/sec or 576.0 GB/hour 800 GB
LTO4 240.0 MB/sec or 864.0 GB/hour 1.6 TB
DLT-S4 320.0 MB/sec or 1,152 GB/hour 1.6 TB

So, don’t buy a DLT-S4 drive to run over a 1 Gbps link; you may as well save some money and buy a less capable drive, as your link will slow you down. And do consider streaming multiple backups to the same backup device, or using multiple older backup devices, to take advantage of faster links.
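To make that point concrete, here is a quick bottleneck check over a hypothetical chain; the rates come from the tables above, and the labels are my own.

```python
# Hypothetical backup path: client network link -> SAN -> tape drive.
# Rates in MB/sec, taken from the tables above.
chain = {
    "1 Gigabit link (realistic)": 87.5,
    "4 Gbps Fibre Channel": 400.0,
    "DLT-S4 tape drive": 320.0,
}

# The slowest component sets the pace for the whole backup.
bottleneck = min(chain, key=chain.get)
print(f"Bottleneck: {bottleneck} at {chain[bottleneck]} MB/sec")
```

The drive could accept 320 MB/sec, but the network only delivers 87.5, so the extra drive capability is wasted money here.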