Thomas Sasala has successfully led the Army through a virtualized desktop infrastructure (VDI) program that is currently being scaled from pilot phase to more than 20,000 users throughout the Pentagon. The program holds promise for the rest of the Defense Department and the broader federal government as agencies look to IT for efficiencies, savings and security.

Sasala recently sat down with Federal Times Senior Staff Writer Amber Corrin to discuss the successes so far and where
the VDI program is headed next.

Where does Army ITA's VDI program currently stand? How is the Army benefiting?

From a desktop perspective, we think it is a game-changer for us. There are really two fundamental reasons for that: patch management, and the broader work of managing, monitoring and maintaining the environment.


In terms of specific data about where we are today, we currently manage 18,000 desktops; there are well over 50,000 desktops in the Pentagon. Typically it takes us two to three weeks when a security patch comes out to get the environment up to snuff, and we still always have cats and dogs where we don't get patches applied to the desktop. With VDI we can take a single master image, patch that and then test and recompose overnight. Users get a fresh desktop in the morning that's patched. It takes those two to three weeks down to 24 hours. It's a robust platform for us to launch from, from a cybersecurity perspective. We can secure the environment very quickly when, say, a zero day is discovered.
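
The overnight cycle he describes follows a recognizable pattern: patch one master image, validate it, then rebuild every desktop from it. Below is a minimal sketch of that flow in Python; the pool name and helper functions are hypothetical placeholders, not the Army's actual tooling or any vendor's API.

```python
# Illustrative only: a hypothetical nightly patch-and-recompose flow for a
# VDI pool. Function and pool names are placeholders, not a real vendor API.

import datetime

def snapshot_master(image: str) -> str:
    """Placeholder: snapshot the golden image before patching, for rollback."""
    return f"{image}-snap-{datetime.date.today()}"

def apply_patches(image: str) -> None:
    """Placeholder: apply the latest security patches to the master image."""
    print(f"patching {image}")

def smoke_test(image: str) -> bool:
    """Placeholder: boot a test clone and verify core applications launch."""
    print(f"testing {image}")
    return True

def recompose_pool(pool: str, image: str) -> None:
    """Placeholder: point every desktop in the pool at the new image."""
    print(f"recomposing {pool} from {image}")

def nightly_recompose(pool: str, master: str) -> None:
    # Patch once, test once, then refresh every desktop overnight: the step
    # that collapses a two-to-three-week patch cycle into roughly 24 hours.
    backup = snapshot_master(master)
    apply_patches(master)
    if smoke_test(master):
        recompose_pool(pool, master)
    else:
        print(f"patch failed validation; pool left on {backup}")

nightly_recompose("pentagon-pool-01", "win-master-image")
```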

We still have the need for traditional desktops and laptops, and we still have mobile devices that are a concern. But 80 to 90 percent will be virtualized, and we'll get those benefits at mass scale, which represents significant savings in terms of labor and energy.

How do you track savings in a program like this?

It's complicated to figure out what the total cost of ownership for a system is, and when you're making fundamental changes to the way a system works, it's challenging to really estimate in advance the differences you're going to experience. One thing we did is try to baseline the existing environment in terms of performance, utilization and other less-tangible things. Once we get VDI fully deployed and populated, we'll pull together the same data, compare and contrast, and show the differences.
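
The baseline-then-compare approach he outlines reduces to capturing the same metrics before and after migration and diffing them. A minimal sketch, with metric names and values invented purely for illustration:

```python
# Illustrative only: comparing a pre-VDI baseline against post-migration data.
# Metric names and values are invented for the example.

baseline = {"cpu_util_pct": 12, "watts_per_seat": 85, "admins_per_1000_seats": 14}
post_vdi = {"cpu_util_pct": 55, "watts_per_seat": 22, "admins_per_1000_seats": 6}

for metric, before in baseline.items():
    after = post_vdi[metric]
    change = (after - before) / before * 100
    print(f"{metric}: {before} -> {after} ({change:+.0f}%)")
```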

Another challenge is that government contracting and the pool of labor are mostly on firm, fixed-price contracts. We've hired vendors and integrators to do a certain type of work at certain costs, so it's challenging, if not impossible, to get that money back initially. Once we start renegotiating and re-competing labor contracts using VDI and other virtualized infrastructure as the basis, we can start asking vendors to take a hard look at the density of system administrators to servers and desktops that they're proposing … and gauge potential savings.

In certain circumstances we have seen significant savings from a labor perspective, because vendors have acknowledged … administration is remotely done with what's called "light touch," with no need to physically touch desktops anymore.

When money isn't spent for an original requirement, then we take it back to the investment review board and ask if it [can be used] for another requirement or just pushed somewhere else. In VDI, even with fewer desktop administrators installing software, we now have new requirements to remotely provision software for the new architecture. We can't tell yet if that's identical in scope and size … it becomes complicated quickly on the desktop side, compared with servers.

There are relatively few software packages intended to run on servers, but there are thousands that run on desktops.

What are some of the challenges that spurred the move to VDI?

We have an extremely large community of users with highly diverse requirements, so we were really trying to meet their individual requirements on a one-on-one basis, whereas with VDI we're trying to address 80 percent of requirements in a unified manner and treat one-off requirements as one-offs. It's a little about expectation management: If the user expects to receive a personal application, you need to manage what it means to get that application. Do they really need it, or can they use another that's already virtualized and does essentially the same thing?

Effectively, when 80 to 90 percent of certain tools all do the same thing, why wouldn't we whittle it down to one or two tools, virtualize those, and get a site license or enterprise license?

The same functional challenges exist across the public sector, where people are spending a lot of time trying to address requirements individually, and they're not getting any sort of network effect or bang for the buck. We're trying to address the preponderance of the requirements in one fell swoop and pick off individual needs as they come up.

Are there new challenges that have been introduced by moving to VDI?

One of the reasons we did the pilot was to figure out issues unique to us. Some issues turned out to be relatively minor; some not so minor. At the end of the day, VDI relies on the network and the data centers … on the connection from the back end to the network infrastructure, and on the power within the building.

We found performance issues that we had to resolve, and because the Pentagon is a complex environment we had to get to the root cause. We also had to introduce a lot of redundancy and resiliency into the design to address load balancing and automatic re-routing. There were other problems after the pilot that we didn't anticipate, such as individual virtual desktops consuming more resources than others without an obvious reason. It turned out we did not adequately investigate users' typical usage patterns: some were system administrators who spent all day looking at log files, so what we were seeing was people remoting into other machines, generating huge amounts of traffic and using up large amounts of bandwidth.

So we came up with the T-shirt model, a "small/medium/large" profile of users.
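
That T-shirt model is essentially a lookup from observed usage to a resource profile. A sketch of the idea in Python, with thresholds and allocations invented for illustration:

```python
# Illustrative only: assigning small/medium/large VDI profiles from observed
# usage. Thresholds and allocations are invented for the example.

from dataclasses import dataclass

@dataclass
class Profile:
    name: str
    vcpus: int
    ram_gb: int

PROFILES = {
    "small": Profile("small", 1, 2),    # light office work
    "medium": Profile("medium", 2, 4),  # typical knowledge worker
    "large": Profile("large", 4, 8),    # heavy users, e.g. admins tailing logs
}

def size_user(avg_bandwidth_mbps: float, avg_cpu_pct: float) -> Profile:
    """Pick a T-shirt size from usage measured during the pilot."""
    if avg_bandwidth_mbps > 20 or avg_cpu_pct > 60:
        return PROFILES["large"]
    if avg_bandwidth_mbps > 5 or avg_cpu_pct > 25:
        return PROFILES["medium"]
    return PROFILES["small"]

# A system admin who remotes into servers all day lands in "large".
print(size_user(avg_bandwidth_mbps=35.0, avg_cpu_pct=40.0).name)
```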

What's next for the program?

The pilot was officially completed in December of last year; we're about 50 percent through the migration of about 2,000 users. The Joint Staff wants to add an additional 5,000 users; meanwhile, Department of the Army Headquarters, which has 12,500 users, wants to go to virtual desktops, too. The original deployment was for 2,000 users, and now the requirement is for 20,000-plus, so that's roughly 10 times bigger than the original plan. There's also the possibility of [Office of the Secretary of Defense] joining in, as well as the Marine Corps. It's not dissimilar from enterprise email, where others see the advantages and get to benefit from bumps experienced by [early adopters]. We didn't have as many bumps as enterprise email, but we still had challenges.

We started migrating pilot users for Army headquarters in early September to see what the picture looks like. We will go to full operations within months and spend the next year to a year and a half migrating everyone. We're targeting a migration of about 100 users per week … if we see anything wonky we can put the brakes on migration. The critical threshold was around 1,000 users, to see if everything was scaling linearly, and it is, as we expected, which is good news. The next thresholds will be 5,000 and 10,000 users, and we're hoping it will continue to scale predictably instead of hitting some kind of wall.
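
Checking that the environment scales linearly at the 1,000-, 5,000- and 10,000-user thresholds can be as simple as extrapolating per-user consumption from the baseline and flagging deviations. A sketch with invented numbers:

```python
# Illustrative only: testing whether resource use grows linearly with users.
# The per-user baseline and observations are invented for the example.

BASELINE_USERS = 1000
BASELINE_CPU_CORES = 400          # cores consumed at the 1,000-user checkpoint
TOLERANCE = 0.15                  # allow 15% deviation before flagging

per_user = BASELINE_CPU_CORES / BASELINE_USERS

# users -> cores actually consumed at each threshold
observations = {5000: 2080, 10000: 4100}

for users, cores in observations.items():
    expected = per_user * users
    deviation = (cores - expected) / expected
    status = "OK" if abs(deviation) <= TOLERANCE else "possible wall"
    print(f"{users} users: expected {expected:.0f} cores, saw {cores} "
          f"({deviation:+.0%}) {status}")
```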

Editor's Note: This story was originally published on Oct. 20, 2014.

