RADM David Lewis is the commander of the Navy's Space and Naval Warfare Systems Command (SPAWAR) in San Diego. Since selection to flag rank in 2009, Lewis has also served as vice commander of Naval Sea Systems Command and was program executive officer for ships, where he directed the delivery of 18 ships and procurement of another 51 ships.

He spoke with C4ISR & Networks Editor Barry Rosenberg about production fielding of the Consolidated Afloat Networks and Enterprise Services (CANES) program, which consolidates the Navy's five legacy C4I networks into one cyber-secure computing architecture that integrates voice, video, data and system management functions, as well as the challenges associated with modernization, especially when there's an oxy-acetylene cutting torch involved.

With support of the war fighter a given, what's at the top of your to-do list?

RADM DAVID LEWIS: The first one is sustaining the fleet. The second one is cybersecurity, and the third one is system architectures for C4I systems. In sustaining the fleet, we have a very extensive modernization program for the ships; CANES is one you're familiar with.

It is being done in all of the Navy's home ports, [and we're looking at] how we're organized to oversee and execute that activity. So we've gone into a project management organization at all of our home ports and we have clear accountability and responsibility for ship modernization.

What are some of those other modernization efforts?

LEWIS: Navy Multiband Terminal, commercial broadband, satellite program installations, XP eradication across the force, which is a precursor to CANES. We're upgrading our older systems for cybersecurity. So there's an extensive C4I modernization portfolio going on all the time, and, as I said, CANES is the headliner, if you will.

Another function in Navy [system commands] is in-service engineering agents (ISEAs). We had 52 ISEAs that support our various fielded systems. They do in-service support; they track efficiencies and manage tech assist. In looking at that, we determined that there were too many and they weren't well resourced, so we're in the process of consolidating them down to 16 ISEAs, which are capabilities-based.

For instance, satellite communications will be one ISEA, with five satellite communication systems now supported through one in-service engineering agent. Our expectation in doing this is that we will have better support with the resources we currently have through this smaller number of larger in-service engineering agents. We'll get synergy between systems in the sense that some of the systems are very old and some are very new, and by having a single in-service engineering agent across that spectrum, they will do a better job of managing them.

Have you already begun to neck down from those 52 ISEAs?

LEWIS: We consolidated the five satellite communications ISEAs on the 1st of October as a pilot. It took almost a year to figure out how to do that. We learned a lot about what it was going to take to do that, including the social and the cultural factors as much as anything. Now we're proceeding with the remaining ones over the next year to consolidate them down to 16. I think the fleet will see a big benefit from that in terms of level of support.

And the third priority?

LEWIS: And then the last thing is our tech assist process. We have a tiered support process for when a ship at sea calls for help. What I found was—particularly for new systems or for exotic systems, and we have a lot of those—using that normal process didn't work well. It wasn't responsive for deployed ships.

So we've agreed with the four-star fleet commanders to set up an accelerated tech assist process for certain systems where the expertise just doesn't exist in the normal fleet maintenance infrastructure. My ISEA is the only one in the world that knows anything about that system. So when a ship calls for help, it comes straight to me to support that, which was a breakthrough. We implemented that in both 5th and 6th Fleets. It's been a practice in 7th Fleet, where we didn't have formal documentation but it's worked well, so now we're formalizing that relationship with 7th Fleet.

What role is SPAWAR playing in DoD's enterprise architecture plans, specifically the Joint Regional Security Stacks, which the Navy hasn't yet joined?

LEWIS: We are actually heavily involved in that. The Navy is all in on JRSS, and as you probably know, the Navy, through the [Navy Marine Corps Intranet/Next Generation Enterprise Network] architecture, has the biggest network in DoD. So we built a lot of boundary protection into the NGEN architecture, which duplicates much of what's in JRSS. We're heavily involved in the JRSS requirements architecture for 1.5 to 2.0 specifically so that we can move that protection [from the NMCI/NGEN network] over to JRSS. And we can take it out of the scope of NGEN, since JRSS will take over that function for DoD.

Our chief engineer is very heavily involved in that discussion with PMW 205 [Navy Enterprise Networks] program office and also with the JRSS/JIE folks.

CANES received approval for full-rate production in October, meaning program oversight switched from DoD to the Navy, which can now install CANES across all shipboard, submarine and shore-based systems. Twenty-six installs have been completed since 2013 with 153 remaining. What would you view as the key challenges associated with full fielding?

LEWIS: We had a lot of issues originally just getting it installed on a ship, and getting the schedule and implementation right. We're pretty much through that. Our first carrier install on Stennis took 16 months, the second one on Reagan took nine months and our third one on Vinson is going to take seven months. We're pretty far down the learning curve in terms of getting it on the ship. Now we're looking at compressing our schedules. I'm also looking at how we are going to do our software upgrades and hardware upgrades.

Something that's exciting out there is hyperconverged infrastructure [HCI, an enterprise data center infrastructure that integrates server, storage and virtualization into a modular platform]. It's an architecture that's present in the commercial data center business, and we're looking at it very closely to see if there's an opportunity in CANES to bring it in. We're doing some testing in that area.

Its implementation in the commercial world dramatically reduces the size of a system. CANES is a private cloud on a ship, but it takes up a lot of space and uses a conventional architecture. HCI would reduce the footprint by potentially 50, 60, 70 percent. It dramatically increases available storage and available processing power, so we would have a better cloud that is less expensive to implement. It's also even more secure than CANES.

So there's a lot of potential advantages to HCI and we're watching that very closely. We'd like to be aggressive in that area.

I'm hearing the same sort of modernization discussions at places like the Army's Program Executive Office Command Control Communications-Tactical, where they're looking to reduce the footprint of command-and-control communications gear.

LEWIS: Let me follow up on that a little bit. One of the problems we have in modernization is that we do a lot of our C4I modernization using an oxy-acetylene cutting torch, OK? We've got to cut, and because of that we have what we call a CNO availability, which is a formally scheduled shipyard period for a ship that lasts four or five months, and up to a year and sometimes longer. Because we need to cut and burn to install a piece of C4I equipment, we're tying ourselves to those CNO availabilities, and that restricts the rate and pace with which we can modernize our C4I systems.

So we're working very aggressively on a C4I modernization process that uses a wrench and DVDs or other media to do our upgrades and our modernization. I'd rather pull a drawer out of a rack and bolt a new drawer in place than have to cut off a foundation, pull cable and do all those heavy industrial things.

Because of that, our requirement to do cutting and burning very much restricts and controls our ability to do modernization; so to the extent that we can go to bolted fixtures, to the extent that we can go to drawer replacement in racks rather than installing new racks, [the faster modernization can happen]. Hyperconverged infrastructure is a case in point.

So that's an architecture issue: how do we design the system to be physically installed on the ship? That's a complicated problem for us. We're working through the details on that. The commercial terms for that would be modularity, fixability, adaptability. The driver for CANES is the requirement to do cutting and burning, so that paces our ability to modernize.

If we can get out of a reliance on an industrial availability schedule and move to a faster, less intrusive modernization schedule, I think it would be a good service to our fleet customers and would allow us to upgrade our technology much faster and at a much lower cost.

Where will the contracting opportunities be with SPAWAR in the future?

LEWIS: NGEN is the big one coming. PEO Enterprise Information Systems has already held an industry day on that. We're looking at some alternatives for what that future competition will be and what its scope is. You've already mentioned JRSS. That changes the scope of what we would be doing in that future competition. We're also looking very hard at cloud implementations. We've got a couple of pilots going for cloud access points. We're looking at bringing cloud into NGEN, mainstreaming it if you will, and at mainstreaming mobility into NGEN.

I used to run the data center consolidation work for the Navy. I've moved that to PEO EIS under Victor Gavin so that we could combine what we're doing with data center consolidation and what NGEN already has for data centers. We're already seeing some good synergy there. So I think for the future we're looking at data centers as probably being within the scope of a future competition. So the next NGEN will be very innovative for all the reasons I just went through.
