Can DevSecOps Undo DoD's Software Acquisition Failures?

The Department of Defense (DoD) has been developing software-intensive systems for the last thirty years. Only in the past decade has the Department openly recognized that these software-intensive systems are critical to the future of U.S. National Security. DevSecOps, short for Development, Security, and Operations, is one of the hottest commercial information technology (IT) trends. Continue reading

Posted in Cybersecurity, DoD IT Acquisition, Technology Evolution | 22 Comments

Part 2: Will Artificial Intelligence (AI) Help Maintain U.S. Naval Superiority over China’s Growing Naval Power?

As discussed in Part 1 of this AI discussion, China's central government plans to achieve AI breakthroughs by 2025 and world AI dominance by 2030. If DoD's acquisition track record doesn't improve, it could be twenty years before significant AI technology is actually deployed to military units. Continue reading

Posted in DoD IT Acquisition, Global Perspectives, National Security, Technology Evolution, U.S. Navy Capability | 13 Comments

Part 1: Will Artificial Intelligence Help Maintain U.S. Naval Superiority over China's Growing Naval Power?

Background

It is hard to pick up a magazine or newspaper today without seeing something about the amazing things artificial intelligence and/or machine learning (AI/ML) are doing to change our lives for the better. Most people enjoy the benefits of talking to their computers, cars, and home devices like Google Home and Amazon Alexa, but don't think about or care that these technologies are enabled by natural language processing (NLP), one of today's most advanced forms of AI/ML. It is even possible today to buy real-time language-translation earbuds that make visits to foreign countries easier to navigate, bringing the Star Trek universal translator that much closer to reality (https://www.startrek.com/database_article/universal-translator). Continue reading

Posted in DoD IT Acquisition, Global Perspectives, National Security, Technology Evolution, U.S. Navy Capability | 28 Comments

Effective DoD Acquisition Needs Less Noise … Part 2

The well-known valley of death between DARPA or Military Service Science & Technology (S&T) development and military Programs of Record (PORs) is the result of the high-entropy, high-noise channel that sits between S&T and the bureaucratic DoD acquisition system. That noise can be traced through today's Planning, Programming, and Budgeting System (PPBS), the Federal Acquisition Regulation (FAR), the Operation of the Defense Acquisition System instruction (DoD Instruction 5000.02), and the Joint Capabilities Integration and Development System (JCIDS). Each is now a large bureaucracy, and together they make up the majority of the 25,000-person army of centralized DoD oversight operating from the Pentagon.

The root of these centralized policy and process changes can be traced back to Secretary of Defense Robert McNamara's Whiz Kids, introduced into the Pentagon in 1961. Drawing on his automotive-executive background, he brought modern economic analysis, operations research, game theory, computing, and modern management systems to the Department. His invention, the PPBS, was supposed to introduce unprecedented budget transparency and pinpoint management responsibility for weapon system acquisition.

SECDEF Robert McNamara, Jan 1961 - Feb 1968

Since the introduction of McNamara's PPBS, each additional high-noise policy and process change has further centralized Defense acquisition in the name of greater efficiency and tax-dollar effectiveness. The most sweeping changes came from the 1986 Goldwater-Nichols (GN) Department of Defense Reorganization Act, which made two major changes law. First, it decentralized military operations by streamlining the chain of command from the President through the Secretary of Defense directly to Combatant Commanders, thereby bypassing the Service Chiefs. Second, it centralized acquisition by directing the establishment of the Office of the Under Secretary of Defense for Acquisition, USD(A), similar Component Acquisition Executives (CAEs), and Program Executive Officers (PEOs) who manage groups of related Program Managers. This sweeping acquisition change finalized the full separation of acquisition from the military Service Chiefs, who were left with the authority only to set requirements and provide acquisition budgets. The GN acquisition-side change has led to the unintended consequence of today's high-entropy, high-noise, over-regulated acquisition system that needlessly squanders National Security budgets on long-timeline, too-big-to-fail acquisition programs such as the F-35 Joint Strike Fighter.

How do we know that Gilder's bureaucratic noise theory explains reduced acquisition effectiveness? One only has to observe that every Secretary of Defense since Goldwater-Nichols has announced an intention to reduce acquisition inefficiencies.

“Today’s defense acquisition system is a product of decades of reform initiatives, legislation, reports and government commissions. Major reform efforts began in earnest in the 1960s with Secretary of Defense Robert McNamara. His main reform efforts centralized control within the Office of the Secretary of Defense (OSD) and created the Planning, Programming and Budgeting System for resource allocation. Throughout the latter half of the 20th century, each administration left its own mark on defense acquisition, focusing primarily on the acquisition process itself, as well as Department of Defense management. However, many of the reforms recycled various schemes to shift decision-making authority from the services to OSD, realign oversight and accountability responsibilities, and alter the process (adding and removing milestones, phases and so forth). Despite these initiatives, cost and schedule growth continue.”

The New Acquisition Reform Effort: Back to the Future – William Lucyshyn

The result has been several rewrites of the DoD 5000 instruction, and the system hasn't gotten better… it has gotten worse! In addition, the Department has better trained the acquisition workforce through the Defense Systems Management College (DSMC), now the Defense Acquisition University (DAU), where all PMs and PM staff members are required to complete DAU training. Despite all of these well-intentioned improvements, the acquisition oversight staffs have gotten bigger, defense contractors have grown larger staffs to produce the required oversight documentation, and an entirely new class of support contractor has emerged to feed the oversight document and process requirements of program offices. And after all of that, defense system acquisition can show no measured improvement in delivered, effective systems. What it can show is performance, cost, and schedule failure after failure!

F-117 Nighthawk

As discussed in Part 1 of this post, we can see from DoD's history that a previously lower-noise, lower-bureaucracy acquisition channel enabled the human-inspired entrepreneurial surprise of nuclear-powered SLBM submarines developed and delivered in less than a decade, or AEGIS multi-warfare missile defense ships developed and delivered in thirteen years, just to name two pre-GN programs. Post-GN, the only remarkable acquisition achievements have been made as special programs like the F-117 stealth aircraft, which achieved IOC in six years following the DARPA proof-of-concept HAVE Blue aircraft. That program avoided the standard high-noise acquisition path by leveraging DARPA's low-noise development activities and extending them through special low-noise, high-profile acquisition activities.

If there is any good news for future DoD acquisition, it is that Congress, through the National Defense Authorization Acts (NDAAs) of 2016-2018, has passed several new laws designed to regain DoD's technical edge and empower Program Managers to speed up acquisition and take advantage of as much commercial technology development as possible. One result of these law changes has been the breakup of the Office of the Under Secretary of Defense for Acquisition, Technology & Logistics into two acquisition Under Secretaries: one for Research and Engineering, and a second for Acquisition and Sustainment.

The goal of the USD for Research and Engineering, the Department's Chief Technology Officer, is to empower the adoption of breakthrough technologies, extend the capabilities of current warfighting systems, counter strategic surprise, and develop policies for rapid technology transition. One of the most empowering changes is NDAA Section 804, which authorizes Middle-Tier acquisition and the use of Other Transaction Authority (OTA) Agreements for rapid prototype development. Middle-Tier acquisition creates a low-noise channel: it allows prototype-to-IOC development in five years under an OTA Agreement and, if successful, authorizes production from IOC to Full Operational Capability (FOC) within another five years.

The more traditional USD for Acquisition and Sustainment supervises all DoD acquisition, including procurement of goods and services, while empowering the Military Services by moving all major Service programs under the acquisition control of the Component Acquisition Executives. This change reduces the drive for Joint programs and removes some of the bureaucratic noise by leaving the Services in control of their high-value acquisition programs. What these changes do not yet address is that all acquisition remains under the control of the CAEs, continuing the GN practice of excluding Military Service Chiefs from the acquisition of the materiel their forces must use to fight and win.

While these recent NDAA law changes are working to lower the acquisition process noise, they could reduce it further by putting the PEOs and PMs back under the systems commands, or under a single senior military acquisition head reporting to both the Service Chiefs and the CAE. Such a change would empower military leaders to get more engaged in acquisition activities while giving them the new tools of Middle-Tier acquisition to push major program acquisition back toward post-WWII timelines.

In the meantime, Military Service leaders are challenged by the potential for Artificial Intelligence (AI) to become a new battlefield game changer. While they cannot yet directly influence the acquisition of AI capabilities, they can help to align requirements and funding support toward Middle-Tier AI acquisitions. Given the near-peer pressure on the military adoption of AI, this is an area that needs a low-noise acquisition channel if it is going to effectively help introduce AI capabilities into military platforms, command & control systems, and weapon control systems.

As I wrote in a previous blog post, Navy Acquisition Can Easily Be Fixed, lowering acquisition bureaucratic noise could begin with these easy-to-implement policy and practice changes that require no Congressional approval:

  1. Speed to Capability;
  2. Passive Oversight;
  3. OTA Prototype to Production; and,
  4. Strong Technical Leadership.

These low-noise channel changes would reduce oversight, help deliver stable funding for shorter development timelines, and empower high-performing program managers with full authority and accountability, while reducing the inefficiency of drawn-out multi-year contract awards followed by the inevitable FAR contract protests.

… and because commercial AI is making such significant progress, AI is an excellent place to apply low-noise acquisition practices to ensure timely sustainment of our Nation’s military capability.

Posted in DoD Acquisition, DoD IT Acquisition, Global Perspectives, Leadership, Technology Evolution | 17 Comments

Effective DoD Acquisition Needs Less Noise! … Part 1

Claude Shannon – Father of the Information Age

To fix DoD acquisition, one only has to leverage George Gilder's brilliant adoption of Claude Shannon's information theory as an economic growth engine. Shannon's landmark 1948 paper, "A Mathematical Theory of Communication," defines information as surprise. For his mathematics and research he is credited as the father of the information age. He taught the world that what is transmitted across a communication channel (a wire, a fiber, cell towers, or a human network) is information, or surprise, at the receiving end precisely because it was unknown before it was sent. If we could predict the information communicated across a channel, it would have little or no value.*

“Information theory treats human creations or communications as transmissions through a channel, whether a wire or the world, in the face of the power of noise, and gauges the outcomes by their news or surprise… Now it is ready to come out into the open and to transform economics as it has already transformed the world economy itself.”

Gilder, George. Knowledge and Power: The Information Theory of Capitalism and How it is Revolutionizing our World

An ideal Shannon communications channel is a low-noise, predictable carrier, such as an undersea fiber-optic cable delivering a coded signal that, if decoded correctly at the other end of the channel, delivers new information as surprise. Gilder applies the idea of information surprise to economic growth, arguing that all useful economic growth is the result of human-inspired entrepreneurial surprise delivered across a global government/business low-noise channel of predictable laws, reasonable taxes, and supportive infrastructure.

Gilder illustrates this by citing successful entrepreneurs like Bill Gates and Steve Jobs, whose successes helped propel the information age onto the world stage. The multiplier effect of Microsoft's desktop operating system and Apple's Macintosh desktop computer, and later the iPhone, delivered entrepreneurial surprises that helped catalyze the world's move from the industrial age into the information age. These surprises were not the result of government-identified requirements or consumer-identified demand, but rather of a stable, low-noise government system enabling entrepreneurs like Gates and Jobs to bring surprising products to market. In macroeconomic theory this is known as supply-side economics, because the supply of surprising new goods and services is the primary driver of economic growth, which in turn grows human knowledge, creates jobs and wealth, and enables synergistic opportunities, such as mobile phone applications, to grow exponentially.

Supply-side economics competes politically with demand-side economics, which holds that the government-controlled money supply (managed in the United States by the Federal Reserve) can induce consumer demand, in turn multiplying economic activity and job growth. Government spending during the Great Depression, and again following the 2007-2008 financial crash, is held up as proof of the value of demand-side economics.

Gilder's 2013 book, "Knowledge and Power," explores this dichotomy in depth and rejects demand-side economics as a high-noise economics, because it relies on government control of the money supply, high taxes, and restrictive regulations to modulate economic carrots and sticks. Gilder argues that such government control and over-regulation is effectively Shannon's high-noise channel (high taxes, over-regulation, money manipulation), economic noise that disincentivizes high-surprise entrepreneurs from bringing inspired new products and services to the marketplace. The reason, he argues, is that centralized, limited-knowledge control never succeeds as well as human-inspired distributed knowledge and the resulting surprise of entrepreneurial invention. His demonstration proof is the twentieth-century centralized Marxist governments that pushed their people into poverty and left 100 million dead citizens in their wake, with little to no entrepreneurial surprise. Because human knowledge is always dispersed, Gilder argues that economic power must also be dispersed to effectively tap the power of entrepreneurial surprise:

“Enforced by genetics, sexual reproduction, perspective, and experience, the most manifest characteristic of human beings is their diversity. The freer an economy is, the more this human diversity of knowledge will be manifested. By contrast, political power originates in top-down processes—governments, monopolies, regulators, and elite institutions—all attempting to quell human diversity and impose order. Thus power always seeks centralization.”

Gilder, George. Knowledge and Power: The Information Theory of Capitalism and How it is Revolutionizing our World

The history of DoD acquisition suggests that Gilder's information-economics theory can also be applied to the government/industry partnership that determines our Military's materiel capabilities. Much of our Military's greatness can be attributed to past successful surprises like nuclear-powered submarines, F-117 stealth fighters, and precision-guided munitions, to name just a few. The same low-noise acquisition channel and the resulting entrepreneurial surprise can be traced back through DoD's acquisition history.

USS George Washington SSBN 598

Following World War II, early post-war acquisition successes revolutionized our Nation's military capability. For example, Admiral Hyman Rickover led the development and demonstration of a pressurized-water nuclear reactor capable of powering a submarine, and went on to lead the development of the world's first nuclear submarine, the USS Nautilus. All of that took place between 1947 and 1954, when the USS Nautilus, powered by that reactor plant, was launched just 2 1/2 years after ship contract award.

Or consider Admiral "Red" Raborn, who led the development of the associated Polaris Submarine-Launched Ballistic Missile (SLBM). That development began in 1956, and four years later, in 1960, the world's first SLBM was fired from the world's first ballistic missile submarine, USS George Washington. That submarine was put on contract in December 1957 and launched in June 1959, just 18 months later!

USS Ticonderoga CG47

Lastly, consider Admiral Wayne Meyer's entrepreneurial achievements. In 1970 he was assigned as the Program Manager of the newly formed AEGIS shipboard missile defense system. Early confidence-building missile intercept testing enabled him to grow the program into the ship's combat weapon system and then lead the development of a new class of AEGIS guided-missile ships. Prior to AEGIS, weapon systems were adapted to the ships they were installed aboard. Wayne Meyer turned that around and engineered the ship to best accommodate the combat weapon system. The lead ship of the class, USS Ticonderoga, was put under contract in 1980 and launched in 1983. Wayne Meyer led the AEGIS program office from 1970 until 1983. During that time the program never missed a milestone, overran a budget, or failed to achieve performance requirements. I was honored to report to him directly for a portion of my Naval career, and still value his leadership lessons. Equal examples can be found for the Air Force and Army in those early post-WWII years.

Unfortunately, those 5-7-year development-to-production timelines have steadily crept upward until today we are experiencing 20+ year development-to-production timelines for programs such as the F-35 Joint Strike Fighter. The most common arguments for this radical rise in acquisition timelines are the complexity of new weapon systems, the need for continuous competition, and the importance of husbanding taxpayer resources. But can one really argue that the F-35 is more complex than the first nuclear-powered ballistic missile submarine, developed at a time when nuclear power was still a science experiment and a missile had never been launched from a submerged submarine?

Having served as a mid-level Pentagon bureaucrat during the 1990s, I believe the past forty years have demonstrated DoD's unintended transformation from low-noise, successful acquisition processes to high-noise, unsuccessful ones. That isn't to say those early program pioneers didn't have to navigate the government bureaucracy of their day; they did. But they held themselves accountable and provided leadership to their offices and associated contractors in a way that inspired Navy seniors and Congress to entrust them with the funding and authority to complete their programs on time. In the end, through a low-noise acquisition channel, they delivered entrepreneurial surprises that remain at the forefront of our National Security.

Contrast that with today's easily observable ineffective Defense acquisition spending, massive acquisition cost and schedule growth, and less effective military materiel! Perhaps even more significant is that today's programs must satisfy hundreds to thousands of oversight personnel chartered to second-guess all major program decisions. In addition, an entirely new class of defense industry has emerged as government support contractors. These companies exist just to service the high-noise program documentation and decision processes specified in the Department's "Operation of the Defense Acquisition System" instruction (DoD Instruction 5000.02), and they significantly increase the cost of every program.

An easy-to-measure test of this claim is the time-honored success of DARPA projects. Using general requirements for improved warfare capability, DARPA has consistently delivered revolutionary National Security systems, including precision-guided weapons, stealth platforms, energy weapons, artificial intelligence, and the internet. While most will argue that DARPA provides only the early Science & Technology proof of concept for these systems, many projects have developed into battlefield-proven capabilities before adoption into a Military Service for fielding and sustainment. The reason for these successes is the low-noise DARPA funding and contracting channel that enables the rapid entrepreneurial surprise of new military capabilities!

Part 2 of this post will provide a detailed discussion of how today's high-noise bureaucratic acquisition system can be transformed into a lower-noise channel that enables our Country's Military to retain the dominance required to address today's near-peer competition for the future of National Security!

*Shannon’s invention of the term “information entropy” has been excluded from this post. The mathematics of information theory isn’t necessary to understand Gilder’s application of “information theory surprise” to the economy and DoD acquisition.

Posted in DoD Acquisition, Global Perspectives, Leadership, Technology Evolution | 21 Comments

Blockchain – The Coming Global Paradigm Shift!

George Gilder, in his latest book, Life After Google: The Fall of Big Data and the Rise of the Blockchain Economy, effectively argues that our current big-data IT world (he uses Google as the metaphor and leading provider) is not here to stay because of inherent flaws, the most significant being cyber-insecurity and the associated loss of human privacy. Gilder is an American writer, investor, author of twenty books, and techno-utopian advocate. Despite all the current hype, some may still ask: what is blockchain?

“A blockchain, originally block chain, is a growing list of records, called blocks, which are linked using cryptography. Each block contains a cryptographic hash of the previous block, a timestamp, and transaction data (generally represented as a merkle tree root hash).

By design, a blockchain is resistant to modification of the data. It is ‘an open, distributed ledger that can record transactions between two parties efficiently and in a verifiable and permanent way’.”

Wikipedia

Most people associate blockchain with the cryptocurrency Bitcoin, a new form of electronic cash. Bitcoin is a decentralized digital currency designed to bypass the central banks and Nation-State fiat monies undergirding today's global financial transactions. Unlike fiat currencies, Bitcoin transactions are verified by network nodes through cryptography and recorded in a public distributed ledger called a blockchain. Bitcoin was invented by an unknown person or group of people using the name Satoshi Nakamoto and released as open-source software in 2009.

Although several cryptocurrencies have been launched since Bitcoin first stood up, the real interest (and hype) surrounding blockchain is the promise of truly distributed, immutable ledgers: transparent records for any type of business, maintained without continuous third-party auditing and without the possibility of ledger fraud and related crime.
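To make the hash-linking idea concrete, here is a minimal sketch in Python (illustrative only, not any production blockchain): each block stores the hash of its predecessor, so altering any historical record invalidates every later block and is immediately detectable.

```python
import hashlib
import json
import time

def block_hash(block):
    """Hash the block's contents (excluding its own hash field)."""
    payload = {k: v for k, v in block.items() if k != "hash"}
    return hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()

def add_block(chain, transactions):
    """Append a new block that points at the previous block's hash."""
    block = {
        "index": len(chain),
        "timestamp": time.time(),
        "transactions": transactions,
        "prev_hash": chain[-1]["hash"] if chain else "0" * 64,
    }
    block["hash"] = block_hash(block)
    chain.append(block)
    return block

def verify(chain):
    """Recompute every hash and check each link; any edit breaks the chain."""
    for i, block in enumerate(chain):
        if block["hash"] != block_hash(block):
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

ledger = []
add_block(ledger, [{"from": "alice", "to": "bob", "amount": 5}])
add_block(ledger, [{"from": "bob", "to": "carol", "amount": 2}])
print(verify(ledger))                                  # True
ledger[0]["transactions"][0]["amount"] = 500
print(verify(ledger))                                  # False -- tampering detected
```

Real blockchains add distributed consensus (proof-of-work or proof-of-stake) and Merkle trees over the transactions, but the tamper evidence shown here is the core property behind the "immutable ledger" claim.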

Gilder’s concern about big data and internet security leads him to predict that blockchain and immutable ledgers will become the technical foundation of a future he calls the “cryptocosm.”

“Google’s security foibles, its “aggregate and advertise” model, its avoidance of price signals, its vertical silos of customer data, and its visions of machine mind are unlikely to survive the root-and-branch revolution of distributed peer-to-peer technology, which I call the ‘cryptocosm’.”

Gilder, George. Life After Google: The Fall of Big Data and the Rise of the Blockchain Economy (Kindle Locations 723-727).

While the free and low-cost IT services we currently use on the internet have provided unprecedented access to information for anyone with a computer or mobile device, they haven't really been free when one considers the massive loss of personal privacy and intellectual property and the growing effectiveness of cybercrime. In addition, as Gilder points out, "free" ultimately robs us of time, our ultimate scarce resource.

Even more interesting to me is his comparison with Marxism and the failed twentieth-century socialist governments that perpetrated the slaughter of over 100 million of their citizens in the name of rational, centralized government planning. While Gilder acknowledges the brilliance of Silicon Valley leaders such as Google founders Sergey Brin and Larry Page, he also argues that modern centralized big-data systems are the neo-Marxism of our day.

“Marx was typical of intellectuals in imagining that his own epoch was the final stage of human history. William F. Buckley used to call it an immanentized eschaton, a belief the “last things” were taking place in one’s own time. The neo-Marxism of today’s Silicon Valley titans repeats the error of the old Marxists in its belief that today’s technology—not steam and electricity, but silicon microchips, artificial intelligence, machine learning, cloud computing, algorithmic biology, and robotics—is the definitive human achievement. The algorithmic eschaton renders obsolete not only human labor but the human mind as well.”

Gilder, George. Life After Google, Kindle Locations 224-229

Alternatively, Gilder argues, based upon Shannon's information theory, that information is surprise, and that inventive human minds will always provide that surprise in an economic system that rewards it. He also doesn't fear an AI takeover of the human race by machines, as predicted by Ray Kurzweil's singularity or feared by Elon Musk, because machines are by definition deterministic and will never achieve the non-determinism of human consciousness that gives us the ability to invent surprise. Inventing surprise, or new knowledge, is Gilder's definition of economic growth.

To bring this back to Naval and DoD practicalities, the near-term value of this discussion is that blockchain, as an enabler of immutable distributed ledgers, offers a fundamental transformation of today's insecure DoD IT architecture. Working with this emerging technology provides DoD an opportunity to start building the security stepping stones needed to transform today's cyber-insecurity into tomorrow's National Security cryptocosm. To that end, consider the value of secure distributed ledgers for challenges such as encryption key distribution, supply chain logistics, military medical records management, weapon and nuclear material tracking, human resource skills and assignment tracking, and perhaps even situational awareness track management.

China's First Aircraft Carrier, the Liaoning (Type 001)

One thing is certain! Without a fundamental improvement in IT cybersecurity across the DoD, the future of warfare is up for grabs with our near-peer competitors, and the solution will be more about proven results than process-driven Information Assurance activities like the Risk Management Framework.

Posted in Cybersecurity, DoD IT Acquisition, Global Perspectives, Leadership, Technology Evolution | 11 Comments

Navy Acquisition Can Easily Be Fixed

Driven by peer naval competition from Russia and China, the U.S. Navy has embarked on a transformational Fleet Design vision to enhance U.S. naval warfare. Along with new weapon and sensor technologies, this vision is critically dependent upon out-pacing peer competitors with artificial intelligence and cyberspace control.

Like previous transformation visions, this one has a high probability of failing under the weight of costly, drawn-out acquisition timelines driven by the acquisition "build culture" of the Department of Defense (DoD) and the Navy. The rapid pace of commercial IT innovation now demands a "buy before build" culture, as Congress has mandated in law. Because commercial IT is globally available for peer competitors to adopt, the hard reality is that the U.S. can no longer remain ahead while fielding and sustaining information technology that is three, four, or even five generations old. Given these real-world challenges, it is imperative that the Navy rapidly transform the acquisition system to support Fleet Design. Four easy-to-adopt acquisition changes would help deliver the Fleet Design vision in time to make a difference.

1. Speed to Capability – In the world of commercial IT procurement and development projects, acquisition timelines average 70%-80% less than similar DoD program timelines. Tax-dollar stewardship is the usual justification for these long timelines, but in reality long acquisition timelines needlessly increase taxpayer costs and delay needed warfighter capability.

Long acquisition programs also ensure that nobody in the acquisition system is personally responsible for failure, because a Program Manager's (PM) 3-4-year leadership tour covers only a portion of the full requirement-to-IOC (Initial Operational Capability) timeline. To turn this around, "speed to capability" should be the primary program metric, with cost, performance, and schedule as secondary metrics. Because time is easy to measure, every person in the acquisition system can be graded on speed to capability. Such a system would not diminish the importance and responsibility of a Title 10 PM's acquisition authority, but it would enable the rest of the "acquisition team" (oversight, contracts, legal, budgeting) to be responsible for a measured element of a program's success or failure.

In the fast-moving world of IT-intensive systems, DoD's "build culture" is exactly wrong for achieving speed to capability. Even the *Federal Acquisition Regulation (FAR), the holy book of acquisition contracting, states that all agencies are obligated to fit requirements into available Commercial Off-the-Shelf (COTS) products before contracting to build a capability. Congress has also called for greater use of COTS products before resorting to building, and backed that up by making non-FAR Other Transaction Authority (OTA) available to all Military PMs for prototyping COTS and non-developmental capabilities.

2. Passive Oversight – Currently, acquisition oversight is exercised through PMs, and their staffs, satisfying hundreds of staff oversight checkers responsible for various elements of the acquisition process, including requirements, budgets, contracts, and of course legal. None of these checkers (most of whom have never worked in a program office) can say yes to a program decision, but all of them can say no by challenging, kibitzing, and forcing re-briefs. This translates into hundreds of viewgraph presentations, papers, and scheduled briefings, given to everyone in the chop-chain above the PM, for every decision.

The Milestone Decision Authority (MDA) approval for each major program milestone is the culmination of these hundreds of briefs, put together by program staff support contractors and reviewed by chop-chain support contractors. It is a lucrative business for support contractors and, through no fault of theirs, doubles or triples the cost and time of every program. By taking advantage of publishable program status, program decision briefings and most of the acquisition documentation could be replaced with "Passive Oversight" of program planning and tracking information.

Every program already uses some form of program management and tracking software, such as Microsoft Project. Each of these tracking systems can publish project status to a website interface with easy-to-read graphic dashboards that show green, yellow, or red project status (as well as drill-down details). This project status is created by the PM's staff and/or the development contractor's entries in the planning and tracking application.

Passive oversight would be accomplished by MDA oversight reviewers continuously monitoring each program's published status to track program health. Programs would continuously publish status for all authorized reviewers to access passively through web-accessible dashboards. If a project is showing green or yellow, no oversight action would be taken or allowed. If a project is showing red, an appropriate level of oversight action would start, beginning with a phone meeting to better understand the situation.
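A minimal sketch of what that passive-oversight logic could look like, assuming a program publishes cost and schedule variance to a dashboard (the thresholds, field names, and programs below are illustrative, not drawn from any DoD system):

```python
from dataclasses import dataclass

@dataclass
class ProgramStatus:
    name: str
    cost_variance_pct: float      # actual vs. planned cost
    schedule_variance_pct: float  # actual vs. planned schedule

def rag_status(p: ProgramStatus, yellow: float = 5.0, red: float = 10.0) -> str:
    """Roll cost and schedule variance up into a green/yellow/red flag."""
    worst = max(abs(p.cost_variance_pct), abs(p.schedule_variance_pct))
    if worst >= red:
        return "red"
    if worst >= yellow:
        return "yellow"
    return "green"

def oversight_action(p: ProgramStatus) -> str:
    """Passive oversight: no action unless a program turns red."""
    status = rag_status(p)
    if status == "red":
        return f"{p.name}: schedule a review call with the PM"
    return f"{p.name}: monitor only ({status})"

# Illustrative dashboard feed
programs = [
    ProgramStatus("Program A", 2.0, 3.5),
    ProgramStatus("Program B", 6.0, 4.0),
    ProgramStatus("Program C", 12.0, 1.0),
]
for p in programs:
    print(oversight_action(p))
```

The point is not the specific thresholds but that the status roll-up is computed from data the program office already enters, so reviewers can watch it without demanding briefings.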

For green and yellow status programs, "letter" MDA approvals could be processed to eliminate the lengthy MDA decision cycle. ACAT MDA authority would continue as it currently exists and could be delegated downward based on successful passive oversight results. The freed-up oversight staff time, and fewer meetings, could be used to reduce the size of oversight staffs and support contractors at all levels, thereby saving resources. PEOs and PMs would be fully accountable, and punished accordingly, for any purposely misrepresented program status.

3. OTA Prototype to Production – The most useful Congressional change to contracting law over my career is the recent extension of OTA prototyping authority to all DoD programs. Not only does the proper use of OTA prototyping help a program quickly discover COTS and non-traditional defense contractor products, it also helps quickly eliminate ineffective solutions, thereby using less program time and fewer resources.

Commercial companies use proof-of-concept (POC) prototypes to validate COTS products, new system ideas, and technology maturity before full adoption into a product line. Using a series of prototypes to validate POC capabilities enables the technical crawl, walk, run needed to support speed to capability.

Low-cost OTA prototypes can be used during the materiel solution analysis phase to help inform Milestone A, followed by more extensive prototypes that inform Milestone B. Following Milestone B, OTAs can produce early limited-production proof-of-concept prototypes to inform Milestone C and full-rate FAR contracting. "OTA Prototype to Production" helps build strong technical understanding early in the life of a program or modernization activity, while also helping to inform the program planning and reporting that supports passive oversight.

DoD OTA consortia, now processing billions of dollars in OTA transactions each year, are helping to unburden FAR contract staffs while shortening contracting time for early program phases. Time is money, and long, drawn-out, and frequently protested contract activity is a large expense in the lifecycle of a program.

4. Strong Technical Leadership – Excessive acquisition oversight and bureaucratic processes have driven many strong technical people away from a Navy acquisition career path. Couple that with the DoD culture that believes any Defense Acquisition University (DAU) qualified person can manage any program, and the Navy has a perfect storm in the face of peer-competitor National Security challenges. At the same time, systems have become so complex and integrated that writing a FAR build-specification contract and delivering a successful cost, schedule, and performance program is highly risky.

To see that acquisition is on the wrong path, one need only ask how the Navy invented and delivered highly technical nuclear submarines, submarine-launched ballistic missiles, AEGIS ships, and Tomahawk missiles, all within cost, performance, and short timeline schedules; something that isn't repeated today despite the extra DAU training, information productivity tools, automated design, and automated tracking systems.

Since those early Navy acquisition successes, the workforce has migrated away from deep technical understanding of the systems being built. At the same time, most PMs use their technical directors as troubleshooters and technical commenters rather than giving them direct program line authority. Because deep technical understanding is not a requirement for assignment as a PM or Assistant PM, programs are often managed with little understanding of the technical issues that sidetrack effective program delivery.

Independent of the program requirements, budgeting, and contracting activities, it is almost always the technical challenges and solutions that determine acquisition program success or failure. By putting a "strong technical manager" directly underneath the PM to manage all things technical, the PM is freed to focus on the constant challenges of budgeting, contracting, and oversight, while still making final decisions on technical issues.

A review of the history of the AEGIS weapon and shipbuilding program, or of the Strategic Systems Programs, shows that both organizations placed a strong technical manager directly in line under the PM. These were the people often promoted to the PM position following that senior technical assignment, thereby ensuring that the PM also had deep technical knowledge of the program and had usually been with a single program for a large portion of their career.

By implementing "Strong Technical Leadership" in conjunction with the recommended "Speed to Capability" changes, the workforce will migrate toward stronger technical understanding because program technical experience and expertise will be rewarded. This also facilitates trading support-contractor briefings and acquisition documents for engineering planning, tracking, and automated documentation tools, while incentivizing a workforce that leverages these automation aids.

Improving DoD acquisition processes has been a constant theme for the past twenty years. To date, all of the well-intentioned policy changes have not solved the problem of long, expensive acquisition programs that fail to deliver as planned. If anything, the problem has gotten worse. These four ideas:

  1. Speed to Capability
  2. Passive Oversight
  3. OTA Prototype to Production
  4. Strong Technical Leadership

… require no significant DoD 5000 acquisition policy changes and no Congressional law changes. By implementing them, the Navy would begin to adopt a buy-before-build culture and remain closer to the leading edge of commercial IT capability. Adopting these relatively easy changes would give the Fleet Design vision a fighting chance.


* FAR Subpart 12.101 Policy.

“Agencies shall—

(a) Conduct market research to determine whether commercial items or non-developmental items are available that could meet the agency’s requirements;

(b) Acquire commercial items or non-developmental items when they are available to meet the needs of the agency; and

(c) Require prime contractors and subcontractors at all tiers to incorporate, to the maximum extent practicable, commercial items or non-developmental items as components of items supplied to the agency.”

Posted in Cybersecurity, DoD IT Acquisition, Global Perspectives, Leadership, Technology Evolution | 19 Comments

Time for Transformational Cybersecurity Part II

The last post, Time for Transformational Cybersecurity (Part I), discussed exciting inventions that turn cybersecurity upside down by preventing malware from freeloading CPU instructions in a properly configured software-defined data center (SDDC), thereby rendering on-premises cyber attacks null and void. It also discussed root-of-trust encrypted metavisor technology that protects applications and data operating in the cloud.

Now the cyber vulnerability challenge becomes adopting these technologies into the massive global IT infrastructure faster than cyber attacks can debilitate significant elements of our world. This post introduces additional cyber technologies that protect against intrusion between customer offices and the cloud, including secure cloud data and distributed ledger technologies that protect all transaction activities intersecting the cloud.

But first it is useful to consider who is responsible for information protection as commercial cloud offerings continue to grow. The IT world is in the middle of a paradigm shift from private data centers to combinations of private and commercial cloud services that promise to reduce or eliminate IT staffs in favor of IT services delivered to users anytime, anyplace, and worry-free. This model is similar to the way we enjoy worry-free electric power; at least most of the time.

While first-world countries have come to trust that commercial electricity service and the electric grid moving it are very reliable, many homes, most large businesses, and all of the world's data centers use backup generators to reduce or eliminate crippling losses of electric power. Just as the world cannot long survive without electricity, it now cannot survive without information flow. Consider, for example, that all of civilization's necessities are now controlled by information systems and processes that enable efficient food flow, water flow, money flow, energy flow, and information flow itself, where 95% of all telco information transits undersea fiber-optic cables.

Like electricity, information must be protected from mishandling, data theft, data alteration, and data-flow interruptions. In this respect, cloud providers do a good job of engineering security into their cloud systems and provide independent verification of compliance certifications such as HIPAA, to protect health records; ISO 27018, to protect personally identifiable information (PII); and the Cloud Security Alliance (CSA) registry, to help cloud customers assess how cloud service providers follow and comply with industry best practices, standards, and regulations.

Commercial cloud services have opened amazingly positive, low-barrier-to-entry opportunities for start-ups and for medium and large businesses alike, but they cannot fully solve the challenge of data liability, which must remain the customer's responsibility.**** That responsibility forces all customers to self-protect and back up all sensitive data, including PII, financial data, intellectual property, private applications, and company process information. To do so, cloud customers must be augmented by some form of extra-cloud protection, or use a hybrid private/commercial cloud architecture, much as we use backup generators for our critical electricity needs. We know this because even with cloud providers doing their best to protect customer data, as daily news reports attest, cyber attacks like ransomware continue to plague the world.

By protecting the telco connections from a customer to the cloud and then extending additional cybersecurity technologies further up the IT stack, commercial cloud-only or hybrid-cloud architectures will start to deliver high-reliability cyber protection to cloud users and the clouds they ride on. One company* now offers a Software Defined Perimeter (SDP) service that transparently authenticates each user device, then the user, and then the data center or cloud application the user is accessing. In this way the SDP creates a private network enclave, similar to the way the DoD divides users across different security enclaves to protect classified data. If desired, this SDP can be established with a dedicated network path all the way back to a telco provider.
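The SDP idea is easiest to see as a sequence of gates: no network path to the application is built unless the device, the user, and the requested application all check out. A rough sketch of that logic follows; the registries, names, and functions here are hypothetical illustrations, not the vendor's API.

```python
# Hypothetical, simplified SDP gatekeeper logic: authenticate the device,
# then the user, then authorize the specific application, before any
# network path to that application is created.

DEVICE_REGISTRY = {"laptop-1234": "attested"}             # known, attested devices
USER_DIRECTORY = {"jsmith": {"groups": {"logistics"}}}    # identity-provider stand-in
APP_POLICY = {"supply-db": {"allowed_groups": {"logistics"}}}

def authorize_connection(device_id: str, user: str, app: str) -> bool:
    # Gate 1: is this a known, attested device?
    if DEVICE_REGISTRY.get(device_id) != "attested":
        return False
    # Gate 2: is this an authenticated user? (real SDPs use MFA / SAML / certificates)
    identity = USER_DIRECTORY.get(user)
    if identity is None:
        return False
    # Gate 3: is this user allowed to reach this specific application?
    policy = APP_POLICY.get(app)
    if policy is None or not (identity["groups"] & policy["allowed_groups"]):
        return False
    # Only now would the SDP controller tell the gateway to open a
    # one-to-one encrypted tunnel between this device and this application.
    return True

print(authorize_connection("laptop-1234", "jsmith", "supply-db"))  # True
print(authorize_connection("laptop-9999", "jsmith", "supply-db"))  # False
```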

Another company** has invented a database technology capable of searching encrypted data without holding the encryption keys to that data. The implications of this semi-homomorphic encryption invention are massive. It enables the creation of a zero-trust data layer, thereby eliminating the data-layer cyber threat surface, where data spends 99+% of its time. This allows cybersecurity resources to be focused on the small remaining threat surface.
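The details of that patented technology aren't covered here, but a simple, well-known technique gives a feel for how a server can match queries against data it cannot read: the client stores keyed hashes ("blind indexes") of searchable fields alongside the ciphertext, and the server compares opaque tokens without ever holding a decryption key. A toy sketch, not the vendor's implementation:

```python
# Toy "blind index" illustration: the server stores only ciphertext plus an
# HMAC of the searchable field, so it can answer equality queries without
# being able to decrypt anything.
import hashlib
import hmac
import os

INDEX_KEY = os.urandom(32)   # held by the client, never by the server

def blind_index(value: str) -> str:
    return hmac.new(INDEX_KEY, value.encode(), hashlib.sha256).hexdigest()

# Server-side store: opaque ciphertext + blind index of the name field,
# both produced by the client before upload.
server_rows = [
    {"ciphertext": b"<encrypted record 1>", "name_idx": blind_index("alice")},
    {"ciphertext": b"<encrypted record 2>", "name_idx": blind_index("bob")},
]

def server_search(rows, query_idx):
    """The server matches opaque tokens; it never sees 'alice' or 'bob'."""
    return [r["ciphertext"] for r in rows if r["name_idx"] == query_idx]

# The client computes the token for its query and sends only the token.
matches = server_search(server_rows, blind_index("alice"))
print(len(matches))  # 1
```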

With this revolutionary data-layer technology, the same company** discovered it could leverage the secure data layer to create secure, immutable distributed ledgers that protect and verify all transaction processes. Secure distributed ledgers are the goal, and the hype, behind blockchain technology, first introduced by the inventors of the Bitcoin cryptocurrency. Immutable distributed ledgers are desirable because they are not vulnerable at a central data location, they cannot be altered without the chain of block participants agreeing to the change, and they are transparent to authorized audits without fear of illegal ledger modification. The importance of this company's technology is that it secures the distributed ledger transaction blocks in zero-trust layers and is massively scalable, unlike current blockchain ledgers.

The last company*** in this cyber-secure stack has created a secure distributed artificial intelligence (AI) technology that distributes smart agents anywhere within an information-processing ecosystem, such as supply chains, industrial processes, and AI-augmented human control processes like military command and control. These AI agents operate within a common software support platform and are assigned behaviors from a library of behaviors tailored to the automated task at hand.

By increasing the security at each layer of this IT infrastructure stack, as shown in this diagram, cybersecurity moves from a high-risk posture, hoping that ransomware and other malware will not disrupt operations, to a highly secure and reliable IT system, equivalent to military-grade cybersecurity.

As the world continues accelerating into the AI future of autonomous car and truck transportation, 3D printing that is revolutionizing manufacturing, and robotics promising Star Wars-like helpers, the big elephant in the room is cyber vulnerability. Without solving this challenge, not only will our National infrastructure continue to be at risk, but our software-driven government and business processes will be unable to compete adequately in this new global landscape!


*www.vidder.com  **www.craxel.com  ***www.cougaarsoftware.com

****AWS Customer Agreement  11. Limitations of Liability.

“WE AND OUR AFFILIATES AND LICENSORS WILL NOT BE LIABLE TO YOU FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, CONSEQUENTIAL OR EXEMPLARY DAMAGES (INCLUDING DAMAGES FOR LOSS OF PROFITS, REVENUES, CUSTOMERS, OPPORTUNITIES, GOODWILL, USE, OR DATA), EVEN IF A PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES. FURTHER…”
Posted in Cybersecurity, DoD IT Acquisition, Global Perspectives, Technology Evolution | 2 Comments

Time for Transformational Cybersecurity!

One of the hardest things to change in human society is a long-standing myth. The current well-accepted myth about information technology (IT) systems is that they cannot be defended against well-funded, determined hackers. Because such myths are rarely questioned, the cybersecurity workforce takes it as a given that even highly secured IT systems will be compromised at some point. The good news is that myth-busting cyber technologies are now available to transform cybersecurity from today's major government, financial, and consumer challenge into a future where all but deep-insider cyber intrusion is impossible!

Current virus protection, firewalls, and newer high-technology variants of these boundary defense technologies do provide a measure of cyber defense-in-depth. They do not, however, fully protect against Tier V-VI level threats, as shown in this graphic from the January 2013 Defense Science Board report, Resilient Military Systems and the Advanced Cyber Threat.

As described by the DSB report, hackers can be characterized as operating in the Tier I-II thousand-dollar club, the Tier III-IV million-dollar club, or the Tier V-VI billion-dollar club. These attacks range broadly from nuisance attacks to existential, society-changing attacks. Tier V-VI hackers are those funded by large countries, led by the United States, China, Russia, Iran, and North Korea. These full-spectrum attacks include equipment modifications through supply chain access, deep insider threats, advanced persistent threats (APT), and other techniques that break encryption or gain more than electronic-only access. Tier III-IV hackers, funded by organized crime and smaller countries, are able to achieve disruption through malware, including phishing attacks, denial-of-service attacks, ransomware, and related attack variants. Tier I-II hackers use easily obtainable dark-web tools, such as password crackers, to exploit known vulnerabilities generally blocked by virus protection, virtual private networks, and related tools.

The exponential growth of cyber attacks, as evidenced by newspaper headlines describing massive losses of our personal information, including credit information and passwords, is now a Presidential-level challenge and has elevated cyber to a U.S. National Security warfare area. Not often discussed or considered is the fact that every cyber malware attack must borrow Central Processing Unit (CPU) instructions from the attack target system in order for the attacking software to operate! In the physical-world equivalent, such as a bank robbery, criminals must borrow access to city streets, bank buildings, and bank vaults to conduct a successful robbery. Fortunately, in cyberspace, new synergistic technologies are now available to prevent malware from borrowing CPU instructions, thereby significantly enhancing cyber defense-in-depth. Unfortunately, most organizations are reluctant to purchase this enhanced cybersecurity because they are confused by all of the cyber-tool hype and fall back on the mythology that persistent cyber intruders will always win, so what they have is good enough.

Medieval walled castles can be thought of as a physical representation of today's cybersecurity situation. Although high-walled castles provided good city defenses for over 900 years, they quickly became obsolete as the synergistic inventions of gunpowder and cannons spread in the 14th century. Until then, castles were effective at keeping out most small intruder gangs, but they could not prevent a persistent siege by a large army that could eventually cross over, under, or through castle moats, walls, and gates to breach walled cities. Likewise, today's IT system cyber boundary defenses slow down cyber hackers but do not completely stop persistent, well-funded hackers working over long periods of time. Just as castle moats, gates, and walls were no match for the gunpowder and cannons of the 14th century, current cyber boundary defenses are no match for today's advanced persistent cyber threats. And just as walled cities gave way to modern active defensive weapons, cyber boundary defenses must now give way to more effective cyber technologies.

Like the synergistic technologies of gunpowder and cannons, the key to hack-proof cybersecurity is new patented technologies that synergistically integrate robust encryption, high-performance computing, and virtualization. Using these technologies can eliminate all but deep-insider threats, thereby eliminating all but the most persistent Tier V-VI cyber attacks. The castle equivalent of this cybersecurity technology would be hiding all castles behind invisibility cloaks to prevent attackers from moving to, into, within, or from a walled city.

So, what's so different about combining encryption, high-performance computing, and virtualization to eliminate cybersecurity threats? Encryption has been used for thousands of years as a method to hide information, plans, or other secrets. As computer performance has improved, so have the strength of encryption techniques and the competing cryptanalysis techniques to break encryption. Strong encryption, however, remains one of the most effective ways to prevent hackers from obtaining useful information, be it in transit or stored in databases and backup storage media. Now, thanks to the Trusted Computing Group consortium, Trusted Platform Module (TPM) chips can be embedded on computer server boards to provide NSA-approved strong encryption on all server and desktop CPU motherboards.

The problem is that all but the most modern data centers contain extensive cyber threat opportunities, because CPUs enable the functions of every major data center subsystem, including servers, storage systems, network devices, and supported desktops. Each of these subsystem CPUs gives hackers a threat surface from which to borrow CPU instructions for their malware. In addition, even though TPM chips providing strong hardware-based encryption have been available for a decade, these optional chips have been largely ignored by IT manufacturers because the "everything can be hacked" myth argues against the small extra system cost and TPM setup administration. What is changing this situation is the rapid growth of Software Defined Data Center (SDDC) technologies that enable fully virtualized modern data centers. A fully virtualized SDDC forces all application, server, storage, network, and desktop functions to operate under a single set of server CPUs, thereby eliminating all independent storage system CPUs, network system CPUs, server CPUs, and fat-client desktop CPUs.

By thinking differently about how these synergistic IT technologies can be used to secure IT systems, several small startup companies are moving IT systems from walls and moats to modern active cyber defenses. One startup company* has patented Hyper-Converged Infrastructure (HCI) server technologies that synergistically integrate high-performance computing, TPM hardware encryption, and SDDC virtualization to prevent all externally introduced malware from obtaining CPU instructions, thereby exposing any malware and preventing it from operating. After all startup software is verified as valid and free of malware (attested), any malware introduced into the data center through phishing attacks, denial-of-service attacks, or any other hacker attack, including zero-day attacks, is immediately exposed and recognized as non-authorized software the moment it requests an unattested CPU instruction. That instruction is then automatically moved into a shadow network for observation and analysis while the SDDC continues functioning normally.
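The underlying idea is conceptually similar to allowlist-based attestation: only code whose measurement (hash) matches a known-good value recorded on a clean system is permitted to execute, so injected malware is flagged the moment it asks for CPU time. A greatly simplified sketch of that check follows; it is illustrative only, not the vendor's implementation, and real systems perform this in hardware and hypervisor layers (e.g., measured boot), not in application code.

```python
# Greatly simplified attestation check: compare each binary's hash against a
# set of measurements recorded when the system was attested as clean.
import hashlib
from pathlib import Path

def measure(path: Path) -> str:
    """SHA-256 measurement of a binary's contents."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def build_attested_set(binaries) -> set:
    """Run once, on a system verified to be free of malware."""
    return {measure(p) for p in binaries}

def allow_execution(path: Path, attested: set) -> bool:
    """Deny (and quarantine for analysis) anything not previously attested."""
    if measure(path) in attested:
        return True
    print(f"unattested code blocked: {path}")   # would be diverted to a shadow network
    return False

# Example usage (paths are illustrative):
# attested = build_attested_set(Path("/opt/app/bin").glob("*"))
# allow_execution(Path("/opt/app/bin/server"), attested)
```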

But what happens when an SDDC is inadvertently or maliciously attested with embedded malware, or when the software operates only within a commercial cloud service infrastructure? Another startup company** has addressed this problem by creating an encrypted metavisor shield surrounding the guest operating system and all application VMs or containers. The metavisor, using encrypted communications both ways, is transparent to the guest cloud system because it presents itself to the guest OS as the cloud hypervisor, and to the cloud hypervisor as the guest OS, thereby enabling process-integrity checks that are abstracted into the independent metavisor layer. The metavisor monitors system instruction and memory calls for abnormal activity and alerts system administrators if the system is compromised and needs to be re-attested.

When these technologies are combined, even greater cybersecurity is available by creating a hybrid on-prem/cloud deployment that securely manages all root encryption keys on-prem and securely extends those root keys to the in-cloud guest operations. This is accomplished by augmenting the on-prem HCI SDDC with the encrypted metavisor layer to ensure that any SDDC insider attack (inadvertent or malicious) that introduces malware into the attested SDDC is again captured and observed before it can operate.

Top white-hat hackers have tried to hack these technologies and agree that, when properly set up, such an SDDC cannot be hacked without an inadvertent or malicious insider attack that installs and attests malware into the system.

This integrated combination of new technologies fully supports the secure deployment of multi-cloud and multi-platform IT environments by further abstracting VMs or containers away from the physical hardware infrastructure on any release of Amazon Web Services, Microsoft Azure Cloud Services, Google Cloud Platform, or other commercial cloud providers.

Given the increased sophistication and prevalence of today’s cyber attacks, even the best perimeter defenses can’t stop hackers from gaining access to corporate or government data centers. When that happens, these new synergistic technologies prevent attackers from borrowing CPU instructions, and they extend that protection into commercial cloud services by abstracting all cloud guest activity across the encrypted metavisor to prevent malware from compromising the system or exfiltrating data.

 

This post has covered only the bottom layers of the IT infrastructure. The next post will extend this discussion by introducing new technologies that protect encrypted data and transaction ledgers riding on top of the SDDC and cloud infrastructure layers.

* RuggedCloud.com; **BracketComputing.com

Posted in Cybersecurity, DoD IT Acquisition, Global Perspectives, Technology Evolution | Tagged , , , | 4 Comments

My Mistake — Congress has Already Enabled DoD “Speed to Capability”

I am embarrassed to admit that, until now, I had fully believed the myth that Congressional laws and Department of Defense (DoD) policies require 10-20 years to develop and field information-intensive acquisition programs. Because of those beliefs, I have continuously ranted about the Federal law and DoD policy changes needed to enable Information Technology (IT) intensive programs to deliver “speed-to-capability” using leading-edge IT. I am passionate about this need because, in my opinion, our Nation’s warfighters will not prevail in future conflicts without leveraging the leading-edge IT products and services available to every other country through the $4 trillion global IT market.

Fortunately, Congress is leading the charge! Starting in 1993, Congressional statutes first authorized DoD to use Other Transaction Agreement (OTA) contracting to rapidly build Science & Technology (S&T) prototypes that take better advantage of commercial and non-developmental products and services. The purpose of these OTA prototype activities is to allow Non-Traditional Defense Contractors (NTDCs) to sell to DoD as easily as through normal industry-to-industry contracts. In turn, the Department can more easily discover commercial and non-developmental products and services from companies that are not always interested in investing time and resources in certified DoD cost accounting systems or in dealing with long and expensive Federal Acquisition Regulation (FAR) contracting processes.

For these reasons, following my June 12th post, I made it a point to learn more about the OTA statutes found in Title 10 United States Code Sections 2371, 2371b, and 2373. This new interest was driven by repeated Congressional budget language berating DoD acquisition leadership for not making better use of OT Agreements, FAR Part 12 commercial item procurement, and FAR Part 13 simplified acquisition. To understand this Congressional pressure, one need only look at the findings of the House Armed Services Committee Panel on Defense Acquisition Reform, Findings and Recommendations, March 23, 2010 (page 15), which states:

  • “Only 16% of all IT [intensive] projects complete on time and on budget.
  • 31% are cancelled before completion.
  • The remaining 53% are late and over budget, with the typical cost growth exceeding the original budget by more than 89%.
  • Of the IT projects that are completed, the final product contains only 61% of the originally specified features.”

NASA was the first agency to be granted OTA authority, in 1952. OTA authority was not extended to DoD until DARPA was allowed to use it for prototypes in 1993. Since then, Congress has updated these statutes every few years and has now authorized the use of OTAs by all Military Service projects and programs in support of rapid prototyping and, as of FY2016, early production systems. Most recently, H.R. 2810, the National Defense Authorization Act for Fiscal Year 2018, again modified the OTA statutes by removing dollar approval thresholds and allowing OTA prototypes to transition to sole-source production if the prototypes were developed with competition, such as through an OTA consortium. Further, this Congressional language recommends the use of OT Agreements as a preferred choice:

“…the committee remains frustrated by an ongoing lack of awareness and education regarding other transactions, particularly among senior leaders, contracting professionals, and lawyers. This lack of knowledge leads to an overly narrow interpretation of when OTAs may be used, narrow delegations of authority to make use of OTAs, a belief that OTAs are options of last resort for when Federal Acquisition Regulation (FAR) based alternatives have been exhausted, and restrictive, risk averse interpretations of how OTAs may be used. These behaviors force innovative projects and programs into unnecessarily restrictive contracting methods, needlessly adding bureaucracy, cost, and time.”

Taking advantage of the OTA statutes, DoD has, beginning in 1998, established ten OTA consortia, each centered on a different functional product area.

These consortium agreements are managed through non-profit management companies that support parent DoD organizations. Program or Project Managers submit problem statements to these consortia to identify possible product and service offerings for their programs. As mentioned in my last post, the Consortium for Command, Control, Communications and Computer Technology (C5T) has over 500 member companies specializing in commercial and non-developmental Command, Control, Intelligence, Surveillance, and Reconnaissance (C2ISR) and Cyber products and services.

So what is an OTA?

OTA – IS NOT:

  • A standard procurement contract, grant or cooperative agreement;
  • Protestable.

OTA – IS:

  • Exempt from many provisions of the FAR;
  • A legally binding instrument;
  • Similar to a commercial-sector contract.

Why would a Program Manager want to use an OTA? The answer is to discover new technologies and quickly validate these technologies against operational mission requirements and system needs. OT Agreements help do this because:

  • OT Agreements can be obligated and awarded in as little as 90 days;
  • Innovators from Small Business and Non-Traditional Defense Contractors are more likely to participate in OTAs than traditional FAR contract processes;
  • Flexible Intellectual Property provisions improve both government and contractor opportunities;
  • Public/Private cooperative relationships are promoted;
  • Government program managers retain total project management control; and,
  • Project payments can be made based on measurable milestone achievement.

Unfortunately, the cultural challenge is that DoD, out of necessity, has always built its own military platforms, weapons, and supporting C2ISR capabilities. This need forced the Department to lead the development of most of today’s communications, networking, and computing technologies, including the famous DARPA internet work. In the process, the DoD 5000 acquisition process, plus its supporting budgeting and requirements processes, was formed around “make” or “build” activities with little expectation of “buying” prebuilt solutions. Those “make” processes are still valid for the majority of DoD’s platforms, sensors, and weapons, but less so for information-dominant systems, where commercial information technology (IT) is growing exponentially while serving the $4 trillion global IT market.

By better leveraging this $4T commercial IT market and using the procurement tools Congress has authorized, DoD can turn most IT-intensive program requirements into predominantly “buy” programs while remaining near the leading edge of IT, rather than trailing 5-10 IT generations behind. Many of the reasons behind the unacceptable Congressional IT program findings cited above are that those programs were slowed down by culturally comfortable “make” requirements, budgeting, and acquisition processes, while exponential IT market growth created shifting sands of IT hardware and software, leaving them with end-of-life IT components even before the new systems were deployed.

By actually listening to Congress and taking advantage of these current acquisition authorities, DoD could align with the 12-18 month IT upgrade cycles needed to remain on the leading edge of IT as the world enters the era of artificial intelligence and cognitive computing. This can be done by dropping the time-wasting, top-down “make” requirements and substituting early OTA prototypes to discover the latest commercial or non-developmental opportunities capable of quickly satisfying mission requirements and informing system “buy” requirements with greater empirical proof-of-concept knowledge. By taking greater advantage of OTA consortium opportunities, acquisition programs can move more quickly through Milestones A and B with demonstrated operational solutions capable of informing Milestone C full-rate production decisions. If the resulting solution includes a predominance of commercial items, FAR Part 12 contracts can help speed C2ISR production into Full Operational Capability (FOC).

Fortunately, through OT Agreements, Congress has fully enabled forward-leaning acquisition executives and Program Managers to rapidly deliver IT-intensive warfighting capabilities built on leading-edge IT hardware and software. These C2ISR capabilities will be critical enablers of future DoD capability such as the U.S. Navy’s New Fleet Design vision. With the commercial IT market enabling other countries to challenge our Nation’s maritime dominance, the time for change is now!

The father of Public Administration, Max Weber, theorized that organizational bureaucracy was the only way large organizations could be efficient; he would, however, occasionally permit himself to hope that “some charismatic leader might arise to deliver mankind from the curse of its own creation.” We know from the U.S. Navy’s acquisition history that strong, charismatic leaders are largely responsible for the strong Navy our Nation’s defenses rely upon today, e.g., Admirals Hyman Rickover (nuclear submarines), Red Raborn (submarine-launched ballistic missiles), and Wayne Meyer (AEGIS missile defense ships).

In these pressing times, it is more imperative than ever for today’s charismatic Naval leaders to deliver a New Fleet Design that helps sustain our Nation’s global leadership.

Posted in Uncategorized | 15 Comments