An IT Acquisition Joint Game Changer

Enterprise networking and computing, coupled with distributed-services technologies, are transforming legacy mission and business systems

Introduction

This post describes an acquisition concept that I am currently proposing for the DISA-led Network Enabled Command Capability (NECC) Program. With this concept, the DoD could transform current stovepipe system development and acquisition practices into an evolving enterprise capability through an innovative shared-development strategy that could be the start of a true IT acquisition “game changer.”

Enterprise networking and computing, coupled with distributed-services technologies, are transforming legacy mission and business systems from stand-alone client-server systems to virtualized applications riding on enterprise networking and computing infrastructure.  Where bandwidth is adequate and reliable, future applications will be provided as software services, freeing operational forces from the complexity of local system installations and life-cycle maintenance.  In today’s complex defense environment, it is time to adopt a new concept that will rapidly deliver future capability and reduced sustainment costs to our Joint warfighters.

The U.S. Interstate Highway program is arguably one of our Nation’s most successful shared-development efforts.  By leveraging Federal funding with State funding, our Nation has enjoyed a standardized highway infrastructure that is a critical element of our Country’s economic success.  Using a variant of this concept to enable Joint enterprise capability development could change the game by creating a shared-development process to deliver and sustain timely, modern, and Jointly standardized capability in support of our operational forces.

Concept Details

Through a shared-development, centralized-control concept, it is possible to deliver leading-edge capability within operational cycles that out-pace the changes our warfighters face on the battlefield.  At the same time, this concept can ensure rapid adoption of the virtualized, distributed-services technologies needed to pace warfare capability for future forces.  Such a concept could be implemented as follows:

  1. Eliminate all future IT new-start programs and build enterprise capability as upgrades to existing programs-of-record.  New-start programs are forced into stovepiped systems by the current acquisition process required to fund and approve them.
  2. Establish key Non-development Program Managers (NDPMs) over like groups of existing community-of-interest (COI) programs-of-record (POR). Task each NDPM with responsibility for programmatic and engineering leadership of that community of interest.
  3. Create a requirements process to prioritize capability improvements to the operational force infrastructure and applications.  Allow pilot programs at operational commands to substitute as the equivalent of formal requirements. Move proven pilot technology and new requirements into evolving programs-of-record through this shared-development process.
  4. Establish a Program funding control process under each Non-development PM.
    • This could be accomplished through a governance board, led by the Non-development PM and made up of community-of-interest PM representatives.  It could be called the Program Funding and Compliance Board, or PFCB.
    • The PFCB would be responsible for allocating COI funds and managing program compliance as agreed upon by the Board.
    • The COI Non-development PM would hold the final authority for program actions subject to senior acquisition manager concurrence and appropriate oversight gate reviews.
  5. Establish a System Engineering Standards and Architecture Certification Authority [could be called SESACA] for each COI Non-development PM.
    • The COI Chief Engineer would lead along with members from the COI programs-of-record.
    • The SESACA would be responsible for recommending funding and for approving the specific standards, architecture, and certification requirements applied to all COI POR development activity.
    • This would include technical authority for all development proposals, collaborative engineering environment processes, and all certification requirements.
  6. Establish a COI POR Configuration Control Board (CCB) chaired by the COI Chief Engineer.
    • This CCB would be responsible for the configuration control of legacy and new operational capability.
    • This CCB would approve configuration changes for all aspects of the COI and all new COI system configuration elements, in coordination with POR-level CCB activity.
  7. Review, approve, and manage 12-month shared-development projects using a three-year pipeline model managed through the PFCB, SESACA, and COI CCB processes. Development would be done within the current POR offices.
    • In steady state, this parallel development process would deliver several new project capabilities to the COI capability baseline every 12 months.
    • Each project would be prioritized and approved for engineering compliance by the SESACA and then authorized for funding by the PFCB.
    • Projects that failed to deliver and certify on time or within proposed specifications would be subject to cancellation and reduced activity in future development cycles.
  8. Allocate, under COI Non-development PM authority, a matching percentage of requested project funding depending upon:
    • The COI POR project request;
    • Value to the COI requirement priority;
    • Value to COI enterprise capability needs; and,
    • Confidence in the developing organization.
    • Funding allocation could vary from as high as 100% to as low as 20%, depending upon the needs of the COI Non-development Program (a simple allocation sketch follows this list).
  9. Certify project deliveries for enterprise compliance with established certification standards, as agreed by the SESACA and approved by the PFCB.
    • Delivered software would be maintained in collaborative COI code repositories such as https://www.forge.mil and made freely available to all authorized DoD and government users.
    • COI programs would pull from the COI repository and integrate the software into the POR fielded capability as needed, in compliance with the COI CCB Joint configuration.
    • COI POR program offices would remain responsible for all fielding and sustainment of each POR capability.
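
To make the matching-fund rule in step 8 concrete, here is a minimal sketch, in Python, of how a PFCB might turn its review scores into an allocation percentage. It is an illustration only: the 0-to-1 scoring scale, the equal weighting, and the function name are assumptions; only the 20%-100% band comes from the concept above.

def matching_share(priority_value, enterprise_value, org_confidence,
                   floor=0.20, ceiling=1.00):
    """Return the COI matching-fund percentage for a POR project request.

    Each input is a 0.0-1.0 score assigned during PFCB/SESACA review:
      priority_value   - value against the COI requirement priorities
      enterprise_value - value to COI enterprise capability needs
      org_confidence   - confidence in the developing organization
    The result is clamped to the 20%-100% band described in step 8.
    """
    raw = (priority_value + enterprise_value + org_confidence) / 3.0  # equal weights (assumed)
    return max(floor, min(ceiling, raw))

# A high-priority, high-confidence project is matched near 100% ...
print(f"{matching_share(0.9, 0.95, 0.9):.0%}")   # 92%
# ... while a marginal project is held at the 20% floor.
print(f"{matching_share(0.1, 0.2, 0.3):.0%}")    # 20%

The POR's requested amount, the first criterion in step 8, would then be multiplied by this share.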

In addition to creating a shared-development, centralized-control model, the COI Joint capability would be built up from the set of baselines currently supporting operational forces.  By evolving these COI baselines through Jointly compliant, services-based technology upgrades, the operational forces will be assured continuity of capability plus a continuous stream of new capability each 12-month cycle.

Each COI program would take advantage of any POR or other service-based technology to propose projects that would best support both Joint needs and component needs.  Requested matching funding would represent best estimates of value to the component and to overall Joint capability.  In addition, new technology emerging from military laboratories, commercial off-the-shelf products, or pilot development activity could be fed back into yearly proposals in support of COI requirements.  Such a process would begin to break the back of the onerous acquisition oversight process that currently prevents rapidly changing information technologies from enhancing operational force capabilities.

Summary

Current programs and new-start IT systems are unable to pace the changing threat and support warfare needs.  By establishing Non-development Program Managers with matching-fund budget allocation, DoD has an opportunity to deliver sustained capability to warfare commanders through a dynamic and more effective shared-development process.  Such a shared-development model has not been applied to information technology mission infrastructure and application programs to date.  Establishing a strong governance process under a COI Non-development Program Office would make it possible to repeat the successful U.S. Interstate Highway model to develop interoperable Joint enterprise capability.  Through assured continuity of current capabilities and aggressive shared development of future capabilities, our Joint forces, and the Military Component organizations that support them, will be able to meet operational demands and sustain superior warfare capability.

20 Responses to An IT Acquisition Joint Game Changer

  1. Paul Tibbits says:

    Marv,
    Very interesting. I will discuss with our IT leadership at VA.
    Please keep in touch.
    Tnx…
    PT
    Paul Tibbits, MD
    Deputy CIO for Enterprise Development

    • Marv Langston says:

      Paul, Thanks for the endorsement. Great to see that you are over there helping also:-)

  2. That was an excellent post with some great information. We published some information on this topic too. You can see it here.

    http://bizconnectionsnow.com/blog/business-funding/business-funding-where-to-start/

    • Marv Langston says:

      Thanks Stephanie. In looking at the link you included, it appears that you are dealing more directly with small-business start-up funding challenges. To the extent that those small businesses could be creating interesting solutions for DoD command and control infrastructure or application capability, the ideas for an interstate-highway leveraged funding model could help some of them. The most likely scenario would be for businesses that are already doing work for one of the S&T organizations or perhaps doing SBIR (Small Business Innovation Research) development. Thanks again for adding your thoughts because it does open the discussion further than I had been thinking. marv

  3. Bob Gourley says:

    Marv I’d like to throw another idea into the mix– something that in some ways could help address the major security issues we are all dealing with but could also enhance innovation and interoperability and functionality of delivered systems.

    The idea is this: DoD will establish a federal contractor cloud and provide every IT integrator and IT provider with a secure infrastructure for all work to be done.

    As we noodle that one, consider that the cost of a thin client box is around $100.00 per person and that cost can be handled by integrators who join this cloud. So now all data is protected. And many other design/engineering/testing/integration/interoperability benefits will arise. This could allow for enhanced shared development. Could also be done in a way that protects intellectual property from other companies when required.

    Any thoughts?

    Bob

    • Marv Langston says:

      Bob, great idea and one that could propel the idea of collaborative engineering across the DoD development community. This system could be managed as a service by a separate contract operating from a secure facility such as a DISA DECC. If we required that all new contracts had to operate from this cloud, we could reduce the IA threat to our contractor base as well as potentially reduce the cost of development for DoD apps. Now how can we create this idea as a pilot…:-) marv

  4. Joe Mazzafro says:

    Excellent comparison between the emerging IT topography and the Interstate Highway system and the way it was achieved. The problem, of course, is not with the concept of enterprise networking & computing coupled with distributed-services technologies, but with the challenges of governance that you observe. Said more literally, the issue will be getting program managers to give up control that they will claim is necessary for accountability. Marine Corps Drill Instructors solved this problem decades ago: you create an environment, based on fear at first, in which you don’t want to be the one that lets the team down. Over time, with good leadership and the right incentives, this morphs into the positive of wanting to be a team player for the good of the team vice the individual.

    joemaz

    • Marv Langston says:

      Joe, great human analogy. Wyman Howard comments to me that the ideas created here do not address the complication of leadership and the human side of the acquisition challenge. This notion of the human side of the challenge, as you astutely address, is worthy of a separate topic that I am considering adding to this blog site. Thanks, marv

  5. Bob Neilson says:

    Marv:
    Some interesting points in your post. A couple of comments:

    1. Your approach may stifle innovation. If you lock joint efforts into existing programs of record, it may be very hard to innovate. Individual developers will have difficulty influencing IT systems in DoD. Innovation comes from individuals and groups. Look at the apps for the iPhone. I think we need to break some china re: the role individuals and small groups play in DoD in future applications. Right now it is still controlled by major defense contractors. They have the power and want to maintain their power base. Anchoring even joint efforts in existing programs of record supports that power base and stifles innovation.
    2. Requirements analyses are always reactionary. They look backwards to what warfighters need. I think we ought to start thinking about “anticipating requirements” much like you did at DARPA. We are not going to “win” present or future conflicts by developing requirements on the basis of what folks need now. We need to anticipate what we’ll need and use a wider audience than the usual suspects to innovate.

    Cheers,
    Bob N.

    • Marv Langston says:

      Bob, Thanks for bringing up both of these critical issues for our future successful National Security. My post does a poor job of explaining why I think that both innovation and requirements can be revolutionized by this evolutionary approach. What I am trying to imply by evolving existing program lines is not that we would continue to send dollars to the incumbent contractor base (although it does not prevent it) but rather that we improve that program line by adding in new functionality based upon successful pilot activity at locations such as our COCOMs and/or yearly technology exercises. The organizations that supply those pilot technologies would be biased toward lean innovative people but would not preclude our large contractors from being innovative as well. But the beauty of this idea is that for a change we would have an institutionalized conduit for rapidly fielding new pilot technology that currently has no access into the installed capability system base. From the requirements side the programs of record would operate under a set of umbrella requirements and the pilots would be applied where they fit the umbrella requirement. By using that concept, the detailed writing of requirements would be supplanted by the implementation of the successful pilot.

      Your notion of anticipating requirements with innovation aligns with my common analogy: if I were asked to write the requirement for, and innovate, a capability equivalent to an iPhone, I would likely have failed, not to mention the time it would have taken to complete such an endeavor within our DoD acquisition environment. The reason is that our ability to define a requirement is almost always informed by the art of the possible. By changing our paradigm toward the above discussion, the art of the possible becomes knowledge available to a much larger operator audience than under our current cumbersome processes. Thanks, marv

  6. Rex Buddenberg says:

    Marv,

    First some synthesis, then a critique of your post. For that purpose, I take your #6 first, then the rest in the order you listed them.

    > 6. Establish a System Engineering Standards and Architecture Certification Authority [could be called SESACA] for each COI Non-development PM.

    Missing THE CRITICAL COMPONENT here: modularization model.
    To make this make sense, decompose any information system into its components:
    I. the internetwork
    A. terrestrial WAN
    B. radio-WAN
    C. LAN (w/in platforms)
    II. End systems
    A. sensors
    B. decision node
    C. actors
    What we gotta agree on is that all the comms systems (I.) have to be routable networks. And keep those PMs scrupulously away from anything on the other side of the routers that border their network segments. This expressly applies to programs like JTRS, MUOS, GBS…. The existence proof of getting this right is GIG-BE where the program delivered terrestrial WAN (and nothing else).
    And we gotta agree on the interface for end systems (II.) to the network. Hardware (ethernet), packaging protocol (e.g. SMTP/S/MIME), security (end-to-end, PKI-driven), precedence labeling (DSCP) and fault detection (SNMP). Recitation of the standards is easy (and almost trivial) once you have the modularization boundary identified.
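
    As a minimal sketch (illustrative only: the class and field names below are assumptions, while the listed standards are the ones named above), that boundary could be written down as a checkable interface standard for end systems:

    from dataclasses import dataclass, field

    # The agreed end-system-to-network interface described above.
    REQUIRED_INTERFACE = {
        "hardware": "Ethernet",
        "packaging": "SMTP/S-MIME",
        "security": "end-to-end PKI",
        "precedence": "DSCP",
        "fault_detection": "SNMP",
    }

    @dataclass
    class EndSystem:
        name: str                                    # a sensor, decision node, or actor (II.A-C)
        interface: dict = field(default_factory=dict)

        def boundary_violations(self):
            """Interface fields that differ from the agreed standards."""
            return [k for k, v in REQUIRED_INTERFACE.items()
                    if self.interface.get(k) != v]

    # Example: an end system that skipped DSCP precedence labeling fails the check.
    sensor = EndSystem("imagery sensor", dict(REQUIRED_INTERFACE, precedence=None))
    print(sensor.boundary_violations())              # ['precedence']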

    End synthesis critique; rest is comments in order:

    >1. Eliminate all future IT new start programs and build enterprise capability as upgrades to existing programs-of-record.

    I assume that you are here dealing with information systems, and mostly the decision support part of information systems. And not with the internetwork that connects them up. (and not with the sensors that originate the data).
    Given that, we’ve consistently found that ~85% of the content of any decision support system is mission-non-specific and already exists. (We found that in the two dozen programs that were predecessors to GCCS.) Getting this out of reinvention and into the maintenance/improvement mode is indeed critical. Given my added caveats, this is right on the money.
    But note the caveats. Some of the components of the overall information system aren’t susceptible to this fix. The remaining 15% of the decision support node is also not susceptible to this fix, although the maintain-your-way-to-the-future approach bounds that problem effectively.

    > 3. Create a requirements process

    Note that the modularization model above decomposes the problem and different requirements processes apply to different parts of the model:
    Once we understand that all comms comes in the form of routable networks, the communications requirement problem becomes straightforward, quantifiable, and largely tech risk free. (The requirements analysis perspective becomes a plowshares–>swords internet one — what makes DoD’s internet different from commercial? The answers all fall into Ao, security, reach to mobile platforms, QoS categories).
    Further, there is unlikely to be a great requirements flail-ex about sensors and actors. The only instinct here that is important is to counter the mainframe priesthood tendency that persists in a lot of our systems — drive as much correlation as possible into the sense nodes (not the central computer) but don’t drive any of the multi-sensor fusion function out into the sensors.
    And even further, the decision node flail-exes are all about the 15%. The proper approach there is humility: there’s no way you nor I nor any other requirements body in an office is gonna get this right. Get prototypes in front of the users and engineer a short, effective (no up-and-over) feedback loop.

    The rest of the recommendations are generally OK. You should note that the forge.mil approach, while laudable, is dead in the water without software that has a GPL or similar open source license. Most DoD software (C2PC and GCCS included) has enough contractor tentacles attached to it that you can’t get it into a free open source software mode. Great idea, but simple establishment of forge.mil isn’t going to help much. And hiding it behind all the access controls will inhibit any cross-fertilization between DoD and civil emergency services.
    For those not familiar with the very successful model in the commercial world, Free Open Source Software is thriving. Active exercises (of whatever scale) are refreshed frequently: RedHat releases a new Fedora edition every 6 months (v.11 last week). Puppy releases a new version of Puppylinux on an irregular schedule, but at least 3-5 annually. These all follow the 85% observation above.

    Concluding

    > 1. New start programs are forced into stove-piped systems by the current acquisition process

    Think what the antonym of ‘stovepipe’ would be? We all know we don’t want stovepipes, but expressing what we don’t want doesn’t move us in the right direction …. what is it that we DO want? IMHO, the antonym of ‘stovepipe’ is ‘interchangeable parts’.
    Interchangeable parts is the foundation that makes the manufacturing part of the Industrial Revolution work. Now, 200 years later, we need to apply the same reasoning to our information systems. Each PM should be charged with delivering an interchangeable part.

    • Marv Langston says:

      Rex, I love your interchangeable-parts analogy. The technology revolution that brought us the Winchester rifle and the Model A Ford is precisely the concept that today’s services-based enterprises are moving toward. I like your division of labor, which much better articulates the layers that make up a modern enterprise. However, rather than limit the program-of-record evolution (which does not prevent a new-start program when needed) to the application layer, I believe the concept applies to all of the enterprise layers. Your example of GIG Core demonstrated this concept without most people within the DoD acquisition world realizing that this is what we did. The result was a successful program that accomplished in two years what would normally have taken a decade, with a 90% probability that it would have failed along the arduous path!!! marv

  7. Marv,

    I used your current posting to support my meetings of earlier this week and I am only now responding to you with my comments.

    I believe computing costs are in a deflationary spiral; let me explain. Secure Cloud computing, or centralized systems infrastructure, exists today and is rapidly layering on the security necessary to meet and exceed DoD IA requirements within the next 2-3 years. The Cloud includes the datacenter, electricity, cooling, system hardware and primary operating environments, data storage, 7x24x365 operations and support, etc., all for between $0.10 and $3.00 per server, per hour, and available today. Frankly, it costs me $1,000 annually just to power my server, which is approximately $0.11 per hour. Cloud computing will allow DoD to change the way the Services contract for and purchase computing services.
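
    (A quick back-of-envelope check of that power figure, assuming the server runs 24x7:)

    hours_per_year = 24 * 365                  # 8,760 hours
    print(f"${1000 / hours_per_year:.2f}/hr")  # ~$0.11/hr for power alone, vs. the
                                               # quoted $0.10-$3.00 all-in per server-hour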

    Five years from today there may be only a small handful of successful Cloud providers, since the scale required to compete economically has already been achieved and market share won. DoD SIs will add value to the Cloud stacks from these successful providers, creating a “Private Cloud Enclave” within the larger Cloud to support their specific customers. The Services will have the option to purchase these computing services directly, thus lowering their cost again, and to GFE the Cloud services necessary to their SIs on a project-by-project basis, just as they might provide a GFE server to their contractor today.

    So now that secure Cloud services are underway it’s time to turn our attention to those services that leverage the benefits of the Cloud to the fullest extent. Forge.mil does just that for both Government Purpose License (GPL) software development and sustainment projects via “SoftwareForge.mil” and Gated Projects like an NECC program via “ProjectForge.mil”

    SoftwareForge.mil, since IOC on 17 March 2009, has some 1,900 members and 80+ DoD GPL projects underway. In 5 years I suspect SoftwareForge.mil may have some 500,000 members and 2,000-5,000 software development projects underway, just as Sun’s http://www.Java.net has achieved over the last 5 years.

    ProjectForge.mil is scheduled for LOA delivery in September 2009 and IOC sometime in October or November 2009. ProjectForge.mil is a gated community operated from within the Cloud as a secure service licensed on a per-subscriber basis. Those customers or SIs not wishing to use the Cloud today are free to license the Forge.mil software and deploy it into their project on their own systems, under their complete control.

    If the Cloud represents the interstate highway system then ProjectForge.mil represents the COI’s or intrastate highway system. Only when ProjectForge.mil is served from within the Cloud can DoD leverage all the benefits of the interstate highway system.

    What are some of the benefits of this interstate highway (e.g. Forge.mil)?
    • Agile development & common evaluation criteria
    • Centralized software repository supporting possible code re-use
    • Parallel design, build, test and certification tracks serve to collapse time to market
    • Inclusive, collaborative problem solving
    • Geographic dispersion of team members (e.g. engineers, SME’s, customer, etc.)
    • Business Process Exchange (workflow – best practices) shared project to project
    • Centralized secure Cloud of “shared” build & test servers accessible by team members (subscribers) on-demand.
    • Visibility for all – Role based security allows participation with control
    • Peer review & recognition of individual performance “meritocracy”
    • Inclusive development and participation right down to the actual end user (war fighter)
    • Economy of scale aggregating all DoD subscribers under a single ELA for Forge.mil subscriptions served from the Cloud, or deployed to the program site.

    To complete the interstate highway system within DoD, in this case “Forge.mil served from the Cloud,” the last key component is to change the DoD software procurement process to leverage this “framework, fabric, or interstate highway system” in possibly all existing, and certainly all new, solution procurements.

    The economy of scale achieved by the Cloud and Forge.mil will save DoD billions, and those dollars can and should be repurposed in part to hire the best people from within the companies that serve DoD, people capable of providing the software components and solutions required to continue to successfully defend America.

    • Marv Langston says:

      John,

      Great points. I am working to get DoD or some component of DoD to pilot a cloud desktop environment to show a comparison to the current expensive licensed and fielded method used today. This is a small example of your larger discussion below. Also, I was told by Rob Vietmeyer that the correct address for forge.mil is: https://www.forge.mil. Thanks for the useful discussion.

  8. Kurt says:

    Marv,
    Terrific concept. An interesting extension is to include partner nations with a freeware approach. The entry-level system is free to nations in partnership with the US, like HOA, and some nations are more active contributors. It is similar to industry’s model, where a few premium subscribers pay for a larger number of baseline users who ride free. I think this model would help the nations we are reaching out to with IT.
    Cheers sir.
    V/R
    Kurt

    • Marv Langston says:

      Kurt, I like your international extension of this idea. If we could gain some traction on this in our own C2 Joint community then perhaps we could get this started… but then again perhaps we should start from the international side and work backwards given the bureaucratic entrenchment currently holding us back.

  9. Kurt says:

    Marv,
    Agree sir. Think the most likely way we will change our legacy architecture is a demand signal from the allies. Perhaps a system they have picked up from us that we discarded…. 🙂

  10. Joel Jackson says:

    Very nice content in this blog…thanks for sharing it.

    I work at Red Hat, where a “shared-development, centralized control model” is something we’re familiar with… and arguably have mastered. Linux was not created from an RFP, but rather from a very large shared-development, centralized-control model of development.

    It’s great news to hear DoD minds like yourself talk about this.

    But remember the quote about how the problems of today cannot be solved by the minds that created them… I’ve seen that quoted a couple of times on this blog:) So I’m wondering if it makes sense to create this “walled garden” for shared development within the same community that created the problems? Especially when it comes to default infrastructure.

    As long as the DoD has control over who has access and who has permission to use it, the opportunities for TRULY quantum innovation are limited. Most DoD executives then proceed to tell me “security is why we can’t let everyone play in our garden.” Disagree… and we don’t think you have to compromise on security one bit. The “walled garden” approach leaves you open to missing out on that quantum innovation, and that can be a staggering price to pay.

    You probably recognize this quote from Tim Berners-Lee: “The decision to make the Web an open system was necessary for it to be universal. You can’t propose that something be a universal space and at the same time keep control of it.”

    Let’s put that into perspective. When Tim B-L created the web, it was to facilitate the sharing of theoretical physics papers and data. The nature of those works happened to be so theoretical that there were no images to relate, so the initial web had no support for graphics. It then fell to Marc Andreessen, the undergrad at NCSA, to create a web browser and add support for image tags, which made him famous and the Web a mainstream technology. Quantum innovation outside the walled garden.

    The thinking that DoD can’t participate in an open (to everyone) community and gain quantum innovation is totally mistaken. In 2001 we did it with NSA.

    As we know, the NSA has a mission: not just to be the nation’s code keeper and code breaker, but also to be charged with the security of the nation’s infrastructure. There is a strong set of security protections called Mandatory Access Controls that, back in 2001, were the domain of expensive proprietary operating systems running on very expensive proprietary hardware. These systems were cumbersome to deploy and maintain.

    The implementation used by the NSA was good at protecting classified information. But as more systems came online, with new types of information (financial records, healthcare records, etc.), it was quickly realized that the old approach not only did not protect the information itself, it did not protect the applications with access to that information.

    New attempts were made, all resulting in failure, all because they were developed in isolation without opportunity for feedback (the walled-garden approach). So a radically different approach was tried in 2001: introduce a new model to the Linux and open source community, known as Security-Enhanced Linux. The NSA realized that, just as critical as the technology itself, there was an opportunity for people to test, contribute, and comment on the technology while it was being developed (the shared-development, centralized-control approach).

    The next problem was how to put this new technology in the hands of the private sector. NSA then worked with us here at Red Hat to take SELinux and integrate it into the mainstream Linux kernel and into an enterprise Linux distribution. That distribution is now the most secure modern mainstream operating system available, and is not only being used by the gov’t to protect its secrets, but now also by banks, hospitals, and other organizations to protect their systems and ultimately your data.

    Quantum innovation.

    If NSA, the most secure agency in the world, can do this, why can’t other DoD agencies do this for ALL their default infrastructure?

    Imagine that the DoD first sets up an open technology lab that is not only 100% open source, but that begins with the premise that they are going to be 100% full-fledged participants in open source. For example, let’s say they are going to choose content management, identity management, system management, and file management tools that are 100% open source.

    Let’s look at the proposition for content management.

    Whatever the content management requirements are from the system, there can be no requirement that is so government specific, so related to national security that they cannot share the architecture or the code of the system. Just like with crypto: the whole crypto community rejects the premise that any code can be secure without subjecting the code to scrutiny. (And the fact that every highly funded proprietary DRM “solution” has been undermined in a matter of weeks proves the ridiculousness of the idea that secure crypto can be developed in a vacuum).

    Now, the authentication, authorization, and access control system of the content management system can be highly secure, using sources and methods known only to the DoD; but when it spits out its digital signatures, those signatures work with the API to lock and unlock data, content-editing features, user account creation, access controls, etc.

    So now we have a hybrid model: true OSS (meaning open to everyone) being the DEFAULT for infrastructure, and gov OSS for special features that interop or control corners of the system.
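
    A minimal sketch of that hybrid split (the SignatureAuthority and ContentStore names below are hypothetical, not an existing API): the content-management core is fully open, while the signing module behind it can stay government-controlled.

    from abc import ABC, abstractmethod

    class SignatureAuthority(ABC):
        """Open interface; the implementation may be public or government-only."""
        @abstractmethod
        def sign(self, payload: bytes) -> bytes: ...
        @abstractmethod
        def verify(self, payload: bytes, signature: bytes) -> bool: ...

    class ContentStore:
        """Open-source content-management core: edits are locked and unlocked
        by whatever SignatureAuthority is plugged in."""
        def __init__(self, authority: SignatureAuthority):
            self.authority = authority
            self._records = {}

        def put(self, key: str, data: bytes) -> bytes:
            self._records[key] = data
            return self.authority.sign(data)        # token required for later edits

        def update(self, key: str, data: bytes, token: bytes) -> bool:
            if not self.authority.verify(self._records[key], token):
                return False                        # access denied
            self._records[key] = data
            return True

    class DemoAuthority(SignatureAuthority):
        """Stand-in for testing only; a deployed system would swap in a
        government-controlled module using its own sources and methods."""
        def sign(self, payload: bytes) -> bytes:
            return payload[::-1]                    # placeholder, not real crypto
        def verify(self, payload: bytes, signature: bytes) -> bool:
            return signature == payload[::-1]

    store = ContentStore(DemoAuthority())
    token = store.put("doc1", b"draft")
    print(store.update("doc1", b"final", token))        # True: valid signature
    print(store.update("doc1", b"tampered", b"bogus"))  # False: rejected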

    Nokia and Google do this with cellphones. SONY does this with DTV’s and games.

    Likewise, it should be no secret that the military uses numerically accurate math libraries or high-grade secure authentication. What the missiles point at (targeting) and the specific keys for activating those missiles are the things to protect (inside the walled garden).

    • Marv Langston says:

      Joel, great comments and a subject that I have not yet written a blog post about (I think you have just done that for us all:-)). I believe that our Nation and the globe needs to adopt a fully transparent philosophy for the entire IT infrastructure. Until we do so, we will not be able to create an effective cyber-security future for us all.

      I agree with you that the SELinux effort is a very positive step in the right direction. Tim B-L’s comment is spot on and we have to break this cycle for us and our children… Thanks for jumping in…

    • Marv Langston says:

      Joel J, great points. I often use open source software development models like the Java Community Process as examples of how we should be developing: using collaborative technical teams that both solve technical challenges and agree upon the standards and patterns that need to be used and certified against. That is the bottom-up part of governance that I believe we must adopt and make efficient. Unfortunately, most of our acquisition oversight bureaucrats think governance means top-down direction. We must come up with a way to learn from the work you are doing. marv
