> Under the law, agency chief information officers are required to develop policies within 180 days of enactment that implement the act. Those policies need to ensure that custom-developed code aligns with best practices, establish a process for making the metadata for custom code publicly available, and outline a standardized reporting process.
> Per the new law, metadata includes information about whether custom code was developed under a contract or shared in a repository, the contract number, and a hyperlink to the repository where the code was shared.
Sadly it doesn't sound like the law requires agencies to make the code publicly open source; it just requires inter-agency sharing (bill full text [1]). They only need to share "metadata" publicly.
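For a concrete sense of what that publicly shared metadata might look like, here's a minimal sketch based only on the fields the law enumerates (contract vs. repository, contract number, repository hyperlink); the field names and values are hypothetical, not from any published schema:

```python
# Hypothetical metadata record for one custom-code project. Field names are
# invented for illustration; the law enumerates the information, not a schema.
metadata_record = {
    "project_name": "example-agency-tool",       # illustrative
    "developed_under_contract": True,            # contract vs. in-house development
    "contract_number": "47QTCA24D0001",          # illustrative contract number
    "shared_in_repository": True,
    "repository_url": "https://code.example.gov/example-agency-tool",
    # Note: only this record has to be public; the repository behind
    # repository_url can stay restricted to inter-agency access.
}
```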
> Sadly it doesn't sound like the law requires agencies to make the code publicly open source; it just requires inter-agency sharing
This is a good first step. The next would be sharing with states, municipalities and universities. Public sharing disperses a lot of IT responsibility that presently doesn’t exist.
And they will eventually learn that it's easier to share with the public at large than with a neighboring department.
At least that's my experience in a commercial setting: it's easier to publish something without restriction than to share it with a specific hardware or software partner only. The latter creates all kinds of questions around neutrality, applicability of NDAs, licensing, and so on.
What would be more interesting is to require all private companies doing US government contracts, especially the ones handling classified projects, to do the same as these US agencies!
Gov-acquired software can be architected to separate open-source components from classified components. This enables reuse of commercial (open or closed) software with the economics and rapid iteration of larger markets. For open-source components, this enables public collaboration on COTS, with non-public collaboration on classified GO(v)TS components.
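A minimal sketch of that kind of split, with every name hypothetical: the open-source core defines a stable extension interface, and the classified GO(v)TS piece ships as a separate, non-public plugin that only exists on accredited systems.

```python
# Publicly shareable core: defines the extension point and a default.
import importlib


class Channel:
    """Abstract transport; the open core depends only on this interface."""

    def send(self, payload: bytes) -> None:
        raise NotImplementedError


class PlainChannel(Channel):
    """Default implementation that ships with the open-source core."""

    def send(self, payload: bytes) -> None:
        print(f"sending {len(payload)} bytes over the default transport")


def load_channel() -> Channel:
    # "classified_ext" is a hypothetical non-public package, installed only
    # on accredited systems; the public core degrades gracefully without it.
    try:
        ext = importlib.import_module("classified_ext")
        return ext.SecureChannel()  # assumed to implement Channel
    except ImportError:
        return PlainChannel()


if __name__ == "__main__":
    load_channel().send(b"hello")
```

The open repository never contains anything sensitive; the classified component is just another consumer of a published interface, which is what lets the public side iterate at commercial speed.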
> The BRL-CAD source code repository is the oldest known public version-controlled codebase in the world that's still under active development, dating back to 1983-12-16 00:10:31 UTC.
It's practical for software vendors on platforms [1] with virtualization, a set that has gradually grown over the past decade to include Windows, ChromeOS, and Android.
It doesn't help with splitting existing software (written, say, 20 years ago), but in the current era, where software is architected as microservices and packaged for containers and VMs, software is more likely to be "born as a reusable component".
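To illustrate "born as a reusable component": a service's reuse boundary is its network API, so sharing it (with another agency or the public) doesn't require untangling it from the rest of the system first. A minimal stdlib-only sketch; the endpoint and port are invented:

```python
# A microservice is coupled to the rest of the system only through its HTTP
# API, so the component is shareable as-is. Endpoint and port are illustrative.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer


class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/healthz":
            body = json.dumps({"status": "ok"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()


if __name__ == "__main__":
    HTTPServer(("", 8080), Handler).serve_forever()
```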
I've signed many contracts with this clause. I've always interpreted it as "if you post what you did on HN and everyone responds saying you clearly shouldn't have done that", then it isn't best practice.
I mean, it’s not like we’ve ever seen this with the agile movement /s.
I’ve gone through “agile transitions” in government contracting. At a high level, it starts out with a lofty vision of reducing lead times and increasing productivity. Then directives get handed down through layers of management, the decision is made to adopt Scrum or SAFe™, that gets handed down to middle management, who tailor the process in ways that specifically benefit themselves, and you end up with waterfall done poorly and with extra steps™.
What will happen is that there will be very loose definitions of source code and flexible definitions of when code is released. If an agency does not want to share, they’ll find a way to evade and still check off the box.
Sure, but won't there also be some agencies who voluntarily implement sharing in the spirit of the law? And won't that be positive for them and their dept's reputation?
Most DOE contracts (the ones the government has with the university or consortium running the lab) usually say something to the effect of “unless you can prove this source code is marketable/SBIR-worthy, you can keep it private or open source it (but not under the GPL)”. There are exceptions, but my guess was that the reasoning was also there so that _other_ contractors would retain the ability to modify source code and similarly not release it (presumably for defense).
So the bar has been high to keep it private for $$$ reasons, but you could always keep it private for any other reason.
DOE CODE is the program that ostensibly tracks the open-source software, usually just through GitHub organizations. OSTI is the division that tracks all IP and research.
Some things, like ZFSOnLinux, are already shared publicly. The repository is now the OpenZFS repository and it has made many people's lives better. I know it has made my life easier. The open source development model benefited LLNL, which got a much better code base than they would have developed on their own. :)
There are other things already shared publicly, like NASA's IKOS.
That one gets far less attention from third parties than it should. If it could be developed into a general purpose sound static analyzer that handles multithreading, it would help to improve many other projects.
> IN GENERAL.—This Act shall not apply to classified source code or source code developed primarily for use in a national security system (as defined in section 11103 of title 40, United States Code).
The exemptions are extremely broad in section 4 of the act. I don't expect anything interesting to come of this reporting. Or for any money to be saved.
And I imagine there will be a sudden increase in the number of classified software projects and national security systems in the next 180 days. This may very well be another case of a law trying to make things better but ultimately having the opposite effect.
Even worse, it requires establishing new policies and staffing enforcement. They'll need to throw bodies at this effort, and so people building systems will have even more pesky forms, policies, and permissions to deal with. This will add cost and have very little, if any, positive impact.
This is typically the “smart but cynical” position I hear from some bureaucratic actors and those who aspire to become like them.
It’s the sophisticated version of the “Don’t attempt any change” brigade’s position.
My observation from a lifetime in very large, cumbersome orgs is that improvement only comes from change, and that in highly dysfunctional, low-performance, low-ambition environments, almost any reasonable change, supported by a really tiny number of engaged participants with a clue, leads to outsized positive step changes.
Even better, doing this as a sustained, tide-coming-in approach over several years can create more engaged people with a clue and a slow transition to high-ambition, moderate-to-good performance cultures.
It’s worth the effort if you’re not doing it alone and you know that all the attempts pay off as part of a cumulative push. It changes lives both in the service delivery org and among those they’re supposed to support.
Sounds like you came through frustrated cynicism yourself to a kind of enlightened optimism and want to call it realism. I really appreciate this point of view and I'd like to get there myself, but here's the rub:
> tide-coming-in approach over several years
This phrase does a bunch of work, and seems to almost agree with the cynical perspective that individual small positive changes (or the more common failed attempt at the same) are futile. But if the difference between optimism and cynicism were only a matter of being patient and persistent, then we should be able to observe things getting better over time in a relatively consistent way.
Is that happening? Honest question, what large organizations can we point to that are better or more effective than they were 5, 10, 50 years ago? (And: for any situations where improvement happened, was it really a tiny number of engaged participants doing bottom-up change, or was it top-down change by some kind of executive decree?)
Youth without perspective will have a hard time answering, maybe, but if the youth and the wise old heads are both trending cynical at the same time, then maybe the cynical position is actually true, and patience/perspective are simply not as relevant as the optimist would hope. My own experience is probably somewhere between youth and wisdom, and I tend to avoid large orgs as much as possible! But as an outsider, it looks like large orgs are all dysfunctional by default and only get more dysfunctional over time, with or without external pressures forcing that situation. Maybe there's a bureaucratic version of the laws of thermodynamics at work here: the phenomenon of entropy isn't really cynical or optimistic or pessimistic after all; it's just the way things are.
Some people jump early. I’ve stayed long enough - or returned later on request after doing a startup - to see whole orgs and product lines in telcos (!!), banks (!!!) and even government agencies (yeah, wasn’t really expecting that tbh) getting structurally better over time due to the concerted effort of a relatively small group of folks.
I’ve been part of turnarounds where senior execs have said that the three hundred people here will lose their jobs if nothing changes. I still talk to some of those teams that transformed themselves and others, and made it.
[1] https://www.congress.gov/bill/118th-congress/house-bill/9566...