An open source wish list

After reviewing a few code-dependent scientific papers recently, I’ve been thinking about reproducibility. Is there a minimum requirement for scientific code, or should we just be grateful for any code at all?

The sky’s the limit


I’ve come to the conclusion that there are a few things that are essential if you want anyone to be able to do more than simply read your code. (If that’s all you want, just add a code listing to your supplementary material.)

The number one thing is an open licence. (I recently wrote about how to choose one.) Assuming the licence is consistent with everything you have used (e.g. you haven’t taken a GPL-licensed library and put an Apache licence on the result), you are protected by the indemnity clauses and other people can re-use your code on your terms.

After that, good practice is to improve the quality of your code. Most of us write horrible code a lot of the time. But after a bit of review, some refactoring, and some input from colleagues, you will have something that is less buggy, more readable, and more reusable (even by you!).

If this was a one-off piece of code, providing figures for a paper for instance, you can stop here. But if you are going to keep developing this thing, and especially if you want others to use it too, you should keep going.

Best practice is to start using continuous integration, to help ensure that the code stays in good shape as you continue to develop it. And after that, you can make your tool more citable, maybe write a paper about it, and start developing a user/contributor community. The sky’s the limit — and now you have help!
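For example, if the code lives on GitHub, a minimal continuous integration setup can be a single workflow file. This sketch assumes a Python project whose tests run with pytest; the file path and version pins are illustrative, not prescriptive:

```yaml
# .github/workflows/ci.yml -- run the test suite on every push
name: tests
on: [push, pull_request]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - run: pip install pytest
      - run: pytest
```

Even a tiny setup like this catches regressions early, because every change is tested automatically on a clean machine.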

Other models

When I shared this on Twitter, Simon Waldman mentioned that he had recently co-authored a paper on this topic. Harrison et al. (2021) proposed three priorities for scientific software: to be correct, to be reusable, and to be documented. From there, they developed a hierarchy of research software projects:

  • Level 0 — Barely repeatable: the code is clear and tested in a basic way.

  • Level 1 — Publication: code is tested, readable, available and ideally openly licensed.

  • Level 2 — Tool: code is installable and managed by continuous integration.

  • Level 3 — Infrastructure: code is reviewed, semantically versioned, archived, and sustainable.

There are probably other models out there too; if you know of a good one, please drop it in the comments.

References

Sam Harrison, Abhishek Dasgupta, Simon Waldman, Alex Henderson & Christopher Lovell (2021, May 14). How reproducible should research software be? Zenodo. DOI: 10.5281/zenodo.4761867

Equinor should change its open data licence

This is an open letter to Equinor to appeal for a change to the licence used on Volve, Northern Lights, and other datasets. If you wish to co-sign, please add a supportive comment below. (Or if you disagree, please speak up too!)


Open data has had huge impact on science and society. Whether the driving purpose is innovation, transparency, engagement, or something else, open data can make a difference. Underpinning the dataset itself is its licence, which grants permission to others to re-use and distribute open data. Open data licences are licences that meet the Open Definition.

In 2018, Equinor generously released a very large dataset from the decommissioned field Volve. Initially it was released with no licence. Later in 2018, a licence was added but it was a non-open licence, CC BY-NC-SA (open licences cannot be limited to non-commercial use, which is what the NC stands for). Then, in 2020, the licence was changed to a modified CC BY licence, which you can read here.

As far as I know, Volve and other projects still carry this licence. I’ll refer to this licence as “the Equinor licence”. I assume it applies to the collection of data, and to the contents of the collection (where applicable).

There are 3 problems with the licence as it stands:

  1. The licence is not open.

  2. Modified CC licences have issues.

  3. The licence is not clear and exposes licensees to the risk of infringement.

Let's look at these in turn.

The licence is not open

The Equinor licence is not an open licence. It does not meet the Open Definition, section 2.1.2 of which states:

The license must allow redistribution of the licensed work, including sale, whether on its own or as part of a collection made from works from different sources.

The licence does not allow sale and therefore does not meet this criterion. Non-open licences are not compatible with open licences, therefore these datasets cannot be remixed and re-used with open content. This greatly limits the usefulness of the dataset.

Modified CC licences have issues

The Equinor licence states:

This license is based on CC BY 4.0 license 

I interpret this to mean that it is intended to act as a modified CC BY licence. There are two issues with this:

  1. The copyright lawyers at Creative Commons strongly advise against modifying (in particular, adding restrictions to) their licences.

  2. If you do modify one, you may not refer to it as a CC BY licence or use Creative Commons trademarks; doing so infringes their trademarks.

Both of these issues are outlined in the Creative Commons Wiki. According to that document, these issues arise because modified licences confuse the public. In my opinion (and I am not a lawyer, etc), the Equinor licence is confusing, and it appears to violate the Creative Commons organization's trademark policy.

Note that 'modify' really means 'add restrictions to' here. It is easier to legally and clearly remove restrictions from CC licences, using the CCPlus licence extension pattern.

The licence is not clear

The Equinor licence contains five restrictions:

  1. You may not sell the Licensed Material.

  2. You must give Equinor and the Volve license partners credit, and provide a link to these terms and conditions, as well as a copyright notice if applicable.

  3. You may not share Adapted Material under a license that prevents recipients from complying with these terms and conditions.

  4. You shall not use the Licensed Material in a manner that appears misleading nor present the Licensed Material in a distorted or incorrect manner. 

  5. The license covers all data in the dataset whether or not it is by law covered by copyright.

Looking at the points in turn:

Point 1 is, I believe, the main issue for Equinor. For some reason, this is paramount for them.

Point 2 seems like a restatement of the BY restriction that is the main feature of the CC BY licence, and is extensively described in Section 3.a of that licence.

Point 3 is already covered by CC BY in Section 3.a.4.

Point 4 is ambiguous and confusing. Who is the arbiter of this potentially subjective criterion? How will it be applied? Will Equinor examine every use of the data? The scenario this point is trying to prevent seems already to be covered by standard professional ethics and 'errors and omissions'. It's a bit like saying you can't use the data to commit a crime — it doesn't need saying, because committing crimes is already illegal.

Point 5 is strange. I don’t know why Equinor wants to licence material that no-one owns, but licences are legal contracts, and you can bind people into anything you can agree on. One note here: the rights in a database (so-called 'database rights') are separate from the rights in its contents. In many jurisdictions it is possible to claim sui generis rights in a collection of non-copyrightable elements; maybe this is what was intended? Importantly, sui generis database rights are explicitly covered by CC BY 4.0.

Finally, I recently received an email communication from Equinor that stated the following:

[...] nothing in our present licencing inhibits the fair and widespread use of our data for educational, scientific, research and commercial purposes. You are free to download the Licensed Material for non-commercial and commercial purposes. Our only requirement is that you must add value to the data if you intend to sell them on.

The last sentence (“Our only requirement…”) states that there is only one added restriction. But, as I just pointed out, this is not what the licence document states. The Equinor licence states that one may not sell the licensed material, period. The email states that I can sell it if I add value. Then the questions are, "What does 'add value' mean?", and "Who decides?". (It seems self-evident to me that it would be very hard to sell open material if one wasn't adding value!)

My recommendations

In its current state, I would not recommend anyone to use the Volve or Northern Lights data for any purpose. I know this sounds extreme, but it’s important to appreciate the huge imbalance in the relationship between Equinor and its licensees. If Equinor's future counsel — maybe in a decade — decides that lots of people have violated this licence, what happens next could be quite unjust. Equinor can easily put a small company out of business with a lawsuit. I know that might seem unlikely today, but I urge you to read about GSI's extensive lawsuits in Canada — this is a real situation that cost many companies a lot of money. You can read about it in my blog post, Copyright and seismic data.

When it comes to licences, and legal contracts in general, I believe that less is more. Taking a standard licence and adding words to solve problems you don’t have but can imagine having — and lawyers have very good imaginations — just creates confusion.

I therefore recommend the following:

  • Adopt an unmodified CC BY 4.0 licence for the collection as a whole.

  • Adopt an unmodified CC BY 4.0 licence for the contents of the collection, where copyrightable.

  • Include copyright notices that clearly state the copyright owners, in all relevant places in the collection (e.g. data folders, file headers) and at least at the top level. This way, it's clear how attribution should be done.

  • Quell the fear of people selling the dataset by removing as many barriers as possible to using the free version, and by continuing to be a conspicuous champion for open data.

If Equinor opts to keep a version of the current licence, I recommend at least removing any mention of CC BY; it only adds to the confusion. The Equinor licence is not a CC BY licence, and mentioning Creative Commons violates their policy. I also suggest simplifying the licence if possible, and clarifying any restrictions that remain. Use plain language, give examples, and provide a set of frequently asked questions.

The best path forward for fostering a community around these wonderful datasets that Equinor has generously shared is to adopt a standard open licence as soon as possible.

How can technical societies support openness?


There’s an SPE conference on openness happening this week. Around 60 people paid the $400 registration fee — does that seem like a lot for a virtual conference? — and it’s mostly what you’d expect: talks and panel discussions. But there’s 20 minutes per day for open discussion, and we must be grateful for small things! For sure, it is always good to see the technical societies pay attention to open data, open source code, and open access content.

But what really matters is action, and in my breakout room today I asked about SPE’s role in raising the community’s level of literacy around openness. Someone asked in turn what sorts of things the organization could do. I said my answer needed to be written down 😄 so here it is.

To save some breath, I’m going to use the word openness to talk about open access content, open source code, and open data. And when I say ‘open’, I mean that something meets the Open Definition. In a nutshell, this states:

“Open data and content can be freely used, modified, and shared by anyone for any purpose.”

Remember that ‘free’ here means many things, but not necessarily ‘free of charge’.

So that we don’t lose sight of the forest for the trees, my advice boils down to this: I would like to see all of the technical societies understand and embrace the idea that openness is an important way for them to increase their reach, improve their accessibility, become more equitable, increase engagement, and better serve their communities of practice.

No, ‘increase their revenue’ is not on that list. Yes, if they do those things, their revenue will go up. (I’ve written about the societies’ counterproductive focus on revenue before.)

Okay, enough preamble. What can the societies do to better serve their members? I can think of a few things:

  • Advocate for producers of the open content and technology that benefits everyone in the community.

  • Help member companies understand the role openness plays in innovation and help them find ways to support it.

  • Take a firm stance on expectations of reproducibility for journal articles and conference papers.

  • Provide reasonable, affordable options for authors to choose open licences for their work (and such options must not require a transfer of copyright).

  • When open access papers are published, be clear about the licence. (I could not figure out the licence on the current most read paper in SPE Journal, although it says ‘open access’.)

  • Find ways to get well-informed legal advice about openness to members (this advice is hard to find; most lawyers are not well informed about copyright law, never mind openness).

  • Offer education on openness to members.

  • Educate editors, associate editors, and meeting convenors on openness so that they can coach authors, reviewers, and contributors.

  • Improve peer review machinery to better support the review of code and data submissions.

  • Highlight exemplary open research projects, and help project maintainers improve over time. (For example, what would it take to accelerate MRST’s move to an open language? Could SPE help create those conditions?)

  • Recognize that open data benchmarks are badly needed and help organize labour around them.

  • Stop running data science contests that depend on proprietary data.

  • Put an open licence on PetroWiki. I believe this was Apache’s intent when they funded it, hence the open licences on AAPG Wiki and SEG Wiki. (Don’t get me started on the missed opportunity of the SEG/AAPG/SPE wikis.)

  • Allow more people from more places to participate in events, with sympathetic pricing, asynchronous activities, recorded talks, etc. It is completely impossible for a great many engineers to participate in this openness workshop.

  • Organize more events around openness!

I know that SPE, like the other societies, has some way to go before they really internalize all of this. That’s normal — change takes time. But I’m afraid there is some catching up to do. The petroleum industry is well behind here, and none of this is really new — I’ve been banging on about it for a decade and I think of myself as a newcomer to the openness party. Jon Claerbout and Paul de Groot must be utterly exhausted by the whole thing!

The virtual conference this week is an encouraging step in the right direction, as are the recent SPE datathons (notwithstanding what I said about the data). Although it’s a late move — making me wonder if it’s an act of epiphany or of desperation — I’m cautiously encouraged. I hope the trend continues and picks up pace. And I’m looking forward to more debate and inspiration as the week goes on.

Projects from the Geothermal Hackathon 2021

geothermal_skull_thumb.png

The second Geothermal Hackathon happened last week. Timed to coincide with the Geosciences virtual event of the World Geothermal Congress, our 2-day event brought together about 24 people in the famous Software Underground Chateau (I’m sorry if I missed anyone!). For comparison, last year we were 13 people, so we’re going in the right direction! Next time I hope we’re as big as one of our ‘real world’ events — maybe we’ll even be able to meet up in local clusters.

Here’s a rundown of the projects at this year’s event:

Induced seismicity at Espoo, Finland

Alex Hobé, Mohsen Bazagan and Matteo Niccoli

Alex’s original workflow for creating dynamic displays of microseismic events was to create thousands of static images, then stack them into a movie, so the first goal was something more interactive. On Day 1 Alex built a Plotly widget with a time zoomer/slider in a Jupyter Notebook. On Day 2 he and Matteo tried Panel for a dynamic 3D plot. Alex then moved the data into LLNL’s VisIt for fully interactive 3D plots. The team continues to hack on the idea.

geothermal_hack_2021_seismic.png

Fluid inclusions at Coso, USA

Diana Acero-Allard, Jeremy Zhao, Samuel Price, Lawrence Kwan, Jacqueline Floyd, Brendan, Gavin, Rob Leckenby and Martin Bentley

Diana had the idea of a gas analysis case study for Coso Field, USA. The team’s specific goal was to develop visualization tools for the interpretation of fluid inclusion gas data, to identify fluid types, regions of permeability, and geothermal processes. They had access to analyses from 29 wells, requiring the usual data science workflow: find and load the data, clean the data, make some visualizations and maps, and finally analyse the permeability. GitHub repo here.

geothermal_hack_2021_fluid-incl.png

Utah Forge data pipeline

Andrea Balza, Evan Bianco, and Diego Castañeda

Andrea was driven to dive into the Utah FORGE project. Navigating the OpenEI data portal was a bit hit-and-miss, with ZIP files that had to be downloaded just to see what was in them, and so on (this is a common issue with open data repositories). The team eventually figured out how to programmatically access the files to explore things more easily — right from a Jupyter Notebook. Their code works for any data on the OpenEI site, not just Utah FORGE, so it’s potentially a great research tool. GitHub repo here.
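The general pattern, sketched here with only the Python standard library, is to pull an archive into memory and inspect it without saving anything to disk. The URL in the comment is illustrative, not a real OpenEI endpoint:

```python
import io
import urllib.request
import zipfile

def fetch(url):
    """Download a file into memory (data portals serve a lot of ZIPs)."""
    with urllib.request.urlopen(url) as resp:
        return resp.read()

def zip_members(data):
    """List the file names inside a ZIP archive held in memory as bytes."""
    with zipfile.ZipFile(io.BytesIO(data)) as zf:
        return zf.namelist()

# Hypothetical usage -- the URL is made up for illustration:
# names = zip_members(fetch("https://gdr.openei.org/files/<id>/data.zip"))
```

Once you can list an archive’s contents programmatically, you can filter for just the files you need before extracting anything.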

geothermal_hack_2021_forge.png

Pythonizing a power density estimation tool

Irene Wallis, Jan Niederau, Hannah Wood, Will Middlebrook, Jeff Jex, and Bill Cummings

Like a lot of cool hackathon projects, this one started with a spreadsheet that Bill created to simplify the process of making power density estimates for geothermal fields under some statistical assumptions. Such a clear goal always helps focus the mind, and the team put together some Python notebooks and then a Streamlit app — which you can test-drive here! From this solid foundation, the team has plenty of plans for new directions to take the tool. GitHub repo here.
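The power density method lends itself to a simple Monte Carlo treatment: capacity is productive area times power density, each drawn from a distribution. This sketch is my own illustration, not the team’s spreadsheet logic; the distributions and their parameters are made-up assumptions:

```python
import random

def simulate_capacity(n=10_000, seed=42):
    """Monte Carlo sketch of a power-density resource estimate.

    Capacity (MWe) = productive area (km^2) x power density (MWe/km^2).
    The triangular distributions below are illustrative assumptions only.
    """
    rng = random.Random(seed)
    samples = []
    for _ in range(n):
        area = rng.triangular(2.0, 10.0, 5.0)      # km^2: low, high, mode
        density = rng.triangular(5.0, 25.0, 10.0)  # MWe/km^2: low, high, mode
        samples.append(area * density)
    samples.sort()
    return {   # P90/P50/P10 in the exceedance sense used in resource work
        "P90": samples[int(0.10 * n)],
        "P50": samples[int(0.50 * n)],
        "P10": samples[int(0.90 * n)],
    }
```

Wrapping a function like this in a few Streamlit sliders is essentially how a spreadsheet becomes an interactive web app.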

geothermal_hack_2021_streamlit2.png
geothermal_hack_2021_streamlit1.png

Computing boiling point for depth

Thorsten Hörbrand, Irene Wallis, Jan Niederau and Matt Hall

Irene identified the need for a Python tool to generate boiling-point-for-depth curves, accommodating various water salinities and chemistries. As she showed during her recent TRANSFORM tutorial (which you must watch!), so-called BPD curves are an important part of geothermal well engineering. The team produced some scripts to compute various scenarios, based on corrections in the IAPWS standards and using the PHREEQC aqueous geochemistry modeling software. GitHub repo here.
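For a flavour of what such a curve involves, here is a pure-water sketch that uses an Antoine-equation approximation and a crude density correlation, rather than the IAPWS formulation and PHREEQC chemistry the team actually used:

```python
G = 9.81           # gravitational acceleration, m/s^2
P_ATM = 101_325.0  # surface pressure, Pa

def p_sat(t_c):
    """Saturation pressure of pure water (Pa), Antoine approximation.

    Coefficients for roughly 100-374 degC; a crude stand-in for IAPWS.
    """
    p_mmhg = 10 ** (8.14019 - 1810.94 / (244.485 + t_c))
    return p_mmhg * 133.322  # mmHg -> Pa

def rho_liquid(t_c):
    """Very rough saturated-liquid density of pure water (kg/m^3)."""
    return 1000.0 - 0.08 * t_c - 0.003 * t_c**2

def bpd_curve(t_max=300.0, dt=1.0):
    """Boiling-point-for-depth: list of (depth_m, boiling_t_c) pairs.

    Steps temperature by dt and integrates hydrostatic pressure downward.
    """
    t, p, z = 100.0, P_ATM, 0.0   # water boils at ~100 degC at the surface
    curve = [(z, t)]
    while t < t_max:
        t += dt
        dp = p_sat(t) - p             # extra pressure needed to stay liquid
        z += dp / (rho_liquid(t) * G) # hydrostatic depth increment
        p = p_sat(t)
        curve.append((z, t))
    return curve
```

Adding salinity and dissolved gases shifts these curves, which is exactly why the team reached for PHREEQC and the IAPWS corrections.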

geothermal_hack_2021_bpd-curves.png

A big Thank You to all of the hackers who came along to this virtual event. Not quite the same as a meatspace hackathon, admittedly, but Gather.town + Slack was definitely an improvement over Zoom + Slack. At least we have an environment in which people can arrive and immediately get a sense of what is happening in the event. When you realize that the people at the tables are actually sitting in Canada, the US, the UK, Switzerland, South Africa, and New Zealand, it’s clear that this could become an important new way to collaborate across large distances.

geothermal_hack_2021_chateau.png

Do check out all these awesome and open-source projects — and check out the #geothermal channel in the Software Underground to keep up with what happens next. We’ll be back in the future — perhaps the near future! — with more hackathons and more geothermal technology. Hopefully we’ll see you there! 🌋