Hard things that look easy

After working on a few data science (aka data analytics aka machine learning) problems with geoscientific data, I think we've figured out the 10-step workflow. I'm happy to share it with you now:

  1. Look at all these cool problems, machine learning can solve all of these! I just need to figure out which model to use, parameterize it, and IT'S GONNA BE AWESOME, WE'LL BE RICH. Let's just have a quick look at the data...
  2. Oh, there's no data.
  3. Three months later: we have data! Oh, the data's a bit messy.
  4. Six months later: wow, cleaning the data is gross and/or impossible. I hate my life.
  5. Finally, nice clean data. Now, which model do I choose? How do I set parameters? At least you expected these; they're well-known problems.
  6. Wait, maybe there are physical laws governing this natural system... oh well, the model will learn them.
  7. Hmm, the results are so-so. I guess it's harder to make predictions than I thought it would be.
  8. Six months later: OK, this sort of works. And people think it sounds cool. They just need a quick explanation.
  9. No-one understands what I've done.
  10. Where is everybody?

I'm being facetious of course, but only a bit. Modeling natural systems is really hard. Much harder for the earth than for, say, the human body, which is extremely well-known and readily available for inspection. Even the weather is comparatively easy.

Coupled with the extreme difficulty of the problem, we have a challenging data environment. Proprietary, heterogeneous, poor quality, lost, non-digital... There are lots of ways the data goblins can poop on the playground of machine learning.

If the machine learning lark is so hard, why not just leave it to non-artificial intelligence, i.e. humans? We already learned how to interpret data, right? We know the model takes years to train. Of course, but I don't accept that we can't also benefit from some of the features of intelligently applied big data analytics: objectivity, transparency, repeatability (by me), reproducibility (by others), massive scale, high speed... maybe even error tolerance and improved decisions, but those seem far off right now.

I also believe that AI models, like any software, can encode the wisdom of professionals — before they retire. This seems urgent, as the long-touted Great Crew Change is finally underway.

What will we work on?

There are lots of fascinating and tractable problems for machine learning to attack in geoscience — I hope many of them get attacked at the hackathon in June — and the next 2 to 3 years are going to be very exciting. There will be the usual marketing mêlée to wade through, but it's up to the community of scientists and data analysts to push their way through that with real results based on open data and, ideally, with open code.

To be sure, this is happening already — we've had over 25 entrants publishing their solutions to the SEG machine learning contest, and there will be more like this. It's the only way to build transparent problem-solving systems that we can all participate in and, ultimately, trust.

What machine learning problems are most pressing in geoscience?
I'm collecting ideas for projects to tackle in the hackathon. Please visit this Tricider question and contribute your comments, opinions, or ideas of your own. Help the community work on the problems you care about.

News and updates and a sandwich

Plans for the hackathon in Paris in June are well underway. We now have two major sponsors: Dell EMC and Total E&P will both be supporting the event with generous funding. Bolstered by this, I've set a goal of getting 50 participants in the event. Imagine that!

If you would like to help us reach this goal, please consider printing out some of these posters (right) and putting them up in your place of work or study (>> hi-res PDF <<). It should even be readable in black & white, if that's your only option.

You can find links to everything you need to know about the event at agilescientific.com/paris.

Le grand sandwich délicieux

The hackathon is really just the filling in a delicious Parisian sandwich of geocomputing goodness. The bread at the bottom is the Hacker Bootcamp on 9 June. The filling is the hackathon weekend... and the bread on top is the EAGE workshop on machine learning. Convened by geoscientists at Total and IFP, it should be a great day of knowledge sharing and discussion. I can't wait.

11 days to go!

There are only 11 days left to take part in the SEG Machine Learning contest, in which you are challenged to predict lithologies in two wells, given some wireline logs and lithologies in several other nearby wells. Everything you need to get started, even if you've never tried anything like this before, is right here. See Brendon Hall's TLE article for more deets.

The radio show for geo-nerds

Undersampled Radio is still going strong. We just recorded episode 32 today. Last week's chat with Prof Chris Jackson (Imperial College London) — who's embarking on a GSA lecture tour this year — was a real cracker; check it out:

The other thing you need to know about Chris is that he's started writing his blog again. It's awesome, of course, and you should probably just go and read it now...

Silos are a feature, not a bug

If you’ve had the same problem for a long time, maybe it’s not a problem. Maybe it’s a fact.
— Yitzhak Rabin

"Break down the silos" is such a hackneyed phrase that it's probably not even useful any more. But I still hear it all the time — especially in our work in governments and large enterprises. It's the wrong idea — silos are awesome.

The thing is: people like silos. That's why they are there. Whether they are project teams, organizational units, technical communities, or management layers, silos are comfortable. People can speak freely. Everyone shares the same values. There is trust. There is purpose.

The problem is that much of the time — maybe most of the time, for most people — you want silos not to matter. Don't confuse this with not wanting them to exist. They do exist: get used to it. So now make them not matter. Cope, don't fix.

Permeable seals

In the context of groups of humans who want to work together, what do permeable silos look like? I mean really leaky ones.

The answer is: it depends. Here are the features they will have:

  • They serve their organization. The silo must serve the organization or community it's part of. I think a service-oriented mindset gets the best out of people: get them asking "How can I help?". If it is not serving anyone, the silo needs to die.
  • They are internally effective. This is the whole point of the silo, so this had better be true. Make sure people can do a better job because of the efficiencies of the silo. Resources are provided. Responsibilities are understood. The shared purpose must result in great things... if not, the silo needs to die.
  • They are open. This is the leakiness criterion. If someone needs something from the silo, it must be obvious how to get it, and the cost of getting it must be very low. If someone wants to join the silo, it's obvious how to do this, and they are welcomed. If something about the silo needs to change, there is a clear path to making this known.
  • They are transparent. People need to know what the silo is for. If people look in, they can see how things work. Don't build secret clubs, black boxes, or other dark places. Conversely, if people in the silo want to look outside, they can. Importantly: if the silo's level of transparency doesn't make you uncomfortable, you're not doing enough of it.

The openness is key. Ideally, the mechanism for getting things from the silo is the same one that the silo's own inhabitants use. This is by far the simplest, cheapest way to nail it. Think of it as an interface; if you're a programmer, think of it as an API. Indeed, in many cases, it will involve an actual API. If this does not exist, other people will come up with their own solutions, and if this happens often enough, the silo will cease to be useful to the organization. Competition between silos is unhelpful.

Build more silos!

A government agency can be a silo, as long as it has a rich, free interface for other agencies and the general public to access its services. Geophysics can be a silo, as long as it's easy for a wave-curious engineer to join in, and the silo is promoting excellence and professional development amongst its members. An HR department can be a silo, as long as its practices and procedures are transparent and people can openly ask why the heck they still use Myers–Briggs tests.

Go and build a silo. Then make it not matter most of the time.


Image: Silos by Flickr user Guerretto, licensed CC-BY.

x lines of Python: machine learning

You might have noticed that our web address has changed to agilescientific.com, reflecting our continuing journey as a company. Links and emails to agilegeoscience.com will redirect for the foreseeable future, but if you have bookmarks or other links, you might want to change them. If you find anything that's broken, we'd love it if you could let us know.


Artificial intelligence in 10 lines of Python? Is this really the world we live in? Yes. Yes it is.

After reminding you about the SEG machine learning contest just before Christmas, I thought I could show you how to train a model in a supervised learning problem, then use it to make predictions on unseen data. So we'll just break a simple contest entry down into ten easy steps (note that you could do this on any similar problem; it doesn't have to be this one).

A machine learning primer

Before we start, let's review quickly what a machine learning problem looks like, and introduce a bit of jargon. To begin, we have a dataset (e.g. the 'Old' well in the diagram below). This consists of records, called instances. In this problem, each instance is a depth location. Each instance is a feature vector: a row vector comprising attributes or features, which in our case are wireline log values for GR, ILD, and so on. Each feature vector is a row in a matrix we conventionally call \(X\). Associated with each instance is some target label — the thing we want to predict — which is a continuous quantity in a regression problem, discrete in a classification problem. The vector of labels is usually called \(y\). In the problem below, the labels are integers representing 9 different facies.

You can read much more about the dataset I'm using in Brendon Hall's tutorial (The Leading Edge, October 2016).

The ten steps to glory

Well, maybe not glory, but something. A prediction of facies at two wells, based on measurements made at 10 other wells. You can follow along in the notebook, but all the highlights are included here. We start by loading the data into a 'dataframe', which you can think of as a spreadsheet.
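In the notebook this is done with pandas. A minimal sketch looks something like this, assuming the training data is in a CSV file (the filename below is a placeholder, not necessarily what the contest's file is actually called):

  import pandas as pd

  # Read the well data into a DataFrame. 'training_data.csv' is a placeholder
  # filename; use whatever the contest's training file is actually called.
  df = pd.read_csv('training_data.csv')
  df.head()  # peek at the first few records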

Now we specify the features we want to use, and make the matrix \(X\) and label vector \(y\):

  features = ['GR', 'ILD_log10', 'DeltaPHI', 'PHIND', 'PE']
  X = df[features].values
  y = df.Facies.values

Since this dataset is all we have, we'd like to set aside some data to test our model on. The library we're using, scikit-learn, has functions to do this sort of thing; by default, it'll split \(X\) and \(y\) into train and test datasets, with 25% of the data going into the test part:

  from sklearn.model_selection import train_test_split

  X_train, X_test, y_train, y_test = train_test_split(X, y)

Now we're ready to choose a model, instantiate it (with some parameters if we want), and train the model (i.e. 'fit' the data). I am calling the trained model augur, because I like that word.

  from sklearn.ensemble import ExtraTreesClassifier
  model = ExtraTreesClassifier()
  augur = model.fit(X_train, y_train)
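If we did want to set parameters, we'd pass them when we instantiate the model. For example (these particular values are only illustrative; they aren't tuned for this problem):

  # Illustrative only: these parameter values are not tuned for this dataset.
  model = ExtraTreesClassifier(n_estimators=100, random_state=0)
  augur = model.fit(X_train, y_train)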

Now we're ready to take the part of the dataset we reserved for validation, X_test, and predict its labels. Then we can compare those with the known labels, y_test, to see how well we did:

  y_pred = augur.predict(X_test)

We can get a quick idea of the quality of prediction with sklearn.metrics.accuracy_score(y_test, y_pred), but it's more interesting to look at the classification report, which shows us the precision and recall for each class, along with their harmonic mean, the F1 score:

  from sklearn.metrics import classification_report
  print(classification_report(y_test, y_pred))
(Figure: the classification report for the held-out test data.)

Each row is a facies (facies 1, facies 2, etc.). The support is the number of instances representing that label. The key number here is 0.63 — we can regard this as an expression of the accuracy of our prediction. If that sounds low to you, I encourage you to enter the machine learning contest! If it sounds high, that's because it is — it's much too high. In fact, the instances of our dataset are not independent: they are spatially correlated (in depth). It would be smarter not to remove some random samples for validation, but to reserve entire wells. After all, this is how we typically collect subsurface data: one well at a time.
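As a sketch of what that might look like, assuming the dataframe has a 'Well Name' column identifying each well (as in Brendon Hall's tutorial data), we could hold out one entire well instead of random rows:

  # Hold out one whole well for validation instead of random depth samples.
  # 'Well Name' and the choice of holdout well are assumptions about the dataset.
  holdout = 'SHANKLE'
  train, test = df[df['Well Name'] != holdout], df[df['Well Name'] == holdout]

  X_train, y_train = train[features].values, train.Facies.values
  X_test, y_test = test[features].values, test.Facies.values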

But now we're getting into the weeds of data science. I'll let you venture in there on your own...