90 years of well logs

Today is the 90th anniversary of the first well log. On 5 September 1927, three men from Schlumberger logged the Diefenbach [sic] well 2905 at Dieffenbach-lès-Wœrth in the Pechelbronn heavy oil field in the Alsace region of France.

The site of the Diefenbach 2905 well. © Google, according to terms.


The geophysical services company Société de Prospection Électrique (Procédés Schlumberger), or PROS, had only formed in July 1926 but already had sixteen employees. Headquartered in Paris at 42, rue Saint-Dominique, the company was attempting to turn its resistivity technology to industrial applications, especially mining and petroleum. Having had success with horizontal surface measurements, the company made the Diefenbach well its first attempt to measure resistivity in a wellbore. PROS went on to become Schlumberger.

The resistivity prospecting system had been designed by the Schlumberger brothers, Conrad (1878–1936, a professor at École des Mines) and Maurice (1884–1953, a mining engineer), over the period from about 1912 until 1923. The task of adapting the technology was given to Henri Doll (1902–1991), Conrad's son-in-law since 1923, and the Alsatian well was to be the first field test of the so-called "electrical coring" method. The client was Deutsche Erdöl Aktiengesellschaft, now DEA of Hamburg, Germany.

As far as I can tell, the well — despite usually being called "the Pechelbronn well" — was located at the site of a monument at the intersection of Route de Wœrth with Rue de Preuschdorf in Dieffenbach-lès-Wœrth, about 3 km west of Merkwiller-Pechelbronn. Henri Doll logged the well with Roger Jost and Charles Scheibli. Using rudimentary equipment, they logged about 145 m of the 488-metre hole, starting at 279 m MD, taking a reading every metre and plotting the log by hand. Yesterday I digitized this log; download it in LAS format here.
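If you want to poke at the digitized data, the lasio library will read the LAS file into Python. Here's a minimal sketch, assuming you've saved the file as pechelbronn.las (the filename is mine, not part of the download):

    import lasio

    # Read the LAS file (the filename is hypothetical).
    las = lasio.read('pechelbronn.las')

    # List the curves and their units.
    print(las.curves)

    # Get the data as a pandas DataFrame indexed by depth.
    df = las.df()
    print(df.describe())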



The story of what the Schlumberger brothers and Henri Doll achieved is fascinating; I recommend reading Don Hill's brief history (2012) — it's free to read at Wiley. The period of invention that followed the Pechelbronn success was inspiring.

If you're looking at well logs today, take a second to thank Conrad, Maurice, and Henri for their remarkable idea.

PS If you're interested in petroleum history, the AOGHS page This Week is worth a look.


The French television programme Midi en France recorded this segment about the Pechelbronn field in 2014. The narration is in French, "The fields of maize gorge on sunshine, the pumps on petroleum...", but there are some nice pictures to look at.

References and bibliography

Clapp, Frederick G (1932). Oil and gas possibilities of France. AAPG Bulletin 16 (11), 1092–1143. Contains a good history of exploration and production from the Oligocene sands in Pechelbronn, up to about 1931 (the field produced up to 1970). AAPG Datapages.

Delacour, Jacques (2003). Une technique de prospection minière et pétrolière née en Pays d'Auge. SABIX 34, September 2003. Available online.

École des Mines page on Conrad Schlumberger at annales.org.

Hill, DG (2012). Appendix A: Historical Review (Milestone Developments in Petrophysics). In: Buryakovsky, L, Chilingar, GV, Rieke, HH, and Shin, S (2012). Petrophysics: Fundamentals of the Petrophysics of Oil and Gas Reservoirs, John Wiley & Sons, Inc., Hoboken, NJ, USA. doi: 10.1002/9781118472750.app1. A nice potted history of well logging, including important dates.

Musée Français du Pétrole website, http://www.musee-du-petrole.com/historique/

Pike, B and Duey, R (2002). Logging history rich with innovation. Hart's E&P Magazine, September 2002. Available online. Interesting article, but beware: it contains one or two inaccuracies, and I believe the image of the well log is incorrect.

Attribution is not permission


This morning a friend of mine, Fernando Enrique Ziegler, a pore pressure researcher and practitioner in Houston, let me know about an "interesting" new book from Elsevier: Practical Solutions to Integrated Oil and Gas Reservoir Analysis, by Enwenode Onajite, a geophysicist in Nigeria... And about 350 other people.

What's interesting about the book is that the majority of the content was not written by Onajite, but was copy-and-pasted from discussions on LinkedIn. A novel way to produce a book, certainly, but is it... legal?

Who owns the content?

Before you read on, you might want to take a quick look at the way the book presents the LinkedIn material. Check it out, then come back here. By the way, if LinkedIn wasn't so damn difficult to search, or if the book included a link or some kind of proper citation of the discussion, I'd show you a conversation in LinkedIn too. But everything is completely untraceable, so I'll leave it as an exercise to the reader.

LinkedIn's User Agreement is crystal clear about the ownership of content its users post there:

[...] you own the content and information that you submit or post to the Services and you are only granting LinkedIn and our affiliates the following non-exclusive license: A worldwide, transferable and sublicensable right to use, copy, modify, distribute, publish, and process, information and content that you provide through our Services [...]

This is a good user agreement [Edit: see UPDATE, below]. It means everything you write on LinkedIn is © You — unless you choose to license it to others, e.g. under the terms of Creative Commons (please do!).

Fernando — whose material was used in the book — tells me that none of the several other authors he has asked gave, or were even asked for, permission to re-use their work. So I think we can say that this book represents a comprehensive infringement of copyright of the respective authors of the discussions on LinkedIn.

Roles and responsibilities

Given the scale of this infringement, I think there's a clear lack of due diligence here on the part of the publisher and the editors. Having said that, while publishers are quick to establish their copyright on the material they publish, I would say that this lack of diligence is fairly normal. Publishers tend to leave this sort of thing to the author, hence the standard "Every effort has been made..." disclaimer you often find in non-fiction books... though not, apparently, in this book (perhaps because zero effort has been made!).

But this defence doesn't wash: Elsevier is the copyright holder here (Onajite signed it over to them, as most authors do), so I think the buck stops with them. Indeed, you can be sure that the company will make most of the money from the sale of this book — the author will be lucky to get 5% of gross sales, so the buck is both figurative and literal.

Incidentally, in Agile's publishing house, Agile Libre, authors retain copyright, but we take on the responsibility (and cost!) of seeking permissions for re-use. We do this because I consider it to be our reputation at stake, as much as the author's.

OK, so we should blame Elsevier for this book. Could Elsevier argue that it's really no different from quoting from a published research paper, say? Few researchers ask publishers or authors if they can do this — especially in the classroom, "for educational purposes", as if it is somehow exempt from copyright rules (it isn't). It's just part of the culture — an extension of the uneducated (uninterested?) attitude towards copyright that prevails in academia and industry. Until someone infringes your copyright, at least.

Seek permission not forgiveness

I notice that in the Acknowledgments section of the book, Onajite does what many people do — he gives acknowledgement ("for their contributions", he doesn't say they were unwitting) to some of the authors of the content. Asking for forgiveness, as it were (but not really). He lists the rest at the back. It's normal to see this sort of casual hat tip in presentations at conferences — someone shows an unlicensed image they got from Google Images, slaps "Courtesy of A Scientist" or a URL at the bottom, and calls it a day. It isn't good enough: attribution is not permission. The word "courtesy" implies that you had some.

Indeed, most of the figures in Onajite's book seem to have been procured from elsewhere, with "Courtesy ExxonMobil" or whatever passing as a pseudolicense. If I were a gambler, I would bet that the large majority were used without permission.

OK, you're thinking, where's this going? Is it just a rant? Here's the bottom line:

The only courteous, professional and, yes, legal way to re-use copyrighted material — which is "anything someone created", more or less — is to seek written permission. It's that simple.

A bit of a hassle? Indeed it is. Time-consuming? Yep. The good news is that you'll usually get a "Sure! Thanks for asking". I can count on one hand the number of times I've been refused.

The only exceptions to the rule are when:

  • The copyrighted material already carries a license for re-use (as Agile does — read the footer on this page).
  • The copyright owner explicitly allows re-use in their terms and conditions (for example, allowing the re-publication of single figures, as some journals do).
  • The law allows for some kind of fair use, e.g. for the purposes of criticism.

In these cases, you do not need to ask, just be sure to attribute everything diligently.

A new low in scientific publishing?

What now? I believe Elsevier should retract this potentially useful book and begin the long process of asking the 350 authors for permission to re-use the content. But I'm not holding my breath.

By a very rough count of the preview of this $130 volume in Google Books, it looks like the ratio of LinkedIn chat to original text is about 2:1. Whatever the copyright situation, the book is definitely an uninspiring turn for scientific publishing. I hope we don't see more like it, but let's face it: if a massive publishing conglomerate can make $87 from comments on LinkedIn, it's gonna happen.

What do you think about all this? Does it matter? Should Elsevier do something about it? Let us know in the comments.


UPDATE Friday 1 September

Since this is a rather delicate issue, and events are still unfolding, I thought I'd post some updates from Twitter and the comments on this post:

  • Elsevier is aware of these questions and is looking into it.
  • Re-read the user agreement quote carefully. As Ronald points out below, I was too hasty — it's really not a good user agreement: LinkedIn have a lot of scope to re-use what you post there.
  • It turns out that some people were asked for permission, though it seems it was unclear what they were agreeing to. So the author knew that seeking permission was a good idea.
  • It also turns out that at least one SPE paper was reproduced in the book, in a rather inconspicuous way. I don't know if SPE granted rights for this, but the author at least was not identified.
  • Some people are throwing the word 'plagiarism' around, which is rather a serious word. I'm personally willing to ascribe it to 'normal industry practices' and sloppy editing and reviewing (the book was apparently reviewed by no fewer than 5 people!). And, at least in the case of the LinkedIn content, proper attribution was made. For me, this is more about honesty, quality, and value in scientific publishing than about misconduct per se.
  • It's worth reading the comments on this post. People are raising good points.

Part of the thumbnail image was created by Jannoon028 — Freepik.com — and licensed CC-BY.

x lines of Python: read and write CSV

A couple of weeks ago, in Murphy's Law for Excel, I wrote about the dominance of spreadsheets in applied analysis, and how they may be getting out of hand. Then in Organizing spreadsheets I wrote about how — if you are going to store data in spreadsheets — to organize your data so that you do the least amount of damage. The general goal being to make your data machine-readable. Or, to put it another way, to allow you to save your data as comma-separated values or CSV files.

CSV files are the de facto standard way to store data in text files. They are human-readable, easy to parse with multiple tools, and they compress easily. So you need to know how to read and write them in your analysis tool of choice. In our case, this is the Python language. So today I present a few different ways to get at data stored in CSV files.

How many ways can I read thee?

In the accompanying Jupyter Notebook, we read a CSV file into Python in six different ways:

  1. Using the pandas data analysis library. It's the easiest way to read CSV and XLS data into your Python environment...
  2. ...and can happily consume a file on the web too. Another nice thing about pandas. It also writes CSV files very easily.
  3. Using the built-in csv package. There are a couple of standard ways to do this — csv.reader...
  4. ...and csv.DictReader. This library is handy for when you don't have (or don't want) pandas.
  5. Using numpy, the numeric library for Python. If you just have a CSV full of numbers and you want an array in the end, you can skip pandas.
  6. OK, it's not really a CSV file, but for the finale we read a spreadsheet directly from Google Sheets.

I usually count my lines diligently in these posts, but not this time. With pandas you're looking at a one-liner to read your data:

df = pd.read_csv("myfile.csv")

and a one-liner to write it out again. With csv.DictReader you're looking at 3 lines to get a list of dicts (but watch out: your numbers will be strings). Reading a Google Sheet is a little more involved, not least because you'll need to set up an app and get an API key to handle authentication.
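For the record, here's roughly what the csv and numpy versions look like. This is a minimal sketch, assuming a hypothetical file called myfile.csv with a single header row:

    import csv
    import numpy as np

    # csv.DictReader: a list of dicts, one per row; every value is a string.
    with open('myfile.csv') as f:
        rows = list(csv.DictReader(f))

    # numpy: skip the header row and get an array of floats.
    data = np.genfromtxt('myfile.csv', delimiter=',', skip_header=1)

    # pandas: the one-liner to write the DataFrame from above back out.
    df.to_csv('myfile_out.csv', index=False)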

That's all there is to CSV files. Go forth and wield data like a pro! 

Next time in the xlines of Python series we'll look at reading seismic station data from the web, and doing a bit of time-series analysis on it. No more stuff about spreadsheets and CSV files, I promise :)


The thumbnail image is based on the possibly apocryphal banksy image of an armed panda, and one of texturepalace.com's CC-BY textures.

Organizing spreadsheets

A couple of weeks ago I alluded to ill-formed spreadsheets in my post Murphy's Law for Excel. Spreadsheets are clearly indispensable, and are definitely great for storing data and checking CSV files. But some spreadsheets need to die a horrible death. I'm talking about spreadsheets that look like this (click here for the entire sheet):


This spreadsheet has several problems. Among them:

  • The position of a piece of data changes how I interpret it. E.g. a blank row means 'new sheet' or 'new well'.
  • The cells contain a mixture of information (e.g. 'Site' and the actual data) and appear in varying units.
  • Some information is encoded by styles (e.g. using red to denote a mineral species). If you store your sheet as a CSV (which you should), this information will be lost.
  • Columns are hidden, there are footnotes, it's just a bit gross.

Using this spreadsheet to make plots, or reading it with software, will be a horrible experience. I will probably swear at my computer, suffer a repetitive strain injury, and go home early with a headache, cursing the muppet that made the spreadsheet in the first place. (Admittedly, I am the muppet that made this spreadsheet in this case, but I promise I did not invent these pathologies. I have seen them all.)

Let's make the world a better place

Consider making separate sheets for the following:

  • Raw data. This is important. See below.
  • Computed columns. There may be good reasons to keep these with the data.
  • Charts.
  • 'Tabulated' data, like my bad spreadsheet above, with tables meant for summarization or printing.
  • Some metadata, either in the file properties or a separate sheet. Explain the purpose of the dataset, any major sources, important assumptions, and your contact details.
  • A rich description of each column, with its caveats and assumptions.

The all-important data sheet has its own special requirements. Here's my guide for a pain-free experience:

  • No computed fields or plots in the data sheet.
  • No hidden columns.
  • No semantic meaning in formatting (e.g. highlighting cells or bolding values).
  • Headers in the first row, only data in all the other rows.
  • The column headers should contain only a unique name and [units], e.g. Depth [m], Porosity [v/v].
  • Only one type of data per column: text OR numbers, discrete categories OR continuous scalars.
  • No units in numeric data cells, only quantities. Record depth as 500, not 500 m.
  • Avoid keys or abbreviations: use Sandstone, Limestone, Shale, not Ss, Ls, Sh.
  • Zero means zero, empty cell means no data.
  • Only one unit per column. (You only use SI units right?)
  • Attribution! Include a citation or citations for every record.
  • If you have two distinct types or sources of data, e.g. grain size from sieve analysis and grain size from photomicrographs, then use two different columns.
  • Personally, I like the data sheet to be the first sheet in the file, but maybe that's just me.
  • Check that it turns into a valid CSV so you can use this awesome format (see the quick check below).
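Once you've exported the data sheet to CSV, a few lines of pandas make a quick sanity check of some of these rules. A minimal sketch, assuming a hypothetical file called data.csv:

    import pandas as pd

    # Read the exported CSV (the filename is hypothetical).
    df = pd.read_csv('data.csv')

    # One type of data per column: 'object' dtypes may be mixing text and numbers.
    print(df.dtypes)

    # Empty cell means no data: count the missing values in each column.
    print(df.isnull().sum())

    # No units in numeric cells: a column like 'Depth [m]' should cast cleanly.
    # df['Depth [m]'].astype(float)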

      After all that, here's what we have (click here for the entire sheet):

    The same data as the first image, but improved. The long strings in columns 3 and 4 are troublesome, but we can tolerate them.

    Maybe the 'clean' analysis-friendly sheet looks boring to you, but to me it looks awesome. Above all, it's easy to use for SCIENCE! And I won't have to go home with a headache.


    The data in this post came from this Cretaceous shale dataset [XLS file] from the government of Manitoba. Their spreadsheet is pretty good and only breaks a couple of my golden rules. Here's my version, which contains both the broken and the fixed spreadsheets. Let me know if you spot something else that should be fixed!

    x lines of Python: read and write a shapefile

    Shapefiles are a sort-of-open format for geospatial vector data. They can encode points, lines, and polygons, plus attributes of those objects, optionally bundled into groups. I say 'sort-of-open' because the format is well-known and widely used, but it is maintained and policed, so to speak, by ESRI, the company behind ArcGIS. It's a slightly weird (annoying) format because 'a shapefile' is actually a collection of files, only one of which is the eponymous SHP file. 

    Today we're going to read a SHP file, change its Coordinate Reference System (CRS), add a new attribute, and save a new file in two different formats. All in x lines of Python, where x is a small number. To do all this, we need to add a new toolbox to our xlines virtual environment: geopandas, which is a geospatial flavour of the popular data management tool pandas.

    Here's the full rundown of the workflow, where each item is a line of Python:

    1. Open the shapefile with fiona (i.e. not using geopandas yet).
    2. Inspect its contents.
    3. Open the shapefile again, this time with geopandas.
    4. Inspect the resulting GeoDataFrame in various ways.
    5. Check the CRS of the data.
    6. Change the CRS of the GeoDataFrame.
    7. Compute a new attribute.
    8. Write the new shapefile.
    9. Write the GeoDataFrame as a GeoJSON file too.

    By the way, if you have not come across EPSG codes yet for CRS descriptions, they are the only way to go. This dataset is initially in EPSG 4267 (NAD27 geographic coordinates) but we change it to EPSG 26920 (NAD83 UTM20N projection).

    Several bits of our workflow are optional. The core of the code, items 3, 6, 7, and 8, is just a few lines of Python:

        import geopandas as gpd
        gdf = gpd.read_file('data_in.shp')
        gdf = gdf.to_crs({'init': 'epsg:26920'})
        gdf['seafl_twt'] = 2 * 1000 * gdf.Water_Dept / 1485
        gdf.to_file('data_out.shp')

    That's it! 
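    In case you're wondering about the optional steps, here's roughly what items 1, 2, and 9 look like. A minimal sketch, reusing the gdf and the hypothetical filenames from the snippet above:

        import fiona

        # Items 1 and 2: open the shapefile with fiona and inspect it.
        with fiona.open('data_in.shp') as c:
            print(c.schema)  # field names and types
            print(c.crs)     # coordinate reference system
            print(len(c))    # number of records

        # Item 9: write the GeoDataFrame as GeoJSON instead of SHP.
        gdf.to_file('data_out.geojson', driver='GeoJSON')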

    As in all these posts, you can follow along with the code in the Jupyter Notebook.

    Murphy's Law for Excel

    Where would scientists and engineers be without Excel? Far, far behind where they are now, I reckon. Whether it's a quick calculation, or making charts for a thesis, or building elaborate numerical models, Microsoft Excel is there for you. And it has been there for 32 years, since Douglas Klunder — now a lawyer at ACLU — gave it to us (well, some of us: the first version was Mac only!).

    We can speculate about reasons for its popularity:

    • It's relatively easy to use, and most people started long enough ago that they don't have to think too hard about it.
    • You have access to it, and you know that your collaborators (boss, colleagues, future self) have access to it.
    • It's flexible enough that it can do almost anything.

    Figure 1 from 'Predicting bed thickness with cepstral decomposition'.

    For instance, all the computation and graphics for my two 2006 articles on signal processing were done in Excel (plus the FFT add-on). I've seen reservoir simulators, complete with elaborate user interfaces, in Excel. An infinity of business-critical documents are stored in Excel (I just filled out a vendor registration form for a gigantic multinational in an Excel spreadsheet). John Nelson at ESRI made a heatmap in Excel. You can even play Pac Man.

    Maybe it's gone too far.


    So what's wrong with Excel?

    Nothing is wrong with it, but it's not the best tool for every number-crunching task. Why?

    • Excel files are just that — files. Sometimes you want to do analysis across datasets, and a pool of data (a database) becomes more useful. And sometimes you wish nine different people didn't have nine different versions of your spreadsheet, each emailing their version to nine other people...
    • The charts are rather clunky and static. They don't do well with large datasets, or with data you'd like to filter or slice dynamically.
    • In large datasets, scrolling around a spreadsheet gets old pretty quickly.
    • The tool is so flexible that people get carried away with pretty tables, annotating their sheets in ways that make the printed page look nice, but analysis impossible.

    What are the alternatives?

    Excel is a wonder-tool, but it's not the only tool. There are alternatives, and you should at least know about them.

    For everyday spreadsheeting needs, I now use Google Sheets. Collaboration is built in. Being able to view and edit a sheet at the same time as someone else is a must-have (probably Office 365 does this now too, so if you're stuck with Excel I urge you to check). Version control — another thing I'm not sure I can live without — is built in. For real nerds, there's even a complete API. I also really like the native 'webbiness' of Google Docs, for example being able to make web API calls natively, such as getting the current CAD–USD exchange rate with GoogleFinance("CURRENCY:CADUSD").

    If it's graphical analysis you want, try Tableau or Spotfire. I'm especially looking at you, reservoir engineers — you are seriously missing out if you're stuck in Excel, especially if you have a lot of columns of different types (time series, categories and continuous variables for example). The good news is that the fastest way to get data into Spotfire is... Excel. So it's easy to get started.

    If you're gathering information from people, like registering the financial details of vendors for instance, then a web form is your best bet. You can set one up in Google Forms in minutes, and there are lots of similar services. If you want to use your own servers, no problem: any dev worth their wages can throw one together in a few hours.

    If you're doing geoscience in Excel, like my 2006 self — filtering logs, or generating synthetics, or computing spectrums — your mind will be blown by spending a few hours learning a programming language. Your first day in Python (or Julia or Octave or R) will change your quantitative life forever.
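    To give you a taste, here's the kind of thing that takes a few lines of Python but a lot of clicking in Excel. A minimal sketch of an amplitude spectrum, assuming a regularly sampled synthetic signal:

        import numpy as np

        # A synthetic signal: a 25 Hz sine wave sampled every 2 ms for 1 s.
        dt = 0.002
        t = np.arange(0, 1, dt)
        signal = np.sin(2 * np.pi * 25 * t)

        # Amplitude spectrum via the FFT; no add-in required.
        spectrum = np.abs(np.fft.rfft(signal))
        freq = np.fft.rfftfreq(signal.size, d=dt)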

    Excel is great at some things, but for most things, there's a better way. Take some time to explore them the next time you have some slack in your schedule.

    References

    Hall, M (2006). Resolution and uncertainty in spectral decomposition. First Break 24, December 2006, p 43–47.

    Hall, M (2006). Predicting stratigraphy with cepstral decomposition. The Leading Edge 25 (2, Special Issue on Spectral Decomposition). doi:10.1190/1.2172313


    UPDATE

    As a follow-up example, I couldn't resist sharing this recent story about an artist who draws anime characters in Excel.

    Newsflash: the Geophysics Hackathon is back!

    Mark your calendar: 22–24 September (right before SEG), at a downtown Houston location to be confirmed.

    We're filling the room with 50 geoscientists of all stripes. Interpreters, programmers, students, professionals... everyone is welcome. The plan: to imagine, design, and prototype some new tools in geophysics — all around the theme of machine learning. It's going to be awesome. 

    The schedule: we'll get started at 6 pm on Friday 22 September, and go till 10 pm. Then we pick it up again on Saturday morning, and go till 6 pm, and the same again on Sunday. Teams will present a demo to everyone on Sunday after 3 pm. There will be a few prizes, a few drinks, lots of food, and a lot of new geophysical tools and widgets. 

    If you want to know more about what a hackathon is, read my summary from the last one: Le grand hack! Or check out the project round-up posts, part 1 and part 2.

    If you're not sure you belong, I promise that you do. One of the prize-winning teams in Paris had no coding experience! And every team needs help with brainstorming, design, testing, and presentation. Absolutely anyone can contribute, and absolutely everyone will learn something.

    If you have some like-minded friends, bring them along! We need teams of 5 people, so if there are already 5 of you, you can start coding as soon as you walk in the door!

    If you can't be there yourself, please share this post with someone you know.

    When you're ready, click here to buy a ticket.


    Thank you as always to our sponsors so far: Dell EMC and Amazon AWS. If you'd like to sponsor the Houston event, please check this page out, or just get in touch.

    Another fossil book

    I'm thrilled to introduce the latest book in the 52 Things series!

    52 More Things You Should Know About Palaeontology is out. You can buy it direct from us, on Amazon.com and Amazon.co.uk, and it will soon be available all over the world via Amazon's other stores.

    In common with all the books from Agile Libre, it is a scholarly text with some weird features. For example:

    • It's fun and easy to read. Each of the 52 essays is only about 700 words long.
    • It costs $19, not $49 (I am not making that $49 up. Welcome to academic publishing!)
    • It's openly licensed, so you can re-use any of the content with attribution but without permission.
    • $2 from every sale goes to the Society of Vertebrate Paleontology to support their work.

    A book for everyone

    Like the first 52 Things on fossils, it's not just for palaeontologists. No matter who you are, I hereby guarantee that you'll find something useful and interesting in there, or your money back. I mean, just look at some of these chapters:

    • A closer look at fossil sex, by Benni Bomfleur & Hans Kerp — in flagrante!
    • A snake with four legs, by David Martill — chronic limb loss!
    • Birds of a fibula, by Jon Tennant — dino bones!
    • Fossils for sale, by Tony Doré, OBE, of Statoil — selling shells!
    • Gods and monsters, by Andrew Taylor — miracles!
    • How kangaroos got their bounce, by Benjamin P Kear — just so!
    • Impossible frogs in the Deccan Traps, by Michael Oates — igneous fossils!
    • In search of the Balearian mouse goat, by Alun Williams — mouse goats!
    • Interview with a Triceratops, by John Scanella — dinosaurs forever!
    • Micro-dung and its uses, by Wyn Hughes — tiny poo!
    • Traces in the terrarium, by Daniel Hembree — experimental ichnology!
    • Vertebrate palaeontology: more than fossil bones, by John Hutchinson — see dino run!

    A huge thanks to the 50(!) authors of this volume. Together, I estimate they have over 1000 years of experience to share. Imagine that for a moment. All that learning, centuries in the field, decades in the library, or squinting down microscopes... just to write an essay for you! 

    Massive thanks as well to Alex Cullum and Allard Martinius, both of Statoil. It takes a good deal of tenacity to rally 50 people to do anything, let alone write a book together... and they've done it twice. And they've nailed it again — check out what Prof David Polly (Indiana), president of the Society of Vertebrate Paleontology, had to say about the book:

    [It] looks fantastic. There is a lot of useful and high-level information in it, plus it is entertaining to read. I’m also pleased to see several SVP members in the author list. It deserves to be a great success. (The other books in the series are equally wonderful... having worked with eigenvectors daily for decades, I nevertheless learned something from Ruelicke’s chapter in the Geology volume.)

    I hope you enjoy the book too!


    Have you read 52 Things... Rock Physics? If you enjoyed it, or even if you didn't, we'd love a short review on Amazon.com :) Help spread some geophysics goodness.

    Fear and loathing in oil & gas

    Sometimes you have to swallow your fear. This is one of those times.

    The proliferation of 3D seismic in the 1980s was a major step forward for the petroleum industry. However, it took more than a decade for the 3D seismic method to become popular. During that decade, seismic equipment continued to evolve, particularly with the advent of the telemetry recording systems that were needed for 3D surveys offshore.

    Things were never the same again. New businesses sprouted up to support it, and established service companies and tech companies exploded in size in order to keep up with the demand and all the new work.

    Not so coincidentally, another major shift happened in the late 1980s and early 1990s: the industry-wide move to Sun workstations to cope with crunching and rendering the overwhelming influx of all these digits. UNIX workstations with hilariously large cathode-ray tube monitors became commonplace. This industry helped make Sun and many other IT companies very wealthy, and once again everything was good. At least until Sun's picnic was trampled on by Linux workstations in the early 2000s, but that's another story...

    I think the advent of 3D seismic is one of many examples of the upstream oil and gas industry thriving on technological change. 3D seismic changed everything, facilitating progress in the full sense of the word and we never looked back. As an early career geoscientist, I don't know what the world was like before 3D seismic, but I have interpreted 2D data and I know it's an awful experience — even on a computer.

    Debilitating skepticism?

    Today, in 2017, we find ourselves in the middle of the next major transformation. Like 3D seismic before it, machine learning will alter yesterday's landscape beyond all recognition. We've been through all of this before, but this time, for some reason, it feels different. Many people are cautious, unconvinced that this next thing will live up to the hype. Other people are vibrating with excitement, viewing the whole thing through rose-coloured glasses. Still others truly believe that it will fail — assertively rejecting hopes and over-excited claims that yes, artificial intelligence will catapult us into a better world, a world beyond our wildest dreams.

    A little skepticism is healthy, but I meet a lot of people who are so skeptical about this next period of change that they are ignoring it. It feels to me like an unfair level of dismissal, a too-rigid stance. And it has left me rather perplexed: Why is there so much resistance and denial this time around? Why the apprehension?

    I'll wager the reason it's different this time is that this change is happening to us, in spite of us, whether we like it or not. We're not in the driving seat. Most of us aren't even in the passenger seat. Unlike seismic technology and UNIX and Linux workstations, our sector has had little to do with this revolution. We haven't been pushing for it; instead, it is dragging us along with it. Worse, it's happening fast; even the people who are trying to keep up with it can barely hold on.

    We need you

    This is the opportunity of a lifetime. It's happening. High time to crank up the excitement, get involved, be a part of it. I for one want you to be part of it. Come along with us. We need you, whether you like it or not. 


    This post was provoked by a conversation on LinkedIn.

    Subsurface Hackathon project round-up, part 2

    Following on from Part 1 yesterday, here are the other seven team projects from the hackathon:



    Interactive visualization of Water Table heights over many years.

    Water, water everywhere

    Water Underground: Martin Bentley (NMMU), Joseph Barraud (Rolls Royce), Rabah Cheknoun (UPPA)

    The team built readers for the groundwater data available from dinoloket.nl, both the groundwater levels and the hydrochemistry. They clustered the data by aggregating by month and then looking for similarities in levels across the boreholes, and they built an open Jupyter notebook.

    Seismic from noise

    OBSNoise: Fernando Villanueva-Robles (IPGP), Yann Huet (Setec-Lerm), Ngoc Huyen Luu (Ecole Polytechnique), Dorian Bagur (Telecom ParisTech), Jonathan Grandjean (Independent)

    The OBSNoise project investigated the application of machine learning to coherently stack ambient noise records collected from ocean bottom seismic (OBS) arrays in order to extract reservoir information. The team's results from synthetic data showed promise. If fully developed, this technology could be a virtually real-time monitoring system of dynamic reservoir properties.



    The Killers. Killing It. 

    Global geochemical data analytics

    The Killers: Alexandre Sache, Violaine Delahaye, Karl Sache (all from Institute Polytechnique UniLaSalle), Côme Arvis, Guillaume Ligner (Ecole Polytechnique)

    Two geoscience undergrads and one automotive design student (I know, right?) from UniLaSalle hooked up with two data science students from Ecole Polytechnique to interrogate the massive GeoRoc database using some clever data analytics tricks, and made some novel many-dimensional geochemical classifications.



    Team LogFix.

    Fixing broken well data

    LogFix: Guillaume Coffin (Telecom Evolution), Florian Napierala (EISTI), Camille Gimenez (Université Paris-Saclay), Tristan Siméon (Université de Montpellier), Robert Leckenby (Independent)

    A truly pristine, calibrated, and corrected petrophysical dataset is so rare it has a sort of mythical status. Team LogFix used machine learning to identify bad-data zones, then repair, QC, and fill in the missing sections. They got an impressively long way into the problem, using a dataset from the Athabasca of Canada.


    Between the hand-drawn lines

    Automagical: Louis Poirier (Independent), Maggie Baber (Independent), Georg Semmler (GiGa infosystems), Björn Wieczoreck (GiGa infosystems), Jonas Kopcsek (GiGa infosystems)


    You don't need to believe in magic. Team Automagical used machine learning to create 3D geological models from 2D cross-sections. They trained a predictive model using a collection of standardized hand-drawn cross-sections from human geoscientists. The model learns how to propagate rocks throughout a 3D scene. Their goal is to be able to generate cross-sections along any direction through the model. The AI learned how to do geologically realistic interpolation on simple structures. What kind of geologic complexity is possible with more input from more cross-sections?


    The document on the left contains a log display with a lithology column. It's a 'hit'. The one on the right has no lithologies and is a 'miss'.

    There's rocks in them hills! Hills of paper, that is

    Logs on the Rocks: Daniel Stanton (Leeds University), Jack Woolam (Leeds University), Adam Goddard (Leeds University), Henri Blondelle (AgileDD)

    If the oil and gas industry is to get more efficient, we'd better get really good at finding lithology and fluid information in the mountains of paper we've collectively built. Team Logs on the Rocks used CNNs to identify graphical depictions of rock types in a sea of unstructured PDFs and TIFFs. They introduced themselves as a team of non-coders, but these guys were doing cloud computing on AWS and using NVIDIA's GPUs before the end of the weekend.


    Robot vision for seismic interpretation

    It's not our FAULT! Claire Birnie (Leeds University), Carlos Alberto da Costa Filho (Edinburgh University), Matteo Ravasi (Statoil), Filippo Broggini (ETHZ), Gijs Straathof (SGS)

    Geologic feature recognition using machine learning. The goal was to assist seismic interpreters in detecting geologic features – faults, folds, traps, etc. – in seismic data. They used Haar cascade classifiers, which are routinely used for identifying faces or kittens or beer bottles in photographs and video streams, specially trained to work on seismic data. They used the awesome OpenCV library to build this technology. At the time of writing, their website appears to be maxed out for the month, so if you're dying to see it, leave them a comment on LinkedIn asking them to increase their capacity. And in the meantime, you can check out their project's repo on GitHub.
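    For the curious, applying a trained Haar cascade with OpenCV looks something like this. This is a minimal sketch, not the team's actual code; the cascade file and image names are hypothetical:

        import cv2

        # Load a previously trained cascade (the XML filename is hypothetical).
        cascade = cv2.CascadeClassifier('fault_cascade.xml')

        # Read a seismic section as a greyscale image.
        img = cv2.imread('seismic_section.png', cv2.IMREAD_GRAYSCALE)

        # Detect candidate features; returns bounding boxes as (x, y, w, h).
        boxes = cascade.detectMultiScale(img, scaleFactor=1.1, minNeighbors=3)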

    Kudos for the open source repo, team!


    It was thrilling to see such a large range of data and applications. Digital thin-sections, ground water maps, seismic data, well logs, cross-sections, information in unstructured documents, and so on. Thanks to each and every individual that showed up with their expertise and enthusiasm. We're all better off because of it.

    A quick reminder that our sponsors are awesome! Please high-five them next time you meet them...