New virtual training for digital geoscience

Looking to skill up before 2022 hits us with… whatever 2022 is planning? We have quite a few training classes coming up — don’t miss out! Our classes use 100% geoscience data and examples, and are taught exclusively by earth scientists.

We’re also always happy to teach special classes in-house for you and your colleagues. Just get in touch.

Special classes for CSEG in Calgary

Public classes with timing for Americas

  • Geocomputing: week of 22 November

  • Machine Learning: week of 6 December

Public classes with timing for Europe, Africa and Middle East

  • Geocomputing: week of 27 September

  • Machine Learning: week of 8 November

So far we’ve taught 748 people in the Geocomputing class and 445 in the Machine Learning class — this wave of new digital scientists is already doing fascinating work and publishing new research. I’m very excited to see what unfolds over the next year or two!

Find out more about Agile’s public classes by clicking this big button:

Are virtual conferences... awful?

Yeah, mostly. But that doesn’t mean that we just need to get back to ‘normal’ conferences — those are broken too, remember?

Chris Jackson, now at Manchester, started a good thread the other day:

This led, in a roundabout way, to some pros and cons — some of which are just my own opinions:

Good things about LIVE conferences

  • You get to spend a week away from work.

  • When you’re there, you’re fully focused.

  • You’re somewhere cool or exotic, or in Houston.

  • You get to see old friends again.

  • (Some) early career people get to build their networks. You know which ones.

  • There is technical content.

BAD things about LIVE conferences

  • You’re away from your home for a week.

  • You have to travel to a remote location.

  • You’re trapped in a conference centre.

  • The networking events are lame.

  • Well, maybe ECRs can make connections… sorry, who’s your supervisor again?

  • There’s so much content, and some of it is boring.

Good things about VIRTUAL conferences

  • Take part — and meet people — from anywhere!

  • The cost is generally low and more accessible.

  • You’re not away from work or home.

  • They are much easier to organize.

  • Live-streaming or posting to YouTube is easy-peasy.

  • No-one needs to give millions of research dollars to airline and hotel companies.

Bad things about VIRTUAL conferences

  • You don’t actually get to meet anyone.

  • Tech socs don’t make money from free webinars.

  • So many distractions!

  • The technology is a hassle to deal with.

  • If you’re in the wrong timezone, too bad for you.

  • The content is the same as live conferences, and some of it is even worse as a digital experience. And we’re all exhausted from all-day Zoom. And…

My assertion is that most virtual conferences are poor because all most organizers have really done is transpose a poor format, which was at least half-optimized for live events, to a pseudodigital medium. And — surprise! — the experience sucks.

So what now?

What now is that it’s beyond urgent to fix damn conferences. A huge part of the problem — and the fundamental reason why most virtual conferences are so bad — is that most of the technical societies completely failed to start experimenting with new, more accessible, more open formats a decade ago. This, in spite of the fact that, to a substantial extent, the societies are staffed by professional event organizers! These professionals weren’t paying attention to digital technology, or openness and reproducibility in science, or accessibility to disadvantaged and underrepresented segments of the community. I don’t know what they were paying attention to (okay, I do know), but it wasn’t primarily the needs of the scientific community.

Okay okay, sheesh, actually what now?

Sorry. Anyway, the thing to do is to focus on the left-hand columns in those lists up there, and try to eliminate the things on the right. So here are some things to start experimenting with. When? Ideally 2012 (the year, not the time). But tomorrow will do just fine. In no particular order:

  • Focus on the outcomes — conferences are supposed to serve their community of practice. So ask the community — what do you need? What big unsolved problems can we solve to move our science forward? What social or community problems are stopping us from doing our best work? Then design events to move the needle on that.

  • Distributed events — Local chapters hire awesome, interesting, cool spaces for local face-to-face events. People who can get to these locations are encouraged to show up at them — because there are interesting humans there, the coffee is good, and the experience is awesome.

  • Virtually connected — The global event is digitally connected, so that when we want to do global things with lots of people, we can. This also means being timezone agnostic by recording or repeating important bits of the schedule.

  • Small is good — You’re experimenting, don’t go all-in on your first event. Small is less stress, lower risk, more sustainable, and probably a better experience for participants. Want more reach? There are other ways.

  • Dedicated to open, accessible participation — We need to seize the idea that events should accommodate anyone who wants to participate, wherever they are and whatever their means. Someone asking, “How do we make sure the right people are there?” is a huge warning sign.

  • Meaningful networking — Gathering people in a Hilton ballroom with cheap beer, frozen canapés, and a barbershop quartet is not networking, it’s a bad wedding party. Professionals want to forge lasting connections by collaborating with each other on deep or valuable problems. I don’t think non-technical event organizers realize that we actually love our work and technical collaboration is fun. Create the conditions for that kind of work, and the socializing will happen.

  • Diversity as a superpower — Focus on increasing every dimension of diversity at your events, and good things will follow. For example: stop talking about hackathons as ‘great for students’ — no wonder ECRs need networking opportunities if you create events that seal them off from everyone! How do you do this? Increase the diversity of your organizing task force.

  • Stop doing the following things — endless talks (settle down, some talks are fine), digital posters, panels of any kind, ‘discussion’ that involves one person talking at a time, and all the other broken models of collaboration. Not sure what to replace them with? Read about open space technology, world café, unconferences, unsessions, hackathons, datathons, lightning talks, birds of a feather, design charrettes, idea jams. General rule: if most of the people in an event can be described as ‘audience’ and not ‘participants’, you’re doing it wrong. Conversation, not discussion.

  • Stop trying to control the whole experience — most conference organizers seem to think they have to organize every aspect of a conference. In fact, the task is to create the conditions for the community to organize itself — bring its own content, make its own priorities, solve its own problems.

I know it probably looks like I’m proposing to burn everything down, but I’m really not proposing that we shred everything and only organize wacky events from now on. Some traditional formats may, in some measure, be fit for purpose. My point is that we need to experiment with new things, as soon as possible. Experiment, pay attention, adjust, repeat. (And it takes at least three iterations to learn about something.)

If you’re interested in doing more with conferences and scientific events in general, I’ve compiled a lot of notes over the years since Agile has been experimenting with formats. Here they are — please use and share and contribute back if you wish.

I’m also always happy to brainstorm events with you, no strings attached! Just get in touch: matt@agilescientific.com

Last thing: We try to organize meetings like this in the Software Underground. Join us!

100 years of seismic reflection

Where would we be without seismic reflection? Is there a remote sensing technology that is as unlikely, as difficult, or as magical as the seismic reflection method? OK, maybe neutrino tomography. But anyway, seismic has contributed a great deal to society — helping us discover and describe hydrocarbon resources, aquifers, geothermal anomalies, sea-floor hazards, and plenty more besides.

It even indirectly led to the integrated circuit, but that’s another story.

Depending on who you ask, 9 August 2021 may or may not be the 100th anniversary of the seismic reflection method. Or maybe 5 August. Or maybe it was June or July. But there’s no doubt that, although the first discovery with seismic did not happen until several years later, 1921 was the year that the seismic reflection method was invented.

Ryan, Karcher and Haseman in the field, August 1921. Badly colourized by an AI.

The timeline

I’ve tried to put together a timeline by scouring a few sources. Several people — Clarence Karcher (a physicist), William Haseman (a physicist), Irving Perrine (a geologist), William Kite (a geologist) at the University of Oklahoma, and later Daniel Ohern (a geologist) — conducted the following experiments:

  • 12 April 1919 — Karcher acquires the first exploration seismograph record near the National Bureau of Standards (now NIST) in Washington, DC.

  • 1919 to 1920 — Karcher continues his experimentation.

  • April 1921 — Karcher, whilst working at the National Bureau of Standards in Washington, DC, designs and constructs apparatus for recording seismic reflections.

  • 4 June 1921 — the first field tests of the reflection seismograph at Belle Isle, Oklahoma City, using a dynamite source.

  • 6 June until early July — various profiles are acquired at different offsets and spacings.

  • 14 July 1921 — Testing in the Arbuckle Mountains. The team of Karcher, Haseman, Ohern and Perrine determine the velocities of the Hunton limestone, Sylvan shale, and Viola limestone (there’s a sketch of this sort of traveltime arithmetic right after this timeline).

  • Early August 1921 — The group moves to Vines Branch where “the world’s first reflection seismograph geologic section was measured”, according to a commemorative plaque on I-35 in Oklahoma. That plaque claims it was 9 August, but there are also records from 5 August. The depth to the Viola limestone is recorded and observed to change with geological structure.

  • 1 September 1921 — Karcher, Haseman, and Rex Ryan (a geologist) conduct experiments at the Newkirk Anticline near Ponca City.

  • 13 September 1921 — a survey begins for the Marland Oil Company and continues into October. Success seems mixed.
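
An aside on that 14 July entry: how do you turn a handful of picked traveltimes into a formation velocity? Here’s a minimal sketch in Python of one standard approach, fitting the hyperbolic moveout relation t^2 = t0^2 + x^2/v^2 to reflection times recorded at a few offsets. The offsets, times, and the resulting numbers are invented for illustration, and I’m not claiming this is the exact procedure Karcher’s team followed.

```python
import numpy as np

# Hypothetical reflection picks: source-receiver offsets x (metres) and
# picked two-way traveltimes t (seconds). Invented numbers, for illustration.
x = np.array([100.0, 200.0, 300.0, 400.0])
t = np.array([0.203, 0.211, 0.224, 0.240])

# For a single flat reflector and constant velocity, t^2 = t0^2 + x^2 / v^2,
# so t^2 is a straight line in x^2. Fit that line to estimate 1/v^2 and t0^2.
slope, intercept = np.polyfit(x**2, t**2, 1)

v = 1.0 / np.sqrt(slope)   # velocity, m/s
t0 = np.sqrt(intercept)    # zero-offset two-way time, s

print(f"Estimated velocity: {v:.0f} m/s, zero-offset time: {t0:.3f} s")
```

With a velocity and a zero-offset time in hand, the depth to the reflector is just v × t0 / 2.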

So what did these physicists and geologists actually do? Here’s an explanation from Bill Dragoset in his excellent review of the history of seismic from 2005:

Using a dynamite charge as a seismic source and a special instrument called a seismograph, the team recorded seismic waves that had traveled through the subsurface of the earth. Analysis of the recorded data showed that seismic reflections from a boundary between two underground rock layers had been detected. Further analysis of the data produced an image of the subsurface—called a seismic reflection profile—that agreed with a known geologic feature. That result is widely regarded as the first proof that an accurate image of the earth’s subsurface could be made using reflected seismic waves.
— Bill Dragoset, A Historical Reflection on Reflections
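
To make that description concrete, here’s a toy version of the arithmetic behind a reflection profile, again in Python. It assumes a single reflector, a known constant velocity, and a handful of shotpoints with picked two-way times; all the numbers are made up, and this is an illustration of the idea, not a reconstruction of the 1921 workflow. Each two-way time becomes a depth via z = v × t / 2, and plotting those depths along the line gives a crude geologic section.

```python
import numpy as np

# Assumed constant velocity (m/s) and hypothetical two-way reflection
# times (s) picked at shotpoints along a line. All numbers are invented.
v = 3000.0
positions = np.array([0.0, 500.0, 1000.0, 1500.0, 2000.0])   # shotpoint positions (m)
twt = np.array([0.40, 0.38, 0.35, 0.37, 0.41])               # two-way times (s)

# For a near-vertical raypath, depth = velocity * one-way time = v * t / 2.
depths = v * twt / 2

for xpos, z in zip(positions, depths):
    print(f"Shotpoint at {xpos:6.0f} m: reflector at about {z:.0f} m depth")
```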

The data was a bit hard to interpret! This is from William Schriever’s paper:

A seismic record from the Marland survey, reproduced from Schriever’s paper.

Nonetheless, here’s the section the team managed to draw at Vines Branch. This is the world’s first reflection seismograph section — 9 August 1921:

The method took a few years to catch on — and at least a few years to be credited with a discovery. Karcher founded Geophysical Research Corporation (now Sercel) in 1925, then left and founded Geophysical Service Incorporated — which later spun out Texas Instruments — in 1930. And, eventually, seismic reflection turned into an industry worth tens of billions of dollars per year. Sometimes.

References

Bill Dragoset (2005). A historical reflection on reflections. The Leading Edge 24: s46–s70. https://doi.org/10.1190/1.2112392

Clarence Karcher (1932). Determination of subsurface formations. Patent no. 1843725A. Patented 2 February 1932.

William Schriever (1952). Reflection seismograph prospecting; how it started; contributions. Geophysics 17 (4): 936–942. https://doi.org/10.1190/1.1437831

B Wells and K Wells (2013). Exploring Seismic Waves. American Oil & Gas Historical Society. Originally published 29 April 2013; last updated 7 August 2021.