The Surmont Supermerge

In my recent Abstract horror post, I mentioned an interesting paper in passing, Durkin et al. (2017):

 

Paul R. Durkin, Ron L. Boyd, Stephen M. Hubbard, Albert W. Shultz, Michael D. Blum (2017). Three-Dimensional Reconstruction of Meander-Belt Evolution, Cretaceous McMurray Formation, Alberta Foreland Basin, Canada. Journal of Sedimentary Research 87 (10), p 1075–1099. doi: 10.2110/jsr.2017.59

 

I wanted to write about it, or rather about its dataset, because I spent about 3 years of my life working on the USD 75 million seismic volume featured in the paper. Not just on interpreting it, but also on acquiring and processing the data.

Let's start by feasting our eyes on a horizon slice, plus interpretation, of the Surmont 'Supermerge' 3D seismic volume:

Figure 1 from Durkin et al (2017), showing a stratal slice from 10 ms below the top of the McMurray Formation (left), and its interpretation (right). © 2017, SEPM (Society for Sedimentary Geology) and licensed CC-BY.


A decade ago, I was 'geophysics advisor' on Surmont, which is jointly operated by ConocoPhillips Canada, where I worked, and Total E&P Canada. My line manager was a Total employee; his managers were ex-Gulf Canada. It was a fantastic, high-functioning team, and working on this project had a profound effect on me as a geoscientist. 

The Surmont bitumen field

The dataset covers most of the Surmont lease, in the giant Athabasca Oil Sands play of northern Alberta, Canada. The Surmont field alone contains something like 25 billion barrels of bitumen in place. It's ridiculously massive — you'd be delighted to find 300 million bbl offshore. Given that it's expensive and carbon-intensive to produce bitumen with today's methods — steam-assisted gravity drainage (SAGD, "sag-dee") in Surmont's case — it's understandable that there's a great deal of debate about producing the oil sands. One factoid: you have to burn about 1 Mscf (about 30 m³) of natural gas, costing about USD 10–15, to make enough steam to produce 1 bbl of bitumen.


Detail from Figure 12 from Durkin et al (2017), showing a seismic section through the McMurray Formation. Most of the abandoned channels are filled with mudstone (really a siltstone). The dipping heterolithic strata of the point bars, so obvious in horizon slices, are quite subtle in section. © 2017, SEPM (Society for Sedimentary Geology) and licensed CC-BY.

The field is a geoscience wonderland. Apart from the 600 km² of beautiful 3D seismic, there are now about 1500 wells, most of which are on the 3D. In places there are more than 20 wells per section (1 sq mile, 2.6 km², 640 acres). Most of the wells have a full suite of logs, including FMI in about two-thirds of the wells, and shear sonic as well in many cases, and about 550 wells now have core through the entire reservoir interval — about 65–75 m across most of Surmont. Let that sink in for a minute.

What's so awesome about the seismic?

OK, I'm a bit biased, because I planned the acquisition of several pieces of this survey. There are some challenges to collecting great data at Surmont. The reservoir is only about 500 m below the surface. Much of the pay sand can barely be called 'rock' because it's unconsolidated sand, and the reservoir 'fluid' is a quasi-solid with a viscosity of 1 million cP. The surface has some decent topography, and the near surface is glacial till, with plenty of boulders and gravel-filled channels. There are surface lakes and the area is covered in dense forest. In short, it's a geophysical challenge.

Nonetheless, we did collect great data; here's how:

  • General information
    • The ca. 600 km² Supermerge consists of a dozen 3Ds recorded over about a decade starting in 2001.
    • The northern 60% or so of the dataset was recombined from field records into a single 3D volume, with pre- and post-stack time imaging.
    • The merge was performed by CGG Veritas, cost nearly $2 million, and took about 18 months.
  • Geometry
    • Most of the surveys had a 20 m shot and receiver spacing, giving the volume a 10 m by 10 m natural bin size (see the quick arithmetic sketch after this list).
    • The original survey had parallel and coincident shot and receiver lines (Megabin); later surveys were orthogonal.
    • We varied the line spacing between 80 m and 160 m to get the trace density we needed in different areas.
  • Sources
    • Some surveys used 125 g dynamite at a depth of 6 m; others the IVI EnviroVibe sweeping 8–230 Hz.
    • We used an airgun on some of the lakes, but the data was terrible so we stopped doing it.
  • Receivers
    • Most of the surveys were recorded into single-point 3C digital MEMS receivers planted on the surface.
  • Bandwidth
    • Most of the datasets have data from about 8–10 Hz to about 180–200 Hz (and have a 1 ms sample interval).
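A couple of those numbers are directly related. Here's a quick back-of-the-envelope sketch in Python, using values assumed from the list above (nothing here is from the paper), relating station spacing to natural bin size and sample interval to Nyquist frequency:

```python
# Quick arithmetic on the acquisition parameters listed above.
shot_spacing = 20.0      # m, source point spacing
receiver_spacing = 20.0  # m, receiver spacing

# The natural bin is half the station spacing in each direction.
bin_size = (receiver_spacing / 2, shot_spacing / 2)
print(f"Natural bin size: {bin_size[0]:.0f} m x {bin_size[1]:.0f} m")   # 10 m x 10 m

# A 1 ms sample interval gives a 500 Hz Nyquist frequency, comfortably
# above the 180-200 Hz upper limit of the recovered bandwidth.
dt = 0.001               # s, sample interval
print(f"Nyquist frequency: {1 / (2 * dt):.0f} Hz")
```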

The planning of these surveys was quite a process. Because access in the muskeg is limited to 'freeze up' (late December until March), and often curtailed by wildlife concerns (moose and elk rutting), only about 6 weeks of shooting are possible each year. This means you have to plan ahead, then mobilize a fairly large crew with as many channels as possible. After acquisition, each volume spent about 6 months in processing — mostly at Veritas and then CGG Veritas, who did fantastic work on these datasets.

Kudos to ConocoPhillips and Total for letting people work on this dataset. And kudos to Paul Durkin for this fine piece of work, and for making it open access. I'm excited to see it in the open. I hope we see more papers based on Surmont, because it may be the world's finest subsurface dataset. I hope it is released some day; it would have a huge impact.


References & bibliography

Paul R. Durkin, Ron L. Boyd, Stephen M. Hubbard, Albert W. Shultz, Michael D. Blum (2017). Three-Dimensional Reconstruction of Meander-Belt Evolution, Cretaceous McMurray Formation, Alberta Foreland Basin, Canada. Journal of Sedimentary Research 87 (10), p 1075–1099. doi: 10.2110/jsr.2017.59 (not live yet).

Hall, M (2007). Cost-effective, fit-for-purpose, lease-wide 3D seismic at Surmont. SEG Development and Production Forum, Edmonton, Canada, July 2007.

Hall, M (2009). Lithofacies prediction from seismic, one step at a time: An example from the McMurray Formation bitumen reservoir at Surmont. Canadian Society of Exploration Geophysicists National Convention, Calgary, Canada, May 2009. Oral paper.

Zhu, X, S Shaw, B Roy, M Hall, M Gurch, D Whitmore and P Anno (2008). Near-surface complexity masquerades as anisotropy. SEG Annual Convention, Las Vegas, USA, November 2008. Oral paper. doi: 10.1190/1.3063976.

Surmont SAGD Performance Review (2016), by ConocoPhillips and Total geoscientists and engineers. Submitted to AER, 258 pp. Available online [PDF] — and well worth looking at.

Trad, D, M Hall, and M Cotra (2008). Reshooting a survey by 5D interpolation. Canadian Society of Exploration Geophysicists National Convention, Calgary, Canada, May 2006. Oral paper. 

Burning the surface onto the subsurface

Previously, I described a few of the reasons why we don't get a clean ground-surface event on land seismic data like we do with the water bottom in marine seismic. In land data, the worst part of the image is right at the surface. But ground level is not just tricky to see, it's impossible to see. Since the vibe truck is on the ground, there's no reflection from that surface. Even if there were some kind of event there, processors apply a magic eraser to the top of the section — the mute — to erase the early arrivals. So it's not possible to see the ground in land data, and you can't pick what isn't there.

But I still want to know where the ground is. Why can't we slap a ground-level seismic 'reflection' event on the section? 

What you need

We need the ground level, which is in depth of course, in the time domain of the seismic section. To compute this, let's call it \(t_\mathrm{G}\), we need three pieces of information at every trace location: the ground elevation \(G\), the seismic reference datum (SRD) which I'll call \(D\), and the replacement velocity \(V_\mathrm{r}\). 

$$ t_\mathrm{G} = \frac{2 (D - G)}{V_\mathrm{r}} $$

Ground elevation.  If you're lucky, you'll be able to find the ground elevation corresponding to each trace stored in the trace headers. Ground elevation might be located in bytes 41-44 or 45-48 of the trace header, which correspond to the receiver group elevation and the surface elevation of the source, respectively. These should be the same for a stacked trace, but as with any metadata to do with SEG-Y, this info could be hiding somewhere else, or missing altogether. And if you're that unlucky, you might have to comb through processing reports for the missing information. If you are even more unlucky (as I was in this example), you won't have any kind of processing report to fall back on and you'll have to concoct something else. In the accompanying Jupyter notebook, I resorted to interpolating a digitized elevation profile from a JPEG plot of the seismic line. So if you're all out of options, you might find refuge in those legacy plots!

This profile is particularly wonky, because the seismic reference datum (red) is not the same across the profile


Seismic reference datum. And to make life yet more complicated, the seismic reference datum is not flat across the profile. It goes downhill and then flattens out (red line below). Don't ask me what the advantages are of processing data to a variable datum, but whatever they are, I hope they offset the disadvantages of all too easily mistaking the datum to be flat.

The replacement velocity is given in the side label of the raster image online (shown at right). It's 10 000 ft/s, or 3048 m/s.

Byte locations 53-56 and 57-60 are the standard trace header placeholders reserved for holding the datum elevation at the receiver group and the datum elevation at source. Again, for a stacked trace, these should be the same value. If these fields are zeros, then check the fields of the Trace Header Extension. If they turn up empty, and if the datum is horizontal, it might be listed in the file's text header. 
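If the headers do contain these fields, pulling them out is straightforward. Here's a minimal sketch using segyio, assuming it's installed; 'line.sgy' is a placeholder file name, and as noted above, real files often put these numbers somewhere non-standard, or nowhere at all:

```python
import numpy as np
import segyio

# 'line.sgy' is a placeholder; ignore_geometry lets us treat the file as a simple
# collection of traces (handy for 2D lines and oddly organized volumes).
with segyio.open("line.sgy", ignore_geometry=True) as f:
    tf = segyio.TraceField
    rec_elev  = f.attributes(tf.ReceiverGroupElevation)[:]   # bytes 41-44
    src_elev  = f.attributes(tf.SourceSurfaceElevation)[:]   # bytes 45-48
    rec_datum = f.attributes(tf.ReceiverDatumElevation)[:]   # bytes 53-56
    src_datum = f.attributes(tf.SourceDatumElevation)[:]     # bytes 57-60
    scalar    = f.attributes(tf.ElevationScalar)[:]          # bytes 69-70

# The elevation scalar multiplies if positive and divides if negative (0 means 1).
factor = np.ones(len(scalar), dtype=float)
factor[scalar > 0] = scalar[scalar > 0]
factor[scalar < 0] = 1.0 / np.abs(scalar[scalar < 0])

ground = rec_elev * factor   # should match src_elev * factor on a stacked trace
datum  = rec_datum * factor  # should match src_datum * factor, too
```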

Convert elevation to time

By definition, the seismic reference datum is horizontal in the time domain (red line below). Notice how the ground elevation – in the time domain – plots mostly as negative values, before time zero. In other words, most of the ground is being cut off by the top of the section. So, if we want to see it, we need to shift everything down into the field of view. Conceptually, this means adjusting the seismic reference datum so it floats entirely above the ground level. Computationally, we can achieve this easily enough by padding the top of the data with zeros.
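Here's a minimal numpy sketch of that conversion (following the formula above) and the padding step. The variable names and toy values are mine, not from the notebook; ground and datum would come from the headers or a digitized profile, as above.

```python
import numpy as np

v_repl = 3048.0   # m/s, replacement velocity (10 000 ft/s)
dt = 0.002        # s, sample interval of the section (assumed)

# Elevations in metres, one value per trace (toy values for illustration).
ground = np.array([450.0, 462.0, 471.0, 468.0])
datum  = np.array([455.0, 455.0, 452.0, 452.0])

# Two-way time of the ground relative to the datum (t = 0). Ground above the
# datum maps to negative time, i.e. above the top of the section.
t_ground = 2 * (datum - ground) / v_repl

# Pad the top of the data with zeros so the shallowest ground time is in view.
data = np.zeros((4, 501))                            # toy section: traces x samples
n_pad = int(np.ceil(max(0.0, -t_ground.min()) / dt))
padded = np.pad(data, ((0, 0), (n_pad, 0)))          # zeros added at the top
t_ground_shifted = t_ground + n_pad * dt             # ground times in the padded section
```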

A time-domain representation of the ground-level along the seismic profile. The surface of the earth extends above the start of the seismic data for most of the locations along the profile. 


Make the ground a pickable event

As a final bit of post-processing, we could actually burn the ground level into the data as a sort of synthetic seismic event. The reason I like this concept is that it alleviates the need to dig up old processing reports, puzzle over missing header data, or worse, maintain and munge external text files containing elevation information. I say, let's make it self-contained. Let's put it directly into the data so that it can be treated like any other seismic reflection. Why would I do this?

  • You can see where there might be fold, velocity or other issues related to topography.
  • You can immediately see the polarity of the data. 
  • You could use the bandwidth of the data to make the pseudo-reflector, giving a visual hint to the interpreter.
  • Keeping track of amplitude adjustments and phase rotations would be self-documenting and reversible.
  • You could autotrack it to get a topographic map (or just get this from the processor).
  • It looks cool!

Seismic profile with ground level SYNTHETICALLY SLAPPED ON TOP. Bandlimited, of course, so you can autotrack to your heart's content!

I've deliberately constructed a band-limited reflection, as opposed to placing a sharp spike at ground level. The problem with a spike is that it has infinite bandwidth. It contains higher frequencies than the image, so as Carl Reine commented on that last post, it might not play nicely with seismic attributes. Also, there's the problem of selecting an amplitude value to assign to the spike: we don't want to introduce amplitudes that are ridiculously out of range of the existing data.
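Here's one way to do it, as a sketch: build a Ricker wavelet roughly matched to the data's bandwidth, scale it to sit within the existing amplitude range, and add it to each trace at the ground time. None of this is from the original notebook; the wavelet and scaling choices are assumptions.

```python
import numpy as np

def ricker(f_peak, dt, duration=0.128):
    """Zero-phase Ricker wavelet with peak frequency f_peak (Hz)."""
    n = int(duration / dt) // 2
    t = np.arange(-n, n + 1) * dt          # odd length, centred on zero
    a = (np.pi * f_peak * t) ** 2
    return (1 - 2 * a) * np.exp(-a)

dt = 0.002                                 # s, sample interval (assumed)
w = ricker(40.0, dt)                       # 40 Hz peak, roughly matching the data
half = len(w) // 2

# Stand-ins for the padded section and ground times from the previous sketch.
padded = np.random.randn(4, 551)
t_ground_shifted = np.array([0.012, 0.004, 0.0, 0.002])

# Scale the event so it sits within, not above, the existing amplitude range.
amp = 0.5 * np.percentile(np.abs(padded), 99)

with_ground = padded.copy()
for i, t0 in enumerate(t_ground_shifted):
    k = int(round(t0 / dt))                              # ground sample on this trace
    lo, hi = max(0, k - half), min(padded.shape[1], k + half + 1)
    with_ground[i, lo:hi] += amp * w[lo - k + half:hi - k + half]
```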

The whole image

I hereby propose that this synthetic ground level trick be adopted as the new standard for any land seismic processing and interpretation. The great thing is, it can be done just as easily by interpreters and seismic data technologists as by the processing companies that create the rest of the image. I realize we're adding stuff to the data that isn't actually signal. We do non-real things to signals all the time. The question is, do the benefits outweigh the artificiality?

Here's the view of the entire section:

The whole section, ground level included.


The details of this exercise can be found in this Jupyter Notebook.

References

The seismic is line 36_77_PR from the USGS data repository.

SEG Y rev 2 Data Exchange Format. SEG Technical Standards Committee. Draft 2.0, January, 2015. 

Where is the ground?

This is the upper portion of a land seismic profile in Alaska. Can you pick a horizon where the ground surface is? Have a go at pickthis.io.

Pick the Ground surface at the top of the seismic section at pickthis.io.


Picking the ground surface on land-based seismic data is not straightforward. Picking the seafloor reflection on marine data, on the other hand, is usually a piece of cake, a warm-up pick. You can often auto-track the whole thing with a few seeds.

Seafloor reflection on the Penobscot 3D survey, offshore Nova Scotia, from Matt's tutorial in the April 2016 issue of The Leading Edge, The function of interpolation.

Why aren't interpreters more nervous that we don't know exactly where the surface of the earth is? I'm sure I'm not the only one that would like to have this information while interpreting. Wouldn't it be great if land seismic were more like marine?

Treacherously Jagged TopographY or Near-Surface processing ArtifactS?


If you're new to land-based seismic data, you might notice that there isn't a nice pickable event across the top of the section like we find in marine seismic data. Shot noise at the surface has been muted (deleted) in processing, and the low fold produces an unclean, jagged look at the top of the section. Additionally, the top of the section — time zero, the seismic reference datum — usually floats somewhere above the land surface, and we can't know where that is unless it can be found in the file header, or looked up in the processing report.

The seismic reference datum, at a two-way time of zero seconds on seismic data, is typically set at mean sea level for offshore data. For land data, it is usually chosen to 'float' above the land surface.


Reframing the question

This challenge is a bit of a trick question. It asks the viewer to recognize that the seemingly simple task of mapping the ground level on a land seismic section is actually a rudimentary velocity modeling or depth conversion exercise in itself. Wouldn't it be nice to have the ground surface expressed as a pickable seismic event? Shouldn't we have it always in our images? Baked into our data, so to speak, such that we've always got an unambiguous pick? In the next post, I'll illustrate what I mean and show what's involved in putting it in.

In the meantime, I challenge you to pick where you think the (currently absent) ground surface is on this profile, so in the next post we can see how well you did.

The big data eye-roll

First, let's agree on one thing: 'big data' is a half-empty buzzword. It's shorthand for 'more data than you can look at', but really it's more than that: it branches off into other hazy territory like 'data science', 'analytics', 'deep learning', and 'machine intelligence'. In other words, it's not just 'large data'. 

Anyway, the buzzword doesn't bother me too much. What bothers me is when I talk to people at geoscience conferences about 'big data', about half of them roll their eyes and proclaim something like this: "Big data? We've been doing big data since before these punks were born. Don't talk to me about big data."

This is pretty demonstrably a load of rubbish.

What the 'big data' movement is trying to do is not acquire loads of data then throw 99% of it away. They are not processing it in a purely serial pipeline, making arbitrary decisions about parameters on the way. They are not losing most of it in farcical enterprise data management train-wrecks. They are not locking most of their data up in such closed systems that even they don't know they have it.

They are doing the opposite of all of these things.

If you think 'big data', 'data science' and 'machine learning' are old hat in geophysics, then you have some catching up to do. Sure, we've been toying with simple neural networks for years, eg probabilistic neural nets with 1 hidden layer — though this approach is very, very far from being mainstream in the subsurface — but today this is child's play. Over and over, and increasingly so in the last 3 years, people are showing how new technology — built specifically to handle the special challenge that terabytes bring — can transform any quantitative endeavour: social media and online shopping, sure, but also astronomy, robotics, weather prediction, and transportation. These technologies will show up in petroleum geoscience and engineering. They will eat geostatistics for breakfast. They will change interpretation.

So when you read that Google has open sourced its TensorFlow deep learning library (9 November), or that Microsoft has too (yesterday), or that Facebook has too (months ago), or that Airbnb has too (in August), or that there are a bazillion other super easy-to-use packages out there for sophisticated statistical learning, you should pay a whole heap of attention! Because machine learning is coming to subsurface.

What is AVO-friendly processing?

It's the Geophysics Hackathon next month! Come down to Propeller in New Orleans on 17 and 18 October, and we'll feed you and give you space to build something cool. You might even win a prize. Sign up — it's free!

Thank you to the sponsors, OpenGeoSolutions and Palladium Consulting — both fantastic outfits. Hire them.

AVO-friendly processing gets called various things: true amplitude, amplitude-friendly, and controlled amplitude, controlled phase (or just 'CACP'). And if you've been involved in any processing jobs, you'll notice these phrases get thrown around a lot. But seismic geophysics has a dirty little secret... we don't know exactly what it is. Or, at least, we can't agree on it.

A LinkedIn discussion in the Seismic Data Processing group earlier this month prompted this post:

I can't compile a list of exactly which processes will harm your AVO analysis (can anyone? Has anyone??), but I think I can start a list of things that you need to approach with caution and skepticism:

  • Anything that is not surface consistent (there's a toy sketch of the idea right after this list). What does that mean? According to Oliver Kuhn (now at Quantec in Toronto):
Surface consistent: a shot-related [process] affects all traces within a shot gather in the same way, independent of their receiver positions, and, a receiver-related [process] affects all traces within a receiver gather in the same way, independent of their shot positions.
  • Anything with a window — spatial or temporal. If you must use windows, make them larger or longer than your areas and zones of interest. In this way, relative effects should be preserved.
  • Anything that puts the flattening of gathers before the accuracy of the data (<cough> trim statics). Some flat gathers don't look flat. (The thumbnail image for this post is from Duncan Emsley's essay in 52 Things.)
  • Anything that is a sort of last resort, post hoc attempt to improve the data — what we might call 'cosmetic' treatments. Things like wavelet stretch correction and spectral shaping are good for structural interpreters, but not for seismic analysts. At the very least, get volumes without them, and convince yourself they did no harm.
  • Anything of which people say, "This should be fine!" but offer no evidence.
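To make 'surface consistent' concrete, here's a toy sketch (not any vendor's algorithm) of the classic amplitude version: model each trace amplitude as the product of a shot term and a receiver term, then solve for the terms in the log domain with least squares.

```python
import numpy as np

rng = np.random.default_rng(0)
n_shots, n_recs = 30, 40

# Toy 'true' surface terms and the trace amplitudes they would produce.
true_shot = np.exp(rng.normal(0, 0.3, n_shots))
true_rec  = np.exp(rng.normal(0, 0.3, n_recs))
shot_ix, rec_ix = np.meshgrid(np.arange(n_shots), np.arange(n_recs), indexing="ij")
shot_ix, rec_ix = shot_ix.ravel(), rec_ix.ravel()
amplitude = true_shot[shot_ix] * true_rec[rec_ix] * np.exp(rng.normal(0, 0.05, shot_ix.size))

# Surface-consistent decomposition: log(amp) = shot term + receiver term.
A = np.zeros((shot_ix.size, n_shots + n_recs))
A[np.arange(shot_ix.size), shot_ix] = 1.0
A[np.arange(shot_ix.size), n_shots + rec_ix] = 1.0
x, *_ = np.linalg.lstsq(A, np.log(amplitude), rcond=None)

shot_term = np.exp(x[:n_shots])   # recovered up to a constant trade-off with rec_term
rec_term  = np.exp(x[n_shots:])
corrected = amplitude / (shot_term[shot_ix] * rec_term[rec_ix])
```

The defining property is that each correction depends only on the shot or only on the receiver, never on the individual trace.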

Back to my fourth point there... spectral shaping and wavelet stretch correction (e.g. this patented technique I was introduced to at ConocoPhillips) have been the subject of quite a bit of discussion, in my experience. I don't know why; both are fairly easy to model, on the face of it. The problem is that we start to get into the sticky question of what wavelets 'see' and what's a wavelet anyway, and hang on a minute why does seismic reflection even work? Personally, I'm skeptical, especially as we get more used to, and better at, looking at spectral decompositions of stacked and pre-stack data.

Divergent paths

I have seen people use seismic data with very different processing paths for structural interpretation and for AVO analysis. This can happen on long-term projects, where the structural framework depends on an old post-stack migration that was later reprocessed for AVO friendliness. This is a bad idea — you won't be able to put the quantitative results into the structural framework without introducing substantial error.

What we need is a clinical trial of processing algorithms, in which they are tested against a known model like Marmousi, and their effect on attributes is documented. If such studies exist, I'd love to hear about them. Come to think of it, this would make a good topic for a hackathon some day... Maybe Dallas 2016?

How to QC a seismic volume

I've had two emails recently about quality checking seismic volumes. And last month, this question popped up on LinkedIn:

We have written before about making a data quality volume for your seismic — a handy way to incorporate uncertainty into risk maps — but these recent questions seem more concerned with checking a new volume for problems.

First things first

Ideally, you'd get to check the volume before delivery (at the processing shop, say); otherwise you might have to actually get it loaded before you can perform your QC. I am assuming you've already been through the processing, so you've seen shot gathers, common-offset gathers, etc. This is all about the stack. Nonetheless, the processor needs to prepare some things:

  • The stack volume, of course, with and without any 'cosmetic' filters (eg fxy, fk).
  • A semblance (coherency, similarity, whatever) volume.
  • A fold volume.
  • Make sure the processor has some software that can rapidly scan the data, plot amplitude histograms, compute a spectrum, pick a horizon, and compute phase. If not, install OpendTect (everyone should have it anyway), or you'll have to load the volume yourself.

There are also some things you can do ahead of time. 

  1. Be part of the processing from the start. You don't want big surprises at this stage. If a few lines got garbled during file creation, no problem. If there's a problem with ground-roll attenuation, you're not going to be very popular.
  2. Make sure you know how the survey was designed — where the corners are, where you would expect live traces to be, and which way the shot and receiver lines went (if it was an orthogonal design). Get maps, take them with you.
  3. Double-check the survey parameters. The initial design was probably changed. The PowerPoint presentation was never updated. The processor probably has the wrong information. General rule with subsurface data: all metadata is probably wrong. Ideally, talk to someone who was involved in the planning of the survey.
  4. You didn't skip (2) did you? I'm serious, double check everything.

Crack open the data

OK, now you are ready for a visit with the processor. Don't fall into the trap of looking at the geology though — it will seduce you (it's always pretty, especially if it's the first time you've seen it). There is work to do first.

  1. Check the cornerpoints of the survey. I like the (0, 0) trace at the SW corner. The inline and crossline numbering should be intuitive and simple. Make sure the survey is the correct way around with respect to north.
  2. Scan through timeslices. All of them. Is the sample interval what you were expecting? Do you reach the maximum time you expected, based on the design? Make sure the traces you expect to be live are live, and the ones you expect to be dead are dead. Check for acquisition footprint. Start with greyscale, then try another colourmap.
  3. Repeat (2) but in a similarity volume (or semblance, coherency, whatever). Look for edges, and geometric shapes. Check again for footprint.
  4. Look through the inlines and crosslines. These usually look OK, because it's what processors tend to focus on.
  5. Repeat (4) but in a similarity volume.

Dive into the details

  1. Check some spectrums (there's a minimal sketch of this after the list). Select some subsets of the data — at least 100 traces and 1000 ms from shallow, deep, north, south, east, west — and check the average spectrums. There should be no conspicuous notches or spikes, which could be signs of all sorts of things from poorly applied filters to reverberation.
  2. Check the amplitude histograms from those same subsets. It should be 32-bit data — accept no less. Check the scaling — the numbers don't mean anything, so you can make them range over whatever you like. Something like ±100 or ±1000 tends to make for convenient scaling of amplitude maps and so on; ±1.0 or less can be fiddly in some software. Check for any departures from an approximately Laplacian (double exponential) distribution: clipping, regular or irregular spikes, or a skewed or off-centre distribution.
  3. Interpret a horizon and check its phase. See Purves (The Leading Edge, October 2014) or SubSurfWiki for some advice.
  4. By this time, the fold volume should yield no surprises. If any of the rest of this checklist throws up problems, the fold volume might help troubleshoot.
  5. Check any other products you asked for. If you asked for gathers or angle stacks (you should), check them too.
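Here is a minimal sketch of those first two checks with numpy and matplotlib. The array is a random stand-in; in practice you'd pull a subset of traces from the loaded volume.

```python
import numpy as np
import matplotlib.pyplot as plt

dt = 0.002                                  # s, sample interval (assumed)
rng = np.random.default_rng(0)
subset = rng.normal(0, 100, (200, 750))     # stand-in for ~200 traces x 1500 ms

# Average amplitude spectrum: look for notches, spikes, and the usable band.
freqs = np.fft.rfftfreq(subset.shape[1], d=dt)
spectrum = np.abs(np.fft.rfft(subset, axis=1)).mean(axis=0)

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 3))
ax1.plot(freqs, 20 * np.log10(spectrum / spectrum.max()))
ax1.set_xlabel("Frequency (Hz)")
ax1.set_ylabel("Power (dB)")

# Amplitude histogram: look for clipping, spikes, and off-centre distributions.
ax2.hist(subset.ravel(), bins=201)
ax2.set_yscale("log")
ax2.set_xlabel("Amplitude")
plt.show()
```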

Last of all, before actual delivery, talk to whoever will be loading the data about what kind of media they prefer, and what kind of file organization. They may also have some preferences for the contents of the SEG-Y file and trace headers. Pass all of this on to the processor. And don't forget to ask for All The Seismic.

What about you?

Have I forgotten anything? Are there things you always do to check a new seismic volume? Or if you're really brave, maybe you have some pitfalls or even horror stories to share...

What is anisotropy?

Illustration: anisotropy vs. heterogeneity.

Geophysicists often assume that the earth is isotropic. This word comes from 'iso', meaning same, and 'tropikos', meaning something to do with turning. The idea is that isotropic materials look the same in all directions — they have no orientation, and we can make measurements in any direction and get the same result. Note that this is different from homogeneous, which is the quality of uniformity of composition. You can think of anisotropy as a directional (not just spatial) variation in homogeneity. 

In the illustration, I may have cheated a bit. The lower-left image shows a material that is homogeneous but anisotropic. The thin lines are supposed to indicate microfractures, say, or the alignment of clay flakes, or even just stress. So although the material has uniform composition, at least at this scale, it has an orientation.

The recognition of the earth's anisotropy is a dominant theme among papers in our forthcoming 52 Things book on rock physics. It's not exactly a new thing — it was an emerging trend 10 years ago when Larry Lines at U of C reviewed Milo Backus's famous 'challenges' (Lines 2005). And even then, the spread of anisotropic processing and analysis had been underway for almost 20 years, since Leon Thomsen's classic 1986 paper, Weak elastic anisotropy. This paper introduced three parameters that we need—alongside the usual \(V_\text{P}\), \(V_\text{S}\), and \(\rho\)—to describe anisotropy. They are \(\delta\) (delta), \(\epsilon\) (epsilon), and \(\gamma\) (gamma), collectively referred to as Thomsen's parameters (there's a small sketch computing them from elastic stiffnesses after the list below).

  • \(\delta\) or delta — the short offset effect — captures the relationship between the velocity required to flatten gathers (the NMO velocity) and the zero-offset average velocity as recorded by checkshots. It's easy to measure, but perhaps hard to understand in physical terms.
  • \(\epsilon\) or epsilon — the long offset effect — is, according to Thomsen himself:  "the fractional difference between vertical and horizontal P velocities; i.e., it is the parameter usually referred to as 'the' anisotropy of a rock". Unfortunately, the horizontal velocity is rather hard to measure. 
  • \(\gamma\) or gamma — the shear wave effect — relates, as rock physics meister Colin Sayers put it on Twitter, a horizontal shear wave with horizontal polarization to a vertical shear wave. He added, "\(\gamma\) can be determined in a single well using sonic. So the correlation with \(\epsilon\) and \(\delta\) is of great interest."
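For reference, here's a small sketch computing Thomsen's parameters from the elastic stiffnesses of a VTI medium, using the definitions in Thomsen (1986); the stiffness and density values below are made up.

```python
import numpy as np

# Made-up VTI stiffnesses (GPa, Voigt notation) and density (kg/m3).
c11, c33, c44, c66, c13 = 40.0, 30.0, 10.0, 13.0, 9.0
rho = 2400.0

# Vertical P and S velocities.
vp0 = np.sqrt(c33 * 1e9 / rho)
vs0 = np.sqrt(c44 * 1e9 / rho)

# Thomsen (1986) parameters.
epsilon = (c11 - c33) / (2 * c33)
gamma = (c66 - c44) / (2 * c44)
delta = ((c13 + c44)**2 - (c33 - c44)**2) / (2 * c33 * (c33 - c44))

print(f"Vp0 = {vp0:.0f} m/s, Vs0 = {vs0:.0f} m/s")
print(f"epsilon = {epsilon:.3f}, delta = {delta:.3f}, gamma = {gamma:.3f}")
```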

Sidenote to aspiring authors: Thomsen's seminal paper, which has been cited over 2800 times, is barely 13 pages long. Three and a half of those pages are taken up by... data! A huge table containing the elastic parameters of almost 60 samples. And this is from a corporate scientist at Amoco. So no more excuses: publish your data! </rant>

Vertical transverse what now?

The other bit of jargon you will come across is the concept of transverse isotropy, which is a slightly perverse (to me) way of expressing the orientation of the anisotropy effect. In vertical transverse isotropy, the horizontal velocity is different from the vertical velocity. Think of flat-lying shales with gravity dominating the stress field. Usually, the velocity is faster along the beds than it is across the beds. This manifests as nonhyperbolic moveout at the far offsets, in particular a pull-up or 'hockey stick' effect in the gathers — the arrivals are unexpectedly early at long offsets. Clearly, this will also affect AVO analysis.
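To put numbers on the hockey stick, here's a sketch comparing hyperbolic moveout with the Alkhalifah–Tsvankin nonhyperbolic approximation for VTI media. The time, velocity, and Thomsen parameters are made up.

```python
import numpy as np

t0 = 1.0                        # s, zero-offset two-way time (made up)
v_nmo = 2500.0                  # m/s, NMO velocity (made up)
epsilon, delta = 0.15, 0.05     # Thomsen parameters (made up)
eta = (epsilon - delta) / (1 + 2 * delta)   # anellipticity

x = np.linspace(0, 4000, 9)     # offsets, m

# Hyperbolic moveout vs the Alkhalifah-Tsvankin (1995) long-offset form.
t_hyp = np.sqrt(t0**2 + x**2 / v_nmo**2)
t_vti = np.sqrt(t0**2 + x**2 / v_nmo**2
                - 2 * eta * x**4 / (v_nmo**2 * (t0**2 * v_nmo**2 + (1 + 2 * eta) * x**2)))

# At long offsets the VTI arrivals are tens of milliseconds early: after a
# hyperbolic NMO correction, that residual is the 'hockey stick'.
for xi, dt_ms in zip(x, (t_hyp - t_vti) * 1000):
    print(f"offset {xi:5.0f} m: {dt_ms:5.1f} ms early")
```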

There's more jargon. If the rocks are dipping, we call it tilted transverse isotropy, or TTI. But if the anisotropies, so to speak, are oriented vertically — as with fractures, for example, or simply horizontal stress — then it's horizontal transverse isotropy, or HTI. This causes azimuthal (compass directional) travel-time variations. We can even venture into situations where we encounter orthorhombic anisotropy, as in the combined VTI/HTI model shown above. It's easy to imagine how these effects, if not accounted for in processing, can (and do!) result in suboptimal seismic images. Accounting for them is not easy though, and trying can do more harm than good.

If you have handy rules of thumb or ways of conceptualizing anisotropy, I'd love to hear about them. Some time soon I want to write about thin-layer anisotropy, which is where this post was going until I got sidetracked...

References

Lines, L (2005). Addressing Milo's challenges with 25 years of seismic advances. The Leading Edge 24 (1), 32–35. DOI 10.1190/1.2112389.

Thomsen, L (1986). Weak elastic anisotropy. Geophysics 51 (10), 1954–1966. DOI 10.1190/1.1442051.

Neglected near-surface workhorses

Yesterday afternoon, I attended a talk at Dalhousie by Peter Cary who has begun the CSEG distinguished lecture tour series. Peter's work is well known in the seismic processing world, and he's now spreading his insights to the broader geoscience community. This was only his fourth stop out of 26 on the tour, so there's plenty of time to catch it.

Three steps of seismic processing

In the head-spinning jargon of seismic processing, if you're lost, it's maybe not your fault. Sometimes it might even seem like you're going in circles.

Ask the vendor or processing specialist first to keep it simple, and second to tell you which of the three processing stages you are in. Seismic data processing has three steps:

  • Attenuate all types of noise.
  • Remove the effects of the near surface.
  • Migration, sometimes called imaging.

If time migration is the workhorse of seismic processing, and fk filtering (or f–anything filtering) is the workhorse of noise attenuation, then surface-consistent deconvolution is the workhorse of the near surface. These topics aren't as sexy or as new as FWI or compressed sensing, but Peter has been questioning the basics of surface-consistent scaling, and the approximations we make when processing land seismic data.

The ambiguity of phase and travel-time corrections

To the processor, removing the effects of the near surface means making things flat in the CMP domain. It turns out you can do this with travel time corrections (static shifts), you can do this with phase corrections, or you can do it with both.
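Here's a tiny sketch of why the two can be hard to tell apart: in the frequency domain a time shift is a linear phase ramp, while a constant phase rotation applies the same angle at every frequency. On a band-limited wavelet both move energy in similar-looking ways. This is just an illustration, not Peter's method.

```python
import numpy as np

dt = 0.002
t = np.arange(-64, 65) * dt
w = (1 - 2 * (np.pi * 30 * t)**2) * np.exp(-(np.pi * 30 * t)**2)   # 30 Hz Ricker

W = np.fft.rfft(w)
f = np.fft.rfftfreq(len(w), d=dt)

# A 4 ms static shift: linear phase ramp, exp(-2*pi*i*f*tau).
tau = 0.004
shifted = np.fft.irfft(W * np.exp(-2j * np.pi * f * tau), n=len(w))

# A constant 60-degree phase rotation: the same angle at every frequency.
phi = np.deg2rad(60)
rotated = np.fft.irfft(W * np.exp(-1j * phi), n=len(w))

# Both move the main peak away from time zero; only the static preserves the shape.
print("peak sample:", np.argmax(w), np.argmax(shifted), np.argmax(rotated))
```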

A simple synthetic example showing (a) a gather with surface-consistent statics and phase variations; (b) the same gather after surface-consistent residual statics correction, and (c) after simultaneous surface-consistent statics and phase correction. Image © Cary & Nagarajappa and CSEG.

It's troubling that there is more than one way to achieve flatness. Peter's advice is to use shot stacks and receiver stacks to compare the efficacy of static corrections. They eliminate doubt about whether surface consistent scaling is working, and are a better QC tool than other data domains.

Deeper than shallow

It may sound trivial, but the hardest part about using seismic waves for imaging is that they have to travel down and back up through the near surface on their path to the target. It might seem counter-intuitive, but the geometric configurations that work well for the deep earth are not well suited to the shallow earth, or to working out how to correct for it. I can imagine that two surveys could be useful, one for the target and one for characterizing the shallow stuff that gets in the way of the target, but seismic experiments are already expensive enough when there is only one target to be concerned with.

Still, the near surface is something we can't avoid. Much like astronomers using ground-based telescopes shooting for the stars, seismic processors too have to get the noisy stuff that is sitting closest to the detectors out of the way.

Big imaging, little imaging, and telescopes

I caught three lovely talks at the special session yesterday afternoon, Recent Advances and the Road Ahead. Here are my notes...

The neglected workhorse

If you were to count up all the presentations at this convention on seismic migration, you'd find that only 6% of them are on time migration. Even though it is the workhorse of seismic data processing, it is the most neglected topic in migration. It's old technology, it's a commodity. Who needs to do research on time migration anymore? Sergey does.

Speaking as an academic, Fomel said, "we are used to the idea that most of our ideas are ignored by industry," even though many transformative ideas in the industry can be traced back to academics. He noted that it takes at least 5 years to get traction, and the 5 years are up for his time migration ideas, "and I'm starting to lose hope". Here are five things you probably didn't know about time migration:

  • Time migration does not need travel times.
  • Time migration does not need velocity analysis.
  • Single offsets can be used to determine velocities.
  • Time migration does need approximations, but the approximation can be made increasingly accurate.
  • Time migration distorts images, but the distortion can be removed with regularized inversion.

It was a joy to listen to Sergey describe these observations through what he called beautiful equations: "the beautiful part about this equation is that it has no parameters", or "the beauty of this equation is that it does not contain velocity", and so on. Mad respect.

Seismic adaptive optics

Alongside seismic multiples, poor illumination, and bandwidth limitations, John Etgen (BP) submitted that, in complex overburden, velocity is the number one problem for seismic imaging. Correct velocity model equals acceptable image. His (perhaps controversial) point was that when velocities are complex, multiples, no matter how severe, are second order thorns in the side of the seismic imager. "It's the thing that's killing us, and that's the frontier." He also posited that full waveform inversion may not save us after all, and image gather analysis looks even less promising.

While FWI looks to catch the wavefield and look at it in the space of the data, migration looks to catch the wavefield and look at it at the image point itself. He elegantly explained these two paradigms, and suggested that both may be flawed.

John urged, "We need things other than what we are working on", and shared his insights from another field. In ground-based optical astronomy, for example, when the image of a star is distorted by turbulence in our atmosphere, astronomers numerically warp the curvature of the lens to correct for rapid variations in the phase of the incoming wavefront. The lenses we use for seismic focusing, velocities, can be tweaked just the same by looking at the wavefield part of the way through its propagation. He quoted Jon Claerbout:

If you want to understand how a horse runs, you gotta run along with it.

Big imaging, little imaging, and combination of the two

There are a number of ways one could summarize what petroleum seismologists do. But hearing (CGG researcher) Sam Gray's talk yesterday was a bit of an awakening. His talk was a remark on the notion of big imaging vs little imaging, and the need for convergence.

Big imaging is the structural stuff. Structural migration, stratigraphic imaging, wide-azimuth acquisition, and so on. It includes the hardware and compute innovations of broadband, blended sources, deblending processing, anisotropic imaging, and the beginnings of viscoacoustic reverse-time migration. 

Little imaging is inversion. It's reservoir characterization. It's AVO and beyond. Azimuthal velocities (fast and slow directions) hint at fracture orientations, azimuthal amplitudes hint even more subtly at fracture compliance.

Big imaging is hard because it's computationally expensive, and velocities are unknown. Little imaging is hard because features like fractures, faults and pores are at the centimetre scale, but on land we lay out inlines and crosslines hundreds of metres apart, and use signals that carry only a few bits of information from an area the size of a football field.

What we've been doing with imaging is what he called a separated workflow. We use gathers to make big images. We use gathers to make rock properties, but seldom do the two meet. How often have you tested to see if the rock properties from the little imaging explain the wiggles in the big imaging? Our work needs to become such a cycle, if we want our relevance and impact to improve.

The figures are copyright of the authors and SEG, and used in accordance with SEG's permission guidelines.

Not picking parameters

I like socks. Bright ones. I've liked bright socks since Grade 6. They were the only visible garment not governed by school uniform, or at least not enforced, and I think that was probably the start of it. The tough boys wore white socks, and I wore odd red and green socks. These days, my favourites are Cole & Parker, and the only problem is: how to choose?

Last Tuesday I wrote about choosing parameters for geophysical algorithms — window lengths, velocities, noise levels, and so on. Like choosing socks, it's subjective, and it's hard to find a pair for every occasion. The comments from Matteo, Toastar, and GuyM raised an interesting question: maybe the best way to pick parameters is to not pick them? I'm not talking about automatically optimizing parameters, because that's still choosing. I'm talking about not choosing at all.

How many ways can we think of to implement this non-choice? I can think of four approaches, but I'm not 100% sure they're all different, or if I can even describe them...

Is the result really optimal, or just a hard-to-interpret patchwork?

Adaptivity

Well, okay, we still choose, but we choose a different value everywhere depending on local conditions. A black pair for a formal function, white for tennis, green for work, and polka dots for special occasions. We can adapt to any property (rather like automatic optimization), along any dimension of our data: spatially, azimuthally,  temporally, or frequentially (there's a word you don't see every day).

Imagine computing seismic continuity. At each sample, we might evaluate some local function — such as contrast — for a range of window sizes. Or, when smoothing, we might specify some minimum signal loss compared to the original. We end up using a different value everywhere, and expect an optimal result.

One problem is that we still have to choose a cost function. And to be at all useful, we would need to produce two new data products, besides the actual result: a map of the parameter's value, and a map of the residual cost, so to speak. In other words, we need a way to know what was chosen, and how satisfactory the choice was.
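As a toy example of this sort of adaptivity, here's a sketch of a smoother that chooses its own window at each sample: it grows the window until the smoothed value drifts more than a tolerance away from the original, and it returns both the result and the window it chose. The criterion and the tolerance are still choices, which is rather the point.

```python
import numpy as np

def adaptive_smooth(x, max_half=25, tol=0.1):
    """Running mean whose half-window grows per sample until the smoothed
    value drifts more than tol * std(x) from the original sample."""
    out = np.empty(len(x))
    half_windows = np.empty(len(x), dtype=int)
    limit = tol * np.std(x)
    for i in range(len(x)):
        best = 1
        for half in range(1, max_half + 1):
            lo, hi = max(0, i - half), min(len(x), i + half + 1)
            if abs(x[lo:hi].mean() - x[i]) > limit:
                break
            best = half
        lo, hi = max(0, i - best), min(len(x), i + best + 1)
        out[i] = x[lo:hi].mean()
        half_windows[i] = best
    return out, half_windows   # the result, plus a map of the parameter that was used

rng = np.random.default_rng(0)
signal = np.sin(np.linspace(0, 20, 500)) + 0.2 * rng.normal(size=500)
smoothed, windows = adaptive_smooth(signal)
```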

Stochastic shotgun

We could fall back on that geostatistical favourite and pick the parameter values stochastically, grabbing socks at random out of the drawer. This works, but I need a lot of socks to have a chance of getting even a local maximum. And we run into the old problem of really not knowing what to do with all the realizations. Common approaches are to take the P50, P10, and P90, or to average them. Both of these approaches make me want to ask: Why did I generate all those realizations?

Experimental design methods

The design of experiments is a big deal in the life sciences,  but for some reason rarely (never?) talked about in geoscience. Applying a cost function, or even just visual judgment, to a single parameter is perhaps trivial, but what if you have two variables? Three? What if they are non-linear and covariant? Then the optimization process amounts to a sticky inverse problem.

Fortunately, lots of clever people have thought about these problems. I've even seen them implemented in subsurface software. Cool-sounding combinatorial reduction techniques like Greco-Latin squares or Latin hypercubes offer ways to intelligently sample the parameter space and organize the results. We could do the same with socks, evaluating pattern and toe colour separately...
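For instance, scipy now ships a Latin hypercube sampler. Here's a minimal sketch spreading 20 experiments over a two-parameter space; the parameter names and ranges are invented.

```python
from scipy.stats import qmc

sampler = qmc.LatinHypercube(d=2, seed=42)
unit_samples = sampler.random(n=20)             # 20 points in the unit square

# Scale to invented parameter ranges: window length (ms) and noise threshold.
lower, upper = [20, 0.01], [200, 0.5]
experiments = qmc.scale(unit_samples, lower, upper)

for window_ms, threshold in experiments:
    pass   # run the algorithm with these parameters and record the cost function
```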

The mixing board

There is another option: the mixing board. Like a music producer, a film editor, or the Lytro camera, I can leave the raw data in place, and always work from the masters. Given the right tools, I can make myself just the right pair of socks whenever I like.

This way we can navigate the parameter space, applying views, processes, or other tools on the fly. Clearly this would mean changing everything about the way we work. We'd need a totally different approach not just to interpretation, but to the entire subsurface characterization workflow.

Are there other ways to avoid choosing? What are people using in other industries, or other sciences? I think we need to invite some experimental design and machine learning people to SEG...

Cole & Parker socks are awesome.

The quilt image is by missvancamp on Flickr and licensed CC-BY. The spools are by surfzone on Flickr, licensed CC-BY. Many thanks to Cole & Parker for permission to use the sock images, despite not knowing what on earth I was going to do with them. Buy their socks! They're Canadian and everything.