Pricing professional services, again

I have written about this before, but in my other life as an owner of a coworking space. It's come up in Software Underground a couple of times recently, so I thought it might be time to revisit the crucial question for anyone offering services: what do I charge?

Unfortunately, it's not a simple answer. And before you read any further, you also need to understand that I am no business mastermind. So you should probably ignore everything I write. (And please feel free to set me straight!)

Here's a bit of the note I got recently from a blog reader:

I'm planning to start doing consulting and projects of seismic interpretation and prospect generation, but I don't know what's a fair price to charge for services. I'm sure there are many factors. I was wondering if you can share some tips on how to calculate/determine the cost of a seismic interpretation project? Is it by sq mi of data interpreted, maps of different formations, presentations, etc.?

Let's break the reply down into a few aspects:

Know the price you're aiming for and don't go below it. I've let myself get beaten down once or twice, and it's not a recipe for success: you may end up resenting the entire job. One opinion on Software Underground was to start with a high price, then concede to the client during negotiations. I tend to keep a fair price fixed from the start, and negotiate on other things (scope and deliverables). Do try not to get sucked into too much itemization though: it will squeeze your margins.

But what is the price you're aiming for? It depends on your fixed costs (how much do you need to get the work done and pay yourself what you need to live on?), time, complexity, your experience, how simple you want your pricing to be, and so on. All these things are difficult. I tend to go for simplicity, because I don't want the administrative overhead of many line items, keeping track of time, etc. Sometimes this bites me, sometimes (maybe) I come out ahead. 

Come on, be specific. If you've recently had a 'normal' job, then a good starting point is to know your "fully loaded cost" (i.e. what you really cost, with benefits, bonuses, cubicle, coffee, computer, and so on). This is typically about 2 to 2.5 times your salary(!). That's what you would like to make in about 200 days of work. You will quickly realize why consultants are apparently so expensive: people are expensive, especially people who are good at things.
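
To make that concrete, here's a back-of-the-envelope sketch in Python. The salary, loading multiplier, and billable-day count are illustrative assumptions, not recommendations; substitute your own numbers.

```python
# Rough day-rate estimate. All numbers here are assumptions for illustration.
salary = 100_000        # your previous base salary
loading = 2.25          # fully loaded multiplier, typically about 2 to 2.5
billable_days = 200     # realistic billable days in a year

day_rate = salary * loading / billable_days
print(f"Target day rate: ${day_rate:,.0f}")   # about $1,125 per day in this example
```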

If I ever feel embarrassed to ask for my fee, I remind myself that when I worked at Halliburton, my list price as a young consultant was USD 2400 per day. Clients would sign year-long contracts for me at that rate.

It's definitely a good idea to know what you're competing with. However, it can be very hard to find others' pricing information. If you have a good relationship with the client, they may even tell you what they are used to paying. Maybe you give them a better price, or maybe you're more expensive, because you're more awesome.

Remember your other bottom lines. Money is not everything. If we get paid for work on an open source project (open code or open content), we always discount the price, often by 50%. If we care deeply about the work, we ask for less than usual. Conversely, if the work comes with added stress or administration, we charge a bit more.

One thing's for sure: sometimes (often) you're leaving money on the table. Someone out there is charging (way) more for (much) lower quality. Conversely, someone is probably charging less and doing a better job. The lack of transparency around pricing and salaries in the industry doubtless contributes to this. In the end, I tend to be as open as possible with the client. Often, prices change for the next piece of work for the same client, because I have more information the second time.

Opinions wanted

There's no doubt, it's a difficult subject. The range of plausible prices is huge: $50 to $500 per hour, as someone on Software Underground put it. Nearer $50 to $100 for a routine programming job, $200 for professional input, $400 for more awesomeness than you can handle. But if there's a formula, I've yet to discover it. And maybe a fair formula is impossible, because providing critical insight isn't really something you can pay for on a 'per hour' kind of basis — or shouldn't be.

I'm very open to more opinions on this topic. I don't think I've heard the same advice yet from any two people. When I asked one friend about it he said: "Keep increasing your prices until someone says No."

Then again, he doesn't drive a Porsche either.


If you found this post useful, you might like the follow-up post too: Beyond pricing: the fine print.


Strategies for a revolution

This must be a record. It has taken me several months to get around to recording the talk I gave last year at EAGE in Vienna — Strategies for a revolution. Rather a grandiose title, sorry about that, especially over-the-top given that I was preaching to the converted: the workshop on open source. I did, at least, blog about the goings on in the workshop itself at the time. I even followed it up with a slightly cheeky analysis of the discussion at the event. But I never posted my own talk, so here it is:

Too long, didn't watch? No worries, my main points were:

  1. It's not just about open source code. We must write open access content, put our data online, and push the whole culture towards openness and reproducibility. 
  2. We, as researchers, professionals, and authors, need to take responsibility for being more open in our practices. It has to come from within the community.
  3. Our conferences need more tutorials, bootcamps, hackathons, and sprints. These events build skills and networks much faster than (just) lectures and courses.
  4. We need something like an Open Geoscience Foundation to help streamline funding channels for open source projects and community events.

If you depend on open source software, or care about seeing more of it in our field, I'd love to hear your thoughts about how we might achieve the goal of having greater (scientific, professional, societal) impact with technology. Please leave a comment.


No secret codes: announcing the winners

The SEG / Agile / Enthought Machine Learning Contest ended on Tuesday at midnight UTC. We set readers of The Leading Edge the challenge of beating the lithology prediction in October's tutorial by Brendon Hall. Forty teams, mostly of 1 or 2 people, entered the contest, submitting several hundred entries between them. Deadlines are so interesting: it took a month to get the first entry, and I received 4 in the second month. Then I got 83 in the last twenty-four hours of the contest.

How it ended

Rank  Team                              F1      Algorithm      Language  Solution
1     LA_Team (Mosser, de la Fuente)    0.6388  Boosted trees  Python    Notebook
2     PA Team (PetroAnalytix)           0.6250  Boosted trees  Python    Notebook
3     ispl (Bestagini, Tuparo, Lipari)  0.6231  Boosted trees  Python    Notebook
4     esaTeam (Earth Analytics)         0.6225  Boosted trees  Python    Notebook

The winners are a pair of graduate petroleum engineers, Lukas Mosser (Imperial College, London) and Alfredo de la Fuente (Wolfram Research, Peru). Not coincidentally, they were also one of the more, er, energetic teams — it's safe to say that they explored a good deal of the solution space. They were also very much part of the discussion about the contest on GitHub.com and on the Software Underground Slack chat group, aka Swung (you're in there, right?).

I will be sending Raspberry Shakes to the winners, along with some other swag from Enthought and Agile. The second-place team will receive books from SEG (thank you SEG Book Mart!), and the third-place team will have to content themselves with swag. That team, led by Paolo Bestagini of the Politecnico di Milano, deserves special mention — their feature engineering approach was very influential, being used by most of the top-ranking teams.

Coincidentally, Gram and I talked to Lukas on Undersampled Radio this week:

Back up a sec, what the heck is a machine learning contest?

To enter, a team had to predict the lithologies in two wells, given wireline logs and other data. They had complete data, including lithologies, in nine other wells — the 'training' data. Teams trained a wide variety of models — from simple nearest neighbour models and support vector machines, to sophisticated deep neural networks and random forests. These met with varying success, with accuracies ranging between about 0.4 and 0.65 (i.e., error rates from 60% to 35%). Here's one of the best realizations from the winning model:

One twist that made the contest especially interesting was that teams could not just submit their predictions — they had to submit the code that made the prediction, in the open, for all their fellow competitors to see. As a result, others were quickly able to adopt successful strategies, and I'm certain the final result was better than it would have been with secret code.
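
If you're wondering what one of those shared entries looked like in practice, here is a minimal sketch of a simple baseline, in the spirit of the nearest-neighbour and support vector machine entries mentioned above. The file names and log-curve column names are assumptions based on the original tutorial data, not the actual contest code; your copy may differ.

```python
# Minimal baseline: a support vector machine on a few wireline-log curves.
# File names and column names are assumed; adjust to match the contest data.
import pandas as pd
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

train = pd.read_csv("training_data.csv")                   # hypothetical file name
features = ["GR", "ILD_log10", "DeltaPHI", "PHIND", "PE"]  # assumed log curves

model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10, gamma="scale"))
model.fit(train[features], train["Facies"])

blind = pd.read_csv("blind_wells.csv")                     # hypothetical file name
blind["PredictedFacies"] = model.predict(blind[features])
```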

I spent most of yesterday scoring the top entries by generating 100 realizations of the models. This was suggested by the competitors themselves as a way to deal with model variance. This was made a little easier by the fact that all of the top-ranked teams used the same language — Python — and the same type of model: extreme gradient boosted trees. (It's possible that the homogeneity of the top entries was a negative consequence of the open format of the contest... or maybe it just worked better than anything else.)
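
For the curious, here is roughly what that scoring procedure looks like in code. This is a sketch rather than the actual contest scorer: it continues from the baseline sketch above, and the hyperparameters, the micro-averaged F1, and the assumption that the blind wells' true facies are available (as they were to the organizers) are all illustrative.

```python
# Sketch of scoring by averaging 100 realizations of a stochastic
# gradient-boosted model. Hyperparameters and the choice of micro-averaged F1
# are illustrative assumptions.
import numpy as np
from sklearn.metrics import f1_score
from sklearn.preprocessing import LabelEncoder
from xgboost import XGBClassifier

encoder = LabelEncoder().fit(train["Facies"])   # xgboost expects 0-based labels
y_train = encoder.transform(train["Facies"])
y_blind = encoder.transform(blind["Facies"])    # assumes the true labels are known

scores = []
for seed in range(100):
    clf = XGBClassifier(n_estimators=200, max_depth=3, subsample=0.8,
                        colsample_bytree=0.8, random_state=seed)
    clf.fit(train[features], y_train)
    pred = clf.predict(blind[features])
    scores.append(f1_score(y_blind, pred, average="micro"))

print(f"Mean F1 over 100 realizations: {np.mean(scores):.4f} "
      f"(std {np.std(scores):.4f})")
```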

What now?

There will be more like this. It will have something to do with seismic data. I hope I have something to announce soon.

I (or, preferably, someone else) could write an entire thesis on learnings from this contest. I am busy writing a short article for next month's Leading Edge, so if you're interested in reading more, stay tuned for that. And I'm sure there will be others.

If you took part in the contest, please leave a comment telling us about your experience of it or, better yet, write a blog post somewhere and point us to it.