by pup55 » Fri 17 Jun 2005, 08:50:29
You are quite right. The area under the curve should be Q-inf. I am no longer capable of doing it myself, but if you go back to the original Verhulst equation in the paper by Roper and solve for y, you should be able to calculate the integral and arrive at an approximation of Q-inf, and by setting the first derivative to zero, you should be able to locate the peak.
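For anyone who wants to work it through, here is a minimal sketch of that calculus using sympy, assuming the standard logistic (Verhulst) form. Qinf, k, and tm are my own labels for the ultimate recovery, the steepness, and the midpoint year; check Roper's paper for his exact notation.

```python
import sympy as sp

t, Qinf, k, tm = sp.symbols('t Qinf k tm', positive=True)

# Cumulative production: the logistic S-curve.
Q = Qinf / (1 + sp.exp(-k * (t - tm)))

# Annual production is the derivative of cumulative production.
P = sp.diff(Q, t)

# Q is the antiderivative of P, so the area under the full production
# curve is Q(+oo) - Q(-oo) = Qinf - 0 = Qinf.
area = sp.limit(Q, t, sp.oo) - sp.limit(Q, t, -sp.oo)
print(area)                        # -> Qinf

# Setting dP/dt = 0 locates the peak: it falls at t = tm,
# the point where exactly half of Q-inf has been produced.
print(sp.solve(sp.diff(P, t), t))  # -> [tm]
print(sp.simplify(Q.subs(t, tm)))  # -> Qinf/2
```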
The value of Q-inf is critical to the calculation of the curve. If the Q-inf value above were greatly higher or lower, the shape of the curve, and therefore the peak prediction, would be completely different.
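Here is a toy illustration of that sensitivity (my own construction, not real data): generate a partial production history from a known logistic, then fit it while holding Q-inf fixed at different assumed values. The low assumption typically pulls the predicted peak earlier and the high one pushes it later.

```python
import numpy as np
from scipy.optimize import curve_fit

def production(t, k, tm, Qinf):
    """Annual production: the derivative of the logistic cumulative curve."""
    e = np.exp(-k * (t - tm))
    return Qinf * k * e / (1.0 + e) ** 2

# Synthetic "history": true Q-inf = 2000, true peak in 2010, data only to 1995.
rng = np.random.default_rng(0)
years = np.arange(1950, 1996)
true = production(years, k=0.08, tm=2010.0, Qinf=2000.0)
data = true * (1 + 0.03 * rng.standard_normal(true.size))

for assumed_Qinf in (1500.0, 2000.0, 2500.0):
    # Hold Q-inf fixed at the assumed value; fit only k and tm.
    popt, _ = curve_fit(lambda t, k, tm: production(t, k, tm, assumed_Qinf),
                        years, data, p0=(0.05, 2005.0))
    print(f"assumed Q-inf {assumed_Qinf:6.0f} -> predicted peak {popt[1]:.1f}")
```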
When we did the exercises a year ago, in the "peak mart" thread, we ran through a lot of calculations and did some "blind" tests where softlanding gave us some partial curves and we tried to predict the peak. What we found out was that we needed to be at least in the ballpark on Q-inf in order to get close to the correct curve shape. We also found out that there is an "inflection point" on the curve, about 2/3 of the way up the left side, where the slope of the tangent line reaches its maximum (production is climbing at its fastest rate there). If you have data past that point, it is also very helpful.
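That 2/3 figure falls straight out of the logistic algebra, as it happens. A quick symbolic check, using the same assumed form and symbol names as the sketch above:

```python
import sympy as sp

t, Qinf, k, tm = sp.symbols('t Qinf k tm', positive=True)
Q = Qinf / (1 + sp.exp(-k * (t - tm)))
P = sp.diff(Q, t)

# The inflection points of the production curve are where its
# second derivative vanishes; take the one on the rising side.
infl = sp.solve(sp.diff(P, t, 2), t)
t_left = min(infl, key=lambda s: float(s.subs({k: 1, tm: 0})))

# Height of production at that point, relative to the peak P(tm).
ratio = sp.simplify(P.subs(t, t_left) / P.subs(t, tm))
print(ratio)  # -> 2/3: the left inflection point sits exactly 2/3 of the way up
```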
So the ramification of this, as it applies to us, is that the value of remaining reserves needs to be pretty well known in order to do a good job on peak prediction. Therefore this whole issue of "reserves transparency" is critical to being able to build this type of model. Also, if you think you are "close to the peak", you can do a pretty good job of fitting the curve to the existing data.
After that, it becomes an issue of resolution, that is, to what level of accuracy are you satisfied that you have pinned down the peak? If you have a hundred years of data and you can predict the peak within 5 years, is your methodology good enough? We were never really able to answer that question to any degree of satisfaction.
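To put rough numbers on both points (that near-peak data pin the curve down, and that the fit itself says something about the resolution), here is a sketch that leaves all three parameters free, including Q-inf, and reads a one-sigma error bar on the peak year off the fit covariance. This framing is mine; we never did it this formally in the thread.

```python
import numpy as np
from scipy.optimize import curve_fit

def production(t, k, tm, Qinf):
    # Same model function as the sensitivity sketch above.
    e = np.exp(-k * (t - tm))
    return Qinf * k * e / (1.0 + e) ** 2

rng = np.random.default_rng(1)
years = np.arange(1950, 2009)   # history running up to just short of the 2010 peak
true = production(years, k=0.08, tm=2010.0, Qinf=2000.0)
data = true * (1 + 0.03 * rng.standard_normal(true.size))

popt, pcov = curve_fit(production, years, data, p0=(0.05, 2000.0, 1500.0))
k_fit, tm_fit, Qinf_fit = popt
sigma_tm = np.sqrt(pcov[1, 1])  # crude one-sigma uncertainty on the peak year

print(f"fitted Q-inf {Qinf_fit:.0f}, peak {tm_fit:.1f} +/- {sigma_tm:.1f}")
# Cut the history off at 1980 instead and the error bar should blow up
# (or the fit may not converge at all): pre-inflection data simply do
# not constrain Q-inf or the peak.
```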
This applies directly to the work of Hubbert himself, plus Laherrère and others. If you have data of a given quality, and the method predicts a peak date of 2000, or 2009, or whatever, is there enough accuracy in the methodology to be able to make plans and/or policy decisions on that basis?
So I think the real answer is to use many sources of data, refine the predictions as new data come in to see whether or not the world is going to end, and not take any one prediction too seriously (including those of ASPO and others, and especially those of the amateur pup55).
This is annoying to people who do not understand this type of modeling, because they expect some kind of easy-to-digest number. It is the very thing that Michael Lynch criticizes Campbell for all the time: as new data come in, you have to recalculate everything, the peak prediction changes, and the less-enlightened consider this "waffling" or whatever. Deffeyes has solved the problem by picking a peak date kind of tongue-in-cheek, but he freely admits that it is just a guess, and he has been around long enough, and does it with enough of a sense of humor, that he does not aggravate people and they kind of accept it.
Sorry to have gotten long-winded on this, but some background is in order for those who joined us recently and are reading up on this. For further edification, feel free to go back to the "peak mart" thread and read up on what we did a year ago. The links to the original equations are in there too, for the calculus-proficient to go back and view.
Also, I am still happy to have an ongoing test of this curve-fitting ability. Anybody who wants to can submit some partial production curves for any country, and I will calculate the curve using this method and predict the peak. Then it is easy to compare against the actual peak and see whether I called it accurately. That way, you can test our proficiency.