General discussions of the systemic, societal and civilisational effects of depletion.
by EnergySpin » Mon 26 Dec 2005, 06:51:22
Pretty good thread; let's see if I can contribute to it.
I think entropyfails wants to evaluate two complementary and mutually exclusive hypotheses on the basis of data (i.e. frequency of blackouts etc.).
The two hypotheses are Olduvai (O) and Civilization (C). Since they are complementary and mutually exclusive, the following relation holds true:
P(O)+P(C)=1
Now before we can use the frequency of blackouts to update the probabilities of the two competing hypotheses, we've got to specify the data models, i.e. the probability that blackouts will be observed provided that O is true vs. the probability of blackouts occurring if civilization is to continue.
The Olduvai theory is pretty specific regarding its prediction, i.e. we will see an increasing frequency of blackouts, but to the best of my knowledge it does not give a quantitative metric for this occurrence (e.g. how many of them, duration etc.). On the other hand, the civilization hypothesis is also compatible with blackouts (after all, life is compatible with illness), so we've got a problem there as well.
The best way to monitor how the probabilities of the two competing hypotheses change is to apply Bayesian survival analysis to the data series of blackouts (number of events and their duration), but I'm afraid this will be way too complicated for people to follow. Rather, you might want to set a date (e.g. 2012) and see if the lights are still on 24-7. Then per the Olduvai theory P(Lights On|O)=0 => P(O|Lights On)=0, i.e. if the lights are still on, the posterior probability of this hypothesis is ZERO.
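The two-hypothesis update can be written out in a few lines (a sketch of mine; the hypothesis labels and the 0/1 likelihoods are just the ones discussed above):

```python
def posterior_o(prior_o, p_data_given_o, p_data_given_c):
    """Bayes' theorem for two mutually exclusive, exhaustive hypotheses O and C."""
    prior_c = 1.0 - prior_o  # P(O) + P(C) = 1
    evidence = prior_o * p_data_given_o + prior_c * p_data_given_c
    return prior_o * p_data_given_o / evidence

# Olduvai assigns probability 0 to "lights still on 24-7 in 2012",
# so observing the lights on forces its posterior to exactly zero:
print(posterior_o(0.5, 0.0, 1.0))  # -> 0.0
```

Note that the converse is not symmetric: seeing blackouts would raise P(O) but not prove it, which is exactly the problem with the data model described above.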
My pre-Bayesian assessment of the Olduvai hypothesis: I consider the basic metric of the theory (energy use per capita) a rather poor measure. If one were to double the efficiency of processes across society, then energy use per capita would halve. Such a civilization is not going to collapse anytime soon (if you ask me), yet collapse is precisely what the theory would predict from that falling metric.
The other problem I have with the theory has to do with the unjustified emphasis it puts on the role of FF in civilization. With the exception of SUVs, our machines do not "use" FF. They use (and are made with) electricity, which can be made with FF.
In the case of petroleum, for example, the efficiency of its conversion to electricity is about 33%. Multiply this number by the EROEI of Saudi oil (30) and you get an initial-investment-to-electricity ratio of 10. The initial-investment-to-electricity ratio of nuclear is 50-60 (LWR) and can increase to 60-140 if one deploys closed-fuel-cycle reactors. Wind is 25-35 (offshore); PVs are close to 8-10. On the basis of these considerations, my prior probability for the Olduvai hypothesis is pretty low, i.e. in the same range as the prior for cow mutilations by aliens (and I am not referring to Mexicans) or ESP. The latter situation has been discussed by E.T. Jaynes in his book Probability Theory: The Logic of Science, where he discusses the application of probability theory (i.e. the Bayesian framework) to "weird" hypothesis/model evaluation. IIRC he assigns a prior equal to 10^(-60), i.e. 0.000...1 with 59 zeros after the decimal point, to the "weirdos".
"Nuclear power has long been to the Left what embryonic-stem-cell research is to the Right--irredeemably wrong and a signifier of moral weakness."Esquire Magazine,12/05
The genetic code is commaless and so are my posts.
EnergySpin
- Intermediate Crude
- Posts: 2248
- Joined: Sat 25 Jun 2005, 03:00:00
by chriskn » Mon 26 Dec 2005, 07:04:39
I struggle with the concept of a finite world when I see all around me the abundance that nature provides. The planet will continue; the real question is whether we will learn to adapt and live in harmony quickly enough so that we also survive. Each of us must play our part by adapting our individual lifestyles.
chriskn
- Wood
- Posts: 1
- Joined: Mon 26 Dec 2005, 04:00:00
by entropyfails » Mon 26 Dec 2005, 10:29:45
Andrew_S wrote:
> To put it another way, on the real number line let Olduvai be called 1.0 and let Civilization be called 0.0. You seem to be trying to estimate the relative merits of the two mutually incompatible hypotheses as if there exists a continuous parameter from 0.0 to 1.0 which can be estimated. However, as stated, 0.0 and 1.0 represent two incompatible hypotheses, i.e. nothing in between.
Yes, but we don't know which one to believe. Like I said, by setting the priors in the way I have outlined and applying Bayesian analysis to the data as it comes to light, we can watch whether the Olduvai theory's probability has an increasing trend or a decreasing one.
And while these two theories are mutually incompatible, other theories that people float around here are not as incompatible, and we can use Bayesian analysis to determine which best predicts our dataset.
Andrew_S wrote:
> Another way of putting it: you have defined what's characteristic of Civilization and what of Olduvai. You could just as easily say, oh, things are 0.5% Olduvai-ish this week, I'm not really a fan of that hypothesis. Then if a year later things are 95% Olduvai-ish you could say yeah, there's a lot to be said for Olduvai. No need for Bayes' theorem there. Bayes' works in retrospect, it doesn't predict.
The hypothesis predicts. Bayes' theorem tells us which is more likely given the current data. Of course we have to wait for data; I never said otherwise. I just want to know which theory has, up to this point, best predicted our situation. Then we can extrapolate consequences based on that theory.
Andrew_S wrote:
> A parameterization of the problem is what you really want. Some knowledge of how the parameter(s) determine the outcome (Olduvai or Civilization) and a means to estimate the parameter(s) and their time course might provide a means to test the relative merits of the hypotheses.
Well, I have provided some parameters at the top for both of those theories. Because the Olduvai theory simply uses EUP as its measure and Civilization theory encompasses more variables, I have tried to use EUP as the parameter for this simple case. I think the problem has been pretty well parameterized, though I would like firmer numbers for Civilization theory's predicted EUP.
From your post, I think you may have some misunderstandings about how and where you can apply Bayesian analysis. Check out EnergySpin's post above for a recap of how we can use it to apply the theory, along with validation of its use in this case.
EntropyFails
"Little prigs and three-quarter madmen may have the conceit that the laws of nature are constantly broken for their sakes." -- Friedrich Nietzsche
entropyfails
- Expert
- Posts: 565
- Joined: Wed 30 Jun 2004, 03:00:00
by entropyfails » Mon 26 Dec 2005, 10:47:17
seldom_seen wrote:
> I like the idea of what you're doing, but it seems that applying a mathematical formula for calculating probabilities to whether industrial civilization will go on as is (civilization theory) or collapse (olduvai theory) is the equivalent of, say, filling up your gas tank, driving around all day long and using Bayes' theorem to determine whether you will run out of gas or not.
I disagree. It seems more like two people in the car arguing about how much gas they have left in the tank. They each have many different "fuel" dials, but they both feel that a few of the dials provide good data. They take these dials and make predictions of what levels they will read at certain times. Then we test the dials at those times. Then we apply Bayes' theorem to determine which person's theory more likely predicted the data.
seldom_seen wrote:
> The answer is already deducible through the scientific method. There is a 100% chance for the 'run out of gas' theory, based on the continual drawdown of a finite amount of gas in your tank.
> The only undetermined factor is time.
Firstly, anyone predicting anything at 100% probability has a religious belief. (Above, I stated that the 100% number I used was obtained by rounding, and I just wanted to illustrate how Bayes' theorem works. I'm going to put the exact calculation in there, because I've seen WAY too many 100% or 0% numbers in this thread.) Both sides of this debate have done this, and I feel that taking a different approach may help us talk about our theories without causing conniption fits.
Secondly, if we have an undetermined factor and theories about that undetermined factor, we can use Bayes' theorem to find out which models the current data better, and thus which theory we should find credible. I wish to do this with this thread.
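A minimal sketch of that sequential procedure (all likelihood numbers below are invented purely for illustration, not taken from either theory):

```python
def update(p_o, lik_o, lik_c):
    """One Bayesian update of P(O) given the two data models' likelihoods."""
    num = p_o * lik_o
    return num / (num + (1.0 - p_o) * lik_c)

p_o = 0.5  # start agnostic between the two theories

# Each "dial reading" comes with P(reading|O) and P(reading|C), both made up:
readings = [(0.8, 0.3), (0.7, 0.4), (0.6, 0.5)]
for lik_o, lik_c in readings:
    p_o = update(p_o, lik_o, lik_c)

# Observations that O predicted better steadily raise P(O):
print(round(p_o, 3))  # -> 0.848
```

The point is only the shape of the calculation: whichever theory keeps predicting the dial readings better accumulates probability, without anyone declaring 0% or 100% up front.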
EntropyFails
by entropyfails » Mon 26 Dec 2005, 11:03:21
EnergySpin wrote:
> Pretty good thread; let's see if I can contribute to it.
> I think entropyfails wants to evaluate two complementary and mutually exclusive hypotheses on the basis of data (i.e. frequency of blackouts etc.). The two hypotheses are Olduvai (O) and Civilization (C). Since they are complementary and mutually exclusive, the following relation holds true:
> P(O)+P(C)=1
> Now before we can use the frequency of blackouts to update the probabilities of the two competing hypotheses, we've got to specify the data models, i.e. the probability that blackouts will be observed provided that O is true vs. the probability of blackouts occurring if civilization is to continue. The Olduvai theory is pretty specific regarding its prediction, i.e. we will see an increasing frequency of blackouts, but to the best of my knowledge it does not give a quantitative metric for this occurrence (e.g. how many of them, duration etc.). On the other hand, the civilization hypothesis is also compatible with blackouts (after all, life is compatible with illness), so we've got a problem there as well.
> The best way to monitor how the probabilities of the two competing hypotheses change is to apply Bayesian survival analysis to the data series of blackouts (number of events and their duration), but I'm afraid this will be way too complicated for people to follow. Rather, you might want to set a date (e.g. 2012) and see if the lights are still on 24-7. Then per the Olduvai theory P(Lights On|O)=0 => P(O|Lights On)=0, i.e. if the lights are still on, the posterior probability of this hypothesis is ZERO.
Thanks. You hit a home run with that summary. To all reading: if the way I presented it didn't make much sense, this recap summarizes the method.
As for it going over people's heads, I'm not that concerned. I'm personally willing to educate people on this method, because I think it will help us all come to a greater understanding of our world. Do you know of any database of blackouts that we can use to jumpstart this process?
EnergySpin wrote:
> My pre-Bayesian assessment of the Olduvai hypothesis: I consider the basic metric of the theory (energy per capita) a rather poor measure.
I don't know about that. I have seen EUP used in several academic journals. It seems like an accepted metric, though it could end up being a poor one for predicting Civilization's demise. I wish to kickstart a process for determining this.
EnergySpin wrote:
> The other problem I have with the theory has to do with the unjustified emphasis it puts on the role of FF in civilization. With the exception of SUVs, our machines do not "use" FF. They use (and are made with) electricity, which can be made with FF.
The Olduvai theory doesn't have anything to say about fossil fuels. It only uses EUP, energy use per capita. The whole FF thing is an addition from people around here.
EnergySpin wrote:
> On the basis of these considerations, my prior probability for the Olduvai hypothesis is pretty low, i.e. in the same range as the prior for cow mutilations by aliens or ESP. The latter situation has been discussed by E.T. Jaynes in his book Probability Theory: The Logic of Science, where he discusses the application of probability theory (i.e. the Bayesian framework) to "weird" hypothesis/model evaluation. IIRC he assigns a prior equal to 10^(-60) to the "weirdos".
At least you don't set it at 0%, so your belief doesn't seem to have a religious bent. You have, however, assigned the same probability to widely differing situations, so I would watch your neural nets. *grin*
Even starting with a prior that low, Bayes' theorem can resurrect it to a high probability if it ends up being a good explanation for the data.
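This is easy to see in odds form (the 10:1 Bayes factor per observation is my own illustrative number, not anything from Jaynes):

```python
import math

prior = 1e-60          # Jaynes-style prior for a "weird" hypothesis
bayes_factor = 10.0    # assume each independent observation favours it 10:1

# Each observation adds log10(bayes_factor) to the log10-odds, so the number
# of such observations needed to climb back to even odds (probability 0.5) is:
n = round(-math.log10(prior) / math.log10(bayes_factor))
print(n)  # -> 60
```

So a 10^(-60) prior is not a death sentence: roughly sixty independent observations, each ten times more probable under the "weird" hypothesis, bring it back to even odds.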
Thanks for your input!
EntropyFails
by ashurbanipal » Mon 26 Dec 2005, 12:03:06
To me, Bayesian analysis applied in this way is a species of affirming the consequent, which is a well known and oft-committed fallacy. Here's a simple example to explain why:
P1: If it rains, the street will be wet.
P2: The street is wet.
C1: Therefore, it must have rained.
The problem is that that's not the only way the street can get wet. Someone may have wet it down with a garden hose. Or someone may have run over a fire hydrant. Or a water main may have burst. Or a creek may have run over its banks. Etc.
So if there are worldwide blackouts in 2011, we still will not know whether Olduvai is correct because any number of other theories may explain them. Even if everything that Olduvai predicts happens precisely on schedule, we still will not know whether the theory itself is correct because another theory, of entirely different character, may have exactly the same observational consequences.
In a world that is not whole, you have got to fight just to keep your soul.
-Ben Harper-
-

ashurbanipal
- Lignite

-
- Posts: 263
- Joined: Tue 13 Sep 2005, 03:00:00
- Location: A land called Honalee
-
by JustinFrankl » Mon 26 Dec 2005, 14:17:43
ashurbanipal wrote:
> To me, Bayesian analysis applied in this way is a species of affirming the consequent, which is a well known and oft-committed fallacy. Here's a simple example to explain why:
> P1: If it rains, the street will be wet.
> P2: The street is wet.
> C1: Therefore, it must have rained.
You can also say that this logic failed simply because the logic wasn't applied properly.
P1: If A then B. (If event 'A' happens, then event 'B' will happen. If it rains, the street gets wet.)
P2: B occurred. (The street got wet.)
C1: You have insufficient information to say anything about event 'A' in this case.
One way of correcting this logic is:
P1: If and only if A, then B. (If and only if it rains, the street gets wet.)
P2: B occurred. (The street got wet.)
C1: A must have also occurred. (It must have rained.)
The logic here is solid, but the premise, P1, is in itself testable and can be disproven. Other things make the street wet.
Another way of correcting this logic is:
P1: If A, then B. (If it rains, the street gets wet.)
P2: B DID NOT occur. (The street DID NOT get wet.)
C1: Then A must have also NOT occurred. (It DID NOT rain.)
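Both claims can be brute-force checked over all truth assignments (a throwaway sketch of mine, nothing Olduvai-specific):

```python
from itertools import product

def implies(a, b):
    return (not a) or b

def valid(premises, conclusion):
    """An argument form is valid iff every assignment satisfying all
    premises also satisfies the conclusion."""
    return all(conclusion(a, b)
               for a, b in product([True, False], repeat=2)
               if all(p(a, b) for p in premises))

# Affirming the consequent -- (A -> B), B, therefore A -- is invalid:
print(valid([lambda a, b: implies(a, b), lambda a, b: b],
            lambda a, b: a))                       # -> False

# Modus tollens -- (A -> B), not B, therefore not A -- is valid:
print(valid([lambda a, b: implies(a, b), lambda a, b: not b],
            lambda a, b: not a))                   # -> True
```

The invalid form fails on the assignment A=False, B=True: the street is wet, it did not rain.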
ashurbanipal wrote:
> The problem is that that's not the only way the street can get wet. Someone may have wet it down with a garden hose. Or someone may have run over a fire hydrant. Or a water main may have burst. Or a creek may have run over its banks. Etc.
> So if there are worldwide blackouts in 2011, we still will not know whether Olduvai is correct, because any number of other theories may explain them. Even if everything that Olduvai predicts happens precisely on schedule, we still will not know whether the theory itself is correct, because another theory, of entirely different character, may have exactly the same observational consequences.
Does the Olduvai Theory ever attempt to claim that declining EUP is the only road toward a civilizational crash? Because if it does, then aren't we Bayes testing:
H1: If and only if the Olduvai theory is correct, then there will be permanent blackouts.
Permanent blackouts might also occur as a result of asteroid strikes, nuclear war, runaway global warming, magnetic pole shift, lunar orbit deterioration, alien invasion, supernova, etc.
Monte had a point about the long-term viability of civilization being 0%. Let's look at this side of the coin.
Assumption 1: All things eventually end. A.k.a. "nothing lasts forever". (Not even the Earth and sky, sorry Kansas.)
Assumption 2: Industrial Civilization (indeed all of human society) belongs to the category of "all things".
Conclusion: Industrial Civilization will eventually end.
It may take 20 years, it may take until the heat death of the universe in 200 billion years, but this will all end at some point. I'll be interested to see if there is any serious debate about whether human society will continue "forever".
But going from this point, perhaps it is better to do the Bayesian analysis like this:
Hypothesis 1: Industrial Civilization will end within the next 30 years as predicted by the Olduvai Theory
Hypothesis 2: Something else will end it within 30 years, or it will not end within 30 years
"We have seen the enemy, and he is us." -- Walt Kelly
JustinFrankl
- Tar Sands
- Posts: 623
- Joined: Mon 22 Aug 2005, 03:00:00
by Andrew_S » Mon 26 Dec 2005, 14:23:03
entropyfails wrote:
> EnergySpin wrote:
> > Pretty good thread; let's see if I can contribute to it. [...] Then per the Olduvai theory P(Lights On|O)=0 => P(O|Lights On)=0, i.e. if the lights are still on, the posterior probability of this hypothesis is ZERO.
> Thanks. You hit a home run with that summary. To all reading: if the way I presented it didn't make much sense, this recap summarizes the method.
It would help to define everything you are using and how you treat it. So, are the hypotheses Olduvai and Civilization treated as possible outcomes of a random variable?
EnergySpin's statement
P(O)+P(C)=1
says that. So now O and C are the two possible outcomes of a random variable, let's call it X. Is it assumed to have a binomial distribution of one trial, where you're estimating and updating the parameter p of the binomial distribution?
If so, do you update the parameter after each trial, after some number of trials, or does that not matter?
The process unfolds over time, but is the parameter assumed to be time-independent or time-dependent? Does this have any bearing on the question above, i.e. does how frequently (after how many trials) the updating is done matter?
The thread could be more effective if you explained your proposal and methods explicitly.
Andrew_S
- Tar Sands
- Posts: 634
- Joined: Sun 09 Jan 2005, 04:00:00
by ashurbanipal » Mon 26 Dec 2005, 15:23:49
JustinFrankl wrote:
> You can also say that this logic failed simply because the logic wasn't applied properly.
Depends on what you mean.
JustinFrankl wrote:
> P1: If A then B. (If event 'A' happens, then event 'B' will happen. If it rains, the street gets wet.)
> P2: B occurred. (The street got wet.)
> C1: You have insufficient information to say anything about event 'A' in this case.
Correct--that's the upshot of my post.
JustinFrankl wrote:
> One way of correcting this logic is:
> P1: If and only if A, then B. (If and only if it rains, the street gets wet.)
> P2: B occurred. (The street got wet.)
> C1: A must have also occurred. (It must have rained.)
Well, that's different logic from what I posted. Establishing strong implication in the real world is fairly rare. It does happen (figure A is called a triangle if and only if it is a 2-D linear figure with exactly 3 sides completely enclosing an area). But how would you do it with the Olduvai theory, or for that matter, any theory?
JustinFrankl wrote:
> Another way of correcting this logic is:
> P1: If A, then B. (If it rains, the street gets wet.)
> P2: B DID NOT occur. (The street DID NOT get wet.)
> C1: Then A must have also NOT occurred. (It DID NOT rain.)
Modus tollens is entirely valid. If we don't suffer worldwide blackouts, then Olduvai is not a valid theory.
I think the success of scientific theories depends only a little on their predictive power, and much more on their explanatory power, which is a somewhat different domain.
by EnergySpin » Mon 26 Dec 2005, 15:46:43
Andrew_S wrote:
> It would help to define everything you are using and how you treat them. So, are hypotheses Olduvai and Civilization treated as possible outcomes of a random variable?
Not really; the concept of a random variable, i.e. the outcome of a repetitive experiment, is not one that Bayesians accept (or at least the Bayesian "school" of Laplace, John Maynard Keynes, Jeffreys, and E.T. Jaynes accepts). For them, probability is just a measure of uncertainty about the truth of a given hypothesis.
So mathematical statements such as P(H)=0 (false), P(H)=1 (true), P(H)=0.75 are numerical measures of uncertainty regarding the truth of the hypothesis H. In the situation that entropyfails is exploring, the two competing hypotheses (i.e. C and O) are mutually exclusive (i.e. only one of them can be true) and exhaustive (i.e. these are the only two possible models of the world). In our present situation we have incomplete information to decide which one is true, and we express that with numerical values that sum to one, i.e. if P(O)=0.25 then P(C)=0.75, and if P(O)=0.3 then P(C)=0.7. These are our prior, or pre-data, probabilities: the ones we assign to the two hypotheses before we have seen any data. (For those of you with a frequentist training, these values correspond to the regularization parameters in Tikhonov's theory of ill-posed problems.)
Andrew_S wrote:
> says that. So now O and C are the 2 possible outcomes of a random variable, let's call it X. So, is it assumed to have a binomial distribution of one trial and you're estimating and updating the parameter, p, of the binomial distribution?
> If so, do you update the parameter after each trial, or after how many trials, or does that not matter?
Do not think in terms of a random experiment, Andrew... each of the two hypotheses needs a data model (which I did not define). In the case of the civilization hypothesis, one should use a joint Poisson model for the occurrence of a blackout, its duration, and its severity, with constant parameters (yikes, we are already in a trivariate space).
For the Olduvai hypothesis, I would have to read Duncan's paper again, but IIRC he predicted an escalating course, i.e. blackouts that become more frequent and extensive. There are many possibilities from the Bayesian survival analysis toolkit, but in order to keep it simple we could use a hierarchical model: P(freq blackout X Duration X Severity) ~ Trivariate Poisson(theta(t)), where theta(t) obeys a first-order stochastic differential equation.
In any case, once the data or likelihood model has been defined, one uses Bayes' theorem to update the beliefs, i.e. the plausibility of the two hypotheses. From the looks of it, I doubt there will be a closed formula; you've got to use MCMC methods or a numerical integration code. I'd go with the first method, if I were you (unless you are working on a PhD thesis).
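For two fully specified (point) data models the machinery reduces to a likelihood ratio, with no MCMC needed. Here is a toy version with invented blackout counts and invented rates, just to show the shape of the calculation (a univariate stand-in for the trivariate models sketched above):

```python
import math

def poisson_loglik(counts, rates):
    """Log-likelihood of independent Poisson counts under the given rates."""
    return sum(k * math.log(r) - r - math.lgamma(k + 1)
               for k, r in zip(counts, rates))

# Hypothetical yearly blackout counts (invented for illustration):
counts = [3, 4, 3, 5, 4]

# "C": constant rate; "O": rate escalating 50% a year (both parameters made up)
ll_c = poisson_loglik(counts, [4.0] * len(counts))
ll_o = poisson_loglik(counts, [2.0 * 1.5 ** t for t in range(len(counts))])

# With equal priors, the log Bayes factor is just the log-likelihood ratio;
# a negative value means these flat counts favour the constant-rate model:
print(ll_o - ll_c)
```

Real models would put priors on the rate parameters and integrate them out, which is where the MCMC or numerical integration mentioned above comes in.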
Andrew_S wrote:
> The process unfolds over time, but is the parameter assumed to be time-independent or time-dependent? Does this have any bearing on the question above: does how frequently (many trials) the updating is done matter?
> The thread could be more effective if you explain your proposal and methods explicitly.
The aforementioned exposition is the best I can do, without formulas.
I do not know if this site can render MathML properly and whether people are really interested in the actual formulas.
To get back to the problem of prior assignment and the "correctness" of the theory: EUP is not the proper metric to measure civilization collapse, because it lacks specificity and sensitivity (entropyfails, I apologise for not being clear in my first post).
Let me give you an analogy to show why it is a flawed measure (JustinFrankl was on the right track on this one, but the Bayesian framework can be applied even in these cases). To see why EUP is an improper civilization-health metric, consider the more familiar case of the caloric intake of a person (analogous to EUP) and compare the following situations:
a) pubertal growth
b) fat slob watching Soaps
c) a person on a diet
d) a hospitalized person
e) a person with stable weight on an exercise program
Now situations (a) and (b) are associated with increasing caloric intake ("EUP"), whereas (c) and (d) go with a decreasing EUP, and (e) with a stable, increasing, or decreasing EUP. Situations (a)/(c)/(e) are associated with a healthy state (corresponding to the C hypothesis), whereas situations (b) and (d) are not (in clinical research we would call EUP and the clinical state confounding variables).
The data series that Duncan has been using regarding EUP are usually interpreted as the effect of increasing efficiency in our energy use, i.e. we can actually decrease EUP and perform the same useful work if we use energy more efficiently, OR we can decide not to consume as much (go on a diet). Decreasing EUP does not necessarily mean that we are sick or that civilization is about to crash.
Therefore, just observing EUP without knowledge of the situation is not going to help you much. The sensitivity and specificity are so low that Bayesian updating of the competing hypotheses leads to posterior probabilities that are almost identical to the prior values. Where does that leave us re: Dr Duncan? As with any correlational approach to multi-scale modelling of a complex system, it is bound to fail.
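The diagnostic-test analogy can be put in numbers (the prior, sensitivity, and specificity below are all invented): treating "falling EUP" as a test for "civilization is collapsing", a nearly uninformative test barely moves the prior.

```python
def posterior_given_positive(prior, sensitivity, specificity):
    """P(collapse | falling EUP), treating falling EUP as a diagnostic test."""
    p_positive = prior * sensitivity + (1 - prior) * (1 - specificity)
    return prior * sensitivity / p_positive

# Sensitivity and specificity barely better than a coin flip (made-up values):
post = posterior_given_positive(0.10, 0.55, 0.50)
print(round(post, 3))  # -> 0.109, scarcely moved from the 0.10 prior
```

A high-sensitivity, high-specificity test with the same prior would move the posterior substantially, which is exactly the sense in which EUP is an uninformative "symptom".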
My 2c
by Andrew_S » Tue 27 Dec 2005, 02:42:48
EnergySpin wrote:
> Andrew_S wrote:
> > It would help to define everything you are using and how you treat them. So, are hypotheses Olduvai and Civilization treated as possible outcomes of a random variable?
> Not really; the concept of a random variable, i.e. the outcome of a repetitive experiment, is not one that Bayesians accept (or at least the Bayesian "school" of Laplace, John Maynard Keynes, Jeffreys, and E.T. Jaynes accepts). For them, probability is just a measure of uncertainty about the truth of a given hypothesis.
Well, Bayes did (accept). I do not in principle dispute subjective versus frequentist points of view.
EnergySpin wrote:
> So mathematical statements such as P(H)=0 (false), P(H)=1 (true), P(H)=0.75 are numerical measures of uncertainty regarding the truth of the hypothesis H. [...] These are our prior, or pre-data, probabilities: the ones we assign to the two hypotheses before we have seen any data. (For those of you with a frequentist training, these values correspond to the regularization parameters in Tikhonov's theory of ill-posed problems.)
And I discerned as much before your first helpful post. You add nothing here.
That probability is a measure of uncertainty was not a matter of dispute.
EnergySpin wrote:
> Andrew_S wrote:
> > says that. So now O and C are the 2 possible outcomes of a random variable, let's call it X. So, is it assumed to have a binomial distribution of one trial and you're estimating and updating the parameter, p, of the binomial distribution?
> > If so, do you update the parameter after each trial, or after how many trials, or does that not matter?
> Do not think in terms of a random experiment, Andrew... each of the two hypotheses needs a data model (which I did not define). In the case of the civilization hypothesis one should use a joint Poisson model for the occurrence of a blackout, its duration, and its severity, with constant parameters (yikes, we are already in a trivariate space).
Justify your first statement. If it's not viewed as a random experiment, explain and justify your first equation.
@entropyfails: justify and explain your presented calculations.
What's the justification for a Poisson model?
You presented the basics as if they were samples of a random variable. Why not now? You do not answer my questions, you merely evade.
EnergySpin wrote:
> For the Olduvai hypothesis, I would have to read Duncan's paper again, but IIRC he predicted an escalating course, i.e. blackouts that become more frequent and extensive. There are many possibilities from the Bayesian survival analysis toolkit, but in order to keep it simple we could use a hierarchical model: P(freq blackout X Duration X Severity) ~ Trivariate Poisson(theta(t)), where theta(t) obeys a first-order stochastic differential equation.
> In any case, once the data or likelihood model has been defined, one uses Bayes' theorem to update the beliefs, i.e. the plausibility of the two hypotheses. From the looks of it, I doubt there will be a closed formula; you've got to use MCMC methods or a numerical integration code. I'd go with the first method, if I were you (unless you are working on a PhD thesis).
This thread, in my book, as a research proposal got an epsilon minus until your first post. Then it went up to an epsilon for an inkling of explanation. Wanna improve? It stands at epsilon until you explain convincingly.
EnergySpin wrote:
Andrew_S wrote:
The process unfolds over time, but is the parameter assumed to be time-independent or time-dependent? Does this have any bearing on the question above: does how frequently (after how many trials) the updating is done matter? The thread could be more effective if you explained your proposal and methods explicitly.
The aforementioned exposition is the best I can do without formulas. I do not know whether this site can render MathML properly, or whether people are really interested in the actual formulas.
You didn't answer the essential questions. A lot can be said and answered in words, especially by those who have a clear idea of what they're proposing.
Andrew_S - Tar Sands - Posts: 634 - Joined: Sun 09 Jan 2005, 04:00:00
by EnergySpin » Tue 27 Dec 2005, 03:53:59
Andrew_S wrote: [...]
Well, Bayes did (accept). I do not in principle dispute subjective versus frequentist points of view. The connections between the two were established by E.T. Jaynes; we should not re-discover the wheel. The degree-of-plausibility interpretation is more general than the limiting frequency of a random experiment.
Andrew_S wrote:
And I discerned so much before your first helpful post. You add nothing here. That probability is a measure of uncertainty was not a matter of dispute.
You were looking for an interpretation in terms of random variables, so appearances suggested otherwise.
Andrew_S wrote:
Justify your first statement. If it's not viewed as a random experiment, explain and justify your first equation.
Which equation? P(O)+P(C)=1? That is just a consequence of having two mutually exclusive and exhaustive models of the world; a result of the way the problem was presented.
Andrew_S wrote:
What's the justification for a Poisson model? You presented the basics as if drawing samples in a random experiment. Why not now? You do not answer my questions, you merely evade.
I do not evade anything; the data models (e.g. Poisson) are the way we link measurements to hypotheses. I think I answered your questions in a pretty straightforward manner; you need to read up on Bayesian methods. I suggest Jaynes' book, and see for yourself how randomness manifests in Bayesian calculations. A shorthand version: randomness is an incomplete state of knowledge, NOT a property of the physical world. To give you an example, the act of flipping a coin may be analyzed as a Bernoulli trial, BUT someone with a supercomputer at his disposal could solve the equations of motion and deterministically predict the outcome. The deterministic solution is cumbersome, so we stick with the probabilistic one.
The Poisson model is just a convenience (some analytical results are possible), but it is also a very prominent tool in Survival/Failure Time Analysis. Its role in statistics in general is also detailed here (http://home.clara.net/sisa/poishlp.htm).
An example which uses the Poisson process in survival analysis (in a medical setting), along with the corresponding BUGS code, can be found here.
This takes care of the Civilization hypothesis IMHO (feel free to play with other distributions, but keep the Poisson as a default option). Regarding the Olduvai hypothesis (OH), one could use the time-dependent Poisson process (J.F. Lawless, Statistical Models and Methods for Lifetime Data, John Wiley and Sons, 1982, ISBN 0-471-08544-8, pp. 497-499), since OH does predict a trend in the frequency of blackouts. Or you could use what I proposed, i.e. a hierarchical model. The latter allows more flexibility in modelling.
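To make the updating concrete: under a Poisson data model for yearly blackout counts, a Gamma prior on the rate is conjugate, so Bayes' theorem reduces to two additions. A minimal Python sketch; the prior and the counts below are invented numbers, not real grid data, and this conjugate shortcut merely stands in for the MCMC machinery a full hierarchical model would need.

```python
# Conjugate Bayesian update of a blackout rate under a Poisson data model.
# The Gamma(1, 1) prior and the yearly counts are hypothetical.

def update_gamma_poisson(alpha, beta, counts):
    """Posterior of a Poisson rate with a Gamma(alpha, beta) prior.

    Each element of `counts` is the number of blackouts observed in one
    unit of exposure (say, one year). Conjugacy gives
    Gamma(alpha + sum(counts), beta + number of observation periods).
    """
    return alpha + sum(counts), beta + len(counts)

alpha0, beta0 = 1.0, 1.0          # vague prior on blackouts per year
counts = [2, 3, 1, 4, 2]          # hypothetical blackouts per year
alpha1, beta1 = update_gamma_poisson(alpha0, beta0, counts)
print(alpha1 / beta1)             # posterior mean rate: 13/6
```

The posterior mean (13/6, about 2.17 blackouts per year) moves from the prior mean of 1 toward the empirical average of 2.4; more data pulls it further.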
Andrew_S wrote:
This thread, in my book, as a research proposal got an epsilon minus until your first post. Then it went up to an epsilon for an inkling of explanation. Wanna improve? It stands at epsilon until you explain convincingly.
I liked the idea of using Bayesian analysis, but I do not think the OH is worth considering. My prior that it is correct is on par with the priors I assign to cow abductions/mutilations, cold fusion, and ESP, i.e. low. So, to answer entropyfails' curiosity about my "neural nets": in all three of them, I'd like to see evidence greater than 60 dB before I change my mind substantially.
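For readers unfamiliar with the unit: the 60 dB figure is Jaynes' evidence scale, ten times the base-10 logarithm of the odds, so 60 dB means odds of a million to one. A two-line Python sketch (the function names are mine):

```python
import math

def evidence_db(odds):
    """Jaynes' evidence scale: the odds ratio expressed in decibels."""
    return 10.0 * math.log10(odds)

def odds_from_db(db):
    """Odds ratio implied by a given number of decibels of evidence."""
    return 10.0 ** (db / 10.0)

print(odds_from_db(60))    # odds of 1,000,000 : 1
print(evidence_db(100))    # 100:1 odds is 20 dB of evidence
```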
Andrew_S wrote:
The process unfolds over time, but is the parameter assumed to be time-independent or time-dependent? Does this have any bearing on the question above: does how frequently (after how many trials) the updating is done matter? The thread could be more effective if you explained your proposal and methods explicitly.
I think I did; there was plenty of information for someone to build up a model. There is even more information now (including a link that shows how to do failure/survival analysis). These are all standard processes; no need to rediscover the wheel. The Statistical Engineering Division of NIST has a pretty good statistical handbook online. In any case, why should I have to retype formulas on a website that does not support MathML? Writing TeX code is really low on my priority list.
The aforementioned exposition is the best I can do without formulas. I do not know whether this site can render MathML properly, or whether people are really interested in the actual formulas.
Andrew_S wrote:
You didn't answer the essential questions. A lot can be said and answered in words, especially by those who have a clear idea of what they're proposing.
I disagree; my proposals are crystal clear, and anyone with a basic understanding of survival analysis and statistics could grasp them. I do have to question your understanding of the Bayesian method for updating probabilities. In any case, I did not try to follow entropyfails' analysis of the OH, because of the problems I see with the validity of OH as a scientific theory.
"Nuclear power has long been to the Left what embryonic-stem-cell research is to the Right--irredeemably wrong and a signifier of moral weakness." (Esquire Magazine, 12/05)
The genetic code is commaless and so are my posts.
EnergySpin - Intermediate Crude - Posts: 2248 - Joined: Sat 25 Jun 2005, 03:00:00
by Andrew_S » Tue 27 Dec 2005, 05:50:10
EnergySpin wrote:
Andrew_S wrote:
And I discerned so much before your first helpful post. You add nothing here. That probability is a measure of uncertainty was not a matter of dispute.
You were looking for an interpretation in terms of random variables, so appearances suggested otherwise.
So your "equation" was in fact a tautology, merely reiterating what I clarified in an earlier post (two mutually exclusive hypotheses).
We're on page 4 of a Bayes thread. Would anyone dare actually name a random variable in this supposed study? C'mon, if you're such big boys. Explain something. Make sense.
The promotion to epsilon was overgenerous, considering your "equation" meant nothing.
I don't have time to read the references you gave at the moment. However, I challenge you to explain and justify the principles here, in words.
You seem to make assumptions about my educational background. On a public BB only statements count for much. You will be judged by your statements.
Current standing of proposal: epsilon minus.
Reason: total lack of explanation of methodology and interpretation.
by EnergySpin » Tue 27 Dec 2005, 06:08:13
Andrew_S wrote:
C'mon, if you're such big boys. Explain something. Make sense. I don't have time to read the references you gave at the moment. However, I challenge you to explain and justify the principles here, in words.
Go READ THE REFERENCES. In order for us to get on the same page, we need to use the same definitions about:
1) probabilities
2) failure models
The principles of probability theory can be found in any textbook. If you are interested in Bayesian textbooks that are not too mathematical, try the book by E.T. Jaynes, Probability Theory: The Logic of Science (a basic understanding of variational calculus is required).
Andrew_S wrote:
You seem to make assumptions about my educational background. On a public BB only statements count for much. You will be judged by your statements. Current standing of proposal: epsilon minus. Reason: total lack of explanation of methodology and interpretation.
The methodology and the interpretation have been around for two centuries (survival analysis is a lot younger, but is related to Shewhart's work on quality control from the 1930s!).
I provided the background for someone to judge my statements, but you said you have no time to check the background references. So why should I bother to plagiarize the basics if you do not have the time to do the reading? And getting a grade from someone over the internet is low on my priority list.
In any case, you are asking for a recipe, and people around here have provided you with not only the recipe, but also where to purchase the starting materials, the cooking utensils, and the type of cooking method to use. You only have to read the references and use them (the BUGS code to run a Cox regression was even provided in one of the links).
by Andrew_S » Tue 27 Dec 2005, 06:09:24
EnergySpin wrote:
Andrew_S wrote:
What's the justification for a Poisson model? You presented the basics as if drawing samples in a random experiment. Why not now? You do not answer my questions, you merely evade.
I do not evade anything; the data models (e.g. Poisson) are the way we link measurements to hypotheses. I think I answered your questions in a pretty straightforward manner; you need to read up on Bayesian methods. I suggest Jaynes' book, and see for yourself how randomness manifests in Bayesian calculations. A shorthand version: randomness is an incomplete state of knowledge, NOT a property of the physical world. To give you an example, the act of flipping a coin may be analyzed as a Bernoulli trial, BUT someone with a supercomputer at his disposal could solve the equations of motion and deterministically predict the outcome. The deterministic solution is cumbersome, so we stick with the probabilistic one.
I know what a Poisson process is. It involves certain restrictive assumptions. That's why I ask for justification. I ask for explanation, not assumption.
Andrew_S wrote:
The process unfolds over time, but is the parameter assumed to be time-independent or time-dependent? Does this have any bearing on the question above: does how frequently (after how many trials) the updating is done matter? The thread could be more effective if you explained your proposal and methods explicitly.
EnergySpin wrote:
I think I did; there was plenty of information for someone to build up a model. There is even more information now (including a link that shows how to do failure/survival analysis). These are all standard processes; no need to rediscover the wheel. The Statistical Engineering Division of NIST has a pretty good statistical handbook online. In any case, why should I have to retype formulas on a website that does not support MathML? Writing TeX code is really low on my priority list. The aforementioned exposition is the best I can do without formulas. I do not know whether this site can render MathML properly, or whether people are really interested in the actual formulas.
Andrew_S wrote:
You didn't answer the essential questions. A lot can be said and answered in words, especially by those who have a clear idea of what they're proposing.
EnergySpin wrote:
I disagree; my proposals are crystal clear, and anyone with a basic understanding of survival analysis and statistics could grasp them. I do have to question your understanding of the Bayesian method for updating probabilities. In any case, I did not try to follow entropyfails' analysis of the OH, because of the problems I see with the validity of OH as a scientific theory.
Are you treating the two alternative hypotheses as outcomes of a random variable or not? Yea or nay?
And if so, answer the other questions, about the assumed distribution, the parameter, and the time-dependence.
These are simple questions I'm asking. If you have difficulty answering, I doubt whether you know what you're doing.
Last edited by Andrew_S on Tue 27 Dec 2005, 06:26:47, edited 1 time in total.
by Andrew_S » Tue 27 Dec 2005, 06:22:17
EnergySpin wrote:
Go READ THE REFERENCES. In order for us to get on the same page, we need to use the same definitions about: 1) probabilities; 2) failure models. The principles of probability theory can be found in any textbook. If you are interested in Bayesian textbooks that are not too mathematical, try the book by E.T. Jaynes, Probability Theory: The Logic of Science (a basic understanding of variational calculus is required).
<snip>
I provided the background for someone to judge my statements, but you said you have no time to check the background references. So why should I bother to plagiarize the basics if you do not have the time to do the reading? And getting a grade from someone over the internet is low on my priority list.
In any case, you are asking for a recipe, and people around here have provided you with not only the recipe, but also where to purchase the starting materials, the cooking utensils, and the type of cooking method to use. You only have to read the references and use them (the BUGS code to run a Cox regression was even provided in one of the links).
I'm asking for a specific proposal. This should be possible in one, or at most two, A4 pages of text. Define the essentials. Be clear. Do not assume too much (e.g. at what point did you propose "failure" models? Is "failure" here in contrast to "success", or failure in a more general sense? How does this relate to the problem at hand?).
Although possibly inconvenient, explanation is probably possible without the equations.
by EnergySpin » Tue 27 Dec 2005, 10:12:52
OK, let's agree on the definitions first.
I use "failure models" in a strict, technical sense (the meaning differs a little from the colloquial meaning of the word). The following exposition is only a short introduction; there are many more useful toolboxes in failure time/survival analysis that could be used, but we will stick with the garden-variety model to keep things easy.
From the StatSoft website, this is the General Information section on Failure Time Models/Survival Analysis:
These techniques were primarily developed in the medical and biological sciences, but they are also widely used in the social and economic sciences, as well as in engineering (reliability and failure time analysis).
Imagine that you are a researcher in a hospital who is studying the effectiveness of a new treatment for a generally terminal disease. The major variable of interest is the number of days that the respective patients survive. In principle, one could use the standard parametric and nonparametric statistics for describing the average survival, and for comparing the new treatment with traditional methods (see Basic Statistics and Nonparametrics and Distribution Fitting).
These models use the inter-failure time intervals (in our case, the times between successive blackouts) to estimate the reliability of a given system. Cox's proportional hazards model is an example of a model that can be used for time-to-failure data series.
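As a toy illustration of that idea, here is a Python sketch of the simplest possible reliability estimate from inter-failure intervals, assuming a constant blackout rate (i.e. exponentially distributed gaps); the gap data are invented for illustration, not real grid data.

```python
# Maximum-likelihood estimate of a constant blackout rate from
# inter-failure intervals. The gaps (days between successive
# blackouts) below are hypothetical numbers.
gaps = [30.0, 45.0, 25.0, 50.0]     # days between successive blackouts
rate = len(gaps) / sum(gaps)        # MLE failure rate: events per day
mtbf = sum(gaps) / len(gaps)        # mean time between failures, in days
print(rate, mtbf)                   # 4/150 per day, 37.5 days
```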
Description of Cox's PH model:
According to the PH model, the failure rate of a system is affected not only by its operation time, but also by the covariates under which it operates. ...
The proportional hazards model assumes that the failure rate (hazard rate) of a unit is the product of:
· an arbitrary and unspecified baseline failure rate lambda0(t), which is a function of time only, and
· a positive function g(x, A), independent of time, which incorporates the effects of a number of covariates ...
The Olduvai theory uses only one such covariate, the EUP (energy use per capita), and predicts that as this number goes down, blackouts will increase in frequency and duration until there is no electricity.
Without making any further assumptions, the general equation for the Cox PH model is then lambda(t | x) = lambda0(t) * g(x, A), with the usual choice g(x, A) = exp(A'x).
At this point we usually make an assumption regarding the parametric form of the baseline failure rate, i.e. assume it takes the functional form of the exponential, log-normal, Poisson distribution, etc. (there are non-parametric approaches, but I do not want to complicate the discussion at this point).
In any case, using this very simple failure time model, one regresses the instantaneous failure rate of the grid (number of blackouts per unit time, normalized to the size of the grid) against the EUP and time, and uses standard tools to estimate the parameters. This is the frequentist version of an empirical validation of the Olduvai theory, but I have to ask you to pause and think for a second whether the Olduvai theory's reliance on a single metric makes any sense at all.
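The shape of that regression can be sketched in Python. Everything numeric below (the baseline rate and the coefficient beta) is hypothetical, chosen only to show the proportional-hazards mechanics, not estimated from any data set:

```python
import math

def blackout_hazard(t, eup, beta=-0.8, baseline=lambda t: 0.5):
    """Proportional-hazards form lambda(t | x) = lambda0(t) * exp(beta * x),
    with energy use per capita (EUP) as the single covariate."""
    return baseline(t) * math.exp(beta * eup)

# A negative beta makes the blackout rate rise as EUP falls, which is
# the direction the Olduvai theory predicts. Under proportional hazards
# the ratio of two hazards does not depend on time or the baseline:
ratio = blackout_hazard(0.0, eup=2.0) / blackout_hazard(0.0, eup=4.0)
print(ratio)    # exp(-0.8 * (2 - 4)) = exp(1.6), about 4.95
```

This time-and-baseline-free hazard ratio is exactly why Cox's model lets one test whether EUP matters without committing to a particular baseline failure rate.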
Edit
----------
If you are interested in examples of such analyses applied to power systems, try the following links (I do not think the EUP is included in them, but they do demonstrate how the methodology is applied to power systems):
1) Cox PH Model
2) Complex System Analysis of Blackouts
3) ORNL Study on Cascading Blackouts
The last two demonstrate the use of the Poisson distribution for the hazard function within the context of ORNL's failure time models for the US grids. They discuss the relation between the frequency, size, and duration of blackouts, etc. Read up on these models; there is plenty of evidence that the Olduvai theory is fallacious, because it relies on a single metric which is actually a "symptom" of a blackout. If you do not have time to read up on the models, just think along the lines of a caloric-intake analogy:
not eating enough can be a cause of disease (think protein-energy malnutrition states), AND a symptom of a disease (I contracted a common cold and I don't feel well enough to eat), OR even a symptom of health and well-being (I am going on a diet to shape up so that a potential partner takes an interest in me).
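For anyone who wants to experiment with the time-dependent (nonhomogeneous) Poisson process route, here is a Python sketch using Lewis-Shedler thinning; the escalating intensity function is an invented stand-in for the kind of trend OH predicts:

```python
import random

def simulate_nhpp(rate, rate_max, horizon, seed=42):
    """Lewis-Shedler thinning: sample event times of a nonhomogeneous
    Poisson process whose intensity rate(t) is bounded above by rate_max."""
    rng = random.Random(seed)
    t, events = 0.0, []
    while True:
        t += rng.expovariate(rate_max)       # candidate from bounding process
        if t > horizon:
            return events
        if rng.random() < rate(t) / rate_max:
            events.append(t)                 # accept w.p. rate(t) / rate_max

# Hypothetical escalating blackout intensity: 0.2 events/year at t = 0,
# rising linearly, simulated over a ten-year horizon.
events = simulate_nhpp(lambda t: 0.2 + 0.1 * t, rate_max=1.2, horizon=10.0)
```

Fitting the slope of such an intensity to real blackout records is one concrete way to test whether the escalation OH predicts is actually present.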
Cheers from multi-variate stati-land
by entropyfails » Tue 27 Dec 2005, 11:05:00
EnergySpin wrote:
To get back to the problem of prior assignment and the "correctness" of the theory: EUP is not the proper metric to measure civilization collapse, because it lacks specificity and sensitivity.
I can see that. In the first few years of the supposed drop, his 0.67% decline in EUP seems well explained by CT, so both CT and OT will assign high probability to the data. By the time they diverge enough, we'll already be past the 2011 date and will already have tested the strongest prediction. Further out, EUP does end up being a specific and sensitive measure, because OT states that civilization doesn't work at EUP < 3 while CT states that EUP will never drop to 3 (and probably also that civilization will continue to operate even at EUP < 3).
Thank you for the write-up on Survival/Failure Time Analysis. I agree that using that method seems like a much better way of testing the theory's validity in the early years. I haven't applied the method before, so I'm going to go off and attain a basic level of competence in it, so I won't be talking out my ass about how to apply it. Thanks again for being helpful in what you see as an exercise in futility! *laugh*
As to how much sense it makes to use EUP, I guess we can only test it at this point. You may personally put the prior probability at the "worthless" level, but several people around here do not, and they keep talking about it. I personally put that particular theory of collapse at a very low level as well, so I thought that perhaps we should do the work that Dr. Duncan should have done if he wanted to make these sorts of claims.
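The testing programme described above has a compact form: keep multiplying the prior odds for OT by the Bayes factor of each new chunk of data. A Python sketch; the prior odds and the yearly Bayes factors are invented numbers, purely for illustration:

```python
# Odds form of Bayes' theorem for two exhaustive hypotheses (OT vs CT).
# The prior odds and the yearly Bayes factors below are hypothetical.

def posterior_odds(prior_odds, bayes_factors):
    """Multiply the prior odds for OT by each Bayes factor
    P(data | OT) / P(data | CT)."""
    odds = prior_odds
    for bf in bayes_factors:
        odds *= bf
    return odds

odds = posterior_odds(0.01, [1.2, 0.8, 0.5])   # three years of data
p_ot = odds / (1.0 + odds)                     # since P(OT) + P(CT) = 1
print(odds, p_ot)
```

With Bayes factors near 1 in the early years, as anticipated above, the posterior barely moves; only strongly discriminating data shifts it.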
EntropyFails
"Little prigs and three-quarter madmen may have the conceit that the laws of nature are constantly broken for their sakes." -- Friedrich Nietzsche
entropyfails - Expert - Posts: 565 - Joined: Wed 30 Jun 2004, 03:00:00
by EnergySpin » Tue 27 Dec 2005, 13:21:08
entropyfails wrote:
Further out, EUP ends up being a specific and sensitive measure because OT states that civilization doesn't work at EUP < 3 and CT states that EUP will never drop to 3.
Where did the figure "3" come from? Is it 3 kWh per person? If so, sorry, but the WEC has said that an average of 2.5 (the 4-2-1 policy) is not only attainable but also compatible with an advanced technical civilization.
entropyfails wrote:
Thank you for the write-up on Survival/Failure Time Analysis. ...
You are welcome; this is just basic stuff.
entropyfails wrote:
As to how much sense it makes to use EUP, I guess we can only test it at this point. You may personally put the prior probability at the "worthless" level, but several people around here do not, and they keep talking about it.
The fact that several people around here keep talking about it needs further qualifying (they also believe some weird stuff). Dr. Duncan should have been more careful with correlational research, if you ask me. A petroleum engineer (or anyone else) should not write about things he does not understand until he understands them, and it is obvious that Dr. Duncan had no clue about power systems when he wrote that article. I can think of many similar publications in the biomedical literature in which a single metabolic variable was proposed as the cause of various health conditions. All of them were subsequently proved wrong or of limited applicability. Why? Complex systems have complex descriptions, and no single size fits all.
By the way, has his article received any citations in the scientific press? Articles that make extraordinary claims usually receive either a lot of attention (when they are worth exploring further, or debunking, as with cold fusion) or no attention at all (when the blunder is obvious).
Good luck, and keep us posted about the outcome.
Oh, and do look into time-dependent Poisson processes as alternative modelling tools. Since you are going to embark on the Bayesian version of survival analysis, may I suggest the following reference (I am not its author, in case you wonder): Bayesian Survival Analysis.
The book by Lawless I mentioned in my second post is a classic reference for survival analysis from a frequentist perspective (and you will need one such book to use the first source, since it assumes you know the basics)!