STOCHASTIC PROCESSES. Second Edition. Sheldon M. Ross. University of California, Berkeley. JOHN WILEY & SONS, INC. New York · Chichester · Brisbane.



Printed in the United States of America 10 9 8 7 6 5 4 3 2

On March 30, , a beautiful six-year-old girl died.

This attempt, for instance, has led us to study most processes from a sample path point of view.

Ross

Preface to the Second Edition

The second edition of Stochastic Processes includes the following changes: (i) Additional material in Chapter 2 on compound Poisson random variables, including an identity that can be used to efficiently compute moments, and which leads to an elegant recursive equation for the probability mass function of a nonnegative integer valued compound Poisson random variable; (ii) A separate chapter (Chapter 6) on martingales, including sections on the Azuma inequality; and (iii) A new chapter (Chapter 10) on Poisson approximations, including both the Stein-Chen method for bounding the error of these approximations and a method for improving the approximation itself. In addition, we have added numerous exercises and problems throughout the text. Additions to individual chapters follow: In Chapter 1, we have new examples on the probabilistic method, the multivariate normal distribution, random walks on graphs, and the complete match problem. Also, we have new sections on probability inequalities (including Chernoff bounds) and on Bayes estimators, showing that they are almost never unbiased. A proof of the strong law of large numbers is given in the Appendix to this chapter.

Compute the conditional distribution of S" Sh. When a faster car encounters a slower one. Let T Tz. Show that W. Consider a fixed point. Find a E[C]. Suppose further that each injured person is out of work for a random amount of time having distribution F. Consider a conditional Poisson process where the distribution of A is the gamma distribution with parameters m and a' that is.

X t for a compound Poisson process 2. For a conditional Poisson process. Are they independent? Are they identically distributed?

Argue that. Compute Cov(X(s), X(t)). Explain why this is true. Academic Press. Englewood Cliffs. Introduction to Stochastic Processes. Introduction to Probability Models. Definition 3. Let p. As the number of events by time t will equal the largest value of n for which the nth event occurs before or at time t. A natural generalization is to consider a counting process for which the interarrival times are independent and identically distributed with an arbitrary distribution. Such a counting process is called a renewal process.

Sn can be less than or equal to t for at most a finite number of values of n. Hence.
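The counting definition translates directly into a simulation: draw i.i.d. interarrival times, accumulate them into the partial sums Sn, and report N(t) as the largest n with Sn ≤ t. A minimal sketch; the exponential interarrival distribution (which makes this a Poisson process) is only an illustrative assumption:

```python
import random

def renewal_count(t, draw_interarrival, rng):
    """N(t): number of renewals in [0, t] for i.i.d. interarrival times."""
    s, n = 0.0, 0
    while True:
        s += draw_interarrival(rng)   # S_{n+1} = S_n + X_{n+1}
        if s > t:
            return n                  # N(t) = max{n : S_n <= t}
        n += 1

rng = random.Random(42)
# With Exponential(rate 2) interarrivals the renewal process is Poisson,
# so N(t)/t should be close to 1/mu = 2 for large t.
t = 10_000.0
n = renewal_count(t, lambda r: r.expovariate(2.0), rng)
print(n / t)
```

The ratio N(t)/t illustrates the strong-law behavior discussed in this section: it settles near the reciprocal of the mean interarrival time.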

From 3. N(t) must be finite. This follows since the only way in which N(∞), the total number of renewals that occur, can be finite is for one of the interarrival times to be infinite. S3 also represents the time of the last event prior to or at time t. Proceeding inductively. As a prelude to determining the rate at which N(t) grows. To begin. We will exhibit a strategy that results in the long-run proportion of heads being equal to 1.

At this point that coin is discarded, never to be used again, and a new one is chosen. The process is then repeated. To determine Ph for this strategy. The result now follows since t/N(t) is between two numbers. Suppose we are to flip coins sequentially. If our objective is to maximize the long-run proportion of flips that land on heads. We have the following definition. Definition: An integer-valued random variable N is said to be a stopping time for the sequence X1, X2, ....

For this reason 1/μ. In is determined by X1, ..., Xn. From Wald's equation we obtain that. Wald's equation implies. However. In this case. For Example 3.3(B). Now, since Sn: Remark. At first glance it might seem that the elementary renewal theorem should be a simple consequence of Proposition 3. That is, since the average renewal rate will, with probability 1, converge to 1/μ, should this not imply that the expected average renewal rate also converges to 1/μ?
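Wald's equation, E[S_N] = E[N]E[X], can be checked numerically for a concrete stopping time. In this sketch the interarrival distribution (Uniform(0, 1)) and the threshold rule N = min{n : Sn ≥ c} are assumed purely for illustration:

```python
import random

def wald_check(c, trials, rng):
    """Empirically compare E[S_N] with E[N] * E[X] for the stopping
    time N = min{n : S_n >= c}, with X_i ~ Uniform(0, 1) (assumed example)."""
    tot_s, tot_n = 0.0, 0
    for _ in range(trials):
        s, n = 0.0, 0
        while s < c:        # N depends only on X_1, ..., X_n: a stopping time
            s += rng.random()
            n += 1
        tot_s += s
        tot_n += n
    mean_sn = tot_s / trials
    mean_n = tot_n / trials
    return mean_sn, mean_n * 0.5    # E[X] = 1/2 for Uniform(0, 1)

rng = random.Random(1)
lhs, rhs = wald_check(c=10.0, trials=20_000, rng=rng)
print(lhs, rhs)   # the two estimates should agree closely
```

Note that the overshoot S_N − c is what makes E[S_N] strictly larger than c itself; Wald's equation accounts for it exactly.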

We must, however, be careful; consider the following example. Let U be a random variable that is uniformly distributed on (0, 1). Hence, with probability 1, Yn converges to 0. Therefore, even though the sequence of random variables Yn converges to 0, the expected values of the Yn are all identically 1. We will end this section by showing that N(t) is asymptotically normal as t → ∞. To prove this result we make use both of the central limit theorem, to show that Sn is asymptotically normal, and the relationship.

It is not, however, too difficult to make the above argument rigorous.
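The counterexample alluded to above can be made fully explicit: with U uniform on (0, 1), put Yn = n if U ≤ 1/n and Yn = 0 otherwise. Then Yn → 0 with probability 1, yet E[Yn] = n(1/n) = 1 for every n. A quick check; the specific value u = 0.37 is an arbitrary illustration:

```python
def expected_Y(n):
    """E[Y_n] where Y_n = n * 1{U <= 1/n}, U ~ Uniform(0, 1):
    the indicator has probability 1/n, so E[Y_n] = n * (1/n) = 1."""
    return n * (1.0 / n)

def Y(n, u):
    """A realized Y_n for a fixed draw u of U."""
    return n if u <= 1.0 / n else 0

# For any fixed u > 0, Y_n(u) = 0 once n > 1/u: pathwise convergence to 0.
u = 0.37
tail = [Y(n, u) for n in range(10, 20)]
print(tail)                                   # all zeros
print([expected_Y(n) for n in (1, 10, 100)])  # all ones
```

This is exactly why almost-sure convergence of N(t)/t does not by itself deliver the elementary renewal theorem.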

That is, X is lattice if it only takes on values that are integral multiples of some nonnegative number d. We shall state without proof the following theorem. Thus Blackwell's theorem states that if F is not lattice, then the expected number of renewals in an interval of length a, far from the origin, is approximately a/μ. This is quite intuitive, for as we go further away from the origin it would seem that the initial effects wear away and thus. To see this note first that. When F is lattice with period d, then the limit in (3.4.1) cannot exist. For now renewals can only occur at integral multiples of d and thus the expected number of renewals in an interval far from the origin would clearly depend not on the interval's length per se but rather on how many points of the form nd, n ≥ 0, it contains. If interarrivals are always positive, then part (ii) of Blackwell's theorem states that, in the lattice case,.
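Blackwell's theorem is easy to probe numerically: for a nonlattice interarrival distribution with mean μ, the expected number of renewals in (t, t + a] should approach a/μ as t grows. A sketch; the Uniform(0, 2) interarrival choice, with μ = 1, is an assumption for illustration:

```python
import random

def renewals_in_window(t, a, rng):
    """Count renewals falling in (t, t + a] for i.i.d. Uniform(0, 2)
    interarrivals (a nonlattice distribution with mean mu = 1)."""
    s, count = 0.0, 0
    while s <= t + a:
        s += rng.uniform(0.0, 2.0)
        if t < s <= t + a:
            count += 1
    return count

rng = random.Random(7)
t, a, trials = 200.0, 3.0, 5_000
avg = sum(renewals_in_window(t, a, rng) for _ in range(trials)) / trials
print(avg)   # Blackwell: should approach a / mu = 3
```

For a lattice F the same experiment would instead oscillate with the window's placement, which is why the theorem excludes that case.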

We say that h is directly Riemann integrable if. A sufficient condition for h to be directly Riemann integrable is that. The following theorem, known as the key renewal theorem, will be stated without proof. If F is not lattice, and if h(t) is directly Riemann integrable, then. To obtain a feel for the key renewal theorem, start with Blackwell's theorem and reason as follows. By Blackwell's theorem, we have that.

Blackwell's theorem and the key renewal theorem can be shown to be equivalent. Problem 3. In Section 9. The key renewal theorem is a very important and useful result. It is used when one wants to compute the limiting value of g t , some probability or expectation at time t. The technique we shall employ for its use is to derive an equation for g t by first conditioning on the time of the last renewal prior to or at t.

This, as we will see, will yield an equation of the form. We start with a lemma that gives the distribution of S(N(t)), the time of the last renewal prior to or at time t.

We now present some examples of the utility of the key renewal theorem. Once again the technique we employ will be to condition on S(N(t)). Initially it is on and it remains on for a time Z1; it then goes off and remains off for a time Y1; it then goes on for a time Z2, then off for a time Y2; then on, and so forth.

We suppose that the random vectors (Zn, Yn), n ≥ 1, are independent and identically distributed. Proof: Say that a renewal takes place each time the system goes on. Conditioning on the time of the last renewal prior to or at t yields.
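The limiting on-probability E[Z]/(E[Z] + E[Y]) can be illustrated by simulating the long-run fraction of time the system is on. The exponential on and off distributions below are assumed only so the sketch is concrete:

```python
import random

def on_fraction(horizon, rng):
    """Long-run fraction of time 'on' for an alternating renewal process
    with (assumed) on-times ~ Exp(1) and off-times ~ Exp(2)."""
    t, on_time = 0.0, 0.0
    while t < horizon:
        z = rng.expovariate(1.0)          # on period Z_n, mean 1
        on_time += min(z, horizon - t)    # clip the final partial period
        t += z
        if t >= horizon:
            break
        t += rng.expovariate(2.0)         # off period Y_n, mean 1/2
    return on_time / horizon

rng = random.Random(3)
frac = on_fraction(200_000.0, rng)
print(frac)   # E[Z] / (E[Z] + E[Y]) = 1 / 1.5 = 2/3
```

Only the means of Z and Y matter for the limit, which is the content of the alternating renewal theorem.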

The theorem is quite important because many systems can be modelled by an alternating renewal process. For instance, consider a renewal process and let Y(t) denote the time from t until the next renewal and let A(t) be the time from t since the last renewal. That is,.

To do so let an on-off cycle correspond to a renewal and say that the system is "on" at time t if the age at t is less than or equal to x.

In other words, the system is "on" the first x units of a renewal interval and "off" the remaining time. Then, if the renewal distribution is not lattice, we have by Theorem. We will find this technique of looking backward in time to be quite valuable in our studies of Markov chains in Chapters 4 and 5. See Problem 3.

This result. But looking backwards, the excess life at t is exactly the age at t of the original process. Thus by Theorem 3. Remark. To better understand the inspection paradox. But by 3. Suppose that customers. For another illustration of the varied uses of alternating renewal processes, consider the following example.
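The inspection paradox itself is easy to see numerically: the renewal interval containing a fixed inspection time t is stochastically larger than an ordinary interarrival time. With Exp(1) interarrivals (an illustrative assumption), its mean length approaches 2, twice the mean interarrival time:

```python
import random

def containing_interval_length(t, rng):
    """Length of the renewal interval that covers time t, for
    Exp(1) interarrivals (mean interarrival time mu = 1)."""
    s = 0.0
    while True:
        x = rng.expovariate(1.0)
        if s + x > t:          # this interval straddles t
            return x
        s += x

rng = random.Random(11)
t, trials = 100.0, 20_000
avg = sum(containing_interval_length(t, rng) for _ in range(trials)) / trials
print(avg)   # about 2: twice the mean interarrival time
```

Intuitively, a fixed inspection point is more likely to land in a long interval than in a short one, which biases the sampled length upward.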

The store uses the following (s, S) ordering policy. When a customer's demand would bring the inventory below s, an order is placed. The order is assumed to be instantaneously filled. The amounts desired by the customers are assumed to be independent with a common distribution G. For if this were the case. Let X(t) denote the inventory level at time t.

If the inventory level after serving a customer is below s, then enough is ordered to bring it up to S. In fact. Since the line is covered by renewal intervals. Thus we have proven the following. Y (see Figure 3.). F(s). By using the corresponding result for the ordinary renewal process. If a renewal does not occur at t.

As in the ordinary case. We leave the proof of the following proposition to the reader. By the strong law for delayed renewal processes, part (i) of Theorem 3. Suppose we want to determine the rate at which the pattern occurs. The distribution until the first renewal is the distribution of the time until the pattern first occurs.

But by Blackwell's theorem part iv of Theorem 3. If we let N n denote the number of times the pattern occurs by time n. By using the same logic on the pattern 0. Since the expected time to go from 0. But since in order for the pattern 0.

Suppose now that we are interested in the expected time that the pattern 0. Also let NA denote the number of flips until A occurs. Let N 81A denote respectively the number of additional flips needed for B to appear starting with A.

I"" A If we let N t denote the number of times the system becomes nonfunctional that is.. Suppose that the system is said to be functional at any time if at least one component is up at that time such a system is called parallel. Now one way for a breakdown to occur in t.

But by Blackwell's theorem the above is just h times the reciprocal of the mean time between breakdowns. To do so let us first look at the probability of a breakdown in (t, t + h). Since all other possibilities taken together clearly have probability o(h). We will need the following lemma. Say that cycle 2 ends at this point.

We can now verify that the above is indeed equal to the expected length of a down period divided by the expected length of a cycle, or time between breakdowns. Say that cycle 1 ends at this point. Now flip coin 1 until two tails in a row occur. In general. Our objective is to continually flip among these coins so as to make the long-run proportion of tails equal to min(p1, p2). The following strategy. Call the coin with tail probability p the bad coin and the one with probability ap the good one.

Let Bm denote the number of flips in cycle m that use the bad coin and let Gm be the number that use the good coin. To show that the preceding policy meets our objective. As a check of the above. Whereas the preceding argument supposed that the good coin is used first in each cycle. Since ε is arbitrary.

For suppose that we start observing a renewal process at time t. Let YD t denote the excess at t for a delayed renewal process. Joo 1 - sp. Then the process we observe is a delayed renewal process whose initial distribution is the distribution of Yet. When p. SX dF. We shall assume that the R n. We denote by Rn the reward earned at the time of the nth renewal. ND S may be interpreted as the number of renewals in time t of a delayed renewal process.

For an alternating renewal process (see Section ), the total reward earned by t is just the total on time in [0, t]. A similar argument holds when the returns are nonpositive. Now since the age of the renewal process a time s into a renewal cycle is just s. Let A(t) denote the age at t of a renewal process.

Then the average value of the excess will. Whenever there are N travelers waiting in the depot. X(N(t)+1) represents the length of the renewal interval containing the point t.

Why was this to be expected? If we say that a cycle is completed whenever a train leaves. If the depot incurs a cost at the rate of nc dollars per unit time whenever there are n travelers waiting, and an additional cost of K each time a train is dispatched. Why is this not surprising? The expected length of a cycle is the expected time required for N travelers to arrive. To show that L exists and is constant.
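For the dispatching example, renewal reward gives long-run average cost = (expected cycle cost)/(expected cycle length). With Poisson(λ) arrivals and dispatch at N waiting travelers, the cycle has expected length N/λ and expected cost K + cN(N−1)/(2λ), so the average cost is c(N−1)/2 + Kλ/N, which can be minimized over N. The numeric parameters below are assumptions for illustration:

```python
def avg_cost(N, lam, c, K):
    """Long-run average cost per unit time when a train is dispatched
    whenever N travelers are waiting: expected cycle cost over expected
    cycle length, which simplifies to c(N-1)/2 + K*lam/N."""
    return c * (N - 1) / 2.0 + K * lam / N

# Illustrative parameters (assumed, not from the text):
lam, c, K = 2.0, 1.0, 9.0
best = min(range(1, 50), key=lambda n: avg_cost(n, lam, c, K))
print(best, avg_cost(best, lam, c, K))
```

The waiting-cost term grows linearly in N while the dispatch term decays like 1/N, so the minimizer is near sqrt(2Kλ/c).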

The service times of customers are assumed to be independent and identically distributed. We shall assume that 3. Since the queueing process begins anew after each cycle. As L represents the long-run average reward. N and so the result follows from. Remarks: (1) The proof of Theorem 3. Most queueing processes in which customers arrive in accordance with a renewal process, such as those in Section 3.6, are regenerative processes with cycles beginning each time an arrival finds the system empty. Thus, for instance, in the single-server queueing model with renewal arrivals.

I t dt represents the amount of time in the first cycle that X t j Since E [I. To obtain some of the properties of this regenerative process. S E of Chapter 1 the ballot problem example the expression for the probability that the first visit to 0 in the symmetric random walk occurs at time 2n.

Thus from Lemma 3. So assume for n. Thus the proof is complete. Since u. A sample path for the random walk. For the sample path presented in Figure. We call it such since for large k and n we have. Using the approximation u2n ≈ 1/√(nπ), we see that E[T] = ∞. One interesting consequence of the above is that it tells us that the proportion of time the symmetric random walk is positive is not converging to the constant value 1/2.

The reason why the above (the fact that the limiting probability that a regenerative process is in some state does not equal the long-run proportion of time it spends in that state) is not a contradiction. It also follows directly from Lemma 3. Hence f(t). Since ε is arbitrary. In this case.

λ is the rate of the renewal process. In the case of the equilibrium renewal process. A stationary point process is said to be regular, or orderly, if. This is known as Korolyuk's theorem. In defining a renewal process we suppose that F(∞) = 1. If instead F(∞) < 1, then with positive probability there will be no further renewals. X1, ..., Xn are said to be exchangeable if. Would X1, ..., Xn be exchangeable? If F is the uniform (0, 1).

Consider a single-server bank in which potential customers arrive at a Poisson rate λ. However. Xn whenever (ii).

Now argue that the expected number of uniform (0, 1) random variables. The random variables X1, .... S1, ..., Sn are distributed as the order statistics of a set of independent uniform (0, 1) random variables. Prove that the renewal function m(t). Also let N1. Door 1 leads her to freedom after two days' travel.

Suppose at all times she is equally likely to choose any of the three doors. Observe the Xi sequentially. Now start sampling the remaining Xi, again acting as if the sample was just beginning, and stop after an additional N 3. Consider a miner trapped in a room that contains three doors. Now start sampling the remaining X.

Let A(t) and Y(t) denote, respectively, the age and excess at t. Let A(t) and Y(t) denote the age and excess at t of a renewal process. Fill in the missing terms: Use Proposition 3. Show how Blackwell's theorem follows from the key renewal theorem.

A process is in one of n states. Consider a renewal process whose interarrival distribution is the gamma distribution with parameters n and λ. From state n it returns to 1 and starts over. Prove Equation 3. Obtain a renewal-type equation for: If h is directly Riemann integrable and F nonlattice with finite mean. In Problem 3. Consider successive flips of a fair coin. Apply the key renewal theorem to obtain the limiting values in (a) and (b). Would the number of events by time t constitute a (possibly delayed) renewal process if an event corresponds to a customer: What if F were exponential?

Consider successive flips of a coin having probability p of landing heads. A coin having probability p of landing heads is flipped k times. Additional flips of the coin are then made until the pattern of the first k flips is repeated (possibly by using some of the first k flips).

At the moment she quits, (a) find her expected winnings and (b) find the expected number of bets that she has won. Show that the expected number of additional flips after the initial k is 2^k.
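The 2^k result has a neat renewal-theoretic explanation: occurrences of any fixed pattern x form a renewal process with rate P(x), so the mean spacing between occurrences is 1/P(x), and averaging over the random initial pattern gives Σx P(x)·(1/P(x)) = 2^k flips, for any p. A simulation sketch; k = 3 and p = 0.6 are illustrative assumptions:

```python
import random

def flips_until_repeat(k, p, rng):
    """Flip a p-coin k times, then count additional flips until that
    initial pattern reappears (overlap with the first k flips allowed)."""
    seq = [rng.random() < p for _ in range(k)]
    pattern = tuple(seq)
    extra = 0
    while True:
        seq.append(rng.random() < p)
        extra += 1
        if tuple(seq[-k:]) == pattern:
            return extra

rng = random.Random(5)
k, p, trials = 3, 0.6, 40_000
avg = sum(flips_until_repeat(k, p, rng) for _ in range(trials)) / trials
print(avg)   # near 2**k = 8, regardless of p
```

The independence from p comes from the exact cancellation P(x)·(1/P(x)) = 1 for every one of the 2^k patterns.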

Draw cards one at a time. Find the expected number of flips until the following sequences appear. The life of a car is a random variable with distribution F. An individual has a policy of trading in his car either when it fails or when it reaches the age of A. Prove Blackwell's theorem for renewal reward processes. That is.

In both a and b you are expected to compute the ratio of the expected cost incurred in a cycle to the expected time of a cycle The answer should.

For a renewal reward process show that. Assume the distribution of X1 is nonlattice and that any relevant function is directly Riemann integrable. There is no resale value of a failed car. Let C1 denote the cost of a new car and suppose that an additional cost C2 is incurred whenever the car fails.

Let R A denote the resale value of an A-year-old car. When the cycle reward is defined to equal the cycle length.

A system consisting of four components is said to work whenever both at least one of components 1 and 2 works and at least one of components 3 and 4 works. Suppose that component i alternates between working and being failed in accordance with a nonlattice alternating renewal process with distributions Fi. Consider the policy that flips each newly chosen coin until m consecutive flips land tails.

Suppose in Example 3.3(A) that a coin's probability of landing heads is a beta random variable with parameters n and m. For the queueing system of Section 3. Stationary and Related Stochastic Processes. Packages arrive at a mailing depot in accordance with a Poisson process having rate λ. D. R. Cox. where P. Consider a regenerative process satisfying the conditions of Theorem 3. Renewal Theory. Suppose that a reward at rate rj is earned whenever the process is in state j.

Reference 9 provides an illuminating review paper on renewal theory. For a proof of the key renewal theorem under the most stringent conditions. If the expected reward during a cycle is finite. Cramér and M. R. Leadbetter. Give an example where the system is never empty after the initial arrival. Theory of Probability. Vol. A First Course in Stochastic Processes. Stochastic Models. Series B. Stochastic Modeling and the Theory of Queues.

Vols I and II. An Algorithmic Approach. Volume I.

Vol. 1. S. M. Ross. Stochastic Models in Operations Research. This is called the Markovian property. Unless otherwise mentioned. We suppose that whenever the process is in state i. Such a stochastic process is known as a Markov chain.

The value Pij represents the probability that the process will, when in state i, next make a transition into state j. Equation 4. Since probabilities are nonnegative and since the process must make a transition into some state. For if we knew the number in the system at time t.

G for the service distribution. From 4. Suppose that customers arrive at a service center in accordance with a Poisson process with rate λ. Since Yn. There is a single server and those arrivals finding the server free go immediately into service.

The letter M stands for the fact that the interarrival distribution of customers is exponential. The General Random Walk. Suppose further that the service distribution is exponential with rate μ.

Suppose that customers arrive at a single-server service center in accordance with an arbitrary renewal process having interarrival distribution G. If we let Xn denote the number of customers in the system as seen by the nth arrival. Identically Distributed Random Variables. Now consider |Sn|. Thus in the simple random walk the process always either goes up one step with probability p or down one step with probability q.

Sn for which |Sn|. From Proposition 4. These equations are. Two states i and j accessible to each other are said to communicate. Proof: The first two parts follow trivially from the definition of communication. To prove (iii).

where the second inequality follows. Let. Then fij denotes the probability of ever making a transition into state j. In denotes the number of visits to j. Since, the result follows. The argument leading to the above proposition is doubly important, for it also shows that a transient state will only be visited a finite number of times (hence the name transient). This leads to the conclusion that in a finite-state Markov chain not all states can be transient. To see this.

But as the process must be in some state after time T. The Markov chain whose state space is the set of all integers and has transition proba- bilities P.

On the other hand. Another is that it represents the winnings of a gambler who on each play of the game either wins or loses one dollar..

Since all states clearly communicate, it follows from Corollary 4. By using an approximation. Hence. However, 4pq ≤ 1 with equality if and only if p = 1/2. We could also look at symmetric random walks in more than one dimension.

For instance, in the two-dimensional symmetric random walk the process would, at each transition, either take one step to the left, right, up, or down, each having probability 1/4. Similarly, in three dimensions the process would, with probability 1/6, make a transition to any of the six adjacent points. By using the same method as in the one-dimensional random walk it can be shown that the two-dimensional symmetric random walk is recurrent, but all higher-dimensional random walks are transient.

Corollary 4. It is easy to see that the opportunity number of the first success is a geometric random variable and is thus finite with probability 1. The result follows since i being recurrent implies that the number of potential opportunities is infinite. Let Nj(t) denote the number of transitions into j by time t. Let μjj denote the expected number of transitions needed to return to state j.

That is,. By interpreting transitions into state j as renewals, we obtain the following theorem from Propositions 3. The proof of the following proposition is left as an exercise. A positive recurrent, aperiodic state is called ergodic. Before presenting a theorem that shows how to obtain the limiting probabilities in the ergodic case, we need the following. Hence, if the initial probability distribution is the stationary distribution, then Xn will have the same distribution for all n.
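A stationary distribution of a finite chain can be computed numerically by iterating π ← πP until it stops changing; for an irreducible aperiodic chain this converges to the unique solution of π = πP with Σπj = 1. The two-state transition matrix below is an assumed toy example:

```python
def stationary(P, iters=10_000):
    """Stationary distribution of a finite Markov chain by power
    iteration of pi = pi * P (P given as a list of row lists;
    assumed irreducible and aperiodic)."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

# A small illustrative two-state chain (numbers assumed):
P = [[0.7, 0.3],
     [0.4, 0.6]]
pi = stationary(P)
print(pi)   # solves pi = pi P: here pi = (4/7, 3/7)
```

For this two-state chain the balance condition π0 P01 = π1 P10 fixes the ratio π0/π1 = 0.4/0.3 = 4/3.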

πj ≥ Σi πi P(n)ij. To show that the above is actually an equality, suppose that the inequality is strict for some j. Then upon adding these inequalities we obtain.

Thus, for case (i), no stationary distribution exists and the proof is complete.

But now πj must be interpreted as the long-run proportion of time that the Markov chain is in state j (see Problem 4.). That is, aj is the probability of j arrivals during a service period. The transition probabilities for this chain are. Substitution into 4. Since the πk are the unique solution to 4. The exact value of β can usually only be obtained by numerical methods. Initially an item is put into use, and when it fails it is replaced at the beginning of the next time period by a new item. Suppose that the lives of the items are independent and each will fail in its ith period of use with probability pi.

Let Xn denote the age of the item in use at time n. Then if we let. It is worth noting that (4.3.11) is as expected, since the limiting distribution of age in the nonlattice case is the equilibrium distribution. It works as follows. To find the stationary probabilities of this chain. Let p(x,. If we let Xn denote the number of members of the population at the beginning of period n.

Since each of these Xo individuals will independently be alive at the beginning of the next period with probability 1. It also follows from the preceding that P XI. XII ' To verify the claim. To derive an expression for E [N2]. Consider a given state of this chain. Since visits to state 0 constitute renewals it follows from Theorem O can be obtained by solving the following set of linear equations obtained by conditioning on the next state visited.

But if N is the number of transitions between successive visits into state 0. PII' For this chain. I-a 1. This Markov chain has three classes. P of losing 1 unit. J' the probability of ever entering j given that the process starts in i The following proposition.

Assuming successive plays of the game are independent. Since each transient state is only visited finitely often. Let fij denote the probability that. Imagining that the gambler continues to play after reaching either 0 or n. Suppose now that we want to determine the expected number of bets that the gambler makes. In matrix notation. The existence of the inverse is easily established. T would be a closed class of states. For transient states i and j. Solution: The matrix Q. Starting in state 3. All offspring of the zeroth generation constitute the first generation and their number is denoted by X1. In general.
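The gambler's-ruin probabilities referred to here have a well-known closed form: starting with i units and playing until reaching n or 0, the chance of reaching n first is (1 − r^i)/(1 − r^n) with r = q/p when p ≠ 1/2, and i/n when p = 1/2. A small sketch:

```python
def win_prob(i, n, p):
    """Probability the gambler reaches n before 0 starting from i,
    winning each one-unit bet independently with probability p
    (the classical gambler's-ruin formula)."""
    if p == 0.5:
        return i / n            # symmetric case: linear in i
    r = (1 - p) / p             # r = q / p
    return (1 - r ** i) / (1 - r ** n)

print(win_prob(5, 10, 0.5))     # 0.5 by symmetry
print(win_prob(5, 10, 0.6))     # favorable game: well above 1/2
```

The formula follows from solving the one-step conditioning recursion P_i = p P_{i+1} + q P_{i-1} with boundary values P_0 = 0, P_n = 1.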

Let π0 denote the probability that the population eventually dies out. Since each family is assumed to act independently. The number of individuals initially present. Proof: To show that π0 is the smallest solution of 4.
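The characterization of π0 as the smallest nonnegative solution of π = Σj π^j Pj suggests a direct computation: iterate πn+1 = φ(πn) from π0 = 0, where φ is the offspring probability generating function; the iterates increase monotonically to the smallest fixed point. The offspring distribution below is an assumed example with mean greater than 1:

```python
def extinction_prob(offspring_pmf, iters=200):
    """Smallest root of pi = phi(pi), where phi is the offspring
    p.g.f., found by iterating pi <- phi(pi) from 0 (the iterates
    increase monotonically to the smallest fixed point)."""
    def phi(s):
        return sum(p * s ** j for j, p in enumerate(offspring_pmf))
    pi = 0.0
    for _ in range(iters):
        pi = phi(pi)
    return pi

# Assumed offspring distribution: P0 = 1/4, P1 = 1/4, P2 = 1/2.
# Mean offspring = 1.25 > 1, so extinction is not certain.
print(extinction_prob([0.25, 0.25, 0.5]))
```

For this example the fixed-point equation 2π² − 3π + 1 = 0 has roots 1/2 and 1, and the iteration picks out the smaller root 1/2.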

The most important example is probably the simplex algorithm of linear programming. If one looks at the algorithm's efficiency from a "worst case" point of view. The importance of the above representation stems from the following. Lemma 4. Since the sum of a large number of independent Bernoulli random variables. Thus each run is an increasing segment of the sequence. The distribution of a later run can be easily obtained if we know x. To obtain the unconditional distribution of the length of a given run.

For if a run starts with x.

Since it seems plausible that. Now. Hence it would seem that the rate at which runs occur. Each X. So it seems plausible that the long-run proportion of such values that are less than y would equal the probability that a uniform (0, 1) random variable is less than y. To obtain the limiting distribution of Ln we will first hazard a guess and then verify our guess by using the analog of Theorem 4.

At each unit of time a request is made to retrieve one of these elements. Hence the limiting distribution of Ln. The above could also have been computed from 4. To compute the average length of a run, note from 4. In fact. Clearly, if the Pi. Consider the following restricted class of rules that.

In addition. Before writing down the steady-state equations. The steady-state probabilities can now easily be seen to be 0. The above equations are equivalent to 0. Then from Equation 4.

It should be noted that this is an obvious necessary condition for time reversibility, since a transition from i to j going backward in time looks like a transition from j to i. We say that such a chain is in steady state. Consider now a stationary Markov chain having transition probabilities Pij. The condition for time reversibility. But this is exactly what the preceding equation states: the rate at which the process goes from i to j, namely πi Pij, is equal to the rate at which it goes from j to i, namely πj Pji, for all i, j.

Thus the reversed process is also a Markov chain with transition probabilities given by P*ij = πj Pji / πi and stationary probabilities πi. Example 4. If this random variable takes on the value j. Suppose that m is large. This is so since if. We can argue. That these stationary probabilities are also limiting probabilities follows from the fact that Q is an irreducible transition probability matrix.

This can also be accomplished without computing A. Consider a graph having a positive number wij associated with each edge (i, j). The Markov chain describing the sequence of vertices visited by the particle is called a random walk on an edge-weighted graph.
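For a random walk on an edge-weighted graph, the stationary probability of a vertex is proportional to the sum of the weights on its incident edges (the time-reversibility argument of this section). A sketch, with an assumed triangle graph and an illustrative edge-weight representation:

```python
def rw_stationary(weights):
    """Stationary probabilities of a random walk on an edge-weighted
    graph: pi_v proportional to the total weight incident to v.
    `weights` maps frozenset({u, v}) -> w for each edge (assumed API)."""
    strength = {}
    for edge, w in weights.items():
        for v in edge:                      # both endpoints of the edge
            strength[v] = strength.get(v, 0.0) + w
    total = sum(strength.values())          # = 2 * (sum of edge weights)
    return {v: s / total for v, s in strength.items()}

# Assumed example: a triangle with unequal weights.
w = {frozenset({0, 1}): 1.0, frozenset({1, 2}): 2.0, frozenset({0, 2}): 3.0}
pi = rw_stationary(w)
print(pi)   # pi = (4/12, 3/12, 5/12)
```

The detailed-balance check πu (wuv/strength(u)) = πv (wuv/strength(v)) holds term by term, which is exactly the time-reversibility condition.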

Consider a random walk on an edge-weighted graph with a finite number of vertices. If this Markov chain is irreducible, then it is time reversible. Let leaf i denote the leaf on ray i. Assume that a particle moves along the vertices of the graph in the following manner: whenever it is at the central vertex 0. Starting at vertex 0. Whenever it is at a leaf. Since the total of the sum of the weights on the edges out of each of the vertices is, and the sum of the weights on the edges out of vertex 0 is r.

A star graph with weights. To evaluate this quantity. Therefore. T2 is the additional time from T1. To determine E[T2]. In fact we can show the following. Proof: The proof of necessity is as indicated. To understand the necessity of this. Consider the following path from state 1. For any given probability vector r. By using Theorem 4.7. At each unit of time a request is made to retrieve one of these elements, element i being requested independently of the past with probability Pi.

Suppose we are given a set of n elements, numbered 1 through n, that are to be arranged in some ordered list. After being requested. Since a similar result holds in general.

To illustrate this we start with the following theorem. Proof: Summing the above equality over all i yields. Since πi. The importance of Theorem 4. Since the state of this Markov chain always increases by one until it hits a value chosen by the interarrival distribution and then drops to 1. The concept of the reversed chain is useful even when the process is not time reversible. Hence. Hence it seems that the reversed process is just the excess or residual life process.

To complete the proof that the reversed process is the excess and the limiting probabilities are given by 4. Thus by looking at the reversed chain we are able to show that it is the excess renewal process and obtain at the same time the limiting distribution of both excess and age.

In fact, this example yields additional insight as to why the renewal excess and age have the same limiting distribution. The technique of using the reversed chain to obtain limiting probabilities will be further exploited in Chapter 5, where we deal with Markov chains in continuous time. A semi-Markov process is one that changes states in accordance with a Markov chain but takes a random amount of time between changes.

More specifically, consider a stochastic process with states 0, 1, .... Thus a semi-Markov process does not possess the Markovian property that, given the present state, the future is independent of the past. For in predicting the future we would want to know not only the present state, but also the length of time that has been spent in that state. Of course, at the moment of transition, all we would need to know is the new state and nothing about the past.

A Markov chain is a semi-Markov process in which. That is, all transition times of a Markov chain are identically 1. Let Hi denote the distribution of time that the semi-Markov process spends in state i before making a transition. That is, by conditioning on the next state, we see.

Let Tii denote the time between successive transitions into state i and let μii denote its mean. As a corollary we note that Pi is also equal to the long-run proportion of time that the process is in state i. That is, μi/μii equals the long-run proportion of time in state i. Proof: Follows from Proposition of Chapter 3.

While Proposition 4.8. That is, the πj, j ≥ 0. From Theorem 4. Suppose that a machine in good condition will remain this way for a mean time μ1 and will then go to either the fair condition or the broken condition with respective probabilities t and t. A machine in the fair condition will remain that way for a mean time μ2 and will then break down. A broken machine will be repaired, which takes a mean time μ3, and when repaired will be in the good condition with probability i and the fair condition with probability 1.

What proportion of time is the machine in each state?

Letting the states be 1 (good), 2 (fair), and 3 (broken), we have that the π_i satisfy

π_1 + π_2 + π_3 = 1,
π_1 = (2/3)π_3,
π_2 = (3/4)π_1 + (1/3)π_3,

which yields π_1 = 4/15, π_2 = 1/3, π_3 = 2/5.

The problem of determining the limiting distribution of a semi-Markov process is not completely solved by deriving the P_i.

Proof: Say that a cycle begins each time the process enters state i, and say that the cycle is "on" if the state is i, it will remain i for at least the next x time units, and the next state is j; say it is "off" otherwise. Thus we have an alternating renewal process. Conditioning on whether the state after i is j or not, we see that
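The recipe above (stationary probabilities of the embedded chain, weighted by the mean holding times) can be checked numerically. A minimal Python sketch; the transition probabilities are those of the example, while the holding-time means μ_1, μ_2, μ_3 are illustrative assumed values:

```python
# Proportion of time the machine spends in each state: stationary
# probabilities of the embedded chain, weighted by mean holding times.

P = [
    [0.0, 3/4, 1/4],   # good   -> fair (3/4) or broken (1/4)
    [0.0, 0.0, 1.0],   # fair   -> broken
    [2/3, 1/3, 0.0],   # broken -> good (2/3) or fair (1/3) after repair
]

def stationary(P, iters=500):
    """Stationary distribution by power iteration (the chain is ergodic)."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

pi = stationary(P)                    # -> (4/15, 1/3, 2/5)
mu = [5.0, 3.0, 2.0]                  # assumed values for mu_1, mu_2, mu_3
total = sum(p * m for p, m in zip(pi, mu))
proportions = [p * m / total for p, m in zip(pi, mu)]
```

Power iteration is used only for brevity; the three stationary equations could equally be solved by hand, as in the text.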

A store that stocks a certain commodity uses the following (s, S) ordering policy: if its supply at the beginning of a time period is x, then it orders nothing if x ≥ s, and orders enough to bring its supply up to S (that is, S − x) if x < s. The order is immediately filled. The daily demands are independent and equal j with probability α_j. All demands that cannot be immediately met are lost.

Let X_n denote the inventory level at the end of the nth time period. Prove that if the number of states is n, and if state j is accessible from state i, then it is accessible in n or fewer steps. For a Markov chain prove that
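The (s, S) dynamics of the preceding exercise can be sketched in Python. The particular values of s, S and the demand distribution α_j below are assumptions for illustration only:

```python
# Minimal sketch of the (s, S) inventory chain with lost sales.
s, S = 2, 5                       # reorder point and order-up-to level (assumed)
alpha = [0.3, 0.4, 0.2, 0.1]      # P(daily demand = j), j = 0..3 (assumed)

def next_level(x, demand):
    """End-of-period inventory given start-of-period supply x."""
    y = S if x < s else x         # order up to S when supply drops below s
    return max(y - demand, 0)     # lost sales: inventory never goes negative

def transition_prob(x, x_next):
    """P(X_{n+1} = x_next) when the period starts with supply x."""
    return sum(a for j, a in enumerate(alpha) if next_level(x, j) == x_next)
```

Since the end-of-period level determines the next period's starting supply, and demands are independent, {X_n} is a Markov chain with the transition probabilities computed above.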

P^n_ii = Σ_{k=1}^{n} f^k_ii P^{n−k}_ii,

where f^k_ii denotes the probability that, starting in i, the first return to state i occurs at the kth transition.
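This first-return decomposition can be verified numerically on a small chain; a sketch, where the three-state matrix is an arbitrary assumed example:

```python
# Check P^n_ii = sum_{k=1}^{n} f^k_ii * P^{n-k}_ii on a small chain.
# f^k_ii is computed directly as the probability that the chain, started
# in i, avoids i for k-1 steps and then returns at step k.

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

P = [[0.2, 0.5, 0.3],
     [0.4, 0.1, 0.5],
     [0.3, 0.3, 0.4]]
i, N = 0, 8

# p[n] = P^n_ii, with the convention P^0_ii = 1
power = [row[:] for row in P]
p = [1.0, power[i][i]]
for _ in range(2, N + 1):
    power = matmul(power, P)
    p.append(power[i][i])

# f[k] = probability that the first return to i occurs at the kth transition
others = [j for j in range(len(P)) if j != i]
f = [0.0] * (N + 1)
f[1] = P[i][i]
reach = {j: P[i][j] for j in others}   # paths from i that have avoided i so far
for k in range(2, N + 1):
    f[k] = sum(reach[j] * P[j][i] for j in others)
    reach = {j: sum(reach[l] * P[l][j] for l in others) for j in others}

checks = [abs(p[n] - sum(f[k] * p[n - k] for k in range(1, n + 1)))
          for n in range(1, N + 1)]
```

The identity holds because the event {X_n = i} is partitioned according to the time of the first return to i.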

Show that the symmetric random walk is recurrent in two dimensions and transient in three dimensions. For the symmetric random walk starting at 0:

Let X_1, X_2, . . . be independent random variables, and let R_i denote the ith record value. Compute transition probabilities where appropriate.
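A Monte Carlo sketch (an illustration, not a proof) can make the recurrence/transience contrast visible: estimate how often the symmetric walk revisits the origin within a fixed horizon. All parameters below are arbitrary assumptions:

```python
import random

def return_fraction(dim, trials=2000, horizon=400, seed=1):
    """Fraction of walks that revisit the origin within `horizon` steps."""
    rng = random.Random(seed)
    returned = 0
    for _ in range(trials):
        pos = [0] * dim
        for _ in range(horizon):
            axis = rng.randrange(dim)          # pick a coordinate uniformly
            pos[axis] += rng.choice((-1, 1))   # step +1 or -1 along it
            if not any(pos):                   # back at the origin
                returned += 1
                break
    return returned / trials
```

As the horizon grows, the two-dimensional fraction keeps climbing toward 1, while the three-dimensional fraction stalls near Pólya's return probability of roughly 0.34.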

At the beginning of every time period, each of N individuals is in one of three possible conditions: infectious, infected but not infectious, or noninfected. If a noninfected individual becomes infected during a time period then he or she will be in the infectious condition during the following time period, and from then on will be in the infected but not infectious condition.

If a pair is in contact and one of the members of the pair is infectious and the other is noninfected, then the noninfected person becomes infected and is thus in the infectious condition at the beginning of the next period. Let X_n and Y_n denote the number of infectious and the number of noninfected individuals, respectively, at the beginning of time period n.

If so, give its transition probabilities. If so, give its transition probabilities. A Markov chain is said to be doubly stochastic if Σ_i P_ij = 1 for all j; that is, the column sums all equal 1. If a doubly stochastic chain has n states and is ergodic, calculate its limiting probabilities. Show that in a finite Markov chain there are no null recurrent states and that not all states can be transient.
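The doubly stochastic result (all limiting probabilities equal to 1/n) is easy to confirm numerically; a sketch, using an arbitrary assumed 3-state doubly stochastic matrix:

```python
# The rows of P^n for an ergodic doubly stochastic chain converge to the
# uniform distribution (1/n, ..., 1/n).

def matmul(A, B):
    m = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(m)) for j in range(m)]
            for i in range(m)]

P = [[0.5, 0.3, 0.2],
     [0.2, 0.4, 0.4],
     [0.3, 0.3, 0.4]]   # every row AND every column sums to 1

Pn = [row[:] for row in P]
for _ in range(60):      # compute a high matrix power
    Pn = matmul(Pn, P)
```

The uniform distribution satisfies the stationarity equations precisely because the column sums are 1, and uniqueness for an ergodic chain does the rest.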

An individual possesses r umbrellas, which she employs in going from her home to office and vice versa. If she is at home (the office) at the beginning (end) of a day and it is raining, then she will take an umbrella with her to the office (home), provided there is one to be taken.

If it is not raining, then she never takes an umbrella. Assume that, independent of the past, it rains at the beginning (end) of a day with probability p. She gets wet if it is raining and all her umbrellas are at her other location. Consider a positive recurrent irreducible periodic Markov chain and let π_j denote the long-run proportion of time in state j, j ≥ 0. Prove that the π_j, j ≥ 0, satisfy π_j = Σ_i π_i P_ij and Σ_j π_j = 1.

A subsystem description is a system object that contains information defining the characteristics of an operating environment controlled by the system.

For example, in an analysis of urban systems dynamics, A. Steiss [10] defined five intersecting systems, including the physical subsystem and the behavioral system. In a sociological model influenced by systems theory, Kenneth D. Bailey [11] defined systems in terms of conceptual, concrete, and abstract systems, each of which may be isolated, closed, or open.

Walter F. Buckley [12] defined systems in sociology in terms of mechanical, organic, and process models. Bela H. Banathy [13] cautioned that for any inquiry into a system, understanding its kind is crucial, and distinguished "natural" from "designed", i.e. artificial, systems.

Artificial systems inherently have a major defect: they must be premised on one or more fundamental assumptions upon which additional knowledge is built.

These fundamental assumptions are not inherently deleterious, but they must by definition be assumed to be true, and if they are actually false then the system is not as structurally sound as assumed. For example, in geometry this is evident in the postulation of axioms and the extrapolation of proofs from them.

It is important not to confuse these abstract definitions. Theorists count among natural systems subatomic systems, living systems, the solar system, galaxies, and the Universe. Artificial systems include our physical structures, hybrids of natural and artificial systems, and conceptual knowledge. The human elements of organization and function are emphasized, with their relevant abstract systems and representations.

A cardinal consideration in making distinctions among systems is to determine how much freedom the system has to select its purpose, goals, methods, tools, etc. George J. Klir [14] maintained that no "classification is complete and perfect for all purposes", and defined systems as abstract, real, and conceptual physical systems; bounded and unbounded systems; discrete to continuous, pulse to hybrid systems; etc. The interactions between systems and their environments are categorized as relatively closed and open systems.

It seems most unlikely that an absolutely closed system can exist or, if it did, that it could be known by man. Important distinctions have also been made [15] between hard systems (technical in nature and amenable to methods such as systems engineering, operations research, and quantitative systems analysis) and soft systems (those involving people and organisations), the latter commonly associated with concepts developed by Peter Checkland and Brian Wilson through Soft Systems Methodology (SSM), involving methods such as action research and an emphasis on participatory designs.

Where hard systems might be identified as more "scientific", the distinction between them is often elusive.

Cultural system

A cultural system may be defined as the interaction of different elements of culture. While a cultural system is quite different from a social system, the two together are sometimes referred to as a "sociocultural system". A major concern of the social sciences is the problem of order.

Economic system

An economic system is a mechanism (a social institution) which deals with the production, distribution, and consumption of goods and services in a particular society.

The economic system is composed of people, institutions, and their relationships to resources, such as the convention of property. It addresses the problems of economics, such as the allocation and scarcity of resources. The international sphere of interacting states is described and analysed in systems terms by several international relations scholars, most notably in the neorealist school.

This systems mode of international analysis has, however, been challenged by other schools of international relations thought, most notably the constructivist school, which argues that an over-large focus on systems and structures can obscure the role of individual agency in social interactions. Systems-based models of international relations also underlie the vision of the international sphere held by the liberal institutionalist school of thought, which places more emphasis on systems generated by rules and interaction governance, particularly economic governance.

Application of the system concept

Systems modeling is generally a basic principle in engineering and in the social sciences. The system is the representation of the entities under concern. Hence inclusion in, or exclusion from, the system context is dependent on the intention of the modeler. No model of a system will include all features of the real system of concern, and no model of a system need include all entities belonging to the real system of concern.

In information and computer science

In computer science and information science, a system is a software system which has components as its structure and observable inter-process communications as its behavior. Again, an example will illustrate: there are systems of counting, as with Roman numerals, and various systems for filing papers, or catalogues, and various library systems, of which the Dewey Decimal Classification is an example.

This still fits with the definition of components which are connected together, in this case to facilitate the flow of information. System can also refer to a framework, also known as a platform, be it software or hardware, designed to allow software programs to run.

A defect is a flaw in a component or system that can cause the component or system to fail to perform its required function. A defect, if encountered during execution, may cause a failure of the component or system. Engineering also has the concept of a system referring to all of the parts and interactions between parts of a complex project. Systems engineering is the branch of engineering that studies how this type of system should be planned, designed, implemented, built, and maintained.

The expected result is the behavior predicted by the specification, or another source, of the component or system under specified conditions. In management science, operations research, and organizational development (OD), human organizations are viewed as systems (conceptual systems) of interacting components, such as subsystems or system aggregates, which are carriers of numerous complex business processes (organizational behaviors) and organizational structures.