CN103514371A - Measuring and risk evaluation method of executive capability of scheduled task - Google Patents
Measuring and risk evaluation method of executive capability of scheduled task
- Publication number
- CN103514371A (application number CN201310430955.6A)
- Authority
- CN
- China
- Prior art keywords
- state
- node
- probability
- scheduled task
- path
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Landscapes
- Management, Administration, Business Operations System, And Electronic Commerce (AREA)
Abstract
The invention relates to a measurement and risk assessment method for the execution capability of a scheduled task. The method comprises the following steps: first, the Markov decision process of the scheduled task is built, and its optimal solution is computed with an existing method; second, the optimal execution path of each state is computed, and the results are stored in an optimal execution path set; finally, the plan trace and evaluation method (PlaTE for short) is executed repeatedly, once per task execution time unit, until the task success rate reaches the expected index or the time spent executing the task exceeds the specified limit. The method helps improve command, dispatch and management decision making, and has positive practical significance.
Description
Technical field
The present invention relates to a measurement and risk assessment method for the execution capability of scheduled tasks, mainly used to solve the problem that the uncertainty and execution risk of scheduled tasks in the objective world are difficult to assess and grasp.
Background art
Many real-world processes can be described as scheduled tasks: a military unit carrying out a capture mission, an emergency response group handling the rescue work of a fire or an earthquake, a rescue team saving patients' lives, the construction of houses in a project management process, robot planning, and so on. The execution of a scheduled task is often accompanied by uncertain factors, so the result or quality of the execution is to some extent hard to estimate accurately; this constitutes the execution risk of the scheduled task.
Take the fire-fighting process as an example. After receiving a fire alarm, the rescue team expects to arrive at the rescue site as fast as possible; yet while moving toward the fire scene it is inevitably affected by conditions such as weather and traffic, so both the rescuers' arrival time and the fire-extinguishing progress carry some uncertainty. If these situations cannot be estimated accurately before the task completes, the efficiency and quality of the whole fire-fighting task are uncertain and the task execution carries risk. How to evaluate such an uncertain task quantitatively, so that the rescue commanders have a quantifiable assessment method in advance, a scientific grasp of the task's execution, and full insight into the risk factors of the execution process, and can thus grasp and control the execution of the scheduled task more effectively, has become a problem that urgently needs solving in the field of scheduled task risk assessment. The prior art, however, focuses mainly on how to find an optimal path to complete such scheduled tasks, and proposes no clear solution for assessing the participants' capability to execute a scheduled task or the execution risk.
Summary of the invention
The technical problem to be solved by the present invention is to provide, for the above realistic problem, a measurement and risk assessment method for the execution capability of scheduled tasks. The method can provide commanders or other managers with a quantitative index for reference, and can also provide a frame of reference for measuring the execution capability and assessing the risk of scheduled tasks.
The technical scheme adopted by the present invention to solve the above problem is a measurement and risk assessment method for the execution capability of a scheduled task, characterized by comprising the following steps:
(1) Call the executor of the task an agent. The scheduled task the agent is to execute is uncertain and serialized: the execution of the next task in the sequence is associated with the current execution state of the task and is independent of the execution states before it. First, build the Markov decision process of the scheduled task, denoted S, A, T, R, s_0, G, where S is the set of states in the execution of the scheduled task, i.e. S = {s_i | 1 ≤ i ≤ k}, where s_i is a state and k is the number of states; A is the set of actions the agent can take, i.e. A = {a_i | 1 ≤ i ≤ h}, where a_i is an action and h is the number of actions; T gives the transition probabilities between states during execution, where T(s_i, a, s_j) is the probability of transitioning from state s_i to state s_j when action a is taken; R is the set of quantifiable rewards obtainable for reaching the goal, where R(s_i) is the reward obtained in state s_i; s_0 is the initial state of the agent; and G is the set of goal states of the agent;
(2) Compute the optimal solution Q, Π of the Markov decision process of (1) and find the optimal execution path for reaching the goal, where Q is the computed value of each state and Π records the optimal action of each state;
(3) Compute for each state its optimal execution path p, and save the results in the optimal execution path set P; the computation is as follows:
Now define the state node n as the component unit of the path traversed by the scheduled task. Each state node n consists of a state element s and a probability element pr, i.e. n = (s, pr); correspondingly, n.s is the state corresponding to node n and n.pr is the reachable probability of node n. For example, the initial state node n_0 corresponds to the initial state of the task process and can be written n_0 = (s_0, 1);
The reachable probability of a state node n_j on the path traversed by the scheduled task is defined as the probability of successfully reaching the destination node n_j of the path by starting from a start node n_s and following the optimal path; therefore:
Pr(p(n_s, n_j)) = ∏_{n ∈ p(n_s, n_j)} max_{n'} T(n.s, π(n.s), n'.s) ——(1)
In the above formula, n.s is the state corresponding to state node n; π(n.s) is the optimal action of node n; n'.s is a state reached by executing π(n.s) at node n; T(n.s, π(n.s), n'.s) is the probability of transitioning from state n.s to state n'.s when action π(n.s) is taken; n' is the state node corresponding to state n'.s and is an optimally reachable node of node n. In general a state node n has several optimally reachable nodes n'. n ∈ p(n_s, n_j) means that n ranges over all state nodes on the path from n_s to node n_j along the optimal execution path;
The meaning of formula (1) is: for every state node n on the path p(n_s, n_j), compute the probability of reaching state n'.s after taking the optimal action π(n.s) in state n.s, take the maximum of T(n.s, π(n.s), n'.s) together with the optimally reachable node n' attaining it, and finally multiply the maximal transition probabilities of all state nodes on the path p(n_s, n_j).
When the start node n_s of the path is the start node of the whole scheduled task, i.e. s = 0, the reachable probability of a state node n_j is abbreviated:
Pr(p(n_j)) = Pr(p(n_0, n_j)) ——(2)
Correspondingly, for a state node set N, its reachable probability with respect to a destination node n_d' is denoted Pr(N), with the expression:
Pr(N) = Σ_{n ∈ N} Pr(p(n, n_d')) ——(3)
Formula (3) expresses the sum of the reachable probabilities, with respect to the destination node n_d', of all state nodes n in the set. Similarly, for a path p, let N be the set of all nodes on path p; then for the path set P corresponding to node set N, its reachable probability with respect to a destination node n_d is computed by:
Pr(P(n_d)) = Σ_{p ∈ P} Pr(p(n_d)) ——(4)
Since the computation over the path set P always starts from the starting point n_0 of the scheduled task, by formula (2) this can be written directly as the reachable probability with respect to the destination node n_d;
For a state node n, its optimal execution trace is defined by:
n' = argmax_{n'} T(n.s, π(n.s), n'.s) ——(5)
where n.s is the state corresponding to node n, π(n.s) is the optimal action of node n, n'.s is a state reached by executing π(n.s) at node n, T(n.s, π(n.s), n'.s) is the probability of transitioning from state n.s to state n'.s when action π(n.s) is taken, and n' is the state node corresponding to state n'.s; in general a state node n has several optimally reachable nodes n'. Formula (5) expresses that when the transition probability T(n.s, π(n.s), n'.s) attains its maximum, the attaining n' is the node we take; together with node n it forms the optimal execution trace of node n.
For a state node n, its optimal execution path is defined as the set of all state nodes on the optimal execution traces from n to a terminal state g ∈ G with positive reward. The optimal execution path represents the best way of executing the scheduled task when no accident or uncertainty occurs.
(4) Execute the plan trace and evaluation method (Plan Trace and Evaluation, abbreviated PlaTE below) repeatedly, once per task execution time unit, until the task success rate reaches the expected index or the time spent executing the scheduled task exceeds the specified limit;
The execution time unit t of the scheduled task is a unit of time easily quantified as a number, set according to the task domain the scheduled task belongs to; t may be some minutes, some hours, or some days;
The task success rate ρ_t is a probability index of successfully completing the task, within the scheduled task's execution environment, in a certain time period (an integer multiple of the time unit t), i.e. how probable it is that the scheduled task completes within that period. It expresses the total time consumption of the current scheduled task, that is, at least how long is generally needed to guarantee the task can be completed; it thus provides commanders or other managers with a quantitative index for reference, as well as a frame of reference for measuring execution capability and assessing risk of scheduled tasks;
The time spent executing the scheduled task is computed with the plan trace and evaluation method of the present invention, as follows:
1. Set the initial values: node set N = {n_0}, task execution time t = 0, task success probability ρ = 0;
2. If the task execution time has reached the specified upper limit, go to step 10; otherwise go to the next step;
3. Increase the execution time by one time unit, i.e. t = t + 1;
4. For each state in set N obtain its optimal path using the method given in step (3), put the computed optimal paths into set P, and update the reachable probability at time t of every state node on each optimal path: for a state node n_s on a path, its reachable probability of arriving at the destination node n_j is given by formula (6):
Pr_t(p(n_s, n_j)) = Pr(p(n_s, n_j)) × Pr_{t−1}(p(n_s)) ——(6)
where Pr(p(n_s, n_j)) is given by formula (1), and Pr_{t−1}(p(n_s)), the reachable probability from n_0 to n_s at time t−1, is given by formula (2); when t = 1, Pr_{t−1}(p(n_s)) is Pr(p(n_0, n_s)), given by formula (1);
5. Empty the node set N;
6. Take an optimal execution path p out of set P; for the state nodes on p other than the destination node n_j, denoted p − n_j, find the nodes reachable from them under the optimal action but not lying on the optimal path, and denote them set β; insert the node set β into node set N, and at the same time compute the reachable probability of each such successor node n' of every state node n in p − n_j, given by formula (7):
Pr_t(p(n')) = Pr_{t−1}(p(n)) × T(n.s, π(n.s), n'.s) ——(7)
7. If the optimal execution path set P is not empty, go to step 6; otherwise go to the next step;
8. Compute the task success rate, within this time period, of all state nodes in path set P with respect to the destination node n_j of the path: ρ_t = Pr(P(n_j)), where Pr(P(n_j)), the reachable probability of path set P with respect to the destination node, is given by formula (4);
9. ρ = ρ + ρ_t; keep the node set N and go to step 2;
10. The algorithm ends and returns the success probability ρ.
Preferably, in step (2), value iteration is used to compute the optimal solution Q, Π of the Markov decision process of step (1).
Compared with the prior art, the advantage of the present invention is that, with the assessment method of the present invention, the command, dispatch and management decision personnel of a scheduled task can be given the expected time required to execute an uncertain task; and when the task execution time is fixed, the method can assess the risk of the task being completed, directly giving a clear risk index for task completion. The present invention therefore helps improve command, dispatch, management and decision making, and has positive practical significance.
Brief description of the drawings
Fig. 1 is a flow chart of the scheduled task execution capability measurement and risk assessment method in an embodiment of the present invention;
Fig. 2 is a flow chart of the plan trace and evaluation method in an embodiment of the present invention.
Embodiment
The present invention is described in further detail below in conjunction with the accompanying drawings and an embodiment.
The present invention provides a measurement and risk assessment method for the execution capability of a scheduled task, which comprises the following steps:
(1) Call the executor of the task an agent. The scheduled task the agent is to execute is uncertain and serialized: the execution of the next task in the sequence is associated with the current execution state of the task and is independent of the execution states before it. First, build the Markov decision process of the scheduled task, denoted S, A, T, R, s_0, G, where S is the set of states in the execution of the scheduled task, i.e. S = {s_i | 1 ≤ i ≤ k}, where s_i is a state and k is the number of states; A is the set of actions the agent can take, i.e. A = {a_i | 1 ≤ i ≤ h}, where a_i is an action and h is the number of actions; T gives the transition probabilities between states during execution, where T(s_i, a, s_j) is the probability of transitioning from state s_i to state s_j when action a is taken; R is the set of quantifiable rewards obtainable for reaching the goal, where R(s_i) is the reward obtained in state s_i; s_0 is the initial state of the agent; and G is the set of goal states of the agent;
(2) Use an existing method to compute the optimal solution Q, Π of the Markov decision process of (1) and find the optimal execution path for reaching the goal, where Q is the computed value of each state and Π records the optimal action of each state; the existing method may be chosen as value iteration;
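As one possible reading of this step, value iteration on the toy model above could look as follows. This is a sketch under stated assumptions (goal states are absorbing; the discount factor gamma is a free parameter I introduce), not the exact procedure used by the invention:

```python
def value_iteration(S, A, T, R, G, gamma=0.95, eps=1e-6):
    """Sketch of step (2): compute the state values Q and the greedy
    policy Pi by value iteration. Assumes goal states are absorbing."""
    Q = {s: 0.0 for s in S}

    def backup(s, a):
        # One-step lookahead value of taking action a in state s.
        return R[s] + gamma * sum(p * Q[s2] for s2, p in T[(s, a)].items())

    while True:
        delta = 0.0
        for s in S:
            v = R[s] if s in G else max(backup(s, a) for a in A if (s, a) in T)
            delta = max(delta, abs(v - Q[s]))
            Q[s] = v
        if delta < eps:
            break

    # Pi records the optimal action of each non-goal state.
    Pi = {s: max((a for a in A if (s, a) in T), key=lambda a: backup(s, a))
          for s in S if s not in G}
    return Q, Pi

Q, Pi = value_iteration(S, A, T, R, G)   # uses the toy MDP sketched above
```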
(3) Compute for each state its optimal execution path p, and save the results in the optimal execution path set P; the computation is as follows:
The state node n is the component unit of the path traversed by the scheduled task. Each state node n consists of a state element s and a probability element pr, i.e. n = (s, pr); correspondingly, n.s is the state corresponding to node n and n.pr is the reachable probability of node n. For example, the initial state node n_0 corresponds to the initial state of the task process and can be written n_0 = (s_0, 1);
The reachable probability of a state node n_j on the path traversed by the scheduled task is defined as the probability of successfully reaching the destination node n_j of the path by starting from a start node n_s and following the optimal path; therefore:
Pr(p(n_s, n_j)) = ∏_{n ∈ p(n_s, n_j)} max_{n'} T(n.s, π(n.s), n'.s) ——(1)
In the above formula, n.s is the state corresponding to state node n; π(n.s) is the optimal action of node n; n'.s is a state reached by executing π(n.s) at node n; T(n.s, π(n.s), n'.s) is the probability of transitioning from state n.s to state n'.s when action π(n.s) is taken; n' is the state node corresponding to state n'.s and is an optimally reachable node of node n. In general a state node n has several optimally reachable nodes n'. n ∈ p(n_s, n_j) means that n ranges over all state nodes on the path from n_s to node n_j along the optimal execution path.
The meaning of formula (1) is: for every state node n on the path p(n_s, n_j), compute the probability of reaching state n'.s after taking the optimal action π(n.s) in state n.s, take the maximum of T(n.s, π(n.s), n'.s) together with the optimally reachable node n' attaining it, and finally multiply the maximal transition probabilities of all state nodes on the path p(n_s, n_j).
In particular, when the start node n_s of the path is the start node of the whole scheduled task, i.e. s = 0, the reachable probability of a state node n_j is abbreviated:
Pr(p(n_j)) = Pr(p(n_0, n_j)) ——(2)
Correspondingly, for a state node set N, its reachable probability with respect to a destination node n_d' is denoted Pr(N), with the expression:
Pr(N) = Σ_{n ∈ N} Pr(p(n, n_d')) ——(3)
Formula (3) expresses the sum of the reachable probabilities, with respect to the destination node n_d', of all state nodes n in the set. Similarly, for a path p, let N be the set of all nodes on path p; then for the path set P corresponding to node set N, its reachable probability with respect to a destination node n_d is computed by:
Pr(P(n_d)) = Σ_{p ∈ P} Pr(p(n_d)) ——(4)
Since the computation over the path set P always starts from the starting point n_0 of the scheduled task, by formula (2) this can be written directly as the reachable probability with respect to the destination node n_d;
For a state node n, its optimal execution trace is defined by:
n' = argmax_{n'} T(n.s, π(n.s), n'.s) ——(5)
where n.s is the state corresponding to node n, π(n.s) is the optimal action of node n, n'.s is a state reached by executing π(n.s) at node n, T(n.s, π(n.s), n'.s) is the probability of transitioning from state n.s to state n'.s when action π(n.s) is taken, and n' is the state node corresponding to state n'.s; in general a state node n has several optimally reachable nodes n'. Formula (5) expresses that when the transition probability T(n.s, π(n.s), n'.s) attains its maximum, the attaining n' is the node we take; together with node n it forms the optimal execution trace of node n;
For a state node n, its optimal execution path is defined as the set of all state nodes on the optimal execution traces from n to a terminal state g ∈ G with positive reward. The optimal execution path represents the best way of executing the scheduled task when no accident or uncertainty occurs;
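To illustrate formulas (1) and (5) together: following the argmax successor of formula (5) from a state until a goal state yields the optimal execution path, and multiplying the maxima along it yields the reachable probability of formula (1). Below is a sketch continuing the toy model above; the function names are my own, and keeping a single argmax successor per node is a simplifying assumption (the text allows several):

```python
def best_execution_path(s, Pi, T, G, max_len=100):
    """Formula (5): from each node take the successor n' that maximises
    T(n.s, pi(n.s), n'.s), until a goal state g in G is reached."""
    path = [s]
    while s not in G and len(path) < max_len:
        succ = T[(s, Pi[s])]             # successors under the optimal action
        s = max(succ, key=succ.get)      # optimally reachable node n'
        path.append(s)
    return path

def path_reachable_probability(path, Pi, T, G):
    """Formula (1): product over the path's nodes of the maximal
    transition probability under the optimal action pi(n.s)."""
    pr = 1.0
    for s in path:
        if s in G:
            break
        pr *= max(T[(s, Pi[s])].values())
    return pr

p = best_execution_path(s0, Pi, T, G)
# ['depart', 'en_route', 'on_site', 'extinguished']
print(path_reachable_probability(p, Pi, T, G))   # 0.9 * 0.8 * 0.7 = 0.504
```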
(4) Execute the plan trace and evaluation method (Plan Trace and Evaluation, abbreviated PlaTE below) repeatedly, once per task execution time unit, until the task success rate reaches the expected index or the time spent executing the scheduled task exceeds the specified limit.
The execution time unit t of the scheduled task is a unit of time easily quantified as a number, set according to the task domain the scheduled task belongs to; it may be some minutes, some hours, or some days;
The task success rate ρ_t is a probability index of successfully completing the task, within the scheduled task's execution environment, in a certain time period (an integer multiple of the time unit t), i.e. how probable it is that the scheduled task completes within that period. It expresses the total time consumption of the current scheduled task, that is, at least how long is generally needed to guarantee the task can be completed; it thus provides commanders or other managers with a quantitative index for reference, as well as a frame of reference for measuring execution capability and assessing risk of scheduled tasks.
The time spent executing the scheduled task is computed by the PlaTE method of the present invention; the specific algorithm is as follows:
1. Set the initial values: node set N = {n_0}, task execution time t = 0, task success probability ρ = 0;
2. If the task execution time has reached the specified upper limit, or the time spent executing the task has exceeded the specified limit, go to step 10; otherwise go to the next step;
3. Increase the execution time by one time unit, i.e. t = t + 1;
4. For each state in set N obtain its optimal path using the method given in step (3), put the computed optimal paths into set P, and update the reachable probability at time t of every state node on each optimal path: for a state node n_s on a path, its reachable probability of arriving at the destination node n_j is given by formula (6):
Pr_t(p(n_s, n_j)) = Pr(p(n_s, n_j)) × Pr_{t−1}(p(n_s)) ——(6)
where Pr(p(n_s, n_j)) is given by formula (1), and Pr_{t−1}(p(n_s)), the reachable probability from n_0 to n_s at time t−1, is given by formula (2); when t = 1, Pr_{t−1}(p(n_s)) is Pr(p(n_0, n_s)), given by formula (1);
5. Empty the node set N;
6. Take an optimal execution path p out of set P; for the state nodes on p other than the destination node n_j, denoted p − n_j, find the nodes reachable from them under the optimal action but not lying on the optimal path, and denote them set β; insert the node set β into node set N, and at the same time compute the reachable probability of each such successor node n' of every state node n in p − n_j, given by formula (7):
Pr_t(p(n')) = Pr_{t−1}(p(n)) × T(n.s, π(n.s), n'.s) ——(7)
7. If the optimal execution path set P is not empty, go to step 6; otherwise go to the next step;
8. Compute the task success rate, within this time period, of all state nodes in path set P with respect to the destination node n_j of the path: ρ_t = Pr(P(n_j)), where Pr(P(n_j)), the reachable probability of path set P with respect to the destination node, is given by formula (4);
9. ρ = ρ + ρ_t; keep the node set N and go to step 2;
10. The algorithm ends and returns the success probability ρ.
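The PlaTE loop above can be sketched as follows, reusing the helpers from the previous sketches. The key idea is that probability mass falling off each optimal path at time t (formula (7)) is re-inserted into the node set N and traced again at t + 1, while ρ accumulates the per-period success rates ρ_t (formulas (4) and (6)). The function name plate, the rho_target parameter, and the tie handling are illustrative assumptions:

```python
def plate(s0, Pi, T, G, t_max, rho_target=1.0):
    """Sketch of the Plan Trace and Evaluation (PlaTE) loop of step (4),
    reusing best_execution_path and path_reachable_probability above.
    rho_target is an assumed stand-in for the 'expected index'."""
    N = {s0: 1.0}                 # node set N: state -> reachable probability
    rho, t = 0.0, 0
    while t < t_max and rho < rho_target:     # steps 2 and 10
        t += 1                                # step 3: t = t + 1
        next_N = {}                           # step 5: empty node set
        rho_t = 0.0
        for s, pr in N.items():               # steps 4, 6, 7: each path in P
            path = best_execution_path(s, Pi, T, G)
            # formula (6): Pr_t(p(n_s, n_j)) = Pr(p(n_s, n_j)) * Pr_{t-1}(p(n_s))
            rho_t += pr * path_reachable_probability(path, Pi, T, G)
            mass = pr                         # Pr_{t-1}(p(n)) along the path
            for n in path:
                if n in G:
                    break
                succ = T[(n, Pi[n])]
                best = max(succ, key=succ.get)
                for n2, q in succ.items():    # formula (7): off-path successors
                    if n2 != best:
                        next_N[n2] = next_N.get(n2, 0.0) + mass * q
                mass *= succ[best]
        rho += rho_t                          # steps 8 and 9
        N = next_N
    return rho                                # step 10: success probability

rho = plate(s0, Pi, T, G, t_max=10)
print(rho, 1 - rho)    # success probability and the risk index 1 - rho
```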
At this point, for a chosen execution time, a success probability assessment of the task execution can be determined by the PlaTE method above, namely the success probability ρ. Correspondingly, when the commander or management decision maker expects the actor to complete the task more efficiently, i.e. gives a smaller time t, the risk index of the task execution is given by 1 − ρ. The commander can also determine a reasonable task execution time from this percentage-based assessment: for example, with ρ > 95% chosen and a time t given, the task has less than a 5% probability of not being completed within that period. The commander can likewise set a fully safe period for completing the task: when t is large enough, the assessment value ρ approaches 1 without bound, i.e. the task is completed with probability close to 100%.
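For instance, a commander who wants the smallest horizon t with ρ > 95% could scan t upward. A sketch using the plate function above (min_time_for is my own name; the 0.95 threshold is taken from the example in the text):

```python
def min_time_for(rho_min, s0, Pi, T, G, t_cap=1000):
    """Smallest number of time units t whose assessed success
    probability rho exceeds rho_min; returns (t, rho)."""
    for t in range(1, t_cap + 1):
        rho = plate(s0, Pi, T, G, t_max=t)
        if rho > rho_min:
            return t, rho
    return t_cap, plate(s0, Pi, T, G, t_max=t_cap)

t, rho = min_time_for(0.95, s0, Pi, T, G)
print(t, rho, 1 - rho)   # chosen time, success probability, risk index
```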
Claims (1)
1. A measurement and risk assessment method for the execution capability of a scheduled task, characterized by comprising the following steps:
(1) Call the executor of the task an agent. The scheduled task the agent is to execute is uncertain and serialized: the execution of the next task in the sequence is associated with the current execution state of the task and is independent of the execution states before it. First, build the Markov decision process of the scheduled task, denoted S, A, T, R, s_0, G, where S is the set of states in the execution of the scheduled task, i.e. S = {s_i | 1 ≤ i ≤ k}, where s_i is a state and k is the number of states; A is the set of actions the agent can take, i.e. A = {a_i | 1 ≤ i ≤ h}, where a_i is an action and h is the number of actions; T gives the transition probabilities between states during execution, where T(s_i, a, s_j) is the probability of transitioning from state s_i to state s_j when action a is taken; R is the set of quantifiable rewards obtainable for reaching the goal, where R(s_i) is the reward obtained in state s_i; s_0 is the initial state of the agent; and G is the set of goal states of the agent;
(2) Compute the optimal solution Q, Π of the Markov decision process of (1) and find the optimal execution path for reaching the goal, where Q is the computed value of each state and Π records the optimal action of each state;
(3) Compute for each state its optimal execution path p, and save the results in the optimal execution path set P; the computation is as follows:
The state node n is the component unit of the path traversed by the scheduled task. Each state node n consists of a state element s and a probability element pr, i.e. n = (s, pr); correspondingly, n.s is the state corresponding to node n and n.pr is the reachable probability of node n. For example, the initial state node n_0 corresponds to the initial state of the task process and can be written n_0 = (s_0, 1);
The reachable probability of a state node n_j on the path traversed by the scheduled task is defined as the probability of successfully reaching the destination node n_j of the path by starting from a start node n_s and following the optimal path; therefore:
Pr(p(n_s, n_j)) = ∏_{n ∈ p(n_s, n_j)} max_{n'} T(n.s, π(n.s), n'.s) ——(1)
In the above formula, n.s is the state corresponding to state node n; π(n.s) is the optimal action of node n; n'.s is a state reached by executing π(n.s) at node n; T(n.s, π(n.s), n'.s) is the probability of transitioning from state n.s to state n'.s when action π(n.s) is taken; n' is the state node corresponding to state n'.s and is an optimally reachable node of node n; in general a state node n has several optimally reachable nodes n'. n ∈ p(n_s, n_j) means that n ranges over all state nodes on the path from n_s to node n_j along the optimal execution path;
The meaning of formula (1) is: for every state node n on the path p(n_s, n_j), compute the probability of reaching state n'.s after taking the optimal action π(n.s) in state n.s, take the maximum of T(n.s, π(n.s), n'.s) together with the optimally reachable node n' attaining it, and finally multiply the maximal transition probabilities of all state nodes on the path p(n_s, n_j);
When the start node n_s of the path is the start node of the whole scheduled task, i.e. s = 0, the reachable probability of a state node n_j is abbreviated:
Pr(p(n_j)) = Pr(p(n_0, n_j)) ——(2)
Correspondingly, for a state node set N, its reachable probability with respect to a destination node n_d' is denoted Pr(N), with the expression:
Pr(N) = Σ_{n ∈ N} Pr(p(n, n_d')) ——(3)
Formula (3) expresses the sum of the reachable probabilities, with respect to the destination node n_d', of all state nodes n in the set. Similarly, for a path p, let N be the set of all nodes on path p; then for the path set P corresponding to node set N, its reachable probability with respect to a destination node n_d is computed by:
Pr(P(n_d)) = Σ_{p ∈ P} Pr(p(n_d)) ——(4)
Since the computation over the path set P always starts from the starting point n_0 of the scheduled task, by formula (2) this can be written directly as the reachable probability with respect to the destination node n_d;
For a state node n, its optimal execution trace is defined by:
n' = argmax_{n'} T(n.s, π(n.s), n'.s) ——(5)
where n.s is the state corresponding to node n, π(n.s) is the optimal action of node n, n'.s is a state reached by executing π(n.s) at node n, T(n.s, π(n.s), n'.s) is the probability of transitioning from state n.s to state n'.s when action π(n.s) is taken, and n' is the state node corresponding to state n'.s; in general a state node n has several optimally reachable nodes n'. Formula (5) expresses that when the transition probability T(n.s, π(n.s), n'.s) attains its maximum, the attaining n' is the node we take; together with node n it forms the optimal execution trace of node n;
For a state node n, its optimal execution path is defined as the set of all state nodes on the optimal execution traces from n to a terminal state g ∈ G with positive reward. The optimal execution path represents the best way of executing the scheduled task when no accident or uncertainty occurs;
(4) Execute the plan trace and evaluation method (Plan Trace and Evaluation, abbreviated PlaTE below) repeatedly, once per task execution time unit, until the task success rate reaches the expected index or the time spent executing the scheduled task exceeds the specified limit;
The execution time unit t of the scheduled task is a unit of time easily quantified as a number, set according to the task domain the scheduled task belongs to; t may be some minutes, some hours, or some days;
The task success rate ρ_t is a probability index of successfully completing the task, within the scheduled task's execution environment, in a certain time period (an integer multiple of the time unit t), i.e. how probable it is that the scheduled task completes within that period. It expresses the total time consumption of the current scheduled task, that is, at least how long is generally needed to guarantee the task can be completed; it thus provides commanders or other managers with a quantitative index for reference, as well as a frame of reference for measuring execution capability and assessing risk of scheduled tasks;
The time spent executing the scheduled task is computed with the plan trace and evaluation method of the present invention, as follows:
1. Set the initial values: node set N = {n_0}, task execution time t = 0, task success probability ρ = 0;
2. If the task execution time has reached the specified upper limit, go to step 10; otherwise go to the next step;
3. Increase the execution time by one time unit, i.e. t = t + 1;
4. For each state in set N obtain its optimal path using the method given in step (3), put the computed optimal paths into set P, and update the reachable probability at time t of every state node on each optimal path: for a state node n_s on a path, its reachable probability of arriving at the destination node n_j is given by formula (6):
Pr_t(p(n_s, n_j)) = Pr(p(n_s, n_j)) × Pr_{t−1}(p(n_s)) ——(6)
where Pr(p(n_s, n_j)) is given by formula (1), and Pr_{t−1}(p(n_s)), the reachable probability from n_0 to n_s at time t−1, is given by formula (2); when t = 1, Pr_{t−1}(p(n_s)) is Pr(p(n_0, n_s)), given by formula (1);
5. Empty the node set N;
6. Take an optimal execution path p out of set P; for the state nodes on p other than the destination node n_j, denoted p − n_j, find the nodes reachable from them under the optimal action but not lying on the optimal path, and denote them set β; insert the node set β into node set N, and at the same time compute the reachable probability of each such successor node n' of every state node n in p − n_j, given by formula (7):
Pr_t(p(n')) = Pr_{t−1}(p(n)) × T(n.s, π(n.s), n'.s) ——(7)
7. If the optimal execution path set P is not empty, go to step 6; otherwise go to the next step;
8. Compute the task success rate, within this time period, of all state nodes in path set P with respect to the destination node n_j of the path: ρ_t = Pr(P(n_j)), where Pr(P(n_j)), the reachable probability of path set P with respect to the destination node, is given by formula (4);
9. ρ = ρ + ρ_t; keep the node set N and go to step 2;
10. The algorithm ends and returns the success probability ρ.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201310430955.6A CN103514371B (en) | 2013-09-22 | 2013-09-22 | A kind of plan target executive capability tolerance and methods of risk assessment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN103514371A true CN103514371A (en) | 2014-01-15 |
CN103514371B CN103514371B (en) | 2016-08-17 |
Family
ID=49897080
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201310430955.6A Expired - Fee Related CN103514371B (en) | 2013-09-22 | 2013-09-22 | A kind of plan target executive capability tolerance and methods of risk assessment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN103514371B (en) |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108197871A (en) * | 2018-01-19 | 2018-06-22 | 顺丰科技有限公司 | The mission planning method and system that express delivery receipts are dispatched officers |
CN109523029A (en) * | 2018-09-28 | 2019-03-26 | 清华大学深圳研究生院 | For the adaptive double from driving depth deterministic policy Gradient Reinforcement Learning method of training smart body |
CN109523029B (en) * | 2018-09-28 | 2020-11-03 | 清华大学深圳研究生院 | Self-adaptive double-self-driven depth certainty strategy gradient reinforcement learning method |
CN109583647A (en) * | 2018-11-29 | 2019-04-05 | 上海电气分布式能源科技有限公司 | A kind of energy storaging product multiple users share method and power supply system |
CN109583647B (en) * | 2018-11-29 | 2023-06-23 | 上海电气分布式能源科技有限公司 | Multi-user sharing method and power supply system for energy storage products |
CN112700074A (en) * | 2019-10-22 | 2021-04-23 | 北京四维图新科技股份有限公司 | Express task planning method and device |
CN112700074B (en) * | 2019-10-22 | 2024-05-03 | 北京四维图新科技股份有限公司 | Express delivery task planning method and device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C14 | Grant of patent or utility model | ||
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee | Granted publication date: 20160817; Termination date: 20170922 |