CN115356910A - Cluster control method based on perception control and perception tolerance - Google Patents

Cluster control method based on perception control and perception tolerance

Info

Publication number
CN115356910A
Authority
CN
China
Prior art keywords
state
perception
individual
control unit
end point
Prior art date
Legal status
Pending
Application number
CN202210905373.8A
Other languages
Chinese (zh)
Inventor
洪慧
叶超宇
Current Assignee
Hangzhou Dianzi University
Original Assignee
Hangzhou Dianzi University
Priority date: 2022-07-29
Filing date: 2022-07-29
Publication date: 2022-11-18
Application filed by Hangzhou Dianzi University filed Critical Hangzhou Dianzi University
Priority to CN202210905373.8A
Publication of CN115356910A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G05: CONTROLLING; REGULATING
    • G05B: CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B11/00: Automatic controllers
    • G05B11/01: Automatic controllers electric
    • G05B11/36: Automatic controllers electric with provision for obtaining particular characteristics, e.g. proportional, integral, differential
    • G05B11/42: Automatic controllers electric with provision for obtaining particular characteristics, e.g. proportional, integral, differential, for obtaining a characteristic which is both proportional and time-dependent, e.g. P.I., P.I.D.

Abstract

The invention discloses a cluster control method based on perception control and perception tolerance, which aims to overcome the poor cluster performance of existing swarm intelligent control and comprises the following steps: configuring cluster information of a group, initializing the iteration count, configuring state information of each individual, and substituting the state information into a path planning method to obtain path trajectory data; driving the individual to move according to the path trajectory data and indexing the data to obtain a current state target value and an end point state target value; calculating the individual's current amount of perception information; updating the individual perception attributes; predicting a state error and an error correction amount; updating the end point state; updating the path trajectory data and incrementing the iteration count by one; and judging whether the iteration count equals the termination iteration count, terminating the iteration if so and continuing otherwise.

Description

Cluster control method based on perception control and perception tolerance
Technical Field
The invention belongs to the field of cluster intelligent control, and particularly relates to a cluster control method based on perception control and perception tolerance.
Background
In the field of machine manufacturing and automation, robots and unmanned aerial vehicles have important research and application significance. With the rise of artificial intelligence and digital communication technology, intelligent robot and aircraft technology has gradually become a research subject spanning multiple disciplines and fields. The principle that there is strength in numbers also applies to intelligent machines: in general, a robot cluster has higher execution efficiency, meaning that more tasks can be completed in the same time, or the same task can be completed in less time. Driving a group, however, relies on cluster control technology; compared with driving a single individual, it must additionally consider the communication, transmission and arrangement structure among the individuals in the cluster.
Because of noise and various engineering errors in a real environment, a controlled individual often cannot accurately reach a manually set target state, and an additional control method is needed to cope with interference factors in the environment. The most widely applied is the PID control method, which adjusts the input value according to historical data and the rate of change of the error so that the system becomes more accurate and stable. PID can effectively cope with control errors caused by environmental noise, but it is not ideal in the face of cluster congestion caused by an increase in the number of individuals. How to build a method model that simultaneously solves the degradation of cluster performance indexes caused by complex environmental interference and by the growth of the number of individuals has therefore become a key topic of swarm intelligent control research.
Disclosure of Invention
To overcome the disadvantages and problems of the prior art, the present invention provides a cluster control method based on perception control and perception tolerance.
To achieve this purpose, the invention adopts the following technical scheme:
a cluster control method based on perception control and perception tolerance comprises the following steps:
step 1: the processor configures cluster information of a group, wherein the cluster information comprises the number of individuals, the number of times of terminating iteration, the maximum error of a terminal state and individual perception attributes, and the individual perception attributes comprise extreme acceleration, a perception range, a perception tolerance and an iteration step length of a path planning method;
and 2, step: the processor initializes iteration times and configures the state information of an individual, wherein the state information comprises an initial state and a terminal state;
and step 3: the processor substitutes the state information into a path planning method to calculate path trajectory data, wherein the path trajectory data comprises speed information and position information of an individual in each iteration step, and the path trajectory method can be a path planning method based on a Dubins curve;
and 4, step 4: the processor drives the individual to move according to the path track data, and a current state target value and an end point state target value of the individual are obtained according to the path track data index;
and 5: the sensor senses the self state of the individual where the sensor is located and other individual states in the sensing range of the sensor, and the current sensing information quantity of the individual is obtained through calculation according to the self state and other individual states;
and 6: the sensor inputs the self state and the perception information quantity to the current state control unit, and inputs the perception information quantity, the destination state target value and the perception tolerance to the perception attribute control unit;
and 7: the perception attribute control unit updates the individual perception attribute according to the perception information amount, the terminal state target value and the perception tolerance;
and step 8: the current state control unit predicts a state error and an error correction amount;
and step 9: the perception attribute control unit inputs the updated individual perception attribute to the sensor and the terminal state control unit, the current state control unit inputs the state error and the error correction value to the terminal state control unit, and the terminal state control unit updates the terminal state according to the state error, the error correction value and the updated terminal state;
step 10: the processor substitutes the updated individual perception attribute and the updated end point state into a path planning method to update to obtain path track data, and the iteration number is increased by one;
step 11: and the processor judges whether the iteration times are equal to the termination iteration times, if so, the iteration is terminated, and if not, the step 4 to the step 11 are iterated.
Preferably, the step 5 specifically includes:
step 51: the sensor senses the self state of the individual in which the sensor is positioned and the states of other individuals in the sensing range of the sensor;
step 52: the sensor calculates according to the self state and other individual states to obtain Euclidean distance between the individual and other individuals;
step 53: the sensor calculates the sensing angle occupied by other individuals according to the following formula:
[formula not reproduced in the text: φ is expressed in terms of BL and DisSefOth]
wherein φ is the perception angle occupied by the other individual, BL is the diameter of the individual itself, and DisSefOth is the Euclidean distance;
and accumulating the perception angles of all other individuals to obtain an overall perception angle, and dividing the overall perception angle by 2 pi to obtain the perception information amount.
Preferably, the step 7 specifically includes:
step 71: the perception attribute control unit compares the perception information amount with the perception tolerance to obtain the difference between the two, and judges whether the difference is greater than 0; if so, the difference is kept; if not, it judges whether the absolute value of the difference is smaller than η times the perception tolerance, where η ∈ (0,1) is an interval parameter for controlling the individual to reversely adjust its perception attributes; if so, the difference is kept, and if not, the difference is set to 0;
step 72: amplifying the difference between the two and updating the individual perception attributes according to the following formula:
PecpAtrbNew=PecpAtrbCurr+α·PecpDiff,
in the formula, PecpAtrbNew is the updated individual perception attribute, PecpAtrbCurr is the current individual perception attribute, and α is a proportionality coefficient;
step 73: and pre-judging whether the current individual and other individuals collide according to the path track data and other individual states, if so, setting a collision signal to be 1, and otherwise, setting the collision signal to be 0.
Preferably, the step 8 specifically includes:
step 81: subtracting the target value of the current state from the self state to obtain a state error;
step 82: and judging whether the number of the state errors in the state error container is equal to the error prediction sequence length or not, if so, obtaining an error correction value according to an error prediction model, inputting the error correction value into the terminal state control unit, initializing the state error container, otherwise, inputting the state errors into the state error container, setting the error correction value to be 0, and inputting the error correction value into the terminal state control unit.
Preferably, the step 9 specifically includes:
step 91: inputting the collision signal into the end point state control unit, and judging whether the collision signal is equal to 1; if so, the end point state is changed, otherwise the end point state is not changed;
step 92: comparing the module length of the self state with the module length of the end point state target value to obtain the difference between the two, and judging whether the difference is less than or equal to the end point state maximum error; if so, the end point state is changed, otherwise the end point state is not changed;
step 93: inputting the error correction amount into the end point state control unit, and updating the end point state by adopting the following formula:
StateEndNew=StateEnd+β·ErrCorrVal,
in the formula, StateEndNew is the updated end point state, StateEnd is the current end point state, and β is an error amplification coefficient;
step 94: inputting the updated limit acceleration and the updated iteration step length into the end point state control unit, which substitutes the updated limit acceleration, the updated iteration step length and the updated end point state into the path planning method.
Compared with the prior art, the invention has the outstanding and beneficial technical effects that:
in the invention, the linkage control of group movement is realized from the perception control and the perception tolerance, so that the group behaviors, information interaction and task intentions are organically combined, the efficiency and performance of cooperatively completing tasks by each individual in the group can be obviously improved, the problems of complex environment and emergency can be flexibly solved, and cluster congestion cannot be caused in the case of increasing the number of the individuals, therefore, the invention has the advantages of high working efficiency, strong robustness, high reliability, excellent cluster performance and the like.
In the invention, the cluster control method based on the perception control and the perception tolerance helps the cluster to effectively deal with the interference of a complex environment so as to improve the stability and the accuracy of control and solve the problem that the survival rate of the cluster in a closed space is rapidly reduced along with the increase of the number of individuals.
Drawings
FIG. 1 is a schematic flow chart of the steps of the present invention;
FIG. 2 is a schematic diagram of the method framework of the present invention.
Detailed Description
To facilitate understanding of those skilled in the art, the present invention is further described below in conjunction with the accompanying drawings and the specific embodiments.
It should be noted that the cluster control method based on perception control and perception tolerance is applied to a group robot system, and is used for realizing the desired cluster motion by controlling the motion of an individual. The group robot system includes a group, which is composed of a plurality of individuals, which refer to a single robot. A robot refers to an intelligent machine capable of semi-autonomous or fully autonomous operation.
As shown in FIG. 1, a cluster control method based on perception control and perception tolerance includes the following steps:
Step 1: the processor configures cluster information of a group, wherein the cluster information comprises the number of individuals (N), a termination iteration count (TermStepCnt), an end point state maximum error, and individual perception attributes (PecpAtrbCurr), wherein the individual perception attributes comprise a limit acceleration (MaxAcc), a perception range (DetRange), a perception tolerance (PecpLmtTagt) and an iteration step length (IterStepSize);
In the above steps, the sensing range is the detection area of the sensor: if an individual is located within the sensing range, the sensor can detect the state of that individual, and if an individual is located outside the sensing range, the sensor does not detect its state.
Step 2: the processor initializes the iteration count (AgentIterCnt) and configures the state information of the individual, wherein the state information comprises an initial state (StateInit) and an end point state (StateEnd);
in the above step, the initial state includes the position and velocity of the individual at the initial time, the end state includes the position and velocity of the individual at the end time, and the parameter type of the initial state and the parameter type of the end state are the same. The number of iterations is initially 0.
Step 3: the processor substitutes the state information into the path planning method to calculate path trajectory data (PathTrajSet), wherein the path trajectory data comprises the speed information and position information of the individual in each iteration step;
In the above steps, the parameter type of the initial state, the parameter type of the end point state and the parameter type of the input to the path planning method are the same. The path planning method may be preset in the processor. The path planning method is not limited, but it needs to satisfy the following conditions (a minimal sketch is given after this list):
(1) The path planning method conforms to the laws of kinematics: the path trajectory data is obtained by substituting the initial state and the end point state into the path planning method, and when the path trajectory data is used to drive the individual to move, the motion state of the individual changes continuously;
(2) The path trajectory data incorporates the limit acceleration and the iteration step length of the individual, where the iteration step length is the number of iterations of steps 4 to 11 performed per second;
(3) The path planning method must guarantee that the mapping between the inputs (i.e., the initial state and the end point state) and the output (i.e., the path trajectory data) is unique, i.e., the same input always yields the same output.
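The following is a minimal Python sketch of a planner that satisfies conditions (1) to (3): it is deterministic, emits one speed/position entry per iteration step, and uses the limit acceleration only to bound the number of steps. The names State and plan_path are illustrative and are not taken from the patent; a real implementation could instead use a Dubins-curve planner as mentioned above.

# Minimal sketch of a deterministic path planner; names are illustrative only.
from dataclasses import dataclass
from typing import List

@dataclass
class State:
    x: float   # position
    y: float
    vx: float  # velocity
    vy: float

def plan_path(state_init: State, state_end: State,
              max_acc: float, iter_step_size: float) -> List[State]:
    dt = 1.0 / iter_step_size                  # duration of one iteration step
    dist = ((state_end.x - state_init.x) ** 2 +
            (state_end.y - state_init.y) ** 2) ** 0.5
    # Crude travel-time bound under the limit acceleration (assumption:
    # acceleration from rest), used only to size the trajectory.
    n_steps = max(2, int(((2.0 * dist / max_acc) ** 0.5) / dt) + 1)
    path = []
    for k in range(n_steps + 1):
        s = k / n_steps                        # interpolation parameter in [0, 1]
        path.append(State(
            x=state_init.x + s * (state_end.x - state_init.x),
            y=state_init.y + s * (state_end.y - state_init.y),
            vx=state_init.vx + s * (state_end.vx - state_init.vx),
            vy=state_init.vy + s * (state_end.vy - state_init.vy)))
    return path

Identical inputs always produce the identical list, which is the uniqueness requirement of condition (3).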
Step 4: the processor drives the individual to move according to the path trajectory data, and obtains a current state target value (StateCurrTagt) and an end point state target value (StateEndTagt) of the individual by indexing the path trajectory data;
In the above steps, the individual may include at least one of a motor and a steering engine, and the processor drives the individual to move according to the path trajectory data, that is, the processor controls the motor to run and the steering engine to steer according to the path trajectory data. Because the path trajectory data is a discrete trajectory sequence under ideal conditions, the current state target value and the end point state target value of the individual are contained in the path trajectory data, and they are obtained by indexing so as to facilitate subsequent iterations. The initial state is the data obtained at index position 0 of the path trajectory data, the current state target value of the individual is the data at index position 1, and the end point state target value is the data at the last index position.
Step 5: the sensor senses the self state (StateCurrSef) of the individual where the sensor is located and the states of other individuals (StateCurrOth) within the sensor's sensing range, and calculates the current perception information amount (PecpInfoAmt) of the individual according to the self state and the other individual states;
in the above steps, the sensor is embedded in the individual, the sensor detects data of the corresponding individual as the self state of the individual, and the other individuals are individuals except the individual where the sensor is located in the sensing range. The number of the sensors is consistent with that of the individuals, the sensors correspond to the individuals one by one, and the sensors are embedded in the corresponding individuals.
Step 6: the sensor inputs the self state and the perception information amount to the current state control unit (StateCurrECU), and inputs the perception information amount, the end point state target value and the perception tolerance to the perception attribute control unit (PecpAtrbECU);
In the above steps, the current state control unit and the perception attribute control unit are parts of the processor.
Step 7: the perception attribute control unit updates the individual perception attributes according to the perception information amount, the end point state target value and the perception tolerance;
In the above steps, the perception attribute control unit iteratively adjusts the individual perception attributes according to the perception information amount, the end point state target value and the perception tolerance, so that more realistic individual perception attributes are obtained iteratively during the motion of the individual.
Step 8: the current state control unit predicts a state error (StateErr) and an error correction amount (ErrCorrVal);
In the above steps, the current state control unit iteratively detects the state error and the error correction value, so that a more realistic state error and error correction value can be detected iteratively during the motion of the individual.
Step 9: the perception attribute control unit inputs the updated individual perception attributes to the sensor and to the end point state control unit (StateEndTagtECU), the current state control unit inputs the state error and the error correction value to the end point state control unit, and the end point state control unit updates the end point state according to the state error, the error correction value and the updated individual perception attributes;
In the above step, the end point state control unit is a part of the processor. The updated individual perception attributes comprise the updated limit acceleration, perception range and iteration step length; the perception attribute control unit inputs the updated perception range to the sensor, and inputs the updated limit acceleration and the updated iteration step length to the end point state control unit. The state error, the error correction value, the updated limit acceleration and the updated iteration step length are used for updating the end point state, so that the end point state of the individual iteratively compensates for environmental errors, and the accuracy of the motion state of the individual is improved.
Step 10: the processor substitutes the updated individual perception attribute and the updated end point state into a path planning method to update to obtain path track data, and the iteration number is increased by one;
in the above steps, the number of iterations plus one is:
AgentIterCntNew=AgentIterCnt+1,
in the formula, AgentIterCntNew is the updated iteration count and AgentIterCnt is the iteration count before the update.
Step 11: the processor judges whether the iteration times are equal to the termination iteration times, if so, the iteration is terminated, and if not, the step 4 to the step 11 are iterated;
in the above steps, step 4 to step 11 are perceptual control processes, and negative feedback control is realized in an iterative manner, so that cluster control can be performed more accurately.
The step 5 specifically includes:
step 51: the sensor senses the self state of the individual in which the sensor is positioned and other individual states in the sensing range.
Step 52: the sensor calculates the Euclidean distance (DisSefOth) between the individual and other individuals according to the self state and the states of other individuals;
in the above step, the self state includes the position (LocSef) of the current individual, the other individual state includes the position (AgentOth) of the other individual, the euclidean distance is the distance between the position of the individual and the position of the other individual, and the calculation formula of the euclidean distance is as follows:
DisSefOth = sqrt((LocSef.X - AgentOth.X)^2 + (LocSef.Y - AgentOth.Y)^2),
in the above formula, LocSef.X is the X-axis coordinate of the position of the individual, LocSef.Y is the Y-axis coordinate of the position of the individual, AgentOth.X is the X-axis coordinate of the position of the other individual, and AgentOth.Y is the Y-axis coordinate of the position of the other individual.
Step 53: the sensor calculates the sensing angle (phi) occupied by other individuals according to the following formula:
[formula not reproduced in the text: φ is expressed in terms of BL and DisSefOth]
wherein φ is the perception angle occupied by the other individual, BL is the diameter of the individual itself, and DisSefOth is the Euclidean distance.
Step 54: the sensor accumulates the perception angles of all other individuals to obtain a total perception angle (PecpAng), and divides the total perception angle by 2π to obtain the perception information amount;
In the above steps, the formula for calculating the perception information amount is:
PecpInfoAmt = PecpAng/(2π).
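A compact Python sketch of steps 51 to 54 is given below. The Euclidean distance follows the formula of step 52; the angular width 2·arcsin(BL/(2·DisSefOth)) used for the perception angle is an assumption (the angle subtended by a disc of diameter BL at distance DisSefOth), since the original formula image is not reproduced in the text.

# Sketch of steps 51-54; the arcsin form of the perception angle is an assumption.
import math

def pecp_info_amt(loc_sef, others, bl, det_range):
    """Fraction of the 2*pi field of view occupied by neighbours in range."""
    pecp_ang = 0.0
    for agent_oth in others:
        dis_sef_oth = math.hypot(loc_sef[0] - agent_oth[0],
                                 loc_sef[1] - agent_oth[1])   # step 52
        if dis_sef_oth <= 0.0 or dis_sef_oth > det_range:
            continue                                          # outside the sensing range
        ratio = min(1.0, bl / (2.0 * dis_sef_oth))
        pecp_ang += 2.0 * math.asin(ratio)                    # step 53 (assumed form)
    return pecp_ang / (2.0 * math.pi)                         # step 54

print(pecp_info_amt((0.0, 0.0), [(1.0, 0.0), (0.0, 2.0)], bl=0.5, det_range=3.0))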
in the step 7, the method specifically comprises:
step 71: the perception attribute control unit compares the perception information quantity with the perception tolerance to obtain a difference value (PecPdiff) of the perception information quantity and the perception tolerance, judges whether the difference value of the two is larger than 0, if so, the difference value of the two is kept, if not, the absolute value of the difference value of the two is judged whether to be smaller than eta times of the perception tolerance, eta is an interval parameter for controlling the individual to reversely adjust the individual perception attribute, eta belongs to (0,1), if so, the difference value of the two is kept, and if not, the difference value of the two is set to be 0;
in the above step, the calculation formula of the difference between the two is:
PecpDiff=PecpInfoAmt-PecpLmtTagt,
η may be preset in the perception attribute control unit of the processor.
Step 72: the perception attribute control unit amplifies the difference between the two and substitutes it into the following formula to update the individual perception attributes:
PecpAtrbNew=PecpAtrbCurr+α·PecpDiff,
in the formula, PecpAtrbNew is the updated individual perception attribute, PecpAtrbCurr is the current individual perception attribute, and α is a proportionality coefficient;
In the above steps, the perception attribute control unit comprises a limit acceleration control unit (MaxAccECU), a perception range control unit (DetRangeECU) and an iteration step length control unit (IterStepSizeECU). The individual perception attributes comprise the limit acceleration, the perception range and the iteration step length; the limit acceleration control unit updates the limit acceleration according to the above formula, the perception range control unit updates the perception range according to the above formula, and the iteration step length control unit updates the iteration step length according to the above formula, and the three units may each adopt a different proportionality coefficient α, so that the updated individual perception attributes are obtained. The proportionality coefficient α may be preset in the limit acceleration control unit, the perception range control unit and the iteration step length control unit respectively. The perception attribute control unit inputs the updated perception range to the sensor, thereby adjusting the sensing range of the sensor, and inputs the updated limit acceleration and the updated iteration step length to the end point state control unit for the subsequent adjustment of the end point state.
Step 73: the processor pre-judges whether the current individual and other individuals collide according to the path track data and other individual states, if so, the collision signal is set to be 1, and if not, the collision signal is set to be 0;
in the above steps, the path trajectory data includes a path of the current individual, the state of the other individual includes speed directions of the other individual, the processor determines whether the path of the current individual and the speed directions of the other individual are crossed, if so, it is determined that the current individual and the other individual collide, otherwise, it is determined that the current individual and the other individual do not collide.
In the step 8, the method specifically includes:
step 81: subtracting the target value of the current state from the self state to obtain a state error (StateErr);
step 82: judging whether the quantity of the state errors in the state error container is equal to an error prediction sequence length (ErrPredLength), if so, obtaining an error correction value according to an error prediction model, inputting the error correction value to an end point state control unit, initializing the state error container, otherwise, inputting the state errors to the state error container, setting the error correction value to be 0, and inputting the error correction value to the end point state control unit;
in the above steps, the error prediction sequence length is related to the error prediction model. The state error container is part of the processor and the error prediction model may be a kalman filter model.
The step 9 specifically includes:
Step 91: the processor inputs the collision signal into the end point state control unit and judges whether the collision signal is equal to 1; if so, the end point state is changed, otherwise the end point state is not changed;
In the above steps, the problem of collision avoidance between the current individual and other individuals is thereby addressed.
Step 92: the processor compares the module length of the self state with the module length of the end point state target value to obtain the difference (CurrEndDiff) between the two, and judges whether the difference is less than or equal to the end point state maximum error; if so, the end point state is changed, otherwise the end point state is not changed;
In the above steps, the module length of the self state is compared with the module length of the end point state target value to obtain the difference between the two:
CurrEndDiff = (module length of the self state) - (module length of the end point state target value).
Step 93: the processor inputs the error correction amount to the end point state control unit, and updates the end point state by adopting the following formula:
StateEndNew=StateEnd+β·ErrCorrVal,
in the formula, StateEndNew is the updated end point state, StateEnd is the current end point state, and β is an error amplification coefficient;
In the above steps, β is preset in the processor, and the magnitude of β can be adjusted through actual testing so that the deviation between the actual trajectory and the ideal trajectory of the individual becomes smaller.
Step 94: the processor inputs the updated limit acceleration and the updated iteration step length to the end point state control unit, and the end point state control unit substitutes the updated limit acceleration, the updated iteration step length and the updated end point state into the path planning method.
As shown in FIG. 2, which is a schematic diagram of the method framework provided by the present invention, the control method based on perception control and perception tolerance comprises the following five parts:
(1) A sensing area: the individual obtains its own state and the states of other individuals within the sensing range through the sensor; the perception capability of the current individual is generated from the other individual states and the self state by the relevant method, and the self state and the perception capability serve as input signals of the processor. The self state comprises the current state and the end point state, and the end point state is perceived and transmitted only at the end of a single iteration;
(2) A target state area: the perception tolerance and the initial end point target state need to be set in advance, and the current state target value and the end point state target value can be generated by the path planning method;
(3) An ECU area, namely the processor: the decision control and error correction parts of the invention rely on basic control units and a hierarchical structure. The perception attribute control unit serves as the highest layer; its output signal is used to control the variation amplitude of the output values of the limit acceleration control unit, the perception range control unit and the iteration step length control unit, and to realize the decision change generated when an individual pre-judges a collision. The current state control unit compares and analyzes the current state value of the individual against the state target value, is used to evaluate errors and other noise in the environment, and outputs a signal to the end point state control unit for adjusting the end point information. The end point state control unit receives the error analysis reference signal input by the current state control unit so as to adjust the end point information and resist environmental factors, and receives the decision change reference signal input by the top-level perception attribute control unit so as to realize obstacle avoidance among individuals in a closed environment;
(4) A path planning method area: used to help the individual plan a future path and to guide the individual's motion behavior;
(5) An outer area: comprising the environmental feedback function, the sensors and the action generator hardware facilities.
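To show how the five parts interact, the following self-contained toy wires them together for a single individual moving in one dimension; every function body is an illustrative stand-in (including the assumed arcsin form of the perception angle and the invented disturbance term), not the patent's implementation.

# Toy wiring of the five parts of FIG. 2 for one individual in 1-D.
import math

def sense(pos_self, others, bl, det_range):                    # sensing area
    ang = 0.0
    for p in others:
        d = abs(pos_self - p)
        if 0.0 < d <= det_range:
            ang += 2.0 * math.asin(min(1.0, bl / (2.0 * d)))    # assumed angle form
    return ang / (2.0 * math.pi)                                # perception information amount

def plan(pos, end, n=10):                                       # path planning method area
    return [pos + (end - pos) * k / n for k in range(n + 1)]    # deterministic output

def one_iteration(pos, end, det_range, others,
                  pecp_lmt_tagt=0.2, alpha=-0.5, beta=0.5, bl=0.3):
    path = plan(pos, end)                                       # PathTrajSet
    pos = path[1] + 0.02                                        # outer area: actuation with an invented disturbance
    info = sense(pos, others, bl, det_range)                    # sensor reading
    det_range += alpha * (info - pecp_lmt_tagt)                 # perception attribute control unit
    state_err = pos - path[1]                                   # current state control unit
    end += beta * state_err                                     # end point state control unit
    return pos, end, det_range

pos, end, det_range = 0.0, 5.0, 1.0
for _ in range(20):                                             # iterate steps 4 to 11
    pos, end, det_range = one_iteration(pos, end, det_range, others=[2.0, 3.0])
print(round(pos, 2), round(end, 2), round(det_range, 2))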
The above embodiments are only preferred embodiments of the present invention, and the protection scope of the present invention is not limited thereby; all equivalent changes made according to the structure, shape and principle of the invention fall within the protection scope of the invention.

Claims (5)

1. A cluster control method based on perception control and perception tolerance is characterized by comprising the following steps:
step 1: the processor configures cluster information of a group, wherein the cluster information comprises the number of individuals, a termination iteration count, an end point state maximum error and individual perception attributes, and the individual perception attributes comprise a limit acceleration, a perception range, a perception tolerance and an iteration step length of a path planning method;
step 2: the processor initializes an iteration count and configures the state information of an individual, wherein the state information comprises an initial state and an end point state;
step 3: the processor substitutes the state information into a path planning method to calculate path trajectory data, wherein the path trajectory data comprises the speed information and position information of the individual in each iteration step;
step 4: the processor drives the individual to move according to the path trajectory data, and obtains a current state target value and an end point state target value of the individual by indexing the path trajectory data;
step 5: the sensor senses the self state of the individual where the sensor is located and the states of other individuals within the sensing range, and calculates the current perception information amount of the individual according to the self state and the other individual states;
step 6: the sensor inputs the self state and the perception information amount to the current state control unit, and inputs the perception information amount, the end point state target value and the perception tolerance to the perception attribute control unit;
step 7: the perception attribute control unit updates the individual perception attributes according to the perception information amount, the end point state target value and the perception tolerance;
step 8: the current state control unit predicts a state error and an error correction amount;
step 9: the perception attribute control unit inputs the updated individual perception attributes to the sensor and the end point state control unit, the current state control unit inputs the state error and the error correction value to the end point state control unit, and the end point state control unit updates the end point state according to the state error, the error correction value and the updated individual perception attributes;
step 10: the processor substitutes the updated individual perception attributes and the updated end point state into the path planning method to obtain updated path trajectory data, and increments the iteration count by one;
step 11: the processor judges whether the iteration count equals the termination iteration count; if so, the iteration is terminated, and if not, steps 4 to 11 are iterated.
2. The method according to claim 1, wherein the step 5 specifically includes:
step 51: the sensor senses the self state of the current individual and the states of other individuals in a sensing range;
step 52: calculating according to the self state and other individual states to obtain Euclidean distances between the individual and other individuals;
step 53: and calculating the perception angles occupied by other individuals according to the following formula:
[formula not reproduced in the text: φ is expressed in terms of BL and DisSefOth]
wherein φ is the perception angle occupied by the other individual, BL is the diameter of the individual, and DisSefOth is the Euclidean distance;
and accumulating the perception angles of all other individuals to obtain an overall perception angle, and dividing the overall perception angle by 2 pi to obtain the perception information amount.
3. The method according to claim 1, wherein the step 7 specifically includes:
step 71: the perception attribute control unit compares the perception information amount with the perception tolerance to obtain the difference between the two, and judges whether the difference is greater than 0; if so, the difference is kept; if not, it judges whether the absolute value of the difference is smaller than η times the perception tolerance, where η ∈ (0,1) is an interval parameter for controlling the individual to reversely adjust its perception attributes; if so, the difference is kept, and if not, the difference is set to 0;
step 72: amplifying the difference between the two and updating the individual perception attributes according to the following formula:
PecpAtrbNew=PecpAtrbCurr+α·PecpDiff,
in the formula, PecpAtrbNew is the updated individual perception attribute, PecpAtrbCurr is the current individual perception attribute, and α is a proportionality coefficient;
step 73: and pre-judging whether the current individual and other individuals collide according to the path track data and other individual states, if so, setting a collision signal to be 1, and otherwise, setting the collision signal to be 0.
4. The method according to claim 1, wherein the step 8 specifically includes:
step 81: subtracting the current state target value from the self state to obtain a state error;
step 82: judging whether the number of state errors in the state error container is equal to the error prediction sequence length; if so, obtaining an error correction value according to an error prediction model, inputting the error correction value to the end point state control unit, and initializing the state error container; otherwise, inputting the state error into the state error container, setting the error correction value to 0, and inputting the error correction value to the end point state control unit.
5. The cluster control method based on perception control and perception tolerance according to claim 1, wherein the step 9 specifically includes:
step 91: inputting a collision signal into the end point state control unit, and judging whether the collision signal is equal to 1; if so, the end point state is changed, and if not, the end point state is not changed;
step 92: comparing the module length of the self state with the module length of the end point state target value to obtain the difference between the two, and judging whether the difference is less than or equal to the end point state maximum error; if so, the end point state is changed, otherwise the end point state is not changed;
step 93: inputting the error correction amount to the end point state control unit, and updating the end point state by adopting the following formula:
StateEndNew=StateEnd+β·ErrCorrVal,
in the formula, StateEndNew is the updated end point state, StateEnd is the current end point state, and β is an error amplification coefficient;
step 94: inputting the updated limit acceleration and the updated iteration step length into the end point state control unit, which substitutes the updated limit acceleration, the updated iteration step length and the updated end point state into the path planning method.


Publications (1)

Publication Number: CN115356910A
Publication Date: 2022-11-18

Family

ID=84030955



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination