CN116774590B - Adaptive regulation and control method and system for influencing interference - Google Patents
Adaptive regulation and control method and system for influencing interference Download PDFInfo
- Publication number
- CN116774590B CN116774590B CN202311033692.5A CN202311033692A CN116774590B CN 116774590 B CN116774590 B CN 116774590B CN 202311033692 A CN202311033692 A CN 202311033692A CN 116774590 B CN116774590 B CN 116774590B
- Authority
- CN
- China
- Prior art keywords
- target
- influencing
- influence
- factor
- formula
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Links
- 238000000034 method Methods 0.000 title claims abstract description 59
- 230000003044 adaptive effect Effects 0.000 title claims description 14
- 230000008859 change Effects 0.000 claims abstract description 104
- 238000013507 mapping Methods 0.000 claims abstract description 28
- 239000013598 vector Substances 0.000 claims description 47
- 230000033001 locomotion Effects 0.000 claims description 24
- 230000003287 optical effect Effects 0.000 claims description 13
- 238000007781 pre-processing Methods 0.000 claims description 11
- 238000009499 grossing Methods 0.000 claims description 8
- 230000008569 process Effects 0.000 claims description 8
- 238000013527 convolutional neural network Methods 0.000 claims description 6
- 238000006243 chemical reaction Methods 0.000 claims description 4
- 238000010276 construction Methods 0.000 claims description 3
- 238000012545 processing Methods 0.000 abstract description 5
- 230000006870 function Effects 0.000 description 6
- 238000004364 calculation method Methods 0.000 description 3
- 230000008878 coupling Effects 0.000 description 3
- 238000010168 coupling process Methods 0.000 description 3
- 238000005859 coupling reaction Methods 0.000 description 3
- 230000006978 adaptation Effects 0.000 description 2
- 230000009286 beneficial effect Effects 0.000 description 2
- 238000004891 communication Methods 0.000 description 2
- 238000000354 decomposition reaction Methods 0.000 description 2
- 230000000694 effects Effects 0.000 description 2
- 238000012986 modification Methods 0.000 description 2
- 230000004048 modification Effects 0.000 description 2
- 230000006798 recombination Effects 0.000 description 2
- 238000005215 recombination Methods 0.000 description 2
- 241001465754 Metazoa Species 0.000 description 1
- 238000004458 analytical method Methods 0.000 description 1
- 238000013459 approach Methods 0.000 description 1
- 238000004422 calculation algorithm Methods 0.000 description 1
- 238000004590 computer program Methods 0.000 description 1
- 238000013461 design Methods 0.000 description 1
- 238000011161 development Methods 0.000 description 1
- 238000010586 diagram Methods 0.000 description 1
- 238000005516 engineering process Methods 0.000 description 1
- 230000007613 environmental effect Effects 0.000 description 1
- 238000005286 illumination Methods 0.000 description 1
- 230000006872 improvement Effects 0.000 description 1
- 230000003068 static effect Effects 0.000 description 1
Classifications
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W30/00—Purposes of road vehicle drive control systems not related to the control of a particular sub-unit, e.g. of systems using conjoint control of vehicle sub-units
- B60W30/08—Active safety systems predicting or avoiding probable or impending collision or attempting to minimise its consequences
- B60W30/095—Predicting travel path or likelihood of collision
- B60W30/0956—Predicting travel path or likelihood of collision the prediction being responsive to traffic or environmental parameters
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W40/00—Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models
- B60W40/02—Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models related to ambient conditions
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W50/00—Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
- B60W50/0097—Predicting future conditions
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05B—CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
- G05B13/00—Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion
- G05B13/02—Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion electric
- G05B13/04—Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion electric involving the use of models or simulators
- G05B13/042—Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion electric involving the use of models or simulators in which a parameter or coefficient is automatically adjusted to optimise the performance
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/764—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
- G06V20/58—Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W50/00—Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
- B60W2050/0001—Details of the control system
- B60W2050/0043—Signal treatments, identification of variables or parameters, parameter estimation or state estimation
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Evolutionary Computation (AREA)
- Multimedia (AREA)
- Automation & Control Theory (AREA)
- Artificial Intelligence (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Mechanical Engineering (AREA)
- Medical Informatics (AREA)
- Software Systems (AREA)
- Transportation (AREA)
- Health & Medical Sciences (AREA)
- Computing Systems (AREA)
- Databases & Information Systems (AREA)
- General Health & Medical Sciences (AREA)
- Mathematical Physics (AREA)
- Human Computer Interaction (AREA)
- Image Analysis (AREA)
Abstract
The invention provides a self-adaptive regulation and control method and a system for influencing interference, which relate to the technical field of image processing, and the method comprises the following steps: acquiring continuous frame images of the surrounding environment of a target, and calculating the change trend of relative pixel points of influence factors and the target under the continuous frame images; constructing a mapping model corresponding to the target according to the direction and the change rate of the change trend of the relative pixel points; predicting the position of the influence factors according to the mapping model, and judging whether influence risks exist or not; if the influence risk exists, planning the direction of the change trend of the target pixel point according to the mapping model; and carrying out change rate planning after the change trend of the target pixel point subjected to direction planning, and outputting corresponding control parameters according to the change trend of the target pixel point subjected to change rate planning. The method can evaluate and predict the influence caused by various influence factors in the environment in real time, thereby effectively avoiding the influence factors and being suitable for various complex environments.
Description
Technical Field
The invention relates to the technical field of image processing, in particular to a self-adaptive regulation and control method and system for influencing interference.
Background
With the development of technology, image processing and computer vision are often used to process tracking and identifying objects.
In vehicle problems, there are various dynamic and static influencing factors, such as pedestrians, other vehicles, buildings, animals, etc. The traditional influence factor avoidance control method is mainly based on sensor data, such as radar, laser radar and the like, to avoid influence factors.
However, these methods have high difficulty in predicting the influencing factors, have poor deep understanding of the environment, and may not be capable of efficiently and accurately avoiding the influencing factors.
Disclosure of Invention
The invention aims to solve the technical problem that future information of influence factors can be predicted, and the safety of driving personnel is ensured.
In order to solve the technical problems, the technical scheme of the invention is as follows:
in a first aspect, a method for adaptive regulation and control for interference, the method comprising:
acquiring continuous frame images of the surrounding environment of a target, and calculating the change trend of relative pixel points of influence factors and the target under the continuous frame images;
constructing a mapping model corresponding to the target according to the direction and the change rate of the change trend of the relative pixel points;
predicting the position of the influence factors according to the mapping model, and judging whether influence risks exist or not;
if the influence risk exists, planning the direction of the change trend of the target pixel point according to the mapping model;
and carrying out change rate planning after the change trend of the target pixel point subjected to direction planning, and outputting corresponding control parameters according to the change trend of the target pixel point subjected to change rate planning.
Further, acquiring continuous frame images of the surrounding environment of the target, and calculating the relative pixel point change trend of the influencing factors and the target under the continuous frame images, wherein the method comprises the following steps:
by the formula:preprocessing continuous frame images of the surrounding environment of the target to obtain preprocessed continuous frame images of the surrounding environment of the targetThe image, wherein,for successive frame images before preprocessing,for the preprocessed successive frame images,as a function of the pre-processing function,as the number of weights to be used,the weight is preset for each;
detecting and identifying influencing factors in the preprocessed continuous frame images of the surrounding environment of the target;
by the formula:calculating gradient and optical flow of the influence factor pixel points in the preprocessed continuous frame images of the surrounding environment of the target, obtaining the change trend of the influence factor relative to the pixel points of the target under the continuous frame images, wherein,as influencing factorsThe trend of the pixel point change relative to the target under the continuous frame images,andthe preprocessed images at time t and time t-1 respectively,to at the same timeThe gradient of the lower P-point,the pixel value of the P point in the preprocessed image at the time t-1 is the value of the preprocessed image at the P point,is a regularization parameter.
Further, detecting and identifying influencing factors in the preprocessed continuous frame images of the surrounding environment of the target includes:
by the formula:extracting features of the preprocessed continuous frame images of the surrounding environment of the target, wherein F is an image preprocessed by a convolutional neural network CNNExtracting the obtained characteristic representation;
by the formula:detecting influence factors to obtain a set of influence factors with the highest possibility, wherein,for the set of influencing factors detected at time t,representing the probability of the influence factor set O under the condition of F for a given feature;
by the formula:performing influence factor recognition to obtain a recognition result, wherein,for the result of the influence factor recognition by the classifier,is a classifier.
Further, constructing a mapping model corresponding to the target according to the direction and the change rate of the change trend of the relative pixel points, including:
by the formula:calculating gray values between two adjacent frames of images, and representing the motion condition of the relative pixel point change trend through the gray values, wherein,for the gray value at pixel point p in the previous frame image,for being positioned at pixel point in current frame imageThe gray value at which the color is to be changed,an optical flow vector estimated at position p;
obtaining the geographic position of a target, constructing a grid image by taking the geographic position as the center, mapping the geographic position to a corresponding position on the grid image, and drawing the direction and the change rate of the influence factors on the grid image by the direction and the change rate of the change trend of the relative pixel points;
and updating the corresponding positions of the influence factors in the grid image according to the characterized motion conditions of the relative pixel point change trend, and smoothing the updating process of the grid image by using a filter.
Further, predicting the position of the influencing factor according to the mapping model, and judging whether the influencing risk exists, including:
by the formula:predicting the position of influencing factors in the next frame grid image, whichIn,to predict the position of the influencing factors in the next frame grid image,to influence the x-axis coordinates in the grid image at the current time,to influence the y-axis coordinates in the grid image at the current time of the factor,to influence the x-component of the direction and speed of motion in the grid image at the current moment of the factor,to influence the y-component of the direction and speed of motion in the grid image at the current moment of the factor,is the time interval between two frames;
by the formula:calculating the distance between the target and the influencing factorWherein, the method comprises the steps of, wherein,to influence the x-coordinate of the predicted position of the factor in the next frame,for the x-coordinate of the current moment of the object,to influence the y-coordinate of the predicted position of the factor in the next frame,is the object ofY coordinate of the current moment of (2);
by the formula:the relative speed between the target and the influencing factor is calculated, wherein,in order to influence the optical flow vector of the factor,as a velocity vector of the object,is the relative speed between the target and the influencing factor;
setting a distance threshold and a speed threshold collision, if< threshold, andjudging that influence risks exist if the risk is greater than the collision; otherwise, the risk is not influenced.
Further, if there is an influence risk, planning a direction of a change trend of the target pixel point according to the mapping model, including:
obtaining a target end point position, and passing through the formula:calculating an endpoint vectorWherein, the method comprises the steps of, wherein,as the end point position vector, the position vector,the current position vector of the target;
by the formula:calculating azimuth angle of influencing factors relative to target positionWherein, the method comprises the steps of, wherein,in order to influence the current y-coordinate of the factor,for the current y-coordinate of the object,in order to influence the current x-coordinate of the factor,is the current x coordinate of the target;
by the formula:will azimuth angleConversion into direction vectors;
By the formula:obtaining the change trend of the target pixel point after direction planningWherein, the method comprises the steps of, wherein,is the endpoint vector.
Further, the method for planning the change rate after the change trend of the target pixel point after the direction planning comprises the following steps:
by the formula:calculate the currentThe distance D of the influencing factor from the target, wherein,in order to influence the current position vector of the factor,the current position vector of the target;
setting an adjusting parameter k, and passing through the formula:obtaining the change trend of the target pixel point after the change rate planningWherein, the method comprises the steps of, wherein,the smoothing factor is a small positive number, so as to avoid zero error,and obtaining the change trend of the target pixel point after the direction planning.
In a second aspect, an adaptive regulation and control system for interference, comprising:
the acquisition module is used for acquiring continuous frame images of the surrounding environment of the target and calculating the change trend of relative pixel points of the influence factors and the target under the continuous frame images;
the construction module is used for constructing a mapping model corresponding to the target according to the direction and the change rate of the change trend of the relative pixel points;
the judging module is used for predicting the position of the influence factor according to the mapping model and judging whether the influence risk exists or not;
the planning module is used for planning the direction of the change trend of the target pixel point according to the mapping model if the influence risk exists;
and the output module is used for carrying out change rate planning on the change trend of the target pixel point after the direction planning, and outputting corresponding control parameters according to the change trend of the target pixel point after the change rate planning.
In a third aspect, a computing device includes:
one or more processors;
and a storage device for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the above-described methods.
In a fourth aspect, a computer readable storage medium stores a program that when executed by a processor implements the above method.
The scheme of the invention at least comprises the following beneficial effects:
according to the scheme, the continuous frame images are acquired, so that the influence caused by various influence factors in the environment can be estimated and predicted in real time, and further, according to the pixel point change trend of the factors, a mapping model can be constructed and future influence factors can be predicted, so that whether risks exist or not can be judged in time; thereby effectively avoiding influence factors and being applicable to various complex environments.
Drawings
Fig. 1 is a schematic flow chart of an adaptive regulation method for interference influence according to an embodiment of the present invention.
Fig. 2 is a schematic diagram of an adaptive regulation system for interference influence according to an embodiment of the present invention.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
As shown in fig. 1, an embodiment of the present invention proposes an adaptive regulation method for interference, which includes:
step 1, acquiring continuous frame images of the surrounding environment of a target, and calculating the change trend of relative pixel points of influencing factors and the target under the continuous frame images;
step 2, constructing a mapping model corresponding to the target according to the direction and the change rate of the change trend of the relative pixel points;
step 3, predicting the position of the influence factor according to the mapping model, and judging whether an influence risk exists;
step 4, if the influence risk exists, planning the direction of the change trend of the target pixel point according to the mapping model;
and 5, carrying out change rate planning after the change trend of the target pixel point subjected to the direction planning, and outputting corresponding control parameters according to the change trend of the target pixel point subjected to the change rate planning.
In a preferred embodiment of the present invention, the step 1 further includes:
step 11, through the formula:preprocessing continuous frame images of the surrounding environment of the target to obtain preprocessed continuous frame images of the surrounding environment of the target, wherein,for successive frame images before preprocessing,for the preprocessed successive frame images,as a function of the pre-processing function,as the number of weights to be used,the weight is preset for each;
step 12, detecting and identifying influencing factors in the preprocessed continuous frame images of the surrounding environment of the target;
step 13By the formula:calculating gradient and optical flow of the influence factor pixel points in the preprocessed continuous frame images of the surrounding environment of the target, obtaining the change trend of the influence factor relative to the pixel points of the target under the continuous frame images, wherein,as influencing factorsThe trend of the pixel point change relative to the target under the continuous frame images,andthe preprocessed images at time t and time t-1 respectively,to at the same timeThe gradient of the lower P-point,the pixel value of the P point in the preprocessed image at the time t-1 is the value of the preprocessed image at the P point,is a regularization parameter.
In the above step 11, the input continuous frame images may be subjected to noise cancellation, image quality improvement, and the like by a specific preprocessing formula to optimize the image quality. Meanwhile, the preprocessing process can take time continuity among images into consideration, so that dynamic change of the environment is better captured.
In step 12 described above, the preprocessed image is more easily detected and identified for various influencing factors. These influencing factors may include road conditions, pedestrians, other vehicles, etc. Thus, the driving environment can be more accurately understood, and accurate information can be provided for subsequent obstacle avoidance decisions.
In the above step 13, by calculating the gradient and the optical flow of the image, the dynamic change information of the influencing factors in the environment can be obtained. For example, by gradient calculation, edge and shape information of the object can be obtained to evaluate the size, shape and pose of the object; through optical flow calculation, the motion information of the object can be obtained so as to predict the motion direction and speed of the object. In this way, possible influences of influencing factors on the driving of the vehicle, such as possible collision risks, can be predicted.
In a preferred embodiment of the present invention, the step 12 further includes:
step 121, by the formula:extracting features of the preprocessed continuous frame images of the surrounding environment of the target, wherein F is an image preprocessed by a convolutional neural network CNNExtracting the obtained characteristic representation;
step 122, by the formula:detecting influence factors to obtain a set of influence factors with the highest possibility, wherein,for the set of influencing factors detected at time t,representing the probability of the influence factor set O under the condition of F for a given feature;
step 123, by the formula:performing influence factor recognition to obtain a recognition result, wherein,for the result of the influence factor recognition by the classifier,is a classifier.
In a preferred embodiment of the present invention, the step 2 further includes:
step 21, by the formula:calculating gray values between two adjacent frames of images, and representing the motion condition of the relative pixel point change trend through the gray values, wherein,for the gray value at pixel point p in the previous frame image,for being positioned at pixel point in current frame imageThe gray value at which the color is to be changed,an optical flow vector estimated at position p;
step 22, obtaining the geographic position of the target, constructing a grid image by taking the geographic position as a center, mapping the geographic position to a corresponding position on the grid image, and drawing the direction and the change rate of the influence factors on the grid image by the direction and the change rate of the change trend of the relative pixel points;
and step 23, updating the corresponding positions of the influence factors in the grid image according to the motion condition of the represented relative pixel point change trend, and smoothing the updating process of the grid image by using a filter.
In the above step 21, by calculating the difference in gray value between the adjacent two frames of images, more accurate motion information can be acquired. The difference of gray values can reflect the movement speed and direction of the influencing factors in the image, so that the movement condition of the influencing factors can be described more accurately, and the influencing factors can be tracked more accurately.
In step 22, the geographic location of the target and the movement of the influencing factors are combined to create a gridded image, so that the environmental information can be displayed more intuitively and the subsequent processing and analysis are facilitated. On the grid image, the direction and the change rate of the influencing factors can be represented by the direction and the length of the arrow, so that the movement condition of the influencing factors is clear.
In step 23, the grid image is filtered, so that noise can be suppressed and the position update of the influencing factors can be smoother. The method is beneficial to eliminating errors caused by factors such as image shake, illumination change and the like, and improving the tracking accuracy of motion conditions of influencing factors.
In a preferred embodiment of the present invention, the step 3 further includes:
step 31, by the formula:predicting the position of the influencing factor in the next frame of grid image, wherein,to predict the position of the influencing factors in the next frame grid image,to influence the x-axis coordinates in the grid image at the current time,to influence the y-axis coordinates in the grid image at the current time of the factor,to influence the x-component of the direction and speed of motion in the grid image at the current moment of the factor,to influence the y-component of the direction and speed of motion in the grid image at the current moment of the factor,is the time interval between two frames;
step 32, by the formula:calculating the distance between the target and the influencing factorWherein, the method comprises the steps of, wherein,to influence the x-coordinate of the predicted position of the factor in the next frame,for the x-coordinate of the current moment of the object,to influence the y-coordinate of the predicted position of the factor in the next frame,the y coordinate of the current moment of the target;
step 33, by the formula:the relative speed between the target and the influencing factor is calculated, wherein,in order to influence the optical flow vector of the factor,as a velocity vector of the object,is the relative speed between the target and the influencing factor;
step 34, distance threshold and speed threshold collision are set,if it is< threshold, andjudging that influence risks exist if the risk is greater than the collision; otherwise, the risk is not influenced.
In step 31, the position of the influencing factor in the next frame is predicted. This step may help the system learn in advance about future changes in influencing factors in order to make adaptive measures in advance.
In step 32 above, the distance between the target and the influencing factor is calculated. The purpose of this step is to determine if the objects are close enough that a collision may occur.
In step 33 above, the relative speed between the target and the influencing factor is calculated. This step can help the system better understand the dynamic relationship between the two to determine if there is a risk of collision.
In step 34 above, distance and speed thresholds are set to determine if there is a risk of collision. If the distance between the target and the influencing factor is smaller than the threshold value and the relative speed is larger than the threshold value, judging that collision risk exists. This step is based on a preset safety distance and safety speed to determine the risk of collision.
In a preferred embodiment of the present invention, the step 4 further includes:
step 41, obtaining a target end point position, and passing through the formula:calculating an endpoint vectorWherein, the method comprises the steps of, wherein,as the end point position vector, the position vector,the current position vector of the target;
step 42, by the formula:calculating azimuth angle of influencing factors relative to target positionWherein, the method comprises the steps of, wherein,in order to influence the current y-coordinate of the factor,for the current y-coordinate of the object,in order to influence the current x-coordinate of the factor,is the current x coordinate of the target;
step 43, by the formula:will azimuth angleConversion into direction vectors;
Step 44, by the formula:obtaining the change trend of the target pixel point after direction planningWherein, the method comprises the steps of, wherein,is the endpoint vector.
In the above step 41, the direction in which the target should travel can be obtained by calculating the vector difference between the target end position and the current position. This is a key step in the path planning, which ensures that the target is traveling in the correct direction.
In step 42, the approximate direction of the influencing factor can be known by calculating the azimuth angle of the influencing factor relative to the target position. This may help avoid possible influencing factors so as not to influence the travel of the target.
In the above step 43, the azimuth information may be converted into vector information that can be directly used by converting the azimuth into a direction vector. The information in vector form is more intuitive and easier to process.
In the step 44, the direction plan of the change trend of the target pixel point is obtained by comparing the endpoint vector and the influence factor direction vector. This helps guide the target around influencing factors, driving towards the target end point.
In a preferred embodiment of the present invention, the step 5 further includes:
step 51, by the formula:calculating the distance D between the current influencing factors and the target, wherein,in order to influence the current position vector of the factor,the current position vector of the target;
step 52, setting the adjustment parameter k, and passing through the formula:obtaining the change trend of the target pixel point after the change rate planningWherein, the method comprises the steps of, wherein,the smoothing factor is a small positive number, so as to avoid zero error,and obtaining the change trend of the target pixel point after the direction planning.
And step 53, outputting corresponding control parameters by obtaining the change trend of the target pixel point after the change rate planning.
In step 51 described above, the relative positions of the target and the influencing factors can be obtained by calculating the distances between the influencing factors and the target. This distance can be used to evaluate the extent to which the influencing factor affects the target. If the distance is too close, it may be necessary to immediately adjust the direction or speed of movement of the target.
In the above step 52, the movement rate of the target can be adjusted by setting the adjustment parameter k. The value of the adjustment parameter k may be set according to actual needs, for example, when the target approaches the influencing factor, the moving speed of the target may be reduced by increasing the value of k to avoid collision.
Also, in the above-described step 52, by introducing the smoothing factor, it is possible to avoid zero removal error in calculating the target moving speed, which can ensure the stability and accuracy of the calculation.
As shown in fig. 2, the present invention further provides an adaptive regulation system for interference, which includes:
the acquisition module 10 is used for acquiring continuous frame images of the surrounding environment of the target and calculating the change trend of relative pixel points of the influence factors and the target under the continuous frame images;
the construction module 20 is configured to construct a mapping model corresponding to the target according to the direction and the change rate of the change trend of the relative pixel point;
the judging module 30 is configured to predict the position of the influencing factor according to the mapping model, and judge whether there is an influencing risk;
the planning module 40 is configured to plan a direction of a change trend of the target pixel according to the mapping model if there is an influence risk;
the output module 50 is configured to perform rate-of-change planning after the trend of the target pixel point after the direction planning, and output corresponding control parameters according to the trend of the target pixel point after the rate-of-change planning.
Embodiments of the present invention also provide a computing device comprising: a processor, a memory storing a computer program which, when executed by the processor, performs the method as described above. All the implementation manners in the method embodiment are applicable to the embodiment, and the same technical effect can be achieved.
Embodiments of the present invention also provide a computer-readable storage medium storing instructions that, when executed on a computer, cause the computer to perform a method as described above. All the implementation manners in the method embodiment are applicable to the embodiment, and the same technical effect can be achieved.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described systems, apparatuses and units may refer to corresponding procedures in the foregoing method embodiments, and are not repeated herein.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus and method may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of the units is merely a logical function division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present invention may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention may be embodied essentially or in a part contributing to the prior art or in a part of the technical solution, in the form of a software product stored in a storage medium, comprising several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a usb disk, a removable hard disk, a ROM, a RAM, a magnetic disk, or an optical disk, etc.
Furthermore, it should be noted that in the apparatus and method of the present invention, it is apparent that the components or steps may be disassembled and/or assembled. Such decomposition and/or recombination should be considered as equivalent aspects of the present invention. Also, the steps of performing the series of processes described above may naturally be performed in chronological order in the order of description, but are not necessarily performed in chronological order, and some steps may be performed in parallel or independently of each other. It will be appreciated by those of ordinary skill in the art that all or any of the steps or components of the methods and apparatus of the present invention may be implemented in hardware, firmware, software, or a combination thereof in any computing device (including processors, storage media, etc.) or network of computing devices, as would be apparent to one of ordinary skill in the art after reading this description of the invention.
The object of the invention can thus also be achieved by running a program or a set of programs on any computing device. The computing device may be a well-known general purpose device. The object of the invention can thus also be achieved by merely providing a program product containing program code for implementing said method or apparatus. That is, such a program product also constitutes the present invention, and a storage medium storing such a program product also constitutes the present invention. It is apparent that the storage medium may be any known storage medium or any storage medium developed in the future. It should also be noted that in the apparatus and method of the present invention, it is apparent that the components or steps may be disassembled and/or assembled. Such decomposition and/or recombination should be considered as equivalent aspects of the present invention. The steps of executing the series of processes may naturally be executed in chronological order in the order described, but are not necessarily executed in chronological order. Some steps may be performed in parallel or independently of each other.
While the foregoing is directed to the preferred embodiments of the present invention, it will be appreciated by those skilled in the art that various modifications and adaptations can be made without departing from the principles of the present invention, and such modifications and adaptations are intended to be comprehended within the scope of the present invention.
Claims (7)
1. An adaptive regulation method for influencing interference, the method comprising: acquiring continuous frame images of the surrounding environment of a target, and calculating the change trend of relative pixel points of influence factors and the target under the continuous frame images; constructing a mapping model corresponding to the target according to the direction and the change rate of the change trend of the relative pixel points; predicting the position of the influence factors according to the mapping model, and judging whether the influence factors exist or notAt risk of impact, including: by the formula:predicting the position of the influencing factor in the next frame grid image, wherein +.>For predicting the position of the influencing factors in the next frame grid image +.>For influencing the x-axis coordinates in the grid image at the current time instant +.>For influencing the y-axis coordinate in the grid image at the current time, the following is +.>For influencing the x-component of the direction and the speed of movement in the grid image at the current moment +.>For influencing the y-component of the direction and the speed of movement in the grid image at the current moment +.>Is the time interval between two frames; by the formula: />Calculating the distance between the target and the influencing factor +.>Wherein->For influencing the x-coordinate of the predicted position of the factor in the next frame +.>For the x-coordinate of the current moment of the object, +.>For influencing the y-coordinate of the predicted position of the factor in the next frame +.>The y coordinate of the current moment of the target; by the formula: />The relative speed between the target and the influencing factor is calculated, wherein,optical flow vector for influencing factors, +.>For the velocity vector of the target, +.>Is the relative speed between the target and the influencing factor; setting a distance threshold and a speed threshold collision, if +>< threshold, and->Judging that influence risks exist if the risk is greater than the collision; otherwise, the risk is not affected; if the influence risk exists, planning the direction of the change trend of the target pixel point according to the mapping model, wherein the method comprises the following steps: obtaining a target end point position, and passing through the formula:calculating an endpoint vector +.>Wherein, the method comprises the steps of, wherein,/>is an end position vector>The current position vector of the target; by the formula: />Calculating the azimuth angle +_of the influencing factor relative to the target position>Wherein->For the current y-coordinate of influencing factors, +.>For the current y-coordinate of the object, +.>For the current x-coordinate of influencing factors, +.>Is the current x coordinate of the target; by the formula: />Azimuth angle +.>Conversion into a direction vector>The method comprises the steps of carrying out a first treatment on the surface of the By the formula:obtaining the change trend +.>Wherein->Is an endpoint vector; performing change rate planning after the change trend of the target pixel point after direction planning, and outputting corresponding control parameters according to the change trend of the target pixel point after change rate planning, wherein the control parameters comprise: by the formula:calculating the distance D between the current influencing factor and the target, wherein +.>For the current position vector of the influencing factor, +.>The current position vector of the target; setting an adjusting parameter k, and passing through the formula:obtaining the change trend +.>Wherein->Is a smoothing factor, takes a small positive number for avoiding zero errors, ++>And obtaining the change trend of the target pixel point after the direction planning.
2. The adaptive control method for interference according to claim 1, wherein continuous frame images of the surrounding of the target are obtained, and the trend of the change of the relative pixel points of the influence factors and the target under the continuous frame images is calculatedComprising: by the formula:preprocessing continuous frame images of the surrounding environment of the target to obtain preprocessed continuous frame images of the surrounding environment of the target, wherein ∈>For consecutive frame images before preprocessing, +.>For the preprocessed successive frame images +.>For preprocessing function, ++>For the number of weights, +.>The weight is preset for each; detecting and identifying influencing factors in the preprocessed continuous frame images of the surrounding environment of the target; by the formula:calculating gradient and optical flow of the influence factor pixel points in the preprocessed continuous frame images of the surrounding environment of the target, obtaining the change trend of the influence factor relative to the pixel points of the target under the continuous frame images, wherein,for influencing factors->Relative pixel point change trend of the continuous frame images and the target is +.>And->The pre-processed images at time t and t-1, respectively, < >>Is at->Gradient of lower P point, ++>The pixel value of the P point in the preprocessed image at the time of t-1 is the value of the preprocessed image at the P point of the pixel, </u >>Is a regularization parameter.
3. The adaptive regulation and control method for influence interference according to claim 2, wherein detecting and identifying influence factors in the preprocessed continuous frame images of the target surrounding includes: by the formula:extracting features of the preprocessed continuous frame images of the surrounding environment of the target, wherein F is +.f from the preprocessed images through a convolutional neural network CNN>Extracting the obtained characteristic representation; by the formula: />Detecting influence factors to obtain a set of influence factors with maximum possibility, wherein ∈10>For the set of influencing factors detected at time t, < +.>Representing the probability of the influence factor set O under the condition of F for a given feature; by the formula: />Performing influence factor recognition to obtain recognition result, wherein ++>For the result of the influence factor recognition by the classifier,is a classifier.
4. The adaptive control method for interference according to claim 3, wherein constructing a mapping model corresponding to a target according to the direction and the change rate of the relative pixel change trend comprises: by the formula:calculating gray values between two adjacent frames of images, and representing the motion condition of the relative pixel point change trend through the gray values, wherein +_>For the gray value at pixel point p in the previous frame image,for being located at pixel point in the current frame image +.>Gray value at>An optical flow vector estimated at position p; obtaining the geographic position of a target, and constructing a grid chart by taking the geographic position as the centerMapping the geographic position to a corresponding position on the grid image, and drawing the direction and the change rate of the influence factors on the grid image according to the direction and the change rate of the change trend of the relative pixel points; and updating the corresponding positions of the influence factors in the grid image according to the characterized motion conditions of the relative pixel point change trend, and smoothing the updating process of the grid image by using a filter.
5. An adaptive regulation system for interference, comprising: the acquisition module is used for acquiring continuous frame images of the surrounding environment of the target and calculating the change trend of relative pixel points of the influence factors and the target under the continuous frame images; the construction module is used for constructing a mapping model corresponding to the target according to the direction and the change rate of the change trend of the relative pixel points; the judging module is used for predicting the position of the influence factor according to the mapping model and judging whether the influence risk exists or not, and comprises the following steps: by the formula:predicting the position of the influencing factor in the next frame grid image, wherein +.>For predicting the position of the influencing factors in the next frame grid image +.>For influencing the x-axis coordinates in the grid image at the current time instant +.>For influencing the y-axis coordinate in the grid image at the current time, the following is +.>For influencing the x-component of the direction and the speed of movement in the grid image at the current moment +.>To influence the y-component of the direction and speed of motion in the grid image at the current moment of the factor,is the time interval between two frames; by the formula: />Calculating the distance between the target and the influencing factor +.>Wherein->For influencing the x-coordinate of the predicted position of the factor in the next frame +.>For the x-coordinate of the current moment of the object, +.>To influence the y-coordinate of the predicted position of the factor in the next frame,the y coordinate of the current moment of the target; by the formula: />Calculating a relative speed between the target and the influencing factor, wherein +.>Optical flow vector for influencing factors, +.>For the velocity vector of the target, +.>Between the goal and the influencing factorsIs a relative velocity of (2); setting a distance threshold and a speed threshold collision, if< threshold, and->Judging that influence risks exist if the risk is greater than the collision; otherwise, the risk is not affected; the planning module is configured to plan, if there is an influence risk, a direction of a change trend of the target pixel point according to the mapping model, including: obtaining a target end point position, and passing through the formula: />Calculating an endpoint vector +.>Wherein->Is an end position vector>The current position vector of the target; by the formula:calculating the azimuth angle +_of the influencing factor relative to the target position>Wherein->For the current y-coordinate of influencing factors, +.>For the current y-coordinate of the object,for the current x-coordinate of influencing factors, +.>Is the current x coordinate of the target; by the formula:azimuth angle +.>Conversion into a direction vector>The method comprises the steps of carrying out a first treatment on the surface of the By the formula:obtaining the change trend +.>Wherein->Is an endpoint vector; the output module is used for carrying out change rate planning after the change trend of the target pixel point after the direction planning, outputting corresponding control parameters according to the change trend of the target pixel point after the change rate planning, and comprises the following steps: by the formula:calculating the distance D between the current influencing factor and the target, wherein +.>For the current position vector of the influencing factor, +.>The current position vector of the target; setting an adjusting parameter k, and passing through the formula:obtaining the change trend +.>Wherein->Is a smoothing factor, takes a small positive number for avoiding zero errors, ++>And obtaining the change trend of the target pixel point after the direction planning.
6. A computing device, comprising: one or more processors; one or more processors; storage means for storing one or more programs which when executed by the one or more processors cause the one or more processors to implement the method of any of claims 1-4.
7. A computer readable storage medium, characterized in that the computer readable storage medium has stored therein a program which, when executed by a processor, implements the method according to any of claims 1-4.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311033692.5A CN116774590B (en) | 2023-08-17 | 2023-08-17 | Adaptive regulation and control method and system for influencing interference |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311033692.5A CN116774590B (en) | 2023-08-17 | 2023-08-17 | Adaptive regulation and control method and system for influencing interference |
Publications (2)
Publication Number | Publication Date |
---|---|
CN116774590A CN116774590A (en) | 2023-09-19 |
CN116774590B true CN116774590B (en) | 2023-11-07 |
Family
ID=88006669
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202311033692.5A Active CN116774590B (en) | 2023-08-17 | 2023-08-17 | Adaptive regulation and control method and system for influencing interference |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116774590B (en) |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH10187930A (en) * | 1996-12-19 | 1998-07-21 | Hitachi Ltd | Running environment recognizing device |
CN105933615A (en) * | 2016-07-04 | 2016-09-07 | 北方民族大学 | Unmanned aerial vehicle based image acquisition system, image acquisition method and unmanned aerial vehicle |
CN108090919A (en) * | 2018-01-02 | 2018-05-29 | 华南理工大学 | Improved kernel correlation filtering tracking method based on super-pixel optical flow and adaptive learning factor |
CN111506058A (en) * | 2019-01-31 | 2020-08-07 | 斯特拉德视觉公司 | Method and device for planning short-term path of automatic driving through information fusion |
CN111597961A (en) * | 2020-05-13 | 2020-08-28 | 中国科学院自动化研究所 | Moving target track prediction method, system and device for intelligent driving |
CN111814590A (en) * | 2020-06-18 | 2020-10-23 | 浙江大华技术股份有限公司 | Personnel safety state monitoring method, equipment and computer readable storage medium |
CN115712306A (en) * | 2022-09-23 | 2023-02-24 | 彭兵兵 | Unmanned aerial vehicle navigation method for multi-machine cooperation target tracking |
CN116135640A (en) * | 2021-11-18 | 2023-05-19 | 广州汽车集团股份有限公司 | Anti-collision early warning method and system for vehicle and vehicle |
CN116543368A (en) * | 2023-05-05 | 2023-08-04 | 河南大学 | Image processing method for indoor environment and collision-free system |
-
2023
- 2023-08-17 CN CN202311033692.5A patent/CN116774590B/en active Active
Patent Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH10187930A (en) * | 1996-12-19 | 1998-07-21 | Hitachi Ltd | Running environment recognizing device |
CN105933615A (en) * | 2016-07-04 | 2016-09-07 | 北方民族大学 | Unmanned aerial vehicle based image acquisition system, image acquisition method and unmanned aerial vehicle |
CN108090919A (en) * | 2018-01-02 | 2018-05-29 | 华南理工大学 | Improved kernel correlation filtering tracking method based on super-pixel optical flow and adaptive learning factor |
CN111506058A (en) * | 2019-01-31 | 2020-08-07 | 斯特拉德视觉公司 | Method and device for planning short-term path of automatic driving through information fusion |
CN111597961A (en) * | 2020-05-13 | 2020-08-28 | 中国科学院自动化研究所 | Moving target track prediction method, system and device for intelligent driving |
CN111814590A (en) * | 2020-06-18 | 2020-10-23 | 浙江大华技术股份有限公司 | Personnel safety state monitoring method, equipment and computer readable storage medium |
CN116135640A (en) * | 2021-11-18 | 2023-05-19 | 广州汽车集团股份有限公司 | Anti-collision early warning method and system for vehicle and vehicle |
CN115712306A (en) * | 2022-09-23 | 2023-02-24 | 彭兵兵 | Unmanned aerial vehicle navigation method for multi-machine cooperation target tracking |
CN116543368A (en) * | 2023-05-05 | 2023-08-04 | 河南大学 | Image processing method for indoor environment and collision-free system |
Also Published As
Publication number | Publication date |
---|---|
CN116774590A (en) | 2023-09-19 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109635685B (en) | Target object 3D detection method, device, medium and equipment | |
CN106462976B (en) | Method for tracking shape in scene observed by asynchronous sensor | |
WO2021134296A1 (en) | Obstacle detection method and apparatus, and computer device and storage medium | |
JP2021523443A (en) | Association of lidar data and image data | |
US10867211B2 (en) | Method for processing a stream of video images | |
CN107609486A (en) | To anti-collision early warning method and system before a kind of vehicle | |
Zhu et al. | Moving object detection with deep CNNs | |
CN111932583A (en) | Space-time information integrated intelligent tracking method based on complex background | |
CN113052873B (en) | Single-target tracking method for on-line self-supervision learning scene adaptation | |
JP5716671B2 (en) | Runway recognition device, vehicle, runway recognition method, and runway recognition program | |
CN106780560B (en) | Bionic robot fish visual tracking method based on feature fusion particle filtering | |
CN112329832B (en) | Passive positioning target track data enhancement method and system based on deep convolution generation countermeasure network | |
CN116611603B (en) | Vehicle path scheduling method, device, computer and storage medium | |
WO2023093306A1 (en) | Vehicle lane change control method and apparatus, electronic device, and storage medium | |
CN116664620A (en) | Picture dynamic capturing method and related device based on tracking system | |
CN113608663A (en) | Fingertip tracking method based on deep learning and K-curvature method | |
EP3654276B1 (en) | Analysis device, analysis method, and program | |
CN118238832B (en) | Intelligent driving method and device based on visual perception | |
CN104219488A (en) | Method and device of generating target image as well as video monitoring system | |
CN117311393A (en) | Unmanned aerial vehicle autonomous flight path planning method and system | |
CN112711255A (en) | Mobile robot obstacle avoidance method, control device and storage medium | |
US11080562B1 (en) | Key point recognition with uncertainty measurement | |
CN116774590B (en) | Adaptive regulation and control method and system for influencing interference | |
CN113643355A (en) | Method and system for detecting position and orientation of target vehicle and storage medium | |
CN116665097A (en) | Self-adaptive target tracking method combining context awareness |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |