CN113721449A - Multi-rotor-wing aircraft control system and method - Google Patents

Multi-rotor-wing aircraft control system and method

Info

Publication number
CN113721449A
CN113721449A (application CN202110009800.XA)
Authority
CN
China
Prior art keywords
target
image
tracking
model
frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110009800.XA
Other languages
Chinese (zh)
Inventor
莫雳
张棚柯
宋韬
范世鹏
林德福
李斌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Institute of Technology BIT
Original Assignee
Beijing Institute of Technology BIT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Institute of Technology BIT filed Critical Beijing Institute of Technology BIT
Priority to CN202110009800.XA priority Critical patent/CN113721449A/en
Publication of CN113721449A publication Critical patent/CN113721449A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G05: CONTROLLING; REGULATING
    • G05B: CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B11/00: Automatic controllers
    • G05B11/01: Automatic controllers electric
    • G05B11/36: Automatic controllers electric with provision for obtaining particular characteristics, e.g. proportional, integral, differential
    • G05B11/42: Automatic controllers electric with provision for obtaining particular characteristics, e.g. proportional, integral, differential for obtaining a characteristic which is both proportional and time-dependent, e.g. P.I., P.I.D.

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Automation & Control Theory (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The invention discloses a multi-rotor aircraft control system and method comprising a measurement module, a processing module and an information transmission module. The processing module comprises a vision processor and a flight control main chip, and a detection model, a tracking model and an adjustment model are arranged in the vision processor. After a mission target is determined, the detection model frames suggested regions from subsequent images and the tracking model frames a tracking selection box from the same images. The adjustment model obtains the degree of overlap between the suggested regions and the tracking selection box and controls the rotation angular velocity of the photoelectric pod accordingly, while the flight control main chip solves the normal expected overload in the line-of-sight frame in real time and thereby controls the rotorcraft to interact with, or land on, the mission target. The system and method recognize targets quickly, operate safely and stably, and are particularly suitable for tracking, interacting with and landing on fast-moving targets.

Description

Multi-rotor-wing aircraft control system and method
Technical Field
The invention relates to a multi-rotor aircraft control system and control method, and belongs to the field of rotorcraft.
Background
Multi-rotor aircraft have the advantages of small size, light weight, simple structure, flexible flight control, and strong adaptability to complex terrain and narrow spaces. In recent years, with the development of microelectronics and microprocessor technology, multi-rotor aircraft have been widely applied in military and civil fields such as battlefield reconnaissance and strike, power-line inspection, post-disaster search and rescue, indoor and outdoor reconnaissance, express delivery, and agricultural plant protection.
With their widespread use, the demands placed on multi-rotor aircraft keep growing: missions become progressively harder to execute, and mission environments grow more complex and hostile.
Existing multi-rotor control systems suffer from insufficient camera precision, unstable GPS data and unreliable sensor data, which limits the rotorcraft's environmental perception, autonomy and execution capability.
In addition, existing target tracking and autonomous landing schemes mostly combine PID control with visual information. PID control, however, has poor robustness and is strongly affected by the environment, which lowers control accuracy. When the target is moving, for example when interacting with another rotorcraft, grasping a moving object, or landing on a mobile platform, the rotorcraft shakes considerably, which prolongs the interaction, destabilizes the landing, and can even cause the rotorcraft to roll over and be damaged.
It is therefore desirable to devise an autonomous control system and method for rotorcraft that addresses the above problems.
Disclosure of Invention
In order to overcome the above problems, the present inventors have conducted intensive studies and designed a multi-rotor aircraft control system, characterized by comprising a measurement module, a processing module and an information transmission module, wherein
the measurement module is used for sensing the environment and comprises a GPS, an extended IMU sensor and a photoelectric pod;
the processing module is used for resolving the data detected by the measurement module so as to control the rotorcraft and the measurement module;
the information transmission module is used for communicating with the ground base station.
The processing module comprises a vision processor and a flight control main chip.
A detection model, a tracking model and an adjustment model are arranged in the vision processor,
after a mission target is determined, the detection model frames suggested regions from subsequent images;
after the mission target is determined, the tracking model frames a tracking selection box from subsequent images;
the adjustment model obtains the degree of overlap between the suggested regions and the tracking selection box, and controls the rotation angular velocity of the photoelectric pod according to the degree of overlap.
In another aspect, the present invention also provides a multi-rotor aircraft control method, comprising the steps of:
S1, the rotorcraft flies to the target area; the photoelectric pod captures images and transmits them to the ground base station; the ground base station frames the mission target and transmits the image containing the mission target to the vision processor;
S2, the photoelectric pod transmits the captured images to the vision processor, which determines the target position from the images and controls the photoelectric pod to track the target;
S3, the flight control main chip solves the normal expected overload in the line-of-sight frame and controls the rotorcraft to fly to the mission target through that overload.
Specifically, in step S1, the rotorcraft flies under the control of the ground base station. The ground base station acquires the images captured by the photoelectric pod in real time and judges whether a mission target is present; when the mission target appears, the ground base station frames the area where it is located in the image and transmits the image containing the target box to the vision processor.
In step S2, the vision processor receives the initial image and starts recognizing the images captured by the photoelectric pod, in the following substeps:
S21, frame suggested regions containing similar targets in the image;
S22, frame a tracking selection box in the image;
S23, determine the mission target position and adjust the rotation angular velocity of the photoelectric pod.
Further, in step S21, the vision processor frames the suggested regions with a detection model, a neural network model obtained through learning on a large amount of data.
In step S22, a regression model is trained with the target to be tracked framed in the initial image, and the regression model keeps framing and tracking the target in the subsequent images, providing the tracking selection box.
In step S23, the mission target area is determined by comparing the suggested regions with the tracking selection box; once the target area is determined, the rotation angular velocity of the photoelectric pod is adjusted so that the target area lies at the center of the pod's field of view.
According to the present invention, in step S3, the normal expected overload in the line-of-sight frame is solved in real time by:
a_c = N · V_r · q̇    (1)
where a_c denotes the normal expected overload in the line-of-sight frame, N the proportional navigation gain (preferably 4), V_r the relative speed along the line-of-sight direction, and q̇ the rotation angular velocity of the photoelectric pod.
The invention has the following advantages:
(1) by providing an independent vision processor 4, the multi-rotor aircraft control system and method split measurement resolving and rotorcraft control across two chips, improving resolving precision and speed;
(2) by decomposing recognition into suggested regions and a tracking selection box to determine the mission target, they reduce the amount of computation and raise the computation speed; recognition is fast, which makes them particularly suitable for tracking, interacting with and landing on fast-moving targets;
(3) when the rotorcraft interacts with or lands on a mission target, its speed relative to the target is low, making the process safer and more stable;
(4) when tracking a moving mission target the flight speed is high, so the target can be caught up with in a short time.
Drawings
FIG. 1 illustrates a schematic view of a multi-rotor aircraft control system in accordance with a preferred embodiment of the present invention;
FIG. 2 illustrates a schematic view of a multi-rotor aircraft control system in accordance with a preferred embodiment of the present invention;
FIG. 3 is a schematic diagram of a multi-rotor aircraft control method in accordance with a preferred embodiment of the present invention;
FIG. 4 shows a mission target recognition diagram in Example 1 of the present invention.
Reference numerals
1-GPS;
2-extended IMU sensor;
3-photoelectric pod;
4-vision processor.
Detailed Description
The invention is explained in more detail below with reference to the figures and examples; its features and advantages will become clearer from the description.
The word "exemplary" is used herein exclusively to mean "serving as an example, embodiment, or illustration." Any embodiment described as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments. While the various aspects of the embodiments are presented in the drawings, the drawings are not necessarily drawn to scale unless specifically indicated.
In one aspect, the present invention provides a multi-rotor aircraft control system comprising a measurement module, a processing module and an information transmission module, as shown in FIG. 1 and FIG. 2.
The measurement module is used for sensing the environment and comprises a GPS (global positioning system) 1, an extended IMU (inertial measurement unit) sensor 2 and a photoelectric pod 3.
Further, the extended IMU sensor includes a gyroscope, an accelerometer, a magnetometer and a barometer; the altitude information provided by the barometer supplements the GPS information to compensate for the low accuracy of GPS altitude detection.
According to the invention, the photoelectric pod is mounted at the head of the rotorcraft to guarantee a wide field of view; the GPS is mounted at the tail, far from other electronic equipment, to reduce electromagnetic interference; and the extended IMU sensor is mounted at the center of mass to ensure the accuracy of the detected data.
In a preferred embodiment, the photoelectric pod is an EOT90a3 photoelectric pod used for capturing images in real time; it offers high pointing-angle control precision, low delay and similar advantages.
In a preferred embodiment, the gyroscope is an L3GD20H, the accelerometer and magnetometer are an LSM303D, and the barometer is an MS5611.
The processing module is used for resolving the data detected by the measurement module and thereby controlling the rotorcraft and the measurement module; it comprises a vision processor 4, a flight control main chip, and a power management module that supplies electric energy.
A traditional processing module transmits the data detected by the measurement module to the flight control main chip, which resolves the data and controls the rotorcraft and the measurement module. Limited by the computing power of the flight control chip, accurate and fast resolving cannot be achieved: the information accuracy obtained by the chip is low, environmental perception is poor, autonomous recognition cannot be completed well, control accuracy is low, and the rotorcraft shakes markedly during interaction and landing.
In the invention, by providing the independent vision processor 4, measurement resolving and rotorcraft control are split across two chips, which improves resolving precision and speed.
In a preferred embodiment, the vision processor 4 is a TX2 processor and the flight control main chip is an STM32F427; the TX2 is small, has high image-processing performance and low power consumption, and is therefore well suited for mounting on a rotorcraft.
According to a preferred embodiment of the invention, the photoelectric pod 3 is connected to the vision processor 4 via a USART; the gyroscope, accelerometer, magnetometer and barometer are connected to the flight control main chip via SPI; and the vision processor 4 communicates with the flight control main chip through mavros.
The information transmission module is used for communicating with the ground base station: it transmits images to the ground base station and relays the ground base station's instructions to the rotorcraft.
The instructions include the selection of the mission target and direct control of the rotorcraft.
According to the invention, the information transmission module transmits the images captured by the photoelectric pod to the ground base station, where the target is framed; it then receives the image containing the target box back from the ground base station and passes it to the vision processor.
After the vision processor obtains the image containing the target box, the photoelectric pod transmits subsequently captured images to the vision processor.
Further, the vision processor frames the target area in the subsequently captured images and controls the rotation angular velocity of the photoelectric pod so that the target area lies at the center of the captured image.
Further, a detection model, a tracking model and an adjustment model are provided in the vision processor.
The detection model frames suggested regions from subsequent images after the mission target is determined; the tracking model frames a tracking selection box from subsequent images after the mission target is determined; and the adjustment model obtains the degree of overlap between the suggested regions and the tracking selection box and controls the rotation angular velocity of the photoelectric pod according to that degree of overlap.
The flight control main chip solves the normal expected overload in the line-of-sight frame in real time and, through this overload, drives the rotors so that the aircraft flies to the mission target, completing the grasp of the mission target or the landing on it.
In another aspect, the present invention provides a multi-rotor aircraft control method, as shown in FIG. 3, comprising the steps of:
S1, the rotorcraft flies to the target area; the photoelectric pod captures images and transmits them to the ground base station; the ground base station frames the mission target and transmits the image containing the mission target to the vision processor;
S2, the photoelectric pod transmits the captured images to the vision processor, which determines the target position from the images and controls the photoelectric pod to track the target;
S3, the flight control main chip solves the normal expected overload in the line-of-sight frame and controls the rotorcraft to fly to the mission target through that overload.
In step S1, the rotorcraft flies under the control of the ground base station. The ground base station acquires the images captured by the photoelectric pod in real time and judges whether a mission target is present; when the mission target appears, the ground base station frames the area where it is located in the image and transmits the image containing the target box to the vision processor.
In a preferred embodiment, the photoelectric pod captures images at 20 to 30 Hz, which preserves image continuity while reducing the total amount of image data transmitted, lowering the performance requirement on the information transmission module and saving the rotorcraft's electric energy.
According to the invention, the ground base station picks any frame from the received images and frames the target area in it; this frame, containing the target box, is taken as the initial image and transmitted through the information transmission module to the vision processor so that the vision processor can recognize the mission target.
In step S2, after receiving the initial image, the vision processor starts recognizing the images captured by the photoelectric pod, specifically in the following substeps:
S21, frame suggested regions containing similar targets in the image;
S22, frame a tracking selection box in the image;
S23, determine the mission target position and adjust the rotation angular velocity of the photoelectric pod.
According to the present invention, step S21 is performed in the detection model, step S22 in the tracking model, and step S23 in the adjustment model.
In the visual recognition process, the mission target is determined by decomposing recognition into suggested regions and a tracking selection box. Compared with directly recognizing the mission target, this lowers the required model precision and the amount of computation while preserving correctness, and raises the computation speed, so the rotorcraft can recognize and track a fast-moving mission target. It also avoids the low target-box regression precision, inaccurate target-contour recognition and high recognition difficulty of the traditional recognition process, improving the overall tracking result.
Preferably, steps S21 and S22 are performed synchronously: each time the photoelectric pod delivers an image, the vision processor frames suggested regions and a tracking selection box in it, and the adjustment model then fuses the two, as sketched below.
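For illustration only, the synchronized per-frame flow of steps S21 to S23 can be sketched as follows. This is a minimal sketch rather than the patent's implementation: detector, tracker and pod are duck-typed placeholders, and the iou function is the overlap computation written out after step S23 below.

```python
def center_of(box):
    """Midpoint of an (x1, y1, x2, y2) box (box format is an assumption)."""
    return ((box[0] + box[2]) / 2, (box[1] + box[3]) / 2)

def process_frame(image, detector, tracker, pod, iou, threshold=0.5):
    """One synchronized pass of S21-S23 (hypothetical placeholder objects)."""
    proposals = detector.detect_proposals(image)   # S21: suggested regions
    track_box = tracker.track(image)               # S22: tracking selection box
    if track_box is None or not proposals:
        return None                 # occlusion: alert the ground base station
    # S23: keep the suggested region overlapping the tracking box the most
    best = max(proposals, key=lambda box: iou(box, track_box))
    if iou(best, track_box) < threshold:
        return None
    pod.point_at(center_of(best))   # drive the pod's rotation angular rate
    return best
```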
Specifically, in step S21, the vision processor frames, in each image, suggested regions containing objects similar or identical in shape to the target to be tracked. Preferably, each image may contain any number of suggested regions.
In the invention, the vision processor frames the suggested regions through the detection model. A target model trained on samples is stored in the detection model, from which the detection model retrieves the target's appearance information; that is, the detection model can frame objects similar in appearance to the target to be tracked, producing suggested regions with bounding boxes.
Preferably, the detection model is a neural network model obtained through learning on a large amount of data. It divides the image into sampling regions, preferably a 7 × 7 grid; for each sampling region it predicts the positions and sizes of B target bounding boxes whose centers fall in that region, together with the confidence that a target lies in each box, and classifies the objects in the boxes.
Further, the output of the detection model is a multidimensional tensor, preferably of size 7 × 7 × (5 × B + C). Here 5 corresponds to the position, size and confidence (px, py, pw, ph, pp) of each bounding box: the position comprises the coordinates px and py of the box center on the X and Y axes, the size comprises the width pw and height ph, and the confidence is denoted pp. B denotes the number of candidate boxes preset per grid cell, usually 2, and C denotes the number of classes, i.e. concrete categories such as car, pedestrian, bicycle and airplane.
The confidence is the probability the neural network assigns to the region belonging to the predicted category. For example, if the network outputs a car at some position with confidence 0.6, it considers the region to be a car with probability 60% and not a car with probability 40%.
Further, in the detection model, the last fully connected layer predicts the center coordinates (px, py), the width and height (pw, ph), the confidence pp, and the per-class probabilities of each bounding box. Width and height are normalized relative to the image, and the center coordinates are normalized relative to the grid cell; all lie between 0 and 1.
The detection model multiplies the class probabilities predicted by each grid cell with the confidence of the corresponding bounding box to obtain a class confidence for each box, and filters out boxes with too low a confidence, usually those below 0.7. Non-maximum suppression is then applied to the remaining windows in order of confidence to filter out overlapping windows, and the remainder is output as the suggested regions.
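As an illustrative sketch of this filtering stage (not code from the patent), the confidence filtering and non-maximum suppression could look as follows in NumPy. The 0.7 confidence threshold comes from the text; the 0.5 NMS overlap threshold and the (x1, y1, x2, y2) box format are assumptions.

```python
import numpy as np

def filter_detections(boxes, scores, conf_thresh=0.7, nms_thresh=0.5):
    """boxes: (N, 4) array of (x1, y1, x2, y2); scores: class confidences."""
    keep_mask = scores >= conf_thresh          # drop low-confidence boxes
    boxes, scores = boxes[keep_mask], scores[keep_mask]
    order = np.argsort(scores)[::-1]           # highest confidence first
    kept = []
    while order.size > 0:
        i = order[0]
        kept.append(i)
        # IoU of the top box against all remaining boxes
        x1 = np.maximum(boxes[i, 0], boxes[order[1:], 0])
        y1 = np.maximum(boxes[i, 1], boxes[order[1:], 1])
        x2 = np.minimum(boxes[i, 2], boxes[order[1:], 2])
        y2 = np.minimum(boxes[i, 3], boxes[order[1:], 3])
        inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        areas = (boxes[order[1:], 2] - boxes[order[1:], 0]) * \
                (boxes[order[1:], 3] - boxes[order[1:], 1])
        iou = inter / (area_i + areas - inter)
        order = order[1:][iou < nms_thresh]    # suppress overlapping windows
    return boxes[kept], scores[kept]
```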
In step S22, the target to be tracked, framed in the initial image, is used to train a regression model, and the regression model keeps framing and tracking the target in the subsequent images; that is, each subsequent frame yields a tracking selection box containing the target to be tracked.
Preferably, after obtaining the initial image the vision processor preprocesses it to improve processing speed and recognition accuracy, as follows:
s201, constructing a Gaussian distribution regression label, wherein the Gaussian distribution regression label can be y1Indicating that the closer to the center value the greater the edge is to 0, y1The matrix and the search window are the same size.
Usually, the tracking algorithm does not search for the target in the whole image, but takes a region 2.5 times the size of the target at the position of the target in the previous frame to search for the target, and the region is called as a "search window". I.e. the search window is a sub-image cropped from the whole image, the image being in the form of a matrix.
S202, in frame 1, crop a search window at the target position P1 and extract a feature x1 of the same size as the search window; a cosine window must be applied to x1. Here x1 is the Histogram of Oriented Gradients (HOG) of the search-window region.
A cosine window is a window whose value is 1 at the center and approaches 0 toward the edges; applying it suppresses surrounding samples and highlights the middle samples.
S203, use x1 and y1 to train the regression model f1 such that y1 = f1(x1) holds.
The regression label has its highest score at the center and 0 at the edges. With the input x1 and output y1 known, training the regression model f1 means solving for the parameters of f1. For example, for a one-dimensional equation, knowing X = 1 and Y = 2 we can fit the function Y = 2X, and then when X = 2 we obtain Y = 4; with x1 and y1, the function's input and output change from numbers to matrices.
According to the present invention, in step S22 the tracked target is identified through the following substeps:
s221, in the t frame image, from PtMiddle extracted feature xtConstructing a Gaussian distribution regression label ytTraining regression model ytThe regression model is responsive to samples of a fixed size window; in this substep, when t is 1, in substeps S221 to S223, as in the operation procedures in substeps S201 to S203 described above, due to the feature x, the result istThe regression model is continuously changed, and the inverse solution is changed every frame.
S222, in the t +1 th frame image, from Pt+1Generating a candidate window near the position; specifically, at the target position of the previous frame, a region 2.5 times the size of the target of the previous frame is selected as a candidate window, and a regression model y is usedtTesting the response of each candidate window; generally, the function of a regression model is called a filter, and the response is the output obtained by applying the filter to a certain region.
S223, obtaining a maximum response window and a position Pt+1At the position Pt+1Namely the target position to be tracked in the t +1 th frame image, and the response window is a tracking selection frame.
S224, when the image of the t +2 th frame, which is the next frame, is to be obtained, the substeps S221 to S223 are repeated.
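The frame-to-frame loop of S221 to S224 can then be sketched as below. As one common realization of "train f_t and test responses", this uses a correlation-filter-style ridge regression solved in the Fourier domain; that choice and the regularizer lam are assumptions, not details from the patent text.

```python
import numpy as np

def train_filter(x, y, lam=1e-2):
    """S221: solve y = f(x) by ridge regression in the Fourier domain
    (a correlation-filter realization; lam is a small regularizer)."""
    X, Y = np.fft.fft2(x), np.fft.fft2(y)
    return np.conj(X) * Y / (np.conj(X) * X + lam)

def respond(F, x):
    """S222: apply filter F to a candidate window x; returns a response map."""
    return np.real(np.fft.ifft2(F * np.fft.fft2(x)))

def track_step(F, window, prev_pos):
    """S222/S223: window is the 2.5x search region cropped around prev_pos
    (e.g. with make_training_pair above); returns the new position P_{t+1}."""
    r = respond(F, window)
    dy, dx = np.unravel_index(np.argmax(r), r.shape)   # max-response cell
    h, w = window.shape
    # Offset of the response peak from the window center gives the motion
    return (prev_pos[0] + dx - w // 2, prev_pos[1] + dy - h // 2)

# S224: per frame, retrain F = train_filter(x_t, y_t) and call track_step again.
```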
In step S23, an adjustment model is provided to determine the mission target area by comparing the suggested regions with the tracking selection box.
Specifically, the mission target area is determined by comparing the degree of overlap between each suggested region and the tracking selection box, where the degree of overlap is:
overlap = Area(S ∩ T) / Area(S ∪ T)
with S a suggested region and T the tracking selection box.
because the suggested region and the tracking selection frame are selected on the same frame of the graph line, the intersection and the union of the suggested region and the tracking selection frame can be directly read, and the respective areas are read according to the number of the pixel points.
In the present invention there is only one tracking selection box, while step S21 may yield several suggested regions; the degree of overlap with the tracking box is resolved for each suggested region, and the suggested region with the highest overlap is selected as the target area.
Further, a threshold is set in the adjustment model, and a suggested region whose overlap exceeds the threshold is taken as the target area. When no tracking selection box is obtained in step S22, or no suggested region exceeds the threshold, the target is judged to be occluded; the rotorcraft then alerts the ground base station through the information transmission module for manual intervention, and keeps capturing images until a suggested region above the threshold appears or other instructions arrive from the ground base station.
In the present invention, the threshold preferably takes a value of 0.2 to 0.7, more preferably 0.5.
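A sketch of the adjustment model's selection and occlusion logic under these rules; the (x1, y1, x2, y2) box format is an assumption.

```python
def iou(a, b):
    """Degree of overlap: intersection area over union area."""
    ix = max(0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def select_target(proposals, track_box, threshold=0.5):
    """Pick the suggested region overlapping the tracking box the most;
    return None when occluded (no box above the threshold)."""
    if track_box is None or not proposals:
        return None             # target occluded: alert the ground station
    best = max(proposals, key=lambda p: iou(p, track_box))
    return best if iou(best, track_box) > threshold else None
```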
In step S23, after the target area is determined, the rotation angular velocity of the photoelectric pod is adjusted so that the target area, preferably its midpoint, lies at the center of the pod's field of view.
Preferably, an angular-rate control command for the photoelectric pod is computed from the target area in successive frame images, and the pod is rotated based on this command so that the mission target always stays at the center of the field of view.
Keeping the mission target area at the center of the field of view makes the target hard to lose, gives the guidance algorithm a simpler data model as the rotorcraft subsequently approaches the target, reduces the amount of computation, raises the computation speed, and makes it possible to solve the normal expected overload in the line-of-sight frame.
Specifically, PID computation on the pixel deviation of the target-area midpoint across successive frames yields the angular-rate control command:
control_x = K_p · err_x + k_i · ∫ err_x dt + k_d · d(err_x)/dt
control_y = K_p · err_y + k_i · ∫ err_y dt + k_d · d(err_y)/dt
where control_x and control_y are the components of the generated angular-rate control command on the X and Y axes, the X and Y axes being two mutually perpendicular coordinate axes on the image; err_x and err_y are the pixel deviations, i.e. the distance in pixels between the target-area midpoint and the image center; K_p, k_i and k_d are the PID parameters, preferably with the values (80, 0, 5); and dt is the time interval between adjacent frames.
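A minimal discrete sketch of this angular-rate command, assuming the integral is accumulated per frame and the derivative is taken as a first difference between frames; the gains (80, 0, 5) follow the text, and dt = 0.05 s corresponds to the 20 Hz frame rate of Example 1.

```python
class PodAxisPID:
    """PID on pixel deviation for one pod axis (X or Y)."""
    def __init__(self, kp=80.0, ki=0.0, kd=5.0):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_err = None

    def update(self, err, dt):
        """err: pixel deviation of the target midpoint from the image center."""
        self.integral += err * dt
        deriv = 0.0 if self.prev_err is None else (err - self.prev_err) / dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

# One controller per image axis; the outputs drive the pod's angular rate:
pid_x, pid_y = PodAxisPID(), PodAxisPID()
# control_x = pid_x.update(err_x, dt=0.05); control_y = pid_y.update(err_y, dt=0.05)
```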
In the invention, the vision processor acquires the rotation angle of the photoelectric pod in real time and transmits the pod's rotation angle and rotation angular velocity to the flight control main chip; the flight control main chip solves the normal expected overload in the line-of-sight frame and controls the rotorcraft to fly to the mission target through that overload.
Specifically, in step S3, the normal expected overload in the line-of-sight frame is solved in real time by formula (1):
a_c = N · V_r · q̇    (1)
where a_c denotes the normal expected overload in the line-of-sight frame, N the proportional navigation gain (preferably 4), V_r the relative speed along the line-of-sight direction, and q̇ the rotation angular velocity of the photoelectric pod.
In the invention, the line-of-sight direction is the direction of the line joining the photoelectric pod and the mission target, and the line-of-sight relative speed is the speed of the mission target relative to the rotorcraft in the line-of-sight frame.
In the invention, the PN (proportional navigation) method takes the rotorcraft-to-target line-of-sight angle and line-of-sight angular rate as input; as long as the angle is kept unchanged, the target is intercepted without tail-chasing it, which greatly increases the landing speed.
Further, as the solution of the normal expected overload shows, the smaller the relative speed between rotorcraft and mission target, the smaller the rotorcraft's overload and the closer its speed to the target's. Controlling the rotorcraft through the normal expected overload in the line-of-sight frame therefore yields a higher speed while chasing a moving mission target and a lower speed when close to it, making interaction with, or landing on, a moving mission target safer and more stable.
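A one-line sketch of the real-time solution of formula (1); the numeric values in the usage comment are illustrative only.

```python
def normal_expected_overload(N, V_r, q_dot):
    """Formula (1): a_c = N * V_r * q_dot.
    N: proportional navigation gain (preferably 4)
    V_r: relative speed along the line of sight [m/s]
    q_dot: rotation angular velocity of the photoelectric pod [rad/s]"""
    return N * V_r * q_dot

# e.g. a_c = normal_expected_overload(4, V_r=3.0, q_dot=0.05)  # illustrative
```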
According to the invention, the rotorcraft adopts different strategies in the tracking stage and the interaction/landing stage: during tracking, proportional guidance is used in the horizontal plane and PID control in the altitude direction; once the horizontal speed and direction match the target's, the altitude is lowered and the rotorcraft interacts with or lands on the mission target. Preferably, the PID values of the altitude-direction PID control are 0.8, 0.4 and 1.5.
Preferably, the line-of-sight relative speed V_r is taken as the component along the X axis of the relative velocity V_L between the rotorcraft and the mission target in the line-of-sight coordinate system, obtained by:
V_r = [1, 0, 0] · V_L    (2)
where V_r is a scalar and V_L a vector.
Preferably, the relative velocity V_L between the rotorcraft and the mission target in the line-of-sight coordinate system is obtained by transforming the relative velocity in the geographic coordinate system:
V_L = C · V_n    (3)
where C denotes the transformation matrix between the geographic coordinate system and the line-of-sight coordinate system, and V_n denotes the relative velocity between the rotorcraft and the mission target in the geographic coordinate system;
preferably, the conversion matrix
Figure BDA0002884581840000143
Obtained by the following formula:
Figure BDA0002884581840000144
preferably, VnThree components V in a geographic coordinate systemx、VyAnd VzThe difference is obtained by differentiating the deviation values x, y and z of the rotor and the target in three directions in the geographic coordinate system respectively.
In the geographic coordinate system, the vertical deviation z is obtained as the difference between the altitude of the rotorcraft and that of the target; the rotorcraft's altitude is given by its onboard barometer, and the mission target's altitude is calculated from successive image frames containing the mission target;
in the geographic coordinate system, the horizontal deviations x and y are obtained by formula (5), whose explicit expression is given only as an image in the original publication, where q_x, q_y and q_z denote the components of the photoelectric pod's rotation angle along the X, Y and Z axes of the geographic coordinate system, θ denotes the pitch angle of the rotorcraft in the geographic coordinate system, and φ denotes its roll angle, both obtained through the gyroscope.
In the invention, the geographic coordinate system is the ENU (East-North-Up) frame: the x axis is parallel to the local horizontal plane and points geographic east (E), the y axis is parallel to the horizontal plane and points geographic north (N), and the z axis is perpendicular to the ground and points up (U), following the right-hand rule.
The line-of-sight coordinate system takes the center of the photoelectric pod's lens as origin O; the OX axis lies along the line from the rotorcraft to the mission target, positive toward the moving platform; the OZ axis lies in the vertical plane perpendicular to the ground, perpendicular to OX and positive upward; and the OY axis direction is determined by the right-hand rule.
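Putting formulas (2) and (3) together, the line-of-sight relative speed V_r could be resolved as sketched below. The transformation matrix is passed in as an argument because its explicit expression (formula (4)) is given only as an image in the original; the deviation vectors follow the ENU definitions above.

```python
import numpy as np

def relative_speed_los(dev_prev, dev_curr, dt, C_geo_to_los):
    """V_r from successive rotorcraft-to-target deviation vectors (x, y, z).
    dev_prev, dev_curr: deviations in the geographic (ENU) frame [m]
    dt: time between the two measurements [s]
    C_geo_to_los: 3x3 transformation matrix C of formula (3)"""
    V_n = (np.asarray(dev_curr) - np.asarray(dev_prev)) / dt  # differentiate
    V_L = C_geo_to_los @ V_n            # formula (3): rotate into the LOS frame
    return float(np.array([1.0, 0.0, 0.0]) @ V_L)  # formula (2): X component
```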
Examples
Example 1:
there are several cars on the ground, which cruise around the rotor disk, and the image frequency of the photoelectric pod on the rotorcraft is 20 Hz.
In step S1, the rotorcraft flies toward the target area; the photoelectric pod captures images and transmits them to the ground base station; the ground base station frames one of the cars as the mission target; and the image containing the mission target is transmitted to the vision processor;
in step S2, the detection model is a BP neural network model obtained through a large amount of data learning, which divides the image into 7 × 7 mesh regions for recognition, and filters out a bounding box with a confidence level less than 0.7, so as to frame out a suggested region containing similar targets (all cars); selecting a tracking selection frame through the tracking model frame; the adjustment model determines a task target area by calculating the degree of overlap and controls the rotation angular velocity of the electro-optical pod so that the task target area is located in the center of the image, as shown in fig. 4.
The degree of overlap is:
overlap = Area(S ∩ T) / Area(S ∪ T)
with S a suggested region and T the tracking selection box.
the control command of the photoelectric pod rotation angular velocity is to carry out PID calculation and acquisition on pixel deviation of the midpoint of a target area in the images of the continuous frames
Figure BDA0002884581840000162
Figure BDA0002884581840000163
Wherein, the controlxRepresenting the component of the generated angular rate control command in the X-axis, controlyRepresenting the component of the generated angular rate control command on the Y-axis, where the X-axis and the Y-axis are two coordinate axes perpendicular to each other on the image. err (r)x,erryRepresenting pixel deviation, i.e., the pixel value of the point in the target region in the image from the center of the image; k isp,ki,kdThe value is (80, 0, 5), and dt is 0.05 s.
The rotorcraft resolves the normal expected overload in the line-of-sight frame according to formula (1):
a_c = N · V_r · q̇
The proportional navigation gain N takes the value 4; controlled through the normal expected overload in the line-of-sight frame, the rotorcraft lands stably on the roof of the car framed by the ground base station.
The present invention has been described above in connection with preferred embodiments, but these embodiments are merely exemplary and illustrative. Various substitutions and modifications may be made on this basis, all of which fall within the protection scope of the invention.

Claims (10)

1. A multi-rotor aircraft control system, characterized by comprising a measurement module, a processing module and an information transmission module, wherein
the measurement module is used for sensing the environment and comprises a GPS, an extended IMU sensor and a photoelectric pod;
the processing module is used for resolving the data detected by the measurement module so as to control the rotorcraft and the measurement module;
the information transmission module is used for communicating with the ground base station.
2. The multi-rotor aircraft control system of claim 1, wherein
the processing module comprises a vision processor and a flight control main chip.
3. The multi-rotor aircraft control system of claim 2, wherein
a detection model, a tracking model and an adjustment model are arranged in the vision processor,
after a mission target is determined, the detection model frames suggested regions from subsequent images;
after the mission target is determined, the tracking model frames a tracking selection box from subsequent images;
the adjustment model obtains the degree of overlap between the suggested regions and the tracking selection box, and controls the rotation angular velocity of the photoelectric pod according to the degree of overlap.
4. A multi-rotor aircraft control method, comprising the steps of:
S1, the rotorcraft flies to the target area; the photoelectric pod captures images and transmits them to the ground base station; the ground base station frames the mission target and transmits the image containing the mission target to the vision processor;
S2, the photoelectric pod transmits the captured images to the vision processor, which determines the target position from the images and controls the photoelectric pod to track the target;
S3, the flight control main chip solves the normal expected overload in the line-of-sight frame and controls the rotorcraft to fly to the mission target through that overload.
5. The multi-rotor aircraft control method of claim 4, wherein
in step S1, the rotorcraft flies under the control of the ground base station; the ground base station acquires the images captured by the photoelectric pod in real time and judges whether a mission target is present; when the mission target appears, the ground base station frames the area where it is located in the image and transmits the image containing the target box to the vision processor.
6. The multi-rotor aircraft control method of claim 4, wherein
in step S2, the vision processor receives the initial image and starts recognizing the images captured by the photoelectric pod, in the following substeps:
S21, frame suggested regions containing similar targets in the image;
S22, frame a tracking selection box in the image;
S23, determine the mission target position and adjust the rotation angular velocity of the photoelectric pod.
7. The multi-rotor aircraft control method of claim 6, wherein
in step S21, the vision processor frames the suggested regions with a detection model, a neural network model obtained through learning on a large amount of data.
8. The multi-rotor aircraft control method of claim 6, wherein
in step S22, a regression model is trained with the target to be tracked framed in the initial image, and the regression model keeps framing and tracking the target in the subsequent images, providing the tracking selection box.
9. The multi-rotor aircraft control method of claim 6, wherein
in step S23, the mission target area is determined by comparing the suggested regions with the tracking selection box; after the target area is determined, the rotation angular velocity of the photoelectric pod is adjusted so that the target area lies at the center of the pod's field of view.
10. The multi-rotor aircraft control method of claim 4, wherein
in step S3, the normal expected overload in the line-of-sight frame is solved in real time by:
a_c = N · V_r · q̇
where a_c denotes the normal expected overload in the line-of-sight frame, N the proportional navigation gain (preferably 4), V_r the relative speed along the line-of-sight direction, and q̇ the rotation angular velocity of the photoelectric pod.
CN202110009800.XA 2021-01-05 2021-01-05 Multi-rotor-wing aircraft control system and method Pending CN113721449A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110009800.XA CN113721449A (en) 2021-01-05 2021-01-05 Multi-rotor-wing aircraft control system and method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110009800.XA CN113721449A (en) 2021-01-05 2021-01-05 Multi-rotor-wing aircraft control system and method

Publications (1)

Publication Number Publication Date
CN113721449A (en) 2021-11-30

Family

ID=78672477

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110009800.XA Pending CN113721449A (en) 2021-01-05 2021-01-05 Multi-rotor-wing aircraft control system and method

Country Status (1)

Country Link
CN (1) CN113721449A (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101645722B1 (en) * 2015-08-19 2016-08-05 아이디어주식회사 Unmanned aerial vehicle having Automatic Tracking and Method of the same
CN107450591A (en) * 2017-08-23 2017-12-08 浙江工业大学 Based on the tall and handsome unmanned plane motion tracking system up to TX2 processors
CN107463181A (en) * 2017-08-30 2017-12-12 南京邮电大学 A kind of quadrotor self-adoptive trace system based on AprilTag
US20180284777A1 (en) * 2015-12-10 2018-10-04 Autel Robotics Co., Ltd. Method, control apparatus, and system for tracking and shooting target
CN110081883A (en) * 2019-04-29 2019-08-02 北京理工大学 Low cost integrated navigation system and method suitable for high speed rolling flight device
CN111860461A (en) * 2020-08-05 2020-10-30 西安应用光学研究所 Automatic zooming method for built-in optical sensor of photoelectric pod


Similar Documents

Publication Publication Date Title
CN111461023B (en) Method for quadruped robot to automatically follow pilot based on three-dimensional laser radar
US20220234733A1 (en) Aerial Vehicle Smart Landing
US20200344464A1 (en) Systems and Methods for Improving Performance of a Robotic Vehicle by Managing On-board Camera Defects
CN109911188B (en) Bridge detection unmanned aerial vehicle system in non-satellite navigation and positioning environment
CN106054929B (en) A kind of unmanned plane based on light stream lands bootstrap technique automatically
US20190068829A1 (en) Systems and Methods for Improving Performance of a Robotic Vehicle by Managing On-board Camera Obstructions
CN110221625B (en) Autonomous landing guiding method for precise position of unmanned aerial vehicle
CN108153334B (en) Visual autonomous return and landing method and system for unmanned helicopter without cooperative target
CN105759829A (en) Laser radar-based mini-sized unmanned plane control method and system
CN106527481A (en) Unmanned aerial vehicle flight control method, device and unmanned aerial vehicle
WO2017116841A1 (en) Unmanned aerial vehicle inspection system
CN113485441A (en) Distribution network inspection method combining unmanned aerial vehicle high-precision positioning and visual tracking technology
CN104808685A (en) Vision auxiliary device and method for automatic landing of unmanned aerial vehicle
CN101109640A (en) Unmanned aircraft landing navigation system based on vision
CN105352495A (en) Unmanned-plane horizontal-speed control method based on fusion of data of acceleration sensor and optical-flow sensor
CN107783106A (en) Data fusion method between unmanned plane and barrier
CN109460046B (en) Unmanned aerial vehicle natural landmark identification and autonomous landing method
CN107783545A (en) Post disaster relief rotor wing unmanned aerial vehicle obstacle avoidance system based on OODA ring multi-sensor information fusions
CN102190081A (en) Vision-based fixed point robust control method for airship
CN105388908A (en) Machine vision-based unmanned aerial vehicle positioned landing method and system
CN107783119A (en) Apply the Decision fusion method in obstacle avoidance system
CN110104167A (en) A kind of automation search and rescue UAV system and control method using infrared thermal imaging sensor
Desaraju et al. Vision-based Landing Site Evaluation and Trajectory Generation Toward Rooftop Landing.
CN114689030A (en) Unmanned aerial vehicle auxiliary positioning method and system based on airborne vision
CN111615677B (en) Unmanned aerial vehicle safety landing method and device, unmanned aerial vehicle and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination