CN113141459A - Unmanned aerial vehicle airborne vision intelligent processing system and method - Google Patents
Unmanned aerial vehicle airborne vision intelligent processing system and method
- Publication number: CN113141459A (application number CN202011111242.XA)
- Authority: CN (China)
- Prior art keywords: lens, unmanned aerial vehicle, image, module
- Legal status: Granted
Classifications
- H04N23/695—Control of camera direction for changing a field of view, e.g. pan, tilt or based on tracking of objects
- H04N23/57—Mechanical or electrical details of cameras or camera modules specially adapted for being embedded in other devices
- H04N23/61—Control of cameras or camera modules based on recognised objects
- B64C39/02—Aircraft not otherwise provided for characterised by special use
- B64D47/08—Arrangements of cameras
- B64U2101/00—UAVs specially adapted for particular uses or applications
Abstract
The invention discloses an unmanned aerial vehicle airborne vision intelligent processing system and method. The system comprises an unmanned aerial vehicle, together with a photoelectric pod and an information processing module carried on the unmanned aerial vehicle. The information processing module comprises an image recognition and tracking sub-module, a gimbal PID (proportional-integral-derivative) controller sub-module and a parameter correction sub-module. The parameter correction sub-module acquires detection data from a gyroscope, an accelerometer and a magnetometer, and uses the detection data to correct the lens rotation angular rate parameter output by the image recognition and tracking sub-module. The method comprises the following steps: collecting an image; identifying a target in the image and outputting a lens rotation angular rate parameter; correcting the lens rotation angular rate parameter; and controlling the rotation of the photoelectric pod lens through a PID controller. The system and method can eliminate or reduce the over-rotation phenomenon: the photoelectric pod lens rotates stably, lens shake is greatly reduced, and captured images are clearer.
Description
Technical Field
The invention relates to an unmanned aerial vehicle airborne vision intelligent processing system and method, and belongs to the field of unmanned aerial vehicles.
Background
In recent years, unmanned aerial vehicles have developed vigorously and have made the transition from military to civilian use, becoming widely recognized practical tools that can replace manual labor in complex work. Among such tasks, continuous tracking of a target by an unmanned aerial vehicle is a common one.
In target tracking tasks, the traditional approach is to mount a camera on the unmanned aerial vehicle, for example a monocular or binocular camera. However, the relative target position obtained this way is accurate only after compensating for the current motion state of the unmanned aerial vehicle; and because the lens is fixed to the airframe, the flight attitude of the unmanned aerial vehicle affects the camera's ability to track the target continuously, so the target is frequently lost.
The prior art also includes mounting a gimbal (pan-tilt head) on the unmanned aerial vehicle to photograph the target and derive its relative position. However, with existing gimbal PID control algorithms, over-rotation occurs when the gimbal lens rotates. For example, when the gimbal actually needs to rotate from 20° to 40°, under traditional PID control it over-rotates from 20° to 45°, swings back to 35°, and only then settles at 40°, sometimes over-rotating several times. If the target moves too fast at that moment, it is easily lost; and when the unmanned aerial vehicle maneuvers sharply, conventional PID control is relatively unstable, which also tends to cause loss of the target.
In addition, in most existing unmanned aerial vehicle tracking schemes, the unmanned aerial vehicle captures an image, transmits it to the ground station, and the ground station then controls the tracking. This process not only degrades the timeliness of the system, but also makes the integrity of the transmitted information hard to guarantee.
Therefore, it is necessary to design an unmanned aerial vehicle airborne vision intelligent processing system and method that can ensure the unmanned aerial vehicle tracks a target efficiently, stably and rapidly, while enabling ground staff to clearly understand the state of the unmanned aerial vehicle and to take manual control when necessary.
Disclosure of Invention
In order to overcome the above problems, the inventor of the present invention has conducted intensive research and, on one hand, provides an unmanned aerial vehicle airborne vision intelligent processing system, which includes an unmanned aerial vehicle, and a photoelectric pod and an information processing module mounted on the unmanned aerial vehicle.
Further, the photoelectric pod is provided with an inertial measurement unit for detecting attitude information of the photoelectric pod, the inertial measurement unit comprising a gyroscope, a magnetometer and an accelerometer;
the information processing module comprises an image recognition and tracking sub-module, a gimbal (pan-tilt head) PID controller sub-module and a parameter correction sub-module,
the image recognition and tracking sub-module is used for recognizing a target in an image, calculating the rotation angle the photoelectric pod lens requires to move the target to the middle position of the image, and outputting a lens rotation angular rate parameter;
the parameter correction sub-module is used for correcting the lens rotation angular rate parameter output by the image recognition and tracking sub-module;
and the gimbal PID controller sub-module receives the corrected lens rotation angular rate parameter and uses it as the input of a PID controller to control the rotation of a motor in the photoelectric pod.
According to the invention, the parameter correction sub-module receives the lens rotation angular rate parameter output by the image recognition and tracking sub-module, acquires detection data of the gyroscope, the accelerometer and the magnetometer, and corrects the lens rotation angular rate parameter with the detection data.
In a preferred embodiment, the unmanned aerial vehicle airborne vision intelligent processing system further comprises a screen projector (a wireless image transmitter), which is connected with the information processing module and transmits the images received by the information processing module to the ground base station.
On the other hand, the invention also provides an unmanned aerial vehicle airborne vision intelligent processing method, comprising the following steps:
S1, collecting an image;
S2, identifying a target in the image and outputting a lens rotation angular rate parameter;
S3, correcting the lens rotation angular rate parameter, and controlling the rotation of the photoelectric pod lens through a PID controller.
In step S1, the image is captured by the photoelectric pod, and the captured image is transferred to the image recognition and tracking sub-module.
In step S2, the image is processed by the image recognition and tracking sub-module, which is provided with a neural network; the target is recognized by the neural network, and the rotation angular rate parameter of the photoelectric pod lens required to move the target to the middle position of the image is calculated.
In step S3, the lens rotation angular rate parameter is corrected by fusing the information detected by the inertial measurement unit, which includes gyroscope detection information, magnetometer detection information and accelerometer detection information, with the lens rotation angular rate parameter.
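Steps S1 to S3 form a closed onboard perception-control loop. The following minimal Python sketch illustrates the data flow just described; every function in it is an illustrative stub under assumed interfaces, not code or naming from the patent.

```python
# Illustrative stub loop for steps S1-S3; all names and values are assumptions.
import random

def capture_image():
    # S1: stand-in for a frame grabbed from the photoelectric pod.
    return [[0.0] * 64 for _ in range(48)]

def identify_target(frame):
    # S2: stand-in for the neural-network detector; returns the desired
    # lens angular rates (yaw, pitch) in rad/s, or None when no target.
    return (random.uniform(-0.1, 0.1), random.uniform(-0.1, 0.1))

def correct_rates(rates, gyro_rates):
    # S3: stand-in for the IMU fusion detailed below; here it merely damps
    # the command with the measured body rates, purely for illustration.
    return tuple(r - 0.5 * w for r, w in zip(rates, gyro_rates))

for _ in range(3):
    frame = capture_image()                              # S1
    rates = identify_target(frame)                       # S2
    if rates is not None:
        corrected = correct_rates(rates, (0.01, -0.02))  # S3
        print("PID input (yaw, pitch):", corrected)
```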
In a preferred embodiment, step S3 includes the following sub-steps:
S31, fusing the gyroscope detection information to obtain a gyroscope fusion attitude quaternion;
S32, fusing the magnetometer detection information and the accelerometer detection information to obtain a magnetometer-accelerometer fusion attitude quaternion;
S33, linearly fusing the gyroscope fusion attitude quaternion and the magnetometer-accelerometer fusion attitude quaternion to complete the correction, and transmitting the corrected attitude quaternion to the PID controller to steer the lens.
Further, in step S31, the gyroscope fusion attitude quaternion can be expressed as:

$$q_{\omega,t}=\hat{q}_{t-1}+{}^{S}\dot{q}_{\omega,t}\,\Delta t$$

wherein $\hat{q}_{t-1}$ is the attitude quaternion estimate of the lens rotation angular rate at time t-1, obtained by filtering the attitude quaternion characterizing the lens rotation angular rate parameter with a Kalman filter; ${}^{S}\dot{q}_{\omega,t}$ is the differential of the attitude quaternion, and Δt is the detection period of the gyroscope. The differential of the attitude quaternion can be obtained by the following formula:

$$ {}^{S}\dot{q}_{\omega,t}=\tfrac{1}{2}\,\hat{q}_{t-1}\otimes\begin{bmatrix}0&{}^{S}\omega_{t}\end{bmatrix} $$

wherein ${}^{S}\omega_{t}$ is the vector of angular velocity components about each axis in the gyroscope detection information.
The invention has the following advantages:
(1) the unmanned aerial vehicle airborne vision intelligent processing system and method provided by the invention eliminate or reduce the over-rotation phenomenon;
(2) with the system and method provided by the invention, the photoelectric pod lens rotates stably, lens shake is greatly reduced, and captured images are clearer;
(3) with the system and method provided by the invention, image transmission does not occupy the bandwidth of the communication module, communication interference is reduced, and the reliability of the unmanned aerial vehicle is improved;
(4) with the system and method provided by the invention, the computational load of the information processing module is reduced.
Drawings
Fig. 1 shows a schematic structural diagram of an unmanned aerial vehicle airborne vision intelligent processing system according to a preferred embodiment of the invention;
Fig. 2 is a flow chart of an unmanned aerial vehicle airborne vision intelligent processing method according to a preferred embodiment of the invention.
Detailed Description
The invention is explained in more detail below with reference to the figures and examples. The features and advantages of the present invention will become more apparent from the description.
The word "exemplary" is used exclusively herein to mean "serving as an example, embodiment, or illustration. Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments. While the various aspects of the embodiments are presented in drawings, the drawings are not necessarily drawn to scale unless specifically indicated.
The invention provides an unmanned aerial vehicle airborne vision intelligent processing system, which comprises an unmanned aerial vehicle, together with a photoelectric pod and an information processing module carried on the unmanned aerial vehicle.
The unmanned aerial vehicle has a flight control unit and a communication module. The flight control unit controls the flight of the unmanned aerial vehicle, and the communication module communicates with the ground base station so that the ground base station can control the unmanned aerial vehicle.
The photoelectric pod is provided with a lens and an inertial measurement unit; the lens is used for capturing images, and the inertial measurement unit is used for detecting attitude information of the photoelectric pod and comprises a gyroscope, a magnetometer and an accelerometer.
Further, in the present invention, the inertia measurement unit may be integrated inside the photoelectric pod or mounted outside the photoelectric pod, and is not particularly limited thereto.
The information processing module is electrically connected with the photoelectric pod and the unmanned aerial vehicle flight control unit.
Further, the photoelectric pod is mounted at the bottom of the unmanned aerial vehicle; the information processing module is used for processing the images captured by the photoelectric pod and controlling the photoelectric pod and the unmanned aerial vehicle according to the processing results.
In a preferred embodiment, the photoelectric pod is a two-degree-of-freedom photoelectric pod whose lens can be rotated in two mutually perpendicular directions.
In the invention, the lens carried by the photoelectric pod is not particularly limited, and can be a visible light lens, an infrared lens or other lenses, and the person skilled in the art can select the lens according to actual needs.
In a preferred embodiment, the photoelectric pod is connected with the information processing module through a universal USB or serial port; such a universal quick interface allows photoelectric pods of different types to be swapped quickly when different tasks are executed.
According to the invention, the information processing module comprises an image recognition and tracking sub-module, a gimbal PID controller sub-module and a parameter correction sub-module.
The image recognition and tracking sub-module is used for recognizing a target in an image, calculating the rotation angle the photoelectric pod lens requires to move the target to the middle position of the image, and outputting a lens rotation angular rate parameter.
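The patent leaves the angle computation itself to known methods, but one standard way to obtain the required rotation is a pinhole-camera model: the angular offset of the target pixel from the image centre. The sketch below assumes example intrinsics (focal lengths, image size); none of these values come from the patent.

```python
# A sketch of computing the rotation that centres a target pixel, under an
# assumed pinhole model; fx, fy, width and height are example values.
import math

def rotation_to_center(u, v, width=1280, height=720, fx=1000.0, fy=1000.0):
    """Yaw/pitch angles (rad) that bring pixel (u, v) to the image centre."""
    yaw = math.atan2(u - width / 2.0, fx)     # positive: target right of centre
    pitch = math.atan2(v - height / 2.0, fy)  # positive: target below centre
    return yaw, pitch

yaw, pitch = rotation_to_center(900, 200)
print(f"rotate yaw {math.degrees(yaw):.1f} deg, pitch {math.degrees(pitch):.1f} deg")
```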
The inventor found that when the lens rotation angular rate parameter is used directly as the input of the gimbal PID controller, severe over-rotation occurs. For example, when the gimbal actually needs to rotate from 20° to 40°, it over-rotates from 20° to 45°, swings back to 35°, and only then rotates to 40°; over-rotation may even occur several times. This not only produces large rotation excursions that increase motion blur in the captured images and degrade recognition accuracy, but also easily loses the target if it moves too fast.
In the invention, a parameter correction sub-module is provided in the information processing module to correct the lens rotation angular rate parameter output by the image recognition and tracking sub-module, so as to eliminate or mitigate the over-rotation phenomenon.
Specifically, the parameter correction sub-module acquires the detection data of the gyroscope, the accelerometer and the magnetometer, receives the lens rotation angular rate parameter output by the image recognition and tracking sub-module, corrects the parameter with the detection data, and outputs the corrected lens rotation angular rate parameter to the gimbal PID controller sub-module.
The gimbal PID controller sub-module receives the corrected lens rotation angular rate parameter and uses it as the input of the PID controller to control the rotation of the motor in the photoelectric pod, thereby steering the lens.
By fusing the detection data of the gyroscope, the accelerometer and the magnetometer with the received gimbal rotation command, the photoelectric pod lens rotates stably: lens shake is greatly reduced, captured images are clearer, lens over-rotation is markedly reduced, and target tracking is more stable.
In the present invention, the specific hardware of the information processing module is not particularly limited; a high-performance computing device, such as an Intel NUC, an NVIDIA TX2, a Hua engine ibax-R1000, or the like, is preferably used.
Preferably, the unmanned aerial vehicle airborne vision intelligent processing system further comprises a screen projector connected with the information processing module, which transmits the images received by the information processing module to the ground base station, so that ground staff can observe the state of the unmanned aerial vehicle.
The screen projector is a device capable of transmitting image information wirelessly and is provided with an independent signal-processing and transmitting chip, such as an RX5808 or an FPV-40Ch-RC; a wireless screen projector matched to the NUC is preferably adopted. Using the screen projector to transmit the image information captured by the unmanned aerial vehicle, compared with the traditional approach of transmitting it directly through the communication module, reduces the computational load of the information processing module, which can then recognize and process images more efficiently; at the same time, image transmission does not occupy the bandwidth of the communication module, communication interference is reduced, and the reliability of the unmanned aerial vehicle is improved.
On the other hand, the invention also provides an unmanned aerial vehicle airborne vision intelligent processing method, comprising the following steps:
S1, collecting an image;
S2, identifying a target in the image and outputting a lens rotation angular rate parameter;
S3, correcting the lens rotation angular rate parameter, and controlling the rotation of the photoelectric pod lens through a PID controller.
In step S1, the image is captured by the photoelectric pod, and the captured image is transferred to the image recognition and tracking sub-module.
In a preferred embodiment, before the image is captured, a step S0 of selecting a photoelectric pod with a lens type suited to the task requirements may also be included.
Specifically, when the ambient light of the task is good, a photoelectric pod with a visible-light lens is selected; when the ambient light is poor, a photoelectric pod with an infrared lens is used. Preferably, ambient light is considered good when the illumination intensity is greater than two million lux, in which case the visible-light lens yields a high-definition image and improves recognizability; when the illumination intensity is lower than or equal to two million lux, the infrared lens is adopted to obtain a relatively clear, low-noise image and reduce the recognition error rate.
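As a minimal illustration of this selection rule, the snippet below encodes the stated threshold; the illuminance input is assumed to come from an external light sensor, and the threshold value simply follows the text above.

```python
# A sketch of the lens-selection rule of step S0; threshold per the text.
def choose_lens(illuminance_lux):
    # Bright scene: visible-light lens for a high-definition image;
    # dim scene: infrared lens for a low-noise image.
    return "visible" if illuminance_lux > 2_000_000 else "infrared"

print(choose_lens(50_000))  # -> 'infrared' under the stated threshold
```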
In step S2, the image is processed by the image recognition and tracking sub-module, which is provided with a neural network; the target is recognized by the neural network, and the rotation angular rate parameter of the photoelectric pod lens required to move the target to the middle position of the image is calculated.
In the present invention, the specific method of identifying the target in the image and outputting the lens rotation angular rate parameter is not particularly limited; any known method may be used, for example the methods of patent nos. CN201520131508.5 and CN201710014199.7, which are not described further here.
In step S3, the correction of the lens rotation angular rate parameter is performed by fusing the information detected by the inertial measurement unit with the lens rotation angular rate parameter.
Further, the information detected by the inertial measurement unit includes gyroscope detection information, magnetometer detection information and accelerometer detection information.
In the invention, the rotation angular rate parameter of the photoelectric pod lens is described by an attitude quaternion. The quaternion method, proposed by the mathematician W. R. Hamilton in 1843, is a common attitude-calculation approach; here the lens rotation angular rate parameter is expressed in the gyroscope coordinate system S.
Specifically, the method comprises:
S31, fusing the gyroscope detection information to obtain a gyroscope fusion attitude quaternion;
S32, fusing the magnetometer detection information and the accelerometer detection information to obtain a magnetometer-accelerometer fusion attitude quaternion;
S33, linearly fusing the gyroscope fusion attitude quaternion and the magnetometer-accelerometer fusion attitude quaternion to complete the correction, and transmitting the corrected attitude quaternion to the PID controller to steer the lens.
In step S31, at time t, the attitude quaternion characterizing the lens rotation angular rate parameter passed by the image recognition and tracking sub-module is denoted:

$$q_{c,t}=\begin{bmatrix}q_{0}&q_{1}&q_{2}&q_{3}\end{bmatrix}\qquad\text{(formula one)}$$

After this attitude quaternion is filtered by the Kalman filter, the attitude quaternion estimate $\hat{q}_{t-1}$ characterizing the lens rotation angular rate at the previous moment (time t-1) is obtained.
In the invention, the gyroscope fusion attitude quaternion is obtained by fusing the gyroscope detection information with this estimate, and can be expressed as:

$$q_{\omega,t}=\hat{q}_{t-1}+{}^{S}\dot{q}_{\omega,t}\,\Delta t\qquad\text{(formula two)}$$

$$ {}^{S}\dot{q}_{\omega,t}=\tfrac{1}{2}\,\hat{q}_{t-1}\otimes\begin{bmatrix}0&{}^{S}\omega_{t}\end{bmatrix}\qquad\text{(formula three)} $$

wherein ${}^{S}\omega_{t}=\begin{bmatrix}\omega_{x}&\omega_{y}&\omega_{z}\end{bmatrix}$ is the vector of angular velocity components about each axis in the gyroscope detection information, and Δt is the gyroscope detection period.
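A direct Python rendering of formulas two and three follows: one propagation step of the previous estimate by the measured body rates. The (w, x, y, z) quaternion layout, the sample values and the final renormalization are assumptions for illustration.

```python
# Sketch of formulas two and three: q_w = q_hat + dq*dt, with
# dq = 0.5 * q_hat (x) (0, wx, wy, wz); dt is an assumed sample period.
def quat_mul(a, b):
    """Hamilton product of two quaternions in (w, x, y, z) order."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def gyro_propagate(q_hat, omega, dt):
    """One step of the gyroscope fusion attitude quaternion (formula two)."""
    dq = tuple(0.5 * c for c in quat_mul(q_hat, (0.0, *omega)))  # formula three
    q = tuple(qi + dqi * dt for qi, dqi in zip(q_hat, dq))
    n = sum(c * c for c in q) ** 0.5
    return tuple(c / n for c in q)   # renormalize (added assumption)

q_w = gyro_propagate((1.0, 0.0, 0.0, 0.0), (0.01, -0.02, 0.005), dt=0.01)
```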
In step S32, Kalman filtering is first performed on the accelerometer detection information; the Kalman-filtered accelerometer measurement is denoted ${}^{S}\hat{a}_{t}$ (formula five), and $J_{a}$ is the Jacobian matrix of the derivative of the accelerometer error function $f_{a}$ (formula six). Substituting formulas five and six into formula four yields the gradient of the acceleration function:

$$\nabla f_{a}=J_{a}^{\mathsf{T}}\,f_{a}\qquad\text{(formula four)}$$
The magnetometer detection information is ${}^{S}\hat{m}_{t}$; combining it with the local Earth magnetic field vector ${}^{E}b$ gives the gradient of the Earth-magnetic-field vector error function:

$$\nabla f_{m}=J_{m}^{\mathsf{T}}\,f_{m}\qquad\text{(formula seven)}$$

Combining the gradient of the acceleration function $\nabla f_{a}$ with the gradient of the Earth-magnetic-field vector error function $\nabla f_{m}$ gives the total error function gradient:

$$\nabla f=\nabla f_{a}+\nabla f_{m}\qquad\text{(formula eight)}$$
The total error function gradient is then fused with the previous estimate by iterative gradient descent, yielding the attitude quaternion of the lens rotation angular rate parameter after fusing the magnetometer and the accelerometer, i.e. the magnetometer-accelerometer fusion attitude quaternion:

$$q_{\nabla,t}=\hat{q}_{t-1}-\alpha\,\frac{\nabla f}{\lVert\nabla f\rVert}\qquad\text{(formula nine)}$$

wherein α is the accelerometer noise amplification coefficient; it depends on the accelerometer hardware used and can be taken from the accelerometer factory specifications or obtained by hardware testing of the accelerometer.
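Formula nine is a single step against the normalized total gradient. The sketch below assumes the total gradient ∇f of formula eight is already available as a 4-vector; the renormalization at the end and the sample numbers are illustrative assumptions (α takes the value quoted in example 1).

```python
# Sketch of the gradient-descent correction (formula nine): step from the
# previous estimate against the normalized total error gradient grad_f.
def gradient_step(q_hat, grad_f, alpha):
    n = sum(g * g for g in grad_f) ** 0.5 or 1.0   # avoid division by zero
    q = tuple(qi - alpha * gi / n for qi, gi in zip(q_hat, grad_f))
    m = sum(c * c for c in q) ** 0.5
    return tuple(c / m for c in q)                 # renormalize (assumption)

q_grad = gradient_step((1.0, 0.0, 0.0, 0.0),
                       (0.02, -0.01, 0.03, 0.005), alpha=0.000308)
```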
In step S33, the gyroscope fusion attitude quaternion and the magnetometer-accelerometer fusion attitude quaternion are linearly fused in the following manner to obtain the corrected attitude quaternion $\hat{q}_{t}$:

$$\hat{q}_{t}=\gamma\,q_{\nabla,t}+(1-\gamma)\,q_{\omega,t},\qquad 0\le\gamma\le 1\qquad\text{(formula ten)}$$

wherein γ is a dynamic weight that changes with the motion state of the photoelectric pod and can be expressed as:

$$\gamma=\frac{\beta}{\alpha/\Delta t+\beta}\qquad\text{(formula eleven)}$$

wherein α/Δt is the convergence rate of the accelerometer quaternion and β is the divergence rate of the gyroscope measurement error quaternion; β depends on the gyroscope hardware used and can be taken from the gyroscope factory specifications or obtained by hardware testing of the gyroscope.

According to the invention, the corrected attitude quaternion $\hat{q}_{t}$ is transmitted to the PID controller and used as its input; the PID controller controls the rotation of the photoelectric pod motor according to this input, thereby steering the lens.
In a preferred embodiment, since the convergence rate of the accelerometer quaternion is much greater than the divergence rate of the gyroscope, i.e. $\alpha/\Delta t\gg\beta$, the dynamic weight can be approximated as:

$$\gamma\approx\frac{\beta\,\Delta t}{\alpha}$$

Substituting this into formulas ten and eleven, the corrected attitude quaternion can also be expressed as:

$$\hat{q}_{t}=\hat{q}_{t-1}+\left({}^{S}\dot{q}_{\omega,t}-\beta\,\frac{\nabla f}{\lVert\nabla f\rVert}\right)\Delta t\qquad\text{(formula twelve)}$$

Formula twelve simplifies the calculation of the correction process and, with the same lens steering control effect, improves calculation speed and response sensitivity.
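Formulas ten and twelve can be sketched as follows. Here dq_omega is the quaternion derivative of formula three (the intermediate quantity in the propagation sketch above); γ and dt are assumed example values, while β takes the value quoted in example 1.

```python
# Sketches of formula ten (linear fusion) and formula twelve (simplified
# fused update); all sample inputs below are illustrative assumptions.
def linear_fusion(q_grad, q_omega, gamma):
    """Formula ten: weighted blend of the two attitude estimates."""
    return tuple(gamma * a + (1 - gamma) * b for a, b in zip(q_grad, q_omega))

def simplified_update(q_hat, dq_omega, grad_f, beta, dt):
    """Formula twelve: one fused step against the normalized gradient."""
    n = sum(g * g for g in grad_f) ** 0.5 or 1.0
    q = tuple(qi + (dqi - beta * gi / n) * dt
              for qi, dqi, gi in zip(q_hat, dq_omega, grad_f))
    m = sum(c * c for c in q) ** 0.5
    return tuple(c / m for c in q)   # renormalize the attitude quaternion

q_t = simplified_update((1.0, 0.0, 0.0, 0.0),
                        (0.0, 0.005, -0.01, 0.0025),  # dq from formula three
                        (0.02, -0.01, 0.03, 0.005),   # total gradient
                        beta=0.0029, dt=0.01)
```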
Examples
Example 1
A simulation experiment was carried out in which an unmanned aerial vehicle carrying a photoelectric pod, an information processing module and a screen projector tracked a moving target: a small multi-rotor aircraft flying rapidly in a figure-of-eight pattern. The photoelectric pod carried on the unmanned aerial vehicle was a two-degree-of-freedom photoelectric pod with a visible-light lens, and the information processing module comprised an image recognition and tracking sub-module, a gimbal PID controller sub-module and a parameter correction sub-module.
In step S1, the unmanned aerial vehicle captures images through the photoelectric pod;
in step S2, the images are processed by the image recognition and tracking sub-module, and the rotation angular rate parameter of the photoelectric pod lens required to move the target to the middle position of the image is calculated;
in step S3, the information detected by the inertial measurement unit is fused with the lens rotation angular rate parameter by the parameter correction sub-module to correct the parameter, and the PID controller controls the rotation of the photoelectric pod lens with the corrected lens rotation angular rate parameter as input. In step S31, the gyroscope fusion attitude quaternion is obtained by fusing the gyroscope detection information according to formula two:

$$q_{\omega,t}=\hat{q}_{t-1}+{}^{S}\dot{q}_{\omega,t}\,\Delta t$$

wherein ${}^{S}\omega_{t}$ is the vector of angular velocity components about each axis in the gyroscope detection information;
in step S32, the magnetometer-accelerometer fusion attitude quaternion is obtained by fusing the magnetometer detection information and the accelerometer detection information according to formula nine:

$$q_{\nabla,t}=\hat{q}_{t-1}-\alpha\,\frac{\nabla f}{\lVert\nabla f\rVert}$$

wherein the accelerometer noise amplification coefficient α is 0.000308, and the total error function gradient $\nabla f$ is obtained from formula eight by combining the gradient of the acceleration function $\nabla f_{a}$ and the gradient of the Earth-magnetic-field vector error function $\nabla f_{m}$, computed as in formulas four and seven;
in step S33, the gyroscope fusion attitude quaternion and the magnetometer-accelerometer fusion attitude quaternion are linearly fused according to formula ten to obtain the corrected attitude quaternion $\hat{q}_{t}$, wherein the divergence rate β of the gyroscope measurement error quaternion is 0.0029.
The corrected attitude quaternion $\hat{q}_{t}$ is transmitted to the PID controller, which then steers the lens; the PID controller parameters are: Kp = 60, Ki = 0.2, Kd = 8.
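A minimal discrete PID controller sketch with the gains stated above follows; the sample time dt and the setpoint/measurement values are assumed for illustration.

```python
# Discrete PID controller sketch using the example gains Kp=60, Ki=0.2, Kd=8;
# dt and the inputs in the usage line are assumed values.
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral, self.prev_err = 0.0, 0.0

    def step(self, setpoint, measured):
        err = setpoint - measured
        self.integral += err * self.dt              # accumulate I term
        deriv = (err - self.prev_err) / self.dt     # finite-difference D term
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

pid = PID(kp=60.0, ki=0.2, kd=8.0, dt=0.01)
motor_cmd = pid.step(setpoint=0.35, measured=0.20)  # angular-rate tracking
```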
Comparative example 1
The simulation experiment of example 1 was repeated, except that no parameter correction sub-module was provided: the lens rotation angular rate parameter was not corrected, and the PID controller controlled the rotation of the photoelectric pod lens with the uncorrected lens rotation angular rate parameter as input.
Experimental example 1
For example 1 and comparative example 1, the target recognition success rate within 10 minutes after the unmanned aerial vehicle found the target was recorded. The target recognition rate is the ratio of the number of images in which the image recognition and tracking sub-module recognizes the target to the total number of images transmitted by the photoelectric pod.
The results are shown in Table 1:

Table 1 (table content not reproduced in this text)

As can be seen from Table 1, the proportion of successfully recognized targets in example 1 is clearly higher than in comparative example 1. This shows that correcting the lens rotation angular rate parameter improves the rotational stability of the photoelectric pod lens: lens shake is greatly reduced, captured images are clearer, and the image recognition and tracking sub-module recognizes targets in them more easily.
In the description of the present invention, it should be noted that the terms "upper", "lower", "inner", "outer", "front", "rear", and the like indicate orientations or positional relationships based on operational states of the present invention, and are only used for convenience of description and simplification of description, but do not indicate or imply that the referred device or element must have a specific orientation, be constructed in a specific orientation, and be operated, and thus should not be construed as limiting the present invention. Furthermore, the terms "first," "second," "third," and "fourth" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
In the description of the present invention, it should be noted that, unless otherwise specifically stated or limited, the terms "mounted," "connected," and "connected" are to be construed broadly, e.g., as meaning either a fixed connection, a removable connection, or an integral connection; can be mechanically or electrically connected; the connection may be direct or indirect via an intermediate medium, and may be a communication between the two elements. The specific meanings of the above terms in the present invention can be understood in specific cases to those skilled in the art.
The present invention has been described above in connection with preferred embodiments, but these embodiments are merely exemplary and illustrative. On this basis, various substitutions and modifications may be made to the invention, and these all fall within the protection scope of the invention.
Claims (10)
1. An unmanned aerial vehicle airborne vision intelligent processing system, characterized in that it comprises an unmanned aerial vehicle, and a photoelectric pod and an information processing module mounted on the unmanned aerial vehicle.
2. The unmanned aerial vehicle airborne vision intelligent processing system of claim 1, wherein
the photoelectric pod is provided with an inertial measurement unit for detecting attitude information of the photoelectric pod, the inertial measurement unit comprising a gyroscope, a magnetometer and an accelerometer;
the information processing module comprises an image recognition and tracking sub-module, a gimbal PID controller sub-module and a parameter correction sub-module,
the image recognition and tracking sub-module is used for recognizing a target in an image, calculating the rotation angle the photoelectric pod lens requires to move the target to the middle position of the image, and outputting a lens rotation angular rate parameter;
the parameter correction sub-module is used for correcting the lens rotation angular rate parameter output by the image recognition and tracking sub-module;
and the gimbal PID controller sub-module receives the corrected lens rotation angular rate parameter and uses it as the input of a PID controller to control the rotation of a motor in the photoelectric pod.
3. The unmanned aerial vehicle airborne vision intelligent processing system of claim 1, wherein
the parameter correction sub-module receives the lens rotation angular rate parameter output by the image recognition and tracking sub-module, acquires detection data of a gyroscope, an accelerometer and a magnetometer, and corrects the lens rotation angular rate parameter with the detection data.
4. The unmanned aerial vehicle airborne vision intelligent processing system of claim 1, wherein
the system further comprises a screen projector, the screen projector being connected with the information processing module and transmitting the images received by the information processing module to the ground base station.
5. An unmanned aerial vehicle airborne vision intelligent processing method, comprising the following steps:
S1, collecting an image;
S2, identifying a target in the image and outputting a lens rotation angular rate parameter;
and S3, correcting the lens rotation angular rate parameter, and controlling the rotation of the photoelectric pod lens through a PID controller.
6. The unmanned aerial vehicle airborne vision intelligent processing method according to claim 5, wherein
in step S1, the image is captured by the photoelectric pod, and the captured image is transferred to the image recognition and tracking sub-module.
7. The unmanned aerial vehicle airborne vision intelligent processing method according to claim 5, wherein
in step S2, the image is processed by the image recognition and tracking sub-module, which is provided with a neural network; the target is recognized by the neural network, and the rotation angular rate parameter of the photoelectric pod lens required to move the target to the middle position of the image is calculated.
8. The unmanned aerial vehicle airborne vision intelligent processing method according to claim 5, wherein
in step S3, the lens rotation angular rate parameter is corrected by fusing the information detected by the inertial measurement unit, which includes gyroscope detection information, magnetometer detection information and accelerometer detection information, with the lens rotation angular rate parameter.
9. The unmanned aerial vehicle airborne vision intelligent processing method according to claim 8, wherein
step S3 includes the following sub-steps:
S31, fusing the gyroscope detection information to obtain a gyroscope fusion attitude quaternion;
S32, fusing the magnetometer detection information and the accelerometer detection information to obtain a magnetometer-accelerometer fusion attitude quaternion;
and S33, linearly fusing the gyroscope fusion attitude quaternion and the magnetometer-accelerometer fusion attitude quaternion to complete the correction, and transmitting the corrected attitude quaternion to the PID controller to steer the lens.
10. The unmanned aerial vehicle airborne vision intelligent processing method according to claim 9, wherein
in step S31, the gyroscope fusion attitude quaternion can be expressed as:

$$q_{\omega,t}=\hat{q}_{t-1}+{}^{S}\dot{q}_{\omega,t}\,\Delta t$$

wherein $\hat{q}_{t-1}$ is the attitude quaternion estimate characterizing the lens rotation angular rate at time t-1, obtained by filtering the attitude quaternion characterizing the lens rotation angular rate parameter with a Kalman filter; ${}^{S}\dot{q}_{\omega,t}$ is the differential of the attitude quaternion, and Δt is the detection period of the gyroscope; the differential of the attitude quaternion can be obtained by the following formula:

$$ {}^{S}\dot{q}_{\omega,t}=\tfrac{1}{2}\,\hat{q}_{t-1}\otimes\begin{bmatrix}0&{}^{S}\omega_{t}\end{bmatrix} $$

wherein ${}^{S}\omega_{t}$ is the vector of angular velocity components about each axis in the gyroscope detection information.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011111242.XA CN113141459B (en) | 2020-10-16 | 2020-10-16 | Unmanned aerial vehicle airborne vision intelligent processing system and method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113141459A | 2021-07-20
CN113141459B | 2022-04-05
Family
ID=76809740
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011111242.XA Active CN113141459B (en) | 2020-10-16 | 2020-10-16 | Unmanned aerial vehicle airborne vision intelligent processing system and method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113141459B (en) |
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102355574A (en) * | 2011-10-17 | 2012-02-15 | 上海大学 | Image stabilizing method of airborne tripod head moving target autonomous tracking system |
JP2013198118A (en) * | 2012-03-22 | 2013-09-30 | Toshiba Corp | Tracking device |
CN106053874A (en) * | 2015-04-01 | 2016-10-26 | 鹦鹉无人机股份有限公司 | Drone provided with a vertical-view video camera compensated for the instantaneous rotations for estimation of the horizontal speeds |
CN110824453A (en) * | 2020-01-10 | 2020-02-21 | 四川傲势科技有限公司 | Unmanned aerial vehicle target motion estimation method based on image tracking and laser ranging |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114593710A (en) * | 2022-03-04 | 2022-06-07 | 沃飞长空科技(成都)有限公司 | Unmanned aerial vehicle measuring method, system, electronic equipment and medium |
CN114593710B (en) * | 2022-03-04 | 2024-02-06 | 四川傲势科技有限公司 | Unmanned aerial vehicle measurement method, unmanned aerial vehicle measurement system, electronic equipment and medium |
PL442945A1 (en) * | 2022-11-24 | 2024-05-27 | Enprom Spółka Z Ograniczoną Odpowiedzialnością | Method of point monitoring and system for point monitoring of objects, especially power poles |
Also Published As
Publication number | Publication date |
---|---|
CN113141459B (en) | 2022-04-05 |
Legal Events

Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |