CN111369621B - Image positioning resolving method for mooring type lift-off platform - Google Patents

Image positioning resolving method for mooring type lift-off platform

Info

Publication number
CN111369621B
Authority
CN
China
Prior art keywords: image, points, segmentation, point, platform
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010165921.9A
Other languages
Chinese (zh)
Other versions
CN111369621A (en)
Inventor
王娇颖
李良福
刘培祯
姜旭
李红光
王洁
张莹
何曦
王超
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xian institute of Applied Optics
Original Assignee
Xian institute of Applied Optics
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xian institute of Applied Optics
Priority to CN202010165921.9A
Publication of CN111369621A
Application granted
Publication of CN111369621B
Legal status: Active

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/11 Region-based segmentation
    • G06T 7/136 Segmentation; Edge detection involving thresholding
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10048 Infrared image
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20112 Image segmentation details
    • G06T 2207/20164 Salient point detection; Corner detection

Abstract

The invention belongs to the technical field of image processing and particularly relates to an image positioning and resolving method for a tethered lift-off platform. Compared with the prior art, the method comprises a light-spot detection stage, a positioning stage and an orientation stage; the solution is computed from the imaged light spots, achieving accurate positioning and orientation of the tethered lift-off platform. Because the scheme does not use GPS or BeiDou technology, the many problems caused by those technologies are avoided entirely.

Description

Image positioning resolving method for mooring type lift-off platform
Technical Field
The invention belongs to the technical field of image processing and particularly relates to an image positioning and resolving method for a tethered lift-off platform.
Background
In modern warfare the battlefield situation changes in an instant and opportunities vanish in a moment; whether situational information such as enemy targets and the disposition of one's own forces can be obtained effectively in real time is the key to defeating the enemy.
Future reconnaissance must detect targets at ever greater ranges; mast-mounted electro-optics no longer meet this requirement, and the electro-optical system must be lifted to a greater height. The tethered lift-off platform offers a large coverage area, good low-altitude detection capability, strong survivability and a high cost-effectiveness ratio. A tethered rotor platform overcomes both the limited height of a mast and the short endurance of a free-flying rotorcraft; it can manoeuvre with its carrier vehicle, deploy and stow rapidly, and continuously reconnoitre and monitor key regions and airspaces over long periods, so it is gradually becoming an important component of national air-defence and coastal early-warning information networks. Image positioning and resolving is a key element of the tethered lift-off platform and therefore has great research value.
In a benign environment the tethered rotor platform is positioned with GPS or BeiDou, enabling automatic fixed-point take-off and landing or follow-up flight. In a harsh environment, however, the GPS or BeiDou system stops working once it is jammed, and a new positioning means is needed to control fixed-point take-off and landing or follow-up flight of the rotor platform.
GPS relies heavily on a fragile space-based satellite system; in wartime the satellites are easily jammed, damaged or attacked over the network, and their safety is difficult to guarantee effectively. Because the military code cannot be obtained, only the civil-code signals transmitted by the satellites can be received; the civil code has low accuracy, generally 15-20 m, and poor anti-jamming capability. GPS itself has no wide-area differential function and must be augmented by systems such as WAAS and StarFire to achieve wide-area differential corrections and improve positioning accuracy.
The BeiDou positioning mode also has limitations: (1) single-point positioning takes a relatively long time for the user; (2) whether in the dual-receive/single-transmit or single-receive/dual-transmit mode, the user terminal must transmit signals, so it is easily exposed and has poor resistance to attack; (3) the number of users is limited; and (4) it is for now unsuitable for highly manoeuvrable carriers.
Disclosure of Invention
Technical problem to be solved
The technical problem to be solved by the invention is how to overcome the many problems caused by relying on GPS or BeiDou technology in existing positioning schemes for the tethered lift-off platform.
(II) technical scheme
In order to solve the above technical problem, the invention provides an image positioning and resolving method for a tethered lift-off platform, which comprises the following steps:
step 1: for a rotor UAV, install a plurality of near-infrared light sources at suitable positions on the rotors, such that the outline of the area formed by the installation positions of all the near-infrared light sources is trapezoidal, thereby forming a trapezoidal four-point calibration light source;
step 2: after the rotor UAV is lifted off, a camera mounted on a ground lifting support monitors the four-point calibration light source; the camera is fitted with a narrow-band filter matched to the band of the near-infrared light sources, so that it acquires a light-spot image of the trapezoidal four-point calibration light source;
step 3: determine an image segmentation threshold from the statistical maximum V_max and the statistical mean V_mean of the image pixels in the light-spot image;
step 4: perform image binarization segmentation according to the image segmentation threshold;
step 5: perform region growing on the binary segmentation image;
the basic idea of region growing is to gather pixels with similar properties to form a region set, wherein all pixel points contained in the region set are growing points;
step 6: performing trapezoid fitting according to the growing points; the method specifically comprises the following steps:
step 61: removing the grown interference points aiming at all growth points contained in the region set;
step 62: searching the minimum circumscribed circle of the growth points after the interference elimination;
step 63: obtain the four growth points lying on the minimum circumscribed circle;
step 64: judge whether the figure formed by the four points satisfies the condition that one pair of opposite sides is parallel and the other pair is not; if the condition is met, the trapezoid fitting is successful and the next calculation is carried out;
if the condition is not met, processing moves on to the next frame of image and steps 5 to 64 are repeated until the trapezoid fitting succeeds;
step 7: position and orient the platform according to the fitted trapezoid vertices;
perform convex-hull detection on the four fitted light spots; the convex hull is the convex set enclosing the outermost layer of the points, the detected convex hull should be trapezoidal in shape, and the continuous sequence of convex-hull points is returned;
calculate the slopes of the four sides of the trapezoid from the positions of the continuous convex-hull point sequence, and determine the two parallel sides and the two oblique sides of the trapezoid from the slopes; calculate the intersection point of the diagonals, which is the positioning point of the lift-off platform; and calculate the direction perpendicular to the two parallel sides and pointing toward the short side, which is the orientation of the lift-off platform.
Wherein the rotor UAV comprises: a quadrotor UAV, a hexarotor UAV, or an octorotor UAV.
Wherein, the quantity of the near-infrared light sources is selected according to the requirement.
Wherein, the number of the near-infrared light sources is at least 4.
Wherein, the installation parameters of the near-infrared light source are as follows: four near-infrared light sources with the wavelengths of 780nm-920nm are installed at the bottoms of four motor supports of the rotor wing.
Wherein the segmentation parameter a takes a value of 0.7.
In the step 61, the grown interference points are removed by using a RANSAC method.
Wherein step 5 is implemented as follows: in a segmented region of the binary segmentation image, take any pixel point in a frame of image as a reference point and its position as the starting point of target growth; through similarity judgment of neighbourhood pixels, merge the pixels in the neighbourhood of the reference point that have the same or similar properties into the region containing the reference point; then take the newly merged pixel positions in the region as new reference points and continue the neighbourhood similarity judgment until no more pixels meeting the condition can be included, thereby forming a region set of pixel points, all of which are growth points.
Wherein step 3 determines the image segmentation threshold according to the following formula:
let the acquired light-spot image be I with resolution w x h; compute the statistical maximum V_max and the statistical mean V_mean of the image pixels, and obtain the image segmentation threshold T as:
T = a*V_max + (1-a)*V_mean
where the value of the segmentation parameter a lies between 0 and 1.
Step 4 performs image binarization segmentation according to the image segmentation threshold by the following formula: according to the image segmentation threshold T, binarization segmentation yields the binary segmentation image I_D(x,y):
I_D(x,y) = 1, if I(x,y) >= T
I_D(x,y) = 0, if I(x,y) < T
(III) advantageous effects
Compared with the prior art, the image positioning and resolving method for the tethered lift-off platform comprises a light-spot detection stage, a positioning stage and an orientation stage; the solution is computed from the imaged light spots, achieving accurate positioning and orientation of the tethered lift-off platform.
Because the scheme does not use GPS or BeiDou technology, the many problems caused by those technologies are avoided entirely.
Drawings
FIG. 1 is a schematic view of a tethered lift-off platform.
Fig. 2 is a schematic view of positioning light source installation.
Fig. 3 is a schematic diagram of the system composition.
Fig. 4 is a flow chart of the resolving process.
Fig. 5 is a schematic diagram of an input image (preferred embodiment: the number of light-source points is 4).
Fig. 6 is a light spot detection flowchart.
Fig. 7 is a schematic diagram of convex hull detection.
Fig. 8 is a schematic view of the location points and orientation directions.
Detailed Description
In order to make the objects, contents, and advantages of the present invention more apparent, the following detailed description of the present invention will be made in conjunction with the accompanying drawings and examples.
In order to solve the problems in the prior art, the invention provides an image positioning and resolving method for a tethered lift-off platform, which comprises the following steps:
step 1: as shown in figs. 1-3, for the rotor UAV, install a plurality of near-infrared light sources at suitable positions on the rotors, such that the outline of the area formed by the installation positions of all the near-infrared light sources is trapezoidal, thereby forming a trapezoidal four-point calibration light source;
step 2: after the rotor UAV is lifted off, a camera mounted on a ground lifting support (platform) monitors the four-point calibration light source; the camera is fitted with a narrow-band filter matched to the band of the near-infrared light sources, so that it acquires a light-spot image of the trapezoidal four-point calibration light source;
step 3: as shown in figs. 4-6, determine an image segmentation threshold from the statistical maximum V_max and the statistical mean V_mean of the image pixels in the light-spot image;
step 4: perform image binarization segmentation according to the image segmentation threshold;
step 5: perform region growing on the binary segmentation image;
the basic idea of region growing is to gather pixels with similar properties to form a region set, wherein all pixel points contained in the region set are growing points; the method specifically comprises the following steps: in a segmentation area of a binary segmentation image, taking any one pixel point as a reference point in a frame of image, taking the position of the reference point as a starting point of target growth, and merging pixels with the same or similar properties as the reference point in the neighborhood of the reference point into an area where the reference point is located through similarity judgment of neighborhood pixels; then, taking the newly combined pixel position in the region as a new reference point to continue the similarity judgment of the neighborhood pixels until no pixel meeting the condition can be included, thereby forming a region set of pixel points, wherein all the pixel points contained in the region set are growth points;
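A minimal sketch of this region-growing step is given below in Python (an assumed language for illustration; the patent does not prescribe an implementation). It assumes the binary segmentation image stores foreground pixels as non-zero values, and the function name is illustrative only.

import numpy as np

def region_grow(binary_img, seed, connectivity=8):
    """Collect all foreground pixels connected to 'seed'; these are the growth points."""
    h, w = binary_img.shape
    if connectivity == 8:
        offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]
    else:
        offsets = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    visited = np.zeros(binary_img.shape, dtype=bool)
    stack, region = [seed], []
    while stack:
        y, x = stack.pop()
        if visited[y, x] or binary_img[y, x] == 0:
            continue                                   # pixel fails the similarity condition
        visited[y, x] = True
        region.append((y, x))                          # newly merged pixel becomes a growth point
        for dy, dx in offsets:
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and not visited[ny, nx]:
                stack.append((ny, nx))                 # neighbours become new reference points
    return region

Running this once per segmented spot, seeded at any foreground pixel of that spot, yields the region set of growth points described above.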
step 6: performing trapezoid fitting according to the growing points; the method specifically comprises the following steps:
step 61: for all growth points contained in the region set, remove the grown interference points using the RANSAC method;
step 62: find the minimum circumscribed circle of the growth points remaining after the RANSAC interference removal;
step 63: obtain the four growth points lying on the minimum circumscribed circle;
step 64: judge whether the figure formed by the four points satisfies the condition that one pair of opposite sides is parallel and the other pair is not; if the condition is met, the trapezoid fitting is successful and the next calculation is carried out;
if the condition is not met, processing moves on to the next frame of image and steps 5 to 64 are repeated until the trapezoid fitting succeeds;
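A possible sketch of steps 61-64, using Python with NumPy and OpenCV (an assumed toolchain), is shown below. It treats the input as a small set of candidate points already cleaned of interference (for example spot centroids or RANSAC-filtered growth points); the angle tolerance and function names are illustrative assumptions, not values fixed by the text.

import numpy as np
import cv2

def fit_trapezoid(points, angle_tol_deg=5.0):
    """points: (N,2) array of candidate coordinates after outlier removal."""
    pts = np.asarray(points, dtype=np.float32)
    (cx, cy), r = cv2.minEnclosingCircle(pts)              # step 62: minimum circumscribed circle
    dist_to_circle = np.abs(np.hypot(pts[:, 0] - cx, pts[:, 1] - cy) - r)
    corners = pts[np.argsort(dist_to_circle)[:4]]          # step 63: the four points on the circle
    hull = cv2.convexHull(corners).reshape(-1, 2)          # order the corners
    if hull.shape[0] != 4:
        return None
    # step 64: direction angle of each side, modulo 180 degrees
    angles = []
    for i in range(4):
        v = hull[(i + 1) % 4] - hull[i]
        angles.append(np.degrees(np.arctan2(v[1], v[0])) % 180.0)
    def parallel(a1, a2):
        d = abs(a1 - a2) % 180.0
        return min(d, 180.0 - d) < angle_tol_deg
    pair1 = parallel(angles[0], angles[2])                 # one pair of opposite sides
    pair2 = parallel(angles[1], angles[3])                 # the other pair
    return hull if pair1 != pair2 else None                # exactly one parallel pair = trapezoid

If the check fails, the caller simply moves on to the next frame, matching the retry behaviour described in step 64.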
step 7: position and orient the platform according to the fitted trapezoid vertices;
as shown in fig. 7, perform convex-hull detection on the four fitted light spots; the convex hull is the convex set enclosing the outermost layer of the points, the detected convex hull should be trapezoidal in shape, and the continuous sequence of convex-hull points is returned;
as shown in fig. 8, calculate the slopes of the four sides of the trapezoid from the positions of the continuous convex-hull point sequence, and determine the two parallel sides and the two oblique sides of the trapezoid from the slopes; the intersection point of the two diagonals is the positioning point of the lift-off platform, and the direction perpendicular to the two parallel sides and pointing toward the short side is the orientation of the lift-off platform.
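The geometry of step 7 can be sketched as follows; this is a minimal illustration under the assumption that hull holds the four trapezoid corners in convex-hull order (for example as returned by the fitting sketch above), and the function names are illustrative.

import numpy as np

def cross2(u, v):
    """z-component of the 2-D cross product."""
    return u[0] * v[1] - u[1] * v[0]

def line_intersection(a, b, c, d):
    """Intersection of line a-b with line c-d (2-D points)."""
    r, s = b - a, d - c
    t = cross2(c - a, s) / cross2(r, s)
    return a + t * r

def locate_and_orient(hull):
    p = [np.asarray(q, dtype=float) for q in hull]         # four corners in hull order
    anchor = line_intersection(p[0], p[2], p[1], p[3])     # diagonal intersection = positioning point

    unit = lambda v: v / np.linalg.norm(v)
    # the pair of opposite sides with the smaller direction difference is the parallel pair
    d02 = abs(cross2(unit(p[1] - p[0]), unit(p[3] - p[2])))
    d13 = abs(cross2(unit(p[2] - p[1]), unit(p[0] - p[3])))
    if d02 <= d13:
        side_a, side_b = (p[0], p[1]), (p[2], p[3])
    else:
        side_a, side_b = (p[1], p[2]), (p[3], p[0])
    len_a = np.linalg.norm(side_a[1] - side_a[0])
    len_b = np.linalg.norm(side_b[1] - side_b[0])
    long_side, short_side = (side_a, side_b) if len_a >= len_b else (side_b, side_a)

    # heading: perpendicular to the parallel sides, signed to point toward the short side
    tangent = unit(long_side[1] - long_side[0])
    normal = np.array([-tangent[1], tangent[0]])
    to_short = 0.5 * (short_side[0] + short_side[1]) - 0.5 * (long_side[0] + long_side[1])
    heading = normal if float(np.dot(normal, to_short)) > 0 else -normal
    return anchor, heading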
In this scheme the video image is processed to calculate the rotation and displacement offsets of the calibration light source within the field of view; the offsets undergo adaptive and smoothing data processing on the ground control, processing and display computer and are then fed to the flight control unit, which adjusts the position and attitude of the lift-off platform in real time. This forms a closed loop that completes the electro-optical alignment measurement and control and achieves high-precision aerial positioning and recovery.
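The adaptive and smoothing processing of the offsets is not detailed in the text; as a purely illustrative stand-in (an assumption, not the patent's algorithm), a simple exponential smoother over the measured position and heading offsets could look like this, with alpha as an assumed tuning parameter.

import numpy as np

class OffsetSmoother:
    """Exponentially smooth the measured offset vector before it is sent to flight control."""
    def __init__(self, alpha=0.2):
        self.alpha = alpha          # smoothing factor in (0, 1]; smaller means smoother output
        self.state = None

    def update(self, offset):
        offset = np.asarray(offset, dtype=float)
        if self.state is None:
            self.state = offset
        else:
            self.state = self.alpha * offset + (1.0 - self.alpha) * self.state
        return self.state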
Wherein the rotor UAV comprises: a quadrotor UAV, a hexarotor UAV, or an octorotor UAV.
Wherein, the quantity of the near-infrared light sources is selected according to the requirement.
Wherein, the quantity of the near-infrared light sources is at least 4.
Wherein, the installation parameters of the near-infrared light source are as follows: four near-infrared light sources with the wavelengths of 780nm-920nm are installed at the bottoms of four motor supports of the rotor wing.
Wherein the segmentation parameter a takes a value of 0.7.
Wherein, in the step 61, the grown interference points are removed by the RANSAC method.
Wherein step 5 is implemented as follows: in a segmented region of the binary segmentation image, take any pixel point in a frame of image as a reference point and its position as the starting point of target growth; through similarity judgment of neighbourhood pixels, merge the pixels in the neighbourhood of the reference point that have the same or similar properties into the region containing the reference point; then take the newly merged pixel positions in the region as new reference points and continue the neighbourhood similarity judgment until no more pixels meeting the condition can be included, thereby forming a region set of pixel points, all of which are growth points.
Wherein step 3 determines the image segmentation threshold according to the following formula:
let the acquired light-spot image be I with resolution w x h; compute the statistical maximum V_max and the statistical mean V_mean of the image pixels, and obtain the image segmentation threshold T as:
T = a*V_max + (1-a)*V_mean
where the value of the segmentation parameter a lies between 0 and 1.
Step 4 performs image binarization segmentation according to the image segmentation threshold by the following formula: according to the image segmentation threshold T, binarization segmentation yields the binary segmentation image I_D(x,y):
I_D(x,y) = 1, if I(x,y) >= T
I_D(x,y) = 0, if I(x,y) < T
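As a minimal sketch of the formulas of steps 3 and 4 (Python/NumPy assumed; whether the binary image is stored as 0/1 or 0/255 is an implementation choice not fixed by the text):

import numpy as np

def binarize_spot_image(img, a=0.7):
    """Compute T = a*V_max + (1-a)*V_mean and threshold the light-spot image."""
    v_max = float(img.max())                  # statistical maximum V_max
    v_mean = float(img.mean())                # statistical mean V_mean
    t = a * v_max + (1.0 - a) * v_mean        # image segmentation threshold T
    binary = (img >= t).astype(np.uint8)      # binary segmentation image I_D (stored as 0/1)
    return t, binary

With the preferred value a = 0.7 given above, only pixels close to the brightest values survive the threshold, which suits a narrow-band filtered image in which the calibration light sources dominate.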
The above description is only a preferred embodiment of the present invention, and it should be noted that, for those skilled in the art, several modifications and variations can be made without departing from the technical principle of the present invention, and these modifications and variations should also be regarded as the protection scope of the present invention.

Claims (10)

1. A tethered airborne platform image positioning and resolving method is characterized by comprising the following steps:
step 1: for a rotor UAV, install a plurality of near-infrared light sources at suitable positions on the rotors, such that the outline of the area formed by the installation positions of all the near-infrared light sources is trapezoidal, thereby forming a trapezoidal four-point calibration light source;
step 2: after the rotor UAV is lifted off, a camera mounted on a ground lifting support monitors the four-point calibration light source; the camera is fitted with a narrow-band filter matched to the band of the near-infrared light sources, so that it acquires a light-spot image of the trapezoidal four-point calibration light source;
step 3: determine an image segmentation threshold from the statistical maximum V_max and the statistical mean V_mean of the image pixels in the light-spot image;
step 4: perform image binarization segmentation according to the image segmentation threshold;
step 5: perform region growing on the binary segmentation image;
the basic idea of region growing is to gather pixels with similar properties to form a region set, wherein all pixel points contained in the region set are growing points;
step 6: performing trapezoid fitting according to the growing points; the method specifically comprises the following steps:
step 61: removing the grown interference points aiming at all growth points contained in the region set;
step 62: searching the minimum circumscribed circle of the growth points after the interference elimination;
step 63: obtain the four growth points lying on the minimum circumscribed circle;
step 64: judge whether the figure formed by the four points satisfies the condition that one pair of opposite sides is parallel and the other pair is not; if the condition is met, the trapezoid fitting is successful and the next calculation is carried out;
if the condition is not met, processing moves on to the next frame of image and steps 5 to 64 are repeated until the trapezoid fitting succeeds;
step 7: position and orient the platform according to the fitted trapezoid vertices;
perform convex-hull detection on the four fitted light spots, wherein the convex hull is the convex set enclosing the outermost layer of the points, the shape of the detected convex hull is trapezoidal, and the continuous sequence of convex-hull points is returned;
calculate the slopes of the four sides of the trapezoid from the positions of the continuous convex-hull point sequence, and determine the two parallel sides and the two oblique sides of the trapezoid from the slopes; calculate the intersection point of the diagonals, which is the positioning point of the lift-off platform; and calculate the direction perpendicular to the two parallel sides and pointing toward the short side, which is the orientation of the lift-off platform.
2. The tethered airborne platform image positioning and resolving method of claim 1, wherein the rotor UAV comprises: a quadrotor UAV, a hexarotor UAV, or an octorotor UAV.
3. The tethered airborne platform image location resolution method of claim 1, wherein the number of near-infrared light sources is selected as needed.
4. The tethered airborne platform image location resolution method of claim 1, wherein the number of near-infrared light sources is at least 4.
5. The tethered airborne platform image positioning solution method of claim 1, wherein the installation parameters of the near-infrared light source are: four near-infrared light sources with the wavelengths of 780nm-920nm are installed at the bottoms of four motor supports of the rotor wing.
6. The tethered airborne platform image positioning solution method of claim 1, wherein the segmentation parameter a takes the value 0.7.
7. The image positioning and resolving method for the tethered airborne platform as described in claim 1, wherein said step 61 eliminates the grown interference points by RANSAC method.
8. The tethered airborne platform image positioning and resolving method of claim 1, wherein step 5 is implemented as follows: in a segmented region of the binary segmentation image, take any pixel point in a frame of image as a reference point and its position as the starting point of target growth; through similarity judgment of neighbourhood pixels, merge the pixels in the neighbourhood of the reference point that have the same or similar properties into the region containing the reference point; then take the newly merged pixel positions in the region as new reference points and continue the neighbourhood similarity judgment until no more pixels meeting the condition can be included, thereby forming a region set of pixel points, all of which are growth points.
9. The tethered airborne platform image positioning and resolving method of claim 1, wherein step 3 determines the image segmentation threshold according to the following formula:
let the acquired light-spot image be I with resolution w x h; compute the statistical maximum V_max and the statistical mean V_mean of the image pixels, and obtain the image segmentation threshold T as:
T = a*V_max + (1-a)*V_mean
where the value of the segmentation parameter a lies between 0 and 1.
10. The tethered airborne platform image positioning solution method of claim 1, wherein step 4 performs image binarization segmentation according to an image segmentation threshold value by the following formula;
according to the image segmentation threshold T, image binarization segmentation is carried out to obtain the binary segmentation image I_D(x,y):
I_D(x,y) = 1, if I(x,y) >= T
I_D(x,y) = 0, if I(x,y) < T
CN202010165921.9A 2020-03-11 2020-03-11 Image positioning resolving method for mooring type lift-off platform Active CN111369621B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010165921.9A CN111369621B (en) 2020-03-11 2020-03-11 Image positioning resolving method for mooring type lift-off platform

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010165921.9A CN111369621B (en) 2020-03-11 2020-03-11 Image positioning resolving method for mooring type lift-off platform

Publications (2)

Publication Number Publication Date
CN111369621A CN111369621A (en) 2020-07-03
CN111369621B true CN111369621B (en) 2023-03-24

Family

ID=71210760

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010165921.9A Active CN111369621B (en) 2020-03-11 2020-03-11 Image positioning resolving method for mooring type lift-off platform

Country Status (1)

Country Link
CN (1) CN111369621B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112197766B (en) * 2020-09-29 2023-04-28 西安应用光学研究所 Visual gesture measuring device for tethered rotor platform

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017067390A1 (en) * 2015-10-20 2017-04-27 努比亚技术有限公司 Method and terminal for obtaining depth information of low-texture regions in image
CN108873917A (en) * 2018-07-05 2018-11-23 太原理工大学 A kind of unmanned plane independent landing control system and method towards mobile platform
CN110196432A (en) * 2019-04-28 2019-09-03 湖南工学院 Deciduous forest tree grade parametric measurement method based on small light spot airborne radar

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017067390A1 (en) * 2015-10-20 2017-04-27 努比亚技术有限公司 Method and terminal for obtaining depth information of low-texture regions in image
CN108873917A (en) * 2018-07-05 2018-11-23 太原理工大学 A kind of unmanned plane independent landing control system and method towards mobile platform
CN110196432A (en) * 2019-04-28 2019-09-03 湖南工学院 Deciduous forest tree grade parametric measurement method based on small light spot airborne radar

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on the method of extracting image feature points of automatic-mechanism motion laws; Si Wenjuan et al.; Science & Technology Information; 2011-06-05 (No. 16); full text *

Also Published As

Publication number Publication date
CN111369621A (en) 2020-07-03

Similar Documents

Publication Publication Date Title
US9488630B2 (en) Integrated remote aerial sensing system
CN110221625B (en) Autonomous landing guiding method for precise position of unmanned aerial vehicle
CN110222612B (en) Dynamic target identification and tracking method for autonomous landing of unmanned aerial vehicle
CN108132675B (en) Autonomous path cruising and intelligent obstacle avoidance method for factory inspection unmanned aerial vehicle
CN102081801B (en) Multi-feature adaptive fused ship tracking and track detecting method
CN111179334A (en) Sea surface small-area oil spilling area detection system and detection method based on multi-sensor fusion
CN105527969B (en) A kind of mountain garden belt investigation and monitoring method based on unmanned plane
CN108038415B (en) Unmanned aerial vehicle automatic detection and tracking method based on machine vision
JP2008107941A (en) Monitoring apparatus
CN106197380A (en) Aquatic vegetation monitoring method based on unmanned plane and system
CN106502257A (en) A kind of unmanned plane precisely lands jamproof control method
CN111709994B (en) Autonomous unmanned aerial vehicle visual detection and guidance system and method
WO2020005152A1 (en) Vessel height detection through video analysis
CN110866483A (en) Dynamic and static combined visual detection and positioning method for foreign matters on airport runway
CN108198417A (en) A kind of road cruising inspection system based on unmanned plane
CN107742276A (en) One kind is based on the quick processing system of the airborne integration of unmanned aerial vehicle remote sensing image and method
CN111369621B (en) Image positioning resolving method for mooring type lift-off platform
CN113406014A (en) Oil spilling monitoring system and method based on multispectral imaging equipment
Briese et al. Vision-based detection of non-cooperative UAVs using frame differencing and temporal filter
CN112014856A (en) Road edge extraction method and device suitable for cross road section
CN114115233A (en) Unmanned aerial vehicle autonomous landing method based on unmanned ship attitude active feedback
CN116363157A (en) Overhead transmission line edge positioning method, system, electronic equipment and medium
CN109765931B (en) Near-infrared video automatic navigation method suitable for breakwater inspection unmanned aerial vehicle
CN110673622A (en) Unmanned aerial vehicle automatic carrier landing guiding method and system based on visual images
CN109584264B (en) Unmanned aerial vehicle vision guiding aerial refueling method based on deep learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant