CN112232132A - Target identification and positioning method fusing navigation information - Google Patents

Target identification and positioning method fusing navigation information

Info

Publication number
CN112232132A
CN112232132A (application CN202010988347.7A)
Authority
CN
China
Prior art keywords
target
scale
anchor
detection
aircraft
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010988347.7A
Other languages
Chinese (zh)
Inventor
王辉
贾自凯
林德福
宋韬
何绍溟
郑多
范世鹏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Institute of Technology BIT
Original Assignee
Beijing Institute of Technology BIT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Institute of Technology BIT filed Critical Beijing Institute of Technology BIT
Priority to CN202010988347.7A priority Critical patent/CN112232132A/en
Publication of CN112232132A publication Critical patent/CN112232132A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • G06V20/13Satellite images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks

Abstract

The invention discloses a target identification and positioning method fusing navigation information. The method combines the altitude information given by the aircraft altimeter to obtain the maximum pixel size of a target in the field of view, narrowing the global random-anchor problem to a random anchor within a small scale range and improving detection efficiency; meanwhile, the actual position of the target is resolved from the navigation equipment. The target identification and positioning method fusing navigation information provided by the invention achieves fast and accurate positioning of ground targets, improving detection efficiency by 76.2%.

Description

Target identification and positioning method fusing navigation information
Technical Field
The invention relates to the technical field of computer vision and remote sensing target detection, in particular to a target identification and positioning method fusing navigation information.
Background
In the field of remote-sensing target detection by aircraft, detection scenes have grown steadily more complex, and the target detection algorithms of traditional image processing demand ever more computation and gradually fail to meet requirements. In recent years machine learning has developed rapidly, and the combination of machine vision with machine learning has become a clear trend. The advent of convolutional neural networks greatly improved both detection speed and accuracy; deep-learning algorithms such as YOLO and SSD adopt a direct-regression approach in which a convolutional network directly outputs target position and category information together with a target confidence score. The deep-learning R-CNN family of target detection algorithms further improves detection precision, but the large computation it introduces makes its real-time performance inferior to the YOLO and SSD algorithms. At present, the YOLOv3 family of algorithms represents a practical trade-off between detection speed and detection precision.
One important factor affecting the speed of convolutional detection algorithms is the choice of anchor sizes in the convolutional network. Researchers must set different anchor sizes for specific detection targets and performance goals, and the choice varies with circumstances: anchor sizes may be reduced gradually, or the best-performing sizes may be found through random experiments.
In practice, however, keeping the network model's original anchor settings in the training stage yields unsatisfactory detection speed and precision; gradually reducing the anchor sizes according to the training data set is computationally expensive; and anchors determined through random experiments are somewhat accidental and case-specific, giving the detection algorithm poor generalization.
Therefore, how to set the anchor, and on what basis, is the key to optimizing the detection effect, and a method is urgently needed to improve the aircraft's detection performance on remote-sensing targets.
Disclosure of Invention
To overcome these problems, the inventors made a diligent study and designed a target identification and positioning method fusing navigation information. The method combines the altitude information given by the aircraft altimeter to obtain the maximum pixel size of a target in the field of view, narrows the global random-anchor problem to a random anchor within a small scale range, and improves detection efficiency; meanwhile, the actual position of the target can be resolved from the navigation equipment, enabling fast and accurate positioning of ground targets. The invention was thereby completed.
Specifically, the present invention aims to provide the following:
in a first aspect, a target identification and positioning method fused with navigation information is provided, the method includes the following steps:
step 1, training to obtain a target detection network;
step 2, obtaining an image to be detected;
step 3, carrying out target detection on the image to be detected;
and 4, positioning the target.
Wherein, step 1 comprises the following substeps:
step 1-1, labeling a training data set;
step 1-2, constructing a detection network;
step 1-3, modifying the anchor scale of the network;
and 1-4, training the network until convergence.
In step 1-3, the anchor scales of different feature layers of the network are modified according to the target scale range of the training set, and are preferably obtained by the following formula:
S_k = S_min + (S_max − S_min) / (m − 1) · (k − 1),   k ∈ [1, m]

where S_k denotes the ratio of the prior box of the k-th feature layer to the original image size; S_max denotes the scale value of the highest feature layer and S_min the scale value of the lowest feature layer, both set according to the detection target; m denotes the number of feature layers; and k is the feature-layer index.
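As an illustrative sketch (not part of the patent text), the per-layer scale formula above can be evaluated as follows; the function name and parameter names are hypothetical:

```python
def layer_scales(s_min, s_max, m):
    """Return the prior-box / original-image scale ratio S_k for each of
    m feature layers, linearly interpolated between s_min and s_max:
    S_k = s_min + (s_max - s_min) / (m - 1) * (k - 1), k = 1..m."""
    return [s_min + (s_max - s_min) * (k - 1) / (m - 1) for k in range(1, m + 1)]
```

For example, with S_min = 0.2, S_max = 0.9 and m = 6 layers, the scales step evenly from 0.2 up to 0.9.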
Wherein, step 3 comprises the following substeps:
step 3-1, extracting the features of the image to be detected to obtain a multilayer feature map;
step 3-2, determining the anchor scale of the detection network;
3-3, generating a plurality of target frames on each point of the characteristic diagram according to the determined anchor scale;
and 3-4, obtaining the category and coordinate information of the target to be detected.
Wherein, the step 3-2 comprises the following substeps:
step 3-2-1, determining a target imaging scale according to the camera height information and the camera imaging rule;
and 3-2-2, determining anchor scales in the feature maps of different layers according to the obtained target imaging scale.
In step 4, the position of the target relative to the aircraft is obtained, the position of the aircraft is obtained according to the aircraft navigation equipment, and then the actual position of the detection target is obtained.
In a second aspect, a target identification and positioning system fusing navigation information is provided, wherein the system comprises an image acquisition unit, an aircraft information acquisition unit, a target detection unit and a target positioning unit,
the image acquisition unit is used for acquiring an image of a target to be detected;
the aircraft information acquisition unit is used for acquiring the height and the position of the aircraft;
the target detection unit is used for acquiring target categories and coordinate information;
the target positioning unit is used for obtaining the actual position of the target.
Wherein the target detection unit comprises a feature extraction subunit and an anchor scale setting subunit,
the characteristic extraction subunit is used for obtaining a multilayer characteristic diagram of the image to be detected;
and the anchor scale setting subunit is used for determining the anchor scales in the feature maps of different layers according to the target imaging scale.
In a third aspect, a computer-readable storage medium is provided, in which an object recognition and positioning program fusing navigation information is stored, which when executed by a processor causes the processor to execute the steps of the object recognition and positioning method fusing navigation information.
In a fourth aspect, a computer device is provided, which includes a memory and a processor, wherein the memory stores an object recognition and positioning program fusing navigation information, and the program, when executed by the processor, causes the processor to execute the steps of the object recognition and positioning method fusing navigation information.
The invention has the advantages that:
(1) according to the target identification and positioning method fusing navigation information, the maximum pixel of the target in the view field is obtained according to the height of the aircraft, the global random anchor problem is optimized to be a random anchor in a small-scale range, and the detection efficiency is improved by 76.2%;
(2) according to the target identification and positioning method fusing navigation information, the distance between the target and the camera can be obtained according to the sight angle and the height information of the target relative to the camera, and meanwhile, the accurate position of the target is obtained through navigation equipment;
(3) the target identification and positioning method fusing navigation information provided by the invention can realize quick and accurate positioning of ground targets.
Drawings
FIG. 1 shows a schematic window size diagram of the anchor in YOLOv3, where different colors represent different area sizes;
FIG. 2 is a schematic diagram illustrating a training data set labeling objective according to a preferred embodiment of the present invention;
FIG. 3 illustrates a schematic view of a camera imaging according to a preferred embodiment of the present invention;
fig. 4 shows a schematic view of a camera coordinate system according to a preferred embodiment of the invention.
Detailed Description
The present invention will be described in further detail below with reference to preferred embodiments and examples. The features and advantages of the present invention will become more apparent from the description.
The word "exemplary" is used exclusively herein to mean "serving as an example, embodiment, or illustration. Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments.
The inventors observe that the anchor is essentially the inverse of the idea of SPP (spatial pyramid pooling): SPP resizes inputs of different sizes into an output of the same fixed size.
For example, for the anchor window sizes in YOLOv3, three area sizes are used: 128², 256², and 512². At each area size, three aspect ratios (1:1, 1:2, 2:1) are applied, yielding 9 anchors of different sizes, as shown in fig. 1.
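A minimal sketch of this area × aspect-ratio construction (function and parameter names are hypothetical, not from the patent): for each area A and ratio r = w/h, the anchor has w = √(A·r) and h = √(A/r), so that w·h = A and w/h = r.

```python
import math

def make_anchors(areas=(128**2, 256**2, 512**2), ratios=(1.0, 0.5, 2.0)):
    """Generate (width, height) anchor pairs: 3 areas x 3 aspect ratios = 9 anchors."""
    anchors = []
    for a in areas:
        for r in ratios:
            w = math.sqrt(a * r)  # w*h == a and w/h == r by construction
            h = math.sqrt(a / r)
            anchors.append((w, h))
    return anchors
```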
The setting of the global anchor causes a large amount of calculation, and the detection efficiency is influenced.
Therefore, the target detection algorithm based on the convolutional neural network is adopted, and the global random anchor problem is optimized to be a random anchor within a small scale range, so that the detection efficiency is improved; meanwhile, GPS position information is applied to a target positioning link, so that the ground target is accurately positioned.
The invention provides a target identification and positioning method fusing navigation information, which comprises the steps of training a detection network and identifying and positioning by using the detection network,
preferably, the method comprises the steps of:
step 1, training to obtain a target detection network;
step 2, obtaining an image to be detected;
step 3, carrying out target detection on the image to be detected;
and 4, positioning the target.
The target identification and positioning method of the fusion navigation information of the invention is further described as follows:
step 1, training to obtain a target detection network.
Wherein, step 1 comprises the following substeps:
step 1-1, labeling a training data set.
Deep-learning target detection algorithms require different types of data sets to be prepared for different application scenarios.
Specifically, when the target is labeled, a circumscribed rectangle of the labeled target is shown in fig. 2. Taking the labeling information of one of the labeling files as an example, the contents are as follows:
{"area":169,"bbox":[102,81,13,13],"category_name":"car"}
where the area value denotes the pixel area of the rectangular box region. Of the four values after bbox, the first is the horizontal pixel coordinate of the box's upper-left corner relative to the picture's upper-left corner (positive rightward); the second is the vertical pixel coordinate of the upper-left corner (positive downward); the third is the box width; and the fourth is the box height. The category_name denotes the target category.
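A small sketch of reading one such annotation record (the helper name is hypothetical; the record format is the one shown above):

```python
import json

def parse_annotation(record):
    """Unpack one annotation record with bbox = [x_top_left, y_top_left, w, h]
    in pixels, x positive rightward and y positive downward."""
    x, y, w, h = record["bbox"]
    return {
        "category": record["category_name"],
        "top_left": (x, y),
        "width": w,
        "height": h,
        "area": record.get("area", w * h),
    }

label = json.loads('{"area":169,"bbox":[102,81,13,13],"category_name":"car"}')
box = parse_annotation(label)
```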
And 1-2, constructing a detection network.
Here a multilayer convolutional deep residual network, i.e., a deep convolutional network, is constructed. In the present invention, a new feature extraction network is preferably built with ResNet101 as the base network.
And 1-3, modifying the anchor scale of the network.
According to a preferred embodiment of the present invention, the anchor scale of different feature layers of the network is modified according to the target scale range of the training set, preferably by the following formula:
S_k = S_min + (S_max − S_min) / (m − 1) · (k − 1),   k ∈ [1, m]

where S_k denotes the ratio of the prior box of the k-th feature layer to the original image size; S_max denotes the scale value of the highest feature layer and S_min the scale value of the lowest feature layer, both set according to the detection target; m denotes the number of feature layers; and k is the feature-layer index.
In the present invention, a set of S_max and S_min values is determined based on the target scale range of the training data set and used as the initial values for training the network.
And 1-4, training the network until convergence.
Using the labeled training data set, the pre-trained deep residual network is trained according to the image class labels, and network parameters are updated until a converged detection network is obtained.
And 2, obtaining an image to be detected.
In the invention, while the aircraft is flying, the image of the target to be detected is obtained through the vision camera. The aircraft may be an unmanned vehicle, such as a drone, or a manned aircraft.
According to a preferred embodiment of the invention, when the aircraft flies, the height and the position of the aircraft are acquired;
preferably, the altitude of the aircraft is obtained by an altimeter and the position of the aircraft is obtained by a navigation device.
And 3, carrying out target detection on the image to be detected.
Wherein, step 3 comprises the following substeps:
and 3-1, performing feature extraction on the image to be detected to obtain a multilayer feature map.
Here the image to be detected is fed into the detection network obtained by training in step 1 (comprising the backbone network and the feature extraction network) to obtain a multilayer feature map.
And 3-2, determining the anchor scale of the detection network.
Addressing the prior-art problems that a global anchor setting entails heavy computation and that anchor scales fixed by random experiments generalize poorly, the inventors found that, because a target images largest in the camera when it lies directly below the vision camera, the target's imaging pixel range can be derived from its actual size range and then used as the maximum anchor size in the detection structure, markedly improving detection speed.
Specifically, step 3-2 includes the following substeps:
and 3-2-1, determining a target imaging scale according to the camera height information and the camera imaging rule.
Wherein, the height information of the camera can be obtained by the altimeter; the imaging rules of the camera are shown in fig. 3.
Specifically: let two points P_1 and P_2 on the detection target have coordinates in the camera coordinate system P_1 = [X_1 Y_1 Z_1]^T and P_2 = [X_2 Y_2 Z_2]^T, and image positions p_1 = [u_1 v_1]^T and p_2 = [u_2 v_2]^T.
The camera imaging model (standard pinhole form, with (c_x, c_y) the principal point in pixels) is:

u_i = f_x · X_i / Z_i + c_x
v_i = f_y · Y_i / Z_i + c_y,   i = 1, 2

where f_x = α·f and f_y = β·f, both in pixels; f is the camera focal length in millimeters; α and β are the numbers of pixels per millimeter, in pixels/mm.
The distance between the projections of points P_1 and P_2 on the image, with both points at a common depth Z, is obtained by:

Δu = |u_1 − u_2| = f_x · |X_1 − X_2| / Z   (pixels)
Δv = |v_1 − v_2| = f_y · |Y_1 − Y_2| / Z   (pixels)
When the target is located directly below the camera, its image in the camera is largest; the target imaging scale can therefore be obtained from the target's actual size range, the camera height given by the altimeter, and the imaging principle.
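A sketch of this overhead-case computation (function and parameter names are hypothetical): directly below the camera, Z equals the altitude H, so the projected size is d = f_x · size / H with f_x = (pixels per mm) × (focal length in mm).

```python
def max_target_pixels(target_size_m, altitude_m, f_mm, pixels_per_mm):
    """Largest projected extent (in pixels) of a target of physical size
    target_size_m seen from directly overhead at altitude_m."""
    f_pix = pixels_per_mm * f_mm   # f_x in pixels
    return f_pix * target_size_m / altitude_m
```

This maximum pixel size is what bounds the anchor scale range in the steps that follow.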
And 3-2-2, determining anchor scales in the feature maps of different layers according to the obtained target imaging scale.
According to a preferred embodiment of the present invention, according to the obtained target imaging scale, the anchor scale range of the different layer feature maps is obtained by the following formula:
S_k = S_min + (S_max − S_min) / (m − 1) · (k − 1),   k ∈ [1, m]

where S_k denotes the ratio of the prior box of the k-th feature layer to the original image size; S_max and S_min denote the scale values of the highest and lowest feature layers respectively, set according to the detection target; m denotes the number of feature layers; and k is the feature-layer index.
According to the obtained imaging scale of the target to be detected, the S_max and S_min determined during training are updated, and the anchor scales are updated accordingly.
And 3-3, generating a plurality of target frames on each point of the feature map according to the determined anchor scale.
In the invention, because different feature layers correspond to different receptive fields on the original image, the target boxes generated on different feature layers have different sizes. When generating target boxes, a series of concentric boxes is generated at each point of a feature layer, and m feature layers of different sizes are used for prediction. The scale value of the lowest feature layer is S_min and that of the highest feature layer is S_max; the other layers are obtained by:

S_k = S_min + (S_max − S_min) / (m − 1) · (k − 1),   k ∈ [1, m]

Different aspect ratio values γ ∈ {1, 2, 3, 1/2, 1/3} are used, so each anchor has width

w_k^a = S_k · √γ

and height

h_k^a = S_k / √γ.

When γ = 1, one additional anchor scale is added:

S'_k = √(S_k · S_{k+1}).
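The per-layer anchor construction above can be sketched as follows (an illustrative implementation under the SSD-style convention the formulas describe; names are hypothetical):

```python
import math

def layer_anchors(s_k, s_k_next, ratios=(1.0, 2.0, 3.0, 0.5, 1.0 / 3.0)):
    """Anchors (w, h) for one feature layer of scale s_k:
    w = s_k*sqrt(r), h = s_k/sqrt(r); for r == 1 an extra square anchor of
    scale sqrt(s_k * s_{k+1}) is appended."""
    anchors = [(s_k * math.sqrt(r), s_k / math.sqrt(r)) for r in ratios]
    extra = math.sqrt(s_k * s_k_next)
    anchors.append((extra, extra))
    return anchors
```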
and 3-4, obtaining the category and coordinate information of the target to be detected.
In the invention, the optimal target box position is preferably output via non-maximum suppression (NMS), giving the coordinate position of the target center in the field of view, i.e. the box center x, y and the box width and height w, h; the target category information is output at the same time.
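A compact sketch of greedy NMS as used here (a generic implementation, not the patent's specific code; boxes are (x, y, w, h) as in the output above):

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x, y, w, h)."""
    ax2, ay2 = a[0] + a[2], a[1] + a[3]
    bx2, by2 = b[0] + b[2], b[1] + b[3]
    iw = max(0.0, min(ax2, bx2) - max(a[0], b[0]))
    ih = max(0.0, min(ay2, by2) - max(a[1], b[1]))
    inter = iw * ih
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union > 0 else 0.0

def nms(boxes, scores, thresh=0.5):
    """Keep the highest-scoring box, drop boxes overlapping it above thresh, repeat."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) <= thresh]
    return keep
```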
And 4, positioning the target.
Wherein, step 4 comprises the following substeps:
and 4-1, obtaining the position of the target relative to the aircraft.
In the present invention, as shown in fig. 4, a camera coordinate system is first defined with its origin and x- and y-axes as illustrated; the pixel coordinates of the target in this system are (x_0, y_0), and the center point of the camera frame is (x_c, y_c). Preferably, neglecting the installation offset between the vision camera's center and the aircraft's center, (x_c, y_c) can be taken approximately as the center of the unmanned aerial vehicle.
According to a preferred embodiment of the invention, the target's pixel coordinates are normalized against the pixel coordinates of the aircraft center, giving the pixel errors:

ẽ_x = (x_0 − x_c) / x_c
ẽ_y = (y_0 − y_c) / y_c
In a further preferred embodiment, the pixel errors in the camera coordinate system are converted into position errors along the x and y directions of the aircraft body-stabilized coordinate system by:

e_x = K · H · tan(θ_1 / 2) · ẽ_x
e_y = K · H · tan(θ_2 / 2) · ẽ_y

where e_x and e_y are the target position deviations in the body-stabilized coordinate system; H is the current altitude of the aircraft; θ_1 and θ_2, the camera's fields of view in the x and y directions, are intrinsic parameters of the camera; and K is an empirical coefficient.
The directional positioning of the target relative to the aircraft can be obtained by the above steps.
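The pixel-to-ground conversion can be sketched as below. This is an assumption-laden illustration: the equation images in the original are not recoverable, so the normalization and the tan(fov/2) scaling follow the symbols named in the text (H, θ_1, θ_2, K), and K defaults to a placeholder value since the source's preferred value is given only as an image.

```python
import math

def target_offset(x0, y0, xc, yc, H, fov_x_deg, fov_y_deg, K=1.0):
    """Convert the target's pixel offset from the image center into a
    ground-plane offset (same units as H) in the body-stabilized frame.
    Half the ground footprint is H*tan(fov/2); the normalized pixel
    error scales it. K is an empirical correction coefficient."""
    ex_norm = (x0 - xc) / xc
    ey_norm = (y0 - yc) / yc
    ex = K * H * math.tan(math.radians(fov_x_deg) / 2) * ex_norm
    ey = K * H * math.tan(math.radians(fov_y_deg) / 2) * ey_norm
    return ex, ey
```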
And 4-2, acquiring the actual position of the target.
The position of the aircraft itself is obtained from the aircraft navigation equipment (e.g., a GPS device), the actual position of the detected target is then calculated, and target positioning is complete.
According to the target identification and positioning method fusing navigation information, the maximum pixel of the target in the view field is obtained through the height information of the camera obtained by the altimeter, so that the global random anchor problem is optimized to the random anchor in a small-scale range, and the detection efficiency is improved; meanwhile, the navigation equipment is applied to a target positioning link, so that the ground target is accurately positioned.
The invention also provides a target identification and positioning system fused with navigation information, which comprises an image acquisition unit, an aircraft information acquisition unit, a target detection unit and a target positioning unit,
the image acquisition unit is used for acquiring an image of a target to be detected;
the aircraft information acquisition unit is used for acquiring the height and the position of the aircraft;
the target detection unit is used for acquiring target categories and coordinate information;
the target positioning unit is used for obtaining the actual position of the target.
According to a preferred embodiment of the invention, the object detection unit comprises a feature extraction subunit and an anchor scale setting subunit,
the characteristic extraction subunit is used for obtaining a multilayer characteristic diagram of the image to be detected;
and the anchor scale setting subunit is used for determining the anchor scales in the feature maps of different layers according to the target imaging scale.
The invention also provides a computer readable storage medium, which stores a target identification and positioning program fused with navigation information, and when the program is executed by a processor, the program causes the processor to execute the steps of the target identification and positioning method fused with navigation information.
The target identification and positioning method of the fusion navigation information can be realized by means of software and a necessary general hardware platform, wherein the software is stored in a computer readable storage medium (comprising a ROM/RAM, a magnetic disk and an optical disk) and comprises a plurality of instructions for enabling a terminal device (which can be a mobile phone, a computer, a server, a network device and the like) to execute the method of the invention.
The invention also provides computer equipment which comprises a memory and a processor, wherein the memory stores an object identification positioning program fusing the navigation information, and the program causes the processor to execute the steps of the object identification positioning method fusing the navigation information when being executed by the processor.
Examples
The present invention is further described below by way of specific examples, which are merely exemplary and do not limit the scope of the present invention in any way.
Example 1
1. Data set
The target identification and positioning method fusing navigation information according to the present invention is evaluated on the VisDrone data set, which contains 263 video clips, 179264 video frames, and 10209 still images. For the target detection task, VisDrone provides 10209 fully labeled static images spanning 10 categories, with 6471 images for training, 548 for validation, and 3190 for testing. Image resolution is around 2000 × 1500 pixels.
2. Task description
After the detection network is trained with the COCO training data set and its network parameters and anchor scales are adapted by the method of the invention, targets are detected and positioned on the test data set using an unmanned-aerial-vehicle simulation fusing the altimeter and GPS device; performance is evaluated after the test and compared against the method without the fused altimeter.
The experimental platform was an Nvidia TX2 computer.
3. Results and analysis
The comparative test results are shown in table 1:
TABLE 1
(Table 1, comparing detection speed FPS and detection precision AP with and without the altimeter-fused anchor scale setting, is provided as an image in the original publication.)
Wherein, FPS represents the detection speed, namely the number of processed pictures per second;
AP represents the detection precision, namely the area of a region enclosed by the P-R curve and the coordinate axis;
p represents accuracy, R represents recall, and the calculation formula is as follows:
P = TP / (TP + FP)

R = TP / (TP + FN)
Here TP denotes positives correctly detected as positive, FP denotes negatives wrongly detected as positive, TN denotes negatives correctly detected as negative, and FN denotes positives wrongly detected as negative.
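The two evaluation formulas above reduce to a few lines (an illustrative helper, with zero-division guarded):

```python
def precision_recall(tp, fp, fn):
    """P = TP/(TP+FP), R = TP/(TP+FN); returns (0.0, 0.0) components
    when the corresponding denominator is zero."""
    p = tp / (tp + fp) if (tp + fp) else 0.0
    r = tp / (tp + fn) if (tp + fn) else 0.0
    return p, r
```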
According to the detection results, compared with determining the anchor scale without the altimeter, the target identification and positioning method fusing navigation information shows a slight reduction in detection precision (AP) but a 76.2% improvement in FPS, significantly increasing detection speed while preserving detection precision.
The invention has been described in detail with reference to specific embodiments and illustrative examples, but the description is not intended to be construed in a limiting sense. Those skilled in the art will appreciate that various equivalent substitutions, modifications or improvements may be made to the technical solution of the present invention and its embodiments without departing from the spirit and scope of the present invention, which fall within the scope of the present invention.

Claims (10)

1. A target identification and positioning method fusing navigation information is characterized by comprising the following steps:
step 1, training to obtain a target detection network;
step 2, obtaining an image to be detected;
step 3, carrying out target detection on the image to be detected;
and 4, positioning the target.
2. The method according to claim 1, characterized in that step 1 comprises the following sub-steps:
step 1-1, labeling a training data set;
step 1-2, constructing a detection network;
step 1-3, modifying the anchor scale of the network;
and 1-4, training the network until convergence.
3. The method according to claim 2, characterized in that in step 1-3, the anchor scale of different feature layers of the network is modified according to the target scale range of the training set, preferably obtained by the following formula:
S_k = S_min + (S_max − S_min) / (m − 1) · (k − 1),   k ∈ [1, m]

where S_k denotes the ratio of the prior box of the k-th feature layer to the original image size; S_max denotes the scale value of the highest feature layer and S_min the scale value of the lowest feature layer, both set according to the detection target; m denotes the number of feature layers; and k is the feature-layer index.
4. The method according to claim 1, characterized in that step 3 comprises the following sub-steps:
step 3-1, extracting the features of the image to be detected to obtain a multilayer feature map;
step 3-2, determining the anchor scale of the detection network;
3-3, generating a plurality of target frames on each point of the characteristic diagram according to the determined anchor scale;
and 3-4, obtaining the category and coordinate information of the target to be detected.
5. The method according to claim 1, characterized in that step 3-2 comprises the following sub-steps:
step 3-2-1, determining a target imaging scale according to the camera height information and the camera imaging rule;
and 3-2-2, determining anchor scales in the feature maps of different layers according to the obtained target imaging scale.
6. The method according to claim 1, wherein in step 4, the position of the target relative to the aircraft is obtained first, the position of the aircraft itself is then obtained from the aircraft navigation device, and the two are combined to yield the actual position of the detected target.
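The two-stage localization of claim 6 (target position relative to the aircraft, then the aircraft's own navigated position) can be sketched as below. This assumes a level, nadir-pointing camera and flat east/north ground coordinates; all names are illustrative, and a real implementation would also apply the aircraft's attitude:

```python
def locate_target(aircraft_east: float, aircraft_north: float,
                  height_m: float, focal_px: float,
                  du: float, dv: float) -> tuple:
    """Convert the target's pixel offset (du, dv) from the image centre
    into a ground offset, then add the aircraft position reported by
    the navigation device to obtain the target's absolute position."""
    metres_per_px = height_m / focal_px          # ground sampling distance
    east = aircraft_east + du * metres_per_px
    north = aircraft_north - dv * metres_per_px  # image v axis points down
    return east, north
```

For example, at 500 m altitude with a 1000 px focal length, each pixel of offset corresponds to 0.5 m on the ground.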
7. A target identification and positioning system fused with navigation information is characterized by comprising an image acquisition unit, an aircraft information acquisition unit, a target detection unit and a target positioning unit,
the image acquisition unit is used for acquiring an image of a target to be detected;
the aircraft information acquisition unit is used for acquiring the height and the position of the aircraft;
the target detection unit is used for acquiring target categories and coordinate information;
the target positioning unit is used for obtaining the actual position of the target.
8. The system of claim 7, wherein the target detection unit comprises a feature extraction subunit and an anchor scale setting subunit,
the feature extraction subunit is used for obtaining a multilayer feature map of the image to be detected;
and the anchor scale setting subunit is used for determining the anchor scales in the feature maps of different layers according to the target imaging scale.
9. A computer-readable storage medium, in which a target identification and positioning program fusing navigation information is stored, which program, when executed by a processor, causes the processor to carry out the steps of the target identification and positioning method fusing navigation information according to any one of claims 1 to 6.
10. A computer device comprising a memory and a processor, characterized in that the memory stores a target identification and positioning program fusing navigation information, which program, when executed by the processor, causes the processor to carry out the steps of the target identification and positioning method fusing navigation information according to any one of claims 1 to 6.
CN202010988347.7A 2020-09-18 2020-09-18 Target identification and positioning method fusing navigation information Pending CN112232132A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010988347.7A CN112232132A (en) 2020-09-18 2020-09-18 Target identification and positioning method fusing navigation information


Publications (1)

Publication Number Publication Date
CN112232132A true CN112232132A (en) 2021-01-15

Family

ID=74107021

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010988347.7A Pending CN112232132A (en) 2020-09-18 2020-09-18 Target identification and positioning method fusing navigation information

Country Status (1)

Country Link
CN (1) CN112232132A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023046136A1 (en) * 2021-09-27 2023-03-30 北京字跳网络技术有限公司 Feature fusion method, image defogging method and device

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107729808A (en) * 2017-09-08 2018-02-23 Electric Power Research Institute of State Grid Shandong Electric Power Co. Intelligent image acquisition system and method for UAV inspection of power transmission lines
CN107727079A (en) * 2017-11-30 2018-02-23 Hubei Aerospace Flight Vehicle Institute Target localization method for a fully strapdown downward-looking camera on a micro air vehicle
CN108681718A (en) * 2018-05-20 2018-10-19 Beijing University of Technology Accurate detection and recognition method for UAV low-altitude targets
CN109740463A (en) * 2018-12-21 2019-05-10 Shenyang Jianzhu University Object detection method for vehicle-mounted environments
CN111178148A (en) * 2019-12-06 2020-05-19 Tianjin University Ground target geographic coordinate positioning method based on unmanned aerial vehicle vision system
WO2020141468A1 (en) * 2019-01-03 2020-07-09 Ehe Innovations Private Limited Method and system for detecting position of a target area in a target subject


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
LIU FANG et al.: "Adaptive UAV Target Detection Based on Multi-Scale Feature Fusion", Acta Optica Sinica *


Similar Documents

Publication Publication Date Title
CN109829398B (en) Target detection method in video based on three-dimensional convolution network
CN109931939B (en) Vehicle positioning method, device, equipment and computer readable storage medium
CN111462200A (en) Cross-video pedestrian positioning and tracking method, system and equipment
KR101261409B1 (en) System for recognizing road markings of image
CN110033411B (en) High-efficiency road construction site panoramic image splicing method based on unmanned aerial vehicle
CN111241988B (en) Method for detecting and identifying moving target in large scene by combining positioning information
CN113989450B (en) Image processing method, device, electronic equipment and medium
CN109029444A (en) Indoor navigation system and navigation method based on image matching and spatial positioning
CN111912416A (en) Method, device and equipment for positioning equipment
WO2021212477A1 (en) Point cloud data correction method, and related device
CN113936198A (en) Low-beam laser radar and camera fusion method, storage medium and device
CN114004977A (en) Aerial photography data target positioning method and system based on deep learning
CN115861860B (en) Target tracking and positioning method and system for unmanned aerial vehicle
CN105335977A (en) Image pickup system and positioning method of target object
CN111738071B (en) Inverse perspective transformation method based on motion change of monocular camera
CN117115784A (en) Vehicle detection method and device for target data fusion
CN112232132A (en) Target identification and positioning method fusing navigation information
CN113673288B (en) Idle parking space detection method and device, computer equipment and storage medium
CN116824457A (en) Automatic listing method based on moving target in panoramic video and related device
US11835359B2 (en) Apparatus, method and computer program for generating map
CN112884841B (en) Binocular vision positioning method based on semantic target
CN114037895A (en) Unmanned aerial vehicle pole tower inspection image identification method
CN113298713A (en) On-orbit rapid registration method capable of resisting cloud interference
CN111666959A (en) Vector image matching method and device
CN112598736A (en) Map construction based visual positioning method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination