CN101114337A - Ground buildings recognition positioning method - Google Patents


Info

Publication number
CN101114337A
Authority
CN
China
Prior art date
Legal status
Granted
Application number
CNA2007100529284A
Other languages
Chinese (zh)
Other versions
CN100547603C (en)
Inventor
张天序
路鹰
杨效余
Current Assignee
Huazhong University of Science and Technology
Original Assignee
Huazhong University of Science and Technology
Priority date
Filing date
Publication date
Application filed by Huazhong University of Science and Technology
Priority to CNB2007100529284A
Publication of CN101114337A
Application granted
Publication of CN100547603C
Status: Expired - Fee Related

Abstract

A ground building recognition and positioning method belongs to the field of imaging automatic target recognition and aims to solve the problem of recognizing and positioning forward-looking ground buildings from different viewpoints, at different scales and at different heights. A ground building standard feature library is constructed in advance; the processing sequence then comprises an image enhancement step, a background suppression step, a gray-level merging step, a feedback segmentation step, a vertical-bar feature detection step, and a secondary feature matching step. Exploiting the characteristics of ground buildings, the invention uses mathematical morphology to extract the structural information of the image, extracts feature quantities to match against the standard features, and takes the texture and scene information of the buildings into account to recognize and position forward-looking ground buildings. The method has high recognition accuracy and good reliability, and can be used in fields such as urban planning, supervision, aircraft navigation and collision avoidance to recognize forward-looking ground buildings at different viewpoints, different scales and different heights.

Description

Ground building identification and positioning method
Technical Field
The invention belongs to the field of imaging automatic target recognition, and particularly relates to a method for recognizing and positioning ground buildings from the forward-looking view of an aircraft.
Background
Buildings are important artificial targets, and their recognition can be used in city planning, supervision, aircraft navigation, collision avoidance and similar fields. The ground background is complex, buildings vary widely in size and shape, and the imaging height and imaging angle of a moving platform's payload change constantly, so recognizing and positioning ground buildings is a difficult task.
Currently, conventional target detection and recognition methods fall into two types: bottom-up data-driven methods and top-down hypothesis-testing methods. The former ignores the attributes of the target: it first applies general low-level processing (segmentation, labeling, feature extraction) to the original input image, and then matches the feature vector of each labeled segmented region against the target class for detection and recognition. The latter first hypothesizes, from a model of the target, which features may be present in the image, then purposefully segments, labels and extracts features from the regions of interest according to that hypothesis, and finally performs fine matching against the target model to detect and identify the target.
In "Search and identification of building targets based on improved Hough transform" (China Image Graphics, Vol. 10, No. 4, Apr. 2005), Wang Zheshen and Li Cuihua propose detecting buildings by their vertical lines: an improved Hough transform algorithm records discrete parameter values and narrows the range of detection angles, and Euclidean-distance approximation saves memory and computation while improving robustness to noise. However, most vertical lines in the preliminary detection result are discontinuous line segments, so the results must be optimized and merged; against a complex background a large number of discontinuous segments are produced, the cost of optimization and merging grows sharply, and real-time requirements cannot be met. In "Identification and change detection of a class of building targets under a complex background" (Twelfth National Image and Graphics Conference), Hong Zhi, Jiang Qingshan, Dong Huailin et al. propose a method for identifying, and detecting changes in, a class of regular polygonal buildings against a complex background. Its core is to take lines as the basic processing unit and apply a multi-level perceptual grouping process to them; the building is finally fitted and identified through a strict optimal matching process, after which change detection is performed. The method's main limitation is that only regular polygonal building targets can be identified; for irregular polygons the boundary parameters must be known in advance, a case that is not uncommon in practical applications.
In "Target recognition method based on morphological Top-Hat operator and knowledge processing" (Microelectronics and Computer, Vol. 22, No. 12, 2005), Li Chaofeng and Wang Shi propose using the open Top-Hat operation, the closed Top-Hat operation and morphological filtering for target recognition. The method works well for identifying near-circular targets in SPOT satellite remote-sensing images with complex backgrounds. However, the computational cost of the Top-Hat operations grows significantly with target size, and the approach applies only to downward-looking imagery, so it is unsuitable for forward-looking ground building recognition. None of the above works considers recognizing ground buildings at different viewpoints, different scales and different heights.
Disclosure of Invention
The invention provides a ground building identification and positioning method, which aims to solve the problem of identifying and positioning ground buildings from different view points, different scales and different heights and is used for identifying the ground buildings in forward sight.
The ground building recognition and positioning method of the invention first constructs a standard feature library in advance: feature quantities are extracted from three types of feature views at different viewpoints and different scales, namely ground building shape feature views, scene feature views and ground building texture feature views. The method then comprises the following steps:
(1) An image enhancement step of performing histogram equalization on an original input image;
(2) A background suppression processing step, namely performing morphological enhancement and morphological background suppression on the histogram equalized image;
(3) A gray level merging step, namely merging the gray levels of the images subjected to the background suppression processing to reduce the gray levels of the images;
(4) A feedback segmentation step, namely performing threshold segmentation on the image after gray level combination to obtain a binary image, sequentially performing feature extraction and matching on each interested region of the binary image, performing relationship feature matching between the interested region and the adjacent interested region thereof, performing relationship matching between a plurality of interested regions, performing texture feature matching on the corresponding region of the interested region and performing texture feature matching on the corresponding region of the adjacent interested region of the interested region, and judging the number of the interested regions after each matching;
(5) A vertical bar characteristic detection step, namely converting an original input image into a binary image by using an average gray value of the original input image as a threshold value, detecting the binary image by using a line template, outputting a line image, calculating the length of each vertical bar in the line image, matching each vertical bar according to a high characteristic quantity in a standard characteristic library, and screening out vertical bars meeting conditions;
(6) A secondary feature matching step: comprehensively considering the results of the feedback segmentation step and the vertical-bar feature detection step, judge whether the corresponding region of each region of interest retained in step (4) exhibits vertical-bar features from step (5); if it does, reduce the total error of the region of interest, otherwise keep the total error unchanged; match each region of interest according to its total error, and if the match succeeds, retain the region of interest and map it back to the original input image to position the building.
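The first of the steps above, histogram equalization, can be sketched in a few lines of numpy. This is a minimal illustration, not the patent's implementation; the low-contrast test image is invented for demonstration.

```python
import numpy as np

def equalize_histogram(img):
    """Histogram-equalize an 8-bit grayscale image (image enhancement step)."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]          # first non-zero CDF value
    # Classic equalization: stretch the cumulative distribution to 0..255.
    lut = np.round((cdf - cdf_min) / float(img.size - cdf_min) * 255.0)
    return np.clip(lut, 0, 255).astype(np.uint8)[img]

# Low-contrast input confined to gray levels 100..131:
img = np.tile(np.arange(100, 132, dtype=np.uint8), (32, 1))
out = equalize_histogram(img)
```

After equalization the 32-level ramp occupies the full dynamic range, which is exactly the "increase the dynamic range of pixel gray values" effect described for step (2) of the detailed description.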
In the method for identifying and positioning the ground buildings, the pre-constructed standard feature library may sequentially include the following processes:
(1) Calculating each feature quantity:
(1.1) The product K_h of the height of the ground building to be identified in the scene view and its imaging distance, the product K_w of its width in the scene view and its imaging distance, and the area factor K_s of the ground building to be identified:
K_h = (1/n) Σ K_hi, with K_hi = h_i × D_i, i = 1, 2, 3, …
K_w = (1/n) Σ K_wi, with K_wi = w_i × D_i, i = 1, 2, 3, …
K_s = (1/n) Σ K_si, with K_si = s_i / (h_i × w_i), i = 1, 2, 3, …
In the formulas, h_i, w_i and s_i are the height, width and area of the building to be identified in the scene view under the different viewpoints and scales, and D_i is the imaging distance;
(1.2) The height feature H_i, width feature W_i, perimeter feature C_i, area feature S_i and shape factor F_i of the building imaged under different viewpoints and different scales:
H_i = K_h / D_i, W_i = K_w / D_i, S_i = K_s × H_i × W_i, C_i = 2 × (H_i + W_i)
F_i = C_i^2 / (4π × S_i)
(1.3) morphological enhancement structural elements and morphological background suppression structural elements under different viewpoints and different scales;
the morphological enhancement structural elements are rectangles with the height and width of 1 pixel multiplied by N pixels under different viewpoints and different scales, and N is a natural number of 3-7;
the morphological background suppression structural element is a rectangle formed by the width feature quantity W_i and the height feature quantity H_i under different viewpoints and different scales;
(1.4) The relation K_i between the ground building to be identified and its surrounding buildings under different viewpoints and different scales, the relation D_ij between the ground buildings to be identified, the internal texture feature T_i of the ground building to be identified, and the texture feature T_in of the scene surrounding the ground building to be identified:
K_i = h_i / h_in
where h_i is the height of ground building i to be identified in the scene view and h_in is the height of the surrounding buildings of ground building i in the scene view;
When the number of ground buildings to be identified is greater than 1:
D_ij = sqrt(p × (y_i − y_j)^2 + q × (x_i − x_j)^2)
where (x_i, y_i) are the barycentric coordinates of ground building i to be identified in the scene view and (x_j, y_j) those of ground building j to be identified; the longitudinal distance weight p is 2 to 5 and the transverse distance weight q is 1 to 3; the minimum of the distances between a ground building to be identified and each neighboring ground building to be identified is taken as that building's minimum distance;
If the ground building to be identified is dominated by horizontal texture, T_i is computed from its height h_i; if it is dominated by vertical texture, T_i is computed from its width w_i (the texture formulas appear only as equation images in the source and are not reproduced here), where h_i and w_i are the height and width of ground building i to be identified in the scene view;
When the number of ground buildings to be identified is greater than 1, T_in is also considered: if the surrounding buildings are dominated by horizontal texture, T_in is computed from their height h_in; if dominated by vertical texture, from their width w_in (likewise given only as equation images in the source), where h_in and w_in are the height and width of the buildings surrounding the ground building to be identified in the scene view;
(2) Storing the various characteristic values into a database to obtain a standard characteristic library of the ground buildings at different viewpoints and different scales; the viewpoint is determined by the imaging height, the imaging distance and the azimuth angle of the imaging point, and the scale is determined by the imaging distance.
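The library construction above can be sketched as a small Python function. The function name, the tuple layout of `views` and the sample measurements are illustrative, not from the patent; the perimeter is computed as 2 × (H + W), the form fixed by the patent's own worked example (C_1 = 230 for H_1 = 91, W_1 = 24).

```python
import math

def build_feature_entry(views, D):
    """Build one standard-feature-library entry.

    `views` is a list of (h, w, s, d) tuples: height, width and area of the
    building in a reference scene view, plus that view's imaging distance.
    The products K_h, K_w, K_s are averaged over the views, then the
    per-distance features are derived for imaging distance D.
    """
    n = len(views)
    K_h = sum(h * d for h, w, s, d in views) / n
    K_w = sum(w * d for h, w, s, d in views) / n
    K_s = sum(s / (h * w) for h, w, s, d in views) / n
    H = K_h / D                       # height feature at distance D
    W = K_w / D                       # width feature
    S = K_s * H * W                   # area feature
    C = 2 * (H + W)                   # perimeter feature
    F = C ** 2 / (4 * math.pi * S)    # shape (compactness) factor
    return {"H": H, "W": W, "S": S, "C": C, "F": F}

# Two invented reference views at distances 6 and 12:
entry = build_feature_entry([(100, 50, 4000, 6), (50, 25, 1000, 12)], 6)
```

Because the products K_h and K_w are (ideally) viewpoint-invariant, one entry per viewpoint/scale combination is enough to predict the building's apparent size at any imaging distance along that viewpoint.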
In the method for identifying and locating a ground building, the background suppression processing step may sequentially include the following steps:
(1) Performing morphology enhancement, namely selecting a rectangle with the height and width of 1 pixel multiplied by N pixels under different viewpoints and different scales as a morphology enhancement structural element, performing closed operation on a histogram equalized image, reducing texture information inside a building to be identified, and enhancing the contrast of the image, wherein N is a natural number of 3-7;
(2) And (3) morphological background suppression, namely selecting a rectangle formed by width characteristic quantity and height characteristic quantity in the standard characteristic library at different viewpoints and different scales as a morphological background suppression structural element, performing open operation on the morphologically enhanced image, and filtering out buildings or backgrounds which are obviously different from the shape of the building to be identified, so that the gray level of the image is reduced.
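The closing/opening pair described in these two processes can be sketched with scipy's gray-scale morphology. This is a sketch under assumed sizes: the function name and defaults are illustrative, with the 55 × 14 suppression element mirroring the telecom-building example given later in the description.

```python
import numpy as np
from scipy import ndimage

def suppress_background(img, enhance_n=5, bar_h=55, bar_w=14):
    """Morphological enhancement followed by background suppression.

    Gray-scale closing with a flat 1 x N element smooths thin dark texture
    inside the building; gray-scale opening with an H x W element sized from
    the feature library removes bright structures too small to contain a
    building-sized rectangle.
    """
    closed = ndimage.grey_closing(img, size=(1, enhance_n))
    return ndimage.grey_opening(closed, size=(bar_h, bar_w))

img = np.zeros((100, 100), dtype=np.uint8)
img[10:70, 10:30] = 200        # building-sized bright region (60 x 20)
img[80:85, 80:85] = 200        # small bright clutter (5 x 5)
out = suppress_background(img)
```

The opening leaves the 60 × 20 region intact (it can contain the 55 × 14 element) while the 5 × 5 clutter is erased, which is the "filter out buildings or backgrounds clearly different in shape" behavior and the source of the reduced gray-level count.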
The ground building identification and positioning method is characterized by comprising the following steps: and the gray level merging step is to perform histogram statistics on the image subjected to the background suppression processing, judge the occurrence frequency of each gray value according to a threshold, merge the gray value with the frequency smaller than the threshold and the gray value with the nearest neighbor frequency greater than or equal to the threshold, wherein the threshold is an integer of 200-500.
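The gray-level merging rule above (merge rare gray values into the nearest frequent one) can be sketched as a lookup table. The function name and the toy image are illustrative; the 300 default sits inside the patent's 200 to 500 threshold range.

```python
import numpy as np

def merge_gray_levels(img, freq_thresh=300):
    """Merge every gray value occurring fewer than `freq_thresh` times into
    the nearest gray value whose count is at least the threshold."""
    hist = np.bincount(img.ravel(), minlength=256)
    keep = np.flatnonzero(hist >= freq_thresh)   # surviving gray values
    if keep.size == 0:
        return img.copy()
    # LUT: map each of the 256 possible gray values to its nearest survivor.
    dist = np.abs(np.arange(256)[:, None] - keep[None, :])
    lut = keep[dist.argmin(axis=1)].astype(img.dtype)
    return lut[img]

img = np.concatenate([np.full(400, 50), np.full(10, 60),
                      np.full(10, 190), np.full(400, 200)]).astype(np.uint8)
merged = merge_gray_levels(img, freq_thresh=300)
```

Here the rare values 60 and 190 are absorbed into 50 and 200 respectively, leaving only two gray levels and making the subsequent level-by-level feedback segmentation loop cheap.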
In the method for identifying and positioning the ground buildings, the feedback segmentation step may sequentially include the following steps:
(1) Threshold segmentation, namely performing gray level threshold segmentation on the image after gray level combination by taking the gray level as a threshold to convert the image into a binary image;
(2) Extracting and matching features, marking each block of region in the binary image, and calculating the feature quantity of each marked region: comparing the characteristic quantities of each marked region in the binary image according to the characteristic quantities in the standard characteristic library, calculating the error and the total error of each characteristic quantity, wherein the total error is the sum of the errors of each characteristic quantity, and if the error and the total error of each characteristic quantity of a certain interesting region are within a specified range, successfully matching the interesting region, keeping the successfully matched interesting region, or else, discarding the interesting region;
(3) And (3) judging the number of the interested areas, judging whether the number of the reserved interested areas is not less than the number of the buildings to be identified, if so, turning to the next step, otherwise, judging whether all gray levels are completely segmented, if not, modifying the gray level threshold from large to small, turning to the process (1), and if so, determining that the image does not contain the buildings to be identified.
(4) Relation-feature matching between the region of interest and its adjacent regions of interest: calculate the relation feature K_i′ of each region of interest with its adjacent regions:
K_i′ = H_i′ / H_in′
where H_i′ is the height of region of interest i and H_in′ is the height of the adjacent regions of interest of region i; compare the K_i′ of each region of interest with the corresponding value in the standard feature library; if the error is within the specified range, the match succeeds and the region of interest is retained, otherwise it is discarded;
(5) And (2) judging the number of the interested areas, judging whether the number of the reserved interested areas is not less than the number of the buildings to be identified, if so, turning to the next step, otherwise, judging whether all gray levels are completely segmented, if not, modifying the gray level threshold from large to small, turning to the process (1), and if so, determining that the image does not contain the buildings to be identified.
(6) And (3) matching the relations among the multiple interested areas, judging whether the number of the buildings to be identified is greater than 1, if not, turning to the process (8), and if so, respectively calculating the distance between the interested areas reserved in the process (4):
D_ij = sqrt(p × (Y_i − Y_j)^2 + q × (X_i − X_j)^2)
where (X_i, Y_i) and (X_j, Y_j) are the barycentric coordinates of region of interest i and region of interest j respectively; the longitudinal distance weight p is 2 to 5 and the transverse distance weight q is 1 to 3; the minimum of the distances between a region of interest and each adjacent region of interest is taken as that region's minimum distance; compare the minimum distance of each region of interest with the corresponding value in the standard feature library; if the error is within the specified range, the match succeeds and the region of interest is retained, otherwise it is discarded;
(7) And (3) judging the number of the interested areas, judging whether the number of the reserved interested areas is not less than the number of the buildings to be identified, if so, turning to the next step, otherwise, judging whether all gray levels are completely segmented, if not, modifying the gray level threshold from large to small, turning to the process (1), and if so, determining that the image does not contain the buildings to be identified.
(8) Matching texture features of corresponding regions of the region of interest, finding the corresponding regions of the region of interest reserved in the process (6) in the original input image, calculating the texture features of the corresponding regions, comparing the texture features with the corresponding texture features of the standard feature library, if the error is within a specified range, successfully matching, reserving the region of interest, and otherwise, discarding;
(9) And (3) judging the number of the interested areas, judging whether the number of the reserved interested areas is not less than the number of the buildings to be identified, if so, turning to the next step, otherwise, judging whether all gray levels are completely segmented, if not, modifying the gray level threshold from large to small, turning to the process (1), and if so, determining that the image does not contain the buildings to be identified.
(10) Matching texture features of corresponding areas of adjacent interesting areas of the interesting areas, judging whether the number of buildings to be identified is larger than 1, if not, turning to a secondary feature matching step, if so, finding the adjacent interesting area corresponding areas of the interesting areas reserved in the process (8) in the original input image, respectively calculating the texture features of the corresponding areas, comparing the texture features with the corresponding texture features of a standard feature library, if the error is within a specified range, successfully matching, reserving the interesting areas, otherwise, abandoning;
(11) And (3) judging the number of the interested areas, judging whether the number of the reserved interested areas is not less than the number of the buildings to be identified, if so, turning to a secondary feature matching step, otherwise, judging whether all gray levels are completely segmented, if not, modifying the threshold of the gray levels from large to small, turning to the process (1), and if so, determining that the image does not contain the buildings to be identified.
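The core of the feedback segmentation loop (try gray levels from large to small as thresholds, keep regions whose features match the library) can be sketched as follows. The single relative tolerance and the reduced feature set (height, width, area only) are simplified stand-ins for the patent's per-feature error ranges and the later relation/texture matching stages.

```python
import numpy as np
from scipy import ndimage

def feedback_segment(img, library, n_targets, tol=0.3):
    """Sketch of the feedback-segmentation loop over merged gray levels."""
    levels = np.unique(img)[::-1]            # candidate thresholds, large to small
    for t in levels[:-1]:                    # skip the lowest level
        labels, n = ndimage.label(img >= t)
        kept = []
        for obj_id, sl in enumerate(ndimage.find_objects(labels), start=1):
            h = sl[0].stop - sl[0].start     # bounding-box height
            w = sl[1].stop - sl[1].start     # bounding-box width
            s = int((labels[sl] == obj_id).sum())
            errs = (abs(h - library["H"]) / library["H"],
                    abs(w - library["W"]) / library["W"],
                    abs(s - library["S"]) / library["S"])
            if max(errs) <= tol:
                kept.append(sl)
        if len(kept) >= n_targets:           # enough regions: stop the feedback
            return kept
    return []                                # no building found in the image

img = np.zeros((100, 100), dtype=np.uint8)
img[10:40, 10:25] = 200                      # one 30 x 15 candidate building
regions = feedback_segment(img, {"H": 30, "W": 15, "S": 450}, n_targets=1)
```

The "feedback" is the outer loop: whenever too few regions survive matching, the threshold is lowered to the next gray level and segmentation is repeated, exactly as in the region-count judgments above.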
In the method for identifying and positioning a ground building, the step of detecting the vertical bar feature may sequentially include the following processes:
(1) Line detection, namely converting an original input image into a binary image by using the average gray value of the original input image as a threshold value, respectively detecting the binary image by using a vertical line template and a horizontal line template to obtain line images, and merging the line images in two directions into a result line image;
wherein the vertical line template is:
-1 2 -1
-1 2 -1
-1 2 -1
wherein the horizontal line template is:
-1 -1 -1
2 2 2
-1 -1 -1
(2) And (4) screening the lengths of the vertical bars, calculating the length of each vertical bar in the result line image, comparing the length of each vertical bar with the high characteristic quantity in the standard characteristic library, and if the length of each vertical bar is less than the high characteristic quantity in the standard characteristic library, discarding the vertical bars to screen the vertical bars meeting the conditions.
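The two 3 × 3 templates and the length screening can be sketched directly. The response threshold of 6 is my reading of the templates (an interior pixel of a one-pixel-wide line gives 3 × 2 = 6), not a value stated in the patent, and the run-length screening helper is an illustrative implementation.

```python
import numpy as np
from scipy import ndimage

# The 3 x 3 line templates from the vertical-bar detection step.
VERTICAL = np.array([[-1, 2, -1],
                     [-1, 2, -1],
                     [-1, 2, -1]])
HORIZONTAL = np.array([[-1, -1, -1],
                       [ 2,  2,  2],
                       [-1, -1, -1]])

def detect_lines(binary):
    """Convolve a 0/1 binary image with both templates and merge responses.
    Interior pixels of one-pixel-wide lines reach the maximum response 6;
    line endpoints respond more weakly and are dropped by the threshold."""
    b = binary.astype(np.int32)
    v = ndimage.convolve(b, VERTICAL, mode="constant", cval=0)
    h = ndimage.convolve(b, HORIZONTAL, mode="constant", cval=0)
    return ((v >= 6) | (h >= 6)).astype(np.uint8)

def screen_vertical_bars(lines, min_len):
    """Length screening: keep only vertical runs of at least `min_len`
    line pixels; shorter bars are discarded."""
    out = np.zeros_like(lines)
    for c in range(lines.shape[1]):
        edges = np.flatnonzero(np.diff(np.pad(lines[:, c], 1)))  # run starts/stops
        for start, stop in zip(edges[::2], edges[1::2]):
            if stop - start >= min_len:
                out[start:stop, c] = 1
    return out

binary = np.zeros((30, 30), dtype=np.uint8)
binary[5:25, 10] = 1                       # a vertical bar, 20 pixels long
lines = detect_lines(binary)
bars = screen_vertical_bars(lines, min_len=10)
```

In the full method the `min_len` comparison would use the height feature quantity from the standard feature library rather than a fixed number.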
In the method for identifying and positioning the ground buildings, the secondary feature matching step may sequentially include the following processes:
(1) Judge whether the corresponding region of each region of interest retained in the feedback segmentation step exhibits vertical-bar features from the vertical-bar feature detection step; if it does, reduce the region's total error value according to a preset weight; if it does not, keep the total error value unchanged;
(2) Judging whether the total error value of a certain interested area is smaller than a specified threshold, if so, determining the interested area as a building area to be identified, otherwise, discarding the interested area;
(3) And a positioning step, namely corresponding the building region to be identified to a corresponding region on the original input image, marking the center of gravity of the region and completing the positioning of the target.
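The secondary matching logic above reduces to a few lines. The weight and threshold values here are illustrative placeholders for the patent's preset values, and the dictionary-based region bookkeeping is invented for the sketch.

```python
def secondary_match(total_errors, has_vertical_bar, weight=0.2, thresh=0.5):
    """Secondary feature matching: a region of interest whose corresponding
    area also shows vertical-bar features has its total matching error
    scaled down before the final threshold test."""
    accepted = []
    for rid, err in total_errors.items():
        if has_vertical_bar.get(rid, False):
            err *= (1.0 - weight)        # vertical-bar evidence raises confidence
        if err < thresh:
            accepted.append(rid)         # region identified as a building
    return accepted

# Region 1 is saved by its vertical-bar evidence; region 2 is rejected.
result = secondary_match({1: 0.55, 2: 0.55, 3: 0.40}, {1: True})
```

This fusion step is what lets the vertical-bar channel rescue a building whose shape-feature match was borderline, without admitting regions that fail both tests.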
According to the method, a standard feature library of the building to be identified is constructed in advance, the structure information of the image is extracted by using mathematical morphology according to the characteristics of the ground building, the feature quantity is further extracted to be matched with the standard feature library on the basis, and the texture and scene information of the building to be identified are considered to identify and position the front ground building. The method is successfully used for identifying and positioning the forward-looking ground buildings with complex backgrounds on the helicopter, has high identification precision and good reliability, and is suitable for identifying the forward-looking ground buildings with different viewpoints, different scales and different heights.
Drawings
Fig. 1 is a schematic diagram of a helicopter recognizing and positioning a three-dimensional ground building in forward view, showing the helicopter 1, the viewpoint 2, the field angle 3, the imaging height 4, and the horizontal distance 5 to the target;
FIG. 2 is a schematic flow diagram of the present invention;
FIG. 3 is a schematic diagram of a feedback segmentation process;
FIG. 4 is a view of the shape characteristics of a telecom building from different viewpoints and different scales;
FIG. 5 shows scene feature views of the telecom building, where (A) is the scene feature view imaged at an imaging distance of 6 km and an imaging height of 1 km while flying 45° east of north; (B) at an imaging distance of 6 km and an imaging height of 2 km; (C) at an imaging distance of 10 km and an imaging height of 1 km; and (D) at an imaging distance of 10 km and an imaging height of 2 km, all in the 45° east-of-north direction;
FIG. 6 shows texture feature views of the telecom building, where (A) is the texture feature view imaged at an imaging distance of 6 km and an imaging height of 1 km while flying 45° east of north; (B) at an imaging distance of 6 km and an imaging height of 2 km; (C) at an imaging distance of 10 km and an imaging height of 1 km; and (D) at an imaging distance of 10 km and an imaging height of 2 km, all in the 45° east-of-north direction;
FIG. 7 is an image of the telecom building imaged at an imaging distance of 6 km and an imaging height of 1 km while flying 45° east of north;
FIG. 8 is the morphologically enhanced image of FIG. 7;
fig. 9 is the morphological background suppression image of fig. 8;
FIG. 10 is a binary image obtained by combining the gray levels of FIG. 9 and then performing a first threshold segmentation;
FIG. 11 is a binary image obtained by combining the gray levels of FIG. 9 and then performing a second threshold segmentation;
fig. 12 is a binary image obtained by combining the gray levels of fig. 9 and then performing threshold segmentation for the third time;
FIG. 13 is the binary image converted from FIG. 7;
FIG. 14 is a line image of the results of the line template inspection of FIG. 13;
FIG. 15 is an image of the vertical bar screen results of FIG. 14;
FIG. 16 is an image of telecommunications building object location.
Detailed Description
A schematic diagram of a helicopter recognizing and positioning a three-dimensional ground building in forward view is shown in fig. 1. The steps of the invention are described below with reference to fig. 2, taking a forward-looking telecom building as an example:
(1) A standard feature library is constructed in advance, feature quantities are extracted from three feature views, namely a telecom building shape feature view, a scene feature view and a texture feature view, of different viewpoints and different scales, the telecom building shape feature view is shown in figure 4, the telecom building scene feature view is shown in figure 5, and the telecom building texture feature view is shown in figure 6.
A standard feature library of the telecom building is constructed for the case of flying 45° east of north at an imaging distance of 6 km and an imaging height of 1 km:
(1.1) calculating each feature amount:
① K_h, K_w, K_s:
The product of the height of the telecom building in the scene view and its imaging distance:
K_h = (1/10) Σ K_hi, with K_hi = h_i × D_i, i = 1, …, 10; calculated as K_h = 550.
The product of the width of the telecom building in the scene view and its imaging distance:
K_w = (1/10) Σ K_wi, with K_wi = w_i × D_i, i = 1, …, 10; calculated as K_w = 144.
The area factor of the telecom building:
K_s = (1/10) Σ K_si, with K_si = s_i / (h_i × w_i), i = 1, …, 10; calculated as K_s = 0.79.
In the formulas, h_i, w_i and s_i are the height, width and area of the telecom building in the scene view at the different viewpoints and scales, and D_i is the imaging distance.
② The height feature H_1, width feature W_1, perimeter feature C_1, area feature S_1 and shape factor F_1 of the telecom building imaged at an imaging distance of 6 km and an imaging height of 1 km while flying 45° east of north:
H_1 = K_h / D_1, W_1 = K_w / D_1, S_1 = K_s × H_1 × W_1, C_1 = 2 × (H_1 + W_1)
F_1 = C_1^2 / (4π × S_1)
where the imaging distance D_1 is 6 km.
Calculated: H_1 = 91 pixels, W_1 = 24 pixels, C_1 = 230 pixels, S_1 = 1725 pixels, F_1 = 2.44.
③ The morphological enhancement structural element and the morphological background suppression structural element for flying 45° east of north at an imaging distance of 6 km and an imaging height of 1 km:
The morphological enhancement structural element is a rectangle of 1 pixel × 5 pixels;
The morphological background suppression structural element is generally a rectangle formed from the width feature W_1 and the height feature H_1; here a rectangle of 0.6 times the height and width features of the telecom building is used, i.e. the morphological background suppression structural element is 55 pixels × 14 pixels;
④ K_1, D_ij, T_1 and T_in for flying 45° east of north at an imaging distance of 6 km and an imaging height of 1 km:
There are no other buildings around the telecom building, so its relation to the surrounding buildings is K_1 = 0. Since the telecom building is the only target to be identified, the distance D_ij between ground buildings to be identified and the texture feature T_in of the surrounding scene need not be considered. The internal texture feature formula gives T_1 = 7.8.
(1.2) Storing the characteristic values in a database to obtain the standard feature library of the telecommunications building for flight in the direction 45 degrees north of east at an imaging distance of 6 km and an imaging height of 1 km.
(2) An image enhancement step: performing histogram equalization on the original input image. Histogram equalization enhances image contrast where the dynamic range is small and increases the dynamic range of pixel gray values, thereby enhancing the overall contrast of the image. The original input image is shown in fig. 7.
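The histogram equalization of this step can be sketched with numpy; this is a standard textbook implementation, not code from the patent, and it assumes an 8-bit, non-constant grayscale image.

```python
import numpy as np

def equalize(img):
    """Histogram-equalize an 8-bit grayscale image (2-D uint8 array)."""
    hist = np.bincount(img.ravel(), minlength=256)  # gray-level histogram
    cdf = hist.cumsum()
    cdf_min = cdf[np.nonzero(cdf)[0][0]]            # first occupied bin
    if img.size == cdf_min:                         # constant image: nothing to do
        return img.copy()
    # Map the cumulative distribution onto the full 0..255 range
    lut = np.clip(
        np.round((cdf - cdf_min) / (img.size - cdf_min) * 255), 0, 255
    ).astype(np.uint8)
    return lut[img]
```

Applied to a low-contrast image (e.g. gray values clustered in 100..103), the output spreads the occupied levels across the full 0..255 range, which is the contrast-stretching effect the step relies on.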
(3) A background suppression processing step, which sequentially comprises the following processes:
(3.1) Morphological enhancement: selecting a rectangle of 1 pixel × 5 pixels as the morphological enhancement structural element and performing a closing operation on the histogram-equalized image, which reduces the internal texture information of the building to be identified and enhances image contrast. The result after morphological enhancement is shown in fig. 8.
(3.2) Morphological background suppression: performing an opening operation on fig. 8 with a rectangular morphological structural element of 55 pixels × 14 pixels, filtering out buildings or backgrounds whose shape and size differ markedly from the telecommunications building and reducing the number of image gray levels. The result is shown in fig. 9.
(4) A gray level merging step: performing histogram statistics on the background-suppressed image, judging the occurrence frequency of each gray value against a threshold, and merging every gray value whose frequency is below the threshold into the nearest-neighbor gray value whose frequency is at or above the threshold; the threshold here is 300.
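A minimal numpy sketch of this merging rule follows. The function name is ours, and the patent only specifies "nearest neighbor", so the tie-breaking between two equally near frequent levels is our choice.

```python
import numpy as np

def merge_gray_levels(img, thresh=300):
    """Merge rare gray levels of an 8-bit image: any level whose pixel
    count is below `thresh` is remapped to the nearest level whose
    count is at or above `thresh`."""
    hist = np.bincount(img.ravel(), minlength=256)
    keep = np.flatnonzero(hist >= thresh)       # the frequent levels
    if keep.size == 0:                          # nothing frequent: leave image alone
        return img.copy()
    # For every gray level 0..255, find the nearest frequent level
    lut = keep[np.abs(keep[None, :] - np.arange(256)[:, None]).argmin(axis=1)]
    return lut[img].astype(img.dtype)
```

The effect is to collapse the histogram onto a handful of dominant levels, which keeps the subsequent gray-level threshold segmentation loop short.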
(5) Feedback segmentation step
(5.1) A threshold segmentation step: taking a gray level as the threshold, performing gray threshold segmentation on the gray-level-merged image to convert it into a binary image. The result of the first segmentation is shown in fig. 10, the second in fig. 11, and the third in fig. 12.
(5.2) A feature extraction step: marking each region in the binary image and calculating the feature quantities of each marked region: area, center of gravity, perimeter, height, width and shape factor (in the barycentric coordinates (X, Y) below, X denotes the ordinate and Y the abscissa).
The first segmentation yields one region of interest: area S1' = 308 pixels, barycentric coordinates (X1, Y1) = (239, 12) pixels, perimeter C1' = 80 pixels, height H1' = 33 pixels, width W1' = 10 pixels, shape factor F1' = 1.78.
The second segmentation yields two regions of interest: areas S1' = 474 and S2' = 692 pixels; barycentric coordinates (X1, Y1) = (165, 158) and (X2, Y2) = (239, 14) pixels; perimeters C1' = 170 and C2' = 112 pixels; heights H1' = 81 and H2' = 34 pixels; widths W1' = 8 and W2' = 23 pixels; shape factors F1' = 4.95 and F2' = 1.39.
The third segmentation yields six regions of interest: areas S1' = 1343, S2' = 153, S3' = 1038, S4' = 93, S5' = 180, S6' = 140 pixels; barycentric coordinates (160, 154), (180, 57), (237, 15), (239, 38), (239, 311), (240, 53) pixels; perimeters C1' = 186, C2' = 104, C3' = 150, C4' = 64, C5' = 68, C6' = 62 pixels; heights H1' = 86, H2' = 51, H3' = 40, H4' = 31, H5' = 30, H6' = 28 pixels; widths W1' = 19, W2' = 3, W3' = 31, W4' = 3, W5' = 6, W6' = 5 pixels; shape factors F1' = 2.43, F2' = 5.65, F3' = 1.43, F4' = 3.68, F5' = 2.13, F6' = 2.30.
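The per-region feature quantities of step (5.2) can be computed with one connected-component pass over the binary image. A plain-Python sketch follows; the 4-connectivity and the perimeter definition (count of pixel edges facing background) are our assumptions, since the patent does not specify how the perimeter is measured.

```python
from collections import deque
import math

def region_features(mask):
    """Label 4-connected foreground regions of a binary mask (list of
    lists of 0/1) and return per-region dicts with area, centroid,
    height, width, perimeter and form factor C^2/(4*pi*S)."""
    rows, cols = len(mask), len(mask[0])
    seen = [[False] * cols for _ in range(rows)]
    feats = []
    for sy in range(rows):
        for sx in range(cols):
            if mask[sy][sx] and not seen[sy][sx]:
                q, px = deque([(sy, sx)]), []       # BFS flood fill
                seen[sy][sx] = True
                while q:
                    y, x = q.popleft()
                    px.append((y, x))
                    for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                        if 0 <= ny < rows and 0 <= nx < cols \
                                and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            q.append((ny, nx))
                ys = [y for y, _ in px]
                xs = [x for _, x in px]
                area = len(px)
                # perimeter: pixel edges that face background or the border
                per = sum(1 for y, x in px
                          for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1))
                          if not (0 <= ny < rows and 0 <= nx < cols and mask[ny][nx]))
                feats.append({
                    "area": area,
                    "centroid": (sum(ys) / area, sum(xs) / area),
                    "height": max(ys) - min(ys) + 1,
                    "width": max(xs) - min(xs) + 1,
                    "perimeter": per,
                    "form_factor": per * per / (4 * math.pi * area),
                })
    return feats
```

Each returned dict carries the same six quantities the step extracts per marked region, ready to be compared against the standard feature library.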
(5.3) A feature matching step: matching each region of interest against the feature quantities in the standard feature library; if a region of interest matches successfully, it is retained as a region of interest for the next stage. The feature quantities extracted after the first and second segmentations were matched against the standard feature library without any region of interest being retained; after matching the feature quantities extracted from the third segmentation, the first region was found to match successfully and was recorded as a region of interest for the next stage.
(5.4) Judging the number of regions of interest: after the first and second segmentations no regions of interest were retained, so the method returns to the next threshold segmentation; after the third segmentation the number of retained regions of interest is not less than the number of buildings to be identified, so the method proceeds to the next step.
(5.5) Matching the relation feature between a region of interest and its adjacent regions of interest: for the first region of interest retained from the third segmentation in (5.3), the relation feature K1' with its adjacent regions of interest is calculated as K1' = 0, which matches the corresponding value in the feature library, so this region of interest is retained.
(5.6) Judging the number of regions of interest: the number of regions of interest retained from the third segmentation is not less than the number of buildings to be identified, so the method proceeds to the next step.
(5.7) Matching the relations among multiple regions of interest: the telecommunications building is the only building to be identified, so the relations among multiple regions of interest are not considered.
(5.8) Matching the texture features of the region corresponding to the region of interest: the first region of interest in the third segmented image is mapped back to the corresponding region in the original input image; according to the texture characteristics of the telecommunications building the texture type is determined to be horizontal, and the texture feature is calculated by the horizontal internal texture formula.
Here H_o is the height of the region of the original input image corresponding to the region of interest and W_o is the width of the telecommunications building in the original input image. Calculation gives T1' = 7.81, which matches the corresponding value in the feature library, so the region is retained.
(5.9) Judging the number of regions of interest: the number of regions of interest retained from the third segmentation is not less than the number of buildings to be identified, so the method proceeds to the next step.
(5.10) The telecommunications building is the only building to be identified, so texture feature matching of the regions corresponding to the adjacent regions of interest is not considered, and the method proceeds to the secondary feature matching step.
(6) Vertical bar characteristic detection step
(6.1) Line detection: converting the original input image into a binary image using its average gray value as the threshold, as shown in fig. 13; detecting the binary image of fig. 13 with a vertical line template and a horizontal line template respectively to obtain line images, and merging the line images of the two directions into a resulting line image, shown in fig. 14;
wherein the vertical line template is:
-1 2 -1
-1 2 -1
-1 2 -1
wherein the horizontal line template is:
-1 -1 -1
2 2 2
-1 -1 -1
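The two 3×3 templates above can be applied by direct correlation over the binary image. In the sketch below the templates are taken from the text, but the decision rule (a pixel belongs to a line when its template response is positive) and the handling of border pixels are our assumptions.

```python
import numpy as np

V_TEMPLATE = np.array([[-1, 2, -1]] * 3)  # vertical line template (from the text)
H_TEMPLATE = V_TEMPLATE.T                 # horizontal line template (its transpose)

def detect_lines(binary):
    """Correlate a 0/1 image with both 3x3 line templates and merge the
    positive responses into one result line image (interior pixels only)."""
    img = binary.astype(int)
    out = np.zeros_like(img)
    for ker in (V_TEMPLATE, H_TEMPLATE):
        resp = np.zeros_like(img)
        for y in range(1, img.shape[0] - 1):
            for x in range(1, img.shape[1] - 1):
                resp[y, x] = (ker * img[y - 1:y + 2, x - 1:x + 2]).sum()
        out |= resp > 0          # a positive response marks a line pixel
    return out
```

A one-pixel-wide vertical run of 1s excites only the vertical template (center column weighted +2 against the -1 flanks), and a horizontal run only the horizontal one, so OR-ing the two responses reproduces the "merge the line images of the two directions" step.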
(6.2) Vertical bar length screening: the length of each vertical bar in fig. 14 is calculated. Because the bottom of the telecommunications building adjoins a side building and its top is triangular, half of the height characteristic quantity in the standard feature library is taken as the comparison threshold; vertical bars shorter than this threshold are discarded, screening out the vertical bars that meet the condition. The result of vertical bar height screening is shown in fig. 15.
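The halved-threshold screening can be sketched as a per-column run-length scan; the function name and the 0/1 list-of-lists representation are ours, while the half-height threshold comes from the step above.

```python
def screen_vertical_bars(line_img, height_feature):
    """Keep only vertical runs of 1s at least half the height feature
    long (halved to account for the side building at the bottom and the
    triangular top of the telecom building)."""
    thresh = height_feature / 2
    rows, cols = len(line_img), len(line_img[0])
    out = [[0] * cols for _ in range(rows)]
    for x in range(cols):
        y = 0
        while y < rows:
            if line_img[y][x]:
                start = y
                while y < rows and line_img[y][x]:   # walk down the run
                    y += 1
                if y - start >= thresh:              # bar long enough: keep it
                    for yy in range(start, y):
                        out[yy][x] = 1
            else:
                y += 1
    return out
```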
(7) Secondary feature matching step
(7.1) The region corresponding to the one region of interest retained in the feedback segmentation contains vertical bar features from the vertical bar feature detection, so its total error value is reduced by the preset weight.
(7.2) The total error value of this region of interest is smaller than the specified threshold, so the region of interest is determined to be the telecommunications building region.
(7.3) A positioning step: the region of interest determined to be the telecommunications building is mapped to the corresponding region of the original input image and the center of gravity of the region is marked, completing the positioning of the target; the positioning result is shown in fig. 16.
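Steps (7.1) and (7.2) can be summarized in a few lines of code. This is an illustrative sketch only: the weight and threshold values, the dict layout and the `has_bar` predicate are our assumptions; the patent leaves the preset weight and specified threshold to the implementer.

```python
def secondary_match(regions, has_bar, weight=0.5, thresh=1.0):
    """Reduce each retained region's total error by `weight` when its
    area contains vertical-bar features, then accept regions whose
    adjusted total error is below `thresh` (values are illustrative)."""
    accepted = []
    for r in regions:
        err = r["total_error"] - weight if has_bar(r) else r["total_error"]
        if err < thresh:
            accepted.append(r)     # this region is declared the target
    return accepted
```

With a weight of 0.5, a region whose total error is 1.3 passes a threshold of 1.0 only if the vertical-bar evidence supports it, which is exactly the role the secondary matching plays.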

Claims (7)

1. A ground building identification and positioning method, wherein a standard feature library is constructed in advance by extracting feature quantities from three types of feature views: ground building shape feature views at different viewpoints and different scales, scene feature views, and ground building texture feature views; the method comprises the following steps:
(1) An image enhancement step of performing histogram equalization on an original input image;
(2) A background suppression processing step, namely performing morphological enhancement and morphological background suppression on the histogram equalized image;
(3) A gray level merging step of merging the gray levels of the background-suppressed image to reduce the number of image gray levels;
(4) A feedback segmentation step of performing threshold segmentation on the gray-level-merged image to obtain a binary image, then sequentially performing, on the regions of interest of the binary image: feature extraction and matching, matching of the relation feature between each region of interest and its adjacent regions of interest, matching of the relations among multiple regions of interest, texture feature matching of the region corresponding to each region of interest, and texture feature matching of the regions corresponding to the adjacent regions of interest, the number of regions of interest being judged after each matching;
(5) A vertical bar feature detection step of converting the original input image into a binary image using its average gray value as the threshold, detecting the binary image with line templates to output a line image, calculating the length of each vertical bar in the line image, matching each vertical bar against the height characteristic quantity in the standard feature library, and screening out the vertical bars meeting the condition;
(6) A secondary feature matching step of comprehensively considering the results of the feedback segmentation step and the vertical bar feature detection step: judging whether the region corresponding to each region of interest retained in step (4) contains vertical bar features from the vertical bar feature detection step (5); if so, reducing the total error of that region of interest, otherwise leaving the total error unchanged; matching each region of interest according to its total error; and, if the matching succeeds, retaining the region of interest and returning to the original input image to position the building.
2. The ground building identification and positioning method as claimed in claim 1, wherein the sequence of pre-constructing the standard feature library comprises the following processes:
(1) Calculating each feature quantity:
(1.1) The product K_h of the height of the ground building to be identified and its imaging distance in the scene view, the product K_w of the width of the ground building to be identified and its imaging distance in the scene view, and the area factor K_s of the ground building to be identified:
K_hi = h_i × D_i, i = 1, 2, 3, …
K_wi = w_i × D_i, i = 1, 2, 3, …
K_si = s_i/(h_i × w_i), i = 1, 2, 3, …
wherein h_i, w_i, s_i and D_i are the height, width, area and imaging distance of the building to be identified in the scene view at different viewpoints and different scales;
(1.2) The height characteristic quantity H_i, width characteristic quantity W_i, perimeter characteristic quantity C_i, area characteristic quantity S_i and shape factor F_i of the imaged building at different viewpoints and different scales:
H_i = K_h/D_i, W_i = K_w/D_i, S_i = K_s × H_i × W_i, C_i = 2 × (H_i + W_i), F_i = C_i²/(4πS_i)
(1.3) morphological enhancement structural elements and morphological background suppression structural elements under different viewpoints and different scales;
the morphological enhancement structural element at different viewpoints and different scales is a rectangle of height × width 1 pixel × N pixels, N being a natural number from 3 to 7;
the morphological background suppression structural element at different viewpoints and different scales is the rectangle formed by the width characteristic quantity W_i and the height characteristic quantity H_i;
(1.4) The relation K_i between the ground building to be identified and its surrounding buildings, the distance D_ij between ground buildings to be identified, the internal texture feature T_i of the ground building to be identified, and the texture feature T_in of the scene surrounding the ground building to be identified at different viewpoints and different scales:
Figure A2007100529280003C5
wherein h_i is the height of the ground building i to be identified in the scene view and h_in is the height of the surrounding buildings of the ground building i to be identified in the scene view;
when the number of ground buildings to be identified is greater than 1, D_ij is considered:
wherein (x_i, y_i) are the barycentric coordinates of the ground building i to be identified and (x_j, y_j) are those of the ground building j to be identified; the longitudinal distance weight p is 2 to 5 and the transverse distance weight q is 1 to 3; the minimum of the distances between a ground building to be identified and each of its neighboring ground buildings to be identified is taken as that building's minimum distance;
if the ground building to be identified is mainly characterized by horizontal texture, T_i is:
Figure A2007100529280004C2
if the ground building to be identified is mainly characterized by vertical texture, T_i is:
Figure A2007100529280004C3
wherein h_i is the height and w_i the width of the ground building i to be identified in the scene view;
when the number of ground buildings to be identified is greater than 1, T_in is considered:
if the buildings surrounding the ground building to be identified are mainly characterized by horizontal texture, T_in is:
Figure A2007100529280004C4
if the buildings surrounding the ground building to be identified are mainly characterized by vertical texture, T_in is:
wherein h_in is the height and w_in the width of the surrounding buildings of the ground building i to be identified in the scene view;
(2) Storing the characteristic values in a database to obtain the standard feature library of the ground buildings at different viewpoints and different scales, the viewpoint being determined by the imaging height, the imaging distance and the azimuth angle of the imaging point, and the scale being determined by the imaging distance.
3. The ground building identification and positioning method as claimed in claim 1, wherein the background suppression processing sequence comprises the following processes:
(1) Morphological enhancement: selecting a rectangle of height × width 1 pixel × N pixels at different viewpoints and different scales as the morphological enhancement structural element and performing a closing operation on the histogram-equalized image to reduce the texture information inside the building to be identified and enhance the contrast of the image, N being a natural number from 3 to 7;
(2) Morphological background suppression: selecting the rectangle formed by the width characteristic quantity and the height characteristic quantity in the standard feature library at different viewpoints and different scales as the morphological background suppression structural element and performing an opening operation on the morphologically enhanced image to filter out buildings or backgrounds that differ markedly in shape from the building to be identified, thereby reducing the number of image gray levels.
4. The ground building identification and positioning method as claimed in claim 1, wherein the gray level merging step performs histogram statistics on the background-suppressed image, judges the occurrence frequency of each gray value against a threshold, and merges every gray value whose frequency is below the threshold into the nearest-neighbor gray value whose frequency is at or above the threshold, the threshold being an integer from 200 to 500.
5. The method as claimed in claim 1, wherein the feedback segmentation step sequentially comprises the following processes:
(1) Threshold segmentation: taking a gray level as the threshold, performing gray threshold segmentation on the gray-level-merged image to convert it into a binary image;
(2) Feature extraction and matching: marking each region in the binary image and calculating the feature quantities of each marked region: area, center of gravity, perimeter, height, width and shape factor; comparing the feature quantities of each marked region against the corresponding feature quantities in the standard feature library and calculating the error of each feature quantity and the total error, the total error being the sum of the individual feature-quantity errors;
(3) Judging the number of regions of interest: judging whether the number of retained regions of interest is not less than the number of buildings to be identified; if so, proceeding to the next process; otherwise judging whether all gray levels have been segmented; if not, lowering the gray-level threshold and returning to process (1); if so, determining that the image does not contain the building to be identified;
(4) Matching the relation feature between each region of interest and its adjacent regions of interest, calculating for each region of interest the relation feature K_i' with its adjacent regions of interest:
Figure A2007100529280006C1
wherein H_i' is the height of region of interest i and H_in' is the height of the regions of interest adjacent to region of interest i; each K_i' is compared with the corresponding value in the standard feature library; if the error is within the specified range the matching succeeds and the region of interest is retained, otherwise it is discarded;
(5) Judging the number of regions of interest: judging whether the number of retained regions of interest is not less than the number of buildings to be identified; if so, proceeding to the next process; otherwise judging whether all gray levels have been segmented; if not, lowering the gray-level threshold and returning to process (1); if so, determining that the image does not contain the building to be identified;
(6) Matching the relations among multiple regions of interest: judging whether the number of buildings to be identified is greater than 1; if not, proceeding to process (8); if so, calculating the distance between each pair of regions of interest retained in process (4):
wherein (X_i, Y_i) and (X_j, Y_j) are the barycentric coordinates of region of interest i and region of interest j respectively; the longitudinal distance weight p is 2 to 5 and the transverse distance weight q is 1 to 3; the minimum of the distances between a region of interest and each of its adjacent regions of interest is taken as that region's minimum distance; the minimum distance of each region of interest is compared with the corresponding value in the standard feature library; if the error is within the specified range the matching succeeds and the region of interest is retained, otherwise it is discarded;
(7) Judging the number of regions of interest: judging whether the number of retained regions of interest is not less than the number of buildings to be identified; if so, proceeding to the next process; otherwise judging whether all gray levels have been segmented; if not, lowering the gray-level threshold and returning to process (1); if so, determining that the image does not contain the building to be identified;
(8) Matching the texture features of the regions corresponding to the regions of interest: finding in the original input image the regions corresponding to the regions of interest retained in process (6), calculating the texture features of those regions, and comparing them with the corresponding texture features in the standard feature library; if the error is within the specified range the matching succeeds and the region of interest is retained, otherwise it is discarded;
(9) Judging the number of regions of interest: judging whether the number of retained regions of interest is not less than the number of buildings to be identified; if so, proceeding to the next process; otherwise judging whether all gray levels have been segmented; if not, lowering the gray-level threshold and returning to process (1); if so, determining that the image does not contain the building to be identified;
(10) Matching the texture features of the regions corresponding to the adjacent regions of interest: judging whether the number of buildings to be identified is greater than 1; if not, proceeding to the secondary feature matching step; if so, finding in the original input image the regions corresponding to the adjacent regions of interest of each region of interest retained in process (8), calculating the texture features of those regions respectively, and comparing them with the corresponding texture features in the standard feature library; if the error is within the specified range the matching succeeds and the region of interest is retained, otherwise it is discarded;
(11) Judging the number of regions of interest: judging whether the number of retained regions of interest is not less than the number of buildings to be identified; if so, proceeding to the secondary feature matching step; otherwise judging whether all gray levels have been segmented; if not, lowering the gray-level threshold and returning to process (1); if so, determining that the image does not contain the building to be identified.
6. The method as claimed in claim 1, wherein the vertical bar feature detection step sequentially comprises the following processes:
(1) Line detection: converting the original input image into a binary image using its average gray value as the threshold, detecting the binary image with a vertical line template and a horizontal line template respectively to obtain line images, and merging the line images of the two directions into a resulting line image;
wherein the vertical line template is:
-1 2 -1
-1 2 -1
-1 2 -1
wherein the horizontal line template is:
-1 -1 -1
2 2 2
-1 -1 -1
(2) Vertical bar length screening: calculating the length of each vertical bar in the resulting line image and comparing it with the height characteristic quantity in the standard feature library; if the length of a vertical bar is less than the height characteristic quantity, the bar is discarded, thereby screening out the vertical bars meeting the condition.
7. The method as claimed in claim 1, wherein the secondary feature matching step sequentially comprises the following processes:
(1) Judging whether the region corresponding to each region of interest retained in the feedback segmentation step contains vertical bar features from the vertical bar feature detection step; if so, reducing that region's total error value by a preset weight; otherwise keeping the total error value unchanged;
(2) Judging whether the total error value of a region of interest is smaller than the specified threshold; if so, determining the region of interest to be the region of the building to be identified, otherwise discarding it;
(3) A positioning step: mapping the region of the building to be identified to the corresponding region of the original input image and marking the center of gravity of the region, completing the positioning of the target.
CNB2007100529284A 2007-08-08 2007-08-08 A kind of above ground structure recognition positioning method Expired - Fee Related CN100547603C (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CNB2007100529284A CN100547603C (en) 2007-08-08 2007-08-08 A kind of above ground structure recognition positioning method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CNB2007100529284A CN100547603C (en) 2007-08-08 2007-08-08 A kind of above ground structure recognition positioning method

Publications (2)

Publication Number Publication Date
CN101114337A true CN101114337A (en) 2008-01-30
CN100547603C CN100547603C (en) 2009-10-07

Family

ID=39022670

Family Applications (1)

Application Number Title Priority Date Filing Date
CNB2007100529284A Expired - Fee Related CN100547603C (en) 2007-08-08 2007-08-08 A kind of above ground structure recognition positioning method

Country Status (1)

Country Link
CN (1) CN100547603C (en)


Cited By (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101532841B (en) * 2008-12-30 2010-09-08 华中科技大学 Method for navigating and positioning aerocraft based on landmark capturing and tracking
CN101726297B (en) * 2009-12-18 2011-11-30 华中科技大学 Plane landmark selection and reference map preparation method for front-view navigation guidance
CN101846513A (en) * 2010-06-17 2010-09-29 中国人民解放军信息工程大学 Sign image recognition and center coordinate extraction method
CN103679129A (en) * 2012-09-21 2014-03-26 中兴通讯股份有限公司 Method and device for identifying object in image
WO2014044158A1 (en) * 2012-09-21 2014-03-27 中兴通讯股份有限公司 Identification method and device for target object in image
CN103714541A (en) * 2013-12-24 2014-04-09 华中科技大学 Method for identifying and positioning building through mountain body contour area constraint
US20150248579A1 (en) * 2013-12-24 2015-09-03 Huazhong University Of Science And Technology Method for identifying and positioning building using outline region restraint of mountain
US9454692B2 (en) * 2013-12-24 2016-09-27 Huazhong University Of Science And Technology Method for identifying and positioning building using outline region restraint of mountain
CN108292140A (en) * 2015-12-09 2018-07-17 深圳市大疆创新科技有限公司 System and method for making a return voyage automatically
CN108292140B (en) * 2015-12-09 2022-03-15 深圳市大疆创新科技有限公司 System and method for automatic return voyage
US11300413B2 (en) 2015-12-09 2022-04-12 SZ DJI Technology Co., Ltd. Systems and methods for auto-return
US11879737B2 (en) 2015-12-09 2024-01-23 SZ DJI Technology Co., Ltd. Systems and methods for auto-return
CN109691162A (en) * 2016-09-08 2019-04-26 华为技术有限公司 A kind of network site method and device for planning
WO2019095681A1 (en) * 2017-11-16 2019-05-23 珊口(上海)智能科技有限公司 Positioning method and system, and suitable robot
US11099577B2 (en) 2017-11-16 2021-08-24 Ankobot (Shanghai) Smart Technologies Co., Ltd. Localization method and system, and robot using the same
CN109163718A (en) * 2018-09-11 2019-01-08 江苏航空职业技术学院 A kind of unmanned plane autonomous navigation method towards groups of building
CN112348884A (en) * 2019-08-09 2021-02-09 华为技术有限公司 Positioning method, terminal device and server
WO2021027676A1 (en) * 2019-08-09 2021-02-18 华为技术有限公司 Visual positioning method, terminal, and server

Also Published As

Publication number Publication date
CN100547603C (en) 2009-10-07

Similar Documents

Publication Publication Date Title
CN101114337A (en) Ground buildings recognition positioning method
CN108961235B (en) Defective insulator identification method based on YOLOv3 network and particle filter algorithm
CN109829398B (en) Target detection method in video based on three-dimensional convolution network
CN110097536B (en) Hexagonal bolt looseness detection method based on deep learning and Hough transform
CN108596055B (en) Airport target detection method of high-resolution remote sensing image under complex background
CN110717489B (en) Method, device and storage medium for identifying text region of OSD (on Screen display)
CN109446895B (en) Pedestrian identification method based on human head features
CN110070567B (en) Ground laser point cloud registration method
CN104063711B (en) A kind of corridor end point fast algorithm of detecting based on K means methods
CN108305260B (en) Method, device and equipment for detecting angular points in image
CN103714541A (en) Method for identifying and positioning buildings using a mountain contour region constraint
CN105976376B (en) High-resolution SAR image target detection method based on component model
CN112396619B (en) Small particle segmentation method based on semantic segmentation and internally complex composition
CN110399820B (en) Visual recognition analysis method for roadside scene of highway
CN107092871A (en) Remote sensing image building detection method based on multiple dimensioned multiple features fusion
CN109635733B (en) Parking lot and vehicle target detection method based on visual saliency and queue correction
CN111783721B (en) Lane line extraction method of laser point cloud and electronic equipment
CN111079596A (en) System and method for identifying typical man-made marine targets in high-resolution remote sensing images
CN114677554A (en) Statistical filtering infrared small target detection tracking method based on YOLOv5 and Deepsort
CN111738114B (en) Vehicle target detection method based on anchor-free accurate sampling remote sensing image
CN113177456B (en) Remote sensing target detection method based on single-stage full convolution network and multi-feature fusion
WO2018000252A1 (en) Ocean background modeling and suppression method and system for high-resolution remote sensing ocean images
CN116664559B (en) Machine vision-based memory bank damage rapid detection method
CN111178193A (en) Lane line detection method, lane line detection device and computer-readable storage medium
CN113362385A (en) Cargo volume measuring method and device based on depth image

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20091007

Termination date: 20170808