CN116740615A - Method for detecting small apron targets by fusing depth information


Info

Publication number: CN116740615A
Application number: CN202310805592.3A
Other languages: Chinese (zh)
Inventors: 樊治国 (Fan Zhiguo), 夏克江 (Xia Kejiang)
Assignee: Qingdao Gaozhong Information Technology Co., Ltd.
Filing date / priority date: 2023-07-03
Publication date: 2023-09-12
Family ID: 87901086
Legal status: Pending

Classifications

    • G06V 20/41: Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G06T 7/55: Depth or shape recovery from multiple images
    • G06V 10/56: Extraction of image or video features relating to colour
    • G06V 10/764: Arrangements for image or video recognition or understanding using pattern recognition or machine learning, using classification, e.g. of video objects
    • G06V 10/774: Generating sets of training patterns; bootstrap methods, e.g. bagging or boosting
    • G06T 2207/20228: Disparity calculation for image-based rendering

Abstract

The invention discloses a method for detecting small apron targets by fusing depth information, which specifically comprises the following steps: S1, measuring in advance the sizes of small targets such as wheel chocks and reflective cones; S2, capturing left and right images with a binocular camera and obtaining a disparity map of the current scene through a binocular stereo matching algorithm; S3, computing the corresponding U-V disparity maps from the disparity map and locating small targets such as wheel chocks and reflective cones on the apron in the U-V disparity maps; S4, extracting target images from the corresponding RGB image according to the disparity-map small-target detection results; S5, feeding the target images into a trained image classification model to obtain target class results. By extracting small-target regions through binocular stereoscopic vision, the invention effectively solves the problem of detecting small targets such as wheel chocks and reflective cones; by training a simple classifier network, the extracted small-target images are classified and class information is output.

Description

Method for detecting small apron targets by fusing depth information
Technical Field
The invention relates to the technical field of target detection, in particular to a method for detecting small apron targets by fusing depth information.
Background
The patent "Automatic identification method for reflective cone placement on an aircraft apron" (application number CN201811228974.X) uses image registration to predict the standard position of a reflective cone in the image under inspection, on the assumption that the spatial relationship between the reflective cone and the relevant part of the aircraft is relatively fixed. The method has an obvious shortcoming: in a surveillance image the reflective cone is a small target, and image registration struggles to extract feature points from small targets, so the accuracy of the method is difficult to guarantee.
The patent "automatic identification method of the wheel gear arrangement specification on the civil aircraft parking apron" (application number CN 202110295767.1) discloses an automatic identification method of the wheel gear arrangement specification on the civil aircraft parking apron, which requires that the wheel gear is manufactured to have a specified reflective mark and color, then a deep learning model is adopted to detect the position of the wheel gear, and whether the wheel gear exists or not is detected at the position of the wheel gear. The method has obvious defects: firstly, the method requires that the wheel guard is made to have specified reflective marks and colors, so that the workload of airport equipment maintenance personnel and the airport operation cost are increased; secondly, the target detection technology based on the neural network is easily affected by weather, illumination, target size and the like, and the detection accuracy is difficult to guarantee.
Disclosure of Invention
The technical problem the invention aims to solve is to provide a method for detecting small apron targets by fusing depth information, addressing the limited accuracy and recall of existing target detection algorithms on small targets.
The method for detecting small apron targets by fusing depth information is realized by the following technical scheme, which specifically comprises the following steps:
S1, measuring in advance the sizes of small targets such as wheel chocks and reflective cones;
S2, capturing left and right images with a binocular camera and obtaining a disparity map of the current scene through a binocular stereo matching algorithm;
S3, computing the corresponding U-V disparity maps from the disparity map and locating small targets such as wheel chocks and reflective cones on the apron in the U-V disparity maps;
S4, extracting target images from the corresponding RGB image according to the disparity-map small-target detection results;
S5, feeding the target images into a trained image classification model to obtain target class results.
As a preferred technical scheme, building on step S2, a dense disparity map of the apron area is obtained through binocular stereoscopic vision.
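For reference, the geometry behind this step is standard binocular triangulation; the calibration symbols below (focal length f, baseline B) are implied by the stereo setup rather than spelled out in the patent:

Z = f · B / d

where d is a pixel's disparity and Z its depth. A target of physical width W at depth Z spans only about w = f · W / Z pixels in the image, which is why a dense, reliable disparity map is the premise for locating small targets.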
As a preferred technical scheme, in step S3 the position of the apron pavement is extracted from the V-disparity map, and all targets on the apron are detected in the U-disparity map.
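This works because of how planar and upright structures project into U-V disparity space (a standard stereo-vision result rather than a derivation given in the patent): for a flat apron surface, the dominant disparity in image row v follows a straight line d(v) = a · v + b, so the pavement appears as a slanted line in the V-disparity map, while an object at roughly constant depth appears as a horizontal segment at its disparity in the U-disparity map. Pixels whose disparity exceeds the road line at their row therefore belong to objects standing on the apron.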
As a preferred technical scheme, in step S4, according to the target positions detected in the U-V disparity maps and a preset small-target size standard, all targets are filtered to keep those matching the preset small-target sizes; the corresponding regions are then cropped from the RGB image and the small-target crops are saved.
As a preferred technical scheme, in step S5 the small-target crops are fed into the corresponding image classification network, which outputs target class information to give the final small-target detection result.
The beneficial effects of the invention are as follows:
1. small-target regions are extracted through binocular stereoscopic vision, effectively solving the problem of detecting small targets such as wheel chocks and reflective cones;
2. further, by training a simple classifier network, the extracted small-target images are classified and class information is output;
3. the depth-fused detection method for apron wheel chocks and reflective cones greatly improves the detection rate and accuracy for small apron targets. Moreover, because stereoscopic vision is introduced to extract small-target regions, the computational efficiency of the algorithm is improved and the training complexity of the model is reduced.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions in the prior art, the drawings required for the embodiments or the description of the prior art are briefly introduced below. It is obvious that the drawings in the following description show only some embodiments of the invention, and that a person skilled in the art could obtain other drawings from them without inventive effort.
FIG. 1 is a schematic flow chart of the method of the invention for detecting small apron targets by fusing depth information;
FIG. 2 is an original camera image used by the method;
FIG. 3 is a disparity map produced by the method;
FIG. 4 is a U-V disparity map produced by the method;
FIG. 5 is a schematic diagram of small-target extraction in the U-V disparity maps according to the invention;
FIG. 6 shows the small-target image locations;
FIG. 7 shows the small-target classes.
Detailed Description
All of the features disclosed in this specification, or all of the steps in a method or process disclosed, may be combined in any combination, except for mutually exclusive features and/or steps.
Any feature disclosed in this specification (including any accompanying claims, abstract and drawings), may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise. That is, each feature is one example only of a generic series of equivalent or similar features, unless expressly stated otherwise.
In the description of the present invention, it should be understood that terms such as "one end", "the other end", "outer side", "upper", "inner side", "horizontal", "coaxial", "center", "end", "length", "outer end" and the like indicate orientations or positional relationships based on those shown in the drawings; they are used merely to facilitate and simplify the description, and do not indicate or imply that the device or element referred to must have a specific orientation or be constructed and operated in a specific orientation, and therefore should not be construed as limiting the invention.
Furthermore, in the description of the present invention, "plurality" means at least two, for example two or three, unless specifically defined otherwise.
Terms such as "upper" and "lower" that refer to a spatially relative position are used for ease of description, to describe one element or feature's relationship to another as illustrated in the figures. Such spatially relative terms are intended to encompass orientations of the device in use or operation in addition to the orientation depicted in the figures. For example, if the device in the figures is turned over, elements described as "below" or "beneath" other elements or features would then be oriented "above" them; thus the exemplary term "below" can encompass both above and below. The device may be otherwise oriented (rotated 90 degrees or at other orientations) and the spatially relative descriptors used herein interpreted accordingly.
In the present invention, unless explicitly specified and limited otherwise, terms such as "disposed", "coupled", "connected" and "plugged" are to be construed broadly: the connection may, for example, be fixed, detachable or integral; mechanical or electrical; direct, indirect through an intermediary, internal to two elements, or an interaction between two elements. The specific meaning of these terms in the present invention can be understood by those of ordinary skill in the art according to the specific circumstances.
As shown in FIG. 1 to FIG. 7, the method for detecting small apron targets by fusing depth information specifically comprises the following steps:
S1, measuring in advance the sizes of small targets such as wheel chocks and reflective cones;
S2, capturing left and right images with a binocular camera and obtaining a disparity map of the current scene through a binocular stereo matching algorithm;
S3, computing the corresponding U-V disparity maps from the disparity map and locating small targets such as wheel chocks and reflective cones on the apron in the U-V disparity maps;
S4, extracting target images from the corresponding RGB image according to the disparity-map small-target detection results;
S5, feeding the target images into a trained image classification model to obtain target class results.
In this embodiment, building on step S2, a dense disparity map of the apron area is obtained through binocular stereoscopic vision, as sketched below.
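The patent does not name a specific stereo matching algorithm; as one concrete possibility, the following minimal sketch uses OpenCV's semi-global block matching on a rectified pair. The file names and parameter values are illustrative, not figures from the patent.

```python
import cv2
import numpy as np

# Rectified left/right apron images; the file names are placeholders.
left = cv2.imread("apron_left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("apron_right.png", cv2.IMREAD_GRAYSCALE)

# Semi-global block matching (SGBM); parameter values are common
# defaults, not values taken from the patent.
block = 5
matcher = cv2.StereoSGBM_create(
    minDisparity=0,
    numDisparities=128,       # must be divisible by 16
    blockSize=block,
    P1=8 * block * block,     # smoothness penalties
    P2=32 * block * block,
    uniquenessRatio=10,
    speckleWindowSize=100,
    speckleRange=2,
)

# OpenCV returns fixed-point disparities scaled by 16.
disparity = matcher.compute(left, right).astype(np.float32) / 16.0
```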
In this embodiment, in step S3 the position of the apron pavement is extracted from the V-disparity map; combining the extracted pavement information, all targets on the apron are then acquired in the U-disparity map. These targets include small targets such as wheel chocks, reflective cones and ground staff, and large targets such as aircraft, special vehicles and boarding bridges. A sketch of the U-V disparity construction follows.
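A minimal sketch of the U-V disparity construction and road-line extraction, assuming the dense `disparity` array from the previous sketch; the bin count and thresholds are illustrative:

```python
import cv2
import numpy as np

MAX_D = 128  # number of disparity bins; matches numDisparities above

def uv_disparity(disp):
    """Histogram the disparity map along rows (V-disparity) and
    columns (U-disparity); invalid (<= 0) pixels are ignored."""
    d = np.clip(disp, 0, MAX_D - 1).astype(np.int32)
    valid = disp > 0
    h, w = d.shape
    v_disp = np.zeros((h, MAX_D), np.int32)
    u_disp = np.zeros((MAX_D, w), np.int32)
    for v in range(h):
        v_disp[v] = np.bincount(d[v][valid[v]], minlength=MAX_D)
    for u in range(w):
        u_disp[:, u] = np.bincount(d[valid[:, u], u], minlength=MAX_D)
    return u_disp, v_disp

def fit_road_line(v_disp, min_count=50):
    """Least-squares fit of the pavement profile d(v) = a*v + b from the
    dominant disparity in each V-disparity row."""
    rows, disps = [], []
    for v in range(v_disp.shape[0]):
        dmax = int(v_disp[v].argmax())
        if dmax > 0 and v_disp[v, dmax] >= min_count:
            rows.append(v)
            disps.append(dmax)
    a, b = np.polyfit(rows, disps, 1)
    return a, b

u_disp, v_disp = uv_disparity(disparity)
# The fitted road line d(v) = a*v + b can suppress pavement pixels
# before detection.
a, b = fit_road_line(v_disp)

# Objects show up as bright horizontal segments in the U-disparity map;
# a simple detector thresholds it and takes connected components. Each
# component gives an object's column span and disparity; its image-row
# extent can then be recovered from the disparity map itself.
mask = (np.clip(u_disp, 0, 255).astype(np.uint8) > 20).astype(np.uint8) * 255
n, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
```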
As shown in FIG. 4 and FIG. 5, in this embodiment, in step S4, according to the target positions detected in the U-V disparity maps and the preset small-target size standard, all targets are filtered to keep those matching the preset small-target sizes; the corresponding regions are cropped from the RGB image and the small-target crops are saved, as in the sketch below.
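A sketch of the size filter and RGB cropping, assuming image-space boxes (x, y, w, h) recovered for each detected target. With baseline B and disparity d, a box of w_px pixels spans roughly W = w_px · B / d metres (the focal length cancels out). The baseline value, size table and tolerance below are placeholders, not figures from the patent:

```python
import numpy as np

# Pre-measured physical sizes (width, height) in metres; placeholder values.
TARGET_SIZES = {"wheel_chock": (0.40, 0.25), "reflective_cone": (0.30, 0.70)}
BASELINE = 0.30  # stereo baseline in metres; assumed calibration value
TOL = 0.35       # relative size tolerance

def metric_size(box, disp):
    """Pixel box (x, y, w, h) -> metric (width, height) via the median
    disparity inside the box: W = w_px * B / d, H = h_px * B / d."""
    x, y, w, h = box
    patch = disp[y:y + h, x:x + w]
    if not np.any(patch > 0):
        return None
    d = np.median(patch[patch > 0])
    return w * BASELINE / d, h * BASELINE / d

def filter_and_crop(boxes, disp, rgb):
    """Keep boxes whose metric size matches a pre-measured small target
    and return the corresponding RGB crops."""
    crops = []
    for box in boxes:
        size = metric_size(box, disp)
        if size is None:
            continue
        W, H = size
        for name, (tw, th) in TARGET_SIZES.items():
            if abs(W - tw) <= TOL * tw and abs(H - th) <= TOL * th:
                x, y, w, h = box
                crops.append((name, rgb[y:y + h, x:x + w].copy()))
                break
    return crops
```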
In step S5, the small-target crops are fed into the corresponding image classification network, which outputs target class information to give the final small-target detection result.
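The patent only calls for a "simple classifier network" and does not fix an architecture; a fine-tuned ResNet-18 is one plausible stand-in, not the network the patent specifies. The class list and checkpoint path below are assumptions:

```python
import torch
import torch.nn as nn
from torchvision import models, transforms

CLASSES = ["wheel_chock", "reflective_cone", "other"]  # illustrative labels

# ResNet-18 with a replaced head; the checkpoint path is hypothetical.
model = models.resnet18(weights=None)
model.fc = nn.Linear(model.fc.in_features, len(CLASSES))
model.load_state_dict(torch.load("apron_small_target_cls.pt"))
model.eval()

preprocess = transforms.Compose([
    transforms.ToPILImage(),
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

def classify(crop_rgb):
    """Return the predicted class name for one RGB crop (H x W x 3 uint8)."""
    x = preprocess(crop_rgb).unsqueeze(0)
    with torch.no_grad():
        logits = model(x)
    return CLASSES[int(logits.argmax(dim=1))]
```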
The foregoing merely illustrates specific embodiments of the present invention, and the scope of the invention is not limited thereto; any change or substitution that involves no inventive effort shall be construed as falling within the scope of the invention. Therefore, the protection scope of the present invention shall be subject to the scope defined by the claims.

Claims (5)

1. A method for detecting small apron targets by fusing depth information, characterized in that the method specifically comprises the following steps:
S1, measuring in advance the sizes of small targets such as wheel chocks and reflective cones;
S2, capturing left and right images with a binocular camera and obtaining a disparity map of the current scene through a binocular stereo matching algorithm;
S3, computing the corresponding U-V disparity maps from the disparity map and locating small targets such as wheel chocks and reflective cones on the apron in the U-V disparity maps;
S4, extracting target images from the corresponding RGB image according to the disparity-map small-target detection results;
S5, feeding the target images into a trained image classification model to obtain target class results.
2. The method for detecting small apron targets by fusing depth information according to claim 1, characterized in that, building on step S2, a dense disparity map of the apron area is acquired through binocular stereoscopic vision.
3. The method for detecting small apron targets by fusing depth information according to claim 1, characterized in that in step S3 the position of the apron pavement is extracted from the V-disparity map, and all targets on the apron are detected in the U-disparity map.
4. The method for detecting small apron targets by fusing depth information according to claim 1, characterized in that in step S4, according to the target positions detected in the U-V disparity maps and the preset small-target size standard, all targets are filtered to keep those matching the preset small-target sizes; the corresponding regions are cropped from the RGB image and the small-target crops are saved.
5. The method for detecting small apron targets by fusing depth information according to claim 1, characterized in that in step S5 the small-target crops are fed into the corresponding image classification network, which outputs target class information to give the final small-target detection result.

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination