CN111027522B - Bird detection positioning system based on deep learning - Google Patents

Bird detection positioning system based on deep learning

Info

Publication number
CN111027522B
Authority
CN
China
Prior art keywords
bird
module
image
pixel coordinates
birds
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911397378.9A
Other languages
Chinese (zh)
Other versions
CN111027522A (en)
Inventor
袁洪跃
侯学渊
罗元泰
殷姣
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
WOOTION Tech CO Ltd
Original Assignee
WOOTION Tech CO Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by WOOTION Tech CO Ltd filed Critical WOOTION Tech CO Ltd
Priority to CN201911397378.9A priority Critical patent/CN111027522B/en
Publication of CN111027522A publication Critical patent/CN111027522A/en
Application granted granted Critical
Publication of CN111027522B publication Critical patent/CN111027522B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • G06V20/13Satellite images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/80Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/25Determination of region of interest [ROI] or a volume of interest [VOI]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10004Still image; Photographic image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20021Dividing image into blocks, subimages or windows
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20112Image segmentation details
    • G06T2207/20132Image cropping
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07Target detection

Abstract

The application relates to the technical field of bird detection, and in particular to a bird detection and positioning system based on deep learning. The system comprises a bird detection subsystem with a target detection module: the target detection module inputs an acquired image into a training model for preprocessing and outputs the bird pixel coordinates of the birds in the acquired image, and the bird detection subsystem sends the acquired image containing birds, together with their bird pixel coordinates, to a positioning subsystem. The positioning subsystem obtains the magnification and pixel information of the acquired image, retrieves a pre-stored conversion coefficient according to the magnification (the conversion coefficient is the deflection angle corresponding to one pixel at each magnification), obtains the center pixel coordinate of the center point of the acquired image from the pixel information, pre-stores the actual coordinate of the image center point in the actual environment, and converts the position of the birds in the acquired image to their position in the actual environment according to these parameters. The application improves the accuracy of detecting birds of smaller size.

Description

Bird detection positioning system based on deep learning
Technical Field
The application relates to the technical field of bird detection, in particular to a bird detection positioning system based on deep learning.
Background
Bird strikes are collisions between birds and man-made aircraft in flight, high-speed trains, automobiles and the like, causing damage to the aircraft or vehicle body. Take-off and landing are the stages most prone to bird strikes: when an aircraft takes off or lands, bird strike events readily occur in airports and the nearby airspace. An aircraft or vehicle that has suffered a bird strike is prone to damage to internal parts, which seriously affects public transportation safety. It is therefore very important to detect and locate birds in such places and then drive them away.
Existing bird detection and positioning methods work on images obtained by infrared sensors or laser radar. However, birds differ in body size and therefore in their size on the image; when a bird is small, it occupies very few pixels in the image, making it very difficult to detect and locate from the image.
Disclosure of Invention
The application aims to provide a bird detection and positioning system based on deep learning, so as to solve the problem that small birds are difficult to detect and locate.
The bird detection and positioning system based on deep learning in this scheme includes:
a bird detection subsystem comprising a target detection module with a training model, wherein the target detection module inputs an acquired image into the training model for preprocessing and outputs the bird pixel coordinates of the birds in the acquired image, and the bird detection subsystem sends the acquired image containing birds and the bird pixel coordinates of those birds to the positioning subsystem;
a positioning subsystem, which obtains the magnification and pixel information of the acquired image and retrieves a pre-stored conversion coefficient according to the magnification, the conversion coefficient being the deflection angle corresponding to one pixel at each magnification; the positioning subsystem obtains the center pixel coordinate of the center point of the acquired image from the pixel information, pre-stores the actual coordinate of the image center point in the actual environment, and converts the bird from its position in the acquired image to its position in the actual environment according to the bird pixel coordinates, the center pixel coordinate, the conversion coefficient and the actual coordinate.
The beneficial effect of this scheme is:
the target detection module of the bird detection subsystem is used for identifying the pixel coordinates of birds in the acquired image, then the pixel coordinates, the central pixel coordinates, the conversion coefficient and the actual coordinates of the birds in the acquired image are converted into positions in the actual environment, the birds are identified after the acquired image is acquired, the pixel coordinate system and the actual coordinate system are established, the positions of the birds in the acquired image are converted into the actual positions, the birds are positioned after being detected, the bird position positioning is more timely, meanwhile, the target detection module is used for acquiring the bird pixel coordinates, the birds with smaller volumes are positioned in a detection mode, and the accuracy of bird positioning is improved.
Further, the target detection module comprises a preprocessing unit, and the preprocessing unit flips and adds noise to the preset images when the training model is being learned.
The beneficial effects are that: when the training model is learned, the preset images are preprocessed by flipping them and adding noise, so that they more closely resemble images acquired in practice, which improves the effectiveness of the subsequent processing of actually acquired images.
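As an illustration only (not part of the claimed system), the following is a minimal sketch of the flip-and-noise preprocessing described above, assuming images are handled as NumPy uint8 arrays; the function name, the noise level and the use of Gaussian noise are assumptions, since the patent does not specify them.

```python
import numpy as np

def augment(image: np.ndarray, noise_sigma: float = 8.0, seed: int = 0):
    """Return flipped and noise-added variants of one training image (H x W x C, uint8)."""
    rng = np.random.default_rng(seed)
    flipped_h = image[:, ::-1]                              # horizontal flip
    flipped_v = image[::-1, :]                              # vertical flip
    noise = rng.normal(0.0, noise_sigma, size=image.shape)  # additive Gaussian noise
    noisy = np.clip(image.astype(np.float32) + noise, 0, 255).astype(np.uint8)
    return [flipped_h, flipped_v, noisy]
```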
Further, the bird detection subsystem comprises a processing module, and the target detection module further comprises a cropping unit and a coordinate unit. During training of the training model, the cropping unit crops a preset image into a plurality of tiles and the coordinate unit processes the tiles in parallel to obtain preset pixel coordinates; during detection, the cropping unit crops an acquired image into a plurality of square sub-images and sends them to the coordinate unit, the coordinate unit outputs the bird pixel coordinates of the birds in the acquired image after the parallel processing is completed, and the processing module integrates the pixel coordinates and sends them to the positioning subsystem.
The beneficial effects are that: the preset images are cropped when the training model is learned, and the acquired image is likewise cropped and processed in parallel by the coordinate unit, which outputs the pixel coordinates after the parallel processing. This makes it convenient to locate even small birds in the acquired image, and the processing is fast, which improves the real-time performance of bird positioning.
Further, the positioning subsystem comprises a conversion module and a coefficient module. The coefficient module pre-stores the conversion coefficient corresponding to each magnification, the center pixel coordinate and the actual coordinate; the processing module sends the detected bird pixel coordinates to the conversion module, and the conversion module converts the position of the bird in the acquired image to its position in the actual environment according to the bird pixel coordinates, the center pixel coordinate and the conversion coefficient.
The beneficial effects are that: the conversion coefficient is stored in the coefficient module, and the conversion module converts the bird's pixel coordinates in the acquired image into the actual environment, which reduces the amount of computation involving the conversion coefficient and allows birds to be positioned in real time.
Further, adjacent sub-images overlap by a preset range.
The beneficial effects are that: a certain amount of overlap is preset when the acquired image is cropped, which reduces the feature loss caused by cropping.
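For illustration only, the following sketch shows one way square tiles with a preset overlap could be produced; the 10% overlap and the idea of recording each tile's offset follow the embodiment described later, while the tile size and the function names are assumptions.

```python
import numpy as np

def tile_starts(length: int, tile: int, stride: int):
    """Start positions along one axis, making sure the far edge is covered."""
    if length <= tile:
        return [0]
    starts = list(range(0, length - tile + 1, stride))
    if starts[-1] != length - tile:
        starts.append(length - tile)
    return starts

def crop_into_tiles(image: np.ndarray, tile: int = 416, overlap: float = 0.10):
    """Crop an H x W x C image into overlapping square tiles.

    Returns (tile_image, x_offset, y_offset) triples so that detections made
    in a tile can be mapped back to full-image pixel coordinates.
    """
    stride = max(1, int(tile * (1.0 - overlap)))
    h, w = image.shape[:2]
    return [(image[y:y + tile, x:x + tile], x, y)
            for y in tile_starts(h, tile, stride)
            for x in tile_starts(w, tile, stride)]
```

Recording the (x, y) offset of each tile is what makes it possible to map per-tile detections back to pixel coordinates in the full acquired image.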
Further, the conversion module subtracts the center pixel coordinates from the bird pixel coordinates and multiplies the difference by the conversion coefficient to obtain a movement variable.
The beneficial effects are that: the difference between the bird pixel coordinates of the acquired image and the center pixel coordinates gives the movement variable; with the center pixel coordinates fixed, this improves the accuracy of bird positioning.
Further, the movement variable is a position variable of the bird relative to the center point of the image, and the movement variable comprises a horizontal angle and a vertical angle of the bird in the actual environment.
The beneficial effects are that: the movement variable is expressed as angles in the horizontal and vertical directions, and determining the bird's position in these two dimensions makes it more accurate.
Further, the positioning subsystem comprises a positioning module, the conversion module sends the movement variable and the actual coordinates to the positioning module, and the positioning module adds the movement variable and the actual coordinates to obtain the position of the bird in the actual environment.
The beneficial effects are that: the positioning module determines the bird's actual position from the movement variable and the actual coordinates, making it convenient to locate the bird.
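Taken together, the relations in the preceding paragraphs can be written compactly as follows (a sketch only; the symbols p_bird, p_center, C and A are chosen here for illustration and are not notation from the patent): p_bird and p_center are the bird and center pixel coordinates, C is the conversion coefficient (deflection angle per pixel at the current magnification), and A is the pre-stored actual coordinate of the image center.

```latex
\Delta = \left(p_{\mathrm{bird}} - p_{\mathrm{center}}\right) C
\qquad\text{(movement variable: horizontal and vertical angles)}
```
```latex
P_{\mathrm{bird}} = A + \Delta
\qquad\text{(position of the bird in the actual environment)}
```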
Drawings
FIG. 1 is a logic block diagram of an embodiment of a bird detection positioning system based on deep learning of the present application;
FIG. 2 is a flow chart of an implementation process of an embodiment of the bird detection positioning system based on deep learning of the present application.
Detailed Description
Further details are provided below with reference to the specific embodiments.
The bird detection and positioning system based on deep learning comprises an acquisition subsystem, a bird detection subsystem and a positioning subsystem. The acquisition subsystem collects image data for bird detection and can use an existing camera for this purpose; the image data it collects is sent to the bird detection subsystem, and the bird detection subsystem sends the processed data to the positioning subsystem.
The bird detection subsystem comprises a target detection module with a training model and a processing module, and the target detection module comprises a preprocessing unit, a cropping unit and a coordinate unit. When the training model is learned, the preprocessing unit flips and adds noise to the preset images; flipping and noise addition can be performed with existing algorithms. During training of the training model, the cropping unit crops each preset image into a plurality of tiles, keeping an overlap of a preset range between adjacent sub-images when cropping; the preset range may be set to 10%. The coordinate unit processes the tiles in parallel to obtain preset pixel coordinates and outputs the processing results for the tiles, and the processing module fuses these output results with an NMS algorithm. This operation is repeated, and the output is optimal when it is closest to the pixel coordinates of the birds in the preset image, yielding the training model.
After the training model is obtained, an actually acquired image is input into the training model for processing. The cropping unit crops the acquired image into a plurality of square sub-images and sends them to the coordinate unit; the coordinate unit processes the sub-images in parallel to obtain output results, again keeping an overlap of a preset range (which may be set to 10%) between adjacent sub-images when cropping. The processing module fuses the output results with an NMS algorithm to obtain the categories and pixel coordinates and sends the pixel coordinates to the positioning subsystem; the processing module may use an existing processor such as an Intel Core i7-620LM. For example, with the upper-left corner of the acquired image taken as the origin and the initial pixel coordinate of that origin recorded as (0, 0), the processing module records the bird pixel coordinates in the acquired image by identifying the pixels; the bird pixel coordinates are based on the bird's center point, for example (480, 270).
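A minimal sketch, for illustration, of how a detection box found inside a sub-image could be turned into the bird pixel coordinate described above (box center, full-image origin at the top-left corner); the (x1, y1, x2, y2) box format and the function name are assumptions, not the patent's code.

```python
def box_center_in_image(box, x_offset: int, y_offset: int):
    """Map an (x1, y1, x2, y2) detection box found inside a sub-image to the
    bird's center pixel coordinate in the full acquired image (origin (0, 0) top-left)."""
    x1, y1, x2, y2 = box
    cx = x_offset + (x1 + x2) / 2.0
    cy = y_offset + (y1 + y2) / 2.0
    return (cx, cy)   # e.g. would yield (480, 270) for the example above
```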
The processing module sends the magnification and pixel information of the acquired image to the positioning subsystem. The positioning subsystem comprises a conversion module, a coefficient module and a positioning module; the coefficient module pre-stores the conversion coefficient corresponding to each magnification, the center pixel coordinate, and the actual coordinate of the image center point in the actual environment. When the acquired image is sent, the conversion coefficient corresponding to its magnification is looked up and passed, together with the other parameters in the coefficient module, to the conversion module. The conversion module converts the position of the birds in the acquired image to their position in the actual environment according to the bird pixel coordinates, the center pixel coordinate and the conversion coefficient, where the conversion coefficient is the deflection angle corresponding to one pixel at each magnification. For example, for a 1920 x 1080 image the center pixel coordinate is (960, 540); denoting the conversion coefficient as C, subtracting the center pixel coordinate from the bird pixel coordinate gives (-480, -270), so the movement variable is (-480 x C, -270 x C). The movement variable is the position variable of the bird relative to the image center point and comprises the horizontal and vertical angles of the bird in the actual environment. The conversion module sends the movement variable and the actual coordinate to the positioning module, and the positioning module adds them to obtain the position of the bird in the actual environment.
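Using the numbers from this example, a minimal Python sketch of the conversion is given below; the function and variable names are illustrative, and C stands for the calibrated conversion coefficient at the current magnification.

```python
def pixel_to_actual(bird_px, center_px, C, center_actual):
    """Convert a bird's pixel coordinate to horizontal/vertical angles in the
    actual environment, following the subtraction-and-scaling described above."""
    dx = bird_px[0] - center_px[0]           # e.g. 480 - 960 = -480
    dy = bird_px[1] - center_px[1]           # e.g. 270 - 540 = -270
    movement = (dx * C, dy * C)              # movement variable: (-480 * C, -270 * C)
    return (center_actual[0] + movement[0],  # horizontal angle of the bird
            center_actual[1] + movement[1])  # vertical (pitch) angle of the bird

# Example with the values from the description; (pan_center, tilt_center) stands for
# the pre-stored actual coordinates of the image center (placeholder names):
# pixel_to_actual((480, 270), (960, 540), C, (pan_center, tilt_center))
```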
As shown in fig. 2, the specific implementation process is as follows:
Before actual acquisition, the training model is learned first: the target detection module learns from preset images. The preprocessing unit flips and adds noise to the preset images so that they more closely resemble actually acquired images, which improves subsequent processing results. The cropping unit crops each preset image into a plurality of tiles, and the coordinate unit processes the tiles in parallel to obtain output results of pixel coordinates; the processing module fuses the output results to obtain the pixel coordinates of the birds in the preset image. This operation is repeated, and the output is optimal when it is closest to the pixel coordinates of the birds in the preset image, yielding the training model. The sub-images produced during preprocessing and cropping are equal in size to the training input size of the model, and adjacent tiles overlap by a preset range, so the feature loss that a resize operation would cause during model training is avoided.
In actual bird detection, taking bird detection in Ningxia as an example, birds within a range of 50 meters are detected. When image data for bird detection is collected through the acquisition subsystem, the TCP protocol is used for data transmission between the acquisition subsystem and the bird detection subsystem, with WiFi as the medium. The data received by the bird detection subsystem is the in-memory image data from the camera end: the processing module first puts the data into a byte array and then converts the byte array into a mat object, so that the image is obtained and decoded into image data. The video frame rate is 25 FPS; the panoramic part has two video channels (2 x 4096 x 1800), and the pod is an independent channel (1920).
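For illustration, a sketch of the byte-array-to-mat decoding step described above, assuming OpenCV in Python; the patent does not name a library, so cv2.imdecode is used here as one common way to turn in-memory image bytes into an image matrix.

```python
import numpy as np
import cv2

def bytes_to_mat(frame_bytes: bytes) -> np.ndarray:
    """Decode camera-side in-memory image data (received over TCP/WiFi)
    into an image matrix ("mat object")."""
    buffer = np.frombuffer(frame_bytes, dtype=np.uint8)  # put the data into a byte array
    image = cv2.imdecode(buffer, cv2.IMREAD_COLOR)       # decode to an H x W x 3 matrix
    if image is None:
        raise ValueError("received data could not be decoded as an image")
    return image
```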
After acquiring the image, the processing module has it cropped and preprocessed by the cropping unit. In this embodiment, the Darknet-53 network is pruned and optimized when the YOLOv3 algorithm is used for target detection, which improves the real-time performance of detection; bird targets are detected using multi-scale features, and logistic regression replaces softmax for object classification. The results detected by the neural network are screened, and processing then yields the pixel coordinates of the target detection boxes, i.e., the coordinate unit's output results of pixel coordinates. Because one bird is easily split across two sub-images during cropping, the processing module fuses the output results with an NMS algorithm to obtain unique pixel coordinates, converts the pixel coordinates into a horizontal angle and a pitch angle in the robot coordinate system (i.e., the actual environment coordinates), and sends the bird positioning result to a subsequent bird-repelling module via a ROS topic.
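Because one bird can appear in two overlapping sub-images, the per-tile outputs are fused with NMS. A minimal IoU-based NMS sketch is shown below for illustration; the 0.5 threshold and the box format are assumptions, not values from the patent.

```python
import numpy as np

def nms(boxes: np.ndarray, scores: np.ndarray, iou_threshold: float = 0.5):
    """Keep the highest-scoring box among overlapping detections.

    boxes: (N, 4) array of (x1, y1, x2, y2) in full-image pixel coordinates.
    Returns the indices of the boxes that survive suppression.
    """
    order = scores.argsort()[::-1]          # indices sorted by descending score
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        if order.size == 1:
            break
        rest = order[1:]
        # Intersection of box i with every remaining box.
        xx1 = np.maximum(boxes[i, 0], boxes[rest, 0])
        yy1 = np.maximum(boxes[i, 1], boxes[rest, 1])
        xx2 = np.minimum(boxes[i, 2], boxes[rest, 2])
        yy2 = np.minimum(boxes[i, 3], boxes[rest, 3])
        inter = np.maximum(0.0, xx2 - xx1) * np.maximum(0.0, yy2 - yy1)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = (boxes[rest, 2] - boxes[rest, 0]) * (boxes[rest, 3] - boxes[rest, 1])
        iou = inter / (area_i + area_r - inter)
        order = rest[iou <= iou_threshold]  # drop boxes overlapping box i too much
    return keep
```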
When detecting birds, each single image to be detected is cropped into multiple sub-images of the same size as the model input (removing the overlapping parts of these sub-images and splicing them back together would reproduce the original image). The processing module detects the sub-images in parallel as a batch, and finally all detection results are processed with the NMS algorithm, so the feature loss caused by resizing is avoided and real-time performance is improved.
In the bird detection subsystem, the model output includes a category (bird) and the bird pixel coordinates of each bird in the image, where the category can be a pre-stored type. To meet the bird-repelling requirement, the pixel coordinates need to be converted into robot coordinates (i.e., the actual environment coordinates where the acquisition subsystem is located).
When the magnification at which the image is captured is fixed, there is a fixed coefficient relationship between pixels and angles within the same camera image (i.e., how large a deflection angle one pixel represents, the conversion coefficient), and this conversion coefficient is obtained through camera calibration. The angle of the image center position in robot coordinates (the actual coordinates) is known, so the horizontal and pitch angles of each bird in robot coordinates can be calculated from its pixel coordinates in the image, thereby guiding bird repelling.
In this embodiment, the acquired image is cropped into a plurality of sub-images that are processed in parallel, which improves the real-time performance of bird detection; at the same time, the bird pixel coordinates are converted into the actual coordinate system according to the magnification, which improves the positioning accuracy of bird detection.
The foregoing is merely exemplary embodiments of the present application, and specific structures and features that are well known in the art are not described in detail herein. It should be noted that those skilled in the art can make modifications and improvements without departing from the structure of the present application; these do not affect the effect of implementing the application or the utility of the patent and should also be regarded as falling within its scope. The protection scope of the present application is subject to the content of the claims, and the specific embodiments and other descriptions in the specification can be used to interpret the content of the claims.

Claims (4)

1. A bird detection and positioning system based on deep learning, characterized by comprising:
a bird detection subsystem comprising a target detection module with a training model, wherein the target detection module inputs an acquired image into the training model for preprocessing and outputs the bird pixel coordinates of the birds in the acquired image, and the bird detection subsystem sends the acquired image containing birds and the bird pixel coordinates of those birds to the positioning subsystem;
a positioning subsystem, which obtains the magnification and pixel information of the acquired image and retrieves a pre-stored conversion coefficient according to the magnification, the conversion coefficient being the deflection angle corresponding to one pixel at each magnification; the positioning subsystem obtains the center pixel coordinate of the center point of the acquired image from the pixel information, pre-stores the actual coordinate of the image center point in the actual environment, and converts the position of the birds in the acquired image to their position in the actual environment according to the bird pixel coordinates, the center pixel coordinate, the conversion coefficient and the actual coordinate; the positioning subsystem comprises a conversion module and a coefficient module, the coefficient module pre-stores the conversion coefficient corresponding to each magnification, the center pixel coordinate and the actual coordinate, the processing module sends the detected bird pixel coordinates to the conversion module, and the conversion module converts the position of the bird in the acquired image to its position in the actual environment according to the bird pixel coordinates, the center pixel coordinate and the conversion coefficient; specifically, the conversion module subtracts the center pixel coordinate from the bird pixel coordinates and multiplies the difference by the conversion coefficient to obtain a movement variable; the positioning subsystem further comprises a positioning module, the conversion module sends the movement variable and the actual coordinate to the positioning module, and the positioning module adds the movement variable and the actual coordinate to obtain the position of the bird in the actual environment;
the target detection module further comprises a cropping unit and a coordinate unit, the cropping unit crops a preset image into a plurality of tiles during training of the training model, the coordinate unit processes the tiles in parallel to obtain preset pixel coordinates, the cropping unit crops the acquired image into a plurality of square sub-images and sends them to the coordinate unit, the coordinate unit outputs the bird pixel coordinates of the birds in the acquired image after the parallel processing is completed, and the processing module integrates the pixel coordinates and sends them to the positioning subsystem.
2. The deep learning based bird detection positioning system of claim 1, wherein: the target detection module comprises a preprocessing unit, and the preprocessing unit flips and adds noise to the preset images when the training model is learned.
3. The deep learning based bird detection positioning system of claim 1, wherein: adjacent sub-images overlap by a preset range.
4. The deep learning based bird detection positioning system of claim 1, wherein: the movement variable is a position variable of the bird relative to the center point of the image, and the movement variable comprises a horizontal angle and a vertical angle of the bird in the actual environment.
CN201911397378.9A 2019-12-30 2019-12-30 Bird detection positioning system based on deep learning Active CN111027522B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911397378.9A CN111027522B (en) 2019-12-30 2019-12-30 Bird detection positioning system based on deep learning


Publications (2)

Publication Number Publication Date
CN111027522A CN111027522A (en) 2020-04-17
CN111027522B true CN111027522B (en) 2023-09-01

Family

ID=70199876

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911397378.9A Active CN111027522B (en) 2019-12-30 2019-12-30 Bird detection positioning system based on deep learning

Country Status (1)

Country Link
CN (1) CN111027522B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112598738B (en) * 2020-12-25 2024-03-19 南京大学 Character positioning method based on deep learning
CN115278525B (en) * 2022-08-09 2023-03-28 成都航空职业技术学院 Method and system for simplifying cluster moving object continuous space-time positioning data

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5108605B2 (en) * 2008-04-23 2012-12-26 三洋電機株式会社 Driving support system and vehicle

Patent Citations (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004030484A (en) * 2002-06-28 2004-01-29 Mitsubishi Heavy Ind Ltd Traffic information providing system
CN101697007A (en) * 2008-11-28 2010-04-21 北京航空航天大学 Radar image-based flyer target identifying and tracking method
CN102150653A (en) * 2011-03-11 2011-08-17 湖南继善高科技有限公司 Movable airfield avian detection and directional anti-bird device
CN103260008A (en) * 2012-11-21 2013-08-21 上海申瑞电网控制系统有限公司 Projection converting method from image position to actual position
CN103686074A (en) * 2013-11-20 2014-03-26 南京熊猫电子股份有限公司 Method for positioning mobile object in video monitoring
KR20160002510A (en) * 2014-06-30 2016-01-08 건국대학교 산학협력단 Coordinate Calculation Acquisition Device using Stereo Image and Method Thereof
CN105989588A (en) * 2015-02-05 2016-10-05 上海隶首信息技术有限公司 Irregular-shaped material cutting image correction method and system
CN105807332A (en) * 2016-04-28 2016-07-27 长春奥普光电技术股份有限公司 Bird detection system for airport
CN106373159A (en) * 2016-08-30 2017-02-01 中国科学院长春光学精密机械与物理研究所 Simplified unmanned aerial vehicle multi-target location method
CN106599897A (en) * 2016-12-09 2017-04-26 广州供电局有限公司 Machine vision-based pointer type meter reading recognition method and device
CN106960456A (en) * 2017-03-28 2017-07-18 长沙全度影像科技有限公司 A kind of method that fisheye camera calibration algorithm is evaluated
CN107273799A (en) * 2017-05-11 2017-10-20 上海斐讯数据通信技术有限公司 A kind of indoor orientation method and alignment system
CN108450455A (en) * 2018-03-12 2018-08-28 武林霄 Bird-repeller system based on communication positioning
CN108710126A (en) * 2018-03-14 2018-10-26 上海鹰觉科技有限公司 Automation detection expulsion goal approach and its system
CN110276363A (en) * 2018-03-15 2019-09-24 北京大学深圳研究生院 A kind of birds small target detecting method based on density map estimation
CN109018591A (en) * 2018-08-09 2018-12-18 沈阳建筑大学 A kind of automatic labeling localization method based on computer vision
CN109754434A (en) * 2018-12-27 2019-05-14 歌尔科技有限公司 Camera calibration method, apparatus, user equipment and storage medium
CN109788208A (en) * 2019-01-30 2019-05-21 华通科技有限公司 Target identification method and system based on multiple groups focal length images source
CN109799760A (en) * 2019-01-30 2019-05-24 华通科技有限公司 The bird-repellent robots control system and control method of power industry
CN110054089A (en) * 2019-04-29 2019-07-26 北京航天自动控制研究所 A kind of tyre crane machine vision system for automatically correcting and method for correcting error
CN110544302A (en) * 2019-09-06 2019-12-06 广东工业大学 Human body action reconstruction system and method based on multi-view vision and action training system
CN110617813A (en) * 2019-09-26 2019-12-27 中国科学院电子学研究所 Monocular visual information and IMU (inertial measurement Unit) information fused scale estimation system and method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on image processing algorithms for projectile coordinates on a vertical target; 张先叶, 高俊钗, 李亚强; Science Technology and Engineering, No. 11; full text *

Also Published As

Publication number Publication date
CN111027522A (en) 2020-04-17

Similar Documents

Publication Publication Date Title
US20210279444A1 (en) Systems and methods for depth map sampling
US11841434B2 (en) Annotation cross-labeling for autonomous control systems
CN110703800A (en) Unmanned aerial vehicle-based intelligent identification method and system for electric power facilities
WO2020146491A3 (en) Using light detection and ranging (lidar) to train camera and imaging radar deep learning networks
CN111027522B (en) Bird detection positioning system based on deep learning
US10007836B2 (en) Bird detection device, bird detection system, bird detection method, and program extracting a difference between the corrected images
CN110059558A (en) A kind of orchard barrier real-time detection method based on improvement SSD network
CN110570454B (en) Method and device for detecting foreign matter invasion
CN111985365A (en) Straw burning monitoring method and system based on target detection technology
CN112101088A (en) Automatic unmanned aerial vehicle power inspection method, device and system
CN110956137A (en) Point cloud data target detection method, system and medium
CN112528979B (en) Transformer substation inspection robot obstacle distinguishing method and system
CN111158013A (en) Multi-algorithm fusion bird detection system
CN109143167B (en) Obstacle information acquisition device and method
CN111652067A (en) Unmanned aerial vehicle identification method based on image detection
CN113160209A (en) Target marking method and target identification method for building facade damage detection
CN113793385A (en) Method and device for positioning fish head and fish tail
CN109087515A (en) Unmanned plane expressway road conditions cruise method and system
CN115912183B (en) Ecological measure inspection method and system for high-voltage transmission line and readable storage medium
CN109799844B (en) Dynamic target tracking system and method for pan-tilt camera
CN108563986B (en) Method and system for judging posture of telegraph pole in jolt area based on long-distance shooting image
CN103996187B (en) To-ground moving target photoelectric detection system, and data processing method and image processing method thereof
US11557133B1 (en) Automatic license plate recognition
CN112493228B (en) Laser bird repelling method and system based on three-dimensional information estimation
CN206649533U (en) Vehicle-mounted pattern recognition device and vehicle

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant