CN112767433A - Automatic deviation rectifying, segmenting and identifying method for image of inspection robot - Google Patents

Automatic deviation rectifying, segmenting and identifying method for image of inspection robot

Info

Publication number
CN112767433A
Authority
CN
China
Prior art keywords
original image
image
neural network
network model
inspection robot
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110273210.8A
Other languages
Chinese (zh)
Inventor
谢超善
王东芝
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Xuanma Zhineng Technology Co ltd
Original Assignee
Beijing Xuanma Zhineng Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Xuanma Zhineng Technology Co ltd
Priority to CN202110273210.8A
Publication of CN112767433A
Legal status: Pending

Classifications

    • G06T7/194 Segmentation; Edge detection involving foreground-background segmentation
    • G06T7/11 Region-based segmentation
    • G06N3/045 Combinations of networks
    • G06N3/08 Learning methods
    • G06T2207/10048 Infrared image
    • G06T2207/20016 Hierarchical, coarse-to-fine, multiscale or multiresolution image processing; Pyramid transform
    • G06T2207/20081 Training; Learning

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides an automatic deviation rectifying, segmenting and identifying method for an image of an inspection robot. The method obtains a binarized image of the original image to be rectified according to a given threshold value, searches the binarized image according to a given width-to-height ratio to find the minimum point of the original image at the upper left corner and the minimum point at the upper right corner, calculates the inclination angle of the original image from these points, and rotationally corrects the original image to be rectified to the horizontal position according to that angle. With the invention, the camera only needs to photograph the whole object once; the photograph is then cut according to a rule into the pictures to be archived, and the pictures are identified one by one. The method greatly improves shooting efficiency, unifies the shooting angle and lighting of the pictures, improves shooting quality, and ultimately improves the image recognition rate.

Description

Automatic deviation rectifying, segmenting and identifying method for image of inspection robot
Technical Field
The invention mainly relates to the technical field of robots, in particular to an automatic deviation rectifying, segmenting and identifying method for an image of an inspection robot.
Background
The inspection robot performs inspection work using a mobile robot as a carrier, with a visible-light camera, an infrared thermal imager and other detection instruments as its payload system.
Patent document CN201910345507.3 provides an inspection robot, an inspection robot system and an inspection method for the inspection robot. The product includes: a driving unit for driving the inspection robot to move in a monitoring area; a monitoring unit for acquiring at least one kind of environmental data within the inspection range of the inspection robot; an image acquisition unit for acquiring image information within the inspection range; and a control unit comprising a memory and a processor, the memory storing alarm thresholds for the environmental data and the processor being electrically connected to the driving unit, the monitoring unit and the image acquisition unit respectively. That invention also provides a corresponding inspection robot system and an inspection method for the inspection robot. The product can not only quickly find abnormal points in the monitoring area, but also monitor the environment at those abnormal points.
During automatic inspection, the traditional inspection robot photographs objects with a visible-light camera and transmits the photographs back to a background machine, where they are identified by an image recognition algorithm in the background system. The photographs are often inclined and cannot be segmented automatically, each part of the photographed object has to be shot separately, the shooting workload is large, the working efficiency is low, and the object frequently cannot be aligned.
Disclosure of Invention
The invention mainly provides an automatic deviation rectifying, segmenting and identifying method for an image of an inspection robot, which is used for solving the technical problems described in the background art.
The technical scheme adopted by the invention for solving the technical problems is as follows:
an automatic deviation rectifying method for an image of an inspection robot comprises the following steps:
S1, obtaining a binarized image of the original image to be rectified according to a given threshold value;
S2, searching the binarized image according to a given width-to-height ratio, finding the minimum point of the original image at the upper left corner and the minimum point at the upper right corner, and calculating the inclination angle of the original image;
S3, rotationally correcting the original image to be rectified to the horizontal position according to the inclination angle of the original image.
Further, searching the binarized image according to a given width-to-height ratio, finding the minimum point of the original image at the upper left corner and the minimum point at the upper right corner, and calculating the inclination angle of the original image comprises the following steps:
S21, obtaining an accurate object boundary of the original image to be rectified according to the energy function of OpenCV's Active Contour method, the energy function being defined on a closed contour curve;
S22, generating a minimum bounding rectangle on the boundary-redefined original image according to OpenCV's minAreaRect function;
S23, calculating the inclination angle of the minimum bounding rectangle generated in S22 and taking it as the inclination angle of the original image.
Further, obtaining the binarized image of the original image to be rectified according to a given threshold value comprises:
determining a threshold for the original image to be rectified according to OpenCV's adaptiveThreshold function and obtaining the binarized image.
Based on the above automatic deviation rectifying method for the inspection robot image, an automatic segmentation method for the inspection robot image is further provided, comprising the following steps:
S1, acquiring an image subjected to deviation rectification as the original image to be segmented;
S2, obtaining a weighted undirected graph of the original image to be segmented, the nodes of the weighted undirected graph being obtained from the pixels of the original image to be segmented;
S3, acquiring a foreground mark and a background mark of the weighted undirected graph according to the NormalizedCut method;
S4, correcting the original image to be segmented according to the foreground mark and the background mark to obtain the segmented image.
Further, acquiring the foreground mark and the background mark of the weighted undirected graph according to the NormalizedCut method comprises:
generating a label map according to the foreground mark and the background mark.
Based on the above automatic deviation rectifying method and automatic segmentation method for the inspection robot image, an automatic identification method for the inspection robot image is further provided, comprising the following steps:
S1, acquiring a segmented image as the original image to be identified, together with the infrared image corresponding to the original image to be identified;
S2, identifying a corresponding target from the infrared image according to a pre-trained deep target recognition neural network model, and determining the corresponding position of that target in the original image to be identified;
S3, determining the position of the target in the original image to be identified according to the trained deep target recognition neural network model.
Further, identifying a corresponding target from the infrared image according to a pre-trained deep target recognition neural network model and determining the corresponding position of that target in the original image to be identified comprises:
training an untrained neural network model on a server data training set to obtain the trained neural network model, wherein the direct connection layer of the neural network model is replaced by a convolutional network.
Further, identifying a corresponding target from the infrared image according to a pre-trained deep target recognition neural network model and determining the corresponding position of that target in the original image to be identified comprises:
training an untrained neural network model on a server data training set to obtain the trained neural network model, wherein the direct connection layer of the neural network model is replaced by a dilated (atrous) convolution network.
Further, identifying a corresponding target from the infrared image according to a pre-trained deep target recognition neural network model and determining the corresponding position of that target in the original image to be identified comprises:
training an untrained neural network model on a server data training set to obtain the trained neural network model, wherein the direct connection layer of the neural network model is replaced by a global pyramid pooling network.
Compared with the prior art, the invention has the beneficial effects that:
the invention ensures that the camera only needs to take a picture of the whole object once, then cuts the picture according to the rule, cuts the picture into the pictures to be filed, and then identifies the pictures one by one. The method can greatly improve the shooting efficiency, unify the shooting angle and light of the picture, improve the shooting quality and finally improve the image recognition rate.
The present invention will be explained in detail below with reference to specific examples.
Detailed Description
To facilitate an understanding of the invention, the present invention will be described more fully hereinafter with reference to examples. The invention may, however, be embodied in different forms and is not limited to the examples described herein; these examples are provided so that the disclosure will be thorough and complete.
It will be understood that when an element is referred to as being "secured to" another element, it can be directly on the other element or intervening elements may be present; when an element is referred to as being "connected" to another element, it can be directly connected to the other element or intervening elements may be present. The terms "vertical", "horizontal", "left", "right" and the like are used herein for descriptive purposes only.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. The terminology used in this specification is for the purpose of describing particular embodiments only and is not intended to limit the invention. The term "and/or" as used herein includes any and all combinations of one or more of the associated listed items.
In a preferred embodiment of the present invention, an automatic deviation rectifying method for an image of an inspection robot includes:
S1, obtaining a binarized image of the original image to be rectified according to a given threshold value;
S2, searching the binarized image according to a given width-to-height ratio, finding the minimum point of the original image at the upper left corner and the minimum point at the upper right corner, and calculating the inclination angle of the original image;
S3, rotationally correcting the original image to be rectified to the horizontal position according to the inclination angle of the original image.
It should be noted that, in this embodiment, the inclination angle of the main object contour in the photograph obtained during robot inspection is measured automatically and the inclined photograph is corrected automatically, which facilitates the subsequent photo segmentation.
Specifically, in another preferred embodiment of the present invention, searching the binarized image according to a given width-to-height ratio, finding the minimum point of the original image at the upper left corner and the minimum point at the upper right corner, and calculating the inclination angle of the original image comprises the following steps:
S21, obtaining an accurate object boundary of the original image to be rectified according to the energy function of OpenCV's Active Contour method, the energy function being defined on a closed contour curve;
S22, generating a minimum bounding rectangle on the boundary-redefined original image according to OpenCV's minAreaRect function;
S23, calculating the inclination angle of the minimum bounding rectangle generated in S22 and taking it as the inclination angle of the original image.
It should be noted that, in this embodiment, the minimum bounding rectangle is generated on the boundary-redefined original image based on the accurate object boundary of the original image to be rectified, and the inclination angle of this rectangle is calculated and used as the inclination angle of the original image.
specifically, in another preferred embodiment of the present invention, the obtaining an original image of an original image to be rectified after binarization according to a given threshold includes:
determining a threshold value of the original image for rectification according to an adaptive threshold function of the OpenCV, and obtaining the original image after binarization;
in this embodiment, the noise is reduced as much as possible, and finally the image background and the text portion are represented as a binary image by 0 and 1, respectively.
According to the embodiment, the automatic segmentation method for the image of the inspection robot comprises the following steps:
S1, acquiring an image subjected to deviation rectification processing as the original image to be segmented;
S2, obtaining a weighted undirected graph of the original image to be segmented, the nodes of the weighted undirected graph being obtained from the pixels of the original image to be segmented;
S3, acquiring a foreground mark and a background mark of the weighted undirected graph according to the NormalizedCut method;
S4, correcting the original image to be segmented according to the foreground mark and the background mark to obtain the segmented image.
It should be noted that, in this embodiment, obtaining the foreground mark and the background mark of the weighted undirected graph and deriving the segmented image from them prevents character boundaries from being confused, and characters from being mislocated, when the image background is cluttered.
Specifically, in another preferred embodiment of the present invention, acquiring the foreground mark and the background mark of the weighted undirected graph according to the NormalizedCut method comprises:
generating a label map according to the foreground mark and the background mark;
It should be noted that, in this embodiment, this facilitates comparison in the subsequent picture recognition.
According to the above embodiments, the automatic identification method for the inspection robot image comprises the following steps:
S1, acquiring a segmented image as the original image to be identified, together with the infrared image corresponding to the original image to be identified;
S2, identifying a corresponding target from the infrared image according to a pre-trained deep target recognition neural network model, and determining the corresponding position of that target in the original image to be identified;
S3, determining the position of the target in the original image to be identified according to the trained deep target recognition neural network model.
It should be noted that, in this embodiment, the trained neural network model compares the corresponding target in the infrared image with the pixels of the original image to be identified in order to recognize the corresponding target in the original image, which facilitates the subsequent unification of shooting angle and lighting, improves shooting quality, and ultimately improves the image recognition rate.
Specifically, in another preferred embodiment of the present invention, identifying a corresponding target from the infrared image according to a pre-trained deep target recognition neural network model and determining the corresponding position of that target in the original image to be identified comprises:
training an untrained neural network model on a server data training set to obtain the trained neural network model, wherein the direct connection layer of the neural network model is replaced by a convolutional network.
It should be noted that, in this embodiment, the network can accept pictures of any size and output a segmentation map of the same size as the original image.
Specifically, in another preferred embodiment of the present invention, identifying a corresponding target from the infrared image according to a pre-trained deep target recognition neural network model and determining the corresponding position of that target in the original image to be identified comprises:
training an untrained neural network model on a server data training set to obtain the trained neural network model, wherein the direct connection layer of the neural network model is replaced by a dilated (atrous) convolution network.
It should be noted that, in this embodiment, holes are inserted between the taps of the ordinary convolution kernel, so that more global image information can be captured while the feature maps are downsampled by the same factor.
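A small PyTorch comparison illustrates the idea, assuming 3x3 kernels; the channel counts and the dilation rate are arbitrary examples.

import torch
import torch.nn as nn

# An ordinary 3x3 convolution covers a 3x3 window; with dilation=2 the same nine
# weights are spread over a 5x5 window (holes inserted between the taps), enlarging
# the receptive field without extra parameters and without extra downsampling.
ordinary = nn.Conv2d(64, 64, kernel_size=3, padding=1)
dilated = nn.Conv2d(64, 64, kernel_size=3, padding=2, dilation=2)

x = torch.randn(1, 64, 56, 56)
assert ordinary(x).shape == dilated(x).shape == (1, 64, 56, 56)   # same output resolution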
Specifically, in another preferred embodiment of the present invention, identifying a corresponding target from the infrared image according to a pre-trained deep target recognition neural network model and determining the corresponding position of that target in the original image to be identified comprises:
training an untrained neural network model on a server data training set to obtain the trained neural network model, wherein the direct connection layer of the neural network model is replaced by a global pyramid pooling network.
It should be noted that, in this embodiment, the feature map is pooled to several different sizes, so that the features carry better global and multi-scale information, thereby improving accuracy.
The specific operation of the invention is as follows:
Image deviation rectification: obtain the binarized image of the original image to be rectified according to a given threshold value; search the binarized image according to a given width-to-height ratio, find the minimum point of the original image at the upper left corner and the minimum point at the upper right corner, and calculate the inclination angle of the original image; then rotationally correct the original image to be rectified to the horizontal position according to that inclination angle.
Picture segmentation: acquire the rectified image as the original image to be segmented; obtain the weighted undirected graph of the original image to be segmented, with the graph nodes derived from its pixels; acquire the foreground mark and the background mark of the weighted undirected graph according to the NormalizedCut method; then correct the original image to be segmented according to the foreground mark and the background mark to obtain the segmented image.
Picture identification: first acquire the segmented image as the original image to be identified, together with the corresponding infrared image; then identify the corresponding target from the infrared image according to the pre-trained deep target recognition neural network model and determine its corresponding position in the original image to be identified; finally, determine the position of the target in the original image to be identified according to the trained deep target recognition neural network model.
The present invention has been described above for illustrative purposes. It is to be understood that the invention is not limited in its application to the details of construction and the arrangement of components set forth in the foregoing description; modifications thereof are intended to be included within the scope of the present invention.

Claims (9)

1. An automatic deviation rectifying method for an image of an inspection robot is characterized by comprising the following steps:
S1, obtaining a binarized image of the original image to be rectified according to a given threshold value;
S2, searching the binarized image according to a given width-to-height ratio, finding the minimum point of the original image at the upper left corner and the minimum point at the upper right corner, and calculating the inclination angle of the original image;
S3, rotationally correcting the original image to be rectified to the horizontal position according to the inclination angle of the original image.
2. The automatic deviation rectifying method for the inspection robot image according to claim 1, wherein searching the binarized image according to the given width-to-height ratio, finding the minimum point of the original image at the upper left corner and the minimum point at the upper right corner, and calculating the inclination angle of the original image comprises the following steps:
S21, obtaining an accurate object boundary of the original image to be rectified according to the energy function of OpenCV's Active Contour method, the energy function being defined on a closed contour curve;
S22, generating a minimum bounding rectangle on the boundary-redefined original image according to OpenCV's minAreaRect function;
S23, calculating the inclination angle of the minimum bounding rectangle generated in S22 and taking it as the inclination angle of the original image.
3. The automatic deviation rectifying method for the inspection robot image according to claim 1, wherein obtaining the binarized image of the original image to be rectified according to a given threshold value comprises:
determining a threshold for the original image to be rectified according to OpenCV's adaptiveThreshold function and obtaining the binarized image.
4. An automatic segmentation method for the inspection robot image, characterized by comprising the following steps:
S1, acquiring an image subjected to deviation rectification as the original image to be segmented;
S2, obtaining a weighted undirected graph of the original image to be segmented, the nodes of the weighted undirected graph being obtained from the pixels of the original image to be segmented;
S3, acquiring a foreground mark and a background mark of the weighted undirected graph according to the NormalizedCut method;
S4, correcting the original image to be segmented according to the foreground mark and the background mark to obtain the segmented image.
5. The automatic segmentation method for the inspection robot image according to claim 4, wherein acquiring the foreground mark and the background mark of the weighted undirected graph according to the NormalizedCut method comprises:
generating a label map according to the foreground mark and the background mark.
6. An automatic identification method for the inspection robot image according to any one of claims 1 to 5, characterized by comprising the following steps:
S1, acquiring a segmented image as the original image to be identified, together with the infrared image corresponding to the original image to be identified;
S2, identifying a corresponding target from the infrared image according to a pre-trained deep target recognition neural network model, and determining the corresponding position of that target in the original image to be identified;
S3, determining the position of the target in the original image to be identified according to the trained deep target recognition neural network model.
7. The automatic identification method for the inspection robot image according to claim 6, wherein identifying the corresponding target from the infrared image according to the pre-trained deep target recognition neural network model and determining the corresponding position of that target in the original image to be identified comprises:
training an untrained neural network model on a server data training set to obtain the trained neural network model, wherein the direct connection layer of the neural network model is replaced by a convolutional network.
8. The automatic identification method for the inspection robot image according to claim 7, wherein identifying the corresponding target from the infrared image according to the pre-trained deep target recognition neural network model and determining the corresponding position of that target in the original image to be identified comprises:
training an untrained neural network model on a server data training set to obtain the trained neural network model, wherein the direct connection layer of the neural network model is replaced by a dilated (atrous) convolution network.
9. The automatic identification method for the inspection robot image according to claim 8, wherein identifying the corresponding target from the infrared image according to the pre-trained deep target recognition neural network model and determining the corresponding position of that target in the original image to be identified comprises:
training an untrained neural network model on a server data training set to obtain the trained neural network model, wherein the direct connection layer of the neural network model is replaced by a global pyramid pooling network.
CN202110273210.8A 2021-03-15 2021-03-15 Automatic deviation rectifying, segmenting and identifying method for image of inspection robot Pending CN112767433A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110273210.8A CN112767433A (en) 2021-03-15 2021-03-15 Automatic deviation rectifying, segmenting and identifying method for image of inspection robot

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110273210.8A CN112767433A (en) 2021-03-15 2021-03-15 Automatic deviation rectifying, segmenting and identifying method for image of inspection robot

Publications (1)

Publication Number Publication Date
CN112767433A 2021-05-07

Family

ID=75691345

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110273210.8A Pending CN112767433A (en) 2021-03-15 2021-03-15 Automatic deviation rectifying, segmenting and identifying method for image of inspection robot

Country Status (1)

Country Link
CN (1) CN112767433A (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108189043A (en) * 2018-01-10 2018-06-22 北京飞鸿云际科技有限公司 A kind of method for inspecting and crusing robot system applied to high ferro computer room
CN110097054A (en) * 2019-04-29 2019-08-06 济南浪潮高新科技投资发展有限公司 A kind of text image method for correcting error based on image projection transformation
CN110850723A (en) * 2019-12-02 2020-02-28 西安科技大学 Fault diagnosis and positioning method based on transformer substation inspection robot system
CN112131936A (en) * 2020-08-13 2020-12-25 华瑞新智科技(北京)有限公司 Inspection robot image identification method and inspection robot

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
从林: "Research on Spectral Clustering Algorithms for Image Segmentation" (面向图像分割的谱聚类算法研究), China Master's Theses Full-text Database, Information Science and Technology, no. 01, page 4 *
张代兵 et al.: "Fusing Ground Multi-sensor Information to Guide UAV Landing" (融合地面多传感器信息引导无人机着陆), Journal of National University of Defense Technology (国防科技大学学报), vol. 40, no. 01, pages 151-156 *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115797619A (en) * 2023-02-10 2023-03-14 南京天创电子技术有限公司 Deviation rectifying method suitable for image positioning of inspection robot instrument

Similar Documents

Publication Publication Date Title
EP3740897B1 (en) License plate reader using optical character recognition on plural detected regions
US10558844B2 (en) Lightweight 3D vision camera with intelligent segmentation engine for machine vision and auto identification
CN110543867A (en) crowd density estimation system and method under condition of multiple cameras
CN111445517A (en) Robot vision end positioning method and device and computer readable storage medium
KR20070016018A (en) apparatus and method for extracting human face in a image
CN111862201A (en) Deep learning-based spatial non-cooperative target relative pose estimation method
CN112085024A (en) Tank surface character recognition method
CN111199556A (en) Indoor pedestrian detection and tracking method based on camera
CN111461036B (en) Real-time pedestrian detection method using background modeling to enhance data
CN113989604B (en) Tire DOT information identification method based on end-to-end deep learning
CN112784712B (en) Missing child early warning implementation method and device based on real-time monitoring
CN105825168A (en) Golden snub-nosed monkey face detection and tracking algorithm based on S-TLD
CN110599516A (en) Moving target detection method and device, storage medium and terminal equipment
CN111695373A (en) Zebra crossing positioning method, system, medium and device
Zhang et al. Scale-adaptive NN-based similarity for robust template matching
CN112767433A (en) Automatic deviation rectifying, segmenting and identifying method for image of inspection robot
CN114235815A (en) Method for detecting surface defects of outdoor electrical equipment of converter station based on scene filtering
CN114067128A (en) SLAM loop detection method based on semantic features
CN109993715A (en) A kind of robot vision image preprocessing system and image processing method
CN108074264A (en) A kind of classification multi-vision visual localization method, system and device
CN108985216B (en) Pedestrian head detection method based on multivariate logistic regression feature fusion
Harish et al. New features for webcam proctoring using python and opencv
CN111738264A (en) Intelligent acquisition method for data of display panel of machine room equipment
CN115797397A (en) Method and system for robot to autonomously follow target person in all weather
CN210442821U (en) Face recognition device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination