CN110136186B - Detection target matching method for mobile robot target ranging - Google Patents

Detection target matching method for mobile robot target ranging

Info

Publication number
CN110136186B
CN110136186B CN201910389574.5A
Authority
CN
China
Prior art keywords
detection target
target
detection
deep learning
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910389574.5A
Other languages
Chinese (zh)
Other versions
CN110136186A (en)
Inventor
许德章
王毅恒
汪步云
汪志红
许曙
王智勇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Anhui Polytechnic University
Original Assignee
Anhui Polytechnic University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Anhui Polytechnic University filed Critical Anhui Polytechnic University
Priority to CN201910389574.5A priority Critical patent/CN110136186B/en
Publication of CN110136186A publication Critical patent/CN110136186A/en
Application granted granted Critical
Publication of CN110136186B publication Critical patent/CN110136186B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C3/00 Measuring distances in line of sight; Optical rangefinders
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/50 Depth or shape recovery
    • G06T7/55 Depth or shape recovery from multiple images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/20 Scenes; Scene-specific elements in augmented reality scenes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10004 Still image; Photographic image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07 Target detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Electromagnetism (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a detection target matching method for mobile robot target ranging. Images are first collected with a binocular camera, and the left and right images are detected with a trained deep learning model. A detection target is selected in the left image and type recognition is performed against the detection targets in the right image. If a target of the same type is recognized, the two detections are determined to be the same object in actual space and the depth distance of that object is calculated by the parallax method; the method then judges whether the current detection target is the last one and, if not, takes the next detection target and repeats the above steps. If no target of the same type is recognized, the method skips directly to the judging step. After the binocular camera obtains the images and target detection is performed, the method automatically matches the detection targets in the left and right images; it retains the speed of deep learning target detection while enabling automatic matching, which makes it convenient for the processing equipment to measure the depth of each detection target automatically.

Description

Detection target matching method for mobile robot target ranging
Technical Field
The invention relates to the field of robots, in particular to a detection target matching method for mobile robot target ranging.
Background
In recent years, with the rapid development of science and technology, mobile robots have been applied in many areas of production and daily life and have become one of the most active and promising technologies. Obstacle detection in particular is a hot issue in mobile robot research. To navigate, a robot must detect the relative distance to objects in the environment that obstruct its travel before it can avoid them. Common detection methods include ultrasonic ranging, laser pulse ranging, infrared ranging, optical ranging, and stereoscopic ranging. Methods that compute the distance between the measured object and the sensor from the time difference between emission and return of an ultrasonic, laser, or infrared signal are called active methods. Active methods measure distance conveniently and quickly with simple calculation, so they are widely used in real-time control. However, the emitting and receiving devices are expensive, environmental problems such as reflection, noise, and crosstalk are difficult to avoid, and active methods therefore lack general applicability. By contrast, visual sensors offer rich information and a wide detection range, and are increasingly applied to mobile robot navigation, especially obstacle detection.
Meanwhile, target detection based on deep learning has long been applied in fields such as industrial production, autonomous driving, video surveillance, image retrieval, and human-machine interaction, and can efficiently perform real-time target detection on images. Binocular vision, in turn, can obtain the depth of matched points in the left and right images by the parallax method. In practical engineering applications, however, a mobile robot detecting an obstacle does not need the depth of every point on the obstacle; indeed, no ranging method can measure the depth of every point comprehensively. It suffices to measure the depth of a representative part of the obstacle, which provides the mobile robot with reference information for judging the obstacle's position. The center point of the identification frame produced by the deep learning detection algorithm can therefore serve as the representative part of the whole obstacle: once the identification frames of the same detection target in the left and right images are matched, the depth of the frame's center point can be measured by the parallax method. How to match the identification frames of the same detection target in the left and right images is the key problem addressed by the detection target matching method disclosed in this invention.
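For reference, the parallax method mentioned above recovers depth from the standard binocular relation, the same formula used in the embodiment later in this description:

z = (f × b_line) / diff

where z is the object depth distance, f the calibrated focal length in pixels, b_line the binocular baseline length, and diff the parallax, i.e., the horizontal coordinate difference of the matched points in the left and right images.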
Patent CN109544633A (application No. 201710867746.6) discloses a target ranging method that captures a traffic target with a monocular camera, reads the camera's intrinsic and extrinsic parameters, reads the size of the traffic target from a preset standard, and then calculates the distance between the traffic target and a reference target. The patent does not explain how the traffic target is selected and determined, and the method is not applicable to new scenes, new images, or target ranging when multiple targets appear in one image.
Patent CN109212540A (application No. 201811062793.4) discloses a ranging method based on a lidar system, which receives ranging data measured by the system's multiple lidars, establishes a three-dimensional coordinate model, and determines the distance between the target unmanned device and each obstacle.
Patent CN109029363A (application No. 201810562681.9) discloses a target ranging method based on deep learning, which establishes target databases at different distances, builds a target ranging model, designs the model's loss function and training method, and tests the trained model. The target ranging problem is converted into a regression problem and integrated into a target detection algorithm model, so that target detection and target ranging are realized within one algorithm model. However, claim 3 of that patent requires each single-frame image to be manually calibrated with a calibration tool, the calibration contents being: the coordinates (x, y) of the center point of the detected target's bounding box, the bounding box width w and height h, the category information c, and the distance L from the detected target to the camera. This calibration must be completed manually; if the target database is large, manual calibration is impractical (the enormous labeling effort behind the ImageNet image data set is illustration enough), and manually building databases for deep learning and artificial intelligence consumes manpower, material, and financial resources. Moreover, depth information computed from an ordinary monocular camera image is less accurate than that computed from a binocular camera.
Disclosure of Invention
In order to solve the problems mentioned in the background art, the present invention provides a detection target matching method for mobile robot target ranging.
The technical problem to be solved by the invention is realized by adopting the following technical scheme:
a detection target matching method for mobile robot target ranging comprises the following steps:
(1) acquiring an image by using a binocular camera;
(2) establishing and training a deep learning training model, and simultaneously carrying out target detection on the left image and the right image according to the trained deep learning training model;
(3) obtaining detection targets A1, A2, A3, …, An-1 and An in the left image and detection targets B1, B2, B3, …, Bm-1 and Bm in the right image, randomly selecting one detection target in the left image, and marking the detection target as Ai;
(4) carrying out type identification between the detection target Ai and the detection targets B1, B2, B3, …, Bm-1 and Bm in the right image; directly jumping to step (7) if no detection target of the same type as the detection target Ai is identified; performing step (5) if a detection target of the same type as the detection target Ai is identified;
(5) selecting the one or more detection targets of the same type as the detection target Ai; if exactly one is selected, the matching succeeds directly; if more than one is selected, comparing the distances between the bounding-box center-point coordinates of the detection target Ai and those of each candidate and selecting the candidate whose bounding-box center point is closest to that of the detection target Ai; marking the detection target matched with the detection target Ai as Bj;
(6) after the detection target Ai in the left image and the detection target Bj in the right image are successfully matched, determining that the detection target Ai and the detection target Bj are the same object in the actual space, recording the object as Ci, calculating the depth distance of the object Ci by a parallax method, and recording the depth distance as Zi;
(7) judging whether i ≥ n at this moment; if i ≥ n, ending the algorithm flow; if not, assigning i + 1 to i and jumping back to step (4) to continue execution until i ≥ n (a code sketch of this loop is given after the following note).
In step (3), the initial value of i in the detection target Ai is 1, and i is a positive integer.
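A minimal sketch of steps (3)-(7) in Python follows. The detection-result structure (a dict holding a class label and a bounding-box center) and the function name are illustrative assumptions, not part of the patent:

```python
import math

def match_and_range(left_dets, right_dets, f_px, baseline_mm):
    """Match each left-image detection Ai to a right-image detection Bj of the
    same type (steps 3-5) and estimate its depth by the parallax method (step 6).

    Each detection is assumed to be {"cls": str, "cx": float, "cy": float},
    with (cx, cy) the bounding-box center in pixels.
    Returns a list of (left_index, right_index, depth_mm) tuples.
    """
    results = []
    for i, a in enumerate(left_dets):                        # step (3): take Ai
        # step (4): keep only right-image detections of the same type as Ai
        cands = [(j, b) for j, b in enumerate(right_dets) if b["cls"] == a["cls"]]
        if not cands:
            continue                                         # no match: on to step (7)
        # step (5): choose the candidate whose box center is nearest to Ai's
        j, b = min(cands, key=lambda jb: math.hypot(a["cx"] - jb[1]["cx"],
                                                    a["cy"] - jb[1]["cy"]))
        # step (6): Ai and Bj are the same object Ci; depth z = f * b_line / diff
        diff = a["cx"] - b["cx"]                             # parallax, px
        if diff > 0:                                         # parallax must be positive
            results.append((i, j, f_px * baseline_mm / diff))
    return results
```

With the bounding-box centers listed in the embodiment below, this loop reproduces the A1-B1 and A3-B3 pairings shown in figs. 5 and 6.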
As a further improvement of the invention, the deep learning training model in step (2) is established and trained as follows:
(I) collecting the image data acquired in step (1) to form an image data set;
(II) selecting a mature deep learning framework from the prior art;
(III) training on the image data set with the selected deep learning framework;
(IV) obtaining a deep learning training model for processing the acquired image data.
As another improvement of the invention, the deep learning training model in step (2) is established and trained as follows:
(A) collecting the image data acquired in step (1) to form an image data set;
(B) establishing a deep learning neural network framework;
(C) pre-training the deep learning neural network framework to obtain one's own deep learning framework;
(D) training on the image data set with the trained deep learning framework;
(E) obtaining a deep learning training model for processing the acquired image data.
The invention has the beneficial effects that:
the method and the device can realize automatic matching of the detection target in the left image and the right image after the images are obtained by the binocular camera and target detection processing is carried out, have rapidity of deep learning target detection, can realize automatic matching of the detection target, and are convenient for processing equipment to automatically measure the depth of the detection target.
Drawings
The invention is further illustrated with reference to the following figures and examples:
FIG. 1 is a schematic flow diagram of the present invention;
FIG. 2 is a flow chart of the deep learning training model building process of the present invention;
FIG. 3 is a schematic diagram of an original image obtained by a binocular camera according to the present invention;
FIG. 4 is a diagram illustrating the results of target detection according to the present invention;
FIG. 5 is a schematic diagram of a left image detection target according to the present invention;
FIG. 6 is a schematic diagram of a right image detection target according to the present invention.
Detailed Description
To make the technical means, creative features, objectives, and effects of the invention easy to understand, the invention is further explained below with reference to the drawings and embodiments.
A detection target matching method for mobile robot target ranging comprises the following steps:
(1) First, a binocular camera is connected to a notebook computer and the notebook computer is mounted on the mobile robot; the camera then acquires real-time images in front of the robot, as shown in fig. 3.
(2) A deep learning training model is established and trained, and target detection is carried out on the left and right images simultaneously with the trained model.
The deep learning training model can be obtained in either of two modes, as follows:
The first mode:
(I) collecting the image data acquired in step (1) to form an image data set;
(II) selecting a mature deep learning framework from the prior art;
(III) training on the image data set with the selected deep learning framework;
(IV) obtaining a deep learning training model for processing the acquired image data.
The second mode:
(A) collecting the image data acquired in step (1) to form an image data set;
(B) establishing a deep learning neural network framework;
(C) pre-training the deep learning neural network framework to obtain one's own deep learning framework;
(D) training on the image data set with the trained deep learning framework;
(E) obtaining a deep learning training model for processing the acquired image data.
In specific use, either mode can be selected according to actual conditions; an illustrative sketch of the first mode is given below.
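As an illustration of the first mode, a pretrained detector from a mature, off-the-shelf framework can be run on the left and right frames. torchvision is used here purely as an example of such a framework (the patent does not name one), and the image file names are hypothetical:

```python
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.transforms.functional import to_tensor
from PIL import Image

# Example only: any mature detection framework would serve equally well.
model = fasterrcnn_resnet50_fpn(pretrained=True).eval()

def detect(image_path, score_thresh=0.5):
    """Run the pretrained detector; return (label, center_x, center_y) per box."""
    img = to_tensor(Image.open(image_path).convert("RGB"))
    with torch.no_grad():
        out = model([img])[0]
    dets = []
    for box, label, score in zip(out["boxes"], out["labels"], out["scores"]):
        if score >= score_thresh:
            x1, y1, x2, y2 = box.tolist()
            dets.append((int(label), (x1 + x2) / 2, (y1 + y2) / 2))
    return dets

left_dets = detect("left.png")    # hypothetical file names for the stereo pair
right_dets = detect("right.png")
```

In the second mode, the off-the-shelf pretrained network would simply be replaced by one's own pre-trained network before training on the collected image data set.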
(3) The detection targets in the left image are obtained as A1, A2, A3, and A4, and those in the right image as B1, B2, B3, and B4; one detection target in the left image is selected and marked as Ai, with i first set to 1.
(4) Type recognition is performed between the detection target A1 and the detection targets B1, B2, B3, and B4 in the right image. If no detection target of the same type as A1 were recognized, the process would jump directly to step (7); since detection targets of the same type as A1 are recognized here, the process proceeds to step (5).
(5) As shown in fig. 5 and fig. 6, two detection targets, B1 and B3, of the same type as the detection target A1 are recognized, so the coordinates of the bounding-box center point of A1 must be compared with those of B1 and B3, and the closer one selected.
The bounding-box center point of A1 is (430, 381), and those of the right-image detection targets B1 and B3 are (416.5, 378.5) and (632, 348) respectively. The distances are d(A1, B1) = 13.73 and d(A1, B3) = 204.68; since d(A1, B1) < d(A1, B3), the bounding-box center point of B1 is closer to that of A1, and the detection target A1 in the left image is therefore successfully matched with the right-image detection target B1.
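The two distances quoted above can be reproduced with a throwaway computation (coordinates taken from figs. 5 and 6):

```python
import math

a1 = (430.0, 381.0)    # bounding-box center of A1, px
b1 = (416.5, 378.5)    # bounding-box center of B1, px
b3 = (632.0, 348.0)    # bounding-box center of B3, px

d_a1_b1 = math.hypot(a1[0] - b1[0], a1[1] - b1[1])   # 13.73 px
d_a1_b3 = math.hypot(a1[0] - b3[0], a1[1] - b3[1])   # 204.68 px
assert d_a1_b1 < d_a1_b3                             # hence A1 matches B1
```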
(6) After the detection target A1 and the detection target B1 are successfully matched, the detection target A1 in the left image and the detection target B1 in the right image are determined to be the same object C1 in actual space, and the depth distance z1 of the object C1 is calculated by the parallax method. In this example the binocular camera baseline length is b_line = 120 mm and the calibrated focal length is f = 731 px. According to the formula z = f × b_line / diff, where z is the object depth distance and diff is the parallax (the horizontal coordinate difference of the matched points in the binocular images, always a positive number), the parallax between the center point of the detection target A1 in the left image and that of the right-image detection target B1 is diff = 430 − 416.5 = 13.5, giving z1 = 6497.78 mm.
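The same arithmetic can be checked directly in code (values taken from this embodiment):

```python
f_px = 731.0           # calibrated focal length, px
baseline_mm = 120.0    # binocular camera baseline b_line, mm
diff = 430.0 - 416.5   # parallax between matched center points, px

z1 = f_px * baseline_mm / diff
print(round(z1, 2))    # 6497.78 (mm), matching the value in the text
```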
(7) Whether i ≥ n is judged at this moment: if i ≥ n, the algorithm flow ends; if not, i + 1 is assigned to i and the flow jumps back to step (4) to continue until i ≥ n.
At this moment i = 1 and n = 4, so i < n; i + 1 is assigned to i, the new i is 2, and the algorithm flow jumps back to step (4) to continue.
(8) As shown in fig. 5 and fig. 6, only one detection target, B2, of the same type as the detection target A2 is recognized, so the matching succeeds directly: the detection target A2 in the left image is matched with the right-image detection target B2.
(9) As in step (6), the detection target A2 in the left image and the right-image detection target B2 are determined to be the same object C2 in actual space, and z2 = 3189.82 mm is calculated by the parallax method.
(10) Step (7) is repeated: i = 2 and n = 4, so i < n; i + 1 is assigned to i, the new i is 3, and the algorithm flow jumps back to step (4) to continue.
(11) As shown in fig. 5 and fig. 6, two detection targets, B1 and B3, of the same type as the detection target A3 are recognized.
The bounding-box center point of A3 is (649.5, 344.5), and those of B1 and B3 are (416.5, 378.5) and (632, 348) respectively. The distances are d(A3, B1) = 235.47 and d(A3, B3) = 17.85; since d(A3, B3) < d(A3, B1), the bounding-box center point of B3 is closer to that of A3, and the detection target A3 in the left image is therefore successfully matched with the right-image detection target B3.
(12) As in step (6), the detection target A3 in the left image and the right-image detection target B3 are determined to be the same object C3 in actual space, and z3 = 5012.57 mm is calculated by the parallax method.
(13) Step (7) is repeated: i = 3 and n = 4, so i < n; i + 1 is assigned to i, the new i is 4, and the algorithm flow jumps back to step (4) to continue.
(14) As shown in fig. 5 and fig. 6, only one detection target, B4, of the same type as the detection target A4 is recognized, so the matching succeeds directly: the detection target A4 in the left image is matched with the right-image detection target B4.
(15) As in step (6), the detection target A4 in the left image and the right-image detection target B4 are determined to be the same object C4 in actual space, and z4 = 17544 mm is calculated by the parallax method.
(16) Step (7) is repeated: now i = 4 and n = 4, so i ≥ n, and the algorithm flow ends.
The initial value of i is 1, and i is a positive integer; in addition, the bounding-box center-point coordinates in the above embodiments are in pixel (px) units, with the coordinate origin at the upper-left corner of fig. 5 and fig. 6.
The foregoing shows and describes the general principles, principal features, and advantages of the invention. Those skilled in the art will understand that the invention is not limited to the embodiments described above, which merely illustrate its principles; various changes and modifications may be made without departing from the spirit and scope of the invention, and all such changes and modifications fall within the scope of the claimed invention. The scope of the invention is defined by the appended claims and their equivalents.

Claims (4)

1. A detection target matching method for mobile robot target ranging is characterized in that: the method comprises the following steps:
(1) acquiring an image by using a binocular camera;
(2) establishing and training a deep learning training model, and simultaneously carrying out target detection on the left image and the right image according to the trained deep learning training model;
(3) obtaining detection targets A1, A2, A3, …, An-1 and An in the left image and detection targets B1, B2, B3, …, Bm-1 and Bm in the right image, randomly selecting one detection target in the left image, and marking the detection target as Ai;
(4) performing type recognition between the detection target Ai and the detection targets B1, B2, B3, …, Bm-1 and Bm in the right image; directly jumping to step (7) if no detection target of the same type as the detection target Ai is recognized, and performing step (5) if a detection target of the same type as the detection target Ai is recognized;
(5) selecting the one or more detection targets of the same type as the detection target Ai; if exactly one is selected, the matching succeeds directly; if more than one is selected, comparing the distances between the bounding-box center-point coordinates of the detection target Ai and those of each candidate and selecting the candidate whose bounding-box center point is closest to that of the detection target Ai; marking the detection target matched with the detection target Ai as Bj;
(6) after the detection target Ai in the left image and the detection target Bj in the right image are successfully matched, determining that the detection target Ai and the detection target Bj are the same object in the actual space, marking the object as Ci, calculating the depth distance of the object Ci by a parallax method, and marking the object as Zi;
(7) judging whether i ≥ n at this moment; if i ≥ n, ending the algorithm flow; if i < n, assigning i + 1 to i and jumping back to step (4) to continue execution until i ≥ n.
2. The detection target matching method for mobile robot target ranging according to claim 1, characterized in that: in step (3), the initial value of i in the detection target Ai is 1, and i is a positive integer.
3. The detection target matching method for mobile robot target ranging according to claim 1, characterized in that the deep learning training model in step (2) is established and trained as follows:
(I) collecting the image data acquired in step (1) to form an image data set;
(II) selecting a mature deep learning framework from the prior art;
(III) training on the image data set with the selected deep learning framework;
(IV) obtaining a deep learning training model for processing the acquired image data.
4. The detection target matching method for mobile robot target ranging according to claim 1, characterized in that the deep learning training model in step (2) is established and trained as follows:
(A) collecting the image data acquired in step (1) to form an image data set;
(B) establishing a deep learning neural network framework;
(C) pre-training the deep learning neural network framework to obtain one's own deep learning framework;
(D) training on the image data set with the trained deep learning framework;
(E) obtaining a deep learning training model for processing the acquired image data.
CN201910389574.5A 2019-05-10 2019-05-10 Detection target matching method for mobile robot target ranging Active CN110136186B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910389574.5A CN110136186B (en) 2019-05-10 2019-05-10 Detection target matching method for mobile robot target ranging


Publications (2)

Publication Number Publication Date
CN110136186A CN110136186A (en) 2019-08-16
CN110136186B true CN110136186B (en) 2022-09-16

Family

ID=67573358

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910389574.5A Active CN110136186B (en) 2019-05-10 2019-05-10 Detection target matching method for mobile robot target ranging

Country Status (1)

Country Link
CN (1) CN110136186B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111990930B (en) * 2020-08-28 2022-05-20 北京石头创新科技有限公司 Distance measuring method, distance measuring device, robot and storage medium
CN112489186B (en) * 2020-10-28 2023-06-27 中汽数据(天津)有限公司 Automatic driving binocular data sensing method
CN114018268B (en) * 2021-11-05 2024-06-28 上海景吾智能科技有限公司 Indoor mobile robot navigation method
CN114049394B (en) * 2021-11-23 2022-06-21 智道网联科技(北京)有限公司 Monocular distance measuring method, device, equipment and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101516040A (en) * 2008-02-20 2009-08-26 深圳华为通信技术有限公司 Video matching method, device and system
WO2018086348A1 (en) * 2016-11-09 2018-05-17 人加智能机器人技术(北京)有限公司 Binocular stereo vision system and depth measurement method
WO2018098915A1 (en) * 2016-11-29 2018-06-07 深圳市元征科技股份有限公司 Control method of cleaning robot, and cleaning robot
CN109029363A (en) * 2018-06-04 2018-12-18 泉州装备制造研究所 A kind of target ranging method based on deep learning
CN109084724A (en) * 2018-07-06 2018-12-25 西安理工大学 A kind of deep learning barrier distance measuring method based on binocular vision

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Research on workpiece measurement based on binocular vision; Pan Qi et al.; 《智能计算机与应用》 (Intelligent Computer and Applications); Feb. 28, 2018 (No. 01); full text *
Binocular vision vehicle detection method based on improved Fast-RCNN; Zhang Qi et al.; 《应用光学》 (Journal of Applied Optics); Nov. 15, 2018 (No. 06); full text *

Also Published As

Publication number Publication date
CN110136186A (en) 2019-08-16

Similar Documents

Publication Publication Date Title
CN110136186B (en) Detection target matching method for mobile robot target ranging
CN109949372B (en) Laser radar and vision combined calibration method
CN110163904B (en) Object labeling method, movement control method, device, equipment and storage medium
CN111563442A (en) Slam method and system for fusing point cloud and camera image data based on laser radar
CN113111887B (en) Semantic segmentation method and system based on information fusion of camera and laser radar
CN113936198B (en) Low-beam laser radar and camera fusion method, storage medium and device
Kang et al. Accurate fruit localisation using high resolution LiDAR-camera fusion and instance segmentation
CN113160327A (en) Method and system for realizing point cloud completion
CN108594244B (en) Obstacle recognition transfer learning method based on stereoscopic vision and laser radar
CN116310679A (en) Multi-sensor fusion target detection method, system, medium, equipment and terminal
CN111239768A (en) Method for automatically constructing map and searching inspection target by electric power inspection robot
CN114089330B (en) Indoor mobile robot glass detection and map updating method based on depth image restoration
CN114972421A (en) Workshop material identification tracking and positioning method and system
CN115731545A (en) Cable tunnel inspection method and device based on fusion perception
CN114359865A (en) Obstacle detection method and related device
CN111696147B (en) Depth estimation method based on improved YOLOv3 model
CN115542338B (en) Laser radar data learning method based on point cloud spatial distribution mapping
CN117058236A (en) Target identification positioning method based on multi-vision system self-switching
Liu et al. Outdoor camera calibration method for a GPS & camera based surveillance system
CN114792417A (en) Model training method, image recognition method, device, equipment and storage medium
CN114565906A (en) Obstacle detection method, obstacle detection device, electronic device, and storage medium
CN114155258A (en) Detection method for highway construction enclosed area
CN112200856A (en) Visual ranging method based on event camera
CN112598738A (en) Figure positioning method based on deep learning
Tousi et al. A new approach to estimate depth of cars using a monocular image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant