CN112540382A - Laser navigation AGV auxiliary positioning method based on visual identification detection - Google Patents

Laser navigation AGV auxiliary positioning method based on visual identification detection

Info

Publication number
CN112540382A
CN112540382A (application CN201910844981.0A; granted as CN112540382B)
Authority
CN
China
Prior art keywords
agv
visual
positioning method
distance
laser
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910844981.0A
Other languages
Chinese (zh)
Other versions
CN112540382B (en)
Inventor
周军
罗川
吴迪
皇攀凌
陈庆伟
李建强
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shandong University
Shandong Alesmart Intelligent Technology Co Ltd
Original Assignee
Shandong University
Shandong Alesmart Intelligent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shandong University, Shandong Alesmart Intelligent Technology Co Ltd
Priority to CN201910844981.0A
Publication of CN112540382A
Application granted
Publication of CN112540382B
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S 17/88 Lidar systems specially adapted for specific applications
    • G01S 17/93 Lidar systems specially adapted for specific applications for anti-collision purposes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/25 Determination of region of interest [ROI] or a volume of interest [VOI]

Abstract

A laser navigation AGV auxiliary positioning method based on visual identification detection comprises: training an image set of designated road signs to form an image recognition model; identifying the designated road signs with a visual sensor within a designated position range, detecting the distances between the AGV and the designated road signs on both sides, and calculating the sum of the distances; and feeding the information acquired from the visual sensor back to the laser SLAM system to locate the absolute position of the AGV, thereby correcting the accumulated error of laser SLAM navigation, improving positioning accuracy and improving working efficiency.

Description

Laser navigation AGV auxiliary positioning method based on visual identification detection
Technical Field
The invention relates to a laser navigation AGV auxiliary positioning method based on visual identification detection, and belongs to the technical field of visual identification detection.
Background
In industrial applications of SLAM autonomous navigation for indoor mobile robots, conventional SLAM navigation mostly relies on a single 2D laser radar sensor: the laser emitter sends out laser pulses in a two-dimensional plane, depth information about the surrounding environment is obtained from the time of flight of each pulse, and the result is matched against the original map database to determine the robot's position. The drawback of this approach is that the amount of detected information is small; in scenes with similar features the positioning uncertainty is high and contour mismatching easily occurs, so it is difficult to apply in industrial settings with a high environmental repetition rate.
Chinese patent document CN109752725A discloses a low-speed commercial robot together with a positioning and navigation method and system in which SLAM builds the map by 2D laser positioning. The accuracy is relatively high, but because a 2D lidar collects little information, recognition and detection are deficient in scenes with similar texture information. To remedy this defect, when positioning accuracy in similar scenes is poor, the present invention uses a vision sensor to assist by extracting rich texture information and recognizing objects, thereby improving the accuracy of repeated 2D laser positioning.
Disclosure of Invention
Aiming at the deficiencies of the prior art, the invention provides a laser navigation AGV auxiliary positioning method based on visual identification detection that can detect the positional relationship between an industrial robot and a landmark object, thereby correcting the accumulated error of laser SLAM navigation, improving positioning accuracy and improving working efficiency.
The technical scheme of the invention is as follows:
a laser navigation AGV auxiliary positioning method based on visual identification detection comprises the following steps:
step (1), arranging visual sensors on two sides of an AGV, and calibrating internal and external parameters of the visual sensors;
step (2), training an image set of the designated road signs to generate an image recognition model;
step (3), operating the AGV; when the AGV reaches the designated position range, starting the visual sensors to identify the designated road signs; after the road signs are identified, detecting the distances between the AGV and the designated road signs on both sides, recorded as d1 and d2 respectively, calculating their sum d = d1 + d2, and setting a distance detection threshold T according to the industrial environment;
when d is less than T, performing the step (4), otherwise, returning to the step (3), and enabling the AGV to continue to move to perform landmark identification;
step (4), feeding the information obtained from the vision sensor back to the laser SLAM system and comparing it against the map database, the map database being the industrial environment map established in advance when the AGV was taught. This step clears the accumulated error of the sensors attached to the AGV: the on-board sensors (odometer, gyroscope, lidar and the like) have limited accuracy, so their errors accumulate over long periods of movement. Clearing here means relocating the absolute position of the AGV: when the AGV arrives near a designated road sign, the laser SLAM system scans the surrounding information and outputs the absolute pose (position and rotation) of the AGV instead of continuing to use the pose accumulated by the various sensors over long operation, which indirectly eliminates the accumulated sensor error; the next operation then proceeds from this absolute pose, realizing the auxiliary positioning of the AGV. The information acquired from the vision sensor includes landmark information and distance information. A minimal sketch of this flow is given below.
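The sketch below only illustrates the control flow of steps (1) to (4); it is not the patent's implementation. Every name in it (detect_landmark, depth_at, match_map_database, reset_pose, the threshold value) is a hypothetical placeholder standing in for the AGV, camera and laser SLAM interfaces.

```python
# Hypothetical sketch of the visual auxiliary positioning loop (all names are placeholders).

DISTANCE_THRESHOLD_T = 2.0  # metres; in practice set according to the industrial environment

def assisted_positioning_step(agv, camera_left, camera_right, slam, model):
    """One pass of steps (3)-(4): recognize road signs, range them, and gate SLAM relocation."""
    if not agv.in_designated_zone():
        return  # vision is only started inside the designated position range

    # Step (3): recognize the designated road signs on both sides of the AGV.
    left_sign = model.detect_landmark(camera_left.capture())
    right_sign = model.detect_landmark(camera_right.capture())
    if left_sign is None or right_sign is None:
        return  # keep moving and repeat landmark identification

    # Distances d1, d2 from the AGV to the road signs, and their sum d.
    d1 = camera_left.depth_at(left_sign.center)
    d2 = camera_right.depth_at(right_sign.center)
    d = d1 + d2

    # Step (4): only when d < T is the visual result fed back to the laser SLAM system.
    if d < DISTANCE_THRESHOLD_T:
        absolute_pose = slam.match_map_database(left_sign.label, right_sign.label, d1, d2)
        agv.reset_pose(absolute_pose)  # replaces the drifted, accumulated pose
```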
Preferably, in step (1), the internal and external parameters of the vision sensor are calibrated; the internal parameters include the camera focal length and distortion, and the external parameters include the rotation and translation from the world coordinate system to the camera coordinate system. The vision sensor used is a depth camera, and manual calibration is carried out through the camera SDK package, i.e. the software package provided on the camera manufacturer's official website.
Preferably, in step (2), the trained road signs are fixed objects; images of each road sign taken from all angles are collected as the training sample set, and the designated road signs are trained by a CNN (convolutional neural network) method to generate an image recognition model that serves as the basis for subsequent comparison.
Further preferably, in step (2), different labels are set for different road signs when the image recognition model is trained, so that the designated road sign can be identified in subsequent recognition and the positioning accuracy of the final absolute position is improved.
Preferably, in step (3), when the AGV reaches the designated position range, the visual sensor is started, the surrounding environment is photographed, feature points in the surrounding environment are extracted, and the road sign is identified by comparison with the image recognition model trained in step (2).
Preferably, in step (3), the AGV is further provided with an infrared emitter; the vision sensor calculates the distances from itself to the designated road signs on both sides by triangulation and computes their sum, and the infrared emitter projects infrared light for ranging correction, so that more accurate distances d1 and d2 are obtained.
In this method, specific objects along the travel path are recognized by vision and the recognition information is transmitted to the laser navigation system, further improving the positioning accuracy of navigation. Positioning is not achieved directly by visual detection, nor is the map built by visual measurement: visual recognition is an auxiliary means that compensates, in similar scenes, for the low accuracy of laser detection; the recognition information is passed to the system, and the final positioning and mapping are completed by the laser.
The invention has the beneficial effects that:
the invention relates to a laser navigation AGV auxiliary positioning method based on visual recognition detection. Based on the information obtained by vision, the laser SLAM system scans the surrounding environment information, compares the image recognition model and realizes the positioning of the absolute position, thereby realizing the correction of the accumulated error of laser SLAM navigation, improving the positioning precision and improving the working efficiency. The road sign object is identified through vision, the distance between the AGV and the road sign is detected, the auxiliary positioning of the AGV is realized through the information transmission systems, and the positioning precision is improved.
Drawings
FIG. 1 is a flow chart of a laser navigation AGV auxiliary positioning method based on visual identification detection;
FIG. 2 is a schematic diagram of an AGV auxiliary positioning based on laser navigation of visual recognition detection;
FIG. 3 is a chessboard diagram of the calibration of the internal and external parameters of the vision sensor;
FIG. 4 is a schematic diagram of visual sensor ranging;
the system comprises a road sign 1, a laser radar 2, a depth camera 3, an AGV 4, an image sensor 5, an infrared transmitter 6 and a depth map 7.
Detailed Description
The present invention is further described below by way of example with reference to the accompanying drawings, but is not limited thereto.
Example 1:
a laser navigation AGV auxiliary positioning method based on visual identification detection comprises the following steps, as shown in FIG. 1:
Step (1): visual sensors are arranged on both sides of the AGV and their internal and external parameters are calibrated. The internal parameters comprise the camera focal length and distortion; the external parameters comprise the rotation and translation from the world coordinate system to the camera coordinate system. The vision sensor used is a depth camera. FIG. 3 shows the chessboard pattern used to calibrate the internal and external parameters: the camera is started so that the whole chessboard pattern of FIG. 3 lies within its field of view, manual calibration is carried out through the camera SDK (software development kit) package by opening the calibration program built into the SDK, and the output internal and external parameters are saved and written into the camera configuration file, completing the calibration process. The camera SDK package is the software package provided on the camera manufacturer's official website.
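For illustration only: the patent performs calibration through the camera vendor's SDK and its built-in program, which cannot be shown here, so the sketch below uses OpenCV's chessboard calibration as a stand-in to show what the internal parameters (focal length, distortion) and per-view external parameters amount to in code. The board size, square size and image folder are assumptions.

```python
import glob
import cv2
import numpy as np

# Assumed chessboard: 9x6 inner corners, 25 mm squares, views stored in calib_images/ (hypothetical).
PATTERN = (9, 6)
SQUARE_MM = 25.0

# 3D coordinates of the chessboard corners in the board (world) frame.
objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2) * SQUARE_MM

obj_points, img_points, image_size = [], [], None
for path in glob.glob("calib_images/*.png"):
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, PATTERN)
    if found:
        obj_points.append(objp)
        img_points.append(corners)
        image_size = gray.shape[::-1]  # (width, height)

# Internal parameters: camera matrix (focal length) and distortion coefficients.
# External parameters: one rotation/translation (rvec/tvec) per calibration view.
ret, camera_matrix, dist_coeffs, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, image_size, None, None)
print("intrinsic matrix:\n", camera_matrix)
print("distortion coefficients:", dist_coeffs.ravel())
```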
Step (2): the image set of the designated road signs is trained. The trained road signs are fixed objects; each designated road sign is photographed from all angles and the images collected at every angle are used as the training sample set. A classifier is set up, i.e. a file is created indicating which images belong to which road-sign class, and the designated road signs are trained by the CNN convolutional neural network method to recognize and classify the image set and generate the image recognition model. Concretely: the road-sign image set to be trained is stored in a folder and converted with the Caffe framework into a file format that Caffe can process as input; a neural network structure model is written (mainly comprising an input layer, convolutional layers, pooling layers, fully connected layers and an output layer); the converted image files are trained according to the configured network structure; and finally a trained image model file is generated, against which subsequent road signs are identified.
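The patent trains the recognition model with Caffe (prototxt network definitions plus a dataset-conversion step), which is not reproduced here. As a hedged stand-in, the sketch below builds a small network with the same layer types (input, convolution, pooling, fully connected, output) in PyTorch, with one integer label per road-sign class taken from the folder names. Image size, network depth and dataset layout are assumptions.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

# Assumed layout: landmark_images/<label_name>/*.jpg, one folder per designated road sign.
transform = transforms.Compose([transforms.Resize((64, 64)), transforms.ToTensor()])
train_set = datasets.ImageFolder("landmark_images", transform=transform)
loader = DataLoader(train_set, batch_size=32, shuffle=True)

# Input -> conv -> pool -> conv -> pool -> fully connected -> output,
# mirroring the layer types listed in the description.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 128), nn.ReLU(),
    nn.Linear(128, len(train_set.classes)),   # one output per road-sign label
)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
for epoch in range(10):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()

torch.save(model.state_dict(), "landmark_model.pt")  # model file used for later recognition
```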
When the image recognition model is trained, different labels are set for different road signs, for example label 0 for a table, label 1 for a chair and label 3 for a trash can (for illustration only). After the model has been trained with these labels, the designated road sign can be recognized in subsequent identification, which improves the positioning accuracy of the final absolute position.
Step (3): the AGV is operated. As shown in FIG. 2, when the AGV reaches the designated position range, the visual sensor is started, the surrounding environment is photographed, feature points are extracted from it, and the road sign is recognized by comparison with the image recognition model trained in step (2).
The distances between the AGV and the designated road signs on both sides are then detected and their sum is calculated.
FIG. 4 is a schematic diagram of the distance measurement by the vision sensor. As shown in FIG. 4, after the designated road sign has been accurately identified, the camera acquires depth information by triangulation while the infrared emitter projects infrared light for ranging correction, yielding more accurate distances d1 and d2, whose sum is d = d1 + d2.
A distance detection threshold T is set according to the industrial environment. When d < T, a flag bit set in the software is returned to the laser SLAM system; when the running program detects this flag bit, the system performs step (4). Otherwise the method returns to step (3) and the AGV continues to move and repeats road-sign identification.
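A minimal sketch, assuming a depth camera that returns a per-pixel depth map in metres and a detector that returns the pixel position of each recognized road sign; the averaging window, the threshold value and the idea of returning the flag bit as a boolean are assumptions rather than the patent's interface.

```python
import numpy as np

T_METRES = 2.0  # hypothetical distance detection threshold for this environment

def side_distance(depth_map, sign_pixel):
    """Depth (metres) at the detected road-sign pixel, averaged over a small window."""
    u, v = sign_pixel
    window = depth_map[max(v - 2, 0):v + 3, max(u - 2, 0):u + 3]
    return float(np.nanmean(window))

def check_and_flag(depth_left, pixel_left, depth_right, pixel_right):
    """Return True (the 'flag bit') when d = d1 + d2 falls below the threshold T."""
    d1 = side_distance(depth_left, pixel_left)
    d2 = side_distance(depth_right, pixel_right)
    return (d1 + d2) < T_METRES
```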
Step (4): the information obtained from the vision sensor, comprising road-sign information and distance information, is fed back to the laser SLAM system; the laser SLAM system compares it against the map database to locate the absolute position, clears the accumulated motion error of the sensors attached to the AGV, and achieves the auxiliary positioning of the AGV.
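To make the feedback in step (4) concrete, the sketch below shows one possible wiring, under the assumption that the taught map exposes a region lookup by landmark label and a scan-matching call: the absolute pose obtained near the road sign simply overwrites the drifted dead-reckoning pose. All class and method names are hypothetical, not the patent's interface.

```python
from dataclasses import dataclass

@dataclass
class Pose:
    x: float
    y: float
    theta: float  # heading in radians

class LaserSlamSystem:
    """Hypothetical wrapper around the laser SLAM stack and the taught map database."""

    def __init__(self, map_database):
        self.map_database = map_database
        self.pose = Pose(0.0, 0.0, 0.0)  # accumulated (drift-prone) pose

    def relocalize(self, landmark_labels, d1, d2, laser_scan):
        # Look up where the recognized road signs sit in the pre-taught map.
        region = self.map_database.region_near(landmark_labels)
        # Match the current laser scan against that region of the map to obtain
        # an absolute pose, using the visual distances d1, d2 as a constraint.
        absolute_pose = region.match_scan(laser_scan, lateral_distances=(d1, d2))
        # Overwrite the accumulated pose: this is the "clearing" of sensor error.
        self.pose = absolute_pose
        return absolute_pose
```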

Claims (6)

1. A laser navigation AGV auxiliary positioning method based on visual identification detection is characterized by comprising the following steps:
step (1), arranging visual sensors on two sides of an AGV, and calibrating internal and external parameters of the visual sensors;
step (2), training an image set of the designated road signs to generate an image recognition model;
step (3), running the AGV; when the AGV reaches the designated position range, starting the vision sensor to identify the designated road signs, detecting the distances between the AGV and the designated road signs on both sides, recorded as d1 and d2 respectively, calculating their sum d = d1 + d2, and setting a distance detection threshold T;
when d is less than T, performing the step (4), otherwise, returning to the step (3), and enabling the AGV to continue to move to perform landmark identification;
step (4), feeding back the information acquired from the vision sensor to the laser SLAM system and comparing it against the map database, the map database being an industrial environment map established in advance when the AGV was taught, thereby clearing the accumulated error of the sensors attached to the AGV, relocating the absolute position of the AGV and realizing the auxiliary positioning of the AGV by the system; the information acquired from the vision sensor includes landmark information and distance information.
2. The laser navigation AGV auxiliary positioning method based on visual identification detection according to claim 1, wherein in step (1), the internal and external parameters of the visual sensor are calibrated, the internal parameters comprising the camera focal length and distortion and the external parameters comprising the rotation and translation from the world coordinate system to the camera coordinate system; the vision sensor used is a depth camera, and manual calibration is carried out through the camera SDK package.
3. The laser navigation AGV auxiliary positioning method based on visual identification detection according to claim 1, wherein in step (2), the trained road signs are fixed objects, images of each road sign taken from all angles are collected as the training sample set, and a convolutional neural network method is used to train the designated road signs to generate the image recognition model.
4. The laser navigation AGV assisting positioning method based on visual recognition detection as claimed in claim 3, wherein in step (2), different labels are set for different road signs when training the image recognition model.
5. The laser navigation AGV auxiliary positioning method based on visual identification detection according to claim 1, wherein in step (3), when the AGV reaches the designated position range, the visual sensor is started, the surrounding environment is photographed, feature points in the surrounding environment are extracted, and the road sign is identified by comparison with the image recognition model trained in step (2).
6. The laser navigation AGV auxiliary positioning method based on visual identification detection according to claim 1, wherein in step (3), the AGV is further provided with an infrared emitter, the vision sensor calculates the distances from itself to the designated road signs on both sides by triangulation and calculates their sum, and the infrared emitter projects infrared light for ranging correction, so that more accurate distances d1 and d2 are obtained.
CN201910844981.0A 2019-09-07 2019-09-07 Laser navigation AGV auxiliary positioning method based on visual identification detection Active CN112540382B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910844981.0A CN112540382B (en) 2019-09-07 2019-09-07 Laser navigation AGV auxiliary positioning method based on visual identification detection

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910844981.0A CN112540382B (en) 2019-09-07 2019-09-07 Laser navigation AGV auxiliary positioning method based on visual identification detection

Publications (2)

Publication Number Publication Date
CN112540382A (en) 2021-03-23
CN112540382B CN112540382B (en) 2024-02-13

Family

ID=75012170

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910844981.0A Active CN112540382B (en) 2019-09-07 2019-09-07 Laser navigation AGV auxiliary positioning method based on visual identification detection

Country Status (1)

Country Link
CN (1) CN112540382B (en)


Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040168148A1 (en) * 2002-12-17 2004-08-26 Goncalves Luis Filipe Domingues Systems and methods for landmark generation for visual simultaneous localization and mapping
CN103292804A (en) * 2013-05-27 2013-09-11 浙江大学 Monocular natural vision landmark assisted mobile robot positioning method
CN104848851A (en) * 2015-05-29 2015-08-19 山东鲁能智能技术有限公司 Transformer substation patrol robot based on multi-sensor data fusion picture composition and method thereof
CN107967473A (en) * 2016-10-20 2018-04-27 南京万云信息技术有限公司 Based on picture and text identification and semantic robot autonomous localization and navigation
CN107422735A (en) * 2017-07-29 2017-12-01 深圳力子机器人有限公司 A kind of trackless navigation AGV laser and visual signature hybrid navigation method
CN109099901A (en) * 2018-06-26 2018-12-28 苏州路特工智能科技有限公司 Full-automatic road roller localization method based on multisource data fusion
CN109752725A (en) * 2019-01-14 2019-05-14 天合光能股份有限公司 A kind of low speed business machine people, positioning navigation method and Position Fixing Navigation System
CN109872372A (en) * 2019-03-07 2019-06-11 山东大学 A kind of small-sized quadruped robot overall Vision localization method and system
CN110147095A (en) * 2019-03-15 2019-08-20 广东工业大学 Robot method for relocating based on mark information and Fusion

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
Zhu Kai; Liu Huafeng; Xia Qingyuan: "A Survey of Monocular Vision-Based Simultaneous Localization and Mapping Algorithms", Application Research of Computers, no. 01 *
Wang Yongli; Hu Xuxiao; Lan Guoqing; Cheng Yonghong: "Research on Localization and Navigation Methods Based on Visual SLAM and Artificial Marker Codes", Group Technology & Production Modernization, vol. 35, no. 04 *
Wang Longhui; Yang Guang; Yin Fang; Chou Wusheng: "3D Visual Simultaneous Localization and Mapping Based on Kinect 2.0", Chinese Journal of Stereology and Image Analysis, no. 03 *
Xu Junyong; Wang Jingchuan; Chen Weidong: "Simultaneous Localization and Mapping for Mobile Robots Based on Panoramic Vision", Robot, no. 04 *
Luo Yanyan; Chen Long: "Laser Localization and Mapping Fused with Visual Information", Industrial Control Computer, no. 12 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113447950A (en) * 2021-06-30 2021-09-28 湖南牛顺科技有限公司 AGV positioning navigation system and method
CN114252013A (en) * 2021-12-22 2022-03-29 深圳市天昕朗科技有限公司 AGV visual identification accurate positioning system based on wired communication mode
CN114252013B (en) * 2021-12-22 2024-03-22 深圳市天昕朗科技有限公司 AGV visual identification accurate positioning system based on wired communication mode

Also Published As

Publication number Publication date
CN112540382B (en) 2024-02-13

Similar Documents

Publication Publication Date Title
CN108764187B (en) Method, device, equipment, storage medium and acquisition entity for extracting lane line
CN111340797B (en) Laser radar and binocular camera data fusion detection method and system
CN108229366B (en) Deep learning vehicle-mounted obstacle detection method based on radar and image data fusion
CN111462135B (en) Semantic mapping method based on visual SLAM and two-dimensional semantic segmentation
WO2022156175A1 (en) Detection method, system, and device based on fusion of image and point cloud information, and storage medium
CN108388641B (en) Traffic facility map generation method and system based on deep learning
CN111060924B (en) SLAM and target tracking method
CN109186606B (en) Robot composition and navigation method based on SLAM and image information
CN110738121A (en) front vehicle detection method and detection system
CN110197106A (en) Object designation system and method
CN113936198A (en) Low-beam laser radar and camera fusion method, storage medium and device
CN112540382B (en) Laser navigation AGV auxiliary positioning method based on visual identification detection
CN108106617A (en) A kind of unmanned plane automatic obstacle-avoiding method
CN113516664A (en) Visual SLAM method based on semantic segmentation dynamic points
CN116518984B (en) Vehicle road co-location system and method for underground coal mine auxiliary transportation robot
CN116050277A (en) Underground coal mine scene reality capturing sensing and simulating method and equipment
CN115439621A (en) Three-dimensional map reconstruction and target detection method for coal mine underground inspection robot
US20220292747A1 (en) Method and system for performing gtl with advanced sensor data and camera image
CN111145203B (en) Lane line extraction method and device
Chang et al. Versatile multi-lidar accurate self-calibration system based on pose graph optimization
CN114419259A (en) Visual positioning method and system based on physical model imaging simulation
CN114370871A (en) Close coupling optimization method for visible light positioning and laser radar inertial odometer
CN114359861A (en) Intelligent vehicle obstacle recognition deep learning method based on vision and laser radar
Dong et al. Semantic Lidar Odometry and Mapping for Mobile Robots Using RangeNet++
Yin et al. Added the odometry optimized SLAM loop closure detection system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant