CN112540382B - Laser navigation AGV auxiliary positioning method based on visual identification detection - Google Patents


Info

Publication number
CN112540382B
CN112540382B (application CN201910844981.0A)
Authority
CN
China
Prior art keywords
agv
visual
specified
image
laser
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910844981.0A
Other languages
Chinese (zh)
Other versions
CN112540382A (en)
Inventor
周军
罗川
吴迪
皇攀凌
陈庆伟
李建强
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shandong University
Shandong Alesmart Intelligent Technology Co Ltd
Original Assignee
Shandong University
Shandong Alesmart Intelligent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shandong University and Shandong Alesmart Intelligent Technology Co Ltd
Priority to CN201910844981.0A
Publication of CN112540382A
Application granted
Publication of CN112540382B
Legal status: Active
Anticipated expiration

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88Lidar systems specially adapted for specific applications
    • G01S17/93Lidar systems specially adapted for specific applications for anti-collision purposes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/25Determination of region of interest [ROI] or a volume of interest [VOI]

Abstract

A laser navigation AGV auxiliary positioning method based on visual identification detection comprises the steps of: training an image set of a specified road sign to form an image recognition model; identifying the specified road signs through a visual sensor within a specified position range, detecting the distances between the AGV and the specified road signs on both sides, and calculating their sum; and feeding the information obtained from the vision sensor back to the laser SLAM system, thereby locating the absolute position of the AGV, correcting the accumulated error of laser SLAM navigation, improving positioning accuracy, and improving working efficiency.

Description

Laser navigation AGV auxiliary positioning method based on visual identification detection
Technical Field
The invention relates to a laser navigation AGV auxiliary positioning method based on visual identification detection, and belongs to the technical field of visual identification detection.
Background
In industrial applications of indoor mobile-robot SLAM autonomous navigation, existing SLAM systems mostly use a single 2D laser radar sensor: a laser emitter sends pulses within a two-dimensional plane, depth information about the surrounding environment is recovered from the time of flight, and the result is compared against the original map database to determine the robot's position. The drawback of this method is that the amount of detected information is small: positioning is highly uncertain in scenes with similar features, contour mismatching occurs easily, and the method is difficult to apply in industrial settings with a high environment repetition rate.
Chinese patent document CN109752725A discloses a low-speed commercial robot together with a positioning and navigation method and system. Its SLAM uses 2D laser positioning and mapping and achieves fairly high accuracy, but because 2D laser collects little information, detection of scenes with similar texture information is insufficient. To remedy this defect, when positioning accuracy in similar scenes is poor, a vision sensor is employed to assist by extracting rich texture information to recognize objects, improving the accuracy of repeated 2D laser positioning.
Disclosure of Invention
Aiming at the defects of the prior art, the invention provides a laser navigation AGV auxiliary positioning method based on visual identification detection, which can detect the positional relationship between an industrial robot and a landmark object, thereby correcting the accumulated error of laser SLAM navigation, improving positioning accuracy, and improving working efficiency.
The technical scheme of the invention is as follows:
a laser navigation AGV auxiliary positioning method based on visual identification detection comprises the following steps:
setting visual sensors on two sides of the AGV, and calibrating internal and external parameters of the visual sensors;
training an image set of a specified road sign to generate an image recognition model;
step (3), operating the AGV; when the AGV reaches the specified position range, starting the visual sensors to identify the specified road signs; after a road sign is recognized, detecting the distances between the AGV and the specified road signs on both sides, denoted d1 and d2, and calculating their sum: d = d1 + d2; a distance detection threshold T is set according to the industrial environment;
when d is less than T, proceeding to step (4); otherwise returning to step (3), with the AGV continuing to move and performing road-sign recognition;
step (4), feeding the information obtained from the visual sensor back to the laser SLAM system and comparing it with the map database. The map database is the industrial environment map established in advance by the AGV during teaching. Once this map has been built, the method clears the accumulated errors of the sensors attached to the AGV concerning its running state: the attached sensors (odometer, gyroscope, laser radar, etc.) are limited in precision, so accumulated error is unavoidable and grows during long-term movement. "Clearing" means relocalizing the absolute position of the AGV: when the AGV arrives near a specified road sign, the laser SLAM system scans the surrounding information and outputs the absolute pose (position and rotation) of the AGV, instead of continuing to use the pose accumulated over long-term running of the various sensors. The accumulated sensor error is thereby eliminated indirectly, subsequent operations proceed from this pose, and auxiliary positioning of the AGV is achieved; the information acquired from the vision sensor includes landmark information and distance information.
Preferably, in step (1), the internal and external parameters of the vision sensor are calibrated; the internal parameters include the focal length and distortion of the camera, and the external parameters include the rotation and translation from the world coordinate system to the camera coordinate system. The vision sensor used is a depth camera, and manual calibration is carried out through the camera SDK package, i.e. the software package provided on the camera manufacturer's official website.
Preferably, in step (2), the trained road signs are fixed objects; images of each road sign are collected from all angles and used as the training sample set, and the specified road signs are trained by a CNN (convolutional neural network) method to generate an image recognition model as the basis for subsequent comparison.
Further preferably, in step (2), different labels are set for different road signs when training the image recognition model, so that in subsequent recognition each specified road sign can be distinguished, improving the accuracy of the final absolute-position fix.
Preferably, in step (3), when the AGV reaches the specified position range, the visual sensors are turned on, the surrounding environment is photographed, feature points are extracted, and the road signs are identified by comparison with the image recognition model trained in step (2).
Preferably, in step (3), an infrared emitter is further arranged on the AGV; the visual sensors calculate the distances from themselves to the specified road signs on the two sides by triangulation and compute their sum, while the infrared emitter projects an infrared ranging beam to correct the result, yielding higher-precision distances d1 and d2.
The invention identifies specific objects by vision during travel and transmits the recognition information to the laser navigation system, improving the positioning accuracy of navigation. Positioning is achieved directly through visual detection rather than visual measurement, and mapping is not performed by vision: visual recognition is an auxiliary means that, in similar scenes, compensates for the low accuracy of laser detection. The recognition information is transmitted to the system, and final positioning and mapping are achieved by laser.
The invention has the beneficial effects that:
according to the laser navigation AGV auxiliary positioning method based on visual identification and detection, firstly, a camera SDK package is adopted to manually calibrate internal and external parameters of a camera, an image identification model is generated through training, specified road signs are identified through a visual sensor in a specified position range, the distance between the specified road signs and two sides of the specified road signs is detected, and information acquired by the visual sensor is fed back to a laser SLAM system. Based on the information obtained by vision, the laser SLAM system scans surrounding environment information, and compares the image recognition model to realize the positioning of the absolute position, so that the correction of the accumulated error of the laser SLAM navigation is realized, the positioning precision is improved, and the working efficiency is improved. The road sign object is identified through vision, the distance between the AGV and the road sign is detected, and the AGV is assisted to be positioned by the information transmission system, so that the positioning accuracy is improved.
Drawings
FIG. 1 is a flow chart of a laser navigation AGV assisted positioning method based on visual identification detection;
FIG. 2 is a schematic diagram of laser guided AGV assisted positioning based on visual identification detection;
FIG. 3 is a diagram of a calibration chessboard of the internal and external parameters of the visual sensor;
FIG. 4 is a schematic diagram of a vision sensor ranging;
1: road sign; 2: laser radar; 3: depth camera; 4: AGV; 5: image sensor; 6: infrared emitter; 7: depth map.
Detailed Description
The invention will now be further illustrated by way of example, but not by way of limitation, with reference to the accompanying drawings.
Example 1:
a laser navigation AGV auxiliary positioning method based on visual identification detection comprises the following steps as shown in FIG. 1:
Step (1), setting visual sensors on both sides of the AGV and calibrating their internal and external parameters. The internal parameters include the camera focal length and distortion; the external parameters include the rotation and translation from the world coordinate system to the camera coordinate system. The vision sensor used is a depth camera. FIG. 3 shows the chessboard pattern used to calibrate the internal and external camera parameters. With the camera started and the whole calibration chessboard of FIG. 3 within the camera's field of view, manual calibration is performed through the camera SDK package: the SDK's built-in calibration program is opened, and the output internal and external parameters are saved and written into the camera configuration file, completing the calibration process. The camera SDK is the software package provided on the camera manufacturer's official website.
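By way of illustration only (this sketch is not part of the patent disclosure), the role of the calibrated parameters can be shown with a minimal pinhole-projection model in Python. All numeric values below (rotation, translation, focal lengths, principal point) are hypothetical, not calibration results:

```python
# Minimal pinhole-camera sketch: how intrinsic parameters (focal length,
# distortion) and extrinsic parameters (rotation R, translation t) map a
# world point to a pixel. All numeric values are hypothetical examples.

def project_point(pw, R, t, fx, fy, cx, cy, k1=0.0):
    """Project world point pw through extrinsics (R, t) and intrinsics."""
    # World -> camera coordinates: pc = R * pw + t
    pc = [sum(R[i][j] * pw[j] for j in range(3)) + t[i] for i in range(3)]
    x, y = pc[0] / pc[2], pc[1] / pc[2]        # normalized image plane
    r2 = x * x + y * y
    d = 1.0 + k1 * r2                          # simple radial distortion term
    u = fx * x * d + cx                        # pixel coordinates
    v = fy * y * d + cy
    return u, v

# Identity rotation; camera placed 2 m from the point along the optical axis
R = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
t = [0.0, 0.0, 2.0]
u, v = project_point([0.5, 0.0, 0.0], R, t, fx=600, fy=600, cx=320, cy=240)
print(u, v)  # a point 0.5 m to the right lands right of the image center
```

Calibration estimates exactly these quantities (fx, fy, cx, cy, distortion, R, t) from chessboard views; here they are simply assumed.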
Step (2), training an image set of the specified road signs. The trained road signs are fixed objects. Each specified road sign is photographed from every angle, and these omnidirectional images are collected as the training sample set. A classifier is set up, i.e., a file is created that states which images belong to which road-sign class. The specified road signs are then trained by a CNN convolutional neural network method, which recognizes and classifies the image set and generates an image recognition model. Concretely: the road-sign image set to be trained is stored in a folder; using the Caffe software it is converted into a file format that Caffe can process and used as input; a neural network structure model is written (mainly comprising an input layer, convolution layers, pooling layers, fully connected layers, and an output layer); the converted image files are trained according to the configured network structure; and a trained image model file is finally generated, against which subsequent road signs can be identified.
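The layer types named above (input, convolution, pooling, fully connected, output) can be illustrated with a toy pure-Python forward pass. This is only a sketch of the computation each layer performs; the patent's actual model is trained in Caffe, and the 4x4 "image", kernel, and output weights here are hand-set hypothetical values, not learned parameters:

```python
# Toy forward pass through the layer types the patent names:
# input -> convolution -> pooling -> fully connected -> output.
# All weights are hypothetical; a real model learns them during training.

def conv2d_valid(img, k):
    """2D valid convolution (cross-correlation, as in CNN frameworks)."""
    n, m, kn, km = len(img), len(img[0]), len(k), len(k[0])
    return [[sum(img[i + a][j + b] * k[a][b]
                 for a in range(kn) for b in range(km))
             for j in range(m - km + 1)]
            for i in range(n - kn + 1)]

def relu(x):
    return [[max(0.0, v) for v in row] for row in x]

def max_pool_2x2(x):
    return [[max(x[i][j], x[i][j + 1], x[i + 1][j], x[i + 1][j + 1])
             for j in range(0, len(x[0]) - 1, 2)]
            for i in range(0, len(x) - 1, 2)]

# 4x4 input with a vertical edge; 3x3 edge-detecting kernel
img = [[0, 0, 1, 1],
       [0, 0, 1, 1],
       [0, 0, 1, 1],
       [0, 0, 1, 1]]
kernel = [[-1, 0, 1]] * 3
feat = max_pool_2x2(relu(conv2d_valid(img, kernel)))   # 2x2 -> 1x1 feature map

# "Fully connected" output layer: one score per road-sign label
flat = [v for row in feat for v in row]
weights = {0: [0.5], 1: [1.0]}       # label -> weight vector (hypothetical)
scores = {lab: sum(w * f for w, f in zip(ws, flat))
          for lab, ws in weights.items()}
pred = max(scores, key=scores.get)   # predicted road-sign label
print(pred)
```

The structure mirrors the described pipeline in miniature: the convolution extracts a feature, pooling condenses it, and the fully connected layer turns it into per-label scores, the largest of which is the recognized class.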
When training the image recognition model, different labels are set for different road signs, e.g., a table labeled 0, a chair labeled 1, and a garbage can labeled 3 (by way of example only). After the model is trained with these labels, the specified road sign can be identified in subsequent recognition, improving the accuracy of the final absolute-position fix.
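A minimal sketch of such a label mapping, using the example labels from the text (the decoding helper and its fallback string are illustrative assumptions, not part of the patent):

```python
# Hypothetical label map: each specified road-sign class gets an integer
# label, as described in the text (example labels only).
LABELS = {0: "table", 1: "chair", 3: "garbage can"}

def decode(label):
    """Map a predicted integer label back to the road-sign name."""
    return LABELS.get(label, "unknown landmark")

print(decode(1))   # a prediction of 1 means the chair landmark was seen
print(decode(7))   # labels outside the map are treated as unknown
```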
Step (3), operating the AGV. As shown in FIG. 2, when the AGV reaches the specified position range, the visual sensors are turned on, the surrounding environment is photographed, feature points are extracted, and the road signs are recognized by comparison with the image recognition model trained in step (2).
The distances between the AGV and the specified road signs on the two sides are then detected, and their sum is calculated:
FIG. 4 is a schematic diagram of vision-sensor ranging. As shown in FIG. 4, after a specified road sign is accurately identified, the camera acquires depth information by triangulation, and the infrared emitter projects an infrared ranging beam for correction, yielding higher-precision distances d1 and d2, whose sum is calculated: d = d1 + d2.
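The ranging step can be sketched as follows. The triangulation formula is the standard one for stereo/structured-light depth cameras; the weighted fusion of the infrared range with the triangulated depth is an assumption of this sketch (the patent only states that the infrared measurement "corrects" the distance), and every number is hypothetical:

```python
# Sketch of the ranging step: triangulation gives a coarse depth, the
# infrared measurement corrects it, and the two corrected side distances
# are summed and compared against the threshold T. Values hypothetical.

def triangulate_depth(f_px, baseline_m, disparity_px):
    """Classic triangulation for a depth camera: Z = f * B / disparity."""
    return f_px * baseline_m / disparity_px

def fuse(depth_tri, depth_ir, w_ir=0.8):
    """Assumed correction scheme: weight the IR range against triangulation."""
    return w_ir * depth_ir + (1.0 - w_ir) * depth_tri

d1 = fuse(triangulate_depth(600, 0.1, 40), depth_ir=1.48)  # left road sign
d2 = fuse(triangulate_depth(600, 0.1, 48), depth_ir=1.22)  # right road sign
d = d1 + d2                 # sum of the two corrected side distances
T = 3.0                     # detection threshold chosen for this environment
print(round(d, 3), d < T)   # d < T triggers the feedback to the SLAM system
```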
A distance detection threshold T is set according to the industrial environment. When d is smaller than T, a flag bit is returned to the laser SLAM system; this flag bit is set in software, and when the running program detects it, the system proceeds to step (4). Otherwise the method returns to step (3), and the AGV continues moving and performing road-sign recognition.
Step (4): according to the information obtained from the vision sensor, comprising road-sign information and distance information, the laser SLAM system compares the map database against the information fed back by the vision sensor to locate the absolute position; the accumulated motion error of the sensors attached to the AGV is eliminated, and auxiliary positioning of the AGV is achieved.
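The relocalization idea in step (4) can be sketched geometrically. Assuming the corridor arrangement suggested by the figures, with a recognized road sign on each side at known map coordinates (known from teaching), the measured distances d1 and d2 pin down the AGV's absolute position, which then simply replaces the drift-accumulated dead-reckoning estimate. The one-dimensional split of the segment below is a deliberate simplification, and all coordinates and distances are hypothetical:

```python
# Sketch of relocalization: the absolute position computed from recognized
# road signs overwrites the drifted dead-reckoning pose, clearing the
# accumulated sensor error. Landmark coordinates and distances hypothetical.

def relocalize(lm_left, lm_right, d1, d2):
    """Place the AGV on the segment between two facing road signs.

    Simplifying assumption: the AGV lies between the two landmarks, so its
    position splits the segment in the ratio d1 : d2.
    """
    (x1, y1), (x2, y2) = lm_left, lm_right
    s = d1 / (d1 + d2)
    return (x1 + s * (x2 - x1), y1 + s * (y2 - y1))

dead_reckoned = (2.37, 0.11)   # drifted odometry/gyro estimate (hypothetical)
absolute = relocalize((0.0, 0.0), (4.0, 0.0), d1=1.5, d2=2.5)
pose = absolute                # overwrite the drifted pose with the absolute fix
print(dead_reckoned, "->", pose)
```

Subsequent navigation proceeds from `pose`, which is how the accumulated error of the odometer, gyroscope, and laser radar is indirectly eliminated.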

Claims (4)

1. A laser navigation AGV auxiliary positioning method based on visual identification detection is characterized by comprising the following steps:
setting visual sensors on two sides of the AGV, and calibrating internal and external parameters of the visual sensors;
training an image set of a specified road sign to generate an image recognition model;
step (3), operating the AGV, when the AGV reaches the specified position range, starting a visual sensor to identify the specified road signs, detecting the distance between the AGV and the specified road signs on two sides, respectively marking as d1 and d2, and calculating the sum of the distances of the two: d=d1+d2, a distance detection threshold T is set;
when d is less than T, carrying out the step (4), otherwise returning to the step (3), and continuing the movement of the AGV to carry out the road sign recognition;
step (4), feeding back to a laser SLAM system according to information obtained from a visual sensor, comparing a map database, wherein the map database refers to an industrial environment map established in advance by an AGV during teaching, clearing accumulated errors of sensors attached to the AGV, realizing repositioning of an absolute position of the AGV, and realizing AGV auxiliary positioning of the system; the information acquired from the vision sensor includes road sign information and distance information;
in the step (1), calibrating internal and external parameters of the vision sensor, wherein the internal parameters comprise camera focal length and distortion, and the external parameters comprise rotation and translation from a world coordinate system to a camera coordinate system; the adopted vision sensor is a depth camera, and manual calibration is carried out through a camera SDK package;
in the step (3), an infrared emitter is further arranged on the AGV, the visual sensor calculates the distances from the visual sensor to the appointed road signs on the two sides respectively through triangulation, the sum of the distances is calculated, the infrared emitter projects infrared ranging to correct, and distances d1 and d2 with higher precision are obtained.
2. The laser navigation AGV auxiliary positioning method according to claim 1, wherein in the step (2), the trained landmark is a fixed object, the image of the landmark in all-dimensional angle is collected, the image is used as a training sample set, and the specified landmark is trained by a convolutional neural network method, so as to generate the image recognition model.
3. The method of claim 1, wherein in the step (2), different labels are set for different road signs when training the image recognition model.
4. The laser navigation AGV auxiliary positioning method based on visual identification and detection according to claim 1, wherein in the step (3), when the AGV reaches the specified position range, a visual sensor is turned on, the surrounding environment is photographed, feature points in the surrounding environment are extracted, and the road mark is identified by comparing with the image identification model trained in the step (2).
CN201910844981.0A 2019-09-07 2019-09-07 Laser navigation AGV auxiliary positioning method based on visual identification detection Active CN112540382B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910844981.0A CN112540382B (en) 2019-09-07 2019-09-07 Laser navigation AGV auxiliary positioning method based on visual identification detection


Publications (2)

Publication Number Publication Date
CN112540382A CN112540382A (en) 2021-03-23
CN112540382B (en) 2024-02-13

Family

ID=75012170

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910844981.0A Active CN112540382B (en) 2019-09-07 2019-09-07 Laser navigation AGV auxiliary positioning method based on visual identification detection

Country Status (1)

Country Link
CN (1) CN112540382B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113447950A (en) * 2021-06-30 2021-09-28 湖南牛顺科技有限公司 AGV positioning navigation system and method
CN114252013B (en) * 2021-12-22 2024-03-22 深圳市天昕朗科技有限公司 AGV visual identification accurate positioning system based on wired communication mode

Citations (8)

Publication number Priority date Publication date Assignee Title
CN103292804A (en) * 2013-05-27 2013-09-11 浙江大学 Monocular natural vision landmark assisted mobile robot positioning method
CN104848851A (en) * 2015-05-29 2015-08-19 山东鲁能智能技术有限公司 Transformer substation patrol robot based on multi-sensor data fusion picture composition and method thereof
CN107422735A (en) * 2017-07-29 2017-12-01 深圳力子机器人有限公司 A kind of trackless navigation AGV laser and visual signature hybrid navigation method
CN107967473A (en) * 2016-10-20 2018-04-27 南京万云信息技术有限公司 Based on picture and text identification and semantic robot autonomous localization and navigation
CN109099901A (en) * 2018-06-26 2018-12-28 苏州路特工智能科技有限公司 Full-automatic road roller localization method based on multisource data fusion
CN109752725A (en) * 2019-01-14 2019-05-14 天合光能股份有限公司 A kind of low speed business machine people, positioning navigation method and Position Fixing Navigation System
CN109872372A (en) * 2019-03-07 2019-06-11 山东大学 A kind of small-sized quadruped robot overall Vision localization method and system
CN110147095A (en) * 2019-03-15 2019-08-20 广东工业大学 Robot method for relocating based on mark information and Fusion

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
AU2003300959A1 (en) * 2002-12-17 2004-07-22 Evolution Robotics, Inc. Systems and methods for visual simultaneous localization and mapping


Non-Patent Citations (9)

Title
王龙辉, 杨光, 尹芳, 丑武胜. 3D visual simultaneous localization and mapping based on Kinect 2.0. 中国体视学与图像分析 (Chinese Journal of Stereology and Image Analysis), 2017, No. 03. *
许俊勇, 王景川, 陈卫东. Research on simultaneous localization and mapping for mobile robots based on panoramic vision. 机器人 (Robot), 2008, No. 04. *
朱凯, 刘华峰, 夏青元. A survey of simultaneous localization and mapping algorithms based on monocular vision. 计算机应用研究 (Application Research of Computers), 2017, No. 01. *
王永力, 胡旭晓, 兰国清, 承永宏. Research on localization and navigation methods based on visual SLAM and artificial marker codes. 成组技术与生产现代化, Vol. 35, No. 04. *
骆燕燕, 陈龙. Laser localization and mapping fused with visual information. 工业控制计算机 (Industrial Control Computer), 2017, No. 12. *

Also Published As

Publication number Publication date
CN112540382A (en) 2021-03-23

Similar Documents

Publication Publication Date Title
WO2022156175A1 (en) Detection method, system, and device based on fusion of image and point cloud information, and storage medium
CN111340797B (en) Laser radar and binocular camera data fusion detection method and system
CN109685066B (en) Mine target detection and identification method based on deep convolutional neural network
CN108764187B (en) Method, device, equipment, storage medium and acquisition entity for extracting lane line
CN108229366B (en) Deep learning vehicle-mounted obstacle detection method based on radar and image data fusion
CN108388641B (en) Traffic facility map generation method and system based on deep learning
CN108226938B (en) AGV trolley positioning system and method
KR102420476B1 (en) Apparatus and method for estimating location of vehicle and computer recordable medium storing computer program thereof
CN106767399A (en) The non-contact measurement method of the logistics measurement of cargo found range based on binocular stereo vision and dot laser
CN110031829B (en) Target accurate distance measurement method based on monocular vision
CN110738121A (en) front vehicle detection method and detection system
US11841434B2 (en) Annotation cross-labeling for autonomous control systems
CN109186606A (en) A kind of robot composition and air navigation aid based on SLAM and image information
CN106908064B (en) Indoor night vision navigation method based on Kinect2 sensor
CN104880160B (en) Two-dimensional-laser real-time detection method of workpiece surface profile
Momeni-k et al. Height estimation from a single camera view
CN112540382B (en) Laser navigation AGV auxiliary positioning method based on visual identification detection
CN111060924A (en) SLAM and target tracking method
CN110197106A (en) Object designation system and method
CN112861748B (en) Traffic light detection system and method in automatic driving
CN108106617A (en) A kind of unmanned plane automatic obstacle-avoiding method
CN112101160B (en) Binocular semantic SLAM method for automatic driving scene
CN113936198A (en) Low-beam laser radar and camera fusion method, storage medium and device
CN116518984B (en) Vehicle road co-location system and method for underground coal mine auxiliary transportation robot
CN114413958A (en) Monocular vision distance and speed measurement method of unmanned logistics vehicle

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant