CN116339326A - Autonomous charging positioning method and system based on stereoscopic camera - Google Patents

Autonomous charging positioning method and system based on stereoscopic camera

Info

Publication number
CN116339326A
CN116339326A
Authority
CN
China
Prior art keywords
positioning
dimensional code
stereo camera
pose
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310231506.2A
Other languages
Chinese (zh)
Inventor
Gao Xiaofeng (高晓峰)
Chen Jian (陈剑)
Su Shiwei (苏士伟)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jiangsu Tianze Robot Technology Co ltd
Original Assignee
Jiangsu Tianze Robot Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jiangsu Tianze Robot Technology Co ltd filed Critical Jiangsu Tianze Robot Technology Co ltd
Priority to CN202310231506.2A priority Critical patent/CN116339326A/en
Publication of CN116339326A publication Critical patent/CN116339326A/en
Pending legal-status Critical Current

Classifications

    • H ELECTRICITY
    • H02 GENERATION; CONVERSION OR DISTRIBUTION OF ELECTRIC POWER
    • H02J CIRCUIT ARRANGEMENTS OR SYSTEMS FOR SUPPLYING OR DISTRIBUTING ELECTRIC POWER; SYSTEMS FOR STORING ELECTRIC ENERGY
    • H02J7/00 Circuit arrangements for charging or depolarising batteries or for supplying loads from batteries
    • H02J7/0042 Circuit arrangements for charging or depolarising batteries or for supplying loads from batteries characterised by the mechanical construction
    • H02J7/0045 Circuit arrangements for charging or depolarising batteries or for supplying loads from batteries characterised by the mechanical construction concerning the insertion or the connection of the batteries
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02 Control of position or course in two dimensions
    • G05D1/021 Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0212 Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
    • G05D1/0219 Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory ensuring the processing of the whole working surface
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 Road transport of goods or passengers
    • Y02T10/60 Other road transportation technologies with climate change mitigation effect
    • Y02T10/70 Energy storage systems for electromobility, e.g. batteries

Landscapes

  • Engineering & Computer Science (AREA)
  • Power Engineering (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The invention provides an autonomous charging positioning method based on a stereo camera, in which positioning is performed with the stereo camera in the following specific steps: S1, data acquisition: a depth image and an RGB image are captured by the stereo camera, and the two images are calibrated and aligned; S2, the positioning identification mark is recognized, its depth information is extracted, and the depth information is converted into a point cloud; S3, the number of points in the cloud is compared with a threshold, and the target pose is output for autonomous charging positioning. Because positioning is based on a stereo camera, both a 3D point cloud and a 2D image can be acquired from one sensor; the cost of a stereo camera is lower than that of a lidar, and its stability and precision are better than those of ultrasonic, infrared and monocular-camera schemes.

Description

Autonomous charging positioning method and system based on stereoscopic camera
Technical Field
The invention belongs to the technical field of autonomous charging positioning, and particularly relates to an autonomous charging positioning method and system based on a stereoscopic camera.
Background
With rising labor costs and growing demand for intelligent technology, fully autonomous cleaning robots are increasingly deployed in commercial scenes, such as indoor and outdoor scenes of parks, shopping malls and smart factories, greatly reducing expenditure on cleaning labor. The storage battery is the power source that keeps the cleaning robot working, but its capacity is limited, since electric energy cannot be stored in large amounts on board. When the battery is about to be exhausted, the robot must end its current task, autonomously search for the charging pile, and then dock for charging. An accurate and efficient autonomous charging technique ensures that the robot can work continuously and autonomously over long periods, so autonomous charging is a key technology of an intelligent robot system.
At present, autonomous charging technology mainly comprises several schemes: infrared docking, ultrasonic ranging, lidar and monocular cameras.
For example, Chinese patent publication No. CN103997082A discloses a device comprising a charging base and a mating base fixed on the mobile robot. The charging base carries left and right infrared emitters that transmit different infrared docking signals, so that three signal areas are formed in front of the charging base; left and right groups of infrared receivers are arranged on the mating base. The robot determines its signal area through the two groups of receivers and, by combining the difference in signal strength between the docking signals received from the two emitters, locates the exact position of the charging base, thereby docking with it automatically. The device and method are particularly suitable for mobile robots in indoor environments, and the plug-in module design adapts them to different battery types and robot sizes. In use, however, this scheme is based on infrared docking, and the characteristics of infrared light give it poor precision, strong sensitivity to the environment, and low recharging efficiency.
Other existing autonomous charging techniques have further disadvantages. Schemes based on ultrasonic ranging inherit the poor directivity and environmental sensitivity of ultrasonic waves. Lidar-based schemes require a special identification structure to be manufactured on the charging pile, the lidar itself is expensive, and the recognition algorithm is complex. Monocular-camera schemes can only recover 2.5D position information algorithmically from a planar camera, so complete 3D position information cannot be acquired, and the stability and precision of target position information recognized at longer distances are poor.
Disclosure of Invention
The invention aims to solve the above problems by providing an autonomous charging positioning method based on a stereo camera; the invention further provides an autonomous charging positioning system based on the stereo camera, which acquires both a 2D image and a 3D point cloud through the stereo camera for autonomous charging positioning, reducing cost while retaining high stability and precision.
In order to achieve the above purpose, the present invention adopts the following technical scheme:
An autonomous charging positioning method based on a stereo camera adopts the stereo camera for positioning and specifically comprises the following steps:
s1, acquiring data, acquiring a depth image and an RGB image through a stereo camera, and calibrating and aligning the two images;
s2, identifying a positioning identification mark, extracting depth information of the positioning identification mark, and converting the depth information into point cloud;
and S3, comparing the number of points in the cloud with a threshold, and outputting the target pose for autonomous charging positioning.
According to the invention, a stereo camera is used for positioning. When the target is far away, the 3D point cloud is used for target positioning: plane fitting with a RANSAC loop (least-squares optimization over the consensus set) reduces the influence of outliers on the overall result, so the positioning is more stable, and during this stage the camera plane is adjusted to be as parallel to the target plane as possible in preparation for the subsequent 2D image positioning. When the target is close, in particular inside the point cloud blind zone, the 2D image is used for target positioning: with the camera plane held as parallel to the target plane as possible, 2.5D position information is acquired through two-dimensional code recognition, keeping it as accurate and stable as possible.
Further, step S2 includes:
The two-dimensional code in the RGB image is identified, the code's outer frame is extracted, the depth information within the circumscribed circle of the frame is extracted, and that depth information is converted into a point cloud.
Further, when recognizing the positioning identification mark, if no two-dimensional code is found in the RGB image, the pose of the robot is adjusted, the data are reacquired, and the search continues until the code is found. Because the RGB image of the stereo camera is aligned with the depth image, the position of the identification mark in the RGB image corresponds directly to a position in the depth image, from which the point cloud data of the mark can be computed; whenever the code is not recognized, the robot is instructed to adjust its attitude and look for it.
Further, step S3 includes:
s301, comparing the number of the point clouds with a threshold value, if the number of the point clouds is larger than the threshold value, calculating a space plane corresponding to the point clouds by a RANSAC algorithm, and calculating a normal vector of the space plane;
s302, calculating a yaw angle and a pitch angle between the space plane normal vector and an X axis of a camera coordinate system;
s303, calculating the pose by combining the angle and the position, and outputting the target pose.
Further, calculating the pose specifically comprises: computing the barycentric coordinates of the point cloud, taking the barycenter as the three-dimensional coordinates of the target, and obtaining the target pose from the three-dimensional coordinates together with the target's rotation angles.
Further, if the number of points is smaller than the threshold, the target pose is calculated directly by recognizing the two-dimensional code. When too few points are extracted, noise has a proportionally larger and less stable effect than it does on a large sample, so the target pose is taken directly from the ArUco code.
Further, when the robot approaches the target at close range or within the blind-zone range, the target pose is obtained by recognizing the two-dimensional code; the pose calculated from the ArUco code is stable at short distance.
Further, the two-dimensional code is an ArUco code: starting from the center of the code recognized in the (aligned) depth image, the region is expanded outward into a circular area of radius R, and the depth information within that circle is extracted to compute the point cloud.
An autonomous charging positioning system based on a stereo camera, for running the autonomous charging positioning method described above, comprises a recognition and positioning module and a positioning identification mark mounted on the charging pile.
Further, the positioning identification mark comprises a protruding circular area and an ArUco two-dimensional code attached to the square inscribed in the circle;
the recognition and positioning module comprises an ArUco two-dimensional code recognition module, a plane point cloud extraction and normal vector calculation module, and a pose calculation module.
Compared with the prior art, the invention has the advantages that:
1. The invention performs positioning based on a stereo camera, from which both a 3D point cloud and a 2D image can be obtained. When the target is far away, the 3D point cloud is used for positioning, and the RANSAC algorithm reduces the influence of outliers on the overall result, making the positioning more stable; during this stage the camera plane is adjusted to be as parallel to the target plane as possible in preparation for subsequent positioning from the 2D image. When the target is close, in particular inside the blind zone of the stereo camera's depth map, the 2D image is used for positioning: the camera plane is held as parallel to the target plane as possible, and 2.5D position information is acquired as accurately and stably as possible by recognizing the two-dimensional code.
2. The invention uses a stereo camera for autonomous charging positioning; the cost is lower than that of a lidar, and the stability and precision are superior to ultrasonic, infrared and monocular-camera schemes.
Additional advantages, objects, and features of the invention will be set forth in part in the description which follows and in part will become apparent to those having ordinary skill in the art upon examination of the following or may be learned from practice of the invention.
Drawings
FIG. 1 is a flow chart of autonomous charging positioning according to the present invention;
FIG. 2 is a schematic diagram of the positioning identification mark of the present invention.
Detailed Description
In order to enable those skilled in the art to better understand the present invention, a technical solution in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings.
Example 1
As shown in fig. 1, the autonomous charging positioning method based on the stereo camera of the present embodiment adopts the stereo camera for positioning, and specifically includes the steps of:
s1, acquiring data, acquiring a depth image and an RGB image through a stereo camera, and calibrating and aligning the two images;
s2, identifying a positioning identification mark, extracting depth information of the positioning identification mark, and converting the depth information into point cloud;
and S3, comparing the number of points in the cloud with a threshold, and outputting the target pose for autonomous charging positioning.
According to the invention, a stereo camera is used for positioning. When the target is far away, the 3D point cloud is used for target positioning: plane fitting with a RANSAC loop (least-squares optimization over the consensus set) reduces the influence of outliers on the overall result, so the positioning is more stable, and during this stage the camera plane is adjusted to be as parallel to the target plane as possible in preparation for the subsequent 2D image positioning. When the target is close, in particular inside the point cloud blind zone, the 2D image is used for target positioning: with the camera plane held as parallel to the target plane as possible, 2.5D position information is acquired through two-dimensional code recognition, keeping it as accurate and stable as possible.
Specifically, step S2 includes:
Whether a two-dimensional code is present in the RGB image is first searched and judged. If a code is recognized, it is identified in the RGB image, the code's outer frame is extracted, the depth information within the circumscribed circle of the frame is extracted, and that depth information is converted into a point cloud.
If no two-dimensional code is recognized in the RGB image, the robot is instructed to adjust its pose, reacquire data, and continue searching until the code is found. Because the RGB image of the stereo camera is aligned with the depth image, the position of the identification mark in the RGB image corresponds directly to a position in the depth image, from which the point cloud data of the mark can be computed. A minimal sketch of this step follows.
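As a concrete illustration of step S2, the following minimal sketch uses OpenCV's contrib ArUco module and NumPy, and assumes an aligned depth frame in metres plus known pinhole intrinsics fx, fy, cx, cy; the dictionary choice DICT_4X4_50 and the module-level detectMarkers call (the pre-OpenCV-4.7 API) are assumptions, since the embodiment fixes neither:

    import cv2
    import numpy as np

    def marker_roi_point_cloud(rgb, depth, fx, fy, cx, cy):
        # Step S2: find the ArUco code in the RGB image, take the circumscribed
        # circle of its outer frame as an ROI on the aligned depth image, and
        # back-project the ROI depths into a 3D point cloud in the camera frame.
        gray = cv2.cvtColor(rgb, cv2.COLOR_BGR2GRAY)
        dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)  # assumed
        corners, ids, _ = cv2.aruco.detectMarkers(gray, dictionary)  # pre-4.7 API
        if ids is None:
            return None  # no code found: caller adjusts the robot pose and retries
        quad = corners[0].reshape(4, 2)                       # pixel corners of the code
        center = quad.mean(axis=0)
        radius = np.linalg.norm(quad - center, axis=1).max()  # circumscribed circle
        v, u = np.mgrid[0:depth.shape[0], 0:depth.shape[1]]
        roi = ((u - center[0]) ** 2 + (v - center[1]) ** 2 <= radius ** 2) & (depth > 0)
        z = depth[roi]
        x = (u[roi] - cx) * z / fx                            # pinhole back-projection
        y = (v[roi] - cy) * z / fy
        return np.column_stack([x, y, z])                     # N x 3 point cloud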
Step S3 of the present embodiment includes:
s301, comparing the number of the point clouds with a threshold value, if the number of the point clouds is larger than the threshold value, calculating a space plane corresponding to the point clouds by a RANSAC algorithm, and calculating a normal vector of the space plane; wherein, RANSAC is random sampling consistency;
s302, calculating a yaw angle and a pitch angle between the robot and an X axis of a camera coordinate system according to a space plane normal vector, wherein a roll angle cannot be calculated but the roll angle is assumed to be 0 all the time because the robot moves on an approximate plane; wherein the X axis is outward perpendicular to the camera plane;
s303, calculating the pose by combining the angle and the position, and outputting the target pose.
The pose calculation specifically comprises: computing the barycentric coordinates of the point cloud, taking the barycenter as the three-dimensional coordinates of the target, and obtaining the target pose from the three-dimensional coordinates together with the target's rotation angles. A sketch of S301-S303 follows.
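A minimal sketch of S301-S303, under stated assumptions: it consumes the N x 3 cloud from the previous snippet and uses Open3D's segment_plane as the RANSAC plane fit; the distance threshold and iteration count are assumed tuning values, and because the standard camera frame used here has Z (rather than the X axis named in S302) as the outward optical axis, the angles are measured against Z:

    import numpy as np
    import open3d as o3d

    def pose_from_plane_fit(cloud):
        # S301: RANSAC fit of the plane ax + by + cz + d = 0 to the ROI point cloud.
        pcd = o3d.geometry.PointCloud()
        pcd.points = o3d.utility.Vector3dVector(cloud)
        (a, b, c, d), inliers = pcd.segment_plane(distance_threshold=0.01,  # assumed, m
                                                  ransac_n=3,
                                                  num_iterations=200)       # assumed
        normal = np.array([a, b, c]) / np.linalg.norm([a, b, c])
        if normal[2] > 0:
            normal = -normal  # orient the plane normal toward the camera
        # S302: deviation of the normal from the outward camera axis, split into yaw
        # (about the vertical image axis) and pitch (about the lateral image axis);
        # roll is fixed at 0 because the robot moves on an approximately flat floor.
        yaw = np.arctan2(normal[0], -normal[2])
        pitch = np.arctan2(normal[1], -normal[2])
        # S303: the barycenter of the inlier points gives the target's 3D position.
        centroid = cloud[inliers].mean(axis=0)
        return centroid, (0.0, pitch, yaw)  # position plus (roll, pitch, yaw)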
If the number of points is smaller than the threshold, the target pose is calculated directly by recognizing the two-dimensional code. When too few points are extracted, noise has a proportionally larger and less stable effect than it does on a large sample, so the target pose is taken directly from the ArUco code.
When the robot approaches the target at close range or within the blind-zone range, the target pose is acquired by recognizing the two-dimensional code; the pose calculated from the ArUco code is stable at short distance. As the robot closes in, the depth camera has a near blind zone of roughly 20 cm in which depth information cannot be acquired, so the target pose is still obtained by recognizing the ArUco code; at short range, generally within 1 m, the target pose can be obtained directly from the ArUco code, as in the sketch below.
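A sketch of this near-range fallback, assuming the corner array returned by detectMarkers above; the physical side length MARKER_SIDE_M of the printed code is an assumed installation parameter, and estimatePoseSingleMarkers is the pre-OpenCV-4.7 contrib API (newer releases recommend cv2.solvePnP against the marker's object points instead):

    import cv2

    MARKER_SIDE_M = 0.10  # assumed physical side length of the printed code, metres

    def pose_from_marker(corners, camera_matrix, dist_coeffs):
        # 2.5D positioning from the 2D image alone: one marker, one PnP solution.
        rvecs, tvecs, _ = cv2.aruco.estimatePoseSingleMarkers(
            corners, MARKER_SIDE_M, camera_matrix, dist_coeffs)
        rvec, tvec = rvecs[0].ravel(), tvecs[0].ravel()
        rotation, _ = cv2.Rodrigues(rvec)  # 3x3 rotation, marker frame -> camera frame
        return tvec, rotation              # translation (metres) and orientation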
In this embodiment the two-dimensional code is an ArUco code: starting from the center of the code recognized in the (aligned) depth image, the region is expanded outward into a circular area of radius R, and the depth information within that circle is extracted to compute the point cloud.
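Tying the sketches of this example together, the overall loop might read as follows; capture_aligned_frames, adjust_robot_pose and detect_marker_corners are hypothetical stand-ins for the camera driver, the motion controller and a bare detectMarkers call, and POINT_COUNT_THRESHOLD is an assumed tuning value that the embodiment does not fix:

    POINT_COUNT_THRESHOLD = 500  # assumed; tune per camera, ROI size and distance

    def locate_charging_target(fx, fy, cx, cy, camera_matrix, dist_coeffs):
        while True:
            depth, rgb = capture_aligned_frames()  # S1 (hypothetical driver call)
            cloud = marker_roi_point_cloud(rgb, depth, fx, fy, cx, cy)  # S2
            if cloud is None:
                adjust_robot_pose()  # hypothetical: turn/move, then search again
                continue
            if len(cloud) > POINT_COUNT_THRESHOLD:  # S3: far range, 3D plane fit
                return pose_from_plane_fit(cloud)
            corners = detect_marker_corners(rgb)  # hypothetical: re-detect corners
            return pose_from_marker(corners, camera_matrix, dist_coeffs)  # near range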
Example 2
This embodiment provides an autonomous charging positioning system based on a stereo camera, for running the autonomous charging positioning method of Example 1; it comprises a recognition and positioning module and a positioning identification mark mounted on the charging pile.
As shown in fig. 2, the positioning identification mark of this embodiment comprises a protruding circular area of radius R and an ArUco two-dimensional code attached to the square inscribed in that circle.
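The geometry of this mark makes the depth ROI of step S2 exact: because the code fills the square inscribed in the circle of radius R, the circle is precisely the circumscribed circle of the code's outer frame. A short derivation, under the assumption that the detected side of the code in the image is s pixels:

    % the diagonal of the inscribed square equals the circle's diameter
    s\sqrt{2} = 2R
    \quad\Longrightarrow\quad
    R = \frac{\sqrt{2}}{2}\, s \approx 0.707\, s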
The recognition and positioning module comprises an ArUco two-dimensional code recognition module, a plane point cloud extraction and normal vector calculation module, and a pose calculation module.
The specific embodiments described herein are offered by way of example only to illustrate the spirit of the invention. Those skilled in the art may make various modifications or additions to the described embodiments or substitutions thereof without departing from the spirit of the invention or exceeding the scope of the invention as defined in the accompanying claims.

Claims (10)

1. An autonomous charging positioning method based on a stereo camera, characterized in that a stereo camera is adopted for positioning, the method comprising the following specific steps:
s1, acquiring data, acquiring a depth image and an RGB image through a stereo camera, and calibrating and aligning the two images;
s2, identifying a positioning identification mark, extracting depth information of the positioning identification mark, and converting the depth information into point cloud;
and S3, comparing the number of points in the cloud with a threshold, and outputting the target pose for autonomous charging positioning.
2. The autonomous charging positioning method based on the stereoscopic camera according to claim 1, wherein the step S2 includes:
identifying the two-dimensional code in the RGB image, extracting the code's outer frame, extracting the depth information within the circumscribed circle of the frame, and converting the depth information into a point cloud.
3. The autonomous charging positioning method based on the stereo camera according to claim 2, wherein when the positioning identification mark is identified, if the two-dimensional code is not identified in the RGB image, the pose of the robot is adjusted to reacquire the data and continue searching until the two-dimensional code is found.
4. The autonomous charging positioning method based on the stereoscopic camera according to claim 1, wherein the step S3 includes:
s301, comparing the number of the point clouds with a threshold value, if the number of the point clouds is larger than the threshold value, calculating a space plane corresponding to the point clouds by a RANSAC algorithm, and calculating a normal vector of the space plane;
s302, calculating a yaw angle and a pitch angle between the space plane normal vector and an X axis of a camera coordinate system;
s303, calculating the pose by combining the angle and the position, and outputting the target pose.
5. The autonomous charging positioning method based on the stereo camera according to claim 4, wherein calculating the pose specifically comprises: computing the barycentric coordinates of the point cloud, taking the barycenter as the three-dimensional coordinates of the target, and obtaining the target pose from the three-dimensional coordinates together with the target's rotation angles.
6. The autonomous charging positioning method based on the stereo camera according to claim 4, wherein if the number of the point clouds is smaller than a threshold value, the target pose is directly calculated by identifying the two-dimensional code.
7. The autonomous charging positioning method based on the stereo camera according to claim 1, wherein when the robot approaches the target in a close range or a blind area range, the pose of the target is obtained by identifying the two-dimensional code.
8. The autonomous charging positioning method based on the stereo camera according to claim 1, wherein the two-dimensional code is an ArUco two-dimensional code, and the point cloud is calculated by expanding outward from the center of the code recognized in the depth image to a circular area of radius R and extracting the depth information within that circular area.
9. An autonomous charging positioning system based on a stereo camera for operating the autonomous charging positioning method based on the stereo camera according to any one of claims 1 to 8, characterized by comprising an identification positioning module and a positioning identification mark mounted on a charging pile.
10. The autonomous charging positioning system based on a stereo camera according to claim 9, wherein the positioning identification mark comprises a protruding circular area and an ArUco two-dimensional code attached to the square inscribed in the circle;
the recognition positioning module comprises an Aruco two-dimensional code recognition module, a plane point cloud extraction and normal vector calculation module and a pose calculation module.
CN202310231506.2A 2023-03-07 2023-03-07 Autonomous charging positioning method and system based on stereoscopic camera Pending CN116339326A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310231506.2A CN116339326A (en) 2023-03-07 2023-03-07 Autonomous charging positioning method and system based on stereoscopic camera

Publications (1)

Publication Number Publication Date
CN116339326A (en) 2023-06-27

Family

ID=86881600

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310231506.2A Pending CN116339326A (en) 2023-03-07 2023-03-07 Autonomous charging positioning method and system based on stereoscopic camera

Country Status (1)

Country Link
CN (1) CN116339326A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117132598A (en) * 2023-10-26 2023-11-28 国创移动能源创新中心(江苏)有限公司 Foreign matter detection method and foreign matter detection device for electric automobile charging interface
CN117132598B (en) * 2023-10-26 2024-03-22 国创移动能源创新中心(江苏)有限公司 Foreign matter detection method and foreign matter detection device for electric automobile charging interface

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination