WO2022078467A1 - 机器人自动回充方法、装置、机器人和存储介质 - Google Patents

机器人自动回充方法、装置、机器人和存储介质 Download PDF

Info

Publication number
WO2022078467A1
WO2022078467A1 PCT/CN2021/123910 CN2021123910W
Authority
WO
WIPO (PCT)
Prior art keywords
point cloud
robot
recharging
charging
charging base
Prior art date
Application number
PCT/CN2021/123910
Other languages
English (en)
French (fr)
Inventor
杨勇
吴泽晓
林位麟
Original Assignee
深圳市杉川机器人有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市杉川机器人有限公司 filed Critical 深圳市杉川机器人有限公司
Publication of WO2022078467A1 publication Critical patent/WO2022078467A1/zh

Links

Images

Classifications

    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0231Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0242Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using non-visible light signals, e.g. IR or UV signals
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0212Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
    • G05D1/0221Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory involving a learning process
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0212Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
    • G05D1/0223Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory involving speed control of the vehicle
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0212Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
    • G05D1/0225Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory involving docking at a fixed facility, e.g. base station or loading bay
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0231Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0246Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means
    • G05D1/0251Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means extracting 3D information from a plurality of images taken from different locations, e.g. stereo vision
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0255Control of position or course in two dimensions specially adapted to land vehicles using acoustic signals, e.g. ultra-sonic signals
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0259Control of position or course in two dimensions specially adapted to land vehicles using magnetic or electromagnetic means
    • G05D1/0261Control of position or course in two dimensions specially adapted to land vehicles using magnetic or electromagnetic means using magnetic plots
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0276Control of position or course in two dimensions specially adapted to land vehicles using signals provided by a source external to the vehicle
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0276Control of position or course in two dimensions specially adapted to land vehicles using signals provided by a source external to the vehicle
    • G05D1/028Control of position or course in two dimensions specially adapted to land vehicles using signals provided by a source external to the vehicle using a RF signal

Definitions

  • the present application relates to the field of robotics, and in particular, to an automatic recharging method, device, robot and storage medium for a robot.
  • the robot recharging methods on the market can be roughly divided into the following types according to the different sensors used: infrared sensors, ultrasonic sensors, light sensors, and methods of using a combination of the above-mentioned sensors.
  • the existing recharging methods have poor docking accuracy, are strongly affected by the number of sensors and by the environment, and recharge inefficiently; when only a few sensors are used, the strength of the detected signal must be used to determine the robot's heading, which in turn requires a receiving circuit with very high signal-strength detection accuracy. Existing robots therefore suffer from poor docking accuracy and low recharging efficiency when recharging.
  • the embodiments of the present application aim to solve the problems of poor docking accuracy and low recharging efficiency when the existing robot performs recharging.
  • the present application provides an automatic recharging method for a robot, the automatic recharging method for a robot comprises the following steps:
  • the charging stand is docked to perform automatic recharging according to the pose information.
  • the step of recognizing the charging stand to obtain an identification image includes:
  • a recognition image is obtained by performing shape recognition based on the outline of the charging stand according to the depth image and the grayscale image.
  • the step of acquiring the target point cloud corresponding to the target area includes:
  • the point cloud coordinates of each pixel in the target area are calculated according to the internal parameters to obtain the target point cloud.
  • the step of docking the charging stand to perform automatic recharging according to the pose information includes:
  • the step of sending a recharging signal to the charging base includes:
  • the step of matching the target point cloud with a point cloud template to determine the pose information of the charging stand includes:
  • the pose information of the charging stand is determined.
  • another aspect of the present application further provides a robot automatic recharging device, the device comprising:
  • the sending module is used to send a recharging signal to the charging base when the recharging conditions are met;
  • an identification module configured to identify the charging base to obtain an identification image, and to segment the identification image to obtain a target area, where the target area is an area containing information of the charging base;
  • an acquisition module configured to acquire a target point cloud corresponding to the target area, and match the target point cloud with a point cloud template to determine the pose information of the charging stand;
  • the docking module is used for docking the charging stand to perform automatic recharging according to the pose information.
  • another aspect of the present application also provides a robot. The robot includes a memory, a processor, and a robot automatic recharging program stored in the memory and executable on the processor; when the processor executes the robot automatic recharging program, the steps of the above-mentioned robot automatic recharging method are implemented.
  • another aspect of the present application further provides a computer-readable storage medium on which a robot automatic recharging program is stored; when the robot automatic recharging program is executed by a processor, the steps of the robot automatic recharging method described above are implemented.
  • a recharging signal is sent to the charging base; the time-of-flight camera captures images of the charging base, and the captured images are segmented to obtain a region of interest; the target point cloud corresponding to the region of interest is then obtained and matched with the point cloud template to determine the pose information of the charging stand, and the docking operation with the charging stand is completed according to the pose information to realize automatic recharging of the robot.
  • the charging base is identified through the time-of-flight technology, which effectively reduces the power consumption of the robot, improves the docking accuracy and improves the recharging efficiency of the robot.
  • FIG. 1 is a schematic structural diagram of a robot of a hardware operating environment involved in a solution according to an embodiment of the present application
  • FIG. 2 is a schematic flowchart of the first embodiment of the automatic recharging method for a robot according to the present application
  • FIG. 3 is a schematic flowchart of the second embodiment of the automatic recharging method for a robot according to the present application
  • FIG. 4 is a schematic flowchart of a third embodiment of a robot automatic recharging method of the present application.
  • FIG. 5 is a schematic flowchart of obtaining a target point cloud corresponding to the target area in the automatic recharging method for a robot of the present application;
  • FIG. 6 is a schematic flowchart of matching the target point cloud with a point cloud template to determine the pose information of the charging stand in the automatic recharging method of the robot of the present application;
  • FIG. 7 is a schematic flowchart of planning a recharging path according to the pose information and the environment image information, moving to the charging stand based on the recharging path, and docking the charging stand to perform automatic recharging in the robot automatic recharging method of the present application;
  • FIG. 8 is a schematic diagram of an automatic recharging device for a robot according to the present application.
  • the main solutions of the embodiments of the present application are: when the recharging conditions are met, send a recharging signal to the charging base; identify the charging base to obtain an identification image, and segment the identification image to obtain a target area, where the target area is an area containing charging base information; obtain the target point cloud corresponding to the target area, and match the target point cloud with the point cloud template to determine the pose information of the charging stand; dock with the charging stand to recharge automatically according to the pose information.
  • a recharging signal is sent to the charging base;
  • the time-of-flight camera is used to capture images of the charging base, and the captured images are segmented to obtain a region of interest;
  • the target point cloud corresponding to the region of interest is then obtained; the target point cloud is matched with the point cloud template to determine the pose information of the charging stand, and the docking operation with the charging stand is completed according to the pose information to realize automatic recharging of the robot.
  • the charging base is identified through the time-of-flight technology, which effectively reduces the power consumption of the robot, improves the docking accuracy and improves the recharging efficiency of the robot.
  • FIG. 1 is a schematic structural diagram of a robot of a hardware operating environment involved in the solution of an embodiment of the present application.
  • the robot may include: a processor 1001 , such as a CPU, a network interface 1004 , a user interface 1003 , a memory 1005 , and a communication bus 1002 .
  • the communication bus 1002 is used to realize the connection and communication between these components.
  • the user interface 1003 may include a display screen (Display), an input unit such as a keyboard (Keyboard), and the optional user interface 1003 may also include a standard wired interface and a wireless interface.
  • the network interface 1004 may include a standard wired interface and a wireless interface (eg, a WI-FI interface).
  • the memory 1005 may be high-speed RAM memory, or may be non-volatile memory, such as disk memory.
  • the memory 1005 may also be a storage device independent of the aforementioned processor 1001 .
  • the robot may further include a camera, an RF (Radio Frequency, radio frequency) circuit, a sensor, an audio circuit, a WiFi module, a detector, and the like.
  • the robot may also be configured with other sensors such as a gyroscope, a barometer, a hygrometer, and a temperature sensor, which will not be repeated here.
  • the robot structure shown in FIG. 1 does not constitute a limitation on the robot equipment; the robot may include more or fewer components than shown, combine some components, or arrange the components differently.
  • the memory 1005 which is a computer-readable storage medium, may include an operating system, a network communication module, a user interface module, and a robot automatic recharging program.
  • the network interface 1004 is mainly used to connect to the background server and perform data communication with the background server;
  • the user interface 1003 is mainly used to connect to the client (client) and perform data communication with the client;
  • the processor 1001 can be used to call the robot automatic recharge program stored in memory 1005, and perform the following operations:
  • the charging stand is docked to perform automatic recharging according to the pose information.
  • FIG. 2 is a schematic flowchart of the first embodiment of the robot automatic recharging method of the present application.
  • the robot automatic recharging method includes the following steps:
  • Step S10 sending a recharging signal to the charging stand when the recharging condition is satisfied
  • Robots have important applications in industry and service industry, and are gradually changing industry and service industry.
  • the purpose is to reduce the labor intensity of human beings and create more convenient and relaxed working conditions for human beings.
  • the current power value is detected in real time through the power detection module; when the power value falls below the set threshold, the robot automatically enters the charging mode and sends a recharging signal to the charging stand through the communication module.
  • the robot and the charging stand are in the same local area network, and when it is detected that the recharging condition is satisfied, the robot sends a recharging signal to the charging stand through the local area network.
  • satisfying the recharging conditions includes, but is not limited to, detecting that the robot's battery power is below a set power threshold, or receiving a recharging command from the user.
  • the recharging command here may be triggered by the user touching the control panel on the robot body, or by the user through a mobile terminal device connected to the robot. Further, if the robot only enters the charging mode when its power is already very low, for example below 5%, recharging may easily fail because the remaining power is not enough for a long return path; if it enters the charging mode too early, the robot's working efficiency is low, which reduces the reliability and practicability of the robot.
  • the robot can obtain the position information of the charging base in real time, plan a path to the charging base based on the position information, and calculate the time needed to reach the charging base along that path; from this it can estimate the power consumed on the way, and thus determine at what remaining power level to send a recharging signal to the charging base.
  • when the robot fails to send a recharging signal to the charging base, for example when there is no network or under other special circumstances, the robot can notify the administrator through sound and light reminders or by sending a short message, so that the administrator can charge the robot in time.
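The recharge-trigger logic described above amounts to a simple power-budget check: send the signal when the user asks for it, or when the remaining battery would only just cover the planned return path plus a margin. The sketch below is a hypothetical illustration of that idea; the function name, the drain rate, and the 5% margin are assumptions, not values from the patent:

```python
def should_send_recharge_signal(battery_pct, seconds_to_base,
                                drain_pct_per_s=0.01, margin_pct=5.0,
                                user_requested=False):
    """Decide whether to send a recharging signal to the charging base.

    Triggers when the user requests recharging, or when the remaining
    battery would only just cover the planned path back to the base
    (estimated travel time x drain rate) plus a safety margin.
    """
    if user_requested:
        return True
    needed_pct = seconds_to_base * drain_pct_per_s + margin_pct
    return battery_pct <= needed_pct
```

For example, with 10% battery and a 600 s return path at 0.01%/s drain, the budget is 600 x 0.01 + 5 = 11%, so the robot would send the signal.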
  • Step S20 recognizing the charging base to obtain an identification image, and dividing the identification image to obtain a target area, where the target area is an area containing the information of the charging base;
  • the time-of-flight (TOF) technology is used to identify the charging base, so as to achieve automatic recharging of the robot; the TOF technology adopts an active light detection method, and the requirements on the TOF illumination unit differ from general lighting requirements.
  • the purpose of the TOF illumination is not to illuminate the scene, but to measure distance from the change between the emitted light signal and the reflected light signal; therefore, the TOF illumination unit modulates the light at high frequency before transmitting it.
  • the charging cradle is photographed by a time-of-flight camera (TOF camera).
  • the TOF camera is mainly used to collect the depth image of the charging cradle.
  • the richer positional relationship between objects can be obtained through the distance information, that is, the foreground and the background can be distinguished.
  • segmentation, marking, recognition, and tracking of the target image can be completed using the depth information from the TOF camera. The depth image from the TOF camera is further obtained, and the shape of the charging base is identified based on the depth image to obtain the identification image; the identification image is then segmented to obtain a target area, where the target area is a region of interest (ROI) containing the charging base information.
  • a difference operation on depth values is performed between the background depth image and the identification image to obtain a target depth image containing the charging base; according to the depth value range corresponding to the charging base to be identified, the target depth image is binarized to obtain a binarized image; according to the pixel characteristics of the binarized image, the image area where the charging stand to be identified is located, that is, the region of interest (ROI), is segmented from the binarized image.
  • a region of interest (ROI) is an image region selected from an image, and this region is the focus region of image analysis.
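The segmentation steps above (background subtraction, range binarization, ROI extraction) can be sketched on a toy depth map. This is a minimal pure-Python illustration under assumed depth values and range, not the patent's implementation:

```python
def segment_roi(background, frame, depth_min, depth_max):
    """Subtract the background depth image from the current frame,
    binarize pixels whose depth difference falls within the expected
    range of the charging base, and return the binary mask plus the
    bounding box (ROI) of the matching pixels."""
    rows, cols = len(frame), len(frame[0])
    mask = [[0] * cols for _ in range(rows)]
    hits = []
    for r in range(rows):
        for c in range(cols):
            diff = abs(frame[r][c] - background[r][c])  # depth difference
            if depth_min <= diff <= depth_max:          # binarization
                mask[r][c] = 1
                hits.append((r, c))
    if not hits:
        return mask, None
    rs = [r for r, _ in hits]
    cs = [c for _, c in hits]
    roi = (min(rs), min(cs), max(rs), max(cs))          # region of interest
    return mask, roi
```

In practice the binarized blob would also be filtered by the pixel characteristics (size, shape) of the charging base before being accepted as the ROI.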
  • Step S30 obtaining a target point cloud corresponding to the target area, and matching the target point cloud with a point cloud template to determine the pose information of the charging stand;
  • the two-dimensional plane information captured by an ordinary camera is often used to identify the charging base, while for robots working in a three-dimensional environment, a lot of spatial stereo information is missed.
  • the three-dimensional point cloud also contains the Z-axis coordinate of the point, that is, depth information, which can play a key role in the identification and positioning of the charging base.
  • when the robot obtains the region of interest (ROI), it performs a coordinate transformation on the ROI; that is, the depth image is transformed based on a transformation matrix and a transformation algorithm to obtain the target point cloud. The point cloud can be obtained from a single-frame depth image, or generated by stitching together the point clouds of multiple frames of depth images. Further, the transformation matrix is determined based on the internal parameters of the time-of-flight (TOF) camera and the coordinate-system transformation parameters. Therefore, with reference to FIG. 5, the step of acquiring the target point cloud corresponding to the target area includes:
  • Step S31 acquiring internal parameters in the time-of-flight camera
  • Step S32 Calculate the point cloud coordinates of each pixel in the target area according to the internal parameters to obtain the target point cloud.
  • to obtain the target point cloud, the robot must first perform a 3D reconstruction of the environment where the charging base is located: the TOF camera is moved to a position higher than the charging base, approximately parallel to the plane where the target is located, so that its large field of view can better acquire depth images of the charging stand. The internal parameters of the TOF camera, that is, the camera's intrinsic matrix, are then obtained, such as the focal length in pixels along the x axis, the focal length in pixels along the y axis, and the depth scale; the point cloud coordinates of each pixel in the region of interest (ROI) are calculated according to these internal parameters to obtain the target point cloud. Specifically, the coordinate-system transformation parameters are obtained.
  • the coordinate-system transformation parameters include rotation parameters, displacement parameters, and coordinate scale parameters. According to the coordinate-system transformation parameters and the internal parameters of the camera, the coordinates of each pixel in the region of interest (ROI) are converted into three-dimensional space coordinates, that is, point cloud coordinates, based on the transformation matrix; all the point cloud coordinates together form the target point cloud.
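The pixel-to-point-cloud conversion of steps S31 and S32 is the standard pinhole back-projection, written out below. The intrinsic values fx, fy, cx, cy and the depth scale are illustrative assumptions; a real TOF camera supplies its own calibration:

```python
def pixels_to_point_cloud(roi_pixels, fx, fy, cx, cy, depth_scale=1.0):
    """Back-project (u, v, depth) pixels of the ROI into 3D camera
    coordinates using the TOF camera's intrinsic parameters:

        Z = depth * depth_scale
        X = (u - cx) * Z / fx
        Y = (v - cy) * Z / fy
    """
    cloud = []
    for u, v, d in roi_pixels:
        z = d * depth_scale
        x = (u - cx) * z / fx
        y = (v - cy) * z / fy
        cloud.append((x, y, z))
    return cloud
```

A pixel at the principal point (cx, cy) maps onto the optical axis (X = Y = 0); applying the rotation and translation of the coordinate-system transformation parameters afterwards would move these camera-frame points into the robot or world frame.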
  • the pre-stored point cloud template is obtained, and then the target point cloud and the point cloud template are matched to determine the pose information of the charging stand.
  • the template matching method can be traditional point cloud matching, or a deep-learning-based method. Further, referring to FIG. 6, the step of matching the target point cloud with a point cloud template to determine the pose information of the charging stand includes:
  • Step S33 acquiring the charging stand point cloud in the point cloud template, and matching the target point cloud with the charging stand point cloud;
  • Step S34 if the target point cloud matches the point cloud of the charging stand, determine the pose information of the charging stand.
  • the point cloud template stored by the robot contains the data model of the charging base to be identified.
  • the data model of the charging base to be identified can be in the form of a point cloud or of a reconstructed mesh; whichever form is used, a charging base template of the same data type is required. In the process of matching the target point cloud with the point cloud template, it is mainly determined whether the target point cloud contains point cloud data matching the point cloud of the charging stand; if matching point cloud data exists, the pose information of the charging stand can be determined from it.
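One traditional way to decide whether the target point cloud matches the stored charging-stand template, as the text mentions, is nearest-neighbour correspondence: accept the match when the mean distance from each target point to its closest template point is below a threshold. The brute-force sketch below shows only this acceptance test (a full ICP would iterate it with a pose update); the 0.05 threshold is an assumed value:

```python
import math

def mean_nn_distance(target, template):
    """For each target point, find its nearest template point; the mean
    of these nearest-neighbour distances measures cloud overlap."""
    total = 0.0
    for p in target:
        total += min(math.dist(p, q) for q in template)
    return total / len(target)

def matches(target, template, threshold=0.05):
    """Declare a charging-base match when the clouds overlap closely."""
    return mean_nn_distance(target, template) < threshold
```

Libraries such as PCL or Open3D provide refined versions of this (ICP with k-d tree correspondence search) that also return the rigid transform, i.e. the pose of the charging stand.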
  • Step S40, docking the charging stand to perform automatic recharging according to the pose information.
  • when the robot obtains the pose information of the charging stand, it determines the three-dimensional position of the charging stand in space based on the pose information, and further identifies the charging interface on the charging stand, that is, the docking interface; the control unit then controls the robot to move to the docking interface and complete the docking with the charging stand, realizing the automatic recharging function.
  • a recharging signal is sent to the charging base; the time-of-flight camera captures images of the charging base, and the captured images are segmented to obtain a region of interest; the target point cloud corresponding to the region of interest is then obtained and matched with the point cloud template to determine the pose information of the charging stand, and the docking operation with the charging stand is completed according to the pose information to realize automatic recharging of the robot.
  • first, the use of infrared sensors is avoided, which can effectively reduce the power consumption of the robot; secondly, infrared sensor signal coverage during docking is limited and easily affected by the environment, making the robot unstable and reducing docking accuracy. Identifying the appearance of the charging stand through time-of-flight technology can effectively reduce power consumption, improve docking accuracy and recharging efficiency, and at the same time provides a certain robustness to the environment.
  • Step S21 acquiring the depth image and the grayscale image in the time-of-flight camera
  • Step S22 performing shape recognition based on the outline of the charging stand according to the depth image and the grayscale image to obtain a recognition image.
  • the charging base is identified by the time-of-flight technology; specifically, the charging base is identified using the TOF sensor with one or both of its images (the depth image and the IR image). Sensor-related physical parameters, including but not limited to image size, resolution, exposure time, and frame rate, can be adjusted to realize shape (contour) identification of the charging base; the identification objects are charging stands with different physical parameters such as shape, contour, volume, size, weight, color, and surface material.
  • the robot obtains the depth image and grayscale image in the time-of-flight camera.
  • the depth image is also called the range image.
  • the depth image uses the distance (depth) from the image collector to each point in the scene as the pixel value, and can directly reflect the geometry of the visible surfaces of the scene;
  • the grayscale image is an IR (infrared) image; its format is gray8, that is, an 8-bit grayscale image.
  • shape recognition based on the outline of the charging stand is performed to obtain a recognized image.
  • the depth images and IR images corresponding to the front, rear, left, and right side views of the charging stand are obtained from the time-of-flight camera; the depth images from all directions are triangulated, and all triangulated depth images are then fused in scale space to construct a hierarchical directed distance field. An overall triangulation algorithm is applied to all voxels in the distance field to generate a convex hull covering all voxels, and the isosurface is constructed from it.
  • the isosurfaces of the front side view, the rear side view, the left side view, and the right side view are aligned so that their top surfaces completely overlap, thereby obtaining the three-dimensional image of the charging stand, that is, the identification image.
  • the shape of the charging stand is recognized through the depth image and the IR image in the time-of-flight camera. Compared with image recognition captured by an ordinary camera, the time-of-flight camera can quickly and accurately identify the charging stand.
  • the difference between the third embodiment of the robot automatic recharging method and the first and second embodiments is that the step of docking the charging stand to perform automatic recharging according to the pose information includes the following steps:
  • Step S41 obtaining the current environment image information
  • Step S42 planning a recharging path according to the pose information and the environment image information, moving to the charging stand based on the recharging path, and docking the charging stand to perform automatic recharging.
  • when the robot recognizes the three-dimensional image of the charging base, that is, its shape information, it needs to determine the recharging path to the charging base. Specifically, the environment image information of the environment in which the robot is moving is collected based on the time-of-flight camera; the obstacle information between the robot and the charging base is determined based on the environment image information and the pose information; the recharging path of the robot is then determined based on the obstacle information. If there is no obstacle between the robot and the charging base, the robot moves to the charging base in a straight line; if there is an obstacle between the robot and the charging base, one or more recharging paths are determined based on the position information of the obstacle.
  • each corner such as moving 1 meter straight ahead, move 120° to the left front.
  • the shortest recharging path needs to be selected from multiple recharging paths.
  • the robot needs to detect the status of the obstacle in real time during the movement; when the obstacle moves, the recharging path needs to be adjusted in time to avoid collision with the obstacle.
  • after the step of planning the recharging path according to the pose information and the environment image information, moving toward the charging base along the recharging path, and docking with the charging base to perform automatic recharging, the method includes:
  • Step S420, judging whether the docking with the charging stand is successful;
  • Step S421, if the docking with the charging stand is successful, a docking success signal is received, and the identification operation of the charging stand is ended.
  • when the robot and the charging stand perform the docking operation, the detection unit detects in real time whether the docking operation is successful; when the robot receives the docking success signal sent by the charging base, identification of the charging base by the time-of-flight camera is stopped. If the robot does not receive the docking success signal sent by the charging base, the current docking operation has failed, and the pose information of the charging base is re-identified based on the time-of-flight camera.
  • the recharging path is set in advance by the image information of the environment where the robot is located and the posture information of the charging stand, so that the robot can quickly connect to the charging stand for charging, which improves the efficiency of recharging.
  • the present application also provides a robot automatic recharging device, which includes:
  • the sending module 10 is used for sending a recharging signal to the charging stand when the recharging condition is met;
  • the identification module 20 is used for identifying the charging stand to obtain an identification image, and dividing the identification image to obtain a target area, where the target area is an area containing the charging stand information;
  • an acquisition module 30 configured to acquire a target point cloud corresponding to the target area, and match the target point cloud with a point cloud template to determine the pose information of the charging stand;
  • the docking module 40 is used for docking the charging stand to perform automatic recharging according to the posture information.
  • the identification module 20 includes an acquisition unit and an identification unit
  • the acquisition unit is used to acquire the depth image and the grayscale image in the time-of-flight camera;
  • the identification unit is configured to perform shape identification based on the outline of the charging stand according to the depth image and the grayscale image to obtain an identification image.
  • the acquisition module 30 includes an acquisition unit and a calculation unit
  • the acquisition unit configured to acquire internal parameters in the time-of-flight camera
  • the computing unit is configured to calculate the point cloud coordinates of each pixel in the target area according to the internal parameters to obtain the target point cloud.
  • the docking module 40 includes: an acquisition unit and a planning unit;
  • the acquisition unit is used to acquire the current image information of the environment
  • the planning unit is configured to plan a recharging path according to the pose information and the environment image information, move to the charging stand based on the recharging path, and dock the charging stand to perform automatic recharging.
  • the planning unit includes: a judgment subunit and a docking subunit;
  • the judging subunit is used for judging whether the docking with the charging stand is successful
  • the docking sub-unit is configured to receive a docking success signal if the docking with the charging stand is successful, and end the identification operation of the charging stand.
  • the sending module 10 includes: a detecting unit and a sending unit;
  • the detection unit is configured to enter the charging mode when it is detected that the current power value is lower than the set power threshold
  • the sending unit is configured to send a recharging signal to the charging base according to the charging mode.
  • the acquisition unit is further configured to acquire the point cloud of the charging stand in the point cloud template, and match the target point cloud with the point cloud of the charging stand;
  • the acquiring unit is further configured to determine the pose information of the charging stand if the target point cloud matches the charging stand point cloud.
  • the present application also provides a robot, which includes a memory, a processor, and a robot automatic recharging program stored in the memory and running on the processor.
  • when the robot satisfies the recharging condition, it sends a recharging signal to the charging base.
  • the robot uses the time-of-flight camera to capture images of the charging base, acquires the depth image and IR image from the time-of-flight camera, performs shape recognition based on the outline of the charging base according to the depth image and IR image to obtain the identification image, and segments the identification image to obtain the region of interest (ROI); the point cloud coordinates of each pixel in the region of interest (ROI) are further calculated according to the internal parameters of the time-of-flight camera to obtain the target point cloud, and the target point cloud is matched against the point cloud template to determine the pose information of the charging base; the docking operation with the charging base is completed according to the pose information to realize automatic recharging of the robot.
  • the charging base is identified through the time-of-flight technology, which effectively reduces the power consumption of the robot, improves the docking accuracy and improves the recharging efficiency of the robot.
  • the present application also provides a computer-readable storage medium, on which a robot automatic recharging program is stored, and when the robot automatic recharging program is executed by a processor, the above-mentioned automatic robot recharging is realized steps of the method.
  • the embodiments of the present application may be provided as a method, a system, or a computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, etc.) having computer-usable program code embodied therein.
  • These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture comprising an instruction apparatus that implements the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
  • any reference signs placed between parentheses shall not be construed as limiting the claim.
  • the word “comprising” does not exclude the presence of elements or steps not listed in a claim.
  • the word “a” or “an” preceding an element does not preclude the presence of a plurality of such elements.
  • the present application can be implemented by means of hardware comprising several different components and by means of a suitably programmed computer. In a unit claim enumerating several means, several of these means may be embodied by one and the same item of hardware.
  • the use of the words first, second, and third, etc. does not denote any order; these words may be interpreted as names.

Abstract

A robot automatic recharging method, an apparatus, a robot, and a storage medium. The robot automatic recharging method includes: sending a recharging signal to a charging base when a recharging condition is met (S10); identifying the charging base to obtain an identification image, and segmenting the identification image to obtain a target area, the target area being an area containing charging base information (S20); acquiring a target point cloud corresponding to the target area, and matching the target point cloud against a point cloud template to determine pose information of the charging base (S30); and docking with the charging base according to the pose information to perform automatic recharging (S40).

Description

Robot Automatic Recharging Method and Apparatus, Robot, and Storage Medium
CROSS-REFERENCE TO RELATED APPLICATIONS
This application is based on and claims priority to Chinese patent application No. 202011095113.6, filed on October 14, 2020, the entire contents of which are incorporated herein by reference.
TECHNICAL FIELD
This application relates to the field of robotics, and in particular to a robot automatic recharging method and apparatus, a robot, and a storage medium.
BACKGROUND
Existing robot recharging methods on the market can be roughly classified by the sensors they use: infrared sensors, ultrasonic sensors, light sensors, or combinations of the above. However, existing recharging methods suffer from poor docking accuracy, are strongly affected by the number of sensors and by the environment, and have low recharging efficiency. When few sensors are used, the direction of the robot must be inferred from the strength of the detected signal, which requires a receiving circuit with very high signal-strength detection accuracy. Existing robots therefore face poor docking accuracy and low recharging efficiency during recharging.
SUMMARY
Embodiments of this application provide a robot automatic recharging method and apparatus, a robot, and a storage medium, aiming to solve the problems of poor docking accuracy and low recharging efficiency of existing robots during recharging.
To achieve the above object, one aspect of this application provides a robot automatic recharging method, including the following steps:
sending a recharging signal to a charging base when a recharging condition is met;
identifying the charging base to obtain an identification image, and segmenting the identification image to obtain a target area, the target area being an area containing information of the charging base;
acquiring a target point cloud corresponding to the target area, and matching the target point cloud against a point cloud template to determine pose information of the charging base;
docking with the charging base according to the pose information to perform automatic recharging.
Optionally, the step of identifying the charging base to obtain an identification image includes:
acquiring a depth image and a grayscale image from a time-of-flight camera;
performing shape recognition based on the outline of the charging base according to the depth image and the grayscale image to obtain the identification image.
Optionally, the step of acquiring the target point cloud corresponding to the target area includes:
acquiring internal parameters of the time-of-flight camera;
calculating point cloud coordinates of each pixel in the target area according to the internal parameters to obtain the target point cloud.
Optionally, the step of docking with the charging base according to the pose information to perform automatic recharging includes:
acquiring image information of the current environment;
planning a recharging path according to the pose information and the environment image information, moving toward the charging base along the recharging path, and docking with the charging base to perform automatic recharging.
Optionally, after the step of planning a recharging path according to the pose information and the environment image information, moving toward the charging base along the recharging path, and docking with the charging base to perform automatic recharging, the method includes:
judging whether docking with the charging base is successful;
if docking with the charging base is successful, receiving a docking success signal, and ending the identification operation of the charging base.
Optionally, the step of sending a recharging signal to the charging base when the recharging condition is met includes:
entering a charging mode when it is detected that the current battery level is below a set battery threshold;
sending the recharging signal to the charging base according to the charging mode.
Optionally, the step of matching the target point cloud against the point cloud template to determine the pose information of the charging base includes:
acquiring a charging base point cloud from the point cloud template, and matching the target point cloud against the charging base point cloud;
if the target point cloud matches the charging base point cloud, determining the pose information of the charging base.
In addition, to achieve the above object, another aspect of this application provides a robot automatic recharging apparatus, including:
a sending module, configured to send a recharging signal to a charging base when a recharging condition is met;
an identification module, configured to identify the charging base to obtain an identification image, and segment the identification image to obtain a target area, the target area being an area containing information of the charging base;
an acquisition module, configured to acquire a target point cloud corresponding to the target area, and match the target point cloud against a point cloud template to determine pose information of the charging base;
a docking module, configured to dock with the charging base according to the pose information to perform automatic recharging.
In addition, to achieve the above object, another aspect of this application provides a robot, including a memory, a processor, and a robot automatic recharging program stored in the memory and runnable on the processor, where the processor implements the steps of the robot automatic recharging method described above when executing the robot automatic recharging program.
In addition, to achieve the above object, another aspect of this application provides a computer-readable storage medium storing a robot automatic recharging program that, when executed by a processor, implements the steps of the robot automatic recharging method described above.
In this embodiment, when the robot meets the recharging condition, it sends a recharging signal to the charging base; a time-of-flight camera captures images of the charging base, and the captured image is segmented to obtain a region of interest; the target point cloud corresponding to the region of interest is then acquired and matched against a point cloud template to determine the pose information of the charging base, and docking with the charging base is completed according to the pose information to realize automatic recharging of the robot. Identifying the charging base by time-of-flight technology effectively reduces the robot's power consumption, improves docking accuracy, and increases the robot's recharging efficiency.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a schematic structural diagram of a robot in the hardware operating environment involved in embodiments of this application;
FIG. 2 is a schematic flowchart of a first embodiment of the robot automatic recharging method of this application;
FIG. 3 is a schematic flowchart of a second embodiment of the robot automatic recharging method of this application;
FIG. 4 is a schematic flowchart of a third embodiment of the robot automatic recharging method of this application;
FIG. 5 is a schematic flowchart of acquiring the target point cloud corresponding to the target area in the robot automatic recharging method of this application;
FIG. 6 is a schematic flowchart of matching the target point cloud against the point cloud template to determine the pose information of the charging base in the robot automatic recharging method of this application;
FIG. 7 is a schematic flowchart of the steps following planning a recharging path according to the pose information and the environment image information, moving toward the charging base along the recharging path, and docking with the charging base for automatic recharging in the robot automatic recharging method of this application;
FIG. 8 is a schematic diagram of the robot automatic recharging apparatus of this application.
The realization of the objects, functional features, and advantages of this application will be further described with reference to the accompanying drawings in conjunction with the embodiments.
DETAILED DESCRIPTION
It should be understood that the specific embodiments described here are intended only to explain this application and are not intended to limit it.
The main solution of the embodiments of this application is: sending a recharging signal to a charging base when a recharging condition is met; identifying the charging base to obtain an identification image, and segmenting the identification image to obtain a target area, the target area being an area containing charging base information; acquiring a target point cloud corresponding to the target area, and matching the target point cloud against a point cloud template to determine pose information of the charging base; and docking with the charging base according to the pose information to perform automatic recharging.
Most existing robot recharging methods rely on infrared sensors, ultrasonic sensors, light sensors, or combinations of the above. Moreover, existing methods are strongly affected by the number of sensors and by the environment, leading to poor docking accuracy and low recharging efficiency. In this application, when the robot meets the recharging condition, it sends a recharging signal to the charging base; a time-of-flight camera captures images of the charging base, and the captured image is segmented to obtain a region of interest; the target point cloud corresponding to the region of interest is then acquired and matched against a point cloud template to determine the pose information of the charging base, and the docking operation with the charging base is completed according to the pose information to realize automatic recharging of the robot. Identifying the charging base by time-of-flight technology effectively reduces the robot's power consumption, improves docking accuracy, and increases recharging efficiency.
As shown in FIG. 1, FIG. 1 is a schematic structural diagram of a robot in the hardware operating environment involved in embodiments of this application.
As shown in FIG. 1, the robot may include: a processor 1001 such as a CPU, a network interface 1004, a user interface 1003, a memory 1005, and a communication bus 1002. The communication bus 1002 is used for connection and communication among these components. The user interface 1003 may include a display and an input unit such as a keyboard, and optionally may also include standard wired and wireless interfaces. The network interface 1004 may optionally include standard wired and wireless interfaces (such as a Wi-Fi interface). The memory 1005 may be a high-speed RAM memory or a stable non-volatile memory such as disk storage, and may optionally also be a storage device independent of the aforementioned processor 1001.
Optionally, the robot may further include a camera, an RF (Radio Frequency) circuit, sensors, an audio circuit, a WiFi module, a detector, and the like. The robot may of course also be equipped with other sensors such as a gyroscope, a barometer, a hygrometer, and a temperature sensor, which are not described further here.
Those skilled in the art will understand that the robot structure shown in FIG. 1 does not constitute a limitation on the robot device, which may include more or fewer components than shown, combine certain components, or arrange components differently.
As shown in FIG. 1, the memory 1005, as a computer-readable storage medium, may include an operating system, a network communication module, a user interface module, and a robot automatic recharging program.
In the robot shown in FIG. 1, the network interface 1004 is mainly used to connect to a backend server for data communication; the user interface 1003 is mainly used to connect to a client (user side) for data communication; and the processor 1001 may be used to call the robot automatic recharging program stored in the memory 1005 and perform the following operations:
sending a recharging signal to a charging base when a recharging condition is met;
identifying the charging base to obtain an identification image, and segmenting the identification image to obtain a target area, the target area being an area containing information of the charging base;
acquiring a target point cloud corresponding to the target area, and matching the target point cloud against a point cloud template to determine pose information of the charging base;
docking with the charging base according to the pose information to perform automatic recharging.
Referring to FIG. 2, FIG. 2 is a schematic flowchart of a first embodiment of the robot automatic recharging method of this application. The robot automatic recharging method includes the following steps:
Step S10: sending a recharging signal to a charging base when a recharging condition is met.
Robots have major applications in industry and the service sector and are gradually transforming both, with the aim of reducing human labor intensity and creating more convenient and comfortable working conditions. To keep a robot running normally, an automatic recharging function is required.
While working, the robot monitors its current battery level in real time through a battery detection module. When the current battery level falls below a set threshold, e.g., below 10%, the robot automatically enters charging mode and sends a recharging signal to the charging base through a communication module. In one embodiment, the robot and the charging base are on the same local area network, and upon detecting that the recharging condition is met, the robot sends the recharging signal to the charging base over that network. The recharging condition includes, but is not limited to, detecting that the robot's battery level is below the set threshold, or receiving a recharging instruction from the user; the instruction may be triggered by the user pressing a control panel on the robot body, or via a mobile terminal connected to the robot. Further, if the robot only enters charging mode when its battery is very low, e.g., below 5%, recharging can easily fail, for instance because the path is too long for the remaining battery; but reserving too much battery before recharging lowers the robot's working efficiency, reducing its reliability and practicality. Therefore, while working, the robot can acquire the position of the charging base in real time, plan a path to the base based on that position, estimate the travel time along the path, and compute the battery consumption for that travel time, thereby determining at what remaining battery level to send the recharging signal to the charging base.
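The trigger logic described above — estimate the travel time to the base, convert it into expected battery drain, and recharge while enough margin remains — can be sketched as follows. All parameter names and values (speed, drain rate, reserve) are illustrative assumptions, not figures from the patent:

```python
def should_send_recharge_signal(battery_pct, path_length_m,
                                speed_m_s=0.3, drain_pct_per_s=0.05,
                                reserve_pct=10.0):
    """Decide whether the robot should start recharging now.

    Estimates travel time along the planned path to the charging base,
    converts it to an expected battery drain, adds a safety reserve,
    and triggers recharging when the current level is at or below
    that requirement.  All constants are hypothetical defaults.
    """
    travel_time_s = path_length_m / speed_m_s
    needed_pct = travel_time_s * drain_pct_per_s + reserve_pct
    return battery_pct <= needed_pct
```

With these assumed defaults, a robot 30 m from the base at 12% battery heads back, while the same robot at 80% keeps working; a real implementation would calibrate the drain rate against the robot's measured consumption.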
Optionally, when the robot fails to send the recharging signal to the charging base, for example because there is no network or in other special circumstances, the robot may notify an administrator by sound and light alerts or by SMS, so that the administrator can charge the robot in time.
Step S20: identifying the charging base to obtain an identification image, and segmenting the identification image to obtain a target area, the target area being an area containing charging base information.
This application identifies the charging base using time-of-flight (ToF) technology to achieve automatic recharging. ToF uses active light detection; unlike ordinary illumination, the purpose of the ToF illumination unit is not lighting but measuring distance from the change between the emitted and reflected light signals, so the ToF illumination unit emits light only after high-frequency modulation.
After the robot sends the recharging signal to the charging base, it photographs the base with a time-of-flight (ToF) camera. The ToF camera is mainly used to capture a depth image of the base; compared with a two-dimensional image, the depth image provides richer positional relationships between objects through distance information, i.e., it distinguishes foreground from background. The depth information from the ToF camera also supports segmentation, labeling, recognition, and tracking of the target image. The depth image is then acquired from the ToF camera and used to recognize the shape of the charging base, yielding the identification image; the identification image is segmented to obtain the target area, where the target area is a region of interest (ROI) containing charging base information. In one embodiment, the depth values of a background depth image and of the identification image are subtracted to obtain a target depth image containing the charging base; the target depth image is binarized according to the depth value range of the base to be identified; and, based on the pixel features of the binarized image, the image region containing the base to be identified, i.e., the ROI, is segmented out. In image processing, an ROI is a region selected from an image as the focus of image analysis.
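The background-subtraction and binarization scheme described above can be sketched in a few lines. This is a minimal illustration with depth images as plain lists of rows; the depth band `[lo, hi]` stands in for the depth value range of the base, and a real pipeline would also use the binary image's pixel features, not just a bounding box:

```python
def extract_roi(depth, background, lo, hi):
    """Segment a charging-base ROI from a depth image.

    Subtracts the background depth image, marks pixels whose depth
    difference falls in [lo, hi] as foreground, and returns the
    bounding box (rmin, rmax, cmin, cmax) of the foreground pixels,
    or None if no pixel matches.
    """
    rows, cols = len(depth), len(depth[0])
    hits = [(r, c)
            for r in range(rows) for c in range(cols)
            if lo <= abs(depth[r][c] - background[r][c]) <= hi]
    if not hits:
        return None
    rs = [r for r, _ in hits]
    cs = [c for _, c in hits]
    return min(rs), max(rs), min(cs), max(cs)
```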
Step S30: acquiring a target point cloud corresponding to the target area, and matching the target point cloud against a point cloud template to determine pose information of the charging base.
Existing robot recharging techniques often identify the charging base from the two-dimensional planar information captured by an ordinary camera, which loses much spatial information for a robot working in a three-dimensional environment. A two-dimensional image contains only the X-axis and Y-axis coordinates of each point, whereas a three-dimensional point cloud also contains the Z-axis coordinate, i.e., depth information, which plays a key role in identifying and locating the charging base.
When the robot obtains the region of interest (ROI), it performs a coordinate transformation on it: the depth image is converted into the target point cloud using a transformation matrix and a transformation algorithm. The point cloud may be generated from a single depth frame or by stitching the point clouds of multiple depth frames. Further, the transformation matrix is determined from the internal parameters of the time-of-flight (ToF) camera and the coordinate-system transformation parameters. Therefore, referring to FIG. 5, the step of acquiring the target point cloud corresponding to the target area includes:
Step S31: acquiring the internal parameters of the time-of-flight camera;
Step S32: calculating the point cloud coordinates of each pixel in the target area according to the internal parameters to obtain the target point cloud.
To acquire the target point cloud, the robot first reconstructs the environment of the charging base in three dimensions: the ToF camera is moved to a position somewhat above the base, approximately parallel to the plane of the target, where it has a larger field of view and can capture a better depth image of the base. The internal parameters of the ToF camera, i.e., the camera intrinsic matrix, are then acquired, such as the focal length in pixels along the x axis, the focal length in pixels along the y axis, and the depth scale. The point cloud coordinates of each pixel in the ROI are calculated from these internal parameters. Specifically, the coordinate-system transformation parameters, including rotation, translation, and coordinate scale parameters, are acquired; based on these and the camera's internal parameters, the coordinates of each pixel in the ROI are transformed by the transformation matrix into three-dimensional spatial coordinates, i.e., point cloud coordinates, and all the point cloud coordinates together form the target point cloud.
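For the common pinhole-camera case, the per-pixel back-projection described above reduces to a simple formula using the intrinsics `fx, fy` (focal lengths in pixels) and `cx, cy` (principal point). A minimal sketch, ignoring lens distortion and the extrinsic rotation/translation that the full transformation matrix would also apply:

```python
def pixels_to_point_cloud(roi_pixels, fx, fy, cx, cy):
    """Back-project ROI pixels into 3-D camera coordinates.

    Each entry of roi_pixels is (u, v, z): pixel column, pixel row,
    and the depth value at that pixel.  Standard pinhole model:
    X = (u - cx) * z / fx, Y = (v - cy) * z / fy, Z = z.
    """
    cloud = []
    for u, v, z in roi_pixels:
        x = (u - cx) * z / fx
        y = (v - cy) * z / fy
        cloud.append((x, y, z))
    return cloud
```

The set of all `(x, y, z)` triples produced for the ROI is exactly the target point cloud referred to in the text.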
After the target point cloud corresponding to the ROI is obtained, a pre-stored point cloud template is acquired, and the target point cloud is matched against the point cloud template to determine the pose information of the charging base. The template matching method may be conventional point cloud matching or a deep-learning method. Further, referring to FIG. 6, the step of matching the target point cloud against the point cloud template to determine the pose information of the charging base includes:
Step S33: acquiring a charging base point cloud from the point cloud template, and matching the target point cloud against the charging base point cloud;
Step S34: if the target point cloud matches the charging base point cloud, determining the pose information of the charging base.
The point cloud template stored by the robot contains a data model of the charging base to be identified, which may be in point cloud form or in reconstructed mesh form; a base template of the same data type simply needs to be used with the corresponding data model. During matching of the target point cloud against the point cloud template, the key question is whether the target point cloud contains point cloud data that matches the charging base point cloud; if matching point cloud data exists, the pose information of the charging base can be determined from that data.
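A minimal sketch of the matching step, far simpler than the conventional ICP or deep-learning matching the text allows for: it aligns centroids (yielding a translation-only pose estimate, i.e., it assumes the template is already in the right orientation) and declares a match when every template point has a target point within a tolerance. The tolerance value and the translation-only assumption are illustrative:

```python
def match_template(target, template, tol=0.05):
    """Match a target point cloud against a charging-base template.

    Returns the estimated (dx, dy, dz) translation when the clouds
    match, or None when they do not.
    """
    def centroid(pts):
        n = len(pts)
        return tuple(sum(p[i] for p in pts) / n for i in range(3))

    ct, cm = centroid(target), centroid(template)
    shift = tuple(ct[i] - cm[i] for i in range(3))
    # Move the template onto the target and check residuals.
    moved = [tuple(p[i] + shift[i] for i in range(3)) for p in template]
    for m in moved:
        nearest = min(sum((m[i] - t[i]) ** 2 for i in range(3)) ** 0.5
                      for t in target)
        if nearest > tol:
            return None  # some template point has no nearby target point
    return shift
```

A production system would instead iterate correspondence and alignment (ICP) to recover rotation as well, giving the full 6-DoF pose the patent calls "pose information".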
Step S40: docking with the charging base according to the pose information to perform automatic recharging.
When the robot obtains the pose information of the charging base, it determines the base's three-dimensional position in space from that pose information and then identifies the charging interface on the base, i.e., the docking interface; the control unit moves the robot toward that docking interface and completes the docking with the charging base, realizing the automatic recharging function.
In this embodiment, when the robot meets the recharging condition, it sends a recharging signal to the charging base; a time-of-flight camera captures images of the charging base, and the captured image is segmented to obtain a region of interest; the target point cloud corresponding to the region of interest is then acquired and matched against a point cloud template to determine the pose information of the charging base, and docking with the charging base is completed according to the pose information to realize automatic recharging of the robot. Identifying the charging base with time-of-flight technology avoids the use of infrared sensors, which effectively reduces the robot's power consumption. Moreover, during docking an infrared sensing signal has limited coverage and is easily affected by the environment, so the robot cannot decode the signal stably and docking accuracy drops. Recognizing the appearance of the charging base by time-of-flight technology can effectively reduce power consumption, improve docking accuracy and recharging efficiency, and provides a degree of robustness to the environment.
Further, referring to FIG. 3, a second embodiment of the robot automatic recharging method of this application is presented.
The second embodiment of the robot automatic recharging method differs from the first embodiment in that the step of identifying the charging base to obtain an identification image includes:
Step S21: acquiring a depth image and a grayscale image from the time-of-flight camera;
Step S22: performing shape recognition based on the outline of the charging base according to the depth image and the grayscale image to obtain the identification image.
This application identifies the charging base using time-of-flight technology, specifically with a ToF sensor, using one or both of its image types (the depth image and the IR image) in combination. Sensor-related physical parameters including, but not limited to, image size, resolution, exposure time, and frame rate can all be adjusted to realize recognition of the shape (outline) of the charging base; the recognition targets are charging bases with different physical parameters such as outline shape, volume, size, weight, color, and surface material.
The robot acquires the depth image and the grayscale image from the time-of-flight camera. The depth image, also called a range image, uses the distance (depth) from the image sensor to each point in the scene as pixel values, and directly reflects the geometry of the visible surfaces of the scene. The grayscale image is an IR (infrared) image in gray8 format, i.e., an 8-bit grayscale image. Shape recognition based on the outline of the charging base is then performed on the depth image and the grayscale image to obtain the identification image. In one embodiment, the depth images and IR images corresponding to the front side view, rear side view, left side view, and right side view of the charging base are acquired from the time-of-flight camera; the depth image of each view is triangulated, then all triangulated depth images are fused in scale space to build a layered signed distance field; a global triangulation algorithm is applied to all voxels in the distance field to produce a convex hull covering them, and an isosurface is constructed algorithmically. The isosurfaces obtained from the front, rear, left, and right views are stitched together so that their top surfaces completely coincide, yielding the three-dimensional image of the charging base, i.e., the identification image.
This embodiment recognizes the shape of the charging base from the depth image and the IR image of the time-of-flight camera; compared with recognition of images taken by an ordinary camera, the time-of-flight camera can identify the charging base quickly and accurately.
Further, referring to FIG. 4, a third embodiment of the robot automatic recharging method of this application is presented.
The third embodiment of the robot automatic recharging method differs from the first and second embodiments in that the step of docking with the charging base according to the pose information to perform automatic recharging includes:
Step S41: acquiring image information of the current environment;
Step S42: planning a recharging path according to the pose information and the environment image information, moving toward the charging base along the recharging path, and docking with the charging base to perform automatic recharging.
When the robot has recognized the three-dimensional image, i.e., the shape information, of the charging base, it needs to determine the recharging path to the base. Specifically, the time-of-flight camera collects image information of the environment the robot moves through, yielding the environment image information; obstacle information between the robot and the charging base is determined from the environment image information and the pose information; and the recharging path of the robot is then determined from the obstacle information. If there is no obstacle between the robot and the charging base, the robot moves to the base directly in a straight line; if there is an obstacle, one or more recharging paths are determined from the position information of the obstacle. When a recharging path is not a straight line, the direction, angle, and turning speed of each corner must be determined, e.g., move 1 meter straight ahead and then move 120° to the left front. Meanwhile, to ensure the robot reaches the base quickly, the shortest of the candidate recharging paths must be selected. Optionally, if a dynamic obstacle lies on the recharging path, the robot must monitor the obstacle's state in real time while moving; when the obstacle moves, the recharging path must be adjusted promptly to avoid a collision.
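Selecting the shortest of the candidate recharging paths is a straightforward computation once each candidate is represented as a list of waypoints. A minimal sketch (2-D waypoints are an assumption for illustration; obstacle avoidance and turn constraints are handled upstream when the candidates are generated):

```python
def path_length(path):
    """Total Euclidean length of a waypoint path [(x, y), ...]."""
    return sum(((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5
               for (x1, y1), (x2, y2) in zip(path, path[1:]))

def shortest_recharge_path(paths):
    """Pick the shortest candidate recharging path, as required above."""
    return min(paths, key=path_length)
```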
Further, referring to FIG. 7, after the step of planning a recharging path according to the pose information and the environment image information, moving toward the charging base along the recharging path, and docking with the charging base to perform automatic recharging, the method includes:
Step S420: judging whether docking with the charging base is successful;
Step S421: if docking with the charging base is successful, receiving a docking success signal and ending the identification operation of the charging base.
While the robot and the charging base perform the docking operation, a detection unit checks in real time whether the docking operation has succeeded. When the robot receives the docking success signal sent by the charging base, it stops identifying the charging base with the time-of-flight camera. If the robot does not receive the docking success signal sent by the charging base, the current docking operation has failed, and the pose information of the charging base is re-identified based on the time-of-flight camera.
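The identify-then-retry behavior described above amounts to a small control loop. In this sketch, `identify_pose` and `attempt_dock` are hypothetical stand-ins for the robot's real perception and motion interfaces (the patent does not name such functions), and the retry cap is an added assumption:

```python
def dock_with_retries(identify_pose, attempt_dock, max_attempts=3):
    """Docking loop: re-identify the base pose until docking succeeds.

    identify_pose() re-identifies the charging-base pose with the ToF
    camera; attempt_dock(pose) returns True when the base reports a
    docking-success signal.
    """
    for _ in range(max_attempts):
        pose = identify_pose()
        if attempt_dock(pose):
            return True  # success signal received: stop identification
    return False  # give up after max_attempts failed dockings
```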
In this embodiment, the recharging path is set in advance from the image information of the robot's environment and the pose information of the charging base, so that the robot can dock with the charging base quickly for charging, which improves recharging efficiency.
In addition, referring to FIG. 8, this application also provides a robot automatic recharging apparatus, including:
a sending module 10, configured to send a recharging signal to a charging base when a recharging condition is met;
an identification module 20, configured to identify the charging base to obtain an identification image, and segment the identification image to obtain a target area, the target area being an area containing charging base information;
an acquisition module 30, configured to acquire a target point cloud corresponding to the target area, and match the target point cloud against a point cloud template to determine pose information of the charging base;
a docking module 40, configured to dock with the charging base according to the pose information to perform automatic recharging.
Further, the identification module 20 includes an acquisition unit and an identification unit;
the acquisition unit is configured to acquire a depth image and a grayscale image from the time-of-flight camera;
the identification unit is configured to perform shape recognition based on the outline of the charging base according to the depth image and the grayscale image to obtain the identification image.
Further, the acquisition module 30 includes an acquisition unit and a calculation unit;
the acquisition unit is configured to acquire the internal parameters of the time-of-flight camera;
the calculation unit is configured to calculate the point cloud coordinates of each pixel in the target area according to the internal parameters to obtain the target point cloud.
Further, the docking module 40 includes: an acquisition unit and a planning unit;
the acquisition unit is configured to acquire image information of the current environment;
the planning unit is configured to plan a recharging path according to the pose information and the environment image information, move toward the charging base along the recharging path, and dock with the charging base to perform automatic recharging.
Further, the planning unit includes: a judging subunit and a docking subunit;
the judging subunit is configured to judge whether docking with the charging base is successful;
the docking subunit is configured to, if docking with the charging base is successful, receive a docking success signal and end the identification operation of the charging base.
Further, the sending module 10 includes: a detection unit and a sending unit;
the detection unit is configured to enter a charging mode when it is detected that the current battery level is below a set battery threshold;
the sending unit is configured to send the recharging signal to the charging base according to the charging mode.
Further, the acquisition unit is also configured to acquire a charging base point cloud from the point cloud template, and match the target point cloud against the charging base point cloud;
the acquisition unit is also configured to determine the pose information of the charging base if the target point cloud matches the charging base point cloud.
The implementation of each module function of the above robot automatic recharging apparatus is similar to the processes in the above method embodiments, and is not repeated here.
In addition, this application also provides a robot, including a memory, a processor, and a robot automatic recharging program stored in the memory and runnable on the processor. When the robot meets the recharging condition, it sends a recharging signal to the charging base; a time-of-flight camera captures images of the charging base, the depth image and IR image are acquired from the time-of-flight camera, shape recognition based on the outline of the charging base is performed on the depth image and IR image to obtain the identification image, and the identification image is segmented to obtain the region of interest (ROI); the point cloud coordinates of each pixel in the ROI are then calculated from the internal parameters of the time-of-flight camera to obtain the target point cloud, which is matched against a point cloud template to determine the pose information of the charging base; and docking with the charging base is completed according to the pose information to realize automatic recharging of the robot. Identifying the charging base by time-of-flight technology effectively reduces the robot's power consumption, improves docking accuracy, and increases the robot's recharging efficiency.
In addition, this application also provides a computer-readable storage medium storing a robot automatic recharging program that, when executed by a processor, implements the steps of the robot automatic recharging method described above.
Those skilled in the art will understand that embodiments of this application may be provided as a method, a system, or a computer program product. Accordingly, this application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, this application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, and optical storage) containing computer-usable program code.
This application is described with reference to flowcharts and/or block diagrams of methods, devices (systems), and computer program products according to embodiments of this application. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks therein, can be implemented by computer program instructions. These computer program instructions may be provided to the processor of a general-purpose computer, special-purpose computer, embedded processor, or other programmable data processing device to produce a machine, such that the instructions executed by the processor of the computer or other programmable data processing device produce an apparatus for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing device to work in a particular manner, such that the instructions stored in that computer-readable memory produce an article of manufacture including an instruction apparatus that implements the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or other programmable data processing device, so that a series of operational steps is performed on the computer or other programmable device to produce computer-implemented processing, such that the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
It should be noted that, in the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of components or steps not listed in a claim. The word "a" or "an" preceding a component does not exclude the presence of a plurality of such components. This application can be implemented by means of hardware comprising several different components and by means of a suitably programmed computer. In a unit claim enumerating several means, several of these means may be embodied by one and the same item of hardware. The use of the words first, second, third, etc. does not denote any order; these words may be interpreted as names.
Although optional embodiments of this application have been described, those skilled in the art, once aware of the basic inventive concept, may make additional changes and modifications to these embodiments. The appended claims are therefore intended to be interpreted as including the optional embodiments and all changes and modifications falling within the scope of this application.
Obviously, those skilled in the art can make various changes and variations to this application without departing from its spirit and scope. If these modifications and variations of this application fall within the scope of the claims of this application and their technical equivalents, this application is also intended to include them.

Claims (10)

  1. A robot automatic recharging method, wherein the method comprises:
    sending a recharging signal to a charging base when a recharging condition is met;
    identifying the charging base to obtain an identification image, and segmenting the identification image to obtain a target area, the target area being an area containing information of the charging base;
    acquiring a target point cloud corresponding to the target area, and matching the target point cloud against a point cloud template to determine pose information of the charging base;
    docking with the charging base according to the pose information to perform automatic recharging.
  2. The robot automatic recharging method according to claim 1, wherein the step of identifying the charging base to obtain an identification image comprises:
    acquiring a depth image and a grayscale image from a time-of-flight camera;
    performing shape recognition based on the outline of the charging base according to the depth image and the grayscale image to obtain the identification image.
  3. The robot automatic recharging method according to claim 1 or 2, wherein the step of acquiring the target point cloud corresponding to the target area comprises:
    acquiring internal parameters of a time-of-flight camera;
    calculating point cloud coordinates of each pixel in the target area according to the internal parameters to obtain the target point cloud.
  4. The robot automatic recharging method according to any one of claims 1 to 3, wherein the step of docking with the charging base according to the pose information to perform automatic recharging comprises:
    acquiring image information of the current environment;
    planning a recharging path according to the pose information and the environment image information, moving toward the charging base along the recharging path, and docking with the charging base to perform automatic recharging.
  5. The robot automatic recharging method according to claim 4, wherein, after the step of planning a recharging path according to the pose information and the environment image information, moving toward the charging base along the recharging path, and docking with the charging base to perform automatic recharging, the method comprises:
    judging whether docking with the charging base is successful;
    if docking with the charging base is successful, receiving a docking success signal and ending the identification operation of the charging base.
  6. The robot automatic recharging method according to any one of claims 1 to 5, wherein the step of sending a recharging signal to the charging base when the recharging condition is met comprises:
    entering a charging mode when it is detected that the current battery level is below a set battery threshold;
    sending the recharging signal to the charging base according to the charging mode.
  7. The robot automatic recharging method according to any one of claims 1 to 6, wherein the step of matching the target point cloud against the point cloud template to determine the pose information of the charging base comprises:
    acquiring a charging base point cloud from the point cloud template, and matching the target point cloud against the charging base point cloud;
    if the target point cloud matches the charging base point cloud, determining the pose information of the charging base.
  8. A robot automatic recharging apparatus, wherein the apparatus comprises:
    a sending module, configured to send a recharging signal to a charging base when a recharging condition is met;
    an identification module, configured to identify the charging base to obtain an identification image, and segment the identification image to obtain a target area, the target area being an area containing information of the charging base;
    an acquisition module, configured to acquire a target point cloud corresponding to the target area, and match the target point cloud against a point cloud template to determine pose information of the charging base;
    a docking module, configured to dock with the charging base according to the pose information to perform automatic recharging.
  9. A robot, wherein the robot comprises a memory, a processor, and a robot automatic recharging program stored on the memory and running on the processor, and the processor implements the steps of the method according to any one of claims 1 to 7 when executing the robot automatic recharging program.
  10. A computer-readable storage medium, wherein the computer-readable storage medium stores a robot automatic recharging program, and when the robot automatic recharging program is executed by a processor, the steps of the method according to any one of claims 1 to 7 are implemented.
PCT/CN2021/123910 2020-10-14 2021-10-14 机器人自动回充方法、装置、机器人和存储介质 WO2022078467A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011095113.6 2020-10-14
CN202011095113.6A CN112346453A (zh) 2020-10-14 2020-10-14 机器人自动回充方法、装置、机器人和存储介质

Publications (1)

Publication Number Publication Date
WO2022078467A1 true WO2022078467A1 (zh) 2022-04-21

Family

ID=74360783

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/123910 WO2022078467A1 (zh) 2020-10-14 2021-10-14 机器人自动回充方法、装置、机器人和存储介质

Country Status (2)

Country Link
CN (1) CN112346453A (zh)
WO (1) WO2022078467A1 (zh)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115173890A (zh) * 2022-06-30 2022-10-11 珠海一微半导体股份有限公司 一种多机器人的回充控制方法、系统及芯片
CN116581850A (zh) * 2023-07-10 2023-08-11 深圳市森树强电子科技有限公司 一种智能识别类型的移动充电器及其充电方法
CN117137374A (zh) * 2023-10-27 2023-12-01 张家港极客嘉智能科技研发有限公司 基于计算机视觉的扫地机器人回充方法

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112346453A (zh) * 2020-10-14 2021-02-09 深圳市杉川机器人有限公司 机器人自动回充方法、装置、机器人和存储介质
CN113138596A (zh) * 2021-03-31 2021-07-20 深圳市优必选科技股份有限公司 机器人自动充电方法、系统、终端设备及存储介质
CN113675923B (zh) * 2021-08-23 2023-08-08 追觅创新科技(苏州)有限公司 充电方法、充电装置及机器人
CN113696180A (zh) * 2021-08-31 2021-11-26 千里眼(广州)人工智能科技有限公司 机器人自动回充方法、装置、存储介质及机器人系统
CN114296447B (zh) * 2021-12-07 2023-06-30 北京石头世纪科技股份有限公司 自行走设备的控制方法、装置、自行走设备和存储介质
CN114355889A (zh) * 2021-12-08 2022-04-15 上海擎朗智能科技有限公司 控制方法、机器人、机器人充电座及计算机可读存储介质
CN114983302B (zh) * 2022-06-28 2023-08-08 追觅创新科技(苏州)有限公司 姿态的确定方法、装置、清洁设备、存储介质及电子装置

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170105592A1 (en) * 2012-10-05 2017-04-20 Irobot Corporation Robot management systems for determining docking station pose including mobile robots and methods using same
CN108700880A (zh) * 2015-09-04 2018-10-23 罗伯特有限责任公司 自主移动机器人的基站的识别与定位
CN109579852A (zh) * 2019-01-22 2019-04-05 杭州蓝芯科技有限公司 基于深度相机的机器人自主定位方法及装置
CN109940606A (zh) * 2019-01-29 2019-06-28 中国工程物理研究院激光聚变研究中心 基于点云数据的机器人引导系统及方法
CN110632915A (zh) * 2018-06-21 2019-12-31 科沃斯机器人股份有限公司 机器人回充路径规划方法、机器人及充电系统
CN110793512A (zh) * 2019-09-11 2020-02-14 上海宾通智能科技有限公司 位姿识别方法、装置、电子设备和存储介质
CN111625005A (zh) * 2020-06-10 2020-09-04 浙江欣奕华智能科技有限公司 一种机器人充电方法、机器人充电的控制装置及存储介质
CN112346453A (zh) * 2020-10-14 2021-02-09 深圳市杉川机器人有限公司 机器人自动回充方法、装置、机器人和存储介质

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108284427B (zh) * 2017-11-24 2020-08-25 浙江国自机器人技术有限公司 安防机器人及其自动巡检方法
CN109648602A (zh) * 2018-09-11 2019-04-19 深圳优地科技有限公司 自动充电方法、装置及终端设备
CN111481112B (zh) * 2019-01-29 2022-04-29 北京奇虎科技有限公司 扫地机的回充对准方法、装置及扫地机
CN109730590B (zh) * 2019-01-30 2023-08-25 深圳银星智能集团股份有限公司 清洁机器人以及清洁机器人自动返回充电的方法
CN110378285A (zh) * 2019-07-18 2019-10-25 北京小狗智能机器人技术有限公司 一种充电座的识别方法、装置、机器人及存储介质
CN111474928B (zh) * 2020-04-02 2023-08-01 上海高仙自动化科技发展有限公司 机器人控制方法、机器人、电子设备和可读存储介质

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170105592A1 (en) * 2012-10-05 2017-04-20 Irobot Corporation Robot management systems for determining docking station pose including mobile robots and methods using same
CN108700880A (zh) * 2015-09-04 2018-10-23 罗伯特有限责任公司 自主移动机器人的基站的识别与定位
CN110632915A (zh) * 2018-06-21 2019-12-31 科沃斯机器人股份有限公司 机器人回充路径规划方法、机器人及充电系统
CN109579852A (zh) * 2019-01-22 2019-04-05 杭州蓝芯科技有限公司 基于深度相机的机器人自主定位方法及装置
CN109940606A (zh) * 2019-01-29 2019-06-28 中国工程物理研究院激光聚变研究中心 基于点云数据的机器人引导系统及方法
CN110793512A (zh) * 2019-09-11 2020-02-14 上海宾通智能科技有限公司 位姿识别方法、装置、电子设备和存储介质
CN111625005A (zh) * 2020-06-10 2020-09-04 浙江欣奕华智能科技有限公司 一种机器人充电方法、机器人充电的控制装置及存储介质
CN112346453A (zh) * 2020-10-14 2021-02-09 深圳市杉川机器人有限公司 机器人自动回充方法、装置、机器人和存储介质

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115173890A (zh) * 2022-06-30 2022-10-11 珠海一微半导体股份有限公司 一种多机器人的回充控制方法、系统及芯片
CN115173890B (zh) * 2022-06-30 2024-02-20 珠海一微半导体股份有限公司 一种多机器人的回充控制方法、系统及芯片
CN116581850A (zh) * 2023-07-10 2023-08-11 深圳市森树强电子科技有限公司 一种智能识别类型的移动充电器及其充电方法
CN116581850B (zh) * 2023-07-10 2024-01-26 深圳市森树强电子科技有限公司 一种智能识别类型的移动充电器及其充电方法
CN117137374A (zh) * 2023-10-27 2023-12-01 张家港极客嘉智能科技研发有限公司 基于计算机视觉的扫地机器人回充方法
CN117137374B (zh) * 2023-10-27 2024-01-26 张家港极客嘉智能科技研发有限公司 基于计算机视觉的扫地机器人回充方法

Also Published As

Publication number Publication date
CN112346453A (zh) 2021-02-09

Similar Documents

Publication Publication Date Title
WO2022078467A1 (zh) 机器人自动回充方法、装置、机器人和存储介质
CN108885459B (zh) 导航方法、导航系统、移动控制系统及移动机器人
TWI686746B (zh) 識別車輛受損部件的方法、裝置、伺服器、客戶端及系統
US11308347B2 (en) Method of determining a similarity transformation between first and second coordinates of 3D features
US20200309534A1 (en) Systems and methods for robust self-relocalization in a pre-built visual map
US20220414910A1 (en) Scene contour recognition method and apparatus, computer-readable medium, and electronic device
US20110150300A1 (en) Identification system and method
US20230344979A1 (en) Wide viewing angle stereo camera apparatus and depth image processing method using the same
US20210174538A1 (en) Control apparatus, object detection system, object detection method and program
CN112258567A (zh) 物体抓取点的视觉定位方法、装置、存储介质及电子设备
US10902610B2 (en) Moving object controller, landmark, and moving object control method
WO2023173950A1 (zh) 障碍物检测方法、移动机器人及机器可读存储介质
JP2015118442A (ja) 情報処理装置、情報処理方法およびプログラム
WO2022028110A1 (zh) 自移动设备的地图创建方法、装置、设备及存储介质
US10753736B2 (en) Three-dimensional computer vision based on projected pattern of laser dots and geometric pattern matching
Shen et al. A multi-view camera-projector system for object detection and robot-human feedback
US20210348927A1 (en) Information processing apparatus, information processing method, and recording medium
JP2020201922A (ja) 拡張現実アプリケーションに関するシステム及び方法
WO2022083529A1 (zh) 一种数据处理方法及装置
CN116136408A (zh) 室内导航方法、服务器、装置和终端
CN114494857A (zh) 一种基于机器视觉的室内目标物识别和测距方法
Wang et al. Development of a vision system and a strategy simulator for middle size soccer robot
Nowak et al. Vision-based positioning of electric buses for assisted docking to charging stations
EP3336801A1 (en) Method and apparatus for constructing lighting environment representations of 3d scenes
Nowak et al. Leveraging object recognition in reliable vehicle localization from monocular images

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21879509

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205 DATED 30.08.2023)