CN114744721A - Charging control method of robot, terminal device and storage medium - Google Patents


Info

Publication number
CN114744721A
Authority
CN
China
Prior art keywords
area
robot
image
determining
charging
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210460220.7A
Other languages
Chinese (zh)
Inventor
赵勇胜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ubicon Qingdao Technology Co., Ltd.
Original Assignee
Ubtech Robotics Corp
Application filed by Ubtech Robotics Corp
Priority to CN202210460220.7A
Publication of CN114744721A
Legal status: Pending

Classifications

    • H ELECTRICITY
    • H02 GENERATION; CONVERSION OR DISTRIBUTION OF ELECTRIC POWER
    • H02J CIRCUIT ARRANGEMENTS OR SYSTEMS FOR SUPPLYING OR DISTRIBUTING ELECTRIC POWER; SYSTEMS FOR STORING ELECTRIC ENERGY
    • H02J7/00 Circuit arrangements for charging or depolarising batteries or for supplying loads from batteries
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09F DISPLAYING; ADVERTISING; SIGNS; LABELS OR NAME-PLATES; SEALS
    • G09F13/00 Illuminated signs; Luminous advertising
    • G09F13/16 Signs formed of or incorporating reflecting elements or surfaces, e.g. warning signs having triangular or other geometrical shape
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10048 Infrared image
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T90/00 Enabling technologies or technologies with a potential or indirect contribution to GHG emissions mitigation
    • Y02T90/10 Technologies relating to charging of electric vehicles
    • Y02T90/12 Electric charging stations

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Geometry (AREA)
  • Power Engineering (AREA)
  • Manipulator (AREA)

Abstract

The application is applicable to the technical field of image processing and provides a charging control method for a robot, a terminal device, and a storage medium. The method comprises: acquiring a first image captured by an infrared camera on the robot; determining, in the first image, a first area corresponding to a reflective marker on the charging pile that charges the robot; determining the pose of the robot relative to the charging pile based on the position of the first area on the first image and the position of the reflective marker on the charging pile; and finally controlling the robot to charge at the charging pile based on that pose. By capturing the first image with an infrared camera and identifying the area of the reflective marker in it, the application avoids the problem that the charging pile cannot be accurately identified when an object with a similar appearance is nearby, so the robot's pose relative to the charging pile is obtained accurately and the robot can be charged reliably.

Description

Charging control method of robot, terminal device and storage medium
Technical Field
The present application belongs to the field of image processing technologies, and in particular, to a charging control method for a robot, a terminal device, and a storage medium.
Background
Robots are mostly battery-powered, so their movement is not constrained by a power supply line and they can move freely. To further improve a robot's degree of autonomy, the robot can achieve automatic charging by detecting the position of its charging pile.
At present, a laser radar (lidar) is often installed on the robot: the lidar identifies the position of the charging pile, and the pose relationship between the robot and the charging pile is then calculated from that position so that the robot can be controlled to charge automatically. However, if an object with an appearance similar to the charging pile is located near it, the lidar may identify that object as the charging pile, and the robot then cannot charge.
Disclosure of Invention
The embodiments of the application provide a charging control method for a robot, a terminal device, and a storage medium, which can solve the problem that the robot cannot be charged because the charging pile is identified inaccurately.
In a first aspect, an embodiment of the present application provides a charging control method for a robot, where at least one reflective mark is disposed on a sidewall of a charging pile for charging the robot, and the method includes:
acquiring a first image acquired by an infrared camera on the robot, wherein the first image comprises the reflective mark;
determining a first area of the reflective marker in the first image;
determining the pose of the robot relative to the charging pile based on the position of the first area on the first image and the preset position of the reflective mark on the charging pile;
and controlling the robot to charge on the charging pile based on the pose of the robot relative to the charging pile.
In a second aspect, an embodiment of the present application provides a charging control apparatus for a robot, including:
the image acquisition module is used for acquiring a first image acquired by an infrared camera on the robot, wherein the first image comprises the reflective mark;
the first area determining module is used for determining a first area of the reflective marker in the first image;
the pose determining module is used for determining the pose of the robot relative to the charging pile based on the position of the first area on the first image and the preset position of the reflective mark on the charging pile;
and the control module is used for controlling the robot to charge on the charging pile based on the pose of the robot relative to the charging pile.
In a third aspect, an embodiment of the present application provides a terminal device, including: a memory, a processor, and a computer program stored in the memory and executable on the processor, the processor implementing the charging control method of the robot according to any one of the first aspect when executing the computer program.
In a fourth aspect, the present application provides a computer-readable storage medium, where a computer program is stored, and when executed by a processor, the computer program implements the charging control method for the robot according to any one of the first aspect.
In a fifth aspect, the present application provides a computer program product, when the computer program product runs on a terminal device, the terminal device is caused to execute the charging control method for the robot in any one of the first aspect.
Compared with the prior art, the embodiment of the first aspect has the following beneficial effects. A first image captured by an infrared camera on the robot is acquired; a first area of a reflective marker on the charging pile is determined in the first image; the pose of the robot relative to the charging pile is determined based on the position of the first area on the first image and the position of the reflective marker on the charging pile; and the robot is controlled to charge at the charging pile based on that pose. Unlike approaches that determine the position of the charging pile with lidar, which can identify the pile inaccurately, the application captures the first image with an infrared camera and identifies the area of the reflective marker in it. This avoids misidentification when objects with an appearance similar to the charging pile are nearby, so the pose of the robot relative to the charging pile is obtained accurately and the robot is charged correctly.
It is understood that the beneficial effects of the second aspect to the fifth aspect can be referred to the related description of the first aspect, and are not described herein again.
Drawings
To more clearly illustrate the technical solutions in the embodiments of the present application, the drawings required by the embodiments or by the description of the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application; other drawings can be obtained from them by those skilled in the art without inventive labor.
Fig. 1 is a schematic view of an application scenario of a charging control method for a robot according to an embodiment of the present application;
fig. 2 is a schematic flowchart of a charging control method for a robot according to an embodiment of the present disclosure;
FIG. 3 is a schematic diagram of a first image provided by an embodiment of the present application;
FIG. 4 is a flowchart illustrating a method for determining a first region from a first image according to an embodiment of the present disclosure;
fig. 5 is a flowchart illustrating a method for determining a second area according to an embodiment of the present application;
FIG. 6 is a flowchart illustrating a method for determining a first area from a second area according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of a charging control apparatus for a robot according to an embodiment of the present disclosure;
fig. 8 is a schematic structural diagram of a terminal device according to an embodiment of the present application.
Detailed Description
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It should also be understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
Furthermore, in the description of the present application and the appended claims, the terms "first," "second," "third," and the like are used for distinguishing between descriptions and not necessarily for describing or implying relative importance.
Reference throughout this specification to "one embodiment" or "some embodiments," or the like, means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the present application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," or the like, in various places throughout this specification are not necessarily all referring to the same embodiment, but rather mean "one or more but not all embodiments" unless specifically stated otherwise.
When the lidar on a robot detects the position of the charging pile, the lidar scans the charging pile to obtain point cloud data, and the scanned point cloud is matched against the cross-sectional shape of the charging pile at that height to obtain the relative position of the robot and the charging pile. However, if there is a moving object in the environment around the charging pile, the moving object interferes with the identification of the charging pile and can cause it to be misidentified.
To address these problems, the application provides a charging control method for a robot: an infrared camera captures images of reflective markers of preset shapes arranged on the charging pile, and the pose of the robot relative to the charging pile is determined from the images of those markers, avoiding inaccurate identification of the charging pile caused by interfering objects around it.
Fig. 1 is a schematic view of an application scenario of the charging control method of a robot according to an embodiment of the present application, in which the robot uses a charging pile to charge automatically. The infrared camera 10 captures a first image that contains the reflective marker arranged on the charging pile. The processor 20 obtains the first image from the infrared camera 10, processes it to obtain the pose of the robot relative to the charging pile, and controls the robot to charge at the charging pile according to that pose.
Fig. 2 shows a schematic flowchart of a charging control method of a robot provided in the present application, and with reference to fig. 2, the method is described in detail as follows:
s101, acquiring a first image acquired by an infrared camera on the robot, wherein the first image comprises the reflective mark.
In this embodiment, at least one reflective marker is arranged on a side wall of the charging pile that charges the robot. The shape of the reflective markers can be chosen as needed, for example circular, rectangular, or square. If there are several reflective markers, their shapes may be the same or different, and they can be arranged according to a preset rule: for example, one marker in the first row and three markers in the second row, or the markers arranged with left-right symmetry.
Specifically, a first coordinate system is established on the face of the charging pile that carries the reflective markers, with the height direction of the pile (the direction of gravity) as the vertical axis (Z axis), its width direction as the horizontal axis (X axis), and its thickness direction as the third axis (Y axis); the X, Y, and Z axes are mutually perpendicular. The coordinates (X-axis and Z-axis coordinates) of the center point of each reflective marker in the first coordinate system are determined, and the center-point coordinates of all reflective markers form the second matrix, as illustrated in the sketch below.
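For illustration, the second matrix can be held as a plain array of marker center coordinates. The following is a minimal sketch in Python with NumPy; the coordinate values are hypothetical and only illustrate the example layout above (one marker in the first row, three in the second), with Y = 0 because the markers lie on the front face of the pile.

```python
import numpy as np

# Hypothetical center points of four reflective markers in the first
# coordinate system (X: pile width, Y: pile thickness, Z: pile height),
# in meters. Together they form the "second matrix".
MARKER_POINTS_3D = np.array([
    [ 0.00, 0.0, 0.30],   # row 1, single marker
    [-0.10, 0.0, 0.20],   # row 2, left marker
    [ 0.00, 0.0, 0.20],   # row 2, middle marker
    [ 0.10, 0.0, 0.20],   # row 2, right marker
], dtype=np.float32)
```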
In this embodiment, the infrared camera blocks ambient-light interference and captures images only in the infrared band. The camera includes an infrared fill light that emits infrared rays, so a first image can be captured even in poor lighting; in Fig. 3, for example, the white regions inside the ellipse are the reflective markers. In addition, the infrared illumination reaches far enough that the robot can capture a first image of the charging pile from more than ten meters away. The first image may be a grayscale image.
S102, determining a first area of the reflective mark in the first image.
In this embodiment, since the reflective mark has a reflective effect, the reflective mark region has a higher brightness than other regions, that is, a higher gray value, in the first image.
Specifically, the first image may be input to a trained convolutional neural network to obtain the first region.
S103, determining the pose of the robot relative to the charging pile based on the position of the first area on the first image and the preset position of the reflective marker on the charging pile.
Specifically, a first matrix is generated based on coordinates of a center point of the first region on the first image; generating a second matrix based on the coordinate of the central point of the light-reflecting mark on the charging pile; and determining the pose of the robot relative to the charging pile based on the first matrix, the second matrix, the internal reference matrix of the infrared camera and the distortion parameters of the infrared camera.
Specifically, the pose of the infrared camera relative to the charging pile is determined based on the first matrix, the second matrix, the internal reference matrix of the infrared camera and the distortion parameters of the infrared camera; and determining the pose of the robot relative to the charging pile based on the position of the infrared camera on the robot.
In this embodiment, a second coordinate system is established in the first image. Specifically, the lower left corner of the first image may be used as an origin, the lower frame of the first image may be used as a horizontal axis, and the left frame of the first image may be used as a vertical axis to establish the second coordinate system.
The coordinates of the center point of each first area in the second coordinate system are determined, and the center-point coordinates of all the first areas form the first matrix. When the first matrix is built, each first area is matched to its reflective marker on the charging pile according to its position, and the first matrix is generated following the same creation rule as the second matrix: for example, if the second matrix was created by traversing the reflective markers counterclockwise, the first matrix must also be created by traversing the first areas counterclockwise, with the starting first area determined by the first reflective marker in the second matrix.
In this embodiment, the infrared camera is calibrated to obtain an internal parameter matrix and a distortion parameter of the infrared camera. In addition, the internal parameter matrix and distortion parameters of the infrared camera can be prestored or obtained from an external storage device.
In this embodiment, the position of the infrared camera on the robot may be preset, for example, coordinates of the infrared camera on the robot.
In this embodiment, the pose and spatial coordinates of the infrared camera relative to the charging pile are calculated from the first matrix, the second matrix, the internal reference matrix of the infrared camera, and the distortion parameters of the infrared camera using a PnP (here P4P) algorithm, which yields a rotation vector and a translation matrix. Negating the parameters of the rotation vector and inverting the translation matrix gives a 6-element pose comprising coordinates, a rotation angle, a left-right swing angle (0 degrees), and an up-down swing angle (0 degrees), i.e., the pose relationship between the charging pile and the infrared camera.
The PnP (Perspective-n-Point) algorithm solves for motion from 3D-2D point correspondences. In short, given the three-dimensional coordinates of n points (relative to some specified coordinate system A) and their two-dimensional projections in the image, it estimates the pose of the camera (i.e., the pose of the camera in coordinate system A).
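As a concrete illustration, the pose computation above can be sketched with OpenCV's solvePnP. This is a sketch under assumptions: the embodiment names no library, the iterative solver flag is one reasonable choice for a four-point (P4P) problem, and the inversion at the end mirrors the negate-and-invert step described above.

```python
import cv2
import numpy as np

def pose_from_markers(pts_2d, pts_3d, camera_matrix, dist_coeffs):
    """Estimate the camera pose relative to the charging pile via PnP.

    pts_2d: (N, 2) first matrix (marker centers in the image).
    pts_3d: (N, 3) second matrix (marker centers on the pile),
            ordered consistently with pts_2d.
    """
    ok, rvec, tvec = cv2.solvePnP(
        pts_3d.astype(np.float32), pts_2d.astype(np.float32),
        camera_matrix, dist_coeffs, flags=cv2.SOLVEPNP_ITERATIVE)
    if not ok:
        return None
    # solvePnP maps pile coordinates into the camera frame; inverting the
    # transform gives the camera's pose in the pile frame.
    R, _ = cv2.Rodrigues(rvec)
    cam_pos_in_pile = -R.T @ tvec
    return R.T, cam_pos_in_pile
```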
And S104, controlling the robot to charge on the charging pile based on the pose of the robot relative to the charging pile.
Specifically, when the robot charges at the charging pile it must stand at a certain position relative to the pile. Therefore, after the pose of the robot relative to the charging pile is determined, the pose of the robot relative to the charging position is determined, and the robot is controlled to reach the charging position and charge there.
In the embodiment of the application, a first image captured by an infrared camera on the robot is acquired; a first area of a reflective marker on the charging pile is determined in the first image; the pose of the robot relative to the charging pile is determined based on the position of the first area on the first image and the position of the reflective marker on the charging pile; and the robot is controlled to charge at the charging pile based on that pose. Unlike approaches that determine the position of the charging pile with lidar, which can identify the pile inaccurately, the application captures the first image with an infrared camera and identifies the area of the reflective marker in it, avoiding misidentification when objects with a similar appearance are near the charging pile; the pose of the robot relative to the pile is therefore obtained accurately, and the robot is charged correctly.
As shown in fig. 4, in a possible implementation manner, the implementation process of step S102 may include:
S1021, determining target pixel points among the pixel points of the first image, where a target pixel point is one whose gray value is greater than a first preset threshold.
In this embodiment, each pixel point in the first image corresponds to a gray scale value. And searching the gray values which are larger than the first preset threshold value in all the gray values, and recording the gray values which are larger than the first preset threshold value as the first gray values. The first preset threshold may be set as desired.
And searching pixel points corresponding to the first gray value, wherein the pixel points corresponding to the first gray value are target pixel points.
In addition, binarization is performed on the first image: the first gray values are set to 255 and all other gray values are set to 0, yielding a binary image, as in the sketch below.
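A minimal sketch of this thresholding step with OpenCV follows; the threshold value 200 and the image file name are hypothetical stand-ins for the first preset threshold and the captured first image.

```python
import cv2

THRESHOLD = 200  # hypothetical "first preset threshold"

# Load the grayscale first image and keep only the bright pixels:
# pixels above THRESHOLD (target pixels) become 255, the rest 0.
gray = cv2.imread("first_image.png", cv2.IMREAD_GRAYSCALE)
_, binary = cv2.threshold(gray, THRESHOLD, 255, cv2.THRESH_BINARY)
```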
S1022, determining a second area in the first image based on the position of the target pixel point in the first image.
In this embodiment, if two target pixel points are adjacent in the first image, they belong to the same third area. Adjacent pixel points may be vertically or horizontally adjacent.
For example, if the target pixel point a is a pixel point in the second row and the third column in the first image and the target pixel point B is a pixel point in the second row and the fourth column, the target pixel point a and the target pixel point B are adjacent pixel points, and the target pixel point a and the target pixel point B are in the same third region.
And if a non-target pixel point exists between the two target pixel points, the two target pixel points are in different third areas.
For example, if the target pixel point a is a pixel point in the second row and the third column in the first image and the target pixel point B is a pixel point in the second row and the fifth column, the target pixel point a and the target pixel point B are non-adjacent pixel points, and the target pixel point a and the target pixel point B are in different third regions.
In this embodiment, after all the third regions are determined according to all the target pixel points, the perimeter and/or the area of the third regions are used for re-screening, so as to obtain the second region in the third regions.
After the binary image is obtained, a white area in the binary image is a third area.
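The grouping of target pixels into third areas can be sketched with OpenCV contour extraction over the binary image from the previous snippet. One assumption to note: findContours groups pixels with 8-connectivity, whereas the embodiment describes 4-adjacency (up/down and left/right); for well-separated bright markers the two give the same regions.

```python
import cv2

# Each external contour of the binary image delimits one connected white
# region, i.e. one candidate "third area".
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_NONE)
third_areas = list(contours)
```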
S1023, if the number of the second areas is larger than or equal to the number of the reflective marks, determining the first area from the second areas based on coordinates of target pixel points on contour lines of the second areas.
In this embodiment, whether the shape of the second region matches the shape of the reflective marker can be determined according to the coordinates of the target pixel point on the contour line of the second region, and then the first region is determined according to the second region matching the shape of the reflective marker.
In this embodiment, if the number of the second areas is smaller than the number of the reflective marks, the first image is discarded, which means that the first area cannot be determined according to the first image, and another first image needs to be obtained again.
In this embodiment, since the gray value of the region of the reflective mark in the first image is different from the gray values of the other regions, the second region may be determined according to the gray value, and then the first region may be determined more accurately according to the coordinates of the target pixel point on the contour line of the second region.
As shown in fig. 5, in a possible implementation manner, the implementation process of step S1022 may include:
s201, determining a third area in the first image based on the position of the target pixel point in the first image.
Specifically, please refer to the description of the third area obtained in step S1022, which is not described herein again.
S202, calculating first data of the third area, wherein the first data comprises area and/or perimeter.
Specifically, the area and/or perimeter of the third region is calculated according to the coordinates of the target pixel point on the contour of the third region.
S203, determining second data meeting preset requirements in the first data, wherein a third area corresponding to the second data is the second area.
In this embodiment, when the first data includes the area, the preset requirement includes that the area of the third region is within a first preset interval, when the first data includes the perimeter, the preset requirement includes that the perimeter of the third region is within a second preset interval, and when the first data includes the area and the perimeter, the preset requirement includes that the area of the third region is within the first preset interval and the perimeter of the third region is within the second preset interval.
In this embodiment, after the third areas are determined from the positions of the target pixel points, they are screened using their area and/or perimeter: third areas whose area and/or perimeter is too small are removed, and the remaining ones are the second areas. Screening the third areas through the first data yields more accurate candidate areas for the reflective markers, as in the sketch below.
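The area/perimeter screening might look like the sketch below; the interval bounds are hypothetical, since the patent leaves the first and second preset intervals unspecified.

```python
import cv2

AREA_RANGE = (30.0, 5000.0)      # hypothetical first preset interval (px^2)
PERIMETER_RANGE = (20.0, 400.0)  # hypothetical second preset interval (px)

# Keep only third areas whose area and perimeter fall inside the intervals;
# the survivors are the "second areas".
second_areas = []
for c in third_areas:
    area = cv2.contourArea(c)
    perimeter = cv2.arcLength(c, closed=True)
    if (AREA_RANGE[0] <= area <= AREA_RANGE[1]
            and PERIMETER_RANGE[0] <= perimeter <= PERIMETER_RANGE[1]):
        second_areas.append(c)
```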
As shown in fig. 6, in a possible implementation manner, the implementation process of step S1023 may include:
s301, determining a fourth area matched with the shape of the reflective marker from the second area based on the coordinates of the target pixel point on the outline of the second area.
Optionally, based on the coordinates of the target pixel point on the contour of the second region, the perimeter and the area of the second region are calculated; calculating a first ratio of a perimeter of the second region to an area of the second region; if the first ratio is within a third preset interval, the second area corresponding to the first ratio within the third preset interval is the fourth area.
In this embodiment, the ratio of the perimeter to the area of the reflective marks with different shapes is different, for example, the ratio of the perimeter to the area of the reflective mark with a circular shape is different from the ratio of the perimeter to the area of the reflective mark with a rectangular shape. Therefore, whether the shape of the second region matches the shape of the retroreflective sign can be determined using the ratio of the perimeter to the area of the second region.
And if the first ratio is not in the third preset interval, removing a second area corresponding to the first ratio which is not in the third preset interval.
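The perimeter-to-area test can be sketched as follows. The third preset interval bounds are hypothetical, and note that this ratio is scale dependent (for a circle of radius r it is 2/r, for a square of side a it is 4/a), so the interval must be tuned to the expected marker size in the image.

```python
import cv2

RATIO_RANGE = (0.05, 0.5)  # hypothetical third preset interval

# Keep second areas whose perimeter-to-area ratio falls in the interval;
# the survivors are candidate "fourth areas" matching the marker shape.
fourth_areas = []
for c in second_areas:
    area = cv2.contourArea(c)
    if area == 0:
        continue
    ratio = cv2.arcLength(c, closed=True) / area
    if RATIO_RANGE[0] <= ratio <= RATIO_RANGE[1]:
        fourth_areas.append(c)
```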
Optionally, determining smoothness of the contour line of the second region based on coordinates of pixel points on the contour line of the second region; if the smoothness is within a fourth preset interval, a second area corresponding to the smoothness within the fourth preset interval is the fourth area.
In this embodiment, since contour lines of reflective markers of different shapes have different smoothness, whether a second area matches the shape of a reflective marker can be determined from the smoothness of its contour line. Specifically, the coordinates of the pixel points on the contour of the second region are input into a calculation model that outputs the smoothness of the contour line.
If the smoothness is not in the fourth preset interval, it is indicated that the second area corresponding to the smoothness is not matched with the shape of the light reflecting mark, and the second area corresponding to the smoothness which is not in the fourth preset interval is omitted.
Optionally, if there are multiple reflective markers, the coordinates of the center point of each second region are determined from the coordinates of the target pixel points on its contour line, and a third invariant moment is obtained for each second region from its center-point coordinates; for example, with 3 second regions, each second region corresponds to one third invariant moment. If the similarity between a third invariant moment and the invariant moment of the preset reflective marker is greater than a preset value, the corresponding second area is a fourth area.
In this embodiment, the main idea of invariant moments is to use moments of a region that are insensitive to transformations as shape features. Invariant moments are sets of moments computed from a digital image, typically used to describe its global features and to provide geometric information such as size, position, orientation, and shape. The invariant moments can be first-order, second-order, third-order, Hu moments (see "Visual pattern recognition by moment invariants"), and so on.
In this embodiment, if the reflective markers have different shapes (for example, two circular markers and two square markers), the shape type of each fourth area also needs to be determined; specifically, each preset interval corresponds to one shape type.
For example, if the third predetermined section includes a first section and a second section, the shape type corresponding to the first section is a circle, and the shape type corresponding to the second section is a rectangle. The first ratio corresponding to the fourth area A is in the first interval, the first ratio corresponding to the fourth area B is in the first interval, and the first ratio corresponding to the fourth area C is in the second interval. The shape types of the fourth area A and the fourth area B are both circular, and the shape type of the fourth area C is rectangular.
S302, if the number of the fourth areas is equal to the number of the reflective markers, obtaining a first invariant moment based on the coordinates of the center points of the fourth areas.
In this embodiment, if the number of the fourth areas is equal to the number of the reflective marks, it may be determined whether the fourth area is the first area. And if the number of the fourth areas is less than or greater than the number of the reflective marks, the first image is discarded, and the first area is not searched continuously.
In this embodiment, when there are multiple reflective markers with different shapes, the first invariant moment is obtained from the coordinates of the center points of the fourth areas only if the number of fourth areas equals the number of reflective markers and the count of each shape type among the fourth areas matches the count of that shape type among the reflective markers.
For example, there are 4 retro-reflective markers, two circular, two rectangular. The number of the fourth areas is 4, the shape types of the fourth areas comprise two circles and two rectangles, and the first moment-invariant moment is obtained based on the coordinates of the center points of the fourth areas.
In the present embodiment, the first invariant moment is calculated from the coordinates of the center points of all the fourth regions; for example, with 4 fourth regions, the first invariant moment is obtained from the 4 center-point coordinates. The first invariant moment reflects the invariant moment of the shape formed by the center points of all the fourth regions.
And S303, calculating the similarity of the first invariant moment and a preset invariant moment, wherein the preset invariant moment is determined based on the coordinate of the central point of the light reflecting mark on the charging pile.
In this embodiment, the first invariant moment and the preset invariant moment are input into a similarity calculation model, which outputs the similarity between them.
And S304, if the similarity is greater than a second preset threshold, the fourth area is the first area corresponding to the light reflecting mark.
In this embodiment, the second preset threshold may be set as needed. Determining the first areas through the similarity between the first invariant moment and the preset invariant moment checks whether the fourth areas are distributed in the same way as the reflective markers, and hence whether the fourth areas really are the areas where the markers are located, which guarantees the accuracy of the determined first areas.
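The invariant-moment check of steps S302 to S304 might look like the sketch below. Both choices here are assumptions: Hu moments are computed over the polygon formed by the center points (which requires that the centers not be collinear), and the similarity is a log-scale measure analogous to cv2.matchShapes, since the embodiment does not specify its similarity calculation model.

```python
import cv2
import numpy as np

def centers_hu_moments(regions):
    """Hu moments of the shape formed by the region center points
    (a sketch of the "first invariant moment" of S302)."""
    centers = []
    for c in regions:
        m = cv2.moments(c)
        centers.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
    pts = np.array(centers, dtype=np.float32).reshape(-1, 1, 2)
    return cv2.HuMoments(cv2.moments(pts)).flatten()

def moment_similarity(hu_a, hu_b):
    """Log-scale similarity between two sets of Hu moments (assumed model)."""
    a = np.sign(hu_a) * np.log10(np.abs(hu_a) + 1e-30)
    b = np.sign(hu_b) * np.log10(np.abs(hu_b) + 1e-30)
    return 1.0 / (1.0 + float(np.abs(a - b).sum()))

# The fourth areas are accepted as first areas when the similarity exceeds
# the second preset threshold (0.8 is a hypothetical value):
# accept = moment_similarity(centers_hu_moments(fourth_areas), preset_hu) > 0.8
```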
In a possible implementation manner, the method may further include:
s10, acquiring a first image acquired by an infrared camera on the robot, wherein the first image comprises the reflective mark.
And S20, determining target pixel points of the first image, wherein the gray value of the target pixel points is greater than a first preset threshold value.
S30, determining a third area in the first image based on the position of the target pixel point in the first image.
S40, calculating first data of the third area, wherein the first data comprises area and/or perimeter. And determining second data meeting preset requirements in the first data, wherein a third area corresponding to the second data is the second area.
And S50, if the number of the second areas is larger than or equal to the number of the reflective marks, determining a fourth area matched with the shape of the reflective marks from the second areas based on the coordinates of target pixel points on the outlines of the second areas.
And S60, if the number of the fourth areas is equal to the number of the light reflecting marks, obtaining a first invariant moment based on the coordinates of the center point of the fourth areas. Calculating the similarity of the first invariant moment and a preset invariant moment, wherein the preset invariant moment is determined based on the coordinate of the central point of the light reflecting mark on the charging pile; and if the similarity is greater than a second preset threshold value, the fourth area is the first area corresponding to the reflective mark.
S70, generating a first matrix based on the coordinates of the center point of the first region on the first image.
And S80, generating a second matrix based on the coordinates of the central point of the reflective mark on the charging pile.
S90, determining the pose of the infrared camera relative to the charging pile based on the first matrix, the second matrix, the internal reference matrix of the infrared camera and the distortion parameters of the infrared camera;
and S100, determining the pose of the robot relative to the charging pile based on the position of the infrared camera on the robot, and controlling the robot to charge on the charging pile based on the pose of the robot relative to the charging pile.
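Putting the steps together, an end-to-end sketch of S10 through S100 might look as follows. It reuses the hypothetical thresholds from the earlier snippets and reduces the shape and invariant-moment screening of S50 and S60 to a placeholder; the 2D/3D correspondence ordering of S70 and S80 is assumed to be handled by the caller.

```python
import cv2
import numpy as np

def charging_pose(gray, camera_matrix, dist_coeffs, marker_pts_3d):
    """Sketch: grayscale IR image -> pose of the pile in the camera frame.

    Returns None when the image must be discarded (too few candidates).
    """
    _, binary = cv2.threshold(gray, 200, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_NONE)
    candidates = [c for c in contours if 30 <= cv2.contourArea(c) <= 5000]
    if len(candidates) < len(marker_pts_3d):
        return None  # S50: fewer regions than markers, discard the image
    # Placeholder for the shape and invariant-moment matching of S50/S60:
    candidates = candidates[:len(marker_pts_3d)]
    centers = []
    for c in candidates:
        m = cv2.moments(c)
        centers.append([m["m10"] / m["m00"], m["m01"] / m["m00"]])
    pts_2d = np.array(centers, dtype=np.float32)  # S70: the first matrix
    # S90: PnP with the intrinsics and distortion parameters; rvec/tvec give
    # the pile pose in the camera frame. The robot pose then follows from
    # the camera's known mounting position on the robot (S100).
    ok, rvec, tvec = cv2.solvePnP(marker_pts_3d, pts_2d,
                                  camera_matrix, dist_coeffs)
    return (rvec, tvec) if ok else None
```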
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application.
Fig. 7 shows a block diagram of a charging control device of a robot according to an embodiment of the present application, which corresponds to the charging control method of a robot according to the above-described embodiment.
Referring to fig. 7, the apparatus 400 may include: an image acquisition module 410, a first region determination module 420, a pose determination module 430, and a control module 440.
The image acquisition module 410 is configured to acquire a first image acquired by an infrared camera on the robot, where the first image includes the reflective mark;
a first region determining module 420, configured to determine a first region of the reflective marker in the first image;
a pose determination module 430, configured to determine a pose of the robot with respect to the charging pile based on a position of the first area on the first image and a preset position of the reflective marker on the charging pile;
and the control module 440 is configured to control the robot to charge the charging pile based on the pose of the robot relative to the charging pile.
In a possible implementation manner, the first region determining module 420 may specifically be configured to:
determining target pixel points of which the gray values are larger than a first preset threshold value in the pixel points of the first image;
determining a second region in the first image based on the position of the target pixel point in the first image;
and if the number of the second areas is larger than or equal to that of the reflective marks, determining the first area from the second areas based on the coordinates of target pixel points on the contour lines of the second areas.
In a possible implementation manner, the first region determining module 420 may specifically be configured to:
determining a third area in the first image based on the position of the target pixel point in the first image;
calculating first data of the third region, the first data comprising an area and/or a circumference;
determining second data which meets preset requirements in the first data, wherein a third region corresponding to the second data is the second region, the preset requirements comprise that the area of the third region is within a first preset interval when the first data comprises the area, the preset requirements comprise that the perimeter of the third region is within a second preset interval when the first data comprises the perimeter, and the preset requirements comprise that the area of the third region is within the first preset interval and the perimeter of the third region is within the second preset interval when the first data comprises the area and the perimeter.
In a possible implementation manner, the first region determining module 420 may specifically be configured to:
determining a fourth area matched with the shape of the reflective mark from the second area based on the coordinates of a target pixel point on the outline of the second area;
if the number of the fourth areas is equal to that of the reflective marks, obtaining a first invariant moment based on the coordinates of the center point of the fourth areas;
calculating the similarity of the first invariant moment and a preset invariant moment, wherein the preset invariant moment is determined based on the coordinate of the central point of the light reflecting mark on the charging pile;
and if the similarity is greater than a second preset threshold, the fourth area is the first area corresponding to the light reflecting mark.
In a possible implementation manner, the first area determining module 420 may be specifically configured to:
calculating the perimeter and the area of the second region based on the coordinates of target pixel points on the outline of the second region;
calculating a first ratio of a perimeter of the second region to an area of the second region;
if the first ratio is within a third preset interval, the second area corresponding to the first ratio within the third preset interval is the fourth area.
In a possible implementation manner, the first region determining module 420 may specifically be configured to:
determining smoothness of the contour line of the second area based on the coordinates of the pixel points on the contour line of the second area;
if the smoothness is within a fourth preset interval, a second area corresponding to the smoothness within the fourth preset interval is the fourth area.
In one possible implementation, the pose determination module 430 may be specifically configured to:
generating a first matrix based on coordinates of a center point of the first region on the first image;
generating a second matrix based on the coordinate of the central point of the light-reflecting mark on the charging pile;
and determining the pose of the robot relative to the charging pile based on the first matrix, the second matrix, the internal reference matrix of the infrared camera and the distortion parameters of the infrared camera.
In one possible implementation, the pose determination module 430 may be specifically configured to:
determining the pose of the infrared camera relative to the charging pile based on the first matrix, the second matrix, the internal reference matrix of the infrared camera and the distortion parameters of the infrared camera;
and determining the pose of the robot relative to the charging pile based on the position of the infrared camera on the robot.
It should be noted that, for the information interaction, execution process, and other contents between the above-mentioned devices/units, the specific functions and technical effects thereof are based on the same concept as those of the embodiment of the method of the present application, and specific reference may be made to the part of the embodiment of the method, which is not described herein again.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
An embodiment of the present application further provides a terminal device, and referring to fig. 8, the terminal device 500 may include: at least one processor 510, a memory 520, and a computer program stored in the memory 520 and operable on the at least one processor 510, wherein the processor 510, when executing the computer program, implements the steps of any of the above-described method embodiments, such as the steps S101 to S104 in the embodiment shown in fig. 2. Alternatively, the processor 510, when executing the computer program, implements the functions of the modules/units in the above-described device embodiments, such as the functions of the modules 410 to 440 shown in fig. 7.
Illustratively, the computer program may be divided into one or more modules/units, which are stored in the memory 520 and executed by the processor 510 to accomplish the present application. The one or more modules/units may be a series of computer program segments capable of performing specific functions, which are used to describe the execution of the computer program in the terminal device 500.
Those skilled in the art will appreciate that fig. 8 is merely an example of a terminal device and is not limiting and may include more or fewer components than shown, or some components may be combined, or different components such as input output devices, network access devices, buses, etc.
The Processor 510 may be a Central Processing Unit (CPU), other general purpose Processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other Programmable logic device, discrete Gate or transistor logic, discrete hardware components, etc. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The memory 520 may be an internal storage unit of the terminal device, or may be an external storage device of the terminal device, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like. The memory 520 is used for storing the computer programs and other programs and data required by the terminal device. The memory 520 may also be used to temporarily store data that has been output or is to be output.
The bus may be an Industry Standard Architecture (ISA) bus, a Peripheral Component Interconnect (PCI) bus, an Extended ISA (EISA) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, the buses in the figures of the present application are not limited to only one bus or one type of bus.
The charging control method of the robot provided by the embodiment of the application can be applied to terminal devices such as a computer, a tablet computer, a notebook computer, a netbook, a Personal Digital Assistant (PDA) and the like, and the embodiment of the application does not limit the specific types of the terminal devices.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed terminal device, apparatus and method may be implemented in other ways. For example, the above-described terminal device embodiments are merely illustrative, and for example, the division of the modules or units is only one logical function division, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit may be implemented in the form of hardware, or may also be implemented in the form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, all or part of the processes in the methods of the embodiments described above may be implemented by a computer program, which may be stored in a computer readable storage medium and used by one or more processors to implement the steps of the embodiments of the methods described above.
Also, a computer program product is provided: when the computer program product runs on a terminal device, the terminal device is caused to implement the steps in the above-mentioned method embodiments.
The computer program comprises computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, or the like. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and the like. It should be noted that the content of the computer-readable medium may be increased or decreased as required by legislation and patent practice in a jurisdiction; for example, in some jurisdictions, computer-readable media do not include electrical carrier signals and telecommunications signals.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present application and are intended to be included within the scope of the present application.

Claims (10)

1. A charging control method of a robot, wherein at least one reflective mark is arranged on a side wall of a charging pile for charging the robot;
the method comprises the following steps:
acquiring a first image acquired by an infrared camera on the robot, wherein the first image comprises the light reflecting mark;
determining a first area of the reflective marker in the first image;
determining the pose of the robot relative to the charging pile based on the position of the first area on the first image and the preset position of the reflective mark on the charging pile;
and controlling the robot to charge on the charging pile based on the pose of the robot relative to the charging pile.
2. The method of controlling charging of a robot of claim 1, wherein said determining a first area of said retro-reflective marker in said first image comprises:
determining target pixel points of which the gray values are larger than a first preset threshold value in the pixel points of the first image;
determining a second region in the first image based on the position of the target pixel point in the first image;
and if the number of the second areas is larger than or equal to the number of the reflective marks, determining the first area from the second areas based on the coordinates of target pixel points on the contour lines of the second areas.
3. The method of controlling charging of a robot according to claim 2, wherein said determining a second area in the first image based on the position of the target pixel point in the first image comprises:
determining a third area in the first image based on the position of the target pixel point in the first image;
calculating first data of the third region, the first data comprising an area and/or a circumference;
determining second data which meets preset requirements in the first data, wherein a third region corresponding to the second data is the second region, the preset requirements comprise that the area of the third region is within a first preset interval when the first data comprises the area, the preset requirements comprise that the perimeter of the third region is within a second preset interval when the first data comprises the perimeter, and the preset requirements comprise that the area of the third region is within the first preset interval and the perimeter of the third region is within the second preset interval when the first data comprises the area and the perimeter.
4. The charging control method of a robot according to claim 2, wherein said determining the first region from the second regions based on the coordinates of the target pixels on the contour lines of the second regions comprises:
determining, from the second regions, fourth regions matching the shape of the reflective marker based on the coordinates of the target pixels on the contour lines of the second regions;
if the number of fourth regions is equal to the number of reflective markers, obtaining a first invariant moment based on the coordinates of the center points of the fourth regions;
calculating the similarity between the first invariant moment and a preset invariant moment, wherein the preset invariant moment is determined based on the coordinates of the center points of the reflective markers on the charging pile;
and if the similarity is greater than a second preset threshold, taking the fourth regions as the first regions corresponding to the reflective markers.
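The claim does not name a specific invariant moment or similarity formula. One natural reading uses Hu moments, which are invariant to translation, scale, and in-plane rotation, computed over the polygon formed by the detected center points; the log-scaled L1 comparison below mirrors the idea behind cv2.matchShapes and is only one plausible choice.

```python
import cv2
import numpy as np

def hu_similarity(detected_centers, reference_centers):
    """Compare Hu invariant moments of two marker-center constellations.

    Each constellation (at least three non-collinear points) is treated as a
    closed polygon so that cv2.moments yields area-based moments.
    """
    def hu_of_points(pts):
        contour = np.asarray(pts, dtype=np.float32).reshape(-1, 1, 2)
        return cv2.HuMoments(cv2.moments(contour)).flatten()

    h1 = hu_of_points(detected_centers)
    h2 = hu_of_points(reference_centers)
    eps = 1e-12
    # Log-scaled L1 distance between moment vectors, mapped to (0, 1].
    d = np.sum(np.abs(np.sign(h1) * np.log10(np.abs(h1) + eps)
                      - np.sign(h2) * np.log10(np.abs(h2) + eps)))
    return 1.0 / (1.0 + d)
```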
5. The charging control method of a robot according to claim 4, wherein said determining, from the second regions, fourth regions matching the shape of the reflective marker based on the coordinates of the target pixels on the contour lines of the second regions comprises:
calculating the perimeter and the area of each second region based on the coordinates of the target pixels on its contour line;
calculating a first ratio of the perimeter of the second region to the area of the second region;
and if the first ratio falls within a third preset interval, taking the second region whose first ratio falls within the third preset interval as a fourth region.
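A direct transcription of this test; note that for a fixed shape the perimeter-to-area ratio falls as apparent size grows, so the third preset interval implicitly constrains both shape and scale. The interval bounds are placeholders.

```python
import cv2

def matches_by_ratio(contour, ratio_interval=(0.05, 0.5)):  # third preset interval
    """Accept a second region whose perimeter/area ratio is in the interval."""
    area = cv2.contourArea(contour)
    if area <= 0:
        return False
    perim = cv2.arcLength(contour, closed=True)
    return ratio_interval[0] <= perim / area <= ratio_interval[1]
```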
6. The charging control method of a robot according to claim 4, wherein said determining, from the second regions, fourth regions matching the shape of the reflective marker based on the coordinates of the target pixels on the contour lines of the second regions comprises:
determining the smoothness of the contour line of each second region based on the coordinates of the pixels on the contour line;
and if the smoothness falls within a fourth preset interval, taking the second region whose smoothness falls within the fourth preset interval as a fourth region.
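The claim leaves "smoothness" undefined; the mean absolute turning angle along the contour is one stand-in measure (small for smooth outlines, large for jagged ones). Both the measure and the interval bounds below are assumptions.

```python
import numpy as np

def contour_smoothness(contour):
    """Mean absolute turning angle (radians) between successive contour edges."""
    pts = contour.reshape(-1, 2).astype(np.float64)
    edges = np.diff(np.vstack([pts, pts[:1]]), axis=0)  # closed-contour edges
    angles = np.unwrap(np.arctan2(edges[:, 1], edges[:, 0]))
    return float(np.mean(np.abs(np.diff(angles))))

def is_smooth(contour, smooth_interval=(0.0, 0.35)):    # fourth preset interval
    return smooth_interval[0] <= contour_smoothness(contour) <= smooth_interval[1]
```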
7. The charging control method of a robot according to any one of claims 1 to 6, wherein said determining the pose of the robot relative to the charging pile based on the position of the first region in the first image and the preset position of the reflective marker on the charging pile comprises:
generating a first matrix based on the coordinates of the center point of the first region in the first image;
generating a second matrix based on the coordinates of the center point of the reflective marker on the charging pile;
and determining the pose of the robot relative to the charging pile based on the first matrix, the second matrix, the intrinsic matrix of the infrared camera, and the distortion parameters of the infrared camera.
8. The charging control method of a robot according to claim 7, wherein said determining the pose of the robot relative to the charging pile based on the first matrix, the second matrix, the intrinsic matrix of the infrared camera, and the distortion parameters of the infrared camera comprises:
determining the pose of the infrared camera relative to the charging pile based on the first matrix, the second matrix, the intrinsic matrix of the infrared camera, and the distortion parameters of the infrared camera;
and determining the pose of the robot relative to the charging pile based on the pose of the infrared camera relative to the charging pile and the position of the infrared camera on the robot.
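Claims 7 and 8 describe a standard two-stage pose recovery: solve PnP for the camera-to-pile pose, then re-express it in the robot frame through the camera's known mounting transform. The sketch assumes at least four marker correspondences and represents the mounting as a 4x4 homogeneous matrix T_robot_cam; all names are illustrative.

```python
import cv2
import numpy as np

def robot_pose_from_markers(img_centers, pile_centers_3d, K, dist, T_robot_cam):
    """PnP pose of the pile in the camera frame, re-expressed for the robot.

    img_centers     -- "first matrix": 2D marker centers in the image
    pile_centers_3d -- "second matrix": 3D marker centers on the pile
    T_robot_cam     -- 4x4 camera-to-robot mounting transform (pre-calibrated)
    """
    ok, rvec, tvec = cv2.solvePnP(
        np.asarray(pile_centers_3d, dtype=np.float32),
        np.asarray(img_centers, dtype=np.float32),
        K, dist)                        # needs >= 4 correspondences
    if not ok:
        return None
    R, _ = cv2.Rodrigues(rvec)
    T_cam_pile = np.eye(4)              # pose of the pile in the camera frame
    T_cam_pile[:3, :3] = R
    T_cam_pile[:3, 3] = tvec.ravel()
    T_robot_pile = T_robot_cam @ T_cam_pile
    return np.linalg.inv(T_robot_pile)  # pose of the robot relative to the pile
```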
9. A terminal device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the charging control method of a robot according to any one of claims 1 to 8.
10. A computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the charging control method of a robot according to any one of claims 1 to 8.
CN202210460220.7A 2022-04-28 2022-04-28 Charging control method of robot, terminal device and storage medium Pending CN114744721A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210460220.7A CN114744721A (en) 2022-04-28 2022-04-28 Charging control method of robot, terminal device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210460220.7A CN114744721A (en) 2022-04-28 2022-04-28 Charging control method of robot, terminal device and storage medium

Publications (1)

Publication Number Publication Date
CN114744721A true CN114744721A (en) 2022-07-12

Family

ID=82284430

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210460220.7A Pending CN114744721A (en) 2022-04-28 2022-04-28 Charging control method of robot, terminal device and storage medium

Country Status (1)

Country Link
CN (1) CN114744721A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116700262A (en) * 2023-06-19 2023-09-05 国广顺能(上海)能源科技有限公司 Automatic recharging control method, device, equipment and medium for mobile robot
CN116700262B (en) * 2023-06-19 2024-03-15 国广顺能(上海)能源科技有限公司 Automatic recharging control method, device, equipment and medium for mobile robot

Similar Documents

Publication Publication Date Title
CN110307838B (en) Robot repositioning method and device, computer-readable storage medium and robot
CN110136182B (en) Registration method, device, equipment and medium for laser point cloud and 2D image
US10872227B2 (en) Automatic object recognition method and system thereof, shopping device and storage medium
CN110751620B (en) Method for estimating volume and weight, electronic device, and computer-readable storage medium
CN110807807B (en) Monocular vision target positioning pattern, method, device and equipment
CN110349092B (en) Point cloud filtering method and device
CN111080662A (en) Lane line extraction method and device and computer equipment
CN111640180B (en) Three-dimensional reconstruction method and device and terminal equipment
CN111681285B (en) Calibration method, calibration device, electronic equipment and storage medium
CN111915657A (en) Point cloud registration method and device, electronic equipment and storage medium
CN114862929A (en) Three-dimensional target detection method and device, computer readable storage medium and robot
CN114742789B (en) General part picking method and system based on surface structured light and electronic equipment
CN115147333A (en) Target detection method and device
CN114744721A (en) Charging control method of robot, terminal device and storage medium
CN110557622B (en) Depth information acquisition method and device based on structured light, equipment and medium
CN114066930A (en) Planar target tracking method and device, terminal equipment and storage medium
CN116912417A (en) Texture mapping method, device, equipment and storage medium based on three-dimensional reconstruction of human face
CN113362445B (en) Method and device for reconstructing object based on point cloud data
CN111783637B (en) Key point labeling method and device, and target object space pose determining method and device
CN114972531A (en) Calibration board, corner detection method, equipment and readable storage medium
CN114638947A (en) Data labeling method and device, electronic equipment and storage medium
CN115187769A (en) Positioning method and device
CN111223139B (en) Target positioning method and terminal equipment
CN111462309B (en) Modeling method and device for three-dimensional head, terminal equipment and storage medium
CN112614181B (en) Robot positioning method and device based on highlight target

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20221221

Address after: 266104 Room 202-1, Building 3, No. 8, Shengshui Road, Laoshan District, Qingdao, Shandong

Applicant after: Ubicon (Qingdao) Technology Co.,Ltd.

Address before: 518000 16th and 22nd Floors, C1 Building, Nanshan Zhiyuan, 1001 Xueyuan Avenue, Nanshan District, Shenzhen City, Guangdong Province

Applicant before: Shenzhen Youbixuan Technology Co.,Ltd.
