WO2019174484A1 - Charging base identification method and mobile robot - Google Patents

Charging base identification method and mobile robot

Info

Publication number
WO2019174484A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
charging stand
identified
image area
preset
Prior art date
Application number
PCT/CN2019/076764
Other languages
French (fr)
Chinese (zh)
Inventor
朱建华 (Zhu Jianhua)
沈冰伟 (Shen Bingwei)
蒋腻聪 (Jiang Nicong)
郭斌 (Guo Bin)
Original Assignee
杭州萤石软件有限公司 (Hangzhou Ezviz Software Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 杭州萤石软件有限公司 (Hangzhou Ezviz Software Co., Ltd.)
Publication of WO2019174484A1 publication Critical patent/WO2019174484A1/en


Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J11/00Manipulators not otherwise provided for
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J19/00Accessories fitted to manipulators, e.g. for monitoring, for viewing; Safety devices combined with or specially adapted for use in connection with manipulators
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J5/00Manipulators mounted on wheels or on carriages
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J5/00Manipulators mounted on wheels or on carriages
    • B25J5/007Manipulators mounted on wheels or on carriages mounted on wheels
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/10Image acquisition
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/22Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes

Definitions

  • the present application relates to the field of mobile robot control technologies, and in particular, to a charging stand recognition method and a mobile robot.
  • a mobile robot is a machine that can perform work in accordance with a predetermined program.
  • the mobile robot has a mobile function.
  • Mobile robots can perform many types of tasks while moving. For example, a cleaning robot can clean the floor as it moves, and a care robot can transport medical devices or patients as it moves.
  • automatic recharging technology can improve the intelligence of mobile robots.
  • the automatic recharging process of the mobile robot specifically includes: when the battery power of the mobile robot falls below a threshold, the mobile robot moves to the charging base according to its program, completes the charging task, and continues to perform its task after charging is completed.
  • the mobile robot is required to recognize the position of the charging stand.
  • the mobile robot can scan the mark of the charging stand by a laser radar mounted on the mobile robot to identify the position of the charging stand.
  • the lidar mounted on the mobile robot in FIG. 1a can emit a laser scanning line.
  • the lidar can receive the laser signal reflected by the charging stand and determine the location of the charging stand from the reflected signal.
  • the position of the charging stand can be identified in the above manner.
  • however, because the laser scanning line sweeps in a plane, when the ground on which the mobile robot stands is tilted, the scanning line may not reach the mark on the charging stand, and the position of the charging stand cannot be recognized.
  • for example, when the ground on which the mobile robot is located tilts downward, the laser scanning line cannot reach the mark on the charging stand, so the mobile robot cannot recognize the position of the charging stand. Therefore, when the position of the charging stand is recognized in this manner, the recognition success rate is not high enough.
  • the purpose of the embodiment of the present application is to provide a charging stand identification method and a mobile robot to improve the recognition success rate of the charging stand.
  • an embodiment of the present application provides a charging stand identification method, which is applied to a mobile robot, and the mobile robot includes: an infrared camera module; the method includes:
  • acquiring an image collected by the infrared camera module; determining an image area of the charging stand to be identified from the image according to a preset charging stand image feature; and determining the location information of the charging stand to be identified according to the determined image area; wherein the charging stand to be identified can emit infrared light.
  • the step of determining an image area of the charging stand to be identified from the image according to the preset charging stand image feature includes:
  • the image area of the charging stand to be identified is determined from the image according to a preset charging stand pixel feature and/or a preset charging stand size feature.
  • the step of determining location information of the charging stand to be identified according to the determined image area includes:
  • determining, according to the spatial positions of a first preset number of identification points on the charging stand to be identified, the image positions of those identification points, and a first preset formula, the location information of the charging stand to be identified relative to the mobile robot, wherein:
  • (X_i, Y_i, Z_i) represents the spatial position of the i-th identification point of the first preset number of identification points on the charging stand to be identified; (u_i, v_i) represents the image position of the i-th identification point of the first preset number of identification points; K represents the preset intrinsic parameter matrix of the infrared camera module; and argmin represents minimizing the projection error function;
  • n represents the first preset number;
  • (X_i', Y_i', Z_i') represents the coordinates obtained by applying a coordinate transformation to (X_i, Y_i, Z_i);
  • (u_i', v_i') represents the projected coordinates of (X_i, Y_i, Z_i) on the imaging plane of the image.
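The minimization described by these symbols is the standard reprojection-error objective used in perspective-n-point (PnP) pose estimation. A minimal sketch of evaluating that objective for one candidate rotation R and translation t (illustrative only; the function name and the example intrinsic matrix are assumptions, not part of the application):

```python
import numpy as np

def reprojection_error(R, t, K, points_3d, points_2d):
    """Sum of squared pixel errors between the observed image positions
    (u_i, v_i) and the projections (u_i', v_i') of the transformed
    identification points (X_i', Y_i', Z_i') = R (X_i, Y_i, Z_i) + t."""
    err = 0.0
    for (X, Y, Z), (u, v) in zip(points_3d, points_2d):
        Xc = R @ np.array([X, Y, Z]) + t          # camera-frame coordinates
        proj = K @ Xc                             # homogeneous projection
        u_p, v_p = proj[0] / proj[2], proj[1] / proj[2]
        err += (u - u_p) ** 2 + (v - v_p) ** 2
    return err
```

In practice, the argmin over (R, t) would be found with an iterative solver (e.g. Gauss-Newton) or an off-the-shelf PnP routine seeded with this objective.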
  • the step of determining location information of the charging stand to be identified according to the determined image area includes:
  • the step of acquiring the depth information of the second preset number of identifier points in the image area includes:
  • when the infrared camera module further has a depth sensing function: acquiring a depth image, corresponding to the image, collected by the infrared camera module, and acquiring the depth information of a second preset number of identification points in the image area from the depth image; wherein the depth image includes the depth information of each identification point; or
  • when the infrared camera module includes a left camera module and a right camera module: the image includes a first image captured by the left camera module and a second image captured by the right camera module,
  • and the image area is a first image area determined from the first image or a second image area determined from the second image; determining, according to the different image positions of corresponding pixel points in the first image area and the second image area, the depth information of a second preset number of identification points in the image area; or
  • when the mobile robot further includes an Inertial Measurement Unit (IMU): acquiring a previous image collected by the infrared camera module before the image was collected; acquiring, according to the preset charging stand image feature, a previous image area of the charging stand to be identified determined from the previous image; acquiring the motion parameters collected by the IMU while moving from the first position to the second position; and determining, according to the motion parameters and the previous image area, the depth information of a second preset number of identification points in the image area; wherein the first position is the spatial position of the mobile robot when the previous image was collected, and the second position is the spatial position of the mobile robot when the image was collected.
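For the left/right (binocular) option, depth follows from triangulation: corresponding pixels in the two rectified images differ by a disparity d, and Z = f·B/d, where f is the focal length in pixels and B is the baseline between the two camera modules. A minimal sketch (illustrative; the parameter values in the example are assumptions, not from the application):

```python
def depth_from_disparity(f_px, baseline_m, u_left, u_right):
    """Depth of a point seen by both cameras of a rectified stereo pair:
    Z = f * B / d, with disparity d = u_left - u_right (in pixels)."""
    disparity = u_left - u_right
    if disparity <= 0:
        raise ValueError("disparity must be positive for a point in front of the cameras")
    return f_px * baseline_m / disparity

# e.g. f = 500 px, baseline = 0.1 m, disparity = 10 px -> Z = 5.0 m
print(depth_from_disparity(500.0, 0.1, 320.0, 310.0))  # 5.0
```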
  • the step of determining, according to the depth information and the image positions of the second preset number of identification points in the image area and a second preset formula, the spatial positions of the second preset number of identification points in the image area includes:
  • (X_i, Y_i, Z_i) represents the spatial position of the i-th identification point in the image area;
  • Z_i is the depth information of the i-th identification point in the image area;
  • K represents the preset intrinsic parameter matrix of the infrared camera module;
  • (u_i, v_i) represents the image position of the i-th identification point in the image area.
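As the symbols above describe it, the second preset formula recovers a camera-frame spatial position from a pixel position and its depth: (X_i, Y_i, Z_i) = Z_i · K⁻¹ · (u_i, v_i, 1)ᵀ. A minimal sketch (illustrative; the example intrinsic matrix values are assumptions):

```python
import numpy as np

def backproject(K, u, v, depth):
    """Back-project an image position (u, v) with known depth Z into a
    camera-frame spatial position (X, Y, Z) using the intrinsic matrix K."""
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])  # normalized viewing ray
    return depth * ray                              # scale the ray by the depth

# e.g. principal point (50, 50), focal length 100 px, pixel (60, 50), depth 2 m
K = np.array([[100.0, 0.0, 50.0],
              [0.0, 100.0, 50.0],
              [0.0, 0.0, 1.0]])
print(backproject(K, 60.0, 50.0, 2.0))  # approximately (0.2, 0.0, 2.0)
```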
  • the step of determining location information of the charging stand to be identified according to the determined image area includes:
  • the mobile robot further includes: an infrared emitter capable of emitting infrared light; and the infrared light emitted by the infrared emitter can be irradiated on the charging stand to be identified.
  • the charging stand to be identified includes a reflective material, and the reflective material is capable of returning the reflected light along the optical path of the incident light.
  • the reflective material is included on each side of the charging stand to be identified, and the pattern features of the reflective materials on each side are different.
  • an embodiment of the present application further provides a mobile robot, including: a processor, a memory, and an infrared camera module;
  • the infrared camera module is configured to collect an image and store the image to the memory
  • the processor is configured to acquire the image from the memory, determine an image area of the charging stand to be identified from the image according to a preset charging stand image feature, and determine the location information of the charging stand to be identified according to the determined image area; wherein the charging stand to be identified can emit infrared light.
  • the processor is specifically configured to:
  • the image area of the charging stand to be identified is determined from the image according to a preset charging stand pixel feature and/or a preset charging stand size feature.
  • the processor is specifically configured to:
  • the processor is specifically configured to:
  • (X_i, Y_i, Z_i) represents the spatial position of the i-th identification point of the first preset number of identification points on the charging stand to be identified; (u_i, v_i) represents the image position of the i-th identification point of the first preset number of identification points; K represents the preset intrinsic parameter matrix of the infrared camera module; and argmin represents minimizing the projection error function;
  • n represents the first preset number;
  • (X_i', Y_i', Z_i') represents the coordinates obtained by applying a coordinate transformation to (X_i, Y_i, Z_i);
  • (u_i', v_i') represents the projected coordinates of (X_i, Y_i, Z_i) on the imaging plane of the image.
  • the processor is specifically configured to:
  • when the infrared camera module further has a depth sensing function, the infrared camera module is further configured to collect a depth image corresponding to the image and store it to the memory; the processor is specifically configured to obtain the depth image from the memory and obtain, from the depth image, the depth information of a second preset number of identification points in the image area, wherein the depth image includes the depth information of each identification point; or
  • the processor is specifically configured to: when the infrared camera module includes a left camera module and a right camera module, the image includes a first image captured by the left camera module and a second image captured by the right camera module, and the image area is a first image area determined from the first image or a second image area determined from the second image; determine, according to the different image positions of corresponding pixel points in the first image area and the second image area, the depth information of a second preset number of identification points in the image area; or
  • the processor is specifically configured to: when the mobile robot further includes an IMU, acquire a previous image collected by the infrared camera module before the image was collected; acquire, according to the preset charging stand image feature, a previous image area of the charging stand to be identified determined from the previous image; acquire the motion parameters collected by the IMU while moving from the first position to the second position; and determine, according to the motion parameters and the previous image area, the depth information of a second preset number of identification points in the image area; the IMU is configured to collect the motion parameters from the first position to the second position; wherein the first position is the spatial position of the mobile robot when the previous image was collected, and the second position is the spatial position of the mobile robot when the image was collected.
  • the processor is specifically configured to:
  • (X_i, Y_i, Z_i) represents the spatial position of the i-th identification point in the image area;
  • Z_i is the depth information of the i-th identification point in the image area;
  • K represents the preset intrinsic parameter matrix of the infrared camera module;
  • (u_i, v_i) represents the image position of the i-th identification point in the image area.
  • the processor is specifically configured to:
  • the mobile robot further includes: an infrared emitter capable of emitting infrared light; and the infrared light emitted by the infrared emitter can be irradiated on the charging stand to be identified.
  • the charging stand to be identified includes a reflective material, and the reflective material is capable of returning the reflected light along the optical path of the incident light.
  • the reflective material is included on each side of the charging stand to be identified, and the pattern features of the reflective materials on each side are different.
  • an embodiment of the present application further provides a computer readable storage medium storing a computer program which, when executed by a processor, implements the charging stand identification method provided by the embodiments of the present application.
  • the method includes:
  • acquiring an image collected by the infrared camera module; determining an image area of the charging stand to be identified from the image according to a preset charging stand image feature; and determining the location information of the charging stand to be identified according to the determined image area.
  • the embodiment of the present application further provides a computer program which, when executed by a processor, implements the charging stand identification method provided by the embodiments of the present application.
  • the method includes:
  • acquiring an image collected by the infrared camera module; determining an image area of the charging stand to be identified from the image according to a preset charging stand image feature; and determining the location information of the charging stand to be identified according to the determined image area.
  • with the charging stand identification method and the mobile robot provided by the embodiments of the present application, an image containing the charging stand to be identified, which can emit infrared light, can be collected by the infrared camera module; the image area of the charging stand to be identified is determined from the image according to the preset charging stand image feature, and the position information of the charging stand to be identified is determined based on the image area. Since the image acquisition range of the infrared camera module is a cone with its apex at the infrared camera module, even when the ground where the mobile robot is located is tilted, the charging stand to be identified can still fall within the image acquisition range of the infrared camera module, so its position information can still be recognized.
  • in addition, because the charging stand to be identified can emit infrared light, it has relatively obvious image features in the image collected by the infrared camera module, which improves the recognition accuracy when its position information is identified from the image.
  • implementing any of the products or methods of the present application does not necessarily require that all of the advantages described above be achieved at the same time.
  • Figure 1a and Figure 1b are several reference diagrams of a mobile robot using a lidar to identify a charging stand;
  • FIG. 2 is a schematic flow chart of a charging stand identification method according to an embodiment of the present application.
  • FIG. 3 is a reference diagram of a marker on a charging stand to be identified according to an embodiment of the present application
  • Figure 3b is a reference diagram, collected by the infrared camera module, containing the marker of Figure 3a;
  • Figure 3c is a schematic view of the relative position between the mobile robot and the charging stand
  • Figure 3d is a schematic diagram of a recharging path determined by the mobile robot
  • FIG. 4 is a schematic diagram of a mounting position of an infrared emitter and an infrared camera module according to an embodiment of the present application
  • Figure 5a is a schematic structural view of a reflective sticker coated with glass beads
  • Figure 5b is a schematic diagram of a principle of crystal reflection of light
  • Figure 5c is a schematic diagram of a principle of reflecting light by a prism
  • FIG. 6 is a schematic flow chart of step S203 in FIG. 2;
  • FIG. 7 is a schematic diagram of locations of respective identification points on a charging stand to be identified according to an embodiment of the present application.
  • FIG. 8 is another schematic flowchart of step S203 in FIG. 2;
  • Figure 9a and Figure 9b are schematic diagrams of the imaging principle and depth calculation of the binocular camera, respectively;
  • FIG. 9c is a schematic diagram of a principle for calculating depth information in a monocular camera + IMU embodiment
  • FIG. 10a is a schematic diagram of a relative position of an infrared camera module and a charging stand to be identified;
  • FIG. 10b is a schematic cross-sectional view of a charging stand to be identified according to an embodiment of the present application.
  • FIG. 11 is a schematic structural diagram of a mobile robot according to an embodiment of the present application.
  • the embodiment of the present application provides a charging stand identification method and a mobile robot.
  • the charging stand identification method provided by the embodiment of the present application will be described in detail below through specific embodiments.
  • FIG. 2 is a schematic flow chart of a charging stand identification method according to an embodiment of the present application.
  • the method embodiment is applied to a mobile robot comprising: an infrared camera module.
  • the infrared camera module can be mounted at the front of the mobile robot or near the front.
  • the infrared camera module can be, for example, an infrared camera, an infrared video camera, or the like.
  • the infrared camera module is a camera module based on near-infrared light imaging. Generally, light having a wavelength of 0.76 ⁇ m to 1.5 ⁇ m is referred to as near-infrared light.
  • an optical sensor in an ordinary camera can sense light in a near-infrared light region and a visible light region, and thus the infrared camera module can be obtained by adding a filter that blocks visible light to an ordinary camera.
  • the charging stand identification method provided in this embodiment includes the following steps S201 to S203.
  • Step S201 Acquire an image acquired by the infrared camera module.
  • the infrared camera module can collect images according to a preset period, and the mobile robot can acquire images collected by the infrared camera module according to a preset period.
  • the mobile robot can acquire an image of the moving direction of the mobile robot collected by the infrared camera module.
  • the image captured by the infrared camera module can be understood as an image of an environmental object surrounding the mobile robot. Since the mobile robot is movable, the mobile robot may be farther away from the charging stand to be identified, or may be closer; the charging stand to be identified may be within the image capturing range of the infrared camera module, or may not be in the infrared camera module. The image is captured within range. Therefore, the image may include the charging stand to be identified, or may not include the charging stand to be identified; when the image contains the charging stand to be identified, the charging stand to be identified may be located at any position in the image.
  • the infrared camera module can be a monocular camera or a binocular camera.
  • Step S202 Determine an image area of the charging stand to be identified from the image according to the preset charging stand image feature.
  • the charging stand to be identified can emit infrared light.
  • because the charging stand to be recognized emits infrared light, its image area appears as a highlighted area in the image, which makes the features of the charging stand to be identified more obvious and easier to recognize.
  • the area on the charging stand to be identified that emits infrared light may be the entire charging stand, or may be the marker area on the charging stand.
  • by setting the marker to a specific shape or pattern, the charging stand can be made recognizable.
  • for example, the marker may be four regularly shaped rectangular blocks arranged in a predetermined order, or the like.
  • in one case, the infrared light emitted by the charging stand to be identified is infrared light emitted by the charging stand itself.
  • the inside of the charging stand to be identified may have an infrared light emitter, so that the charging stand to be identified emits infrared light outward, so that the charging stand to be identified has obvious highlight features in the image.
  • in another case, the infrared light emitted by the charging stand to be identified is infrared light, emitted by another external device, that is reflected by the charging stand.
  • the charging stand to be identified can reflect the infrared light emitted by the external infrared light emitter, so that the charging stand to be identified can emit infrared light.
  • in this way the charging stand can also present a highlight feature in the image; this can be realized by providing special materials on the charging stand to be identified.
  • in either case, the image area of the charging stand to be recognized in the image is a highlighted area.
  • the shape of the marker on the charging stand may be a preset shape.
  • FIG. 3a is a reference diagram of a marker on a charging stand to be identified according to an embodiment of the present application, wherein the black to-be-identified charging stand has four white rectangular markers, and the markers can emit infrared light.
  • FIG. 3b is a reference diagram of the marker included in FIG. 3a collected by the infrared camera module, wherein four rectangular highlight areas are visible as markers on the charging stand to be identified.
  • the area of the charging stand in the image is a highlighted area of the preset shape.
  • the preset charging stand image feature may be the preset shape.
  • the area of the outer frame of each marker can be used as the image area of the charging stand to be identified.
  • the determined image area includes image areas of four rectangular markers.
  • the step S202 may specifically include: determining, according to a preset charging stand image feature, whether there is a charging stand to be identified in the image; if present, determining an image area of the charging stand to be identified from the image; If it does not exist, it can be left unprocessed or the image can be discarded.
  • the charging stand image feature may include a charging stand pixel feature.
  • the charging stand pixel feature may include: the pixel value is greater than a preset pixel threshold.
  • the charging stand image feature may also include a charging stand size feature.
  • the charging stand size feature may include at least one of an aspect ratio feature, a length range feature, and a width range feature of the image area.
  • the preset pixel threshold may be determined in advance according to the pixel value of the highlight area portion of the sample charging stand in the image, for example, the preset pixel threshold may be 200 or other values.
  • the above pixel feature is a feature determined based on the size of the pixel value.
  • the cradle pixel feature is a feature determined based on the size of the pixel value of the cradle in the image.
  • the image area of the charging stand to be identified is determined from the image according to the preset charging stand pixel feature and/or the preset charging stand size feature. Specifically, the image may be scanned to detect an area having a charging stand pixel feature and/or a charging stand size feature, and the area is used as an image area of the charging stand to be identified.
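The scan-and-detect step can be sketched as follows. This is a minimal illustration, not the application's implementation: the threshold of 200 comes from the example value above, while the flood-fill search and the aspect-ratio limits are assumptions introduced here.

```python
import numpy as np

# Preset pixel threshold: pixels brighter than this are candidate
# charging-stand pixels (the value 200 is the example given above).
PIXEL_THRESHOLD = 200

def find_candidate_regions(gray, min_aspect=1.5, max_aspect=6.0):
    """Return bounding boxes (x, y, w, h) of bright regions whose
    width/height ratio matches an assumed charging stand size feature."""
    mask = gray > PIXEL_THRESHOLD
    visited = np.zeros_like(mask, dtype=bool)
    h, w = mask.shape
    boxes = []
    for sy in range(h):
        for sx in range(w):
            if mask[sy, sx] and not visited[sy, sx]:
                # flood fill to collect one connected bright region
                stack = [(sy, sx)]
                visited[sy, sx] = True
                ys, xs = [], []
                while stack:
                    y, x = stack.pop()
                    ys.append(y)
                    xs.append(x)
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not visited[ny, nx]:
                            visited[ny, nx] = True
                            stack.append((ny, nx))
                x0, x1 = min(xs), max(xs)
                y0, y1 = min(ys), max(ys)
                bw, bh = x1 - x0 + 1, y1 - y0 + 1
                if min_aspect <= bw / bh <= max_aspect:  # size feature check
                    boxes.append((x0, y0, bw, bh))
    return boxes

# A tiny synthetic frame: one 6x2 bright bar on a dark background.
img = np.zeros((10, 10), dtype=np.uint8)
img[4:6, 2:8] = 255
print(find_candidate_regions(img))  # [(2, 4, 6, 2)]
```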
  • Step S203 Determine location information of the charging stand to be identified according to the determined image area.
  • the location information of the charging stand to be identified may include: a spatial location and a spatial orientation of the charging stand to be identified.
  • the spatial position is the spatial coordinates.
  • the spatial orientation can be represented by the normal vector of the plane in which the charging stand to be identified is located.
  • the spatial position and spatial orientation of the charging stand to be identified may be the spatial position and spatial orientation of the charging stand to be identified relative to the mobile robot, that is, the spatial position and spatial orientation of the charging stand in the mobile robot coordinate system to be identified.
  • the mobile robot coordinate system can be understood as the coordinate system whose coordinate origin is located on the mobile robot. For example, the coordinate origin of the mobile robot coordinate system is the center position of the mobile robot.
  • the step S203 may include: determining the spatial position of each pixel in the image area relative to the mobile robot, and using the average of these spatial positions as the spatial position of the charging stand to be identified relative to the mobile robot; determining a target plane according to the spatial positions of the pixels in the image area, and using the normal vector of the target plane as the spatial orientation of the charging stand to be identified relative to the mobile robot.
  • the spatial position of each pixel in the image region relative to the mobile robot is the position of each pixel in the image region in the mobile robot coordinate system.
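The target plane and its normal vector can be obtained by fitting a plane to the per-pixel spatial positions. A minimal sketch (illustrative; the application does not specify a fitting method, so the SVD-based least-squares fit here is an assumption):

```python
import numpy as np

def plane_normal(points_3d):
    """Fit a plane to the spatial positions of the pixels in the image
    area (least squares) and return its unit normal vector, usable as
    the spatial orientation of the charging stand relative to the robot."""
    pts = np.asarray(points_3d, dtype=float)
    centered = pts - pts.mean(axis=0)
    # the right singular vector with the smallest singular value is
    # perpendicular to the best-fit plane
    _, _, vt = np.linalg.svd(centered)
    return vt[-1]

# four points lying in the z = 0 plane -> normal is (0, 0, +/-1)
print(plane_normal([(0, 0, 0), (1, 0, 0), (0, 1, 0), (1, 1, 0)]))
```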
  • after determining the spatial position and spatial orientation, the mobile robot may further determine a recharging path from the mobile robot to the charging stand to be identified, and, by controlling the driving component of the mobile robot, drive the mobile robot to move along the recharging path to the charging stand to be identified.
  • the above recharging path enables the mobile robot to directly face the charging stand to be identified when it reaches the charging stand, realizing automatic recharging of the mobile robot.
  • Figure 3c shows the spatial position and spatial orientation of the charging stand relative to the mobile robot, wherein the front of the mobile robot is not facing the charging stand.
  • a recharging path from the mobile robot to the charging stand can be planned.
  • Figure 3d shows a schematic diagram of a recharging path from the mobile robot to the charging stand.
  • the charging component on the mobile robot is located at the front of the mobile robot.
  • the mobile robot directly facing the charging stand means that the line connecting the front of the mobile robot and the charging stand to be identified is perpendicular to the target plane.
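This perpendicularity condition can be checked numerically: the robot directly faces the stand when the robot-to-stand direction is parallel to the target plane's normal. A minimal sketch (illustrative; the function name and the angular tolerance are assumptions):

```python
import numpy as np

def is_facing(robot_front, stand_pos, plane_normal, tol_deg=5.0):
    """True when the line from the robot's front to the charging stand is
    (approximately) perpendicular to the target plane, i.e. parallel to
    the plane's normal vector."""
    d = np.asarray(stand_pos, dtype=float) - np.asarray(robot_front, dtype=float)
    d /= np.linalg.norm(d)
    n = np.asarray(plane_normal, dtype=float)
    n /= np.linalg.norm(n)
    # angle between the approach direction and the plane normal
    angle = np.degrees(np.arccos(np.clip(abs(d @ n), 0.0, 1.0)))
    return bool(angle <= tol_deg)
```

For example, a robot at the origin with the stand 2 m ahead along the normal (0, 0, 1) is facing the stand; with a normal of (1, 0, 0) it is not.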
  • in this embodiment, an image containing the charging stand to be identified, which can emit infrared light, can be collected by the infrared camera module; the image area of the charging stand to be identified is determined from the image according to the preset charging stand image feature, and the location information of the charging stand to be identified is determined according to the image area. Since the image acquisition range of the infrared camera module is a cone with its apex at the infrared camera module, even when the ground where the mobile robot is located is tilted, the charging stand to be identified can still fall within the image acquisition range of the infrared camera module, so its position information can still be recognized.
  • in addition, because the charging stand to be identified can emit infrared light, it has relatively obvious image features in the image collected by the infrared camera module, which improves the recognition accuracy when its position information is identified from the image.
  • the mobile robot may further include: an infrared emitter capable of emitting infrared light.
  • the infrared light emitted by the infrared emitter can be illuminated on the charging stand to be identified.
  • the infrared emitter can be mounted close to the infrared camera module.
  • the marker on the charging stand to be identified may be a mirror material.
  • the charging stand to be identified may reflect the infrared light emitted by the infrared emitter into the lens of the infrared camera module; the reflection is specular reflection. In this way, the infrared camera module can collect an image containing the highlighted charging stand to be identified, and the charging stand to be identified is more easily recognized.
  • the infrared emitter acts as a fill light to provide illumination to the surrounding environment of the mobile robot.
  • when the charging stand to be recognized is illuminated by infrared light, the charging stand to be identified in the image captured by the infrared camera module can be made clearer.
  • in order to enable the charging stand to be identified to display a highlight feature in the image when the mobile robot is at any position relative to it, the charging stand can use a reflective material: for example, the charging stand to be identified is covered with a reflective material, or the marker of the charging stand to be identified uses a reflective material. The reflective material can return the reflected light along the optical path of the incident light.
  • FIG. 4 is a schematic diagram of a mounting position of an infrared emitter and an infrared camera module in a mobile robot according to an embodiment of the present application.
  • an object in the field of view (FOV) of the infrared emitter reflects a certain amount of infrared light into the lens of the infrared camera module, thereby generating an infrared image in the lens.
  • when the charging stand to be identified appears in the overlapping area of the infrared emitter FOV and the infrared camera lens FOV, since the reflective material can almost completely reflect the infrared light, a bright area appears in the image; that is, the charging stand to be identified can be highlighted in the image, and the charging stand to be identified has higher recognizability in the image than the objects around it.
  • the reflective material can be a reflective sticker, and the reflective sticker is a high reflectivity material.
  • the surface of the reflective material is coated with a high refractive index layer.
  • the high refractive index layer may comprise high refractive index glass beads, crystals or prisms. These layers are capable of receiving light from different directions and reflecting the light back in the direction of incidence.
  • Figure 5a is a schematic view of a structure of a reflective sticker coated with high refractive index glass beads.
  • the reflective sticker comprises a surface resin layer, high refractive index glass beads, an adhesive layer, a reflective layer and a sticker layer; the incident light passes through the surface resin layer, is projected onto the high refractive index glass beads, and after being reflected by the reflective layer, is reflected back out through the surface resin layer.
  • the glass beads in the reflective sticker of Figure 5a can also be replaced by crystals or prisms.
  • Figure 5b is a schematic diagram of the crystal reflecting light
  • Figure 5c is a schematic diagram of the prism reflecting light. It can be seen that after the incident light is projected onto the crystal or the prism and undergoes reflection and refraction, the light can be emitted in the opposite direction of the incident light.
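  • The retroreflection described above can be illustrated with a short numeric sketch (not part of the original disclosure): reflecting a ray in turn across three mutually perpendicular mirror planes, as in a corner-cube prism, returns it antiparallel to the incident direction.

```python
import numpy as np

def reflect(v, n):
    """Reflect a direction vector v across a mirror plane with unit normal n."""
    return v - 2.0 * np.dot(v, n) * n

# A corner-cube retroreflector: three mutually perpendicular mirror faces.
normals = [np.array([1.0, 0.0, 0.0]),
           np.array([0.0, 1.0, 0.0]),
           np.array([0.0, 0.0, 1.0])]

incident = np.array([0.3, -0.5, 0.81])
incident /= np.linalg.norm(incident)

ray = incident.copy()
for n in normals:
    ray = reflect(ray, n)
# After the three reflections the ray is antiparallel to the incident light,
# i.e. it returns along the optical path of the incident light.
```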
  • a reflective sticker having a special shape is pasted on the charging stand to be identified, and the special shape can be used to identify the charging stand.
  • the charging stand to be recognized can reflect the infrared light emitted by the infrared emitter of the mobile robot back to the infrared camera module, so that the charging stand to be recognized can present a high-brightness pattern in the image. .
  • in step S203, the step of determining the location information of the charging stand to be identified according to the determined image area may be performed according to the flow diagram shown in FIG. The following steps S203a and S203b may be included.
  • Step S203a: Determine the image positions of the first preset number of identification points from the image area according to the spatial positions, acquired in advance, of the first preset number of identification points on the charging stand to be identified.
  • the first preset quantity may be a preset quantity value.
  • the first predetermined number may be a value greater than three.
  • the first preset number of identification points are preset points, and the relative positions between the points are fixed.
  • the first preset number of identification points on the charging stand to be identified, acquired in advance, may be understood as: pre-acquiring the spatial positions of the first preset number of identification points on the charging stand to be identified, that is, the spatial positions of the first preset number of identification points on the charging stand to be identified.
  • the spatial position of the first preset number of identification points may be a space coordinate in a coordinate system established by using one of the first preset number of identification points as a coordinate origin, or may be any space The space coordinate in the coordinate system established when the fixed point is the origin of the coordinate.
  • determining the image positions of the first preset number of identification points may be understood as determining the coordinates of the first preset number of identification points in the image. Specifically, for each identification point, the pixel point(s) corresponding to the identification point may be determined from the image area, and the image position of the identification point is determined according to the coordinates of the pixel point(s). There may be one or more pixel points corresponding to an identification point.
  • when there is one pixel point corresponding to the identification point, the image coordinate of that pixel point may be directly determined as the image position of the identification point; when there are multiple pixel points corresponding to the identification point, the average of the image coordinates of the plurality of pixel points may be determined as the image position of the identification point.
  • the image coordinates are coordinates in the image coordinate system.
  • the four rectangular frames are the markers on the charging stand to be identified, the center points of the four rectangular frames are used as the preset identification points, and the first preset number is 4.
  • the space coordinates of each identification point, determined in the counterclockwise direction, are (0, 0, 0), (L2, 0, 0), (L2, -L1, 0) and (0, -L1, 0).
  • the central pixel points of the rectangular areas in the image area are the first preset number of identification points in the image area, and the image coordinates of the identification points are (u1, v1), (u2, v2), (u3, v3) and (u4, v4), respectively.
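  • As an illustrative sketch (the image size, threshold value, and marker layout below are assumptions, not the embodiment's actual values), the image positions of the identification points can be obtained by thresholding the infrared image and taking the centroid of each bright connected region:

```python
import numpy as np
from collections import deque

def marker_centroids(img, thresh=200):
    """Return the centroids (u, v) of connected bright regions in a
    grayscale image, found by 4-connectivity BFS flood fill."""
    mask = img >= thresh
    seen = np.zeros_like(mask, dtype=bool)
    h, w = mask.shape
    centroids = []
    for r in range(h):
        for c in range(w):
            if mask[r, c] and not seen[r, c]:
                # Flood-fill one bright region and collect its pixels.
                q = deque([(r, c)])
                seen[r, c] = True
                pts = []
                while q:
                    y, x = q.popleft()
                    pts.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            q.append((ny, nx))
                ys, xs = zip(*pts)
                centroids.append((sum(xs) / len(xs), sum(ys) / len(ys)))  # (u, v)
    return centroids

# Synthetic infrared image: four highlighted 10x6 rectangular markers.
img = np.zeros((60, 80), dtype=np.uint8)
for top, left in [(10, 10), (10, 50), (40, 10), (40, 50)]:
    img[top:top + 6, left:left + 10] = 255

centers = sorted(marker_centroids(img))
```

Each centroid plays the role of the central pixel point of one rectangular marker; averaging the coordinates of a region's pixels matches the "average of the image coordinates" rule above.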
  • the infrared camera module can be a monocular camera.
  • Step S203b: Determine, according to the spatial positions of the first preset number of identification points on the charging stand to be identified, the image positions of the first preset number of identification points, and the first preset formula, the position information of the charging stand to be identified relative to the mobile robot.
  • the step S203b may specifically include: determining, according to the following first preset formula (1), the rotation matrix R and the translation matrix t of the charging stand to be identified relative to the mobile robot; the rotation matrix R and the translation matrix t are the position information of the charging stand to be identified relative to the mobile robot:

    {R, t} = argmin over (R, t) of Σ_{i=1}^{n} ‖ (u_i, v_i)^T − (u_i', v_i')^T ‖²    (1)

    where (X_i', Y_i', Z_i')^T = R · (X_i, Y_i, Z_i)^T + t, and (u_i', v_i') is obtained by projecting K · (X_i', Y_i', Z_i')^T onto the imaging plane (dividing by its third component).
  • (X_i, Y_i, Z_i) represents the spatial position of the i-th identification point of the first preset number of identification points on the charging stand to be identified, and (u_i, v_i) represents the image position of the i-th identification point of the first preset number of identification points.
  • K represents the internal parameter matrix of the preset infrared camera module
  • argmin represents the minimum projection error function
  • n is the first preset number
  • (X_i', Y_i', Z_i') represents the coordinates obtained by coordinate transformation of (X_i, Y_i, Z_i), and (u_i', v_i') represents the image coordinates of the projection of (X_i, Y_i, Z_i) on the imaging plane of the image.
  • K can be [f_u, 0, c_u; 0, f_v, c_v; 0, 0, 1], where f_u and f_v are the equivalent focal lengths of the lens of the infrared camera module in the u-axis and v-axis directions of the image, respectively, and c_u and c_v are the image coordinates of the projection center of the optical axis of the lens in the image.
  • in the above example, the spatial coordinates of the identification points are (0, 0, 0), (L2, 0, 0), (L2, -L1, 0) and (0, -L1, 0), and the image coordinates of the identification points are (u1, v1), (u2, v2), (u3, v3) and (u4, v4), respectively; according to these known quantities and the first preset formula, the rotation matrix R and the translation matrix t can be obtained.
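  • As an illustrative sketch of what formula (1) minimizes (the intrinsic values, pose, and marker dimensions below are assumed for illustration, not the embodiment's actual values), the reprojection error for a candidate (R, t) can be evaluated as follows; a PnP solver would search for the (R, t) minimizing this quantity:

```python
import numpy as np

# Assumed intrinsic matrix K (illustrative values).
K = np.array([[300.0,   0.0, 160.0],
              [  0.0, 300.0, 120.0],
              [  0.0,   0.0,   1.0]])

# Spatial positions of the n = 4 identification points on the charging
# stand, following the counterclockwise example in the text
# (illustrative L1 = 0.1, L2 = 0.2, in meters).
L1, L2 = 0.1, 0.2
pts3d = np.array([[0, 0, 0], [L2, 0, 0], [L2, -L1, 0], [0, -L1, 0]], float)

def project(R, t, X):
    """(X', Y', Z')^T = R (X, Y, Z)^T + t, then pinhole projection by K."""
    Xc = (R @ X.T).T + t
    uv = (K @ Xc.T).T
    return uv[:, :2] / uv[:, 2:]

def reprojection_error(R, t, X, uv_obs):
    """Sum of squared pixel errors that the argmin in formula (1) minimizes."""
    return float(np.sum((project(R, t, X) - uv_obs) ** 2))

# Ground-truth pose of the charging stand relative to the robot (assumed).
R_true = np.eye(3)
t_true = np.array([0.05, 0.1, 1.0])
uv_obs = project(R_true, t_true, pts3d)   # observed image positions

err_true = reprojection_error(R_true, t_true, pts3d, uv_obs)
err_bad = reprojection_error(R_true, t_true + np.array([0.02, 0.0, 0.0]),
                             pts3d, uv_obs)
```

The true pose yields (numerically) zero error, while a pose shifted 2 cm sideways yields a clearly larger error; this gap is what the minimization in formula (1) exploits.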
  • the spatial orientation and spatial position of the charging stand to be identified relative to the mobile robot may be determined according to the rotation matrix R and the translation matrix t as the position information of the charging stand to be identified.
  • since the spatial positions of the identification points on the charging stand to be identified and the image positions of the identification points in the image area are determined in advance, the spatial position of the charging stand to be identified relative to the mobile robot can be determined.
  • the position information of the charging stand to be identified is accurately determined.
  • in step S203, the step of determining the location information of the charging stand to be identified according to the determined image area may also be performed according to the flow diagram shown in FIG. Specifically, the following steps S203A, S203B, and S203C may be included.
  • Step S203A Acquire depth information of a second preset number of identification points in the image area.
  • the second preset quantity may be a preset quantity value, and may be a quantity value greater than 3.
  • the second preset number may be the same as the first preset number or may be different.
  • the second preset number of identification points in the image area may be pixels determined according to a preset rule, or may be preset pixels.
  • the foregoing preset rule may be randomly selecting a pixel point, or may be selecting a pixel point at a preset position.
  • the central pixel of the rectangular region corresponding to each rectangular marker in the image region may be used as the second predetermined number of identification points.
  • the depth information may include at least one of a distance value, a distance error range, and the like.
  • the depth information can be understood as the distance between the point on the object corresponding to each identifier point and the infrared camera module, that is, the distance between the point on the object corresponding to each identifier point and the mobile robot.
  • Step S203B: Determine, according to the depth information and the image positions of the second preset number of identification points in the image area, and the second preset formula, the spatial positions of the second preset number of identification points in the image area.
  • the spatial position of the second preset number of identification points in the image area can be understood as the spatial coordinate of the point on the object corresponding to the second preset number of identification points in the image area.
  • the step S203B may specifically include:
  • the spatial position of each of the second preset number of identification points in the image area is determined according to the following second preset formula (2):

    (X_i, Y_i, Z_i)^T = Z_i · K⁻¹ · (u_i, v_i, 1)^T    (2)
  • (X i , Y i , Z i ) represents the spatial position of the i-th identification point in the image region
  • i may take a value between 1 and m
  • m represents a second preset number.
  • the origin of the coordinate system corresponding to (X i , Y i , Z i ) can be established on the mobile robot, that is, the origin of the coordinate system corresponding to (X i , Y i , Z i ) can be the center position of the mobile robot.
  • Z i represents the depth information of the i-th identification point in the image area
  • (X i , Y i ) represents the plane coordinate of the i-th identification point in the image area
  • K represents the internal reference matrix of the preset infrared camera module.
  • (u i , v i ) represents the image position of the i-th identification point in the image area.
  • K can be [f_u, 0, c_u; 0, f_v, c_v; 0, 0, 1], where f_u and f_v are the equivalent focal lengths of the lens of the infrared camera module in the u-axis and v-axis directions of the image, respectively, and c_u and c_v are the image coordinates of the projection center of the optical axis of the lens in the image.
  • for example, the image positions of the second preset number of identification points are {(u1, v1), (u2, v2), ..., (um, vm)}, respectively, and the depth information of the identification points is {Z1, Z2, ..., Zm}, respectively, where m is the second preset number; according to these known quantities and the second preset formula described above, the spatial positions of the second preset number of identification points in the image area can be obtained.
  • Step S203C Determine location information of the charging stand to be identified according to the spatial position of the second preset number of identification points in the image area.
  • the step S203C may specifically include: using the average value of the spatial positions of the second preset number of identification points in the image area as the spatial position of the charging stand to be identified relative to the mobile robot; determining the plane where the spatial positions corresponding to the second preset number of identification points in the image area are located, and using the normal vector of the plane as the spatial orientation of the charging stand to be identified relative to the mobile robot.
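  • A compact sketch of steps S203B and S203C under assumed values (the intrinsics, image positions, and depths below are illustrative; the four points are coplanar and face the camera): back-project each identification point with the second preset formula, then average the positions and take the plane normal as the orientation.

```python
import numpy as np

K = np.array([[300.0,   0.0, 160.0],
              [  0.0, 300.0, 120.0],
              [  0.0,   0.0,   1.0]])   # assumed intrinsics
K_inv = np.linalg.inv(K)

def backproject(u, v, Z):
    """Second preset formula: (X, Y, Z)^T = Z * K^-1 * (u, v, 1)^T."""
    return Z * (K_inv @ np.array([u, v, 1.0]))

# (image position, depth) of the identification points -- synthetic values
# for a charging-stand plane parallel to the image plane at Z = 1 m.
samples = [(100.0,  80.0, 1.0), (220.0,  80.0, 1.0),
           (220.0, 160.0, 1.0), (100.0, 160.0, 1.0)]
pts = np.array([backproject(u, v, Z) for u, v, Z in samples])

# Step S203C: average spatial position of the identification points ...
position = pts.mean(axis=0)

# ... and the normal vector of their plane as the spatial orientation.
n = np.cross(pts[1] - pts[0], pts[2] - pts[0])
normal = n / np.linalg.norm(n)
```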
  • the present embodiment can determine the spatial position of the identification point according to the depth information of the pixel in the image area, and determine the position information of the charging stand to be identified according to the spatial position, thereby improving the accuracy of determining the position information of the charging stand to be identified.
  • the depth information in step S203A of the foregoing embodiment may be obtained by using various embodiments.
  • the step of acquiring the depth information of the second preset number of identification points in the image area may include: acquiring a depth image corresponding to the image, and obtaining the depth information of the second preset number of identification points in the image area from the depth image, where the depth image includes the depth information of each identification point.
  • the infrared camera module may include a depth sensor and an infrared emitter.
  • the depth sensor can be a Time Of Flight (TOF) sensor.
  • the TOF sensor can calculate the depth information between the object and the lens by using the time difference between the infrared light emitted by the infrared emitter and the infrared light received by the infrared camera module lens to generate a depth image.
  • the TOF sensor can also modulate the infrared light to a certain frequency to obtain modulated light, emit the modulated light, and calculate the depth value between the object and the lens by calculating the phase difference between the received modulated light and the emitted modulated light.
  • in this way, the infrared camera module also obtains a depth image corresponding to the image when the image is acquired; the depth image includes the depth information of each pixel in the image, and since the identification points are determined from the pixels of the image, the depth image includes the depth information of each identification point.
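  • A minimal numeric sketch (not the sensor's actual firmware) of the two TOF measurement principles just described, round-trip time and modulated-light phase difference; the 20 MHz modulation frequency is an assumed value:

```python
import math

# Speed of light in m/s.
C = 299_792_458.0

def depth_from_time(dt_seconds):
    """Depth from the round-trip time of the emitted infrared light."""
    return C * dt_seconds / 2.0

def depth_from_phase(phase_rad, mod_freq_hz):
    """Depth from the phase difference of modulated infrared light."""
    return C * phase_rad / (4.0 * math.pi * mod_freq_hz)

d1 = depth_from_time(2.0 / C)         # a 2 m round trip -> 1 m depth
d2 = depth_from_phase(math.pi, 20e6)  # half-cycle phase shift at 20 MHz
```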
  • when the infrared camera module is an infrared binocular camera, that is, when the infrared camera module includes a left camera module and a right camera module, the image includes a first image captured by the left camera module and a second image captured by the right camera module; the image area is a first image area determined from the first image or a second image area determined from the second image.
  • the first image area is an image area of the charging stand to be identified determined from the first image according to the preset charging stand image feature
  • the second image area is the image area of the charging stand to be identified determined from the second image according to the preset charging stand image feature.
  • the corresponding identification points in the first image area and the second image area mean that: a first identification point in the first image area and a second identification point in the second image area correspond to the same point in space; that is, they are the imaging points of the same point in space in the first image area and the second image area, respectively.
  • the first identification point is any identification point in the first image area
  • the second identification point is any identification point in the second image area.
  • the first identification point and the second identification point are taken as an example for illustration, and are not limiting.
  • Figure 9a is an imaging schematic of a binocular camera.
  • the infrared binocular camera includes a left-eye camera and a right-eye camera, and the line between the center points of the two cameras is a baseline.
  • Point P is imaged as the left-eye pixel point in the left-eye camera and as the right-eye pixel point in the right-eye camera.
  • the left-eye pixel point and the right-eye pixel point are corresponding identification points.
  • the depth information of the second preset number of identification points in the image area may be determined according to the following third preset formula (3):

    Z = f · b / (u_L − u_R)    (3)
  • Z is the depth information of an identification point in the image area.
  • the identification point S1 is taken as an example.
  • f is the focal length of the left camera module lens or the focal length of the right camera module lens.
  • the left camera module and the right camera module have the same focal length.
  • b is the baseline length between the lens center of the left camera module and the lens center of the right camera module.
  • u_L and u_R are the image coordinates of the identification point S1 and the corresponding identification point S2 of the identification point S1, respectively.
  • when the image area is the first image area, the identification point S1 is a pixel point in the first image area, and the corresponding identification point S2 is a pixel point in the second image area.
  • when the image area is the second image area, the identification point S1 is a pixel point in the second image area, and the corresponding identification point S2 is a pixel point in the first image area.
  • the third preset formula may be used to determine the depth information of the identification point.
  • the center of the lens is the center of the aperture.
  • Figure 9b is a schematic diagram of the depth calculation of the binocular camera.
  • O_L is the lens center of the left camera module, and O_R is the lens center of the right camera module.
  • b is the baseline length between the lens center of the left camera module and the lens center of the right camera module.
  • P_L is the imaging pixel point (ie, the identification point) of the P point on the imaging plane of the left camera module, and P_R is the imaging pixel point (ie, the corresponding identification point) of the P point on the imaging plane of the right camera module.
  • Z is the distance between the P point and the baseline, that is, the depth information of the identification point.
  • u_L is the coordinate of P_L, and u_R is the coordinate of P_R.
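  • The third preset formula can be sketched directly; the focal length, baseline, and pixel coordinates below are assumed illustrative values, not the embodiment's calibration:

```python
def stereo_depth(f_pixels, baseline_m, u_left, u_right):
    """Third preset formula: Z = f * b / (u_L - u_R)."""
    disparity = u_left - u_right
    if disparity <= 0:
        raise ValueError("the point must have positive disparity")
    return f_pixels * baseline_m / disparity

# Assumed values: focal length 300 px, baseline 0.06 m, disparity 18 px.
Z = stereo_depth(300.0, 0.06, u_left=250.0, u_right=232.0)
```

A larger disparity (u_L − u_R) means a nearer point; as the disparity shrinks toward zero, the computed depth grows without bound, which is why stereo depth degrades for distant objects.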
  • the step of acquiring the depth information of the second preset number of identification points in the image area may include the following steps 1 to 4.
  • Step 1 Obtain the previous image acquired by the infrared camera module before acquiring the above image.
  • the infrared camera module can collect images according to a preset period.
  • the previous image is the image captured by the infrared camera module at the previous acquisition time before the current acquisition time.
  • the previous acquisition time is the acquisition time adjacent to the current acquisition time; that is, the previous acquisition time is the time at which the previous image was acquired by the infrared camera module.
  • an acquisition time is a time at which the infrared camera module collects an image.
  • take as an example that the image acquired at the previous acquisition time is image A, that is, the previous image is image A, and the image acquired at the current acquisition time is image B.
  • the position of the mobile robot when the infrared camera module acquires image A is the first position, and the position of the mobile robot when the infrared camera module acquires image B is the second position.
  • since the infrared camera module collects images during the movement of the mobile robot, the first position and the second position are different.
  • the infrared camera module can be a monocular camera.
  • Step 2 Acquire a previous image area of the charging stand to be identified determined from the previous image according to the preset charging stand image feature.
  • for this step, refer to step S202 in the embodiment shown in FIG. 2; the specific content is not described in detail herein.
  • Step 3 Acquire the motion parameters collected by the IMU from the first position to the second position.
  • the above motion parameters may include the amount of rotation and the amount of translation.
  • the IMU can capture the amount of rotation and the amount of translation between any two positions during the movement of the mobile robot.
  • the amount of rotation can be understood as a rotation matrix.
  • the amount of translation can be understood as a translation matrix.
  • the first position is the spatial position of the mobile robot when the previous image is acquired, and the second position is the spatial position of the mobile robot when the image is acquired.
  • Step 4 Determine depth information of the second preset number of identification points in the image area according to the motion parameter and the previous image area.
  • the step 4 may specifically include: for a target identification point in the image area, determining the depth information of the target identification point according to the image position of the target identification point, the image position of the corresponding identification point of the target identification point in the previous image area, and the motion parameters, according to the following fourth preset formula (4):

    s_B · x_B = s_A · R · x_A + t    (4)

    where the target identification point is any one of the second preset number of identification points in the image area, and the corresponding identification point of the target identification point is an identification point in the previous image area.
  • where p'_A = (u_A, v_A, 1)^T and p'_B = (u_B, v_B, 1)^T, and T is the matrix transpose symbol.
  • (u_B, v_B) is the image position of the target identification point, and (u_A, v_A) is the image position of the corresponding identification point of the target identification point in the previous image area.
  • p'_A is the homogeneous coordinate of (u_A, v_A), and p'_B is the homogeneous coordinate of (u_B, v_B).
  • R and t are respectively the amount of rotation and the amount of translation in the above-mentioned motion parameters, that is, R is a rotation matrix in the above-described motion parameters, and t is a translation matrix in the above-described motion parameters.
  • K is the internal reference matrix of the preset infrared camera module.
  • s_A is the depth information of the corresponding identification point of the target identification point, and s_B is the depth information of the target identification point.
  • x_A represents the normalized plane coordinate of the corresponding identification point of the target identification point, and x_B represents the normalized plane coordinate of the target identification point. According to the fourth preset formula, the depth information of each of the second preset number of identification points in the image area may be obtained.
  • K can be [f_u, 0, c_u; 0, f_v, c_v; 0, 0, 1], where f_u and f_v are the equivalent focal lengths of the lens of the infrared camera module in the u-axis and v-axis directions of the image, respectively, and c_u and c_v are the image coordinates of the projection center of the optical axis of the lens in the image.
  • the target identification point and the corresponding identification point of the target identification point mean that: the target identification point in the image area and the corresponding identification point of the target identification point in the previous image area correspond to the same point in space; that is, they are the imaging points of the same point in space in the previous image area and the image area, respectively.
  • FIG. 9c is a schematic diagram of a principle for calculating depth information in a monocular camera+IMU embodiment.
  • A is the first position and B is the second position.
  • the image point at which the P point is detected at the A position is p_A(u_A, v_A), and the image point at which the P point is detected when the mobile robot moves to the B position is p_B(u_B, v_B).
  • O_A is the lens center of the infrared camera module at the A position, and O_B is the lens center of the infrared camera module at the B position.
  • the depth information of the imaged point can be acquired by triangulation.
  • the amount of rotation and the amount of translation from position A to position B are R and t, respectively.
  • x_A = K⁻¹ · p'_A and x_B = K⁻¹ · p'_B.
  • the above s A and s B are unknowns.
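  • Under the assumption that R and t from the IMU and the normalized coordinates x_A and x_B are known, the two unknown depths s_A and s_B can be recovered by linear least squares; the point and motion below are synthetic illustrative values, not measured data:

```python
import numpy as np

def triangulate_depths(x_A, x_B, R, t):
    """Solve the fourth preset formula s_B * x_B = s_A * R @ x_A + t for the
    two unknown depths (s_A, s_B) by linear least squares."""
    A = np.column_stack((R @ x_A, -x_B))   # 3 equations, 2 unknowns
    sol, *_ = np.linalg.lstsq(A, -t, rcond=None)
    return sol

# Synthetic illustrative setup: point P seen from positions A and B.
P_A = np.array([0.2, -0.1, 2.0])           # P in the A camera frame
theta = np.deg2rad(5.0)                    # assumed rotation about the y-axis
R = np.array([[ np.cos(theta), 0.0, np.sin(theta)],
              [ 0.0,           1.0, 0.0          ],
              [-np.sin(theta), 0.0, np.cos(theta)]])
t = np.array([-0.1, 0.0, 0.02])            # assumed IMU-measured translation
P_B = R @ P_A + t                          # P in the B camera frame

x_A = P_A / P_A[2]                         # normalized plane coordinates
x_B = P_B / P_B[2]

s_A, s_B = triangulate_depths(x_A, x_B, R, t)
```

The recovered s_A and s_B match the true depths of P in the two camera frames, which is the triangulation the text describes.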
  • in the above embodiments, the marker of the charging stand uses a reflective material, and when the charging stand to be identified appears in the FOV of the infrared camera module, it is easy to obtain the relative positional relationship between the mobile robot and the charging stand to be identified. However, when, during the movement, the marker of the charging stand to be identified does not appear in the FOV of the infrared camera module, it is difficult for the mobile robot to recognize the charging stand to be identified.
  • the marker on a single plane has a corresponding identifiable angle, that is, an identifiable FOV; Figure 10a shows the identifiable FOV of the charging stand to be identified.
  • when the infrared camera module moves to the position shown in FIG. 10a, that is, when the mobile robot moves to the position shown in FIG. 10a during its movement, the infrared camera module is outside the identifiable FOV of the marker of the charging stand to be identified, and it is difficult for the mobile robot to recognize the charging stand to be identified.
  • each of the sides of the charging stand to be identified may include a reflective material, and the pattern features of the reflective material on each side are different. Specifically, when the charging stand is identified, the orientation of the charging stand to be identified may be determined according to the preset pattern features of the respective sides.
  • the charging stand to be identified may be divided into a plurality of sides (the sides may also be referred to as sections), and each side is marked with reflective material in a different pattern. In this way, the FOV of the entire charging stand can be increased.
  • FIG. 10b is a schematic cross-sectional view of the charging stand to be identified in the embodiment.
  • the upper side view of FIG. 10b is a schematic view of each side of the charging stand, including three sides, and the lower side view of FIG. 10b is a top view of the charging stand.
  • Each side of the charging stand is marked with a different pattern using a reflective material. In the case of a charging stand against the wall, the charging stand can be identified within a range of 180 degrees.
  • in the top view, the trapezoid represents the charging stand; the longest side of the trapezoid is the side of the charging stand against the wall, the other sides of the trapezoid represent the other sides of the charging stand, and the reflective materials on the other sides are marked with different patterns.
  • FIG. 11 is a schematic structural diagram of a mobile robot according to an embodiment of the present application. This embodiment corresponds to the method embodiment shown in FIG. 2.
  • the mobile robot includes a processor 110, a memory 111, and an infrared camera module 112.
  • the infrared camera module 112 can be mounted at the front of the mobile robot or near the front.
  • the infrared camera module 112 can be an infrared camera, an infrared camera, or the like.
  • the infrared camera module 112 is a camera module that is imaged according to near-infrared light. Generally, light having a wavelength of 0.76 ⁇ m to 1.5 ⁇ m is referred to as near-infrared light.
  • an optical sensor in an ordinary camera can sense light in both the near-infrared and visible light regions, and thus the infrared camera module 112 can be obtained by adding a filter that blocks visible light to an ordinary camera.
  • the infrared camera module 112 is configured to collect images and store the images in the memory 111;
  • the processor 110 is configured to acquire an image in the memory 111, determine an image area of the charging stand to be identified from the image according to the preset charging stand image feature, and determine position information of the charging stand to be identified according to the determined image area;
  • the charging stand to be identified can emit infrared light.
  • the above memory 111 may include a random access memory (RAM), and may also include a non-volatile memory (NVM), such as at least one disk storage.
  • the memory 111 may also be at least one storage device located away from the foregoing processor.
  • the processor 110 may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), etc.; or a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device.
  • the processor 110 may be specifically configured to:
  • the image area of the charging stand to be identified is determined from the image according to a preset charging stand pixel feature and/or a preset charging stand size feature.
  • the processor 110 may be specifically configured to:
  • determine, according to the spatial positions of the first preset number of identification points on the charging stand to be identified, the image positions of the first preset number of identification points, and the first preset formula, the position information of the charging stand to be identified relative to the mobile robot.
  • the processor 110 may be specifically configured to:
  • (X_i, Y_i, Z_i) represents the spatial position of the i-th identification point of the first preset number of identification points on the charging stand to be identified, and (u_i, v_i) represents the image position of the i-th identification point of the first preset number of identification points.
  • K represents the internal parameter matrix of the preset infrared camera module
  • argmin represents the minimum projection error function
  • n represents the first preset number
  • (X i ', Y i ' , Z i ') represents the coordinates obtained by coordinate transformation (X i , Y i , Z i )
  • (u i ', v i ') represents (X i , Y i , Z i ) on the imaging plane of the image Projection coordinates.
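The first preset formula described by these symbols is a reprojection-error minimization over the pose (R, t). The sketch below shows only the error term being minimized, under the stated symbol definitions; the intrinsic matrix K and the point sets are made-up example values, and a real solver would wrap this in a PnP or nonlinear least-squares routine:

```python
import numpy as np

def reproject_error(K, R, t, pts3d, pts2d):
    """Sum of squared reprojection errors — the quantity the first preset
    formula minimizes over (R, t). Illustrative sketch only."""
    err = 0.0
    for (X, Y, Z), (u, v) in zip(pts3d, pts2d):
        Xp = R @ np.array([X, Y, Z]) + t                 # (X', Y', Z')
        proj = K @ Xp
        up, vp = proj[0] / proj[2], proj[1] / proj[2]    # (u', v')
        err += (u - up) ** 2 + (v - vp) ** 2
    return err

# Assumed intrinsics and identification-point layout (example values only).
K = np.array([[300.0, 0.0, 80.0], [0.0, 300.0, 60.0], [0.0, 0.0, 1.0]])
R_true = np.eye(3)
t_true = np.array([0.0, 0.0, 1.0])
pts3d = [(0.1, 0.0, 0.5), (-0.1, 0.05, 0.5), (0.0, -0.05, 0.6), (0.05, 0.05, 0.55)]
pts2d = []
for X, Y, Z in pts3d:
    p = K @ (R_true @ np.array([X, Y, Z]) + t_true)
    pts2d.append((p[0] / p[2], p[1] / p[2]))

e_true = reproject_error(K, R_true, t_true, pts3d, pts2d)
e_off = reproject_error(K, R_true, t_true + np.array([0.05, 0.0, 0.0]), pts3d, pts2d)
```

At the true pose the error vanishes; perturbing the translation makes it grow, which is what the argmin in the formula exploits.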
  • the processor 110 may be specifically configured to: acquire depth information of a second preset number of identification points in the image area; determine the spatial positions of the second preset number of identification points in the image area according to the depth information and their image positions, following the second preset formula; and determine the position information of the charging stand to be identified according to those spatial positions.
  • when the infrared camera module 112 further has a depth sensing function, the infrared camera module 112 is further configured to collect a depth image corresponding to the image and store it in the memory 111.
  • the processor 110 is specifically configured to acquire the depth image from the memory 111 and obtain, from the depth image, the depth information of a second preset number of identification points in the image area, where the depth image includes the depth information of each identification point.
  • the infrared camera module 112 may include a depth sensor (not shown), which may be a time-of-flight (TOF) sensor.
  • the depth sensor is used to acquire the depth information of each pixel in the depth image.
  • the infrared camera module 112 can also be used to collect a depth image corresponding to the image and store it in the memory 111.
  • the processor 110 is specifically configured to acquire the depth image from the memory 111 and obtain, from the depth image, the depth information of the second preset number of identification points in the image area, where the depth image includes the depth information of each identification point.
  • the processor 110 may be specifically configured to: when the infrared camera module includes a left camera module and a right camera module (not shown), the image includes a first image acquired by the left camera module and a second image acquired by the right camera module, and the image area is a first image area determined from the first image or a second image area determined from the second image; determine the depth information of a second preset number of identification points in the image area according to the different image positions of corresponding identification points in the first image area and the second image area.
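For the left/right camera case, the different image positions of a corresponding point yield a disparity from which depth can be recovered. A sketch under the standard rectified-stereo assumption, Z = f·b/d; the text itself does not state this formula, and the focal length and baseline are illustrative values:

```python
def stereo_depth(u_left, u_right, focal_px, baseline_m):
    """Depth from the horizontal disparity of the same identification point
    in rectified left/right images: Z = f * b / d."""
    disparity = u_left - u_right
    if disparity <= 0:
        raise ValueError("point must have positive disparity")
    return focal_px * baseline_m / disparity

# A point 1.5 m away seen by a 300 px focal length, 6 cm baseline rig
# produces a disparity of 300 * 0.06 / 1.5 = 12 px.
z = stereo_depth(u_left=100.0, u_right=88.0, focal_px=300.0, baseline_m=0.06)
```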
  • the processor 110 may be specifically configured to: when the mobile robot further includes an IMU (not shown), acquire the previous image collected by the infrared camera module 112 before it collected the image; acquire the previous image area of the charging stand to be identified, determined from the previous image according to the preset charging stand image feature; acquire the motion parameters collected by the IMU while moving from the first position to the second position; and determine the depth information of a second preset number of identification points in the image area according to the motion parameters and the previous image area.
  • the IMU is configured to collect the motion parameters while moving from the first position to the second position, where the first position is the spatial position of the mobile robot when the previous image was collected, and the second position is the spatial position of the mobile robot when the image was collected.
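For the IMU case, one standard way to turn the previous image area, the current image area, and the inter-frame motion into depth is two-view triangulation. The linear (DLT) sketch below assumes the motion (R, t) between the two camera positions is known from the IMU; the text leaves the exact computation open, and the intrinsics and pixel coordinates are example values:

```python
import numpy as np

def triangulate_depth(K, R, t, uv_prev, uv_curr):
    """Depth of a point in the current view, from its pixel positions in the
    previous and current images and the inter-frame motion (R, t).
    Linear (DLT) triangulation sketch."""
    P0 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])   # previous camera
    P1 = K @ np.hstack([R, t.reshape(3, 1)])            # current camera
    u0, v0 = uv_prev
    u1, v1 = uv_curr
    A = np.vstack([
        u0 * P0[2] - P0[0],
        v0 * P0[2] - P0[1],
        u1 * P1[2] - P1[0],
        v1 * P1[2] - P1[1],
    ])
    _, _, Vt = np.linalg.svd(A)                          # null vector of A
    X = Vt[-1]
    X = X[:3] / X[3]                                     # point in prev frame
    return (R @ X + t)[2]                                # depth in curr frame

# Assumed intrinsics; a point at depth 2 m seen before and after a 0.1 m
# sideways camera translation.
K = np.array([[300.0, 0.0, 80.0], [0.0, 300.0, 60.0], [0.0, 0.0, 1.0]])
R = np.eye(3)
t = np.array([-0.1, 0.0, 0.0])
depth = triangulate_depth(K, R, t, uv_prev=(110.0, 45.0), uv_curr=(95.0, 45.0))
```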
  • the processor 110 is specifically configured to determine, according to the second preset formula, the spatial positions of the second preset number of identification points in the image area; where:
  • (X_i, Y_i, Z_i) represents the spatial position of the i-th identification point in the image area;
  • Z_i represents the depth information of the i-th identification point in the image area;
  • K represents the preset internal parameter matrix of the infrared camera module; and
  • (u_i, v_i) represents the image position of the i-th identification point in the image area.
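These symbol definitions correspond to standard pinhole back-projection: scaling the normalized ray K⁻¹·[u, v, 1]ᵀ by the depth Z_i. A minimal sketch, with an assumed example intrinsic matrix:

```python
import numpy as np

def backproject(K, uv, depth):
    """Recover the spatial position (X, Y, Z) of an identification point from
    its image position (u, v) and depth Z:
        [X, Y, Z]^T = Z * K^{-1} [u, v, 1]^T
    Standard pinhole back-projection consistent with the stated symbols."""
    u, v = uv
    return depth * (np.linalg.inv(K) @ np.array([u, v, 1.0]))

K = np.array([[300.0, 0.0, 80.0], [0.0, 300.0, 60.0], [0.0, 0.0, 1.0]])
pt = backproject(K, uv=(110.0, 45.0), depth=2.0)
```

Projecting `pt` back through K reproduces the original pixel, which is an easy sanity check on the intrinsics.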
  • the processor 110 is specifically configured to:
  • determine, according to the determined image area, the spatial position and spatial orientation of the charging stand to be identified relative to the mobile robot.
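Once the spatial position and orientation are available, a docking controller typically reduces them to a planar bearing and a yaw angle. An illustrative extraction under an assumed camera-axis convention (x right, y down, z forward) that the text does not specify:

```python
import numpy as np

def docking_bearing_and_yaw(R, t):
    """Planar bearing to the dock and the dock's yaw, extracted from the
    recognized pose (R, t). Assumes the common camera convention with yaw
    measured about the vertical y axis; an illustrative post-processing
    step, not spelled out in the text."""
    bearing = np.degrees(np.arctan2(t[0], t[2]))      # direction of the dock
    yaw = np.degrees(np.arctan2(R[0, 2], R[2, 2]))    # dock's facing angle
    return bearing, yaw

# Dock 1 m ahead and 1 m to the right, rotated 30 degrees about the vertical.
c, s = np.cos(np.radians(30.0)), np.sin(np.radians(30.0))
R_y30 = np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])
bearing, yaw = docking_bearing_and_yaw(R_y30, np.array([1.0, 0.0, 1.0]))
```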
  • the mobile robot may further include: an infrared emitter (not shown) capable of emitting infrared light.
  • the infrared light emitted by the infrared emitter can illuminate the charging stand to be identified.
  • the charging stand to be identified may include a reflective material, and the reflective material can return the reflected light along the optical path of the incident light.
  • each side of the charging stand to be identified may include a reflective material, and the pattern features of the reflective materials on the respective sides are different.
  • the embodiment of the present application further provides a computer readable storage medium.
  • the computer readable storage medium stores a computer program.
  • when the computer program is executed by a processor, the charging stand identification method provided by the embodiments of the present application is implemented. The method includes:
  • acquiring an image collected by the infrared camera module;
  • determining an image area of the charging stand to be identified from the image according to a preset charging stand image feature, where the charging stand to be identified can emit infrared light; and
  • determining the position information of the charging stand to be identified according to the determined image area.
  • with the above solution, the mobile robot can acquire an image, collected by the infrared camera module, that contains the charging stand to be identified, which can emit infrared light; determine the image area of the charging stand to be identified from the image according to the preset charging stand image feature; and determine the position information of the charging stand to be identified according to that image area.
  • since the image acquisition range of the infrared camera module is a conical volume with the infrared camera module at its apex, even when the ground on which the mobile robot stands is tilted, the charging stand to be identified can still fall within the image acquisition range of the infrared camera module, so its position information can still be recognized.
  • moreover, because the charging stand to be identified can emit infrared light, it has distinct image features in the image collected by the infrared camera module, which improves the accuracy of identifying its position information from the image.
  • the embodiments of the present application further provide a computer program which, when executed by a processor, implements the charging stand identification method provided by the embodiments of the present application.
  • the method includes:
  • acquiring an image collected by the infrared camera module;
  • determining an image area of the charging stand to be identified from the image according to a preset charging stand image feature, where the charging stand to be identified can emit infrared light; and
  • determining the position information of the charging stand to be identified according to the determined image area.
  • with the above solution, the mobile robot can acquire an image, collected by the infrared camera module, that contains the charging stand to be identified, which can emit infrared light; determine the image area of the charging stand to be identified from the image according to the preset charging stand image feature; and determine the position information of the charging stand to be identified according to that image area.
  • since the image acquisition range of the infrared camera module is a conical volume with the infrared camera module at its apex, even when the ground on which the mobile robot stands is tilted, the charging stand to be identified can still fall within the image acquisition range of the infrared camera module, so its position information can still be recognized.
  • moreover, because the charging stand to be identified can emit infrared light, it has distinct image features in the image collected by the infrared camera module, which improves the accuracy of identifying its position information from the image.

Abstract

A charging base identification method and a mobile robot. The mobile robot comprises an infrared photographing module (112). The method comprises: obtaining an image captured by the infrared photographing module (S201); determining an image region of a charging base to be identified from the image according to a preset charging base image feature (S202); and determining position information of the charging base to be identified according to the determined image region (S203), wherein the charging base to be identified can emit infrared light. The method can improve the success rate of identifying a charging base.

Description

Charging base identification method and mobile robot
This application claims priority to Chinese Patent Application No. 201810202018.8, filed with the China Patent Office on March 12, 2018 and entitled "Charging base identification method and mobile robot", which is incorporated herein by reference in its entirety.
Technical field
The present application relates to the field of mobile robot control technologies, and in particular to a charging base identification method and a mobile robot.
Background
A mobile robot is a machine that can perform work according to a predetermined program. A mobile robot can move, and it can perform many types of tasks while moving. For example, a cleaning robot can clean the floor as it moves, and a care robot can transport medical instruments or patients as it moves.
In the field of mobile robots, automatic recharging technology improves the intelligence of a mobile robot. The automatic recharging process is as follows: when the battery level of the mobile robot falls below a threshold, the mobile robot moves to the charging base according to its program and completes the charging task, and after charging it resumes its work. To achieve automatic recharging, the mobile robot must be able to identify the position of the charging base.
In the related art, a mobile robot can identify the position of the charging base by scanning a marker on the charging base with a lidar mounted on the mobile robot. For example, the lidar on the mobile robot in FIG. 1a can emit a laser scanning line; when the scanning line strikes the black-and-white marker on the charging base, the lidar receives the laser signal reflected by the charging base and determines the position of the charging base from the reflected signal.
The position of the charging base can be identified in this way. However, because the laser scanning line sweeps by rotating within a plane, when the ground on which the mobile robot stands is tilted, the rotating scanning line may fail to strike the marker on the charging base, and the position of the charging base cannot be identified. Referring to FIG. 1b, the ground on which the mobile robot stands slopes downward, so the laser scanning line cannot reach the marker on the charging base, and the mobile robot cannot identify the position of the charging base. The recognition success rate of this approach is therefore not high enough.
Summary
An objective of the embodiments of the present application is to provide a charging base identification method and a mobile robot, so as to improve the success rate of identifying a charging base.
To achieve the above objective, an embodiment of the present application provides a charging base identification method, applied to a mobile robot that includes an infrared camera module; the method includes:
acquiring an image collected by the infrared camera module;
determining an image area of the charging base to be identified from the image according to a preset charging base image feature, where the charging base to be identified can emit infrared light; and
determining position information of the charging base to be identified according to the determined image area.
Optionally, the step of determining the image area of the charging base to be identified from the image according to the preset charging base image feature includes:
determining the image area of the charging base to be identified from the image according to a preset charging base pixel feature and/or a preset charging base size feature.
Optionally, the step of determining the position information of the charging base to be identified according to the determined image area includes:
determining the image positions of a first preset number of identification points from the image area according to the first preset number of identification points on the charging base to be identified, acquired in advance; and
determining the position information of the charging base to be identified relative to the mobile robot according to the spatial positions of the first preset number of identification points on the charging base to be identified, the image positions of the first preset number of identification points, and a first preset formula.
Optionally, the step of determining the position information of the charging base to be identified relative to the mobile robot according to the spatial positions of the first preset number of identification points on the charging base to be identified, the image positions of the first preset number of identification points, and the first preset formula includes:
determining the rotation matrix R and the translation matrix t of the charging base to be identified relative to the mobile robot according to the following first preset formula, to obtain the position information of the charging base to be identified relative to the mobile robot:
$$\{R, t\} = \operatorname*{argmin}_{R,\,t} \sum_{i=1}^{n} \left\| (u_i, v_i) - (u_i', v_i') \right\|^2,$$
$$[X_i', Y_i', Z_i']^{T} = R\,[X_i, Y_i, Z_i]^{T} + t, \qquad [u_i', v_i', 1]^{T} = \tfrac{1}{Z_i'}\, K\,[X_i', Y_i', Z_i']^{T}$$
where (X_i, Y_i, Z_i) represents the spatial position of the i-th identification point of the first preset number of identification points on the charging base to be identified; (u_i, v_i) represents the image position of the i-th identification point of the first preset number of identification points; K represents the preset internal parameter matrix of the infrared camera module; argmin represents minimization of the projection error function; n represents the first preset number; (X_i', Y_i', Z_i') represents the coordinates obtained by performing the coordinate transformation on (X_i, Y_i, Z_i); and (u_i', v_i') represents the projection coordinates of (X_i, Y_i, Z_i) on the imaging plane of the image.
Optionally, the step of determining the position information of the charging base to be identified according to the determined image area includes:
acquiring depth information of a second preset number of identification points in the image area;
determining the spatial positions of the second preset number of identification points in the image area according to the depth information and the image positions of the second preset number of identification points in the image area, following a second preset formula; and
determining the position information of the charging base to be identified according to the spatial positions of the second preset number of identification points in the image area.
Optionally, the step of acquiring the depth information of the second preset number of identification points in the image area includes:
when the infrared camera module further has a depth sensing function, acquiring a depth image collected by the infrared camera module and corresponding to the image, and obtaining the depth information of the second preset number of identification points in the image area from the depth image, where the depth image includes the depth information of each identification point; or
when the infrared camera module includes a left camera module and a right camera module, the image includes a first image collected by the left camera module and a second image collected by the right camera module, and the image area is a first image area determined from the first image or a second image area determined from the second image; determining the depth information of the second preset number of identification points in the image area according to the different image positions of corresponding pixel points in the first image area and the second image area; or
when the mobile robot further includes an inertial measurement unit (IMU), acquiring the previous image collected by the infrared camera module before it collected the image; acquiring the previous image area of the charging base to be identified, determined from the previous image according to the preset charging base image feature; acquiring the motion parameters collected by the IMU while moving from a first position to a second position; and determining the depth information of the second preset number of identification points in the image area according to the motion parameters and the previous image area, where the first position is the spatial position of the mobile robot when the previous image was collected, and the second position is the spatial position of the mobile robot when the image was collected.
Optionally, the step of determining the spatial positions of the second preset number of identification points in the image area according to the depth information and the image positions of the second preset number of identification points in the image area, following the second preset formula, includes:
determining the spatial positions of the second preset number of identification points in the image area according to the following second preset formula:
$$[X_i, Y_i, Z_i]^{T} = Z_i\, K^{-1}\, [u_i, v_i, 1]^{T}$$
where (X_i, Y_i, Z_i) represents the spatial position of the i-th identification point in the image area; Z_i is the depth information of the i-th identification point in the image area; K represents the preset internal parameter matrix of the infrared camera module; and (u_i, v_i) represents the image position of the i-th identification point in the image area.
Optionally, the step of determining the position information of the charging base to be identified according to the determined image area includes:
determining, according to the determined image area, the spatial position and spatial orientation of the charging base to be identified relative to the mobile robot.
Optionally, the mobile robot further includes an infrared emitter capable of emitting infrared light, and the infrared light emitted by the infrared emitter can illuminate the charging base to be identified.
Optionally, the charging base to be identified includes a reflective material, and the reflective material can return reflected light along the optical path of the incident light.
Optionally, each side of the charging base to be identified includes the reflective material, and the pattern features of the reflective material on the respective sides are different.
To achieve the above objective, an embodiment of the present application further provides a mobile robot, including a processor, a memory, and an infrared camera module;
the infrared camera module is configured to collect an image and store the image in the memory; and
the processor is configured to acquire the image from the memory, determine the image area of the charging base to be identified from the image according to a preset charging base image feature, and determine the position information of the charging base to be identified according to the determined image area, where the charging base to be identified can emit infrared light.
Optionally, the processor is specifically configured to:
determine the image area of the charging base to be identified from the image according to a preset charging base pixel feature and/or a preset charging base size feature.
Optionally, the processor is specifically configured to:
determine the image positions of a first preset number of identification points from the image area according to the first preset number of identification points on the charging base to be identified, acquired in advance; and determine the position information of the charging base to be identified relative to the mobile robot according to the spatial positions of the first preset number of identification points on the charging base to be identified, the image positions of the first preset number of identification points, and a first preset formula.
Optionally, the processor is specifically configured to:
determine the rotation matrix R and the translation matrix t of the charging base to be identified relative to the mobile robot according to the following first preset formula, to obtain the position information of the charging base to be identified relative to the mobile robot:
$$\{R, t\} = \operatorname*{argmin}_{R,\,t} \sum_{i=1}^{n} \left\| (u_i, v_i) - (u_i', v_i') \right\|^2,$$
$$[X_i', Y_i', Z_i']^{T} = R\,[X_i, Y_i, Z_i]^{T} + t, \qquad [u_i', v_i', 1]^{T} = \tfrac{1}{Z_i'}\, K\,[X_i', Y_i', Z_i']^{T}$$
where (X_i, Y_i, Z_i) represents the spatial position of the i-th identification point of the first preset number of identification points on the charging base to be identified; (u_i, v_i) represents the image position of the i-th identification point of the first preset number of identification points; K represents the preset internal parameter matrix of the infrared camera module; argmin represents minimization of the projection error function; n represents the first preset number; (X_i', Y_i', Z_i') represents the coordinates obtained by performing the coordinate transformation on (X_i, Y_i, Z_i); and (u_i', v_i') represents the projection coordinates of (X_i, Y_i, Z_i) on the imaging plane of the image.
Optionally, the processor is specifically configured to:
acquire depth information of a second preset number of identification points in the image area;
determine the spatial positions of the second preset number of identification points in the image area according to the depth information and the image positions of the second preset number of identification points in the image area, following a second preset formula; and
determine the position information of the charging base to be identified according to the spatial positions of the second preset number of identification points in the image area.
Optionally, when the infrared camera module further has a depth sensing function, the infrared camera module is further configured to collect a depth image corresponding to the image and store it in the memory; and the processor is specifically configured to acquire the depth image from the memory and obtain, from the depth image, the depth information of the second preset number of identification points in the image area, where the depth image includes the depth information of each identification point; or
the processor is specifically configured to: when the infrared camera module includes a left camera module and a right camera module, the image includes a first image collected by the left camera module and a second image collected by the right camera module, and the image area is a first image area determined from the first image or a second image area determined from the second image; determine the depth information of the second preset number of identification points in the image area according to the different image positions of corresponding identification points in the first image area and the second image area; or
the processor is specifically configured to: when the mobile robot further includes an IMU, acquire the previous image collected by the infrared camera module before it collected the image; acquire the previous image area of the charging base to be identified, determined from the previous image according to the preset charging base image feature; acquire the motion parameters collected by the IMU while moving from a first position to a second position; and determine the depth information of the second preset number of identification points in the image area according to the motion parameters and the previous image area; the IMU is configured to collect the motion parameters while moving from the first position to the second position, where the first position is the spatial position of the mobile robot when the previous image was collected, and the second position is the spatial position of the mobile robot when the image was collected.
Optionally, the processor is specifically configured to:
determine the spatial positions of the second preset number of identification points in the image area according to the following second preset formula:
$$[X_i, Y_i, Z_i]^{T} = Z_i\, K^{-1}\, [u_i, v_i, 1]^{T}$$
where (X_i, Y_i, Z_i) represents the spatial position of the i-th identification point in the image area; Z_i is the depth information of the i-th identification point in the image area; K represents the preset internal parameter matrix of the infrared camera module; and (u_i, v_i) represents the image position of the i-th identification point in the image area.
Optionally, the processor is specifically configured to:
determine, according to the determined image area, the spatial position and spatial orientation of the charging base to be identified relative to the mobile robot.
Optionally, the mobile robot further includes an infrared emitter capable of emitting infrared light, and the infrared light emitted by the infrared emitter can illuminate the charging base to be identified.
Optionally, the charging base to be identified includes a reflective material, and the reflective material can return reflected light along the optical path of the incident light.
Optionally, each side of the charging base to be identified includes the reflective material, and the pattern features of the reflective material on the respective sides are different.
To achieve the above objective, an embodiment of the present application further provides a computer-readable storage medium storing a computer program. When the computer program is executed by a processor, the charging base identification method provided by the embodiments of the present application is implemented. The method includes:
acquiring an image collected by the infrared camera module;
determining an image area of the charging base to be identified from the image according to a preset charging base image feature, where the charging base to be identified can emit infrared light; and
determining the position information of the charging base to be identified according to the determined image area.
To achieve the above objective, an embodiment of the present application further provides a computer program. When the computer program is executed by a processor, the charging base identification method provided by the embodiments of the present application is implemented. The method includes:
acquiring an image collected by the infrared camera module;
determining an image area of the charging base to be identified from the image according to a preset charging base image feature, where the charging base to be identified can emit infrared light; and
determining the position information of the charging base to be identified according to the determined image area.
With the charging base identification method and mobile robot provided by the embodiments of the present application, the mobile robot can acquire an image, collected by the infrared camera module, that contains the charging base to be identified, which can emit infrared light; determine the image area of the charging base to be identified from the image according to the preset charging base image feature; and determine the position information of the charging base to be identified according to that image area. Since the image acquisition range of the infrared camera module is a conical volume with the infrared camera module at its apex, even when the ground on which the mobile robot stands is tilted, the charging base to be identified can still fall within the image acquisition range of the infrared camera module, so its position information can still be recognized. Moreover, because the charging base to be identified can emit infrared light, it has distinct image features in the image collected by the infrared camera module, which improves the accuracy of identifying its position information from the image. Of course, implementing any product or method of the present application does not necessarily require all of the above advantages to be achieved at the same time.
BRIEF DESCRIPTION OF THE DRAWINGS
In order to describe the embodiments of the present application or the technical solutions in the prior art more clearly, the drawings required for the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are merely some embodiments of the present application; those of ordinary skill in the art may obtain other drawings from these drawings without creative effort.
Fig. 1a and Fig. 1b are reference diagrams of a mobile robot identifying a charging stand using a lidar;
Fig. 2 is a schematic flowchart of a charging stand identification method according to an embodiment of the present application;
Fig. 3a is a reference diagram of markers on a charging stand to be identified according to an embodiment of the present application;
Fig. 3b is a reference diagram, captured by an infrared camera, containing the markers of Fig. 3a;
Fig. 3c is a schematic diagram of the relative position between a mobile robot and a charging stand;
Fig. 3d is a schematic diagram of a recharge path determined by the mobile robot;
Fig. 4 is a schematic diagram of the mounting positions of an infrared emitter and an infrared camera module according to an embodiment of the present application;
Fig. 5a is a schematic structural diagram of a reflective sticker coated with glass microbeads;
Fig. 5b is a schematic diagram of the principle by which a crystal reflects light;
Fig. 5c is a schematic diagram of the principle by which a prism reflects light;
Fig. 6 is a schematic flowchart of step S203 in Fig. 2;
Fig. 7 is a schematic diagram of the positions of the identification points on a charging stand to be identified according to an embodiment of the present application;
Fig. 8 is another schematic flowchart of step S203 in Fig. 2;
Fig. 9a and Fig. 9b are respectively an imaging principle diagram and a depth calculation principle diagram of a binocular camera;
Fig. 9c is a schematic diagram of the principle of calculating depth information in a monocular camera + IMU embodiment;
Fig. 10a is a schematic diagram of the relative position between an infrared camera module and a charging stand to be identified;
Fig. 10b is a schematic cross-section diagram of a charging stand to be identified according to an embodiment of the present application;
Fig. 11 is a schematic structural diagram of a mobile robot according to an embodiment of the present application.
DETAILED DESCRIPTION
The technical solutions in the embodiments of the present application are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, rather than all, of the embodiments of the present application. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present application without creative effort fall within the protection scope of the present application.
In order to improve the success rate of identifying a charging stand, the embodiments of the present application provide a charging stand identification method and a mobile robot. The charging stand identification method provided by the embodiments of the present application is described in detail below through specific embodiments.
Fig. 2 is a schematic flowchart of a charging stand identification method according to an embodiment of the present application. This method embodiment is applied to a mobile robot that includes an infrared camera module. The infrared camera module may be mounted at or near the front of the mobile robot, and may be an infrared camera head, an infrared video camera, or the like. The infrared camera module is a camera module that images using near-infrared light; generally, light with a wavelength of 0.76 μm to 1.5 μm is referred to as near-infrared light. Since the optical sensor in an ordinary camera typically responds to both the near-infrared and the visible bands, an infrared camera module can be obtained by fitting an ordinary camera with a filter that blocks visible light.
The charging stand identification method provided in this embodiment includes the following steps S201 to S203.
Step S201: acquiring an image captured by the infrared camera module.
The infrared camera module may capture images at a preset interval, and the mobile robot may acquire the captured images at a preset interval. When the infrared camera module is located at the front of the mobile robot, the mobile robot can acquire images, captured by the module, of the scene in the robot's direction of motion.
An image captured by the infrared camera module can be understood as an image of the objects in the environment around the mobile robot. Since the mobile robot is movable, it may be far from or close to the charging stand to be identified, and the charging stand may or may not lie within the image acquisition range of the infrared camera module. Therefore, the image may or may not contain the charging stand to be identified; when it does, the charging stand may appear at any position in the image.
In this embodiment, the infrared camera module may be a monocular camera or a binocular camera.
Step S202: determining an image area of the charging stand to be identified from the image according to a preset charging stand image feature.
Here, the charging stand to be identified is capable of emitting infrared light. When it emits infrared light, the image area where it appears shows up as a highlighted region, which makes its features in the image more distinct and easier to recognize.
The area of the charging stand that emits infrared light may be the entire charging stand, or it may be a marker area on the charging stand. Arranging the markers in a specific shape or pattern makes the charging stand recognizable; for example, the markers may be four regular rectangular blocks arranged in a preset order.
In one embodiment, the infrared light emitted by the charging stand to be identified is infrared light emitted by the charging stand itself. For example, the charging stand may contain an internal infrared light emitter, so that the charging stand actively emits infrared light outward and exhibits distinct highlight features in the image. In another embodiment, the infrared light emitted by the charging stand is light emitted by another, external device and reflected by the charging stand. For example, the charging stand may reflect the infrared light emitted by an external infrared emitter. So that the charging stand also presents highlight features in the image when merely reflecting infrared light, special materials may be arranged on the charging stand.
When the charging stand to be identified emits infrared light, the image area where it appears is a highlighted region. The markers on the charging stand may have a preset shape. For example, Fig. 3a is a reference diagram of markers on a charging stand to be identified according to an embodiment of the present application, in which the black charging stand carries four white rectangular markers that can emit infrared light. Fig. 3b is a reference diagram, captured by the infrared camera module, containing the markers of Fig. 3a, in which the four rectangular highlighted regions are the markers on the charging stand.
If the markers on the charging stand have a preset shape, the area of the charging stand in the image is a highlighted region of that shape. To identify the image area of the charging stand, the preset charging stand image feature may be this preset shape.
When the markers of the charging stand to be identified are detected in the image, the areas enclosed by the outer frames of the markers may be taken as the image area of the charging stand. For example, when the charging stand includes four rectangular markers, the determined image area includes the image areas of the four rectangular markers.
In one implementation, step S202 may specifically include: judging, according to the preset charging stand image feature, whether the charging stand to be identified is present in the image; if it is present, determining its image area from the image; if it is not present, the image may be left unprocessed or discarded.
The charging stand image feature may include a charging stand pixel feature; for example, the pixel feature may be that the pixel values are greater than a preset pixel threshold. The charging stand image feature may also include a charging stand size feature; for example, the size feature may include at least one of an aspect-ratio feature, a length-range feature, and a width-range feature of the image area. The preset pixel threshold may be determined in advance from the pixel values of the highlighted portion of a sample charging stand in an image; for example, the preset pixel threshold may be 200 or another value.
A pixel feature is a feature determined based on the magnitude of pixel values; the charging stand pixel feature is thus a feature determined based on the magnitude of the pixel values of the charging stand in the image.
Accordingly, step S202 may be: determining the image area of the charging stand to be identified from the image according to the preset charging stand pixel feature and/or the preset charging stand size feature. Specifically, the image may be scanned to detect an area having the charging stand pixel feature and/or the charging stand size feature, and this area is taken as the image area of the charging stand to be identified.
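As an illustration, the scan described above can be sketched as a threshold-and-filter pass over the image (a minimal pure-Python sketch; the threshold of 200 echoes the example value mentioned earlier, while the aspect-ratio bounds are hypothetical, since the application does not fix them):

```python
def find_marker_regions(img, thresh=200, min_aspect=1.5, max_aspect=4.0):
    """Detect bright regions that satisfy the charging stand pixel feature
    (pixel value > thresh) and a size feature (aspect ratio of the
    bounding box). img is a 2D list of grayscale values (0-255).
    Returns bounding boxes as (row_min, col_min, row_max, col_max)."""
    rows, cols = len(img), len(img[0])
    seen = [[False] * cols for _ in range(rows)]
    boxes = []
    for r in range(rows):
        for c in range(cols):
            if img[r][c] > thresh and not seen[r][c]:
                # flood-fill one connected bright component
                stack, comp = [(r, c)], []
                seen[r][c] = True
                while stack:
                    y, x = stack.pop()
                    comp.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and img[ny][nx] > thresh and not seen[ny][nx]):
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                ys = [p[0] for p in comp]
                xs = [p[1] for p in comp]
                h = max(ys) - min(ys) + 1
                w = max(xs) - min(xs) + 1
                if min_aspect <= w / h <= max_aspect:  # size-feature filter
                    boxes.append((min(ys), min(xs), max(ys), max(xs)))
    return boxes
```

Each returned box corresponds to one candidate marker; with four rectangular markers, the four boxes together form the image area of the charging stand.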
Step S203: determining position information of the charging stand to be identified according to the determined image area.
In one implementation, the position information of the charging stand to be identified may include its spatial position and spatial orientation. The spatial position is its spatial coordinates; the spatial orientation may be represented by the normal vector of the plane in which the charging stand lies. Specifically, these may be the spatial position and orientation of the charging stand relative to the mobile robot, that is, in the mobile robot coordinate system. The mobile robot coordinate system can be understood as a coordinate system whose origin is located on the mobile robot, for example at its center. In this case, step S203 may specifically be: determining, according to the determined image area, the spatial position and spatial orientation of the charging stand to be identified relative to the mobile robot.
In one implementation, step S203 may include: determining the spatial position, relative to the mobile robot, of each pixel in the image area, and taking the average of these spatial positions as the spatial position of the charging stand relative to the mobile robot; and determining a target plane from the spatial positions of the pixels in the image area, and taking the normal vector of the target plane as the spatial orientation of the charging stand relative to the mobile robot.
Here, the spatial position of each pixel in the image area relative to the mobile robot is its position in the mobile robot coordinate system.
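A minimal sketch of this computation (pure Python; taking the normal from the first three points is a simplification introduced for illustration, whereas over noisy pixel positions a least-squares plane fit would be more robust):

```python
def pose_from_points(points):
    """Charging stand pose from the 3D positions (robot frame) of the
    pixels of its image area: position = centroid of the points,
    orientation = unit normal of the plane through the first three
    (assumed non-collinear) points."""
    n = len(points)
    centroid = tuple(sum(p[k] for p in points) / n for k in range(3))
    a, b, c = points[0], points[1], points[2]
    u = (b[0] - a[0], b[1] - a[1], b[2] - a[2])
    v = (c[0] - a[0], c[1] - a[1], c[2] - a[2])
    nx = u[1] * v[2] - u[2] * v[1]  # cross product u x v
    ny = u[2] * v[0] - u[0] * v[2]
    nz = u[0] * v[1] - u[1] * v[0]
    norm = (nx * nx + ny * ny + nz * nz) ** 0.5
    return centroid, (nx / norm, ny / norm, nz / norm)
```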
After determining the spatial position and spatial orientation of the charging stand relative to the mobile robot, the mobile robot may further determine a recharge path from itself to the charging stand according to them, and drive itself along the recharge path toward the charging stand by controlling its drive components. The recharge path enables the mobile robot to directly face the charging stand on arrival, so that automatic charging of the mobile robot can be carried out.
For example, Fig. 3c shows the spatial position and orientation of a charging stand relative to a mobile robot whose front is not facing the charging stand. Based on this position and orientation, a recharge path from the robot to the charging stand can be planned. Fig. 3d shows such a recharge path: the mobile robot moves from point A to point B along the path, rotates at B to face the charging stand, and then continues straight from B to the charging stand to charge. Both Fig. 3c and Fig. 3d are top views. In this example, the charging component of the mobile robot is located at its front. Knowing the orientation of the charging stand relative to itself, the mobile robot can plan a better recharge path, so that it faces the charging stand on arrival and completes the charging operation more accurately.
Here, the mobile robot "facing" the charging stand means that the line from the front of the mobile robot to the charging stand is perpendicular to the target plane.
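The geometry of such a path can be illustrated in a top-view 2D sketch (pure Python; the standoff distance and the frame conventions are hypothetical choices for illustration):

```python
def docking_waypoint(charger_pos, charger_normal, standoff=0.5):
    """Point B of the recharge path: a point on the charging stand's
    normal line, `standoff` metres in front of the stand. The robot
    first drives to B, rotates to face the stand (so the line from its
    front to the stand is perpendicular to the target plane), then
    moves straight in along the normal."""
    px, py = charger_pos
    nx, ny = charger_normal
    norm = (nx * nx + ny * ny) ** 0.5
    nx, ny = nx / norm, ny / norm  # unit normal, pointing away from the stand
    return (px + standoff * nx, py + standoff * ny)
```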
As can be seen from the above, in this embodiment an image captured by the infrared camera module and containing a charging stand capable of emitting infrared light can be acquired; the image area of the charging stand is determined from the image according to the preset charging stand image feature; and the position information of the charging stand is determined from that image area. Because the image acquisition range of the infrared camera module is a cone whose apex is at the module, the charging stand remains within this range even when the ground under the mobile robot is inclined, so its position information can still be identified. Moreover, since the charging stand can emit infrared light, it presents distinct image features in the captured images, which improves the accuracy of identifying its position information from the image.
To improve imaging quality, in another embodiment of the present application, in the embodiment shown in Fig. 2, the mobile robot may further include an infrared emitter capable of emitting infrared light, where the emitted infrared light can illuminate the charging stand to be identified. The infrared emitter may be mounted close to the infrared camera module.
In one embodiment of the present application, the markers on the charging stand may be made of a mirror material. When both the infrared emitter and the infrared camera module directly face the charging stand, the charging stand reflects the infrared light from the emitter into the lens of the camera module; this reflection is specular. The infrared camera module can thus capture an image containing the highlighted charging stand, which makes the charging stand easier to identify.
Acting as a fill light, the infrared emitter can illuminate the surroundings of the mobile robot; when the charging stand is illuminated by infrared light, it appears more clearly in the images captured by the infrared camera module.
In another embodiment of the present application, so that the charging stand shows highlight features in the image regardless of where the mobile robot is relative to it, a retroreflective material may be used on the charging stand: for example, the whole charging stand may be covered with the retroreflective material, or only its markers may use it. A retroreflective material returns the reflected light along the path of the incident light.
For example, Fig. 4 is a schematic diagram of the mounting positions of the infrared emitter and the infrared camera module in a mobile robot according to an embodiment of the present application. When the infrared emitter emits infrared light, objects within its field of view (FOV) reflect a certain amount of infrared light into the lens of the infrared camera module, producing an infrared image in the lens. When the charging stand appears in the overlap of the emitter's FOV and the camera lens's FOV, the retroreflective material reflects the infrared light almost completely, so a highlighted region appears in the image: the charging stand is highlighted and is far more distinguishable in the image than the objects around it.
In this embodiment, the retroreflective material may be a reflective sticker, which is a high-reflectivity material whose surface is coated with a high-refractive-index layer. The high-refractive-index layer may include high-refractive-index glass microbeads, crystals, or prisms, which reflect light arriving from different directions back along its direction of incidence. For example, Fig. 5a is a schematic structural diagram of a reflective sticker coated with high-refractive-index glass microbeads: it consists of a surface resin layer, high-refractive-index glass microbeads, an adhesive layer, a reflective layer, and a sticker layer. Incident light passes through the surface resin layer onto the glass microbeads and, after being reflected by the reflective layer, exits back through the surface resin layer. The glass microbeads in the reflective sticker of Fig. 5a may also be replaced by crystals or lenses. Fig. 5b illustrates the principle by which a crystal reflects light, and Fig. 5c the principle by which a prism reflects light: after incident light strikes the crystal or prism, through reflection and refraction the light exits in the direction opposite to the incident light.
When infrared light strikes the reflective sticker on an object, almost all of it is returned, producing a highlighted region within that object's pixel area in the image. By attaching a reflective sticker with a special shape to the charging stand, that shape can be used to identify the charging stand.
In this embodiment, wherever the mobile robot is located, the charging stand reflects the infrared light emitted by the robot's infrared emitter back to the infrared camera module, so that the charging stand presents a highlighted pattern in the image.
In another embodiment of the present application, in the embodiment shown in Fig. 2, step S203, determining the position information of the charging stand to be identified according to the determined image area, may be carried out according to the flow shown in Fig. 6, and may specifically include the following steps S203a and S203b.
Step S203a: determining, from the image area, the image positions of a first preset number of identification points according to the first preset number of identification points on the charging stand to be identified, acquired in advance.
The first preset number may be a preset value, and may be a value greater than 3. The first preset number of identification points are preset points whose relative positions are fixed.
Acquiring the first preset number of identification points on the charging stand in advance can be understood as acquiring, in advance, the spatial positions of these identification points on the charging stand. These spatial positions may be coordinates in a coordinate system whose origin is one of the identification points, or coordinates in a coordinate system whose origin is any fixed point in space.
Determining the image positions of the first preset number of identification points can be understood as determining their coordinates in the image. Specifically, for each identification point, the pixel point(s) corresponding to it can be determined from the image area, and its image position determined from their coordinates. An identification point may correspond to one pixel or to multiple pixels: with one pixel, the image coordinates of that pixel can be taken directly as the image position of the identification point; with multiple pixels, the average of their image coordinates can be taken as the image position. Here, image coordinates are coordinates in the image coordinate system.
For example, referring to Fig. 7, the four rectangular frames are the markers on the charging stand, and the center points of the four rectangles serve as the preset identification points, so the first preset number is 4. Taking the identification point at the lower left corner as the coordinate origin and establishing the coordinate system shown in Fig. 7, the spatial coordinates of the identification points, in counterclockwise order, are (0, 0, 0), (L_2, 0, 0), (L_2, -L_1, 0) and (0, -L_1, 0). According to the above, the center pixels of the rectangular regions in the image area are the first preset number of identification points, and their image coordinates are (u_1, v_1), (u_2, v_2), (u_3, v_3) and (u_4, v_4), respectively.
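The averaging described above, taking an identification point's image position as the mean of the image coordinates of its marker's pixels, can be sketched as (pure Python, illustrative only):

```python
def marker_image_position(pixels):
    """Image position of one identification point: the average of the
    image coordinates (u, v) of the pixels of its marker region.
    With a single pixel this reduces to that pixel's own coordinates."""
    u = sum(p[0] for p in pixels) / len(pixels)
    v = sum(p[1] for p in pixels) / len(pixels)
    return (u, v)
```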
In this embodiment, the infrared camera module may be a monocular camera.
Step S203b: determining the position information of the charging stand to be identified relative to the mobile robot according to the spatial positions of the first preset number of identification points on the charging stand, the image positions of the first preset number of identification points, and a first preset formula.
In one implementation, step S203b may specifically include: determining, according to the following first preset formula (1), the rotation matrix R and the translation matrix t of the charging stand to be identified relative to the mobile robot, this rotation matrix R and translation matrix t being the position information of the charging stand relative to the mobile robot:

    {R, t} = argmin_{R, t} Σ_{i=1}^{n} ||(u_i, v_i) - (u_i', v_i')||²    (1)

    with [X_i', Y_i', Z_i']^T = R · [X_i, Y_i, Z_i]^T + t
    and  s · [u_i', v_i', 1]^T = K · [X_i', Y_i', Z_i']^T

where (X_i, Y_i, Z_i) denotes the spatial position of the i-th identification point on the charging stand among the first preset number of identification points, (u_i, v_i) denotes the image position of the i-th identification point, K denotes the preset intrinsic matrix of the infrared camera module, argmin denotes minimizing the projection error function, n is the first preset number, (X_i', Y_i', Z_i') denotes the coordinates obtained by applying the coordinate transformation to (X_i, Y_i, Z_i), and (u_i', v_i') denotes the image coordinates of the projection of (X_i, Y_i, Z_i) onto the imaging plane of the image (s being the projective scale factor).
Specifically, K may be

$$K=\begin{pmatrix}f_u & 0 & c_u\\ 0 & f_v & c_v\\ 0 & 0 & 1\end{pmatrix}$$

where f_u and f_v are the equivalent focal lengths of the lens of the infrared camera module along the u-axis and the v-axis of the image, respectively, and c_u and c_v are the image coordinates of the projection center of the optical axis of the lens in the image.
The rotation matrix R and the translation matrix t are obtained by solving the optimization problem of the first preset formula (1).
For example, when n is 4, the spatial coordinates of the identification points are (0, 0, 0), (L_2, 0, 0), (L_2, -L_1, 0) and (0, -L_1, 0), and their image coordinates are (u_1, v_1), (u_2, v_2), (u_3, v_3) and (u_4, v_4). From these known quantities and the first preset formula, the rotation matrix R and the translation matrix t can be obtained.
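The minimization in formula (1) is a standard perspective-n-point (PnP) objective. The following numpy sketch implements the forward projection model that the formula minimizes over, and checks that the true pose gives zero reprojection error while a perturbed pose does not; the intrinsic values and the marker sizes L1, L2 are made-up values for illustration only.

```python
import numpy as np

# Hypothetical intrinsic matrix K; focal lengths and principal point
# are made-up values for illustration only.
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])

def project(points, R, t, K):
    """Project marker points (n x 3) to image coordinates (n x 2) using the
    model of formula (1): transform by (R, t), apply K, divide by depth."""
    cam = points @ R.T + t            # (X', Y', Z') = R (X, Y, Z) + t
    uv = cam @ K.T                    # apply the intrinsic matrix
    return uv[:, :2] / uv[:, 2:3]     # perspective division -> (u', v')

def reprojection_error(obs_uv, points, R, t, K):
    """Sum of squared reprojection errors, the quantity minimized over (R, t)."""
    return float(np.sum((obs_uv - project(points, R, t, K)) ** 2))

# The four marker points of the n = 4 example (L1, L2 chosen arbitrarily)
L1, L2 = 0.1, 0.2
P = np.array([[0, 0, 0], [L2, 0, 0], [L2, -L1, 0], [0, -L1, 0]], float)

# Simulated observation from a ground-truth pose: dock 1 m ahead, no rotation
t_true = np.array([0.0, 0.0, 1.0])
obs = project(P, np.eye(3), t_true, K)

# The true pose yields zero reprojection error; a perturbed pose does not
err_true = reprojection_error(obs, P, np.eye(3), t_true, K)
err_off = reprojection_error(obs, P, np.eye(3), t_true + np.array([0.01, 0, 0]), K)
```

In practice the minimization over (R, t) would be done with an off-the-shelf PnP solver rather than by hand.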
In another implementation, the spatial orientation and spatial position of the charging stand to be identified relative to the mobile robot may be determined from the rotation matrix R and the translation matrix t, and used as the position information of the charging stand to be identified.
In summary, in this embodiment the position of the charging stand relative to the mobile robot is determined from the pre-acquired spatial positions of the identification points on the charging stand to be identified and the image positions of those identification points in the image area, so the position information of the charging stand can be determined more accurately.
In another embodiment of the present application, in the embodiment shown in FIG. 2, step S203 (determining the position information of the charging stand to be identified according to the determined image area) may be performed according to the flow diagram shown in FIG. 8, and may specifically include the following steps S203A, S203B and S203C.
Step S203A: acquire depth information of a second preset number of identification points in the image area.
The second preset number may be a preset value greater than 3, and may be the same as or different from the first preset number.
The second preset number of identification points in the image area may be pixels determined according to a preset rule, or may be preset pixels. The preset rule may be random selection of pixels, or selection of pixels at preset positions. For example, when the charging stand to be identified carries four rectangular markers, the center pixel of the rectangular region corresponding to each marker in the image area may be used as one of the second preset number of identification points. Of course, more identification points may also be selected from the image area; a larger number of identification points can improve the accuracy of the determined position of the charging stand to be identified.
The depth information may include at least one of a distance value, a distance error range, and the like. It can be understood as the distance between the point on the object corresponding to each identification point and the infrared camera module, that is, the distance between that point and the mobile robot.
Step S203B: determine the spatial positions of the second preset number of identification points in the image area according to the depth information, the image positions of those identification points, and a second preset formula.
Here, the spatial position of an identification point in the image area can be understood as the spatial coordinates of the point on the object that the identification point corresponds to.
In one implementation, step S203B may specifically include determining the spatial positions of the second preset number of identification points in the image area according to the following second preset formula (2):

$$Z_i\begin{pmatrix}u_i\\v_i\\1\end{pmatrix}=K\begin{pmatrix}X_i\\Y_i\\Z_i\end{pmatrix}\tag{2}$$

where (X_i, Y_i, Z_i) denotes the spatial position of the i-th identification point in the image area, i takes values from 1 to m, and m denotes the second preset number. The origin of the coordinate system of (X_i, Y_i, Z_i) may be established on the mobile robot, that is, at the center position of the mobile robot. Z_i denotes the depth information of the i-th identification point in the image area, (X_i, Y_i) denotes its planar coordinates in space, K denotes the preset intrinsic matrix of the infrared camera module, and (u_i, v_i) denotes the image position of the i-th identification point in the image area.
Specifically, K may be

$$K=\begin{pmatrix}f_u & 0 & c_u\\ 0 & f_v & c_v\\ 0 & 0 & 1\end{pmatrix}$$

where f_u and f_v are the equivalent focal lengths of the lens of the infrared camera module along the u-axis and the v-axis of the image, respectively, and c_u and c_v are the image coordinates of the projection center of the optical axis of the lens in the image.
For example, the image positions of the second preset number of identification points are {(u_1, v_1), (u_2, v_2), ..., (u_m, v_m)}, and their depth information is {Z_1, Z_2, ..., Z_m}, where m is the second preset number. From these known quantities and the second preset formula, the spatial coordinates of the second preset number of identification points in the image area can be obtained.
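Formula (2) can be inverted to recover a spatial point from its image position and depth, i.e. (X, Y, Z)^T = Z * K^-1 * (u, v, 1)^T. A minimal numpy sketch, with hypothetical intrinsic values, checks this by a projection round trip:

```python
import numpy as np

# Hypothetical intrinsic matrix (made-up values for illustration)
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])

def back_project(u, v, Z, K):
    """Formula (2) rearranged: (X, Y, Z)^T = Z * K^-1 * (u, v, 1)^T."""
    return Z * (np.linalg.inv(K) @ np.array([u, v, 1.0]))

# Round trip: project a known point, then recover it from (u, v) and depth Z
P = np.array([0.4, -0.2, 2.0])
uvw = K @ P
u, v = uvw[0] / uvw[2], uvw[1] / uvw[2]
P_rec = back_project(u, v, P[2], K)
```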
Step S203C: determine the position information of the charging stand to be identified according to the spatial positions of the second preset number of identification points in the image area.
In one implementation, step S203C may specifically include: taking the average of the spatial positions of the second preset number of identification points as the spatial position of the charging stand to be identified relative to the mobile robot; and determining the plane containing the spatial positions corresponding to the second preset number of identification points, taking the normal vector of that plane as the spatial orientation of the charging stand to be identified relative to the mobile robot.
In summary, this embodiment determines the spatial positions of the identification points from the depth information of pixels in the image area and determines the position information of the charging stand to be identified from those spatial positions, which improves the accuracy of the determined position information.
In the embodiments of the present application, the depth information in step S203A of the above embodiment may be acquired in various ways.
In one implementation, when the infrared camera module also has a depth sensing function, step S203A of acquiring the depth information of the second preset number of identification points in the image area may include:
acquiring a depth image, captured by the infrared camera module, corresponding to the above image, and obtaining from the depth image the depth information of the second preset number of identification points in the image area; the depth image includes the depth information of each identification point.
In this embodiment, the infrared camera module may include a depth sensor and an infrared emitter. The depth sensor may be a Time-of-Flight (TOF) sensor. The TOF sensor can compute the depth between an object and the lens from the time difference between the infrared light emitted by the infrared emitter and the infrared light received by the lens of the camera module, and generate a depth image. When generating the depth image, the TOF sensor may also modulate the infrared light to a certain frequency to obtain modulated light, emit the modulated light, and compute the depth between the object and the lens from the phase difference between the received modulated light and the emitted modulated light.
In this embodiment, the infrared camera module also obtains a depth image corresponding to the image it captures. The depth image contains the depth information of every pixel in the image, and since the identification points are chosen from the image's pixels, the depth image includes the depth information of each identification point.
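The two TOF measurement principles described above reduce to simple formulas: for direct time of flight, depth = c * dt / 2 (the light travels out and back), and for continuous-wave modulation, depth = c * phase / (4 * pi * f_mod). The sample delay and modulation frequency below are made up for the example.

```python
import math

C = 299_792_458.0  # speed of light in m/s

def tof_depth_from_time(dt_seconds):
    """Direct time of flight: the light travels to the object and back,
    so depth = c * dt / 2."""
    return C * dt_seconds / 2.0

def tof_depth_from_phase(phase_rad, f_mod_hz):
    """Continuous-wave time of flight: a phase shift of the modulated
    light corresponds to depth = c * phase / (4 * pi * f_mod)."""
    return C * phase_rad / (4.0 * math.pi * f_mod_hz)

# Made-up sample values: ~13.3 ns round trip; quarter-cycle shift at 20 MHz
d_time = tof_depth_from_time(13.34e-9)
d_phase = tof_depth_from_phase(math.pi / 2, 20e6)
```

Note the phase-based measurement is unambiguous only up to c / (2 * f_mod), the wrap-around range of the modulation.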
In another implementation, when the infrared camera module is an infrared binocular camera, that is, when it includes a left camera module and a right camera module, the above image includes a first image captured by the left camera module and a second image captured by the right camera module, and the image area is a first image area determined from the first image or a second image area determined from the second image. The first image area is the image area of the charging stand to be identified determined from the first image according to the preset charging stand image features, and the second image area is the image area of the charging stand to be identified determined from the second image according to the preset charging stand image features. In this case, step S203A of acquiring the depth information of the second preset number of identification points in the image area may include:
determining the depth information of the second preset number of identification points in the image area according to the different image positions of corresponding identification points in the first image area and the second image area.
Corresponding identification points in the first image area and the second image area means that a first identification point in the first image area and a second identification point in the second image area correspond to the same point in space; that is, they are the imaging points of the same spatial point in the first image area and the second image area, respectively. The first identification point is any identification point in the first image area, and the second identification point is any identification point in the second image area. The first and second identification points are used here only as an example and are not limiting.
For example, FIG. 9a shows the imaging principle of a binocular camera. The infrared binocular camera includes a left-eye camera and a right-eye camera, and the line connecting the center points of the two cameras is the baseline. The imaging pixel of a point P in the left-eye camera is the left-eye pixel, and its imaging pixel in the right-eye camera is the right-eye pixel. The left-eye pixel and the right-eye pixel are corresponding identification points of each other.
In one embodiment, the depth information of the second preset number of identification points in the image area may be determined according to the following third preset formula (3):

$$Z=\frac{f\cdot b}{u_L-u_R}\tag{3}$$

where Z is the depth information of one identification point in the image area; here the identification point S_1 is taken as an example. f is the focal length of the lens of the left camera module or of the right camera module; in one example, the two focal lengths are the same. b is the baseline length between the lens center of the left camera module and the lens center of the right camera module, and u_L and u_R are the image coordinates of the identification point S_1 and of its corresponding identification point S_2, respectively. For example, when the image area is the first image area, the identification point S_1 is a pixel in the first image area and the corresponding identification point S_2 is a pixel in the second image area; when the image area is the second image area, S_1 is a pixel in the second image area and S_2 is a pixel in the first image area.
The third preset formula can be applied to each of the second preset number of identification points in the image area to determine its depth information. The lens center is the aperture center.
Referring to FIG. 9b, which shows the depth calculation principle of a binocular camera: O_L is the lens center of the left camera module, O_R is the lens center of the right camera module, and b is the baseline length between the two lens centers. P_L is the imaging pixel (the identification point) of a point P on the imaging plane of the left camera module, and P_R is the imaging pixel (the corresponding identification point) of P on the imaging plane of the right camera module. Z is the distance between P and the baseline, that is, the depth information of the identification point. u_L is the coordinate of P_L and u_R is the coordinate of P_R.
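Formula (3) is a one-line helper; the sketch below assumes a rectified stereo pair, a focal length expressed in pixels, and made-up sample values.

```python
def stereo_depth(f_px, baseline_m, u_left, u_right):
    """Formula (3): Z = f * b / (u_L - u_R), for a rectified stereo pair
    with focal length f_px in pixels and baseline baseline_m in meters."""
    disparity = u_left - u_right
    if disparity <= 0:
        raise ValueError("disparity must be positive for a point in front of the cameras")
    return f_px * baseline_m / disparity

# Made-up values: f = 500 px, 6 cm baseline, 15 px disparity -> 2 m depth
Z = stereo_depth(500.0, 0.06, 340.0, 325.0)
```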
In yet another implementation, when the mobile robot further includes an Inertial Measurement Unit (IMU), step S203A of acquiring the depth information of the second preset number of identification points in the image area may include the following steps 1 to 4.
Step 1: acquire the previous image captured by the infrared camera module before the above image was captured.
The infrared camera module may capture images at a preset period. The previous image is the image captured at the acquisition time immediately preceding the current acquisition time, an acquisition time being a time at which the infrared camera module captures an image. For ease of understanding, in the following description the image captured at the previous acquisition time is denoted image A (the previous image), and the image captured at the current acquisition time is denoted image B.
The position of the mobile robot when the infrared camera module captures the previous image A is the first position, and its position when the module captures the image B is the second position. When the infrared camera module captures images while the mobile robot is moving, the first position and the second position are different.
In this embodiment, the infrared camera module may be a monocular camera.
Step 2: acquire the previous image area of the charging stand to be identified, determined from the previous image according to the preset charging stand image features.
For this step, reference may be made to step S202 in the embodiment shown in FIG. 2; the details are not repeated here.
Step 3: acquire the motion parameters collected by the IMU for the movement from the first position to the second position.
The motion parameters may include a rotation amount and a translation amount. During the movement of the mobile robot, the IMU can measure the rotation amount and the translation amount between any two positions along the robot's trajectory.
The rotation amount can be understood as a rotation matrix, and the translation amount as a translation matrix. The first position is the spatial position of the mobile robot when the previous image was captured, and the second position is its spatial position when the image was captured.
Step 4: determine the depth information of the second preset number of identification points in the image area according to the motion parameters and the previous image area.
In one implementation, step 4 may specifically include: for a target identification point in the image area, determining the depth information of the target identification point according to the image position of its corresponding identification point in the previous image area, the motion parameters, and the following fourth preset formula (4). The target identification point is any one of the second preset number of identification points in the image area, and its corresponding identification point is an identification point in the previous image area.
$$s_A x_A = s_B x_B R + t,\quad x_A = K^{-1} p'_A,\quad x_B = K^{-1} p'_B \tag{4}$$
where p'_A = (u_A, v_A, 1)^T, p'_B = (u_B, v_B, 1)^T, and T denotes matrix transposition. (u_B, v_B) is the image position of the target identification point, and (u_A, v_A) is the image position of its corresponding identification point in the previous image area; p'_A and p'_B are the homogeneous coordinates of (u_A, v_A) and (u_B, v_B), respectively. R and t are the rotation amount and the translation amount in the motion parameters, that is, R is the rotation matrix and t is the translation matrix of the motion parameters. K is the preset intrinsic matrix of the infrared camera module. s_A is the depth information of the corresponding identification point of the target identification point, and s_B is the depth information of the target identification point. x_A denotes the normalized plane coordinates of the corresponding identification point, and x_B denotes the normalized plane coordinates of the target identification point. The depth information of each of the second preset number of identification points in the image area can be obtained from the fourth preset formula.
Specifically, K may be

$$K=\begin{pmatrix}f_u & 0 & c_u\\ 0 & f_v & c_v\\ 0 & 0 & 1\end{pmatrix}$$

where f_u and f_v are the equivalent focal lengths of the lens of the infrared camera module along the u-axis and the v-axis of the image, respectively, and c_u and c_v are the image coordinates of the projection center of the optical axis of the lens in the image.
The target identification point and its corresponding identification point are such that the target identification point in the image area and the corresponding identification point in the previous image area correspond to the same point in space; that is, they are the imaging points of the same spatial point in the previous image area and the image area, respectively.
Referring to FIG. 9c, which shows the principle of computing depth information in the monocular camera + IMU embodiment: A is the first position and B is the second position. The imaging point of a point P detected at position A is p_A(u_A, v_A), and the imaging point of P detected after the mobile robot moves to position B is p_B(u_B, v_B); O_A is the lens center of the infrared camera module at position A, and O_B is its lens center at position B. According to the diagram of FIG. 9c, the depth information of the imaging points can be obtained by triangulation.
The rotation amount and the translation amount from position A to position B are R and t, respectively. According to the camera model, let x_A = K^{-1} p'_A and x_B = K^{-1} p'_B be the normalized plane coordinates of the two imaging points; then s_A x_A = s_B x_B R + t, where s_A and s_B are the depth information of the two imaging points. Since s_A and s_B are unknowns, both sides of s_A x_A = s_B x_B R + t can be left-multiplied by the antisymmetric matrix $x_A^{\wedge}$ of x_A. Because

$$x_A^{\wedge} x_A = 0,$$

this yields

$$s_B\, x_A^{\wedge} x_B R + x_A^{\wedge} t = 0,$$

and the depth information s_B can be computed by solving this equation.
After one plane of the charging stand has been marked with reflective material, that is, when the markers of the charging stand to be identified use reflective material, it is relatively easy to obtain the relative position of the mobile robot and the charging stand whenever the charging stand appears in the FOV of the infrared camera module. However, when the mobile robot moves to a position where the markers of the charging stand no longer appear in the FOV of the infrared camera module, it is difficult for the mobile robot to recognize the charging stand to be identified.
In addition, markers on a single plane have a corresponding recognizable angle, that is, a recognition FOV, such as the FOV of the charging stand to be identified shown in FIG. 10a. When the mobile robot moves to the position shown in FIG. 10a, the infrared camera module is outside the FOV of the markers of the charging stand, and it is difficult for the mobile robot to recognize the charging stand to be identified.
In order to be able to recognize the charging stand in the above cases as well, in another embodiment of the present application each side of the charging stand to be identified may carry reflective material, with a different pattern of reflective material on each side. Specifically, when recognizing the charging stand, the orientation of the charging stand can be determined according to the preset pattern features of the respective sides.
In this embodiment, the charging stand to be identified can be divided into several sides (also called sections), each marked with reflective material in a different pattern. This enlarges the FOV of the charging stand as a whole. Referring to FIG. 10b, which is a schematic diagram of the section markings of the charging stand to be identified in this embodiment: the upper part of FIG. 10b is a schematic view of the unfolded sides of the charging stand, comprising three sides, and the lower part is a top view of the charging stand. Each side of the charging stand is marked with a different pattern of reflective material. With the charging stand placed against a wall, it can be recognized within a range of 180 degrees. In the lower part of FIG. 10b, the trapezoid represents the charging stand: the longest edge of the trapezoid is the side of the charging stand against the wall, the other edges represent its other sides, and the reflective material on those other sides is marked with different patterns.
FIG. 11 is a schematic structural diagram of a mobile robot according to an embodiment of the present application. This embodiment corresponds to the method embodiment shown in FIG. 2. The mobile robot includes a processor 110, a memory 111 and an infrared camera module 112. The infrared camera module 112 may be mounted at or near the front of the mobile robot, and may be an infrared camera, an infrared video camera, or the like. The infrared camera module 112 is a camera module that images using near-infrared light. In general, light with a wavelength of 0.76 μm to 1.5 μm is referred to as near-infrared light. Since the optical sensor of an ordinary camera usually responds to light in both the near-infrared and the visible region, the infrared camera module 112 can be obtained by fitting an ordinary camera with a filter that blocks visible light.
The infrared camera module 112 is configured to capture images and store the images in the memory 111.
The processor 110 is configured to acquire an image from the memory 111, determine the image area of the charging stand to be identified from the image according to preset charging stand image features, and determine the position information of the charging stand to be identified according to the determined image area; the charging stand to be identified can emit infrared light.
The memory 111 may include a Random Access Memory (RAM), and may also include a Non-Volatile Memory (NVM), for example at least one disk memory. Optionally, the memory 111 may also be at least one storage device located remotely from the processor.
The processor 110 may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
在本申请的另一实施例中,图11所示实施例中,处理器110,具体可以用于:In another embodiment of the present application, in the embodiment shown in FIG. 11, the processor 110 may be specifically configured to:
根据预设的充电座像素特征和/或预设的充电座尺寸特征,从图像中确定待识别充电座的图像区域。The image area of the charging stand to be identified is determined from the image according to a preset charging stand pixel feature and/or a preset charging stand size feature.
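Where the specification leaves the concrete detection step open, the following sketch (Python/NumPy; the intensity threshold, area bounds, and function name are illustrative assumptions, not values from the specification) shows one way to select a candidate image area using a pixel feature and a size feature:

```python
import numpy as np

def find_dock_region(gray, min_intensity=200, min_area=50, max_area=5000):
    """Candidate image area of the dock: threshold on the pixel feature (the
    reflective dock appears bright under IR illumination), then check the
    size feature (bright-pixel count within a plausible range).
    Returns a bounding box (x0, y0, x1, y1) or None."""
    mask = gray >= min_intensity           # pixel feature: high IR intensity
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None
    x0, x1 = int(xs.min()), int(xs.max())
    y0, y1 = int(ys.min()), int(ys.max())
    area = int(mask.sum())                 # size feature: area of bright blob
    if not (min_area <= area <= max_area):
        return None
    return (x0, y0, x1, y1)
```

A production system would typically add connected-component labeling so that several bright blobs can be scored separately; the single-bounding-box version above only illustrates the pixel-plus-size filtering idea.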
在本申请的另一实施例中,图11所示实施例中,处理器110具体可以用于:In another embodiment of the present application, in the embodiment shown in FIG. 11, the processor 110 may be specifically configured to:
根据预先获取的待识别充电座上的第一预设数量个标识点,从图像区域中确定第一预设数量个标识点的图像位置;根据第一预设数量个标识点在待识别充电座上的空间位置和第一预设数量个标识点的图像位置,以及第一预设公式,确定待识别充电座相对于移动机器人的位置信息。Determine image positions of a first preset number of identification points from the image area according to the first preset number of identification points, acquired in advance, on the charging stand to be identified; and determine position information of the charging stand to be identified relative to the mobile robot according to the spatial positions of the first preset number of identification points on the charging stand to be identified, the image positions of the first preset number of identification points, and a first preset formula.
在本申请的另一实施例中,图11所示实施例中,处理器110具体可以用于:In another embodiment of the present application, in the embodiment shown in FIG. 11, the processor 110 may be specifically configured to:
根据以下第一预设公式,确定待识别充电座相对于移动机器人的旋转矩阵R和平移矩阵t:Determining the rotation matrix R and the translation matrix t of the charging stand to be identified relative to the mobile robot according to the following first preset formula:
(R, t) = argmin_{R,t} Σ_{i=1}^{n} ||(u_i, v_i) − (u_i', v_i')||²,其中 [X_i', Y_i', Z_i']^T = R·[X_i, Y_i, Z_i]^T + t,Z_i'·[u_i', v_i', 1]^T = K·[X_i', Y_i', Z_i']^T
其中,(X_i, Y_i, Z_i)表示第一预设数量个标识点中第i个标识点在待识别充电座上的空间位置,(u_i, v_i)表示第一预设数量个标识点中第i个标识点的图像位置,K表示预设的红外摄像模组的内参矩阵,argmin表示最小化投影误差函数,n表示第一预设数量,(X_i', Y_i', Z_i')表示(X_i, Y_i, Z_i)进行坐标变换后得到的坐标,(u_i', v_i')表示(X_i, Y_i, Z_i)在图像的成像平面上的投影坐标。Here, (X_i, Y_i, Z_i) denotes the spatial position of the i-th of the first preset number of identification points on the charging stand to be identified, (u_i, v_i) denotes the image position of the i-th of the first preset number of identification points, K denotes the preset intrinsic parameter matrix of the infrared camera module, argmin denotes minimizing the projection error function, n denotes the first preset number, (X_i', Y_i', Z_i') denotes the coordinates obtained by applying the coordinate transformation to (X_i, Y_i, Z_i), and (u_i', v_i') denotes the projected coordinates of (X_i, Y_i, Z_i) on the imaging plane of the image.
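The minimization described above can be sketched as follows (Python/NumPy; the Gauss-Newton solver, the numerical Jacobian, and all sample values are illustrative assumptions, since the specification does not prescribe a particular solver):

```python
import numpy as np

def rodrigues(rvec):
    """Rotation vector -> rotation matrix R (Rodrigues formula)."""
    theta = np.linalg.norm(rvec)
    if theta < 1e-12:
        return np.eye(3)
    k = rvec / theta
    K_hat = np.array([[0.0, -k[2], k[1]],
                      [k[2], 0.0, -k[0]],
                      [-k[1], k[0], 0.0]])
    return np.eye(3) + np.sin(theta) * K_hat + (1.0 - np.cos(theta)) * (K_hat @ K_hat)

def reprojection_residuals(x, pts3d, pts2d, K):
    """Stack of (u_i, v_i) - (u_i', v_i') over all n identification points."""
    R, t = rodrigues(x[:3]), x[3:]
    cam = pts3d @ R.T + t              # (X_i', Y_i', Z_i') = R (X_i, Y_i, Z_i) + t
    proj = cam @ K.T
    uv = proj[:, :2] / proj[:, 2:3]    # perspective division yields (u_i', v_i')
    return (pts2d - uv).ravel()

def estimate_pose(pts3d, pts2d, K, iters=30):
    """Gauss-Newton minimization of the summed squared reprojection error."""
    x = np.zeros(6)                    # [rotation vector | translation]
    for _ in range(iters):
        r = reprojection_residuals(x, pts3d, pts2d, K)
        J = np.empty((r.size, 6))
        eps = 1e-7
        for j in range(6):             # numerical Jacobian, one column per parameter
            dx = np.zeros(6)
            dx[j] = eps
            J[:, j] = (reprojection_residuals(x + dx, pts3d, pts2d, K) - r) / eps
        step, *_ = np.linalg.lstsq(J, -r, rcond=None)
        x = x + step
        if np.linalg.norm(step) < 1e-12:
            break
    return rodrigues(x[:3]), x[3:]
```

With at least four non-coplanar marker points this is the classical perspective-n-point setup; libraries such as OpenCV provide equivalent solvers.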
在本申请的另一实施例中,图11所示实施例中,处理器110具体可以用于:In another embodiment of the present application, in the embodiment shown in FIG. 11, the processor 110 may be specifically configured to:
获取图像区域中第二预设数量个标识点的深度信息;Obtaining depth information of a second preset number of identification points in the image area;
根据深度信息和图像区域中第二预设数量个标识点的图像位置,按照第二预设公式,确定图像区域中第二预设数量个标识点的空间位置;Determining, according to the depth information and the image position of the second preset number of the identification points in the image area, the spatial position of the second preset number of identification points in the image area according to the second preset formula;
根据图像区域中第二预设数量个标识点的空间位置,确定待识别充电座的位置信息。Determining the location information of the charging stand to be identified according to the spatial position of the second preset number of identification points in the image area.
在本申请的另一实施例中,图11所示实施例中,当红外摄像模组112还具有深度感知功能时,红外摄像模组112还用于采集与图像对应的深度图像,并存储至存储器111。处理器110具体用于从存储器中获取深度图像,从深度图像中获取图像区域中第二预设数量个标识点的深度信息;其中,深度图像包括各个标识点的深度信息。In another embodiment of the present application, in the embodiment shown in FIG. 11, when the infrared camera module 112 further has a depth sensing function, the infrared camera module 112 is further configured to collect a depth image corresponding to the image and store it in the memory 111. The processor 110 is specifically configured to acquire the depth image from the memory and obtain, from the depth image, depth information of a second preset number of identification points in the image area, where the depth image includes the depth information of each identification point.
在本申请的一实施例中,红外摄像模组112可以包含深度传感器(图中未示出),该深度传感器可以为TOF传感器。该深度传感器用于获取深度图像中各个像素点的深度信息。红外摄像模组112,还可以用于采集与图像对应的深度图像,并存储至存储器111。处理器110,具体可以用于从存储器111中获取深度图像,从深度图像中获取图像区域中第二预设数量个标识点的深度信息;其中,深度图像包括各个标识点的深度信息。In an embodiment of the present application, the infrared camera module 112 may include a depth sensor (not shown), which may be a TOF sensor. The depth sensor is used to acquire depth information of each pixel in the depth image. The infrared camera module 112 can also be used to collect a depth image corresponding to the image and store it in the memory 111. The processor 110 is specifically configured to obtain a depth image from the memory 111, and obtain depth information of a second preset number of identifier points in the image region from the depth image, where the depth image includes depth information of each identifier point.
在本申请的另一实施例中,图11所示实施例中,处理器110具体可以用于:当红外摄像模组包括左摄像模组和右摄像模组时(图中未示出),图像包括左摄像模组采集的第一图像和右摄像模组分别采集的第二图像,图像区域为从第一图像中确定的第一图像区域或从第二图像中确定的第二图像区域。根据第一图像区域和第二图像区域中的对应标识点的不同图像位置,确定图像区域中第二预设数量个标识点的深度信息。In another embodiment of the present application, in the embodiment shown in FIG. 11, the processor 110 may be specifically configured to: when the infrared camera module includes a left camera module and a right camera module (not shown), where the image includes a first image collected by the left camera module and a second image collected by the right camera module, and the image area is a first image area determined from the first image or a second image area determined from the second image, determine depth information of a second preset number of identification points in the image area according to the different image positions of the corresponding identification points in the first image area and the second image area.
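The specification only states that depth follows from the differing image positions of corresponding points in the two images; under the standard assumption of a rectified stereo pair (an assumption of this sketch, not stated in the patent), that relation is Z = f_x·B/d, where d is the horizontal disparity:

```python
def stereo_depth(u_left, u_right, fx, baseline):
    """Depth of one identification point from its horizontal disparity in a
    rectified stereo pair: Z = fx * baseline / (u_left - u_right).
    fx is the focal length in pixels; baseline is the distance between the
    left and right camera modules, in meters."""
    disparity = u_left - u_right
    if disparity <= 0:
        raise ValueError("point must appear further left in the right image")
    return fx * baseline / disparity
```

For example, a point at u_left = 350 and u_right = 340 with fx = 500 px and a 10 cm baseline lies 5 m away.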
在本申请的另一实施例中,图11所示实施例中,处理器110具体可以用于:当移动机器人还包括IMU(图中未示出)时,获取红外摄像模组112在采集图像之前采集的上一图像,获取根据预设的充电座图像特征从上一图像中确定的待识别充电座的上一图像区域,获取IMU采集的从第一位置到第二位置时的运动参量,根据运动参量和上一图像区域,确定图像区域中第二预设数量个标识点的深度信息。IMU,用于采集从第一位置到第二位置时的运动参量;其中,第一位置为采集上一图像时移动机器人的空间位置,第二位置为采集图像时移动机器人的空间位置。In another embodiment of the present application, in the embodiment shown in FIG. 11, the processor 110 may be specifically configured to: when the mobile robot further includes an IMU (not shown), acquire a previous image collected by the infrared camera module 112 before collecting the image, acquire a previous image area of the charging stand to be identified determined from the previous image according to the preset charging stand image feature, acquire motion parameters collected by the IMU from a first position to a second position, and determine depth information of a second preset number of identification points in the image area according to the motion parameters and the previous image area. The IMU is configured to collect the motion parameters from the first position to the second position, where the first position is the spatial position of the mobile robot when the previous image is collected, and the second position is the spatial position of the mobile robot when the image is collected.
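One way to realize depth from motion (an illustrative sketch, not the patent's prescribed method) is two-view triangulation: back-project the marker's image positions in the previous and current images into rays, then solve for the depths using the inter-frame motion (R, t) integrated from the IMU:

```python
import numpy as np

def depth_from_motion(uv_prev, uv_curr, K, R, t):
    """Triangulate the depth of one identification point seen in the previous
    and current images, given the inter-frame motion (R, t) such that a point
    P_prev in the previous camera frame maps to P_curr = R @ P_prev + t.
    Solves Z_prev * (R @ r_prev) - Z_curr * r_curr = -t in the least-squares
    sense and returns Z_curr, the depth in the current frame."""
    Kinv = np.linalg.inv(K)
    r_prev = Kinv @ np.array([uv_prev[0], uv_prev[1], 1.0])  # back-projected ray
    r_curr = Kinv @ np.array([uv_curr[0], uv_curr[1], 1.0])
    A = np.column_stack((R @ r_prev, -r_curr))               # 3x2 linear system
    depths, *_ = np.linalg.lstsq(A, -t, rcond=None)
    return depths[1]
```

The system is solvable only when the robot has actually translated between the two frames; a pure rotation gives no baseline and hence no depth.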
在本申请的另一实施例中,图11所示实施例中,处理器110具体用于:In another embodiment of the present application, in the embodiment shown in FIG. 11, the processor 110 is specifically configured to:
按照以下第二预设公式,确定图像区域中第二预设数量个标识点的空间位置:Determining the spatial location of the second predetermined number of identification points in the image area according to the following second preset formula:
[X_i, Y_i, Z_i]^T = Z_i · K^{-1} · [u_i, v_i, 1]^T
其中,(X_i, Y_i, Z_i)表示图像区域中第i个标识点的空间位置,Z_i表示图像区域中第i个标识点的深度信息,K表示预设的红外摄像模组的内参矩阵,(u_i, v_i)表示图像区域中第i个标识点的图像位置。Here, (X_i, Y_i, Z_i) denotes the spatial position of the i-th identification point in the image area, Z_i denotes the depth information of the i-th identification point in the image area, K denotes the preset intrinsic parameter matrix of the infrared camera module, and (u_i, v_i) denotes the image position of the i-th identification point in the image area.
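The second preset formula is a direct back-projection through the intrinsic matrix; a minimal sketch (Python/NumPy):

```python
import numpy as np

def back_project(u, v, Z, K):
    """Second preset formula: (X, Y, Z)^T = Z * K^{-1} * (u, v, 1)^T, i.e. the
    spatial position of an identification point recovered from its image
    position (u, v), its depth Z, and the camera intrinsic matrix K."""
    return Z * (np.linalg.inv(K) @ np.array([u, v, 1.0]))
```

For instance, with fx = fy = 500 and principal point (320, 240), the pixel (420, 340) at depth 2 m back-projects to (0.4, 0.4, 2.0) in camera coordinates.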
在本申请的另一实施例中,图11所示实施例中,处理器110具体用于:In another embodiment of the present application, in the embodiment shown in FIG. 11, the processor 110 is specifically configured to:
根据确定的图像区域,确定待识别充电座相对于移动机器人的空间位置和空间朝向。Based on the determined image area, the spatial position and spatial orientation of the charging stand to be identified relative to the mobile robot are determined.
在本申请的另一实施例中,图11所示实施例中,移动机器人还可以包括:能发射红外光的红外发射器(图中未示出)。红外发射器发射的红外光能够照射在待识别充电座上。In another embodiment of the present application, in the embodiment shown in FIG. 11, the mobile robot may further include: an infrared emitter (not shown) capable of emitting infrared light. The infrared light emitted by the infrared emitter can be illuminated on the charging stand to be identified.
在本申请的另一实施例中,图11所示实施例中,待识别充电座上可以包括反光材料,反光材料能够使反射光沿入射光的光路返回。In another embodiment of the present application, in the embodiment shown in FIG. 11, the charging stand to be identified may include a reflective material, and the reflective material can return the reflected light along the optical path of the incident light.
在本申请的另一实施例中,图11所示实施例中,待识别充电座的各个侧面上均可以包括反光材料,各个侧面上反光材料的图案特征不同。In another embodiment of the present application, in the embodiment shown in FIG. 11, each side of the charging stand to be identified may include a reflective material, and the pattern features of the reflective materials on the respective sides are different.
由于上述移动机器人实施例是基于方法实施例得到的,与该方法具有相同的技术效果,因此移动机器人实施例的技术效果在此不再赘述。对于移动机器人实施例而言,由于其基本相似于方法实施例,所以描述得比较简单,相关之处参见方法实施例的部分说明即可。Since the above-described mobile robot embodiment is obtained based on the method embodiment, and has the same technical effect as the method, the technical effects of the mobile robot embodiment are not described herein again. For the mobile robot embodiment, since it is basically similar to the method embodiment, the description is relatively simple, and the relevant parts can be referred to the description of the method embodiment.
本申请实施例还提供了一种计算机可读存储介质,该计算机可读存储介质内存储有计算机程序,计算机程序被处理器执行时实现本申请实施例提供的充电座识别方法。该方法包括:The embodiment of the present application further provides a computer readable storage medium. The computer readable storage medium stores a computer program. When the computer program is executed by the processor, the charging stand identification method provided by the embodiment of the present application is implemented. The method includes:
获取红外摄像模组采集的图像;Obtaining an image acquired by an infrared camera module;
根据预设的充电座图像特征,从图像中确定待识别充电座的图像区域;其中,待识别充电座能发出红外光;Determining an image area of the charging stand to be identified from the image according to a preset charging stand image feature; wherein the charging stand to be identified can emit infrared light;
根据确定的图像区域,确定待识别充电座的位置信息。The position information of the charging stand to be identified is determined according to the determined image area.
本实施例中,移动机器人可以获取红外摄像模组采集的包含能发出红外光的待识别充电座的图像,根据预设的充电座图像特征从图像中确定待识别充电座的图像区域,根据该图像区域,确定待识别充电座的位置信息。由于红外摄像模组的图像采集范围是以红外摄像模组为顶点的圆锥形范围,当移动机器人所在地面存在倾斜时,待识别充电座也能处在红外摄像模组的图像采集范围内,进而能够实现对待识别充电座的位置信息识别。并且,待识别充电座能发出红外光,在红外摄像模组采集的图像中能发出红外光的待识别充电座具有较明显的图像特征,因此从图像中识别待识别充电座的位置信息时,能够提高识别的准确性。In this embodiment, the mobile robot can acquire an image, collected by the infrared camera module, that contains the charging stand to be identified, which can emit infrared light; determine the image area of the charging stand to be identified from the image according to the preset charging stand image feature; and determine the position information of the charging stand to be identified according to the image area. Since the image collection range of the infrared camera module is a conical range with the infrared camera module at its apex, even when the ground on which the mobile robot is located is tilted, the charging stand to be identified can still fall within the image collection range of the infrared camera module, so that its position information can be identified. Moreover, because the charging stand to be identified can emit infrared light, it has comparatively distinct image features in the image collected by the infrared camera module; therefore, when the position information of the charging stand to be identified is identified from the image, the accuracy of the identification can be improved.
本申请实施例还提供了一种计算机程序,计算机程序被处理器执行时实现本申请实施例提供的充电座识别方法。该方法包括:The embodiment of the present application further provides a computer program, which is implemented by the processor to implement the charging stand identification method provided by the embodiment of the present application. The method includes:
获取红外摄像模组采集的图像;Obtaining an image acquired by an infrared camera module;
根据预设的充电座图像特征,从图像中确定待识别充电座的图像区域;其中,待识别充电座能发出红外光;Determining an image area of the charging stand to be identified from the image according to a preset charging stand image feature; wherein the charging stand to be identified can emit infrared light;
根据确定的图像区域,确定待识别充电座的位置信息。The position information of the charging stand to be identified is determined according to the determined image area.
本实施例中,移动机器人可以获取红外摄像模组采集的包含能发出红外光的待识别充电座的图像,根据预设的充电座图像特征从图像中确定待识别充电座的图像区域,根据该图像区域,确定待识别充电座的位置信息。由于红外摄像模组的图像采集范围是以红外摄像模组为顶点的圆锥形范围,当移动机器人所在地面存在倾斜时,待识别充电座也能处在红外摄像模组的图像采集范围内,进而能够实现对待识别充电座的位置信息识别。并且,待识别充电座能发出红外光,在红外摄像模组采集的图像中能发出红外光的待识别充电座具有较明显的图像特征,因此从图像中识别待识别充电座的位置信息时,能够提高识别的准确性。In this embodiment, the mobile robot can acquire an image, collected by the infrared camera module, that contains the charging stand to be identified, which can emit infrared light; determine the image area of the charging stand to be identified from the image according to the preset charging stand image feature; and determine the position information of the charging stand to be identified according to the image area. Since the image collection range of the infrared camera module is a conical range with the infrared camera module at its apex, even when the ground on which the mobile robot is located is tilted, the charging stand to be identified can still fall within the image collection range of the infrared camera module, so that its position information can be identified. Moreover, because the charging stand to be identified can emit infrared light, it has comparatively distinct image features in the image collected by the infrared camera module; therefore, when the position information of the charging stand to be identified is identified from the image, the accuracy of the identification can be improved.
需要说明的是,在本文中,诸如第一和第二等之类的关系术语仅仅用来将一个实体或者操作与另一个实体或操作区分开来,而不一定要求或者暗示这些实体或操作之间存在任何这种实际的关系或者顺序。而且,术语“包括”、“包含”或者任何其他变体意在涵盖非排他性的包含,从而使得包括一系列要素的过程、方法、物品或者设备不仅包括那些要素,而且还包括没有明确列出的其他要素,或者是还包括为这种过程、方法、物品或者设备所固有的要素。在没有更多限制的情况下,由语句“包括一个……”限定的要素,不排除在包括所述要素的过程、方法、物品或者设备中还存在另外的相同要素。It should be noted that, in this document, relational terms such as first and second are used merely to distinguish one entity or operation from another, and do not necessarily require or imply any such actual relationship or order between these entities or operations. Moreover, the terms "comprise", "include", or any other variant thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or device that comprises a series of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article, or device. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of additional identical elements in the process, method, article, or device that comprises the element.
本说明书中的各个实施例均采用相关的方式描述,各个实施例之间相同相似的部分互相参见即可,每个实施例重点说明的都是与其他实施例的不同之处。The various embodiments in the present specification are described in a related manner, and the same or similar parts between the various embodiments may be referred to each other, and each embodiment focuses on the differences from the other embodiments.
以上所述仅为本申请的较佳实施例而已,并非用于限定本申请的保护范围。凡在本申请的精神和原则之内所做的任何修改、等同替换、改进等,均包含在本申请的保护范围内。The above description is only the preferred embodiment of the present application, and is not intended to limit the scope of the present application. Any modifications, equivalent substitutions, improvements, etc. made within the spirit and principles of the present application are included in the scope of the present application.

Claims (24)

  1. 一种充电座识别方法,其特征在于,应用于移动机器人,所述移动机器人包括:红外摄像模组;所述方法包括:A charging stand recognition method is characterized in that it is applied to a mobile robot, and the mobile robot includes: an infrared camera module; the method includes:
    获取所述红外摄像模组采集的图像;Obtaining an image acquired by the infrared camera module;
    根据预设的充电座图像特征,从所述图像中确定待识别充电座的图像区域;其中,所述待识别充电座能发出红外光;Determining an image area of the charging stand to be identified from the image according to a preset charging stand image feature; wherein the charging stand to be identified can emit infrared light;
    根据确定的图像区域,确定所述待识别充电座的位置信息。Determining the location information of the charging stand to be identified according to the determined image area.
  2. 根据权利要求1所述的方法,其特征在于,所述根据预设的充电座图像特征,从所述图像中确定所述待识别充电座的图像区域的步骤,包括:The method according to claim 1, wherein the step of determining an image area of the charging stand to be identified from the image according to a preset charging stand image feature comprises:
    根据预设的充电座像素特征和/或预设的充电座尺寸特征,从所述图像中确定所述待识别充电座的图像区域。The image area of the charging stand to be identified is determined from the image according to a preset charging stand pixel feature and/or a preset charging stand size feature.
  3. 根据权利要求1所述的方法,其特征在于,所述根据确定的图像区域,确定所述待识别充电座的位置信息的步骤,包括:The method according to claim 1, wherein the determining the location information of the charging stand to be identified according to the determined image area comprises:
    根据预先获取的所述待识别充电座上的第一预设数量个标识点,从所述图像区域中确定所述第一预设数量个标识点的图像位置;Determining an image position of the first preset number of identification points from the image area according to the first preset number of identification points on the to-be-identified charging stand acquired in advance;
    根据所述第一预设数量个标识点在所述待识别充电座上的空间位置和所述第一预设数量个标识点的图像位置,以及第一预设公式,确定所述待识别充电座相对于所述移动机器人的位置信息。determining position information of the charging stand to be identified relative to the mobile robot according to the spatial positions of the first preset number of identification points on the charging stand to be identified, the image positions of the first preset number of identification points, and a first preset formula.
  4. 根据权利要求3所述的方法,其特征在于,所述根据所述第一预设数量个标识点在所述待识别充电座上的空间位置和所述第一预设数量个标识点的图像位置,以及第一预设公式,确定所述待识别充电座相对于所述移动机器人的位置信息的步骤,包括:The method according to claim 3, wherein the step of determining position information of the charging stand to be identified relative to the mobile robot according to the spatial positions of the first preset number of identification points on the charging stand to be identified, the image positions of the first preset number of identification points, and the first preset formula comprises:
    根据以下第一预设公式,确定所述待识别充电座相对于所述移动机器人的旋转矩阵R和平移矩阵t,得到所述待识别充电座相对于所述移动机器人的位置信息:Determining, according to the following first preset formula, the rotation matrix R and the translation matrix t of the charging stand to be identified relative to the mobile robot, and obtaining position information of the charging stand to be identified relative to the mobile robot:
    (R, t) = argmin_{R,t} Σ_{i=1}^{n} ||(u_i, v_i) − (u_i', v_i')||²,其中 [X_i', Y_i', Z_i']^T = R·[X_i, Y_i, Z_i]^T + t,Z_i'·[u_i', v_i', 1]^T = K·[X_i', Y_i', Z_i']^T
    其中,所述(X_i, Y_i, Z_i)表示所述第一预设数量个标识点中第i个标识点在所述待识别充电座上的空间位置,所述(u_i, v_i)表示所述第一预设数量个标识点中第i个标识点的图像位置,所述K表示预设的所述红外摄像模组的内参矩阵,所述argmin表示最小化投影误差函数,所述n表示所述第一预设数量,所述(X_i', Y_i', Z_i')表示(X_i, Y_i, Z_i)进行坐标变换后得到的坐标,(u_i', v_i')表示(X_i, Y_i, Z_i)在所述图像的成像平面上的投影坐标。wherein (X_i, Y_i, Z_i) denotes the spatial position of the i-th of the first preset number of identification points on the charging stand to be identified, (u_i, v_i) denotes the image position of the i-th of the first preset number of identification points, K denotes the preset intrinsic parameter matrix of the infrared camera module, argmin denotes minimizing the projection error function, n denotes the first preset number, (X_i', Y_i', Z_i') denotes the coordinates obtained by applying the coordinate transformation to (X_i, Y_i, Z_i), and (u_i', v_i') denotes the projected coordinates of (X_i, Y_i, Z_i) on the imaging plane of the image.
  5. 根据权利要求1所述的方法,其特征在于,所述根据确定的图像区域,确定所述待识别充电座的位置信息的步骤,包括:The method according to claim 1, wherein the determining the location information of the charging stand to be identified according to the determined image area comprises:
    获取所述图像区域中第二预设数量个标识点的深度信息;Obtaining depth information of a second preset number of identification points in the image area;
    根据所述深度信息和所述图像区域中第二预设数量个标识点的图像位置,按照第二预设公式,确定所述图像区域中第二预设数量个标识点的空间位置;Determining, according to the depth information and the image position of the second preset number of the identification points in the image area, a spatial position of the second preset number of identification points in the image area according to the second preset formula;
    根据所述图像区域中第二预设数量个标识点的空间位置,确定所述待识别充电座的位置信息。And determining location information of the charging stand to be identified according to a spatial position of the second preset number of identification points in the image area.
  6. 根据权利要求5所述的方法,其特征在于,所述获取所述图像区域中第二预设数量个标识点的深度信息的步骤,包括:The method according to claim 5, wherein the step of acquiring the depth information of the second predetermined number of identifier points in the image area comprises:
    当所述红外摄像模组还具有深度感知功能时,获取所述红外摄像模组采集的与所述图像对应的深度图像,从所述深度图像中获取所述图像区域中第二预设数量个标识点的深度信息;其中,所述深度图像包括各个标识点的深度信息;或者,when the infrared camera module further has a depth sensing function, acquiring a depth image, collected by the infrared camera module, corresponding to the image, and acquiring depth information of the second preset number of identification points in the image area from the depth image, wherein the depth image includes the depth information of each identification point; or,
    当所述红外摄像模组包括左摄像模组和右摄像模组时,所述图像包括所述左摄像模组采集的第一图像和所述右摄像模组采集的第二图像,所述图像区域为从所述第一图像中确定的第一图像区域或从所述第二图像中确定的第二图像区域,根据所述第一图像区域和第二图像区域中的对应标识点的不同图像位置,确定所述图像区域中第二预设数量个标识点的深度信息;或者,when the infrared camera module includes a left camera module and a right camera module, the image including a first image collected by the left camera module and a second image collected by the right camera module, and the image area being a first image area determined from the first image or a second image area determined from the second image, determining depth information of the second preset number of identification points in the image area according to the different image positions of the corresponding identification points in the first image area and the second image area; or,
    当所述移动机器人还包括惯性感测单元IMU时,获取所述红外摄像模组在采集所述图像之前采集的上一图像,获取根据预设的充电座图像特征从所述上一图像中确定的所述待识别充电座的上一图像区域,获取所述IMU采集的从第一位置到第二位置时的运动参量,根据所述运动参量和所述上一图像区域,确定所述图像区域中第二预设数量个标识点的深度信息;其中,所述第一位置为采集所述上一图像时所述移动机器人的空间位置,所述第二位置为采集所述图像时所述移动机器人的空间位置。when the mobile robot further includes an inertial measurement unit (IMU), acquiring a previous image collected by the infrared camera module before collecting the image, acquiring a previous image area of the charging stand to be identified determined from the previous image according to the preset charging stand image feature, acquiring motion parameters collected by the IMU from a first position to a second position, and determining depth information of the second preset number of identification points in the image area according to the motion parameters and the previous image area, wherein the first position is the spatial position of the mobile robot when the previous image is collected, and the second position is the spatial position of the mobile robot when the image is collected.
  7. 根据权利要求5所述的方法,其特征在于,所述根据所述深度信息和所述图像区域中第二预设数量个标识点的图像位置,按照第二预设公式,确定所述图像区域中第二预设数量个标识点的空间位置的步骤,包括:The method according to claim 5, wherein the determining the image region according to the second preset formula according to the depth information and an image position of a second predetermined number of identification points in the image region The step of the second preset number of spatial locations of the identification points includes:
    按照以下第二预设公式,确定所述图像区域中第二预设数量个标识点的空间位置:Determining a spatial location of the second predetermined number of identification points in the image area according to the following second preset formula:
    [X_i, Y_i, Z_i]^T = Z_i · K^{-1} · [u_i, v_i, 1]^T
    其中,所述(X_i, Y_i, Z_i)表示所述图像区域中第i个标识点的空间位置,所述Z_i为所述图像区域中第i个标识点的深度信息,所述K表示预设的所述红外摄像模组的内参矩阵,所述(u_i, v_i)表示所述图像区域中第i个标识点的图像位置。wherein (X_i, Y_i, Z_i) denotes the spatial position of the i-th identification point in the image area, Z_i is the depth information of the i-th identification point in the image area, K denotes the preset intrinsic parameter matrix of the infrared camera module, and (u_i, v_i) denotes the image position of the i-th identification point in the image area.
  8. 根据权利要求1所述的方法,其特征在于,所述根据确定的图像区域,确定所述待识别充电座的位置信息的步骤,包括:The method according to claim 1, wherein the determining the location information of the charging stand to be identified according to the determined image area comprises:
    根据确定的图像区域,确定所述待识别充电座相对于所述移动机器人的空间位置和空间朝向。Determining, according to the determined image area, a spatial position and a spatial orientation of the charging stand to be identified with respect to the mobile robot.
  9. 根据权利要求1~8任一项所述的方法,其特征在于,所述移动机器人还包括:能发射红外光的红外发射器。The method according to any one of claims 1 to 8, wherein the mobile robot further comprises: an infrared emitter capable of emitting infrared light.
  10. 根据权利要求9所述的方法,其特征在于,所述待识别充电座上包括反光材料,所述反光材料能够使反射光沿入射光的光路返回。The method according to claim 9, wherein the charging stand to be identified comprises a reflective material, and the reflective material is capable of returning the reflected light along the optical path of the incident light.
  11. 根据权利要求10所述的方法,其特征在于,所述待识别充电座的各个侧面上均包括所述反光材料,各个侧面上反光材料的图案特征不同。The method according to claim 10, wherein the reflective material is included on each side of the charging stand to be identified, and the pattern characteristics of the reflective material on each side are different.
  12. 一种移动机器人,其特征在于,包括:处理器、存储器以及红外摄像模组;A mobile robot, comprising: a processor, a memory, and an infrared camera module;
    所述红外摄像模组,用于采集图像,并将所述图像存储至所述存储器;The infrared camera module is configured to collect an image and store the image to the memory;
    所述处理器,用于获取所述存储器中的所述图像,根据预设的充电座图像特征,从所述图像中确定所述待识别充电座的图像区域,根据确定的图像区域,确定所述待识别充电座的位置信息;其中,所述待识别充电座能发出红外光。the processor is configured to acquire the image in the memory, determine an image area of the charging stand to be identified from the image according to a preset charging stand image feature, and determine position information of the charging stand to be identified according to the determined image area, wherein the charging stand to be identified can emit infrared light.
  13. 根据权利要求12所述的机器人,其特征在于,所述处理器具体用于:The robot according to claim 12, wherein the processor is specifically configured to:
    根据预设的充电座像素特征和/或预设的充电座尺寸特征,从所述图像中确定所述待识别充电座的图像区域。The image area of the charging stand to be identified is determined from the image according to a preset charging stand pixel feature and/or a preset charging stand size feature.
  14. 根据权利要求12所述的机器人,其特征在于,所述处理器具体用于:The robot according to claim 12, wherein the processor is specifically configured to:
    根据预先获取的所述待识别充电座上的第一预设数量个标识点,从所述图像区域中确定所述第一预设数量个标识点的图像位置;根据所述第一预设数量个标识点在所述待识别充电座上的空间位置和所述第一预设数量个标识点的图像位置,以及第一预设公式,确定所述待识别充电座相对于所述移动机器人的位置信息。determine image positions of a first preset number of identification points from the image area according to the first preset number of identification points, acquired in advance, on the charging stand to be identified; and determine position information of the charging stand to be identified relative to the mobile robot according to the spatial positions of the first preset number of identification points on the charging stand to be identified, the image positions of the first preset number of identification points, and a first preset formula.
  15. 根据权利要求14所述的机器人,其特征在于,所述处理器具体用于:The robot according to claim 14, wherein the processor is specifically configured to:
    根据以下第一预设公式,确定所述待识别充电座相对于所述移动机器人的旋转矩阵R和平移矩阵t,得到所述待识别充电座相对于所述移动机器人的位置信息:Determining, according to the following first preset formula, the rotation matrix R and the translation matrix t of the charging stand to be identified relative to the mobile robot, and obtaining position information of the charging stand to be identified relative to the mobile robot:
    (R, t) = argmin_{R,t} Σ_{i=1}^{n} ||(u_i, v_i) − (u_i', v_i')||²,其中 [X_i', Y_i', Z_i']^T = R·[X_i, Y_i, Z_i]^T + t,Z_i'·[u_i', v_i', 1]^T = K·[X_i', Y_i', Z_i']^T
    其中,所述(X_i, Y_i, Z_i)表示所述第一预设数量个标识点中第i个标识点在所述待识别充电座上的空间位置,所述(u_i, v_i)表示所述第一预设数量个标识点中第i个标识点的图像位置,所述K表示预设的所述红外摄像模组的内参矩阵,所述argmin表示最小化投影误差函数,所述n表示所述第一预设数量,所述(X_i', Y_i', Z_i')表示(X_i, Y_i, Z_i)进行坐标变换后得到的坐标,(u_i', v_i')表示(X_i, Y_i, Z_i)在所述图像的成像平面上的投影坐标。wherein (X_i, Y_i, Z_i) denotes the spatial position of the i-th of the first preset number of identification points on the charging stand to be identified, (u_i, v_i) denotes the image position of the i-th of the first preset number of identification points, K denotes the preset intrinsic parameter matrix of the infrared camera module, argmin denotes minimizing the projection error function, n denotes the first preset number, (X_i', Y_i', Z_i') denotes the coordinates obtained by applying the coordinate transformation to (X_i, Y_i, Z_i), and (u_i', v_i') denotes the projected coordinates of (X_i, Y_i, Z_i) on the imaging plane of the image.
  16. 根据权利要求12所述的机器人,其特征在于,所述处理器具体用于:The robot according to claim 12, wherein the processor is specifically configured to:
    获取所述图像区域中第二预设数量个标识点的深度信息;Obtaining depth information of a second preset number of identification points in the image area;
    根据所述深度信息和所述图像区域中第二预设数量个标识点的图像位置,按照第二预设公式,确定所述图像区域中第二预设数量个标识点的空间位置;Determining, according to the depth information and the image position of the second preset number of the identification points in the image area, a spatial position of the second preset number of identification points in the image area according to the second preset formula;
    根据所述图像区域中第二预设数量个标识点的空间位置,确定所述待识别充电座的位置信息。And determining location information of the charging stand to be identified according to a spatial position of the second preset number of identification points in the image area.
  17. 根据权利要求16所述的机器人,其特征在于,The robot according to claim 16, wherein
    当所述红外摄像模组还具有深度感知功能时,所述红外摄像模组,还用于采集与所述图像对应的深度图像,并存储至所述存储器;所述处理器,具体用于从所述存储器中获取所述深度图像,从所述深度图像中获取所述图像区域中第二预设数量个标识点的深度信息;其中,所述深度图像包括各个标识点的深度信息;或者,when the infrared camera module further has a depth sensing function, the infrared camera module is further configured to collect a depth image corresponding to the image and store it in the memory, and the processor is specifically configured to acquire the depth image from the memory and obtain depth information of the second preset number of identification points in the image area from the depth image, wherein the depth image includes the depth information of each identification point; or,
    所述处理器,具体用于当所述红外摄像模组包括左摄像模组和右摄像模组时,所述图像包括所述左摄像模组采集的第一图像和所述右摄像模组采集的第二图像,所述图像区域为从所述第一图像中确定的第一图像区域或从所述第二图像中确定的第二图像区域,根据所述第一图像区域和第二图像区域中的对应标识点的不同图像位置,确定所述图像区域中第二预设数量个标识点的深度信息;或者,the processor is specifically configured to: when the infrared camera module includes a left camera module and a right camera module, the image including a first image collected by the left camera module and a second image collected by the right camera module, and the image area being a first image area determined from the first image or a second image area determined from the second image, determine depth information of the second preset number of identification points in the image area according to the different image positions of the corresponding identification points in the first image area and the second image area; or,
    所述处理器,具体用于当所述移动机器人还包括惯性感测单元IMU时,获取所述红外摄像模组在采集所述图像之前采集的上一图像,获取根据预设的充电座图像特征从所述上一图像中确定的所述待识别充电座的上一图像区域,获取所述IMU采集的从第一位置到第二位置时的运动参量,根据所述运动参量和所述上一图像区域,确定所述图像区域中第二预设数量个标识点的深度信息;所述IMU,用于采集从所述第一位置到所述第二位置时的运动参量;其中,所述第一位置为采集所述上一图像时所述移动机器人的空间位置,所述第二位置为采集所述图像时所述移动机器人的空间位置。the processor is specifically configured to: when the mobile robot further includes an inertial measurement unit (IMU), acquire a previous image collected by the infrared camera module before collecting the image, acquire a previous image area of the charging stand to be identified determined from the previous image according to the preset charging stand image feature, acquire motion parameters collected by the IMU from a first position to a second position, and determine depth information of the second preset number of identification points in the image area according to the motion parameters and the previous image area; and the IMU is configured to collect the motion parameters from the first position to the second position, wherein the first position is the spatial position of the mobile robot when the previous image is collected, and the second position is the spatial position of the mobile robot when the image is collected.
  18. The robot according to claim 16, wherein the processor is specifically configured to:
    determine the spatial positions of the second preset number of identification points in the image area according to the following second preset formula:
    [X_i, Y_i, Z_i]^T = Z_i · K^(-1) · [u_i, v_i, 1]^T
    wherein (X_i, Y_i, Z_i) represents the spatial position of the i-th identification point in the image area, Z_i is the depth information of the i-th identification point in the image area, K represents the preset intrinsic parameter matrix of the infrared camera module, and (u_i, v_i) represents the image position of the i-th identification point in the image area.
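    Purely as an illustration, outside the claims: the second preset formula is the standard pinhole back-projection of a pixel (u_i, v_i) with known depth Z_i through the camera intrinsic matrix K. A minimal sketch with hypothetical intrinsic values:

    ```python
    # Illustrative sketch (not part of the patent): back-projecting an
    # identification point with known depth into camera coordinates,
    # (X, Y, Z)^T = Z * K^-1 * (u, v, 1)^T. The intrinsics are hypothetical.
    import numpy as np

    def back_project(u: float, v: float, depth: float, K: np.ndarray) -> np.ndarray:
        """Return the 3-D point (X, Y, Z) of a pixel in camera coordinates."""
        pixel_h = np.array([u, v, 1.0])            # homogeneous pixel coordinates
        return depth * np.linalg.inv(K) @ pixel_h  # scale the viewing ray by depth

    K = np.array([[500.0,   0.0, 320.0],
                  [  0.0, 500.0, 240.0],
                  [  0.0,   0.0,   1.0]])
    print(back_project(420.0, 340.0, 2.0, K))  # X = 0.4, Y = 0.4, Z = 2.0
    ```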
  19. The robot according to claim 12, wherein the processor is specifically configured to:
    determine, according to the determined image area, the spatial position and spatial orientation of the charging stand to be identified relative to the mobile robot.
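    Purely as an illustration, outside the claims: once the spatial positions of several identification points are known, the charging stand's position can be taken as their centroid and its orientation as the yaw of the line across its face. The point values and the choice of leftmost/rightmost points are hypothetical simplifications, not the patent's method:

    ```python
    # Illustrative sketch (not part of the patent): pose of the charging stand
    # from the spatial positions of its identification points in the robot
    # frame (X right, Y down, Z forward). Points here are hypothetical.
    import math

    def stand_pose(points):
        """points: list of (X, Y, Z) identification-point positions.
        Returns (centroid, yaw) with yaw measured in the X-Z ground plane."""
        n = len(points)
        centroid = (sum(p[0] for p in points) / n,
                    sum(p[1] for p in points) / n,
                    sum(p[2] for p in points) / n)
        # Direction along the stand's face: leftmost to rightmost point
        left = min(points, key=lambda p: p[0])
        right = max(points, key=lambda p: p[0])
        yaw = math.atan2(right[2] - left[2], right[0] - left[0])
        return centroid, yaw

    centroid, yaw = stand_pose([(-0.2, 0.0, 2.0), (0.2, 0.0, 2.2)])
    ```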
  20. The robot according to any one of claims 12 to 19, wherein the mobile robot further comprises an infrared emitter capable of emitting infrared light.
  21. The robot according to claim 20, wherein the charging stand to be identified comprises a reflective material, and the reflective material is capable of returning reflected light along the optical path of the incident light.
  22. The robot according to claim 21, wherein each side face of the charging stand to be identified comprises the reflective material, and the pattern features of the reflective material on the respective side faces are different.
  23. A computer-readable storage medium, wherein a computer program is stored in the computer-readable storage medium, and the computer program, when executed by a processor, implements the method steps of any one of claims 1-11.
  24. A computer program, wherein the computer program, when executed by a processor, implements the method steps of any one of claims 1-11.
PCT/CN2019/076764 2018-03-12 2019-03-01 Charging base identification method and mobile robot WO2019174484A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810202018.8 2018-03-12
CN201810202018.8A CN110263601A (en) 2018-03-12 2018-03-12 Charging stand identification method and mobile robot

Publications (1)

Publication Number Publication Date
WO2019174484A1

Family

ID=67907304

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/076764 WO2019174484A1 (en) 2018-03-12 2019-03-01 Charging base identification method and mobile robot

Country Status (2)

Country Link
CN (1) CN110263601A (en)
WO (1) WO2019174484A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111625005A (en) * 2020-06-10 2020-09-04 浙江欣奕华智能科技有限公司 Robot charging method, robot charging control device and storage medium

Families Citing this family (4)

Publication number Priority date Publication date Assignee Title
CN110707773B (en) * 2019-10-10 2021-05-14 南方电网科学研究院有限责任公司 Charging control method and device of inspection equipment and inspection equipment
CN111596694B (en) * 2020-07-21 2020-11-17 追创科技(苏州)有限公司 Automatic recharging method, device, storage medium and system
CN113625226A (en) * 2021-08-05 2021-11-09 美智纵横科技有限责任公司 Position determination method and device, household appliance and storage medium
CN114794992B (en) * 2022-06-07 2024-01-09 深圳甲壳虫智能有限公司 Charging seat, recharging method of robot and sweeping robot

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2013169222A (en) * 2012-02-17 2013-09-02 Sharp Corp Self-propelled electronic device
CN104516352A (en) * 2015-01-25 2015-04-15 无锡桑尼安科技有限公司 Robot system for detecting rectangular target
CN106647747A (en) * 2016-11-30 2017-05-10 北京智能管家科技有限公司 Robot charging method and device
CN106826821A (en) * 2017-01-16 2017-06-13 深圳前海勇艺达机器人有限公司 The method and system that robot auto-returned based on image vision guiding charges
CN107291084A (en) * 2017-08-08 2017-10-24 小狗电器互联网科技(北京)股份有限公司 Sweeping robot charging system, sweeping robot and cradle

Family Cites Families (14)

Publication number Priority date Publication date Assignee Title
CN100999078A (en) * 2006-01-09 2007-07-18 田角峰 Automatic charging method of robot and its automatic charging device
JP2010016985A (en) * 2008-07-03 2010-01-21 Sanyo Electric Co Ltd Method of data transmission in electric power transmission, and charging stand and battery built-in device using the method
CN101648377A (en) * 2008-08-11 2010-02-17 悠进机器人股份公司 Automatic charging self-regulation mobile robot device and automatic charging method thereof
TW201125256A (en) * 2010-01-06 2011-07-16 Kye Systems Corp Wireless charging device and its charging method.
KR102095817B1 (en) * 2013-10-31 2020-04-01 엘지전자 주식회사 Mobile robot, charging apparatus for the mobile robot, and mobile robot system
CN106204516B (en) * 2015-05-06 2020-07-03 Tcl科技集团股份有限公司 Automatic charging method and device for robot
CN104950889A (en) * 2015-06-24 2015-09-30 美的集团股份有限公司 Robot charging stand and robot provided with same
CN106712160B (en) * 2015-07-30 2019-05-21 安徽啄木鸟无人机科技有限公司 A kind of charging method of unmanned plane quick charging system
CN105375574A (en) * 2015-12-01 2016-03-02 纳恩博(北京)科技有限公司 Charging system and charging method
CN105978114A (en) * 2016-05-03 2016-09-28 青岛众海汇智能源科技有限责任公司 Wireless charging system, method and sweeping robot
CN205986255U (en) * 2016-08-29 2017-02-22 湖南万为智能机器人技术有限公司 Automatic alignment device that charges of robot
CN106885514B (en) * 2017-02-28 2019-04-30 西南科技大学 A kind of Deep Water Drilling Riser automatic butt position and posture detection method based on machine vision
CN107284270A (en) * 2017-07-05 2017-10-24 天津工业大学 A kind of wireless electric vehicle charging device Automatic Alignment System and method
CN107608358A (en) * 2017-09-30 2018-01-19 爱啃萝卜机器人技术(深圳)有限责任公司 High-efficiency and low-cost based on outline identification technology recharges system and method automatically


Also Published As

Publication number Publication date
CN110263601A (en) 2019-09-20


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19767586

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19767586

Country of ref document: EP

Kind code of ref document: A1


32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS (EPO FORM 1205A DATED 19.03.2021)
