WO2019174484A1 - Charging base identification method and mobile robot - Google Patents
- Publication number
- WO2019174484A1 (PCT/CN2019/076764)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- charging stand
- identified
- image area
- preset
- Prior art date
Classifications
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J11/00—Manipulators not otherwise provided for
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J19/00—Accessories fitted to manipulators, e.g. for monitoring, for viewing; Safety devices combined with or specially adapted for use in connection with manipulators
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J5/00—Manipulators mounted on wheels or on carriages
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J5/00—Manipulators mounted on wheels or on carriages
- B25J5/007—Manipulators mounted on wheels or on carriages mounted on wheels
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/10—Image acquisition
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/22—Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/10—Terrestrial scenes
Definitions
- the present application relates to the field of mobile robot control technologies, and in particular, to a charging stand recognition method and a mobile robot.
- a mobile robot is a machine that can perform work in accordance with a predetermined program.
- the mobile robot has a mobile function.
- Mobile robots are capable of performing many types of tasks while moving. For example, a cleaning robot can clean the road surface during movement, and a care robot can transport medical devices or patients during movement.
- automatic recharging technology can improve the intelligence of mobile robots.
- the automatic recharging process of the mobile robot specifically includes: when the battery power of the mobile robot is lower than the threshold, the mobile robot can move to the charging base according to the program, complete the charging task, and continue to perform the task after the charging is completed.
- the mobile robot is required to recognize the position of the charging stand.
- the mobile robot can scan the mark of the charging stand by a laser radar mounted on the mobile robot to identify the position of the charging stand.
- the laser radar on the mobile robot in FIG. 1a can emit a laser scanning line.
- the laser radar can receive the laser signal reflected by the charging stand and determine the location of the charging stand according to the reflected laser signal.
- the position of the charging stand can be identified in the above manner.
- since the laser scanning line sweeps in a plane, when the ground on which the mobile robot is located is tilted, the laser scanning line may not be irradiated onto the mark of the charging stand, and the position of the charging stand may not be recognized.
- for example, when the ground on which the mobile robot is located is tilted downward, the laser scanning line cannot be irradiated onto the mark of the charging stand, so the mobile robot cannot recognize the position of the charging stand. Therefore, when the position of the charging stand is recognized in the above manner, the recognition success rate is not high enough.
- the purpose of the embodiment of the present application is to provide a charging stand identification method and a mobile robot to improve the recognition success rate of the charging stand.
- an embodiment of the present application provides a charging stand identification method, which is applied to a mobile robot, and the mobile robot includes: an infrared camera module; the method includes:
- Determining the location information of the charging stand to be identified according to the determined image area.
- the step of determining an image area of the charging stand to be identified from the image according to the preset charging stand image feature includes:
- the image area of the charging stand to be identified is determined from the image according to a preset charging stand pixel feature and/or a preset charging stand size feature.
- the step of determining location information of the charging stand to be identified according to the determined image area includes:
- determining, according to the spatial positions of a first preset number of identification points on the charging stand to be identified, the image positions of the first preset number of identification points, and a first preset formula, the location information of the charging stand to be identified relative to the mobile robot, wherein:
- (X i , Y i , Z i ) represents a spatial position of the i-th identification point of the first preset number of identification points on the charging stand to be identified, (u i , v i ) representing an image position of the i-th identification point of the first preset number of identification points, the K represents a preset internal parameter matrix of the infrared camera module, and the argmin represents a minimum projection error function,
- the n represents the first preset number
- the (X i ', Y i ', Z i ') represents coordinates obtained by applying a coordinate transformation to (X i , Y i , Z i )
- (u i ' , v i ') represents the projected coordinates of (X i , Y i , Z i ) on the imaging plane of the image.
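The equation of the first preset formula does not survive in this text; only the symbol glossary above remains. A standard reprojection-error (PnP) minimization consistent with those symbols would be the following reconstruction (not the patent's verbatim formula; $s$ is a projective scale factor introduced here):

```latex
% Hedged reconstruction of the "first preset formula": minimize the
% reprojection error over the pose (R, T) of the charging stand
% relative to the mobile robot.
(R^{*}, T^{*}) = \operatorname*{argmin}_{R,\,T}
    \sum_{i=1}^{n} \bigl\| (u_i, v_i) - (u_i', v_i') \bigr\|^{2},
\qquad
\begin{aligned}
  (X_i', Y_i', Z_i')^{\mathsf{T}} &= R\,(X_i, Y_i, Z_i)^{\mathsf{T}} + T,\\
  s\,(u_i', v_i', 1)^{\mathsf{T}}  &= K\,(X_i', Y_i', Z_i')^{\mathsf{T}}.
\end{aligned}
```

Here the rotation $R$ and translation $T$ that minimize the projection error give the location information of the charging stand to be identified relative to the mobile robot.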
- the step of determining location information of the charging stand to be identified according to the determined image area includes:
- the step of acquiring the depth information of the second preset number of identifier points in the image area includes:
- when the infrared camera module further has a depth sensing function, acquiring a depth image corresponding to the image collected by the infrared camera module, and acquiring, from the depth image, the depth information of a second preset number of identification points in the image area; wherein the depth image includes the depth information of each identification point; or
- when the infrared camera module includes a left camera module and a right camera module, the image includes a first image captured by the left camera module and a second image captured by the right camera module, and the image area is a first image area determined from the first image or a second image area determined from the second image; determining the depth information of a second preset number of identification points in the image area according to the different image positions of corresponding pixel points in the first image area and the second image area; or
- when the mobile robot further includes an Inertial Measurement Unit (IMU), acquiring a previous image collected by the infrared camera module before the image, determining, according to the preset charging stand image feature, a previous image area of the charging stand to be identified from the previous image, acquiring the motion parameters collected by the IMU from a first position to a second position, and determining the depth information of a second preset number of identification points in the image area according to the motion parameters and the previous image area; wherein the first position is the spatial position of the mobile robot when the previous image was collected, and the second position is the spatial position of the mobile robot when the image was collected.
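For the binocular (left/right camera) alternative above, the depth of an identification point follows from its horizontal disparity between the two images. A minimal sketch under the standard rectified pinhole model; the focal length, baseline, and pixel coordinates below are illustrative values, not taken from the patent:

```python
def stereo_depth(u_left, u_right, focal_px, baseline_m):
    """Depth (metres) of the same identification point observed at
    horizontal pixel positions u_left and u_right in a rectified
    stereo pair: Z = f * B / d, with disparity d = u_left - u_right."""
    disparity = u_left - u_right  # pixels
    if disparity <= 0:
        raise ValueError("point must have positive disparity")
    return focal_px * baseline_m / disparity

# Example: 600 px focal length, 6 cm baseline, 30 px disparity -> 1.2 m
z = stereo_depth(350.0, 320.0, 600.0, 0.06)
```

A larger disparity means a closer point; points at infinity have zero disparity, which is why a non-positive disparity is rejected.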
- IMU Inertial Measurement Unit
- the step of determining, according to a second preset formula, the spatial positions of the second preset number of identification points in the image area based on their depth information and image positions includes:
- (X i , Y i , Z i ) represents a spatial position of an i-th identification point in the image area
- the Z i is depth information of an i-th identification point in the image area
- K represents a preset internal reference matrix of the infrared camera module
- the (u i , v i ) represents an image position of the i-th identification point in the image region.
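As with the first formula, the second preset formula itself is missing from this text. Writing the internal parameter matrix as $K$ with focal lengths $f_x, f_y$ and principal point $(c_x, c_y)$, a reconstruction consistent with the symbols above is the standard pinhole back-projection:

```latex
% Hedged reconstruction of the "second preset formula": back-project an
% image position (u_i, v_i) with known depth Z_i into robot space.
\begin{pmatrix} X_i \\ Y_i \\ Z_i \end{pmatrix}
  = Z_i \, K^{-1} \begin{pmatrix} u_i \\ v_i \\ 1 \end{pmatrix},
\qquad\text{equivalently}\qquad
X_i = \frac{Z_i\,(u_i - c_x)}{f_x},\quad
Y_i = \frac{Z_i\,(v_i - c_y)}{f_y}.
```

The depth $Z_i$ is the value obtained in the preceding step (depth sensor, binocular disparity, or IMU-assisted motion), and $(X_i, Y_i, Z_i)$ is the resulting spatial position of the i-th identification point.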
- the step of determining location information of the charging stand to be identified according to the determined image area includes:
- the mobile robot further includes: an infrared emitter capable of emitting infrared light; and the infrared light emitted by the infrared emitter can be irradiated on the charging stand to be identified.
- an infrared emitter capable of emitting infrared light
- the infrared light emitted by the infrared emitter can be irradiated on the charging stand to be identified.
- the charging stand to be identified includes a reflective material, and the reflective material is capable of returning the reflected light along the optical path of the incident light.
- the reflective material is included on each side of the charging stand to be identified, and the pattern features of the reflective materials on each side are different.
- an embodiment of the present application further provides a mobile robot, including: a processor, a memory, and an infrared camera module;
- the infrared camera module is configured to collect an image and store the image to the memory
- the processor is configured to acquire the image in the memory, determine an image area of the charging stand to be identified from the image according to a preset charging stand image feature, and determine, according to the determined image area, Determining the location information of the charging cradle; wherein the charging cradle to be identified can emit infrared light.
- the processor is specifically configured to:
- the image area of the charging stand to be identified is determined from the image according to a preset charging stand pixel feature and/or a preset charging stand size feature.
- the processor is specifically configured to:
- the processor is specifically configured to:
- (X i , Y i , Z i ) represents a spatial position of the i-th identification point of the first preset number of identification points on the charging stand to be identified, (u i , v i ) representing an image position of the i-th identification point of the first preset number of identification points, the K represents a preset internal parameter matrix of the infrared camera module, and the argmin represents a minimum projection error function,
- the n represents the first preset number
- the (X i ', Y i ', Z i ') represents coordinates obtained by applying a coordinate transformation to (X i , Y i , Z i )
- (u i ' , v i ') represents the projected coordinates of (X i , Y i , Z i ) on the imaging plane of the image.
- the processor is specifically configured to:
- when the infrared camera module further has a depth sensing function, the infrared camera module is further configured to collect a depth image corresponding to the image and store it to the memory; the processor is specifically configured to acquire the depth image from the memory and acquire, from the depth image, the depth information of the second preset number of identification points in the image area, where the depth image includes the depth information of each identification point; or,
- the processor is specifically configured to: when the infrared camera module includes a left camera module and a right camera module, the image includes a first image captured by the left camera module and a second image captured by the right camera module, and the image area is a first image area determined from the first image or a second image area determined from the second image; determine the depth information of a second preset number of identification points in the image area according to the different image positions of corresponding pixel points in the first image area and the second image area; or
- the processor is specifically configured to: when the mobile robot further includes an IMU, acquire a previous image collected by the infrared camera module before the image, determine, according to the preset charging stand image feature, a previous image area of the charging stand to be identified from the previous image, acquire the motion parameters collected by the IMU from the first position to the second position, and determine the depth information of a second preset number of identification points in the image area according to the motion parameters and the previous image area; the IMU is configured to collect the motion parameters from the first position to the second position; wherein the first position is the spatial position of the mobile robot when the previous image was collected, and the second position is the spatial position of the mobile robot when the image was collected.
- the processor is specifically configured to:
- (X i , Y i , Z i ) represents a spatial position of an i-th identification point in the image area
- the Z i is depth information of an i-th identification point in the image area
- K represents a preset internal reference matrix of the infrared camera module
- the (u i , v i ) represents an image position of the i-th identification point in the image region.
- the processor is specifically configured to:
- the mobile robot further includes: an infrared emitter capable of emitting infrared light; and the infrared light emitted by the infrared emitter can be irradiated on the charging stand to be identified.
- an infrared emitter capable of emitting infrared light
- the infrared light emitted by the infrared emitter can be irradiated on the charging stand to be identified.
- the charging stand to be identified includes a reflective material, and the reflective material is capable of returning the reflected light along the optical path of the incident light.
- the reflective material is included on each side of the charging stand to be identified, and the pattern features of the reflective materials on each side are different.
- an embodiment of the present application further provides a computer readable storage medium, where the computer readable storage medium stores a computer program, and when the computer program is executed by a processor, the charging stand identification method provided by the embodiments of the present application is implemented.
- the method includes:
- Determining the location information of the charging stand to be identified according to the determined image area.
- an embodiment of the present application further provides a computer program, which, when executed by a processor, implements the charging stand identification method provided by the embodiments of the present application.
- the method includes:
- Determining the location information of the charging stand to be identified according to the determined image area.
- the charging stand identification method and the mobile robot provided by the embodiments of the present application can acquire, through the infrared camera module, an image containing the charging stand to be identified, which can emit infrared light; determine the image area of the charging stand to be identified from the image according to the preset charging stand image feature; and determine the position information of the charging stand to be identified based on that image area. Since the image acquisition range of the infrared camera module is a cone with its apex at the infrared camera module, even when the ground on which the mobile robot is located is tilted, the charging stand to be identified can still fall within the image acquisition range of the infrared camera module, so the position information of the charging stand to be identified can still be recognized.
- the charging stand to be identified can emit infrared light
- the charging stand to be identified in the image collected by the infrared camera module can therefore have a relatively obvious image feature, which improves the accuracy of recognizing the position information of the charging stand to be identified from the image.
- implementing any of the products or methods of the present application does not necessarily require that all of the advantages described above be achieved at the same time.
- Figure 1a and Figure 1b are several reference diagrams of a mobile robot using a lidar to identify a charging stand;
- FIG. 2 is a schematic flow chart of a charging stand identification method according to an embodiment of the present application.
- FIG. 3 is a reference diagram of a marker on a charging stand to be identified according to an embodiment of the present application
- Figure 3b is a reference diagram of an image, captured by the infrared camera module, containing the marker of Figure 3a;
- Figure 3c is a schematic view of the relative position between the mobile robot and the charging stand
- Figure 3d is a schematic diagram of a recharging path determined by the mobile robot
- FIG. 4 is a schematic diagram of a mounting position of an infrared emitter and an infrared camera module according to an embodiment of the present application
- Figure 5a is a schematic structural view of a reflective sticker coated with glass beads
- Figure 5b is a schematic diagram of a principle of crystal reflection of light
- Figure 5c is a schematic diagram of a principle of reflecting light by a prism
- FIG. 6 is a schematic flow chart of step S203 in FIG. 2;
- FIG. 7 is a schematic diagram of locations of respective identification points on a charging stand to be identified according to an embodiment of the present application.
- FIG. 8 is another schematic flowchart of step S203 in FIG. 2;
- Figure 9a and Figure 9b are schematic diagrams of the imaging principle and depth calculation of the binocular camera, respectively;
- FIG. 9c is a schematic diagram of a principle for calculating depth information in a monocular camera + IMU embodiment
- FIG. 10a is a schematic diagram of a relative position of an infrared camera module and a charging stand to be identified;
- FIG. 10b is a schematic cross-sectional view of a charging stand to be identified according to an embodiment of the present application.
- FIG. 11 is a schematic structural diagram of a mobile robot according to an embodiment of the present application.
- the embodiment of the present application provides a charging stand identification method and a mobile robot.
- the charging stand identification method provided by the embodiment of the present application will be described in detail below through specific embodiments.
- FIG. 2 is a schematic flow chart of a charging stand identification method according to an embodiment of the present application.
- the method embodiment is applied to a mobile robot comprising: an infrared camera module.
- the infrared camera module can be mounted at the front of the mobile robot or near the front.
- the infrared camera module can be an infrared camera or the like.
- the infrared camera module is a camera module based on near-infrared light imaging. Generally, light having a wavelength of 0.76 ⁇ m to 1.5 ⁇ m is referred to as near-infrared light.
- an optical sensor in an ordinary camera can sense light in a near-infrared light region and a visible light region, and thus the infrared camera module can be obtained by adding a filter that blocks visible light to an ordinary camera.
- the charging stand identification method provided in this embodiment includes the following steps S201 to S203.
- Step S201 Acquire an image acquired by the infrared camera module.
- the infrared camera module can collect images according to a preset period, and the mobile robot can acquire images collected by the infrared camera module according to a preset period.
- the mobile robot can acquire an image of the moving direction of the mobile robot collected by the infrared camera module.
- the image captured by the infrared camera module can be understood as an image of the environment surrounding the mobile robot. Since the mobile robot is movable, it may be farther from or closer to the charging stand to be identified, and the charging stand to be identified may or may not be within the image capturing range of the infrared camera module. Therefore, the image may or may not include the charging stand to be identified; when the image contains the charging stand to be identified, the charging stand to be identified may be located at any position in the image.
- the infrared camera module can be a monocular camera or a binocular camera.
- Step S202 Determine an image area of the charging stand to be identified from the image according to the preset charging stand image feature.
- the charging stand to be identified can emit infrared light.
- when the charging stand to be recognized emits infrared light, the image area of the charging stand to be identified appears as a highlighted area in the image, which makes the features of the charging stand to be identified more obvious and easier to recognize.
- the area where the infrared light is to be emitted on the charging stand may be the entire area of the charging stand to be identified, or may be the mark area on the charging stand to be identified.
- by setting the marker to a specific shape or pattern, the charging stand can be made recognizable.
- the marker may be, for example, four regularly shaped rectangular blocks arranged in a predetermined order.
- the infrared light emitted by the charging stand to be identified is: infrared light emitted by the charging stand itself to be identified.
- the inside of the charging stand to be identified may have an infrared light emitter, so that the charging stand to be identified emits infrared light outward, so that the charging stand to be identified has obvious highlight features in the image.
- alternatively, the infrared light emitted by the charging stand to be identified is: infrared light emitted by an external device and reflected by the charging stand.
- the charging stand to be identified can reflect the infrared light emitted by the external infrared light emitter, so that the charging stand to be identified can emit infrared light.
- the charging stand can also present a highlight feature in the image by being provided with certain special materials.
- the image area of the charging stand to be identified in the image is a highlighted area.
- the shape of the marker on the charging stand may be a preset shape.
- FIG. 3a is a reference diagram of a marker on a charging stand to be identified according to an embodiment of the present application, wherein the black to-be-identified charging stand has four white rectangular markers, and the markers can emit infrared light.
- FIG. 3b is a reference diagram of the marker included in FIG. 3a collected by the infrared camera module, wherein four rectangular highlight areas are visible as markers on the charging stand to be identified.
- the area of the charging stand in the image is a highlighted area of the preset shape.
- the preset charging stand image feature may be the preset shape.
- the area of the outer frame of each marker can be used as the image area of the charging stand to be identified.
- the determined image area includes image areas of four rectangular markers.
- the step S202 may specifically include: determining, according to the preset charging stand image feature, whether a charging stand to be identified is present in the image; if present, determining the image area of the charging stand to be identified from the image; if not, the image can be left unprocessed or discarded.
- the charging stand image feature may include a charging stand pixel feature.
- the charging stand pixel feature may include: the pixel value is greater than a preset pixel threshold.
- the charging stand image feature may also include a charging stand size feature.
- the charging stand size feature may include at least one of an aspect ratio feature, a length range feature, and a width range feature of the image area.
- the preset pixel threshold may be determined in advance according to the pixel value of the highlight area portion of the sample charging stand in the image, for example, the preset pixel threshold may be 200 or other values.
- the above pixel feature is a feature determined based on the size of the pixel value.
- the cradle pixel feature is a feature determined based on the size of the pixel value of the cradle in the image.
- the image area of the charging stand to be identified is determined from the image according to the preset charging stand pixel feature and/or the preset charging stand size feature. Specifically, the image may be scanned to detect an area having a charging stand pixel feature and/or a charging stand size feature, and the area is used as an image area of the charging stand to be identified.
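The scan-and-detect step above can be sketched as a connected-component search over bright pixels, filtered by a bounding-box aspect ratio. This is a minimal illustration; the threshold of 200 matches the example value in the text, while the aspect-ratio range is a hypothetical size feature:

```python
from collections import deque

PIXEL_THRESHOLD = 200          # preset pixel threshold (example value from the text)
ASPECT_RATIO_RANGE = (1.5, 6)  # hypothetical width/height range for a rectangular marker

def find_marker_regions(image):
    """Scan a grayscale image (list of rows of pixel values) for connected
    bright regions whose bounding box matches the preset size feature.
    Returns bounding boxes as (x, y, width, height)."""
    h, w = len(image), len(image[0])
    seen = [[False] * w for _ in range(h)]
    regions = []
    for y in range(h):
        for x in range(w):
            if image[y][x] > PIXEL_THRESHOLD and not seen[y][x]:
                # BFS flood fill over one bright connected component
                queue, comp = deque([(y, x)]), []
                seen[y][x] = True
                while queue:
                    cy, cx = queue.popleft()
                    comp.append((cy, cx))
                    for ny, nx in ((cy + 1, cx), (cy - 1, cx),
                                   (cy, cx + 1), (cy, cx - 1)):
                        if 0 <= ny < h and 0 <= nx < w and not seen[ny][nx] \
                                and image[ny][nx] > PIXEL_THRESHOLD:
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                ys = [p[0] for p in comp]
                xs = [p[1] for p in comp]
                bw, bh = max(xs) - min(xs) + 1, max(ys) - min(ys) + 1
                # keep only regions whose shape matches the size feature
                if ASPECT_RATIO_RANGE[0] <= bw / bh <= ASPECT_RATIO_RANGE[1]:
                    regions.append((min(xs), min(ys), bw, bh))
    return regions
```

With the four-marker arrangement of Figure 3a, each white rectangle would produce one such region, giving the image area of the charging stand to be identified.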
- Step S203 Determine location information of the charging stand to be identified according to the determined image area.
- the location information of the charging stand to be identified may include: a spatial location and a spatial orientation of the charging stand to be identified.
- Spatial position ie space coordinates.
- the spatial orientation can be represented by the normal vector of the plane in which the charging dock is to be identified.
- the spatial position and spatial orientation of the charging stand to be identified may be the spatial position and spatial orientation of the charging stand to be identified relative to the mobile robot, that is, the spatial position and spatial orientation of the charging stand in the mobile robot coordinate system to be identified.
- the mobile robot coordinate system can be understood as the coordinate system whose coordinate origin is located on the mobile robot. For example, the coordinate origin of the mobile robot coordinate system is the center position of the mobile robot.
- the step S203 may include: determining the spatial position of each pixel in the image area relative to the mobile robot, and using the average of the spatial positions of the pixels in the image area as the spatial position of the charging stand to be identified relative to the mobile robot; determining a target plane according to the spatial positions of the pixels in the image area, and using the normal vector of the target plane as the spatial orientation of the charging stand to be identified relative to the mobile robot.
- the spatial position of each pixel in the image region relative to the mobile robot is the position of each pixel in the image region in the mobile robot coordinate system.
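The averaging and plane-normal steps can be sketched as follows. This is a minimal illustration that derives the normal from the first three points, which are assumed non-collinear; a least-squares plane fit over all pixels would be more robust and is closer to what a practical implementation would do:

```python
import math

def charging_stand_pose(points):
    """points: spatial positions (X, Y, Z) of pixels in the image area,
    expressed in the mobile-robot coordinate system.
    Returns (centroid, unit normal of the target plane)."""
    n = len(points)
    # spatial position of the charging stand: average of pixel positions
    centroid = tuple(sum(p[k] for p in points) / n for k in range(3))
    # spatial orientation: normal of the plane through the first three
    # points, via the cross product of two in-plane direction vectors
    v1 = tuple(points[1][k] - points[0][k] for k in range(3))
    v2 = tuple(points[2][k] - points[0][k] for k in range(3))
    nx = v1[1] * v2[2] - v1[2] * v2[1]
    ny = v1[2] * v2[0] - v1[0] * v2[2]
    nz = v1[0] * v2[1] - v1[1] * v2[0]
    norm = math.sqrt(nx * nx + ny * ny + nz * nz)
    return centroid, (nx / norm, ny / norm, nz / norm)
```

The sign of the returned normal depends on the point ordering; a real implementation would orient it toward the robot.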
- the mobile robot may further determine a recharging path from the mobile robot to the charging stand to be identified according to the determined spatial position and spatial orientation, and control the driving component of the mobile robot to drive it along the recharging path to the charging stand to be identified.
- the above recharging path enables the mobile robot to face the charging stand to be identified when it reaches it, realizing automatic charging of the mobile robot.
- Figure 3c shows the spatial position and spatial orientation of the charging stand relative to the mobile robot, wherein the front of the mobile robot is not facing the charging stand.
- a recharging path from the mobile robot to the charging cradle can be planned.
- Figure 3d shows a schematic diagram of a recharging path from the mobile robot to the charging cradle.
- the charging component on the mobile robot is located at the front of the mobile robot.
- the mobile robot facing the charging stand means that the line connecting the front of the mobile robot and the charging stand to be identified is perpendicular to the target plane.
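The perpendicularity condition above can be checked numerically: the line from the robot's front to the stand is perpendicular to the target plane exactly when it is parallel to the plane's normal. A minimal sketch with hypothetical helper names, not from the patent:

```python
def is_facing(robot_front, stand_center, plane_normal, tol=1e-6):
    """True when the line from the robot's front to the charging stand
    is perpendicular to the target plane, i.e. parallel to its normal."""
    d = tuple(s - r for r, s in zip(robot_front, stand_center))
    # d is parallel to the normal iff their cross product vanishes
    cx = d[1] * plane_normal[2] - d[2] * plane_normal[1]
    cy = d[2] * plane_normal[0] - d[0] * plane_normal[2]
    cz = d[0] * plane_normal[1] - d[1] * plane_normal[0]
    return cx * cx + cy * cy + cz * cz < tol

# Robot directly in front of a stand whose plane faces along +X
assert is_facing((0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (1.0, 0.0, 0.0))
```

A path planner would drive this condition toward true as the robot approaches the stand along the recharging path.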
- in this embodiment, an image containing the charging stand to be identified, which can emit infrared light, can be acquired by the infrared camera module; the image area of the charging stand to be identified is determined from the image according to the preset charging stand image feature, and the position information of the charging stand to be identified is determined according to the image area. Since the image acquisition range of the infrared camera module is a cone with its apex at the infrared camera module, even when the ground on which the mobile robot is located is tilted, the charging stand to be identified can still fall within the image acquisition range of the infrared camera module, so the position information of the charging stand to be identified can still be recognized.
- the charging stand to be identified can emit infrared light, so it can have a relatively obvious image feature in the image collected by the infrared camera module, which improves the accuracy of recognizing the position information of the charging stand to be identified from the image.
- the mobile robot may further include: an infrared emitter capable of emitting infrared light.
- the infrared light emitted by the infrared emitter can be illuminated on the charging stand to be identified.
- the infrared emitter can be mounted close to the infrared camera module.
- the marker on the charging stand to be identified may be a mirror material.
- the charging stand to be identified may reflect the infrared light emitted by the infrared emitter into the lens of the infrared camera module, the reflection being specular. In this way, the infrared camera module can collect an image containing the highlighted charging stand to be identified, and the charging stand to be identified is more easily recognized.
- the infrared emitter acts as a fill light to provide illumination to the surrounding environment of the mobile robot.
- when the charging stand to be recognized is illuminated by infrared light, the charging stand to be identified can appear clearer in the image captured by the infrared camera module.
- in order that the charging stand to be identified can display a highlight feature in the image no matter where the mobile robot is located relative to it, the charging stand may use a retroreflective material: for example, the charging stand to be identified is covered with a reflective material, or the marker of the charging stand to be identified uses a reflective material. The reflective material can return the reflected light along the optical path of the incident light.
- FIG. 4 is a schematic diagram of a mounting position of an infrared emitter and an infrared camera module in a mobile robot according to an embodiment of the present application.
- an object in the field of view (FOV) of the infrared emitter reflects a certain amount of infrared light into the lens of the infrared camera module, thereby generating an infrared image in the lens.
- when the charging stand to be identified appears in the overlapping area of the infrared emitter FOV and the infrared camera lens FOV, since the reflective material can almost completely reflect the infrared light, a bright area appears in the image; that is, the charging stand to be identified can be highlighted in the image, and it has higher recognizability in the image than the objects around it.
- the reflective material can be a reflective sticker, and the reflective sticker is a high reflectivity material.
- the surface of the reflective material is coated with a high refractive index layer.
- the high refractive index layer may comprise high refractive index glass beads, crystals or prisms. These high refractive index layers are capable of receiving light from different directions and reflecting it back in the direction of incidence.
- Figure 5a is a schematic view of a structure of a reflective sticker coated with high refractive index glass beads.
- the reflective sticker comprises a surface resin layer, high refractive index glass beads, an adhesive layer, a reflective layer and a sticker layer; the incident light passes through the surface resin layer and is projected onto the high refractive index glass beads, and after being reflected by the reflective layer, it is reflected back out through the surface resin layer.
- the glass beads in the reflective sticker of Figure 5a can also be replaced by crystals or prisms.
- Figure 5b is a schematic diagram of the crystal reflecting light
- Figure 5c is a schematic diagram of the prism reflecting light. It can be seen that after the incident light is reflected and refracted upon being projected onto the crystal or the prism, the light is emitted in the opposite direction of the incident light.
- a reflective sticker having a special shape is pasted on the charging stand to be identified, and the special shape can be used to identify the charging stand.
- the charging stand to be recognized can reflect the infrared light emitted by the infrared emitter of the mobile robot back to the infrared camera module, so that the charging stand to be recognized can present a high-brightness pattern in the image.
- in step S203, the step of determining the location information of the charging stand to be identified according to the determined image area may be performed according to the corresponding flowchart, and may include the following steps S203a and S203b.
- Step S203a Determine an image position of the first preset number of identification points from the image area according to the first preset number of identification points on the charging stand to be identified that are acquired in advance.
- the first preset quantity may be a preset quantity value.
- the first predetermined number may be a value greater than three.
- the first preset number of identification points are preset points, and the relative positions between the points are fixed.
- the first preset number of identification points on the charging stand to be identified may be understood as follows: the spatial positions of the first preset number of identification points on the charging stand to be identified are acquired in advance, that is, the spatial positions of these identification points on the charging stand to be identified are known.
- the spatial position of the first preset number of identification points may be a space coordinate in a coordinate system established by using one of the first preset number of identification points as a coordinate origin, or may be any space The space coordinate in the coordinate system established when the fixed point is the origin of the coordinate.
- determining the image position of the first preset number of identification points may be understood as determining the coordinates of the first preset number of identification points in the image. Specifically, for each identification point, the pixel points corresponding to that identification point may be determined from the image area, and the image position of the identification point is determined according to the coordinates of those pixel points; there may be one or more pixel points corresponding to an identification point.
- when a single pixel point corresponds to the identification point, the image coordinate of that pixel point may be directly taken as the image position of the identification point; when multiple pixel points correspond to the identification point, the average of their image coordinates may be taken as the image position of the identification point.
- the image coordinates are coordinates in the image coordinate system.
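- As described above, an identification point's image position can be taken as the average of its marker pixels' image coordinates; a minimal sketch (the function name and pixel values are illustrative, not from the patent):

```python
def marker_image_position(pixels):
    """Image position of one identification point: with a single corresponding
    pixel, use that pixel's coordinates; with several corresponding pixels,
    use the average of their image coordinates (as described above)."""
    n = len(pixels)
    u = sum(p[0] for p in pixels) / n
    v = sum(p[1] for p in pixels) / n
    return (u, v)

# A marker covering a 2x2 patch of bright pixels -> position (10.5, 20.5)
print(marker_image_position([(10, 20), (11, 20), (10, 21), (11, 21)]))
```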
- the four rectangular frames are the markers on the charging stand to be identified, and the center points of the four rectangular frames are used as preset marking points, and the first preset number is 4.
- the space coordinates of the marker points, taken in the counterclockwise direction, are (0, 0, 0), (L_2, 0, 0), (L_2, -L_1, 0) and (0, -L_1, 0).
- the central pixel points of the rectangular areas in the image area are the first preset number of identification points in the image area, and the image coordinates of the identification points are (u_1, v_1), (u_2, v_2), (u_3, v_3) and (u_4, v_4), respectively.
- the infrared camera module can be a monocular camera.
- Step S203b: determine the position information of the charging stand to be identified relative to the mobile robot according to the spatial positions of the first preset number of identification points on the charging stand to be identified, the image positions of the first preset number of identification points, and the first preset formula.
- the step S203b may specifically include: determining, according to the following first preset formula (1), a rotation matrix R and a translation matrix t of the charging stand to be identified relative to the mobile robot; the rotation matrix R and the translation matrix t are the position information of the charging stand to be identified relative to the mobile robot:

(R, t) = argmin_(R,t) Σ_(i=1..n) ‖(u_i, v_i) - (u_i', v_i')‖², where (X_i', Y_i', Z_i')^T = R·(X_i, Y_i, Z_i)^T + t and (u_i', v_i', 1)^T = (1/Z_i')·K·(X_i', Y_i', Z_i')^T    (1)
- (X_i, Y_i, Z_i) represents the spatial position of the i-th identification point of the first preset number of identification points on the charging stand to be identified, and (u_i, v_i) represents the image position of the i-th identification point of the first preset number of identification points.
- K represents the internal parameter matrix of the preset infrared camera module
- argmin represents the minimum projection error function
- n is the first preset number
- (X_i', Y_i', Z_i') represents the coordinates obtained by coordinate transformation of (X_i, Y_i, Z_i)
- (u_i', v_i') represents the image coordinates of the projection of (X_i, Y_i, Z_i) on the imaging plane of the image
- K can be

K = [ f_u   0    c_u
      0     f_v  c_v
      0     0    1   ]

where f_u and f_v are the equivalent focal lengths of the lens of the infrared camera module in the u-axis and v-axis directions of the image, respectively, and c_u and c_v are the image coordinates of the projection center of the optical axis of the lens in the image.
- the spatial coordinates of the identification points are (0, 0, 0), (L_2, 0, 0), (L_2, -L_1, 0) and (0, -L_1, 0), and the image coordinates of the identification points are (u_1, v_1), (u_2, v_2), (u_3, v_3) and (u_4, v_4), respectively; according to these known quantities and the first preset formula, the rotation matrix R and the translation matrix t can be obtained.
- the spatial orientation and spatial position of the charging stand to be identified relative to the mobile robot may be determined according to the rotation matrix R and the translation matrix t as the position information of the charging stand to be identified.
- since the spatial positions of the identification points on the charging stand to be identified and the image positions of those identification points in the image area are determined in advance, the spatial position of the charging stand to be identified relative to the mobile robot can be determined.
- the position information of the charging stand to be identified is accurately determined.
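- As a sketch of the projection model underlying the first preset formula, the following projects the four marker points with a candidate (R, t) and evaluates the summed reprojection error that formula (1) minimizes; the intrinsic values, marker size and pose are invented for illustration, and this is not the patent's implementation:

```python
import numpy as np

def project(K, R, t, P):
    """Project 3-D marker points P (n x 3) into the image: first the
    coordinate transform (X', Y', Z')^T = R (X, Y, Z)^T + t, then the
    perspective projection through the internal parameter matrix K."""
    Pc = (R @ P.T).T + t
    uv = (K @ Pc.T).T
    return uv[:, :2] / uv[:, 2:3]

def reprojection_error(K, R, t, P, uv_obs):
    """Summed squared distance between observed and projected image
    positions; formula (1) picks the (R, t) minimizing this quantity."""
    return float(np.sum((project(K, R, t, P) - uv_obs) ** 2))

K = np.array([[400.0, 0, 320], [0, 400.0, 240], [0, 0, 1]])  # invented intrinsics
L1, L2 = 0.10, 0.20                         # marker rectangle size in metres
P = np.array([[0, 0, 0], [L2, 0, 0], [L2, -L1, 0], [0, -L1, 0]], dtype=float)
R_true, t_true = np.eye(3), np.array([0.0, 0.05, 1.0])  # stand about 1 m ahead
uv_obs = project(K, R_true, t_true, P)      # "observed" identification points
print(reprojection_error(K, R_true, t_true, P, uv_obs))  # 0.0 at the true pose
```

In practice the minimization over (R, t) is done by an iterative solver; the sketch only shows the cost being minimized.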
- in step S203, the step of determining the location information of the charging stand to be identified according to the determined image area may also be performed according to the corresponding flowchart, and may include the following steps S203A, S203B, and S203C.
- Step S203A Acquire depth information of a second preset number of identification points in the image area.
- the second preset quantity may be a preset quantity value, and may be a quantity value greater than 3.
- the second preset number may be the same as the first preset number or may be different.
- the second preset number of identification points in the image area may be pixels determined according to a preset rule, or may be preset pixels.
- the foregoing preset rule may be randomly selecting a pixel point, or may be selecting a pixel point at a preset position.
- the central pixel of the rectangular region corresponding to each rectangular marker in the image region may be used as the second predetermined number of identification points.
- the depth information may include at least one of a distance value, a distance error range, and the like.
- the depth information can be understood as the distance between the point on the object corresponding to each identifier point and the infrared camera module, that is, the distance between the point on the object corresponding to each identifier point and the mobile robot.
- Step S203B: determine the spatial positions of the second preset number of identification points in the image area according to the depth information and the image positions of the second preset number of identification points in the image area, and the second preset formula.
- the spatial position of the second preset number of identification points in the image area can be understood as the spatial coordinate of the point on the object corresponding to the second preset number of identification points in the image area.
- the step S203B may specifically include:
- the spatial position of the second preset number of identification points in the image area is determined according to the following second preset formula (2):

(X_i, Y_i, Z_i)^T = Z_i · K^{-1} · (u_i, v_i, 1)^T    (2)
- (X i , Y i , Z i ) represents the spatial position of the i-th identification point in the image region
- i may take a value between 1 and m
- m represents a second preset number.
- the origin of the coordinate system corresponding to (X i , Y i , Z i ) can be established on the mobile robot, that is, the origin of the coordinate system corresponding to (X i , Y i , Z i ) can be the center position of the mobile robot.
- Z i represents the depth information of the i-th identification point in the image area
- (X i , Y i ) represents the plane coordinate of the i-th identification point in the image area
- K represents the internal reference matrix of the preset infrared camera module.
- (u i , v i ) represents the image position of the i-th identification point in the image area.
- K can be

K = [ f_u   0    c_u
      0     f_v  c_v
      0     0    1   ]

where f_u and f_v are the equivalent focal lengths of the lens of the infrared camera module in the u-axis and v-axis directions of the image, respectively, and c_u and c_v are the image coordinates of the projection center of the optical axis of the lens in the image.
- the image positions of the second preset number of identification points are {(u_1, v_1), (u_2, v_2), ..., (u_m, v_m)}, and the depth information of the identification points is {Z_1, Z_2, ..., Z_m}, respectively, where m is the second preset number; according to these known quantities and the second preset formula described above, the spatial coordinates of the second preset number of identification points in the image area can be obtained.
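- Solved per coordinate, the second preset formula back-projects a pixel through the pinhole model; a minimal sketch with invented intrinsic values (f_u = f_v = 400 px, principal point (320, 240)):

```python
def back_project(u, v, Z, fu, fv, cu, cv):
    """Second preset formula solved for the spatial position: given a pixel
    (u, v), its depth Z, and the entries of K, recover
    (X, Y, Z)^T = Z * K^-1 * (u, v, 1)^T."""
    X = Z * (u - cu) / fu
    Y = Z * (v - cv) / fv
    return (X, Y, Z)

# A pixel 100 px right of the principal point at 2 m depth:
print(back_project(420, 240, 2.0, 400, 400, 320, 240))  # (0.5, 0.0, 2.0)
```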
- Step S203C Determine location information of the charging stand to be identified according to the spatial position of the second preset number of identification points in the image area.
- the step S203C may specifically include: using the average value of the spatial positions of the second preset number of identification points in the image area as the spatial position of the charging stand to be identified relative to the mobile robot; determining the plane in which the spatial positions corresponding to the second preset number of identification points in the image area lie, and using the normal vector of that plane as the spatial orientation of the charging stand to be identified relative to the mobile robot.
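- Step S203C can be sketched as follows: position as the average of the identification points' spatial positions, orientation as the normal of the plane through them. The SVD plane fit and all numeric values are assumptions for illustration, not mandated by the text:

```python
import numpy as np

def stand_pose_from_points(pts):
    """Spatial position = average of the identification points' spatial
    positions; spatial orientation = normal vector of the plane through
    those points, here fitted with an SVD (an assumed method)."""
    pts = np.asarray(pts, dtype=float)
    centre = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centre)
    normal = vt[-1]          # direction of least variance = plane normal
    return centre, normal

# Four coplanar identification points on the plane Z = 1 (illustrative):
pts = [[0.0, 0.0, 1.0], [0.2, 0.0, 1.0], [0.2, -0.1, 1.0], [0.0, -0.1, 1.0]]
centre, normal = stand_pose_from_points(pts)
print(centre.tolist())       # [0.1, -0.05, 1.0]
```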
- the present embodiment can determine the spatial position of the identification point according to the depth information of the pixel in the image area, and determine the position information of the charging stand to be identified according to the spatial position, thereby improving the accuracy of determining the position information of the charging stand to be identified.
- the depth information in step S203A of the foregoing embodiment may be obtained by using various embodiments.
- the step of acquiring the depth information of the second preset number of identification points in the image area may include: acquiring a depth image corresponding to the image, and obtaining the depth information of the second preset number of identification points from the depth image, where the depth image includes the depth information of each identification point.
- the infrared camera module may include a depth sensor and an infrared emitter.
- the depth sensor can be a Time Of Flight (TOF) sensor.
- the TOF sensor can calculate the depth information between the object and the lens by using the time difference between the infrared light emitted by the infrared emitter and the infrared light received by the infrared camera module lens to generate a depth image.
- the TOF sensor can also modulate the infrared light to a certain frequency to obtain modulated light, emit the modulated light, and calculate the depth value between the object and the lens by calculating the phase difference between the received modulated light and the emitted modulated light.
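- The two ToF variants described above reduce to simple formulas; a sketch in which the travel time, phase shift and modulation frequency are illustrative values, not from the patent:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def tof_depth_from_time(dt):
    """Direct time-of-flight: dt is the out-and-back travel time of the
    emitted infrared light, so the object's depth is half the path length."""
    return C * dt / 2

def tof_depth_from_phase(phase, f_mod):
    """Phase-based ToF: a phase shift `phase` (radians) of light modulated at
    frequency f_mod corresponds to this depth within one ambiguity interval."""
    return C * phase / (4 * math.pi * f_mod)

print(round(tof_depth_from_time(4.0 / C), 6))          # 2.0
print(round(tof_depth_from_phase(math.pi, 1e7), 3))    # 7.495
```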
- the infrared camera module also obtains a depth image corresponding to the image when the image is acquired; the depth image includes the depth information of each pixel in the image, and since the identification points are determined from the pixels of the image, the depth image thus includes the depth information of each identification point.
- the infrared camera module is an infrared binocular camera, that is, when the infrared camera module includes a left camera module and a right camera module, the image includes a first image captured by the left camera module, and a second image acquired by the right camera module, wherein the image area is a first image area determined from the first image or a second image area determined from the second image.
- the first image area is an image area of the charging stand to be identified determined from the first image according to the preset charging stand image feature
- the second image area is the image area of the charging stand to be identified determined from the second image according to the preset charging stand image feature.
- the corresponding identification points in the first image area and the second image area satisfy: the first identification point in the first image area and the second identification point in the second image area correspond to the same point in space; that is, they are the imaging points of the same spatial point in the first image area and the second image area, respectively.
- the first identification point is any identification point in the first image area
- the second identification point is any identification point in the second image area.
- the first identification point and the second identification point are taken as an example for illustration, and are not limiting.
- Figure 9a is an imaging schematic of a binocular camera.
- the infrared binocular camera includes a left-eye camera and a right-eye camera, and the line between the center points of the two cameras is a baseline.
- point P is imaged as the left-eye pixel point in the left-eye camera and as the right-eye pixel point in the right-eye camera.
- the left-eye pixel point and the right-eye pixel point are corresponding identification points.
- the depth information of the second preset number of identification points in the image area may be determined according to the following third preset formula (3):

Z = f · b / (u_L - u_R)    (3)

- Z is the depth information of an identification point in the image area.
- the identification point S 1 is taken as an example.
- f is the focal length of the left camera module lens or the focal length of the right camera module lens.
- the left camera module and the right camera module have the same focal length.
- b is the baseline length between the lens center of the left camera module and the lens center of the right camera module,
- u_L and u_R are the image coordinates of the identification point S_1 and its corresponding identification point S_2, respectively.
- when the image area is the first image area, the identification point S_1 is a pixel point in the first image area and the corresponding identification point S_2 is a pixel point in the second image area; when the image area is the second image area, the identification point S_1 is a pixel point in the second image area and the corresponding identification point S_2 is a pixel point in the first image area.
- the third preset formula may be used to determine the depth information of the identification point.
- the center of the lens is the center of the aperture.
- Figure 9b is a schematic diagram of the depth calculation of the binocular camera.
- O_L is the lens center of the left camera module
- O_R is the lens center of the right camera module
- b is the baseline length between the lens center of the left camera module and the lens center of the right camera module.
- P L is the imaging pixel point (ie, the identification point) of the P point on the imaging plane of the left camera module
- P R is the imaging pixel point (ie, the corresponding identification point) of the P point on the imaging plane of the right camera module.
- Z is the distance between the P point and the baseline, that is, the depth information of the identification point.
- u L is the coordinate of P L and u R is the coordinate of P R .
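- The binocular depth relation above, Z = f · b / (u_L - u_R), can be written out directly; the focal length, baseline and pixel coordinates below are illustrative values:

```python
def stereo_depth(f, b, u_L, u_R):
    """Third preset formula for a rectified binocular pair.
    f: shared focal length in pixels, b: baseline length in metres,
    u_L, u_R: image coordinates of the corresponding identification points."""
    disparity = u_L - u_R
    if disparity <= 0:
        raise ValueError("left coordinate must exceed right coordinate")
    return f * b / disparity

# f = 400 px, baseline b = 0.06 m, disparity 12 px  ->  depth 2.0 m
print(stereo_depth(400, 0.06, 352, 340))   # 2.0
```

Note the depth grows as the disparity u_L - u_R shrinks, which is why distant points are harder to range accurately with a short baseline.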
- the step of acquiring the depth information of the second preset number of identification points in the image area may include the following Steps 1 to 4.
- Step 1 Obtain the previous image acquired by the infrared camera module before acquiring the above image.
- the infrared camera module can collect images according to a preset period.
- the previous image is the image captured by the infrared camera module at the previous acquisition time, that is, the acquisition time adjacent to and before the current acquisition time; in other words, the previous image is the image acquired by the infrared camera module at its last acquisition before the current one.
- an acquisition time is a time at which the infrared camera module collects an image.
- suppose the image acquired at the previous acquisition time is image A, that is, the previous image is image A,
- and the image acquired at the current acquisition time is image B.
- the position of the mobile robot when the infrared camera module acquires image A is the first position,
- and the position of the mobile robot when the infrared camera module acquires image B is the second position.
- since the infrared camera module collects images during the movement of the mobile robot, the first position and the second position are different.
- the infrared camera module can be a monocular camera.
- Step 2 Acquire a previous image area of the charging stand to be identified determined from the previous image according to the preset charging stand image feature.
- for this step, refer to step S202 in the embodiment shown in FIG. 2; the specific content is not described in detail herein.
- Step 3 Acquire the motion parameters collected by the IMU from the first position to the second position.
- the above motion parameters may include the amount of rotation and the amount of translation.
- the IMU can capture the amount of rotation and the amount of translation between any two positions during the movement of the mobile robot.
- the amount of rotation can be understood as a rotation matrix.
- the amount of translation can be understood as a translation matrix.
- the first position is the spatial position of the mobile robot when the previous image is acquired, and the second position is the spatial position of the mobile robot when the image is acquired.
- Step 4 Determine depth information of the second preset number of identification points in the image area according to the motion parameter and the previous image area.
- the step 4 may specifically include: for a target identification point in the image area, determining the depth information of the target identification point according to the following fourth preset formula (4), based on the image position of the corresponding identification point of the target identification point in the previous image area and the motion parameters; wherein the target identification point is any one of the second preset number of identification points in the image area, and the corresponding identification point of the target identification point is an identification point in the previous image area.
s_B · x_B = s_A · R · x_A + t    (4)

- p'_A = (u_A, v_A, 1)^T and p'_B = (u_B, v_B, 1)^T, where T is the matrix transpose symbol.
- (u_B, v_B) is the image position of the target identification point
- (u_A, v_A) is the image position of the corresponding identification point of the target identification point in the previous image area
- p'_A is the homogeneous coordinate of (u_A, v_A), and p'_B is the homogeneous coordinate of (u_B, v_B).
- R and t are respectively the amount of rotation and the amount of translation in the above-mentioned motion parameters, that is, R is a rotation matrix in the above-described motion parameters, and t is a translation matrix in the above-described motion parameters.
- K is the internal reference matrix of the preset infrared camera module.
- s A is the depth information of the corresponding identification point of the target identification point
- s B is the depth information of the target identification point.
- x A represents the normalized plane coordinate of the corresponding identification point of the target identification point
- x B represents the normalized plane coordinate of the target identification point. According to the fourth preset formula, depth information of each of the second preset number of identification points in the image area may be obtained.
- K can be

K = [ f_u   0    c_u
      0     f_v  c_v
      0     0    1   ]

where f_u and f_v are the equivalent focal lengths of the lens of the infrared camera module in the u-axis and v-axis directions of the image, respectively, and c_u and c_v are the image coordinates of the projection center of the optical axis of the lens in the image.
- the target identification point and its corresponding identification point satisfy: the target identification point in the image area and the corresponding identification point in the previous image area correspond to the same point in space; that is, they are the imaging points of the same spatial point in the previous image area and the image area, respectively.
- FIG. 9c is a schematic diagram of a principle for calculating depth information in a monocular camera+IMU embodiment.
- A is the first position and B is the second position.
- the image point at which point P is detected at position A is p_A(u_A, v_A), and the image point at which point P is detected when the mobile robot moves to position B is p_B(u_B, v_B).
- O_A is the lens center of the infrared camera module at position A,
- O_B is the lens center of the infrared camera module at position B.
- the depth information of the imaged point can be acquired by triangulation.
- the amount of rotation and the amount of translation from position A to position B are R and t, respectively.
- x_A = K^{-1} · p'_A and x_B = K^{-1} · p'_B
- the above s_A and s_B are the unknowns to be solved.
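- The fourth preset formula is three scalar equations in the two unknowns s_A and s_B, so a least-squares solve is one natural way to recover both depths. A sketch under that assumption; the arrangement of the linear system and all numeric values are illustrative, not from the patent:

```python
import numpy as np

def triangulate_depths(K, R, t, pA, pB):
    """Solve s_B * x_B = s_A * R @ x_A + t (the fourth preset formula) for
    the depths (s_A, s_B), where x_A = K^-1 p'_A and x_B = K^-1 p'_B."""
    Kinv = np.linalg.inv(K)
    x_A = Kinv @ np.array([pA[0], pA[1], 1.0])
    x_B = Kinv @ np.array([pB[0], pB[1], 1.0])
    A = np.column_stack([R @ x_A, -x_B])   # 3 equations, 2 unknowns
    s, *_ = np.linalg.lstsq(A, -t, rcond=None)
    return s[0], s[1]                      # s_A, s_B

K = np.array([[400.0, 0, 320], [0, 400.0, 240], [0, 0, 1]])
R = np.eye(3)                              # rotation from the IMU (identity here)
t = np.array([0.1, 0.0, 0.0])              # 10 cm sideways translation
P_A = np.array([0.0, 0.0, 2.0])            # true point in camera-A coordinates
P_B = R @ P_A + t                          # same point in camera-B coordinates
pA = (K @ P_A)[:2] / P_A[2]                # observed pixel in the previous image
pB = (K @ P_B)[:2] / P_B[2]                # observed pixel in the current image
s_A, s_B = triangulate_depths(K, R, t, pA, pB)
print(round(s_A, 6), round(s_B, 6))        # 2.0 2.0
```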
- since the marker of the charging stand uses a reflective material, when the charging stand to be identified appears in the FOV of the infrared camera module, it is easier to obtain the relative positional relationship between the mobile robot and the charging stand to be identified. However, when the infrared camera module moves outside the identifiable range of the marker during the movement of the mobile robot, the marker of the charging stand to be identified does not appear in the FOV of the infrared camera module, and it is difficult for the mobile robot to recognize the charging stand to be identified.
- a marker on a single plane has a corresponding identifiable angle, that is, an identified FOV; Figure 10a shows the identified FOV of the charging stand to be identified.
- when the infrared camera module moves to the position shown in FIG. 10a, that is, when the mobile robot moves to that position during its movement, the infrared camera module is outside the identified FOV of the marker of the charging stand, and it is difficult for the mobile robot to recognize the charging stand to be identified.
- each of the sides of the charging stand to be identified may include a reflective material, and the pattern features of the reflective material on each side are different. Specifically, when the charging stand is identified, the orientation of the charging stand to be identified may be determined according to the preset pattern features of the respective sides.
- the charging stand to be identified may be divided into a plurality of sides, and the side faces may also be referred to as a cross section, and each side is marked with a different pattern of reflective material. In this way, the FOV of the entire charging stand can be increased.
- FIG. 10b is a schematic cross-sectional view of the charging stand to be identified in the embodiment.
- the upper part of FIG. 10b is a schematic view of the sides of the charging stand, which include three sides, and the lower part of FIG. 10b is a top view of the charging stand.
- Each side of the charging stand is marked with a different pattern using a reflective material. In the case of a charging stand against the wall, the charging stand can be identified within a range of 180 degrees.
- in the top view, the trapezoid represents the charging stand.
- the longest side of the trapezoid is the side of the charging stand against the wall, and the other sides of the trapezoid represent the other sides of the charging stand, whose reflective materials are marked with different patterns.
- FIG. 11 is a schematic structural diagram of a mobile robot according to an embodiment of the present application. This embodiment corresponds to the method embodiment shown in FIG. 2.
- the mobile robot includes a processor 110, a memory 111, and an infrared camera module 112.
- the infrared camera module 112 can be mounted at the front of the mobile robot or near the front.
- the infrared camera module 112 can be an infrared camera, an infrared camera, or the like.
- the infrared camera module 112 is a camera module that is imaged according to near-infrared light. Generally, light having a wavelength of 0.76 ⁇ m to 1.5 ⁇ m is referred to as near-infrared light.
- an optical sensor in an ordinary camera can sense light in both the near-infrared and visible light regions, and thus the infrared camera module 112 can be obtained by adding a filter that blocks visible light to an ordinary camera.
- the infrared camera module 112 is configured to collect images and store the images in the memory 111;
- the processor 110 is configured to acquire an image in the memory 111, determine an image area of the charging stand to be identified from the image according to the preset charging stand image feature, and determine position information of the charging stand to be identified according to the determined image area;
- the charging stand to be identified can emit infrared light.
- the above memory 111 may include a random access memory (RAM), and may also include a non-volatile memory (NVM), such as at least one disk storage.
- the memory 111 may also be at least one storage device located away from the foregoing processor.
- the processor 110 may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), etc.; or a digital signal processor (DSP), application-specific integrated circuit (ASIC), field-programmable gate array (FPGA) or other programmable logic device.
- the processor 110 may be specifically configured to:
- the image area of the charging stand to be identified is determined from the image according to a preset charging stand pixel feature and/or a preset charging stand size feature.
- the processor 110 may be specifically configured to:
- determine the position information of the charging stand to be identified relative to the mobile robot according to the spatial positions of the first preset number of identification points on the charging stand to be identified, their image positions, and the first preset formula.
- the processor 110 may be specifically configured to:
- (X_i, Y_i, Z_i) represents the spatial position of the i-th identification point of the first preset number of identification points on the charging stand to be identified, and (u_i, v_i) represents the image position of the i-th identification point of the first preset number of identification points.
- K represents the internal parameter matrix of the preset infrared camera module
- argmin represents the minimum projection error function
- n represents the first preset number
- (X_i', Y_i', Z_i') represents the coordinates obtained by coordinate transformation of (X_i, Y_i, Z_i)
- (u_i', v_i') represents the image coordinates of the projection of (X_i, Y_i, Z_i) on the imaging plane of the image.
- the processor 110 may be specifically configured to:
- when the infrared camera module 112 further has a depth sensing function, the infrared camera module 112 is further configured to collect a depth image corresponding to the image and store the depth image in the memory 111.
- the processor 110 is specifically configured to acquire a depth image from the memory, and obtain depth information of a second preset number of identifier points in the image region from the depth image, where the depth image includes depth information of each identifier point.
- the infrared camera module 112 may include a depth sensor (not shown), which may be a TOF sensor.
- the depth sensor is used to acquire depth information of each pixel in the depth image.
- the infrared camera module 112 can also be used to collect a depth image corresponding to the image and store it in the memory 111.
- the processor 110 is specifically configured to obtain a depth image from the memory 111, and obtain depth information of a second preset number of identifier points in the image region from the depth image, where the depth image includes depth information of each identifier point.
- the processor 110 may be specifically configured to: when the infrared camera module includes a left camera module and a right camera module (not shown), the image includes a first image collected by the left camera module and a second image collected by the right camera module, and the image region is a first image region determined from the first image or a second image region determined from the second image; the depth information of the second preset number of identification points in the image region is then determined according to the different image positions of corresponding identification points in the first image region and the second image region.
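For the left/right (stereo) branch, the different image positions of a corresponding identification point yield its depth through the disparity relation for a rectified pair, Z = fx * baseline / disparity. A hedged numpy sketch with made-up focal length, baseline, and pixel coordinates:

```python
import numpy as np

def depth_from_disparity(fx, baseline, u_left, u_right):
    """Depth of a marker point from its horizontal image positions in a
    rectified left/right image pair: Z = fx * baseline / (u_left - u_right)."""
    disparity = np.asarray(u_left, float) - np.asarray(u_right, float)
    if np.any(disparity <= 0):
        raise ValueError("non-positive disparity: point at infinity or mismatched")
    return fx * baseline / disparity

# fx = 500 px, 6 cm baseline; a marker seen at u=350 (left) and u=340 (right)
z = depth_from_disparity(500.0, 0.06, 350.0, 340.0)
# disparity of 10 px gives Z = 500 * 0.06 / 10 = 0.3 m ... times 10 = 3.0 m
assert abs(z - 3.0) < 1e-9
```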
- the processor 110 may be specifically configured to: when the mobile robot further includes an IMU (not shown), acquire the previous image collected by the infrared camera module 112 before collecting the image, acquire the previous image region of the charging stand to be identified determined from the previous image according to the preset charging stand image feature, acquire the motion parameters collected by the IMU from the first position to the second position, and determine the depth information of the second preset number of identification points in the image region according to the motion parameters and the previous image region.
- the IMU is configured to collect the motion parameters when moving from the first position to the second position; the first position is the spatial position of the mobile robot when the previous image was collected, and the second position is the spatial position of the mobile robot when the image was collected.
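One standard way to realize the IMU branch, sketched here under assumed conventions rather than taken from the patent, is to triangulate each identification point from its image positions in the previous and current frames using the IMU-derived relative motion (R, t), assuming the convention X_curr = R @ X_prev + t:

```python
import numpy as np

def triangulate_depth(K, R, t, uv_prev, uv_curr):
    """Depth (Z, in the current camera frame) of one identification point,
    given its image positions in two frames and the inter-frame motion
    (R, t) supplied by the IMU. Assumed convention: X_curr = R @ X_prev + t."""
    Kinv = np.linalg.inv(K)
    r_prev = Kinv @ np.array([uv_prev[0], uv_prev[1], 1.0])  # bearing, z = 1
    r_curr = Kinv @ np.array([uv_curr[0], uv_curr[1], 1.0])
    # Solve  d_curr * r_curr - d_prev * (R @ r_prev) = t  for the two scales
    # (3 equations, 2 unknowns, hence least squares).
    A = np.stack([r_curr, -(R @ r_prev)], axis=1)
    scales, *_ = np.linalg.lstsq(A, t, rcond=None)
    return float(scales[0])  # r_curr has unit z, so the scale is the depth

# Synthetic check: the robot translates 10 cm sideways between the frames,
# observing a point placed 2 m ahead.
K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
R, t = np.eye(3), np.array([0.1, 0.0, 0.0])
z = triangulate_depth(K, R, t, (370.0, 240.0), (395.0, 240.0))
assert abs(z - 2.0) < 1e-6
```

The accuracy of this branch is bounded by the quality of the integrated IMU motion, so it is typically used when no depth sensor or second camera is available.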
- the processor 110 is specifically configured to:
- (X_i, Y_i, Z_i) represents the spatial position of the i-th identification point in the image region.
- Z_i represents the depth information of the i-th identification point in the image region.
- K represents the preset internal parameter matrix of the infrared camera module, and (u_i, v_i) represents the image position of the i-th identification point in the image region.
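The second preset formula these symbols describe is the standard pinhole back-projection: the spatial position is the depth Z_i times the inverse intrinsic matrix applied to the homogeneous image position. A minimal numpy sketch with an invented K:

```python
import numpy as np

def backproject(K, uv, Z):
    """Recover the spatial position (X, Y, Z) of an identification point
    from its image position (u, v) and depth Z:
        [X, Y, Z]^T = Z * K^{-1} [u, v, 1]^T"""
    return Z * (np.linalg.inv(K) @ np.array([uv[0], uv[1], 1.0]))

K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
P = backproject(K, (370.0, 240.0), 2.0)
# X = (370 - 320) / 500 * 2 = 0.2, Y = 0.0, Z = 2.0
assert np.allclose(P, [0.2, 0.0, 2.0])
```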
- the processor 110 is specifically configured to:
- the spatial position and spatial orientation of the charging stand to be identified relative to the mobile robot are determined.
- the mobile robot may further include: an infrared emitter (not shown) capable of emitting infrared light.
- the infrared light emitted by the infrared emitter can be illuminated on the charging stand to be identified.
- the charging stand to be identified may include a reflective material, and the reflective material can return the reflected light along the optical path of the incident light.
- each side of the charging stand to be identified may include a reflective material, and the pattern features of the reflective materials on the respective sides are different.
- the embodiment of the present application further provides a computer readable storage medium.
- the computer readable storage medium stores a computer program.
- when the computer program is executed by a processor, the charging stand identification method provided by the embodiments of the present application is implemented. The method includes:
- the position information of the charging stand to be identified is determined according to the determined image area.
- the mobile robot can acquire, via the infrared camera module, an image of the charging stand to be identified, which can emit infrared light, and determine the image area of the charging stand to be identified from the image according to the preset charging stand image feature.
- the image area then determines the position information of the charging stand to be identified. Since the image acquisition range of the infrared camera module is a conical range with its apex at the infrared camera module, even when the ground at the mobile robot's location is tilted, the charging stand to be identified can remain within the image acquisition range of the infrared camera module, so recognition of its position information can still be achieved.
- because the charging stand to be identified can emit infrared light, it has a relatively distinct image feature in the image collected by the infrared camera module, which improves the accuracy of identifying the position information of the charging stand to be identified from the image.
- the embodiment of the present application further provides a computer program which, when executed by a processor, implements the charging stand identification method provided by the embodiments of the present application.
- the method includes:
- the position information of the charging stand to be identified is determined according to the determined image area.
Claims (24)
- 1. A charging stand identification method, applied to a mobile robot, the mobile robot comprising an infrared camera module, the method comprising: acquiring an image collected by the infrared camera module; determining an image area of a charging stand to be identified from the image according to a preset charging stand image feature, wherein the charging stand to be identified can emit infrared light; and determining position information of the charging stand to be identified according to the determined image area.
- 2. The method according to claim 1, wherein the step of determining the image area of the charging stand to be identified from the image according to the preset charging stand image feature comprises: determining the image area of the charging stand to be identified from the image according to a preset charging stand pixel feature and/or a preset charging stand size feature.
- 3. The method according to claim 1, wherein the step of determining the position information of the charging stand to be identified according to the determined image area comprises: determining image positions of a first preset number of identification points from the image area according to the first preset number of identification points on the charging stand to be identified acquired in advance; and determining position information of the charging stand to be identified relative to the mobile robot according to spatial positions of the first preset number of identification points on the charging stand to be identified, the image positions of the first preset number of identification points, and a first preset formula.
- 4. The method according to claim 3, wherein the step of determining the position information of the charging stand to be identified relative to the mobile robot according to the spatial positions of the first preset number of identification points on the charging stand to be identified, the image positions of the first preset number of identification points, and the first preset formula comprises: determining a rotation matrix R and a translation matrix t of the charging stand to be identified relative to the mobile robot according to the following first preset formula, to obtain the position information of the charging stand to be identified relative to the mobile robot: wherein (X_i, Y_i, Z_i) represents the spatial position of the i-th identification point among the first preset number of identification points on the charging stand to be identified, (u_i, v_i) represents the image position of the i-th identification point among the first preset number of identification points, K represents the preset internal parameter matrix of the infrared camera module, argmin denotes minimizing the reprojection error function, n represents the first preset number, (X_i', Y_i', Z_i') represents the coordinates obtained by coordinate transformation of (X_i, Y_i, Z_i), and (u_i', v_i') represents the projection coordinates of (X_i, Y_i, Z_i) on the imaging plane of the image.
- 5. The method according to claim 1, wherein the step of determining the position information of the charging stand to be identified according to the determined image area comprises: acquiring depth information of a second preset number of identification points in the image area; determining spatial positions of the second preset number of identification points in the image area according to a second preset formula, based on the depth information and the image positions of the second preset number of identification points in the image area; and determining the position information of the charging stand to be identified according to the spatial positions of the second preset number of identification points in the image area.
- 6. The method according to claim 5, wherein the step of acquiring the depth information of the second preset number of identification points in the image area comprises: when the infrared camera module further has a depth sensing function, acquiring a depth image collected by the infrared camera module that corresponds to the image, and acquiring the depth information of the second preset number of identification points in the image area from the depth image, wherein the depth image includes depth information of each identification point; or, when the infrared camera module includes a left camera module and a right camera module, the image comprising a first image collected by the left camera module and a second image collected by the right camera module, and the image area being a first image area determined from the first image or a second image area determined from the second image, determining the depth information of the second preset number of identification points in the image area according to different image positions of corresponding identification points in the first image area and the second image area; or, when the mobile robot further includes an inertial sensing unit (IMU), acquiring a previous image collected by the infrared camera module before collecting the image, acquiring a previous image area of the charging stand to be identified determined from the previous image according to the preset charging stand image feature, acquiring motion parameters collected by the IMU from a first position to a second position, and determining the depth information of the second preset number of identification points in the image area according to the motion parameters and the previous image area; wherein the first position is the spatial position of the mobile robot when the previous image was collected, and the second position is the spatial position of the mobile robot when the image was collected.
- 7. The method according to claim 5, wherein the step of determining the spatial positions of the second preset number of identification points in the image area according to the second preset formula, based on the depth information and the image positions of the second preset number of identification points in the image area, comprises: determining the spatial positions of the second preset number of identification points in the image area according to the following second preset formula: wherein (X_i, Y_i, Z_i) represents the spatial position of the i-th identification point in the image area, Z_i is the depth information of the i-th identification point in the image area, K represents the preset internal parameter matrix of the infrared camera module, and (u_i, v_i) represents the image position of the i-th identification point in the image area.
- 8. The method according to claim 1, wherein the step of determining the position information of the charging stand to be identified according to the determined image area comprises: determining the spatial position and spatial orientation of the charging stand to be identified relative to the mobile robot according to the determined image area.
- 9. The method according to any one of claims 1 to 8, wherein the mobile robot further comprises an infrared emitter capable of emitting infrared light.
- 10. The method according to claim 9, wherein the charging stand to be identified includes a reflective material, and the reflective material is capable of returning reflected light along the optical path of the incident light.
- 11. The method according to claim 10, wherein each side surface of the charging stand to be identified includes the reflective material, and the pattern features of the reflective material on the respective side surfaces are different.
- 12. A mobile robot, comprising: a processor, a memory, and an infrared camera module; wherein the infrared camera module is configured to collect an image and store the image in the memory; and the processor is configured to acquire the image from the memory, determine an image area of a charging stand to be identified from the image according to a preset charging stand image feature, and determine position information of the charging stand to be identified according to the determined image area; wherein the charging stand to be identified can emit infrared light.
- 13. The robot according to claim 12, wherein the processor is specifically configured to: determine the image area of the charging stand to be identified from the image according to a preset charging stand pixel feature and/or a preset charging stand size feature.
- 14. The robot according to claim 12, wherein the processor is specifically configured to: determine image positions of a first preset number of identification points from the image area according to the first preset number of identification points on the charging stand to be identified acquired in advance; and determine position information of the charging stand to be identified relative to the mobile robot according to spatial positions of the first preset number of identification points on the charging stand to be identified, the image positions of the first preset number of identification points, and a first preset formula.
- 15. The robot according to claim 14, wherein the processor is specifically configured to: determine a rotation matrix R and a translation matrix t of the charging stand to be identified relative to the mobile robot according to the following first preset formula, to obtain the position information of the charging stand to be identified relative to the mobile robot: wherein (X_i, Y_i, Z_i) represents the spatial position of the i-th identification point among the first preset number of identification points on the charging stand to be identified, (u_i, v_i) represents the image position of the i-th identification point among the first preset number of identification points, K represents the preset internal parameter matrix of the infrared camera module, argmin denotes minimizing the reprojection error function, n represents the first preset number, (X_i', Y_i', Z_i') represents the coordinates obtained by coordinate transformation of (X_i, Y_i, Z_i), and (u_i', v_i') represents the projection coordinates of (X_i, Y_i, Z_i) on the imaging plane of the image.
- 16. The robot according to claim 12, wherein the processor is specifically configured to: acquire depth information of a second preset number of identification points in the image area; determine spatial positions of the second preset number of identification points in the image area according to a second preset formula, based on the depth information and the image positions of the second preset number of identification points in the image area; and determine the position information of the charging stand to be identified according to the spatial positions of the second preset number of identification points in the image area.
- 17. The robot according to claim 16, wherein: when the infrared camera module further has a depth sensing function, the infrared camera module is further configured to collect a depth image corresponding to the image and store it in the memory, and the processor is specifically configured to acquire the depth image from the memory and acquire depth information of the second preset number of identification points in the image area from the depth image, wherein the depth image includes depth information of each identification point; or, the processor is specifically configured to, when the infrared camera module includes a left camera module and a right camera module, the image comprising a first image collected by the left camera module and a second image collected by the right camera module, and the image area being a first image area determined from the first image or a second image area determined from the second image, determine the depth information of the second preset number of identification points in the image area according to different image positions of corresponding identification points in the first image area and the second image area; or, the processor is specifically configured to, when the mobile robot further includes an inertial sensing unit (IMU), acquire a previous image collected by the infrared camera module before collecting the image, acquire a previous image area of the charging stand to be identified determined from the previous image according to the preset charging stand image feature, acquire motion parameters collected by the IMU from a first position to a second position, and determine the depth information of the second preset number of identification points in the image area according to the motion parameters and the previous image area; the IMU is configured to collect the motion parameters when moving from the first position to the second position, wherein the first position is the spatial position of the mobile robot when the previous image was collected, and the second position is the spatial position of the mobile robot when the image was collected.
- 18. The robot according to claim 16, wherein the processor is specifically configured to: determine the spatial positions of the second preset number of identification points in the image area according to the following second preset formula: wherein (X_i, Y_i, Z_i) represents the spatial position of the i-th identification point in the image area, Z_i is the depth information of the i-th identification point in the image area, K represents the preset internal parameter matrix of the infrared camera module, and (u_i, v_i) represents the image position of the i-th identification point in the image area.
- 19. The robot according to claim 12, wherein the processor is specifically configured to: determine the spatial position and spatial orientation of the charging stand to be identified relative to the mobile robot according to the determined image area.
- 20. The robot according to any one of claims 12 to 19, wherein the mobile robot further comprises an infrared emitter capable of emitting infrared light.
- 21. The robot according to claim 20, wherein the charging stand to be identified includes a reflective material, and the reflective material is capable of returning the reflected light along the optical path of the incident light.
- 22. The robot according to claim 21, wherein each side surface of the charging stand to be identified includes the reflective material, and the pattern features of the reflective material on the respective side surfaces are different.
- 23. A computer readable storage medium, wherein a computer program is stored in the computer readable storage medium, and when the computer program is executed by a processor, the method steps of any one of claims 1 to 11 are implemented.
- 24. A computer program, wherein when the computer program is executed by a processor, the method steps of any one of claims 1 to 11 are implemented.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810202018.8 | 2018-03-12 | ||
CN201810202018.8A CN110263601A (en) | 2018-03-12 | 2018-03-12 | A kind of cradle recognition methods and mobile robot |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2019174484A1 true WO2019174484A1 (en) | 2019-09-19 |
Family
ID=67907304
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2019/076764 WO2019174484A1 (en) | 2018-03-12 | 2019-03-01 | Charging base identification method and mobile robot |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN110263601A (en) |
WO (1) | WO2019174484A1 (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111625005A (en) * | 2020-06-10 | 2020-09-04 | 浙江欣奕华智能科技有限公司 | Robot charging method, robot charging control device and storage medium |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110707773B (en) * | 2019-10-10 | 2021-05-14 | 南方电网科学研究院有限责任公司 | Charging control method and device of inspection equipment and inspection equipment |
CN111596694B (en) * | 2020-07-21 | 2020-11-17 | 追创科技(苏州)有限公司 | Automatic recharging method, device, storage medium and system |
CN113625226A (en) * | 2021-08-05 | 2021-11-09 | 美智纵横科技有限责任公司 | Position determination method and device, household appliance and storage medium |
CN114794992B (en) * | 2022-06-07 | 2024-01-09 | 深圳甲壳虫智能有限公司 | Charging seat, recharging method of robot and sweeping robot |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2013169222A (en) * | 2012-02-17 | 2013-09-02 | Sharp Corp | Self-propelled electronic device |
CN104516352A (en) * | 2015-01-25 | 2015-04-15 | 无锡桑尼安科技有限公司 | Robot system for detecting rectangular target |
CN106647747A (en) * | 2016-11-30 | 2017-05-10 | 北京智能管家科技有限公司 | Robot charging method and device |
CN106826821A (en) * | 2017-01-16 | 2017-06-13 | 深圳前海勇艺达机器人有限公司 | The method and system that robot auto-returned based on image vision guiding charges |
CN107291084A (en) * | 2017-08-08 | 2017-10-24 | 小狗电器互联网科技(北京)股份有限公司 | Sweeping robot charging system, sweeping robot and cradle |
Family Cites Families (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN100999078A (en) * | 2006-01-09 | 2007-07-18 | 田角峰 | Automatic charging method of robot and its automatic charging device |
JP2010016985A (en) * | 2008-07-03 | 2010-01-21 | Sanyo Electric Co Ltd | Method of data transmission in electric power transmission, and charging stand and battery built-in device using the method |
CN101648377A (en) * | 2008-08-11 | 2010-02-17 | 悠进机器人股份公司 | Automatic charging self-regulation mobile robot device and automatic charging method thereof |
TW201125256A (en) * | 2010-01-06 | 2011-07-16 | Kye Systems Corp | Wireless charging device and its charging method. |
KR102095817B1 (en) * | 2013-10-31 | 2020-04-01 | 엘지전자 주식회사 | Mobile robot, charging apparatus for the mobile robot, and mobile robot system |
CN106204516B (en) * | 2015-05-06 | 2020-07-03 | Tcl科技集团股份有限公司 | Automatic charging method and device for robot |
CN104950889A (en) * | 2015-06-24 | 2015-09-30 | 美的集团股份有限公司 | Robot charging stand and robot provided with same |
CN106712160B (en) * | 2015-07-30 | 2019-05-21 | 安徽啄木鸟无人机科技有限公司 | A kind of charging method of unmanned plane quick charging system |
CN105375574A (en) * | 2015-12-01 | 2016-03-02 | 纳恩博(北京)科技有限公司 | Charging system and charging method |
CN105978114A (en) * | 2016-05-03 | 2016-09-28 | 青岛众海汇智能源科技有限责任公司 | Wireless charging system, method and sweeping robot |
CN205986255U (en) * | 2016-08-29 | 2017-02-22 | 湖南万为智能机器人技术有限公司 | Automatic alignment device that charges of robot |
CN106885514B (en) * | 2017-02-28 | 2019-04-30 | 西南科技大学 | A kind of Deep Water Drilling Riser automatic butt position and posture detection method based on machine vision |
CN107284270A (en) * | 2017-07-05 | 2017-10-24 | 天津工业大学 | A kind of wireless electric vehicle charging device Automatic Alignment System and method |
CN107608358A (en) * | 2017-09-30 | 2018-01-19 | 爱啃萝卜机器人技术(深圳)有限责任公司 | High-efficiency and low-cost based on outline identification technology recharges system and method automatically |
- 2018-03-12: CN application CN201810202018.8A (publication CN110263601A), status: active, Pending
- 2019-03-01: WO application PCT/CN2019/076764 (publication WO2019174484A1), status: active, Application Filing
Also Published As
Publication number | Publication date |
---|---|
CN110263601A (en) | 2019-09-20 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2019174484A1 (en) | Charging base identification method and mobile robot | |
US11080876B2 (en) | Method and processing system for updating a first image generated by a first camera based on a second image generated by a second camera | |
EP3422955B1 (en) | System and method for assisted 3d scanning | |
WO2020102944A1 (en) | Point cloud processing method and device and storage medium | |
US9710919B2 (en) | Image-based surface tracking | |
Remondino et al. | State of the art in high density image matching | |
US10582188B2 (en) | System and method for adjusting a baseline of an imaging system with microlens array | |
JP3624353B2 (en) | Three-dimensional shape measuring method and apparatus | |
US10602059B2 (en) | Method for generating a panoramic image | |
US20040066500A1 (en) | Occupancy detection and measurement system and method | |
JP2015535337A (en) | Laser scanner with dynamic adjustment of angular scan speed | |
WO2018227576A1 (en) | Method and system for detecting ground shape, method for drone landing, and drone | |
JP2013101045A (en) | Recognition device and recognition method of three-dimensional position posture of article | |
JP6880822B2 (en) | Equipment, mobile equipment and methods | |
WO1997006406A1 (en) | Distance measuring apparatus and shape measuring apparatus | |
US11398085B2 (en) | Systems, methods, and media for directly recovering planar surfaces in a scene using structured light | |
WO2022078488A1 (en) | Positioning method and apparatus, self-moving device, and storage medium | |
US10055881B2 (en) | Video imaging to assess specularity | |
US20210374978A1 (en) | Capturing environmental scans using anchor objects for registration | |
CN103206926B (en) | A kind of panorama three-dimensional laser scanner | |
WO2022078442A1 (en) | Method for 3d information acquisition based on fusion of optical scanning and smart vision | |
JP2005157779A (en) | Distance-measuring apparatus | |
WO2022077238A1 (en) | Imaging display method, remote control terminal, device, system, and storage medium | |
JP2006220603A (en) | Imaging apparatus | |
WO2022078433A1 (en) | Multi-location combined 3d image acquisition system and method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 19767586 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 19767586 Country of ref document: EP Kind code of ref document: A1 |
|
32PN | Ep: public notification in the ep bulletin as address of the adressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS (EPO FORM 1205A DATED 19.03.2021) |
|