WO2022027252A1 - Marking, association and control methods, systems and storage medium for a mobile robot - Google Patents

Marking, association and control methods, systems and storage medium for a mobile robot

Info

Publication number
WO2022027252A1
Authority
WO
WIPO (PCT)
Prior art keywords
landmark
pattern
mobile robot
data
light intensity
Prior art date
Application number
PCT/CN2020/106899
Other languages
English (en)
French (fr)
Inventor
崔彧玮
Original Assignee
苏州珊口智能科技有限公司
Priority date
Filing date
Publication date
Application filed by 苏州珊口智能科技有限公司
Priority to PCT/CN2020/106899
Publication of WO2022027252A1

Classifications

    • G: PHYSICS
    • G05: CONTROLLING; REGULATING
    • G05D: SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00: Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/02: Control of position or course in two dimensions

Definitions

  • the present application relates to the technical field of mobile robots, and in particular to a method, system and storage medium for marking, associating and controlling mobile robots.
  • Mobile robots are widely used in large indoor occasions such as airports, railway stations, warehouses, hotels, etc. due to their autonomous mobility.
  • commercial cleaning robots, handling/distribution robots, welcome robots, etc. use the autonomous movement function to realize functions such as cleaning, handling, and guiding.
  • Affected by external factors such as the complexity of indoor venues and by internal factors such as the functions a mobile robot can provide, mobile robots sometimes cannot perform those functions fully and efficiently, which limits the use of such mobile robots in the corresponding markets.
  • the purpose of the present application is to provide a method for marking, associating and controlling a mobile robot, a control system and a storage medium, so as to overcome the above-mentioned shortcomings of the related art.
  • a first aspect disclosed in the present application provides a method for marking a landmark pattern in map data, including: acquiring light intensity lattice data; extracting, from the light intensity lattice data, a pattern area corresponding to a landmark pattern, wherein the landmark pattern corresponds to at least one working mode of the mobile robot; and determining, according to at least one type of position information recorded in the map data, the landmark position information of the landmark pattern in the map data; wherein the recorded at least one type of position information corresponds to the landmark data in the light intensity lattice data and/or to the positioning information of the mobile robot when the light intensity lattice data was acquired.
  • a second aspect disclosed in the present application provides a method for establishing an association between a working mode and a landmark pattern, including: acquiring a landmark pattern corresponding to landmark position information in map data, the landmark position information being marked using the marking method described in the first aspect; acquiring at least one path passing through the landmark position information of the landmark pattern and its behavior operation; and establishing and saving the association relationship between the landmark pattern and each working mode, wherein each working mode includes one of the paths and its behavior operation.
  • a third aspect disclosed in the present application provides a control method for a mobile robot, including: acquiring light intensity lattice data; extracting a pattern area corresponding to a landmark pattern from the light intensity lattice data, wherein corresponding landmark position information of the landmark pattern is marked in map data; and generating control information of the mobile robot based on the working mode corresponding to the determined landmark pattern, wherein the control information is used to control the mobile robot to perform the corresponding behavior operation, in accordance with the working mode, along the path containing the landmark position information.
  • a fourth aspect disclosed in the present application provides a control device for a mobile robot, comprising: a storage unit storing map data and at least one program; and a processing unit that calls and executes the at least one program so as to coordinate with the storage unit in performing any one of the following methods: the marking method described in the first aspect, the association method described in the second aspect, or the control method described in the third aspect.
  • a fifth aspect disclosed in the present application provides a mobile robot, including: a light sensing device for acquiring light intensity lattice data; a mobile device for performing movement operations under control; and the control device according to the fourth aspect, connected to the light sensing device and the mobile device so as to control the mobile device based on the light intensity lattice data obtained from the light sensing device.
  • a sixth aspect disclosed in the present application provides a computer-readable storage medium storing at least one program which, when invoked by a processor of the device in which the storage medium is located, executes the marking method described in the first aspect.
  • in the marking, association and control methods, control system and storage medium for a mobile robot disclosed in the present application, the landmark pattern is marked in the map data and an association relationship between the landmark pattern and a working mode is constructed; the association relationship is then used to control the mobile robot to autonomously perform the corresponding behavior operations of that working mode, so that the mobile robot can work efficiently and autonomously in a complex working environment. In other words, it can complete tasks with high quality within an effective range of the workplace, instead of performing full-coverage autonomous work across the entire workplace.
  • Figures 1A-1D show examples of using geometric shapes to form landmark patterns.
  • Figure 2 shows a simplified schematic diagram of an aquatic product area in a supermarket.
  • FIG. 3 shows a structural block diagram of the mobile robot of the present application.
  • FIG. 4 is a schematic structural diagram of a complete mobile robot equipped with driving components according to the present application.
  • FIG. 5 is a schematic diagram showing the structure of the driving components on the mobile robot according to the present application.
  • FIG. 6 is a schematic structural diagram of a charging pile provided with a landmark pattern according to the present application.
  • FIG. 7 is a schematic flowchart of a method for marking the first landmark data of the present application.
  • Figures 8A-8B show light intensity lattice data captured of a landmark pattern.
  • Figures 9A-9C respectively show a landmark pattern, a corresponding pattern mask, and a pattern outline taking a cartoon pattern as an example.
  • FIGS. 10A-10B respectively show one-dimensional light intensity lattice data and two-dimensional light intensity lattice data obtained by the mobile robot for the landmark pattern of FIG. 1A.
  • FIG. 11 shows, for second landmark data represented by stars after statistical screening in two map regions, a landmark pattern constructed from a combination of triangles.
  • Figure 12 shows three historical paths selected from multiple historical paths recorded by the mobile robot.
  • FIG. 13 is a schematic flowchart of a control method of a mobile robot.
  • FIG. 14 is a schematic diagram of a scene of a mobile robot in a working mode including a homing operation.
  • FIG. 15 is a schematic diagram showing the architecture of a map updating system for marking landmark patterns in map data.
  • FIG. 16 is a schematic diagram showing the structure of a control system for controlling a mobile robot by utilizing the relationship between the landmark pattern and the working mode.
  • A, B or C or “A, B and/or C” means “any of the following: A; B; C; A and B; A and C; B and C; A, B and C” . Exceptions to this definition arise only when combinations of elements, functions, steps or operations are inherently mutually exclusive in some way.
  • for example, the decoration environments of different parts of a waiting room are very similar.
  • the commercial cleaning robot uses techniques such as VSLAM to extract, from images, the landmark data formed by the decoration environment and combines it with inertial navigation for positioning.
  • the map data it constructs therefore contains relatively dense landmark data with high similarity, which makes it difficult for the commercial cleaning robot, when its surroundings are occluded or the light is very strong/dark, to obtain landmark data that effectively represents its shooting location by matching the captured image against the map data.
  • in this way, the commercial cleaning robot may interrupt its navigation movement because it cannot obtain positioning data, thereby interrupting the cleaning work being performed.
  • in some solutions, an identification code is pre-affixed in the workplace, and the binary data encoded in the identification code is used to provide location data to the mobile robot.
  • the identification code includes, for example, a pattern code that uses color blocks to describe encoded binary data, such as a barcode or a two-dimensional code.
  • the operator affixes each identification code in advance to a designated position in the workplace where the mobile robot works, according to the label corresponding to the identification code.
  • each identification code is a pattern code obtained by encoding the binary data of the corresponding label.
  • the operator sticks the corresponding identification code on the corresponding water dispenser and marks the location information of the corresponding water dispenser in the map data.
  • when the identification code is captured and decoded, the information of the water dispenser numbered 001 is obtained, and the location information of that water dispenser is retrieved through the association.
  • the mobile robot can determine that the identification code is located near the corresponding position information when the identification code is captured, and based on the determination result, the mobile robot can roughly determine whether it is still on the preset navigation route, or whether it has seriously deviated from the preset navigation route.
  • the approach of decoding the two-dimensional code data in the identification code to confirm the label it represents, and determining the position where the identification code is attached according to that label, improves to a certain extent the accuracy of the mobile robot's navigation movement in a complex workplace, but is still not conducive to balancing the work efficiency and working range of the mobile robot.
  • taking commercial cleaning robots as an example, improving the working range of a commercial cleaning robot requires pasting more identification codes in the workplace so that the corresponding working range is effectively covered to assist positioning, which in turn means the identification codes intrude on the corresponding workplace.
  • moreover, manually marking the identification code in the map data causes a large error between the actual position of the identification code and the marked position, which is not conducive to accurate positioning; for example, if an identification code is set near a fork in the route, it may mislead the positioning of the mobile robot.
  • the present application also provides a method and system for controlling the operations performed by the mobile robot by using a pre-built association between the landmark pattern and the working mode.
  • the association relationship is used for the mobile robot to determine the corresponding behavior operation according to the location of the landmark pattern in the map and the path passing through the location.
  • the landmark pattern is arranged in the functional area associated with the work mode in the workplace.
  • the workplace is a supermarket
  • the mobile robot is a commercial cleaning robot
  • items with landmark patterns are arranged in areas of different types of commodities in the supermarket; examples of the areas include fresh food areas, snack areas, and toiletries area, etc.; according to the types of goods placed in different areas, the paths and cleaning methods of commercial cleaning robots are different.
  • the commercial cleaning robot performs a mopping operation along the path around the fresh food counters.
  • commercial cleaning robots perform sweeping operations and mopping operations around the path where snacks and toiletries are placed on the counter.
  • the workplace is a waiting hall
  • the mobile robot is a commercial cleaning robot
  • items with landmark patterns are arranged in the waiting hall and the check-in area in the waiting hall;
  • commercial cleaning robots perform sweeping/mopping operations along different paths.
  • the landmark patterns are placed on the paths traveled by the mobile robot in various areas of the workplace.
  • the landmark pattern can uniquely identify a position on the path corresponding to the working mode of the mobile robot, or identify the intersection of multiple paths corresponding to multiple working modes, and the like.
  • the landmark pattern is on items pre-arranged at different locations in the workplace.
  • the landmark pattern is obviously different from the patterns formed by the decoration environment of the workplace, or from the indication/logo patterns in the workplace; it is mainly a pattern formed by at least one of, or a combination of, graphics, textures and colors.
  • the patterns in the decoration environment include patterns set to express the decoration style, such as wallpaper, wall enclosures, patterns on seats, and the like.
  • Examples of the indication pattern include: an inquiry pattern in a waiting hall, a trademark pattern of each airline, a discount pattern in a supermarket, and the like.
  • the items are, for example, stickers, decals, standing signs, hanging signs, and the like; or decorative items, such as tiles, incorporating landmark patterns.
  • the landmark pattern includes at least one preset geometric shape.
  • the geometric shapes include at least one of, or a combination of, basic geometric shapes such as rectangles, polylines, triangles, circles, ellipses, line segments, and curves.
  • each geometric shape in the landmark pattern is made of at least one material with a certain light reflection coefficient.
  • the geometric shape may be formed by using a highly reflective material, or a low reflective material, or a combination of a high reflective material and a low reflective material.
  • the high reflective material and the low reflective material are selected based on the light intensity of the wavelength of light to which the light sensing device of the mobile robot is sensitive.
  • the physical size of each geometric shape is preset; for example, the physical size of each of the following example geometric quantities is preset: the length of each side of a triangle, two sides and an included angle of a triangle, the radius of a circle, the side lengths of a rectangle, the length of a line segment, the length and curvature of a curve, and the like.
  • one geometric shape made of highly reflective material and another geometric shape made of low reflective material are spliced into a landmark pattern, wherein the two geometric shapes can be the same or different, and the splicing methods include seamless splicing or seam stitching etc.
  • FIGS. 1A-1D are exemplary diagrams of using geometric shapes to form landmark patterns
  • the geometric shapes in FIG. 1A include rectangles A1 and A2, and a gap is left between rectangles A1 and A2 to facilitate identification of the rectangles A1 and A2 and of the gap between them; rectangles A1 and A2 use a relatively high-reflective material and the gap uses a relatively low-reflective material, or rectangles A1 and A2 use a relatively low-reflective material and the gap uses a relatively high-reflective material.
  • the geometric shape in FIG. 1B includes a rectangle B1 and a circle B2, and a gap is left between the rectangle B1 and the circle B2.
  • the geometric shapes in FIG. 1C include rings C1 and C2; the outer contour of ring C1 is a rectangle and its inner contour is an ellipse, the outer and inner contours of ring C2 are both circular, and there is a gap between rings C1 and C2.
  • as illustrated, the rings C1 and C2 are made of relatively high-reflective materials, while their outer surroundings, inner regions and the gap are relatively low-reflective; or, differently from the illustration, the rings C1 and C2 are made of a relatively first low-reflection material, their outer surroundings are made of a relatively second low-reflection material, and the shapes enclosed by their inner contours are made of a relatively high-reflective material.
  • the first low-reflection material has a lower light reflectivity than the second low-reflection material, so as to highlight the boundary of the shape enclosed by the inner contour.
  • the geometric shape in FIG. 1D includes a non-closed triangle formed by lines D1, D2 and D3.
  • the lines D1, D2 and D3 are made of relatively high-reflective materials.
  • the region enclosed by the resulting non-closed triangle uses relatively low-reflective materials.
  • the landmark pattern is a pattern determined by the mobile robot by identifying objects placed in an area within the workplace and a plurality of landmarks in the environment surrounding the objects.
  • the landmark pattern is a pattern formed by a plurality of landmarks at a specific location in the corresponding workplace obtained by image processing of the light intensity lattice data by the mobile robot.
  • for example, the workplace is a supermarket in which fish tanks, faucets, fish commodity signs, lampshades and the like are placed in the aquatic product area; through one or more combinations of shapes, textures and the like recognized from images, the mobile robot obtains a landmark pattern consisting of at least one landmark, such as dots/lines on the surfaces of the above-mentioned articles.
  • the landmark pattern is determined by using a plurality of landmarks to construct at least one geometric figure or a combination of the above-mentioned geometric figures.
  • FIG. 2 is a simplified schematic diagram of an aquatic product area in a supermarket, in which the star-shaped marks form a triangular landmark pattern constructed by the mobile robot; the star-shaped marks in this example graphically indicate the locations of landmarks identified by the mobile robot in the aquatic product area and do not indicate that a star pattern is actually located in the aquatic product area.
  • the mobile robot extracts stable landmark data at specific locations in the functional area according to the images and/or measurement data captured at different positions in the same functional area during different periods of time; the landmarks corresponding to these landmark data constitute the landmark pattern.
  • the light reflection ability of the landmark is different from the light reflection ability of its surrounding environment, so as to form a light reflection difference with the surrounding environment.
  • Examples of the landmarks include, but are not limited to: dots/lines on the frame of a publicity board, strokes/lines/dots of printed text, dots/lines on the outline of a lampshade, and the like.
  • the mobile robot is equipped with a fill light that assists the acquisition of light intensity lattice data at each shooting position in the workplace during multiple navigation movements; this makes it easier to use landmarks in the workplace that reflect the fill light's wavelength strongly/weakly, to obtain light intensity lattice data containing those landmarks, and to obtain the corresponding landmark data by processing the light intensity lattice data.
  • the multiple navigation movements include at least one of the following: navigation movements performed during the closing/opening period of the workplace, navigation movements performed when the sunlight is sufficient/insufficient, and the like.
  • the landmark data includes: light intensity lattice data including landmarks, feature vectors describing the landmarks (referred to as landmark features), landmark location information of the landmarks in the map data, etc.
  • the landmark data is also associated with photography location information and light intensity lattice data containing landmarks, etc.
  • the landmark pattern is obtained based on a geometric figure, or a combination of geometric figures, enclosed by the landmark position information of each landmark data in the map data; or the landmark pattern is obtained based on a geometric figure, or a combination of geometric figures, enclosed by each landmark data in the light intensity lattice data. It can be seen that the landmark pattern is determined by the mobile robot's selection of the acquired landmark data.
  • the method for selecting the landmark data includes at least one of the following: selecting the landmark data whose landmarks are captured from the largest number of shooting positions, by counting the shooting positions of each landmark data; or selecting the landmark data that repeatedly falls into the same light intensity data. For example, by using the relationships established in the map data among landmark data, shooting positions and captured light intensity data, statistics are computed on the shooting positions corresponding to each landmark data and the positional relationships among those shooting positions; the counted number of shooting positions capturing the same landmark data, the positional relationships between shooting positions, the position angles and the like are then evaluated, and a plurality of landmark data are selected to constitute the landmark pattern (a minimal sketch of such a selection is given after these examples).
  • for another example, by using the relationships established in the map data among landmark data, shooting positions and captured light intensity data, multiple sets of landmark data {P11, P12, P13, P14}, {P21, P22, P13, P14, P15}, {P11, P21, P12, P13}, {P21, P12, P13, P16} photographed at different positions are determined; statistics are computed on how often the determined landmark data fall into the same light intensity data, the counted numbers of shooting positions and the probability of falling into the same light intensity data are evaluated, and a plurality of landmark data {P21, P12, P13} are selected from among them to constitute the landmark pattern.
  • the plurality of landmark data selected in the above examples can correspond to a plurality of landmarks in the workplace, and the geometric figures (or a combination of geometric figures) enclosed by the plurality of landmarks constitute a landmark pattern in the workplace.
  • the landmark pattern is obtained by combining the two preceding examples.
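  • The following is a minimal, illustrative Python sketch (not the implementation of this application) of the second selection example above: it counts, for each landmark, how many shooting positions captured it and how often pairs of landmarks fall into the same light intensity data, then keeps the frequently co-observed landmarks; the observation sets and thresholds are assumptions chosen to reproduce the {P21, P12, P13} example.

```python
from collections import Counter
from itertools import combinations

# each entry: the set of landmark IDs falling into one piece of light intensity data
observations = [
    {"P11", "P12", "P13", "P14"},
    {"P21", "P22", "P13", "P14", "P15"},
    {"P11", "P21", "P12", "P13"},
    {"P21", "P12", "P13", "P16"},
]

# 1) count, per landmark, how many shooting positions captured it
seen_count = Counter(lm for obs in observations for lm in obs)

# 2) count how often each pair of landmarks falls into the same light intensity data
pair_count = Counter()
for obs in observations:
    for pair in combinations(sorted(obs), 2):
        pair_count[pair] += 1

# 3) keep landmarks seen from enough positions (illustrative thresholds), then keep
#    only those that also co-occur often enough with another kept landmark
min_views, min_cooccur = 3, 2
candidates = {lm for lm, n in seen_count.items() if n >= min_views}
strong_pairs = [p for p, n in pair_count.items() if n >= min_cooccur and set(p) <= candidates]
selected = {lm for pair in strong_pairs for lm in pair}

print(sorted(selected))  # -> ['P12', 'P13', 'P21'], i.e. the {P21, P12, P13} of the example
```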
  • the aforementioned landmark pattern that is clearly different from the environment and function indication patterns of the workplace is referred to as a first landmark pattern
  • the aforementioned landmark pattern formed by using landmarks in the workplace is referred to as a second landmark pattern.
  • the landmark pattern obtained by combining the first landmark pattern and the second landmark pattern is called the third landmark pattern.
  • the landmarks constituting the third landmark pattern include points/lines on at least part of the geometric shapes in the first landmark pattern, and landmarks in the second landmark pattern.
  • the first landmark patterns arranged in a workplace are not all identical, and locations where identical first landmark patterns are distributed must be kept sparse, so that the mobile robot, while moving, is not prevented from accurately determining the corresponding working mode from an identified first landmark pattern. For example, first landmark patterns of the same pattern are not arranged within a preset radius range (e.g., a range of 5 m).
  • for ease of the subsequent description, the landmark data corresponding to the first landmark pattern is referred to as first landmark data, and the landmark data corresponding to other landmarks is referred to as second landmark data.
  • unless the identified landmark data is determined to be first landmark data, it is also referred to as second landmark data.
  • landmarks include landmarks in the workplace that exist through environmental decoration, architectural layout, and functional area division, and can be recognized by mobile robots and are associated with geographic locations.
  • for example, a landmark formed on the ceiling or in the surroundings of the position where the mobile robot is located can be identified by the mobile robot as a landmark feature.
  • the mobile robot includes hardware devices and software devices that provide the corresponding working modes and identify each landmark pattern.
  • FIG. 3 shows a block diagram of the structure of the mobile robot.
  • the mobile robot includes: a light sensing device 12 , a moving device 14 , a control device 11 and the like.
  • the mobile robot further includes: a behavior operation device 13 .
  • the behavior operation device 13 includes at least one of the following: a cleaning component, a mopping component, or a polishing component.
  • the light sensing device is used for acquiring light intensity lattice data.
  • the corresponding obtained light intensity lattice data may be one-dimensional data or two-dimensional data.
  • the light-sensing device is a one-dimensional light-sensing device that uses the principle of light reflection for measurement.
  • the light sensing device includes at least one of the following: a line laser sensing device, a movable single-point laser sensing device, and the like.
  • the line laser sensing device acquires one-dimensional light intensity lattice data with a certain detectable range.
  • the line laser sensing device includes a plurality of laser transceivers arranged in a straight line/arc shape, and the lattice laser beams emitted by each laser transceiver are located in a plane.
  • the movable single-point laser sensing device obtains one-dimensional light intensity lattice data over a detectable range corresponding to its motion stroke. For example, the single-point laser sensing device is rotated within a detectable range greater than 0 degrees and less than or equal to 360 degrees to obtain one-dimensional light intensity lattice data within its rotation plane, where each light intensity value in the one-dimensional light intensity lattice data corresponds to an angle value within the detectable range.
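  • As a minimal illustration (assumptions only, not this application's device interface), the following Python sketch shows the form such one-dimensional light intensity lattice data can take for a rotating single-point laser sensor: one intensity sample per rotation angle within the detectable range, with the sensor read-out simulated by a stand-in function.

```python
import random

def simulated_reflection(angle_deg):
    # stand-in for the reflected-light intensity measured by the laser transceiver:
    # a highly reflective landmark is assumed to span 40 deg - 50 deg, background elsewhere
    return 0.9 if 40.0 <= angle_deg <= 50.0 else 0.1 + 0.05 * random.random()

def acquire_1d_lattice(angle_step_deg=1.0, detectable_range_deg=360.0):
    """Return a list of (angle, intensity) samples covering the rotation plane."""
    lattice = []
    angle = 0.0
    while angle < detectable_range_deg:
        lattice.append((angle, simulated_reflection(angle)))
        angle += angle_step_deg
    return lattice

scan = acquire_1d_lattice()
print(len(scan), scan[45])  # 360 samples; the sample at 45 degrees falls on the assumed landmark
```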
  • the light sensing device is a two-dimensional light sensing device that uses the principle of light reflection to perform image imaging.
  • the light sensing device includes at least one of the following multi-line laser sensing devices, imaging devices, infrared imaging devices, and the like.
  • the multi-line laser sensing device acquires multiple sets of one-dimensional light intensity lattice data with a certain detectable range to form two-dimensional light intensity lattice data.
  • the light sensing device includes a monocular camera device or a binocular camera device integrated with CCD and CMOS, etc., to obtain a color image (ie, two-dimensional light intensity lattice data).
  • the infrared camera device includes infrared light-emitting diodes and a CCD chip to sense a color image of infrared light (i.e., two-dimensional light intensity lattice data), and can be integrated with other measurement sensors, such as ToF sensors, so as to simultaneously output other measurement data such as depth image data together with the two-dimensional light intensity lattice data.
  • the two-dimensional light-sensing device disposed on the mobile robot is further configured with a driving component to drive the light-sensing device to rotate and/or translate within the detectable range provided by the mobile robot.
  • the detectable range includes the natural viewing angle range of the two-dimensional light sensing device and the sum of the rotation range and/or translation range generated by the driving component.
  • FIG. 4 and FIG. 5 are respectively a schematic structural diagram of a complete mobile robot equipped with a driving component, and a structural schematic diagram of the driving component.
  • the mobile robot 1 includes a main body 10 , a visible casing 100 , and a carrier 102 for assembling driving components, wherein the carrier 102 is provided with a hollow structure 103 for exposing the light sensing device 101 .
  • the driving component 202 includes a movable member 2021 and a driver 2022 .
  • the movable element 2021 is connected and movable to drive the light sensing device (not shown).
  • the light sensing device and the movable member 2021 may be connected by positioning or by a transmission structure.
  • the positioning connection includes any one or more of snap connection, riveting, bonding, and welding.
  • the movable member 2021 is, for example, a drive rod that can be rotated laterally, and the light sensing device has a recessed hole (not shown) that fits the drive rod in a form-fitting manner, so that the light sensing device rotates laterally with the drive rod.
  • alternatively, the movable member 2021 is, for example, a screw rod with a connecting seat that translates as the screw rod rotates, and the connecting seat is fixed to the light sensing device 201 so that the light sensing device 201 moves accordingly.
  • the light sensing device 201 and the movable member 2021 may also be connected by one or more of teeth, gears, racks, toothed chains, etc., so as to realize the movable member 2021 For the driving of the light sensing device 201 .
  • if the line laser sensing device disposed on the driving component can obtain one-dimensional light intensity lattice data, then, driven by the driving component, the line laser sensing device can obtain one-dimensional light intensity lattice data over a larger detectable range. If the light sensing device disposed on the driving component can obtain two-dimensional light intensity lattice data, then, driven by the driving component, the light sensing device can obtain multiple pieces of two-dimensional light intensity lattice data with overlapping viewing-angle regions within the detectable range, or obtain one piece of two-dimensional light intensity lattice data covering the detectable range.
  • the mobile device is used for controlled execution of a mobile operation.
  • the mobile device may include a walking mechanism and a driving mechanism, wherein the walking mechanism may be disposed on the bottom of the mobile robot, and the driving mechanism is built in the casing of the mobile robot.
  • the traveling mechanism may adopt traveling wheels; in one implementation, the traveling mechanism may include, for example, at least two universal traveling wheels, and the at least two universal traveling wheels realize movements such as going forward, going backward, steering and rotating.
  • the running mechanism may, for example, include a combination of two straight-traveling running wheels and at least one auxiliary steering wheel, wherein, in the case where the at least one auxiliary steering wheel is not involved, the two straight-running running wheels It is mainly used for forward and backward, and when the at least one auxiliary steering wheel participates and cooperates with the two straight traveling wheels, the movement such as steering and rotation can be realized.
  • the drive mechanism can be, for example, a drive motor, and the drive motor can drive the traveling wheels in the traveling mechanism to move.
  • the drive motor may be, for example, a reversible drive motor, and a speed change mechanism may also be provided between the drive motor and the axle of the traveling wheel.
  • the behavior operation device is used to perform behavior operation in a certain working mode according to the position moved by the mobile robot.
  • the behavior operation device is also called a cleaning device, which has functions of cleaning and mopping floors.
  • the cleaning device includes a mopping assembly (not shown), wherein the mopping assembly is used to perform a mopping operation in a controlled manner.
  • the mopping assembly includes: a mopping pad, a mopping pad carrier, a spray device, a sprinkler device, and the like.
  • the mopping component is used for controlled mopping operation in mopping mode.
  • the cleaning device further includes a sweeping assembly (not shown), the sweeping assembly is used to perform a sweeping operation in a controlled manner.
  • the cleaning assembly may include a side brush at the bottom of the housing, a rolling brush, and a side brush motor for controlling the side brush and a rolling brush motor for controlling the rolling brush, wherein the number of the side brushes There can be at least two, which are symmetrically arranged on opposite sides of the front end of the mobile robot housing.
  • the side brushes can be rotary side brushes, which can be rotated under the control of the side brush motors.
  • the rolling brush is located in the middle of the bottom of the mobile robot and rotates under the control of the rolling brush motor to perform cleaning work, sweeping garbage off the floor to be cleaned and conveying it into the dust collecting assembly through the collection inlet.
  • the dust collecting assembly may include a dust collecting chamber and a fan, wherein the dust collecting chamber is placed in the housing, and the fan is used for providing suction to suck the garbage into the dust collecting chamber.
  • the cleaning device is not limited to this.
  • the supporting use of the mobile robot also includes: a charging pile and/or a garbage collection device.
  • the garbage recycling device includes a solid garbage recycling device and/or a sewage recycling device.
  • the above-mentioned landmark pattern is also provided at the location where the charging pile and the garbage collection device are located, and the landmark pattern is used to associate with the homing working mode of the mobile robot.
  • the mobile robot also includes a household cleaning robot, which is used to perform cleaning operations in domestic premises, as well as performing homing operations.
  • FIG. 6 is a schematic structural diagram of a charging pile provided with a landmark pattern, wherein the charging pile includes a main body and an electrical signal terminal 22 exposed outside the main body, and a landmark pattern 21 is provided on the main body.
  • the landmark pattern corresponds to the homing operation of the mobile robot, and the mobile robot constructs the association relationship between the working mode to which the homing operation belongs and the corresponding landmark pattern through the preset correspondence between the homing operation and the landmark pattern.
  • the structure of the charging pile is only an example, and the charging pile may also be a charging pile that provides a charging function using the principle of wireless charging, and a landmark pattern for identifying the location of the charging pile may also be provided on the body thereof.
  • the surface of the body of the garbage collection device is likewise provided with a landmark pattern, so as to associate the corresponding working mode and its operations with the landmark pattern.
  • the control device is used for marking the landmark patterns, constructing the association relationship between each landmark pattern and the working mode, and using the association relationship to control the mobile device, behavior operation device and the like of the mobile robot.
  • the control system includes a storage unit, an interface unit, and a processing unit.
  • the interface unit is used for receiving the light intensity lattice data captured by the light sensing device.
  • the interface unit is connected with at least one light sensing device and is used for reading, from the corresponding light sensing device, the light intensity lattice data formed by light reflected from object surfaces within its detectable range.
  • the interface unit is also connected with the mobile device and/or the behavior execution device to output control instructions for controlling the mobile device and/or the behavior execution device.
  • for example, the interface unit is connected to the driving motor of the traveling mechanism and outputs control commands to control the rotation of the side brush, the rolling brush, or the traveling mechanism.
  • the control instruction is generated by the processing unit based on the association relationship.
  • the interface unit includes, but is not limited to, a serial interface such as an HDMI interface or a USB interface, or a parallel interface.
  • the storage unit is used for storing at least one kind of program, and the at least one kind of program can be used by the processing unit to execute instructions such as a method for controlling a mobile robot, a method for marking landmarks, and the like that cause various hardware devices to execute cooperatively.
  • the storage unit also stores map data.
  • the map data is a digital representation that maps the workplace in which the mobile robot moves onto a coordinate system using a grid/vector representation or the like, and includes, but is not limited to, various types of landmark data and their landmark position information, as well as positioning information of the mobile robot.
  • the various types of landmark data include the aforementioned first landmark data and second landmark data.
  • a data association relationship is established between the above-mentioned data/information and the like according to data association requirements, so that other data/information can be indexed by using some of the data/information.
  • all kinds of landmark data have a data association relationship with the light intensity lattice data, and the light intensity lattice data and the shooting position of the corresponding landmark data can be queried by using the various landmark data.
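  • The data associations described above might be organized as in the following Python sketch (field names and values are illustrative assumptions, not the patent's schema): each piece of landmark data indexes the light intensity lattice data it was extracted from and, through it, the corresponding shooting position.

```python
from dataclasses import dataclass, field

@dataclass
class LandmarkData:
    landmark_id: str
    kind: str                      # "first" (landmark pattern) or "second" (environment landmark)
    landmark_position: tuple       # position in map coordinates
    feature: list                  # feature vector describing the landmark
    lattice_ids: list = field(default_factory=list)   # associated light intensity lattice data

@dataclass
class LatticeRecord:
    lattice_id: str
    shooting_position: tuple       # where the robot stood when capturing it
    data: list                     # the light intensity lattice itself

map_data = {
    "landmarks": {
        "L001": LandmarkData("L001", "first", (2.0, 3.5), [0.1, 0.7], ["S10", "S42"]),
    },
    "lattices": {
        "S10": LatticeRecord("S10", (1.2, 3.0), [0.1, 0.1, 0.9, 0.9, 0.1]),
        "S42": LatticeRecord("S42", (2.4, 4.1), [0.1, 0.8, 0.9, 0.2, 0.1]),
    },
}

# query the shooting positions and lattice data associated with landmark L001
lm = map_data["landmarks"]["L001"]
for lid in lm.lattice_ids:
    rec = map_data["lattices"][lid]
    print(lid, rec.shooting_position, rec.data)
```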
  • the storage unit includes but is not limited to: read-only memory (Read-Only Memory, ROM for short), random access memory (Random Access Memory, RAM for short), and nonvolatile memory (Nonvolatile RAM, NVRAM for short).
  • storage units include flash memory devices or other non-volatile solid state storage devices.
  • the storage unit may also include memory remote from the one or more processing units, such as network-attached memory accessed via RF circuitry or external ports and a communication network, where the communication network may be the Internet, one or more intranets, a local area network, a wide area network, a storage area network, etc., or a suitable combination thereof.
  • a memory controller controls access to memory by other components of the device, such as the CPU and peripheral interfaces.
  • the processing unit is connected with the interface unit and the storage unit.
  • the processing unit includes one or more processors (CPUs).
  • the processing unit is operable to perform data read and write operations with the memory unit.
  • the processing unit performs operations such as acquiring light intensity lattice data, temporarily storing landmark features, performing landmark feature matching calculations, and outputting control instructions according to measurement data fed back from the mobile device and/or behavior execution device.
  • the processing unit includes one or more general-purpose microprocessors, one or more special-purpose processors (ASIC), one or more digital signal processors (Digital Signal Processor, DSP for short), one or more field programmable logic Array (Field Programmable Gate Array, FPGA for short), or any combination thereof.
  • the processing unit is also operably coupled to the I/O port and to a human-computer interaction device, the human-computer interaction device enabling a user to interact with the mobile robot, for example to perform configuration operations such as entering a preset height value.
  • the human-computer interaction device may include buttons, keyboards, mice, trackpads, and the like.
  • the I/O port may enable the mobile robot to interact with various other electronic devices such as mobile devices and/or behavior execution devices, including but not limited to: motors in the mobile device in the mobile robot, or A slave processor in a mobile robot dedicated to controlling mobile devices and cleaning devices, such as a Microcontroller Unit (MCU for short).
  • in some examples, the map data pre-stored in the mobile robot contains first landmark data, and a data association between at least part of the first landmark data and a working mode is pre-built, thereby determining the association relationship between the landmark pattern corresponding to the first landmark data and that working mode.
  • FIG. 7 is a schematic flowchart of a method for marking the first landmark data, which is used to convert the landmark pattern recognized by the mobile robot during movement into the first landmark data and record it in the map data.
  • the method for marking the first landmark data may be performed in the stage of constructing map data by the mobile robot; or in the stage of updating the map data by the mobile robot.
  • during movement, the mobile robot maps the initial position of its movement to a preset positioning position in the map data, such as the coordinate origin of the map data, starts to acquire light intensity lattice data at different positions, and measures its movement data.
  • for example, the mobile robot captures first light intensity data at a second position and extracts a plurality of landmark features A from it, matches the first light intensity data and its landmark features A against the landmark features of the first landmark data and of the second landmark data of each piece of second light intensity data in the map data, and determines the second position according to the matched landmark features and their pixel deviations in the respective light intensity lattice data.
  • the mobile robot also executes, in real time or in a delayed manner: updating the map data according to the multiple landmark features A captured at different positions, for example adding the corresponding first landmark data or second landmark data to the map data; and updating the map data for first landmark data or second landmark data whose number of matches is below a preset matching-number threshold, for example deleting that first landmark data or second landmark data from the map data.
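  • A hedged sketch of the real-time/delayed map-update step just described is given below; matching is reduced to landmark identifiers, and the thresholds and ageing rule are illustrative assumptions (the application matches by landmark features and pixel deviations rather than by identifier).

```python
def update_map(map_landmarks, observed_features, match_threshold=3, min_age=5):
    """map_landmarks: dict id -> {"feature": tuple, "matches": int, "age": int}
    observed_features: dict id -> feature tuple (landmark features A) extracted
    from the current light intensity lattice data."""
    # 1) match observed features against landmark data already in the map
    for lm_id, feature in observed_features.items():
        if lm_id in map_landmarks:
            map_landmarks[lm_id]["matches"] += 1          # matched again
        else:
            # add the corresponding first/second landmark data to the map data
            map_landmarks[lm_id] = {"feature": feature, "matches": 1, "age": 0}

    # 2) delayed pruning: landmark data that has had enough chances to be
    #    re-observed but whose match count stays below the preset matching-number
    #    threshold is deleted from the map data
    stale = []
    for lm_id, entry in map_landmarks.items():
        entry["age"] += 1
        if entry["age"] >= min_age and entry["matches"] < match_threshold:
            stale.append(lm_id)
    for lm_id in stale:
        del map_landmarks[lm_id]
    return map_landmarks

landmarks = {"L1": {"feature": (0.2, 0.8), "matches": 4, "age": 6}}
observed = {"L1": (0.2, 0.8), "L2": (0.5, 0.1)}   # landmark features A from one capture
print(update_map(landmarks, observed).keys())      # L1 is kept, L2 is newly added
```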
  • the mobile robot executes the following steps S110-S130, so as to mark the first landmark data in the map data.
  • the landmark pattern corresponding to the first landmark data is usually provided at a position associated with a work area in the workplace, or at a position associated with a work path within that work area; marking the first landmark data in the map data therefore helps the mobile robot to locate itself and to choose to perform the related behavior operations.
  • step S110 light intensity lattice data is acquired.
  • the light intensity lattice data comes from the storage device.
  • the light intensity lattice data in a storage address in the storage device is read by a read operation of the database or by a read command of calling the memory.
  • the light intensity lattice data may come from a light sensing device configured in the mobile robot.
  • the light intensity lattice data is one-dimensional data or two-dimensional data.
  • since the detectable range of the light sensing device configured on the mobile robot is greater than or equal to the viewing angle range of the light sensing device, the light intensity lattice data is obtained from at least one shot.
  • for example, the captured light intensity lattice data contains a landmark pattern formed by combining geometric shapes E11 and E12 made of a highly reflective material; the resulting light intensity lattice data is shown in FIG. 8A.
  • the captured light intensity lattice data includes a landmark pattern
  • the landmark pattern is formed by combining geometric shapes E21 and E22 made of highly reflective materials.
  • the light intensity lattice data is shown in Figure 8B.
  • when one piece of light intensity lattice data is formed by splicing a plurality of pieces of light intensity lattice data acquired during the movement of the light sensing device, one-dimensional or two-dimensional data similar to FIGS. 8A and 8B but with a wider viewing angle range can be obtained; this is not shown here.
  • the landmark pattern can also be made of a low-reflective material.
  • in that case, the light intensity lattice data obtained by the mobile robot has intensities opposite to those shown in FIGS. 8A and 8B, which will not be described in detail here. It should also be noted that, to facilitate the description of one-dimensional and two-dimensional light intensity lattice data, FIGS. 8A and 8B have been exaggerated/cropped/filtered, and are shown only as representations rather than as actual one-dimensional and two-dimensional light intensity lattice data.
  • step S120 it is confirmed that the acquired light intensity lattice data includes a pattern area corresponding to the landmark pattern.
  • the mobile robot performs pattern screening on the light intensity value of each pixel position in the acquired light intensity lattice data, selects the pattern area to be confirmed that is formed by the pixel positions meeting the pattern screening conditions, and determines, by performing image analysis on the pattern area to be confirmed, whether it matches a landmark pattern; if it matches, it is determined that the light intensity lattice data contains a pattern area corresponding to the landmark pattern and step S130 is executed; otherwise, step S110 is executed again to acquire new light intensity lattice data.
  • the pattern screening conditions are used to remove light intensity values in the light intensity lattice data that obviously do not conform to the pattern rules of the landmark pattern, so that the screened light intensity values form, according to the pixel positions, at least one connected domain (also referred to as a pattern area to be confirmed). To this end, the pattern screening conditions at least include a screening condition set based on the intensity of light, and may further include screening conditions set to remove image noise, for example a condition that removes pattern areas to be confirmed whose connected-domain area is smaller than a preset connected-domain area threshold, or a condition that removes pattern areas to be confirmed whose connected-domain shape differs significantly from the shape of the pattern template corresponding to the landmark pattern.
  • the mobile robot obtains the pattern areas to be confirmed from the retained/screened pixel positions; a pattern area to be confirmed includes, for example, a pattern area formed by one closed contour, or a pattern area formed by a combination of multiple closed contours. The mobile robot then performs image analysis on each pattern area to be confirmed to determine the landmark pattern to which it corresponds.
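  • The pattern-screening step can be illustrated with the following Python sketch, assuming two-dimensional light intensity lattice data, an intensity-based screening condition and a connected-domain area threshold (all values illustrative); it is not the implementation of this application.

```python
import numpy as np
from scipy import ndimage

def candidate_pattern_areas(lattice, intensity_threshold=0.6, min_area=20):
    """lattice: 2-D numpy array of light intensity values in [0, 1].
    Returns a list of boolean masks, one per pattern area to be confirmed."""
    # screening condition set based on the intensity of light
    bright = lattice >= intensity_threshold

    # group the retained pixel positions into connected domains
    labeled, num = ndimage.label(bright)

    areas = []
    for i in range(1, num + 1):
        mask = labeled == i
        if mask.sum() < min_area:       # noise-removal condition: area too small
            continue
        areas.append(mask)              # pattern area to be confirmed
    return areas

# example: a synthetic lattice with one bright rectangular region
demo = np.full((40, 40), 0.1)
demo[10:20, 5:30] = 0.9
print(len(candidate_pattern_areas(demo)))   # -> 1
```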
  • the above-mentioned image screening process can also be omitted according to the data volume of the light intensity point cloud data provided by the light sensing device of the mobile robot, or the actual environment of the operation performed by the mobile robot.
  • for example, the mobile robot divides the acquired light intensity point cloud data into a pattern area, or a combination of pattern areas, based on abrupt changes in the light intensity values of adjacent pixel positions, and uses the recognition conditions to identify the landmark pattern.
  • the mobile robot identifies the pattern areas to be determined that were screened out of the light intensity lattice data according to a pattern template reflecting the landmark pattern, so as to confirm whether the acquired light intensity lattice data contains a pattern area corresponding to the landmark pattern.
  • the pattern template includes, for example, the outline in the landmark pattern, and/or the image features that constitute the landmark pattern and presented in the light intensity lattice data.
  • the pattern template includes: image features represented by the contour of the landmark pattern in the light intensity lattice data, such as corner features, line features, contour features and any combination thereof; and/or pattern lines or pattern masks set according to the contour of the landmark pattern.
  • the light reflection intensity of the material used in the landmark pattern is different from the light reflection intensity of its surrounding materials, so as to facilitate the mobile robot to identify.
  • for example, a pattern template corresponding to the landmark pattern is preset in the mobile robot.
  • after the mobile robot obtains the light intensity lattice data, the data is segmented according to the light intensity values to obtain the pattern areas to be determined, and the similarity between the pattern template and each pattern area to be determined yields the confirmation result of whether the acquired light intensity lattice data contains the landmark pattern.
  • the pattern template is image data set based on the outline of the landmark pattern, which includes but is not limited to pattern lines or at least one pattern mask describing the outline of the landmark pattern.
  • FIG. 9A and FIG. 9B are respectively a landmark pattern taking a cartoon pattern as an example, and a corresponding pattern mask.
  • the corresponding pattern template includes a pattern mask corresponding to the overall outline of the cartoon pattern and pattern masks corresponding to local contours within the cartoon pattern.
  • the corresponding pattern template is the contour lines forming the cartoon pattern.
  • the mobile robot extracts at least one pattern area to be determined from the acquired light intensity lattice data, according to abrupt changes in the light intensity values of adjacent pixels in the light intensity lattice data or according to the pattern screening conditions.
  • the mobile robot then uses the pattern template to evaluate the similarity of each obtained pattern area to be determined, so as to determine the pattern area in the light intensity lattice data that corresponds to the landmark pattern.
  • determining the pattern area containing the corresponding landmark pattern in the light intensity lattice data by means of similarity can use an image similarity algorithm, for example a histogram comparison algorithm or an image feature comparison algorithm, as sketched below.
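  • As a sketch of the histogram-comparison option mentioned above (thresholds and normalisation are illustrative assumptions, not this application's algorithm), a pattern area to be determined can be compared with a pattern template by correlating their light intensity histograms:

```python
import numpy as np

def histogram_similarity(area, template, bins=16):
    """area, template: 2-D arrays of light intensity values in [0, 1].
    Returns the correlation coefficient between their intensity histograms."""
    h1, _ = np.histogram(area, bins=bins, range=(0.0, 1.0), density=True)
    h2, _ = np.histogram(template, bins=bins, range=(0.0, 1.0), density=True)
    h1 = h1 - h1.mean()
    h2 = h2 - h2.mean()
    denom = np.sqrt((h1 ** 2).sum() * (h2 ** 2).sum())
    return float((h1 * h2).sum() / denom) if denom > 0 else 0.0

def contains_landmark(area, template, threshold=0.8):
    # matching condition: similarity above a preset threshold
    return histogram_similarity(area, template) >= threshold

# synthetic example: a template and a slightly shifted candidate pattern area
template = np.zeros((20, 20)); template[5:15, 5:15] = 1.0
candidate = np.zeros((20, 20)); candidate[4:16, 6:14] = 1.0
print(contains_landmark(candidate, template))   # -> True
```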
  • the manner described in the above examples for determining that a landmark pattern is contained in the light intensity lattice data can also be used when the landmark pattern is formed by one geometric shape or a combination of several geometric shapes.
  • the mobile robot stores pattern templates corresponding to multiple landmark patterns.
  • the number of different pattern templates stored by the mobile robot covers the number of landmark patterns that may be arranged in the various workplaces it is adapted to, so that the mobile robot and the accessories bearing the landmark patterns can be matched to each other at the factory.
  • the landmark pattern is a pattern composed of a plurality of geometric shapes made of materials with a certain light reflection coefficient.
  • the mobile robot can also pre-store the geometric features of each geometric shape as they appear in the light intensity lattice data; these geometric features are determined by the lattice dimension of the light intensity lattice data.
  • examples of the geometric features include: features of the contour of the geometric shape in the image.
  • the geometric features include each pattern template set according to the outline of each geometric figure in the landmark pattern.
  • the landmark pattern includes a rectangle
  • the geometric features of the rectangle in the one-dimensional light intensity lattice data include: image features of line segments whose length is not less than a preset number of pixels.
  • the geometric features also include, for example, features in the image that reflect the geometric data in the geometric shape or the relationship between the geometric data, and the like.
  • the landmark pattern includes triangles
  • the geometric features of the triangles in the two-dimensional light intensity lattice data include: corresponding image features that are consistent with at least two angles of the triangles.
  • the mobile robot determines a pattern area that includes a corresponding landmark pattern in the light intensity lattice data by performing the following steps.
  • step S121 at least one geometric feature is extracted from the acquired light intensity lattice data according to the pre-stored geometric feature; wherein, the geometric feature corresponds to the geometric shape constituting the landmark pattern.
  • step S122 a corresponding landmark pattern is determined according to the extracted geometric features.
  • the mobile robot performs image matching on the acquired light intensity lattice data according to the pre-stored geometric features, and determines the corresponding landmark pattern according to the geometric shapes corresponding to the matched geometric features, or according to those geometric shapes and the placement relationships between them.
  • for example, the mobile robot pre-stores geometric features of geometric shapes including right-angled isosceles triangles, squares, rectangles and circles, and uses the stored geometric features to perform a traversal similarity calculation on the acquired light intensity lattice data.
  • the geometric features of a right-angled isosceles triangle and a circle are extracted, and according to the pixel position relationship between the right-angled isosceles triangle and the circle in the light intensity lattice data, the placement relationship between them in the landmark pattern is determined to be that the longest side of the right-angled isosceles triangle is adjacent to the circle; the landmark pattern is therefore determined to be composed of a right-angled isosceles triangle and a circle, with the longest side of the triangle adjacent to the circle.
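  • The placement-relationship check of the example above can be sketched as follows, assuming the right-angled isosceles triangle and the circle have already been matched in the light intensity lattice data and are described by vertices and by centre/radius in pixel coordinates; the tolerance and coordinates are illustrative assumptions.

```python
import math
from itertools import combinations

def longest_side(vertices):
    """Return the endpoints of the longest side of a triangle."""
    return max(combinations(vertices, 2),
               key=lambda pair: math.dist(pair[0], pair[1]))

def point_to_segment_distance(p, a, b):
    ax, ay = a; bx, by = b; px, py = p
    seg = (bx - ax, by - ay)
    seg_len2 = seg[0] ** 2 + seg[1] ** 2
    if seg_len2 == 0:
        return math.dist(p, a)
    t = max(0.0, min(1.0, ((px - ax) * seg[0] + (py - ay) * seg[1]) / seg_len2))
    closest = (ax + t * seg[0], ay + t * seg[1])
    return math.dist(p, closest)

def triangle_adjacent_to_circle(tri_vertices, circle_center, circle_radius, tol=5.0):
    # placement relationship: longest side of the triangle lies next to the circle
    a, b = longest_side(tri_vertices)
    gap = point_to_segment_distance(circle_center, a, b) - circle_radius
    return gap <= tol          # adjacency within a pixel tolerance

triangle = [(0, 0), (40, 0), (0, 40)]          # right-angled isosceles triangle (pixels)
print(triangle_adjacent_to_circle(triangle, circle_center=(30, 30), circle_radius=12))  # -> True
```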
  • the method of determining the pattern area corresponding to the landmark pattern in the light intensity lattice data may also adopt the solution of steps S123-S125, that is, converting the landmark pattern and the light intensity lattice data into the same coordinate system (such as a map coordinate system or an image coordinate system) and performing similarity matching, so as to determine the pattern area in the light intensity lattice data that corresponds to the landmark pattern.
  • step S123 the pattern area to be determined is extracted from the light intensity lattice data.
  • step S124 it is judged whether the similarity between the landmark pattern and the to-be-determined pattern area in the same coordinate system meets the matching condition; if so, step S125 is executed, that is, it is determined that the light intensity lattice data contains the pattern area corresponding to the landmark pattern; if not, step S110 is performed again to acquire new light intensity lattice data.
  • the mobile robot pre-stores the physical size of the landmark pattern, and acquires the shooting parameters of the photosensitive device when shooting the light intensity lattice data, such as the flight time of the single-point beam/structured light, or the optical focal length.
  • the mobile robot extracts the pattern areas to be determined in the light intensity lattice data according to the preset mutation conditions or pattern screening conditions, and uses the above data to convert each pattern area to the world coordinate system, or to convert the landmark pattern to the image coordinate system constructed based on each pattern area; similarity matching is then performed under the same coordinate system to determine the pattern area that contains the corresponding landmark pattern in the light intensity lattice data.
  • FIG. 10A shows the light intensity lattice data obtained by the mobile robot.
  • the landmark pattern is, for example, a pattern formed by splicing a plurality of rectangles with different widths.
  • the mobile robot pre-stores at least one of the physical dimensions of the widths of the rectangles, the physical dimensions of the intervals between the rectangles, and the arrangement order of the rectangles of each width.
  • the mobile robot extracts the pattern areas to be determined in the light intensity lattice data according to the preset mutation conditions or pattern screening conditions, including: extracting first line segment areas composed of pixels whose light intensity values are greater than a preset light intensity threshold, and extracting second line segment areas spaced between the first line segment areas in the light intensity lattice data, wherein both the first line segment areas and the second line segment areas belong to the pattern areas.
  • the mobile robot also obtains the beam flight time corresponding to the currently captured light intensity lattice data, and uses the flight time to calculate the distance D and the angle θ between the mobile robot and the object-surface measurement point that reflects the beam; together with the internal parameters of the light sensing device (such as pixel spacing), it builds the mapping relationship between the pre-stored landmark pattern and the extracted pattern areas, so as to map the physical line segments represented by the physical dimensions in the landmark pattern onto the one-dimensional coordinate axis where the pattern areas are located, or to map each extracted pattern area onto the one-dimensional coordinate axis of the physical line segments; by counting the positional deviations between each physical line segment and each pattern area, it is determined whether the light intensity lattice data contains a landmark pattern.
  • an example of a method for calculating the positional deviation between each physical line segment and each pattern area includes: according to the positional relationship between each first line segment area and each second line segment area, performing statistical calculations such as mean square error/mean value between the two ends of each line segment area and the two ends of each physical line segment; if the obtained statistical result satisfies the preset matching condition, it is determined that the light intensity lattice data contains the pattern area corresponding to the landmark pattern, otherwise it is determined that the light intensity lattice data does not contain a landmark pattern.
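  • a minimal sketch of the segment extraction and endpoint-deviation statistic described above follows; the threshold-based run extraction and the mean-square criterion are assumptions consistent with, but not copied from, this description.

```python
import numpy as np

def extract_segments(intensity, threshold):
    """Return inclusive (start, end) index pairs of consecutive pixels whose
    light intensity exceeds the threshold (the 'first line segment areas');
    the gaps between them are the 'second line segment areas'."""
    above = np.r_[False, np.asarray(intensity) > threshold, False]
    edges = np.flatnonzero(np.diff(above.astype(int)))
    return list(zip(edges[0::2], edges[1::2] - 1))

def mean_square_deviation(segment_ends, physical_ends):
    """Mean squared deviation between the measured segment endpoints and the
    physical line-segment endpoints mapped onto the same 1-D axis."""
    s = np.asarray(segment_ends, dtype=float).ravel()
    p = np.asarray(physical_ends, dtype=float).ravel()
    n = min(len(s), len(p))
    return float(np.mean((s[:n] - p[:n]) ** 2))

# usage sketch: the lattice is taken to contain the landmark pattern when the
# deviation statistic stays below a preset matching threshold
# segments = extract_segments(light_intensity_lattice, threshold=120)
# deviation = mean_square_deviation(segments, mapped_physical_segments)
# contains_landmark = deviation < preset_matching_threshold
```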
  • FIG. 10B is the light intensity lattice data obtained by the mobile robot.
  • the landmark pattern is, for example, a pattern formed by splicing a plurality of rectangles with different widths.
  • the mobile robot pre-stores at least one of the physical dimensions of the width and length of each rectangle, the physical dimensions of the intervals between the rectangles, the arrangement order of the rectangles of each width, and the like.
  • the mobile robot extracts the to-be-determined pattern area in the light intensity lattice data according to the preset mutation conditions or pattern screening conditions, including: a rectangular area composed of pixels whose light intensity value is greater than the preset light intensity threshold, wherein the rectangular area belongs to the pattern area.
  • the mobile robot also obtains the object-side focal length and the image-side focal length corresponding to the currently captured light intensity lattice data, as well as the internal parameters of the light sensing device (such as the image-side focal length, the object-side focal length, the viewing angle range, etc.), to build the mapping relationship between the pre-stored landmark patterns and the extracted pattern areas, so as to map the landmark pattern represented by the preset physical sizes to the two-dimensional image coordinate system where the pattern area is located, or to map each extracted pattern area to the two-dimensional coordinate system of the landmark pattern; by counting the positional deviations between each rectangle in the landmark pattern and each pattern area, it is determined whether the light intensity lattice data contains the landmark pattern.
  • an example of a method for calculating the positional deviation between each rectangle and each pattern area includes: according to the positional relationship between the rectangles, performing distance-based statistical calculations such as variance or mean square error between the endpoint coordinates of each pattern area and the endpoint coordinates of each rectangle; if the obtained statistical result satisfies the preset matching condition, it is determined that the light intensity lattice data contains the pattern area corresponding to the landmark pattern; otherwise, it is determined that the light intensity lattice data does not contain the pattern area corresponding to the landmark pattern.
  • the matching condition is determined based on a statistical method. For example, if variance is used for distance statistics, the matching condition is that the deviation between the variance statistical result of the distances between the coordinates of the endpoints and the mean value is less than a preset deviation threshold.
  • the physical sizes of several landmark patterns are selected from the various landmark patterns for the above processing; if it is determined from the statistical results that a pattern area to be determined has the minimum position deviation and meets the matching condition, it is determined that the light intensity lattice data includes a pattern area corresponding to that landmark pattern. Examples of selecting the physical sizes of several landmark patterns include: selecting all landmark patterns, or selecting the landmark patterns that include the corresponding geometric features/pattern templates based on the geometric features/pattern templates matching the extracted pattern area, etc.
  • the mobile robot uses the physical size of the selected at least one landmark pattern to perform coordinate conversion and position deviation statistics one by one, and judges whether the minimum position deviation value among the statistical results satisfies the matching condition; if so, it is determined that the light intensity lattice data includes the pattern area corresponding to the landmark pattern and step S130 is performed; otherwise, step S110 is performed to acquire new light intensity lattice data.
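  • the selection by minimum position deviation just described could be sketched as below; `pattern_deviation` stands in for a deviation statistic such as the endpoint statistics above, and the function name and threshold are illustrative assumptions.

```python
def select_landmark(pattern_area, candidate_landmarks, pattern_deviation,
                    deviation_threshold):
    """Compute the position deviation of the extracted pattern area against
    the physical size of each candidate landmark pattern and keep the best
    candidate only if it also satisfies the matching condition."""
    scored = [(pattern_deviation(pattern_area, landmark), landmark)
              for landmark in candidate_landmarks]
    best_deviation, best_landmark = min(scored, key=lambda item: item[0])
    if best_deviation <= deviation_threshold:   # matching condition met
        return best_landmark                    # proceed to step S130
    return None                                 # re-acquire data (step S110)
```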
  • the mobile robot selects as much second landmark data as possible from the landmark data located in each map region based on the map regions pre-divided in the map data to combine into first landmark data uniquely representing the corresponding map region.
  • examples of the way of dividing the map areas include any one or more of the following: manual division; division such that the distance between the center positions of the map areas is not less than a preset distance threshold; division according to the degree of overlap and/or intersection of the historical paths/designated paths of the mobile robot; etc.
  • the method of selecting the second landmark data includes at least one of the following: selecting second landmark data with a unique description; performing statistics on the second landmark data that can be located in a single light intensity lattice data and selecting the second landmark data within the statistical peak interval.
  • the manner of selecting the second landmark data also includes, for example, combining the orientation of each map area in the map data and the second landmark data in the map area to obtain first landmark data uniquely representing the corresponding map area.
  • the mobile robot performs statistics on each second landmark data and its corresponding light intensity lattice data in the map area to filter out multiple second landmark data that can be acquired by a single light intensity lattice data, which is beneficial in that, when the mobile robot moves to the vicinity of the corresponding position, it can obtain as complete a landmark pattern as possible in one light intensity lattice data; the geometric shapes or geometric-shape combinations formed in the map data by the multiple second landmark data filtered out in different map areas are used to obtain a unique landmark pattern that identifies the map area (a sketch of this per-region filtering follows the FIG. 11 example below).
  • FIG. 11 shows a landmark pattern constructed by a combination of triangles for each second landmark data (represented by a star) that has been statistically filtered in two map areas.
  • the landmark pattern is composed of at least one triangle, and the shapes of the triangles and their positional relationships differ between landmark patterns; the mobile robot uses the second landmark data to construct different triangles and their positional relationships in the two map areas respectively, and determines that the screened-out combination of second landmark data of each map area is used as the combination of first landmark data corresponding to that map area, that is, the combination of first landmark data formed by the measurement points in the corresponding physical space corresponds to the landmark pattern in the physical space; the remaining second landmark data (represented by circles) are still landmark data for positioning.
  • the mobile robot encodes each map area, and counts each second landmark data and its corresponding light intensity lattice data in the corresponding map area, so as to filter out the second landmark data that is easier to be acquired by a single light intensity lattice data.
  • the code of the map area and the filtered second landmark data are combined to form a unique landmark pattern that identifies the map area.
  • the landmark pattern thus obtained is a landmark pattern limited to a map area, which makes it possible that the same landmark pattern exists in different map areas.
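  • one hedged reading of the per-region filtering mentioned above is sketched below: for each map area, count in how many single light-intensity frames each second landmark datum was observed and keep those inside the statistical peak interval as the area's candidate combination of first landmark data; the observation structure and the peak-interval rule are assumptions for illustration.

```python
from collections import Counter

def filter_second_landmarks(region_observations, peak_fraction=0.8):
    """region_observations: {region_id: [frame_landmark_ids, ...]}, where each
    entry lists the second-landmark ids seen in one light intensity lattice
    frame captured inside that map area.  Returns, per area, the landmark ids
    whose observation count lies within the statistical peak interval (here:
    at least peak_fraction of the maximum count)."""
    selected = {}
    for region_id, frames in region_observations.items():
        counts = Counter(lm for frame in frames for lm in frame)
        if not counts:
            selected[region_id] = []
            continue
        peak = max(counts.values())
        selected[region_id] = [lm for lm, c in counts.items()
                               if c >= peak_fraction * peak]
    return selected
```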
  • it is also possible, when selecting the first landmark data, to tolerate different map areas corresponding to the same landmark pattern according to the distance between the map areas, for example non-adjacent map areas; or, the second landmark data of each selected map area may be screened a second time by means of geometric shapes (or combinations of geometric shapes), and/or by retaining the second landmark data with a unique description, so as to obtain the first landmark data or the combination of first landmark data describing the landmark pattern.
  • when the landmark patterns are distributed in different positions of the workplace, the mobile robot records landmarks located in various areas of the workplace as landmark patterns in the map data during map construction or map update, so as to correspond to different work modes.
  • the landmark patterns determined based on the unique second landmark data in the map data or the combination of the second landmark data can make the mobile robot more adaptable to unpredictable situations.
  • for example, when a commercial cleaning robot leaves the factory, it cannot be predicted whether its workplace will be a supermarket, a waiting hall, or a shopping mall; when the commercial cleaning robot builds the map data of its workplace, at least one landmark pattern can be set by the mobile robot and distributed in the workplace according to the actual situation of the workplace.
  • in step S130, the landmark position information of the landmark pattern in the map data is determined according to at least one kind of position information that has been recorded in the map data; wherein, the at least one kind of position information that has been recorded corresponds to the landmark data in the light intensity lattice data and/or to the positioning information when the mobile robot acquires the light intensity lattice data.
  • the new data added to the map data by the mobile robot during the movement is determined on the basis of the various data recorded in the map data, and the new data includes the landmark location information of the landmark pattern in the map data, the shooting position when the mobile robot obtains the light intensity lattice data, the light intensity lattice data, the description information describing the landmark pattern in the light intensity lattice data, etc.
  • each landmark data recorded in the map data is regarded as the aforementioned second landmark data.
  • the mobile robot constructs the mapping relationship between the coordinate system (including the coordinate axis) of the light intensity lattice data and the world coordinate system based on at least one kind of position information recorded in the map data, and uses the mapping relationship to establish, in the map data, the location information of the pattern area corresponding to the landmark pattern in the light intensity lattice data, that is, to determine the landmark location information of the corresponding landmark pattern in the map data.
  • the location information is determined based on the positioning information of the mobile robot when performing step S110 and/or the second landmark data in the light intensity lattice data acquired when performing step S110.
  • the mapping relationship is represented by at least one of matrix parameters, scale coefficients between distances, and coordinate offsets.
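  • as a toy illustration of a mapping relationship expressed by matrix parameters, a scale coefficient, and a coordinate offset, the following sketch maps a pixel position from the light intensity lattice coordinate system into map coordinates with a similarity transform; all parameter values are placeholders, not values prescribed by this application.

```python
import numpy as np

def pixel_to_map(pixel_xy, scale, offset_xy, rotation_rad=0.0):
    """Map a pixel position from the light intensity lattice coordinate
    system into the map (world) coordinate system using a similarity
    transform: rotation (matrix parameters), uniform scale coefficient,
    and coordinate offset."""
    c, s = np.cos(rotation_rad), np.sin(rotation_rad)
    rotation = np.array([[c, -s], [s, c]])
    return scale * (rotation @ np.asarray(pixel_xy, dtype=float)) \
        + np.asarray(offset_xy, dtype=float)

# e.g. pixel_to_map((120, 44), scale=0.005, offset_xy=(2.3, -1.1), rotation_rad=0.2)
```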
  • the environmental measurement data is the data measured by the mobile robot in time and/or space to reflect the distance, orientation, etc. between the mobile robot and obstacles in the surrounding environment, and/or to reflect the distance moved by the mobile robot in the workplace, its heading angle, and other data.
  • the environmental measurement data includes, for example but not limited to, one or more of the following: light intensity lattice data captured at the first position and the second position respectively, depth data captured at the first position and the second position respectively, movement data (e.g., movement distance and/or heading angle) from the first position to the second position, and the like.
  • examples of the first position include the shooting position information/measurement position information obtained by querying the map data, or a certain shooting position when light intensity lattice data is obtained successively during the movement of the mobile robot.
  • the second position is the shooting position information corresponding to the execution of step S110.
  • taking the positioning information corresponding to the first position and the second position recorded in the map data as an example, the mobile robot obtains the physical distance between the first position and the second position, determines the locations of the two positions, and maps the pixel positions of the image area into the map data to obtain the landmark position information of the landmark pattern.
  • in another example, the mobile robot extracts landmark features corresponding to the second landmark data from the light intensity lattice data captured at the second position, and uses the mapping relationship constructed from the extracted landmark features and their landmark position information in the map data, together with the pixel position deviation between the landmark features and the pattern features corresponding to the landmark pattern, to map the pixel positions of the image area into the map data and obtain the landmark location information of the landmark pattern.
  • in yet another example, the mobile robot also pre-stores the physical size of the landmark pattern, and the way for the mobile robot to mark the landmark pattern in the map data includes step S131: by constructing the mapping relationship between the landmark pattern and the pattern area in the light intensity lattice data acquired at the second position (i.e., the positioning information), the relative positional relationship between the landmark pattern and the location information, in the map data, of the mobile robot and/or of the landmark features in the light intensity lattice data is determined, and the landmark location information of the landmark pattern is marked in the map data.
  • the mapping relationship is determined based on the pose of the mobile robot relative to the landmark pattern at the second position and the photographing parameters when photographing by the light sensing device.
  • the mapping relationship includes but is not limited to at least one of the following: based on the mapping relationship between the contour in the landmark pattern and the contour of the corresponding pattern area in the light intensity lattice data; based on the edge/corner on each geometric figure in the landmark pattern The mapping relationship with the geometric features in the corresponding pattern area in the light intensity lattice data; based on the mapping relationship between the measurement points in the landmark pattern and the pixel blocks in the corresponding pattern area in the light intensity lattice data.
  • the measurement points include measurement points that can be identified as landmarks, or measurement points extracted according to preset rules, etc.; the measurement points extracted according to preset rules include, for example, measurement points determined at equal intervals on the contour and/or measurement points determined based on extractable contour feature lines/points.
  • the shooting parameters of the light sensing device include parameters adjusted in the light sensing device for obtaining the light intensity lattice data, such as the rotation angle, etc.; according to the type of the light sensing device, the shooting parameters also include, for example, the beam flight duration, the focal length, etc.
  • the parameters of the light-sensing device also include fixed parameters that reflect the inherent characteristics of the light-sensing device, such as the viewing angle range, the focal length of the image square, and the pixel position of the optical axis of the light-sensing device in the light intensity lattice data.
  • the mobile robot also obtains the shooting parameters of the light sensing device when performing step S110, uses the shooting parameters to construct the coordinate position of each measurement point i on the landmark pattern in the camera coordinate system, and uses the conversion relationship between the camera coordinate system and the coordinate system where the light intensity lattice data is located to map the measurement point i to the pixel position Pc_i' in the coordinate system of the light intensity lattice data; wherein, the pixel position deviation between Pc_i' and the landmark feature corresponding to measurement point i in the acquired light intensity lattice data corresponds to the relative positional relationship between the mobile robot and the landmark pattern.
  • the pixel position deviation, the second position of the mobile robot, and/or the location information in the map data of each landmark feature in the light intensity lattice data are used to mark the landmark location information of the measurement point i in the map data.
  • the landmark pattern is marked in the map data.
  • the map data includes first landmark data corresponding to the landmark pattern.
  • where, for the one-dimensional case: u_i is the coordinate value of measurement point i mapped onto the coordinate axis of the light intensity lattice data; u_i' is the coordinate value, on that coordinate axis, of the pixel position corresponding to measurement point i in the pattern area of the light intensity lattice data; f_x is the measurement distance of the light sensing device; c_x is the position of the optical center of the light sensing device on the coordinate axis; (x_i, y_i) is the coordinate position of measurement point i on the landmark pattern, where y_i is a constant.
  • where, for the two-dimensional case: N is the number of measurement points; u_i is the coordinate value of measurement point i mapped into the coordinate system where the light intensity lattice data is located; u_i' is the coordinate value, in that coordinate system, of the pixel position of the corresponding measurement point i in the pattern area; f_x and f_y are the focal lengths of the light sensing device; c_x and c_y are the positions of the optical center of the light sensing device in that coordinate system; (x_i, y_i) is the coordinate position of measurement point i on the landmark pattern.
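  • the projection formulas to which the above symbols refer are not reproduced in this text; under a standard pinhole-projection reading of the listed symbols (an assumption, not this application's exact formula), the one-dimensional mapping and the deviation statistic would take roughly the following form, with the two-dimensional case analogous using f_y and c_y on the second image coordinate:

```latex
% one-dimensional projection of measurement point i onto the lattice axis
u_i = f_x \, \frac{x_i}{y_i} + c_x
% deviation statistic over the N measurement points; the matching condition
% is that E stays below a preset deviation threshold
E = \frac{1}{N} \sum_{i=1}^{N} \left( u_i - u_i' \right)^{2}
```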
  • the relative positional relationship and the positioning information are used to determine the landmark position of each measurement point in the map data. If the landmark position information in the map data of any measurement point in the landmark pattern is known, the relative positional relationship and that landmark position information are used to determine the landmark position information of the other measurement points in the map data. On this basis, if the physical size of the landmark pattern is also known, the landmark location information already determined for some of the measurement points in the map data and the physical size are used to determine the landmark location information of the remaining measurement points in the map data, etc.
  • when the landmark patterns are distributed in different positions of the workplace, using the above steps, the mobile robot records the identified landmark patterns located at various positions in the workplace in the map data during the construction of the map or the update of the map, so as to correspond to different working modes.
  • the landmark pattern is determined from unique second landmark data or a combination of second landmark data that the mobile robot can recognize within a range of locations in the workplace.
  • the mobile robot extracts landmark features from the light intensity lattice data acquired along the way, and determines the landmark location information of the landmark features in the map data according to the positioning information of the mobile robot, so as to form second landmark data.
  • the specific method is exemplified by, but not limited to, using positioning schemes such as publication numbers CN107907131A, CN111220148A, or CN109643127A to determine the positioning information of the second position of the mobile robot in the map data, and determining the relative positioning with the help of the landmark features determined during positioning.
  • step S210 is then executed: unique second landmark data or combinations of second landmark data are selected from the map data to form the first landmark data or combinations of first landmark data, where each first landmark data or combination of first landmark data corresponds to a landmark pattern; in this way, a plurality of landmark patterns distributed in the map data are obtained.
  • each landmark pattern is arranged on a path corresponding to the working mode.
  • stickers with landmark patterns are manually affixed to the corresponding paths in advance.
  • the landmark patterns located on the corresponding paths are associated according to the paths included in the working mode.
  • the present application provides a method for establishing an association between a work mode and a landmark pattern, which includes: acquiring a landmark pattern corresponding to landmark location information in map data; and acquiring at least one path passing through the landmark location information of the landmark pattern and its behavior operations; An association relationship between the landmark pattern and each working mode is established and saved, wherein the working mode includes one of the paths and its behavior operations.
  • the association relationship can be constructed by means of human-computer interaction.
  • the map data used by the mobile robot to navigate and move includes landmark data
  • the mobile robot displays the map data to the user in the form of a map interface, and the user generates a working mode by setting paths and behavior operations on the map interface, thereby establishing an association relationship between the landmark pattern and at least one working mode for the mobile robot.
  • the user performs human-computer interaction by using an interaction device on the mobile robot; or the user performs human-computer interaction with the mobile robot through a terminal device.
  • the method of constructing the association relationship by means of human-computer interaction includes: displaying a map interface including location information of at least one landmark; and acquiring a working mode input by the user; wherein, the path in the working mode Passing through at least one of the landmark location information.
  • the mobile robot also includes a wireless communication module.
  • the user establishes a communication connection with the mobile robot by configuring the application program (APP) in the terminal device, and the terminal device obtains the map data and displays it on the map interface.
  • the terminal device also uses landmark labels such as icons and symbols to mark the landmark location information corresponding to the landmark pattern in the map interface.
  • the user operates the terminal device to construct/confirm the path including the landmark label on the map interface, and set the behavior operation corresponding to the path. Among them, the path confirmed by the user's operation and its behavior operation constitute a working mode, thereby establishing the relationship between the landmark pattern and the working mode.
  • the association relationship is fed back to the mobile robot through the terminal device.
  • the mobile robot also displays the landmark pattern and its location in the map data to the user through the map interface when recognizing the landmark pattern.
  • the user operates the terminal device to construct a path with the location of at least one landmark pattern as a starting point/terminal and at least one behavior operation thereof, thereby constructing an association relationship between the landmark pattern and at least one working mode.
  • a map interface including at least one route and the location information of at least one landmark it passes through is displayed; the behavior operation information corresponding to each route input by the user is obtained; and a working mode including a route and its corresponding behavior operation is generated.
  • the map interface mentioned in the previous specific example also displays at least one path passing through the landmark pattern, and the user inputs behavior operation information by selecting the corresponding path, thereby determining the working mode of the mobile robot on the corresponding path.
  • the selected path may be input through human-computer interaction or obtained through machine learning.
  • the mobile robot uses machine learning to obtain a path in a working mode, or a path and a behavior operation in the working mode, to establish an association relationship between a corresponding landmark pattern and at least one working mode.
  • the mobile robot selects, from each historical path and its corresponding at least one behavior operation, a plurality of historical paths that include the location of the landmark pattern and their respective behavior operations, and sets the association between each working mode and the landmark pattern according to the selected historical paths and the combinations of their respective behavior operations.
  • the method of selecting multiple historical paths passing through the landmark pattern and their behavior operations includes: selecting by counting the historical paths passing through the landmark pattern and their behavior operations, and/or selecting multiple historical paths and their behavior operations from each historical path passing through at least one landmark pattern.
  • the historical path passing between at least one landmark pattern includes a circular path formed by at least one landmark pattern, and/or a non-circular path formed by using at least two different landmark patterns, and the like.
  • an example of the method of selecting by counting the historical paths passing through the landmark pattern and their behavior operations includes: counting the repetition times of the multiple historical paths passing through the landmark pattern, and selecting the historical paths whose repetition times are greater than a preset repetition-times threshold together with their behavior operations; and/or removing historical paths generated when no behavior operations were performed.
  • the mobile robot removes the historical paths that only pass through the landmark pattern during movement, and establishes an association relationship between the working mode and the landmark pattern by counting the historical paths containing valid behavior operations.
  • examples of ways of selecting a plurality of historical paths and their behavior operations from each historical path passing through at least one landmark pattern include: selecting a plurality of historical paths and their behavior operations from each historical path passing through at least two landmark patterns; and/or selecting multiple historical paths and their behavior operations from historical paths containing circular path segments that pass through a landmark pattern. For example, taking any one or two landmark patterns located in adjacent work areas and/or a single work area as the starting point and the end point, the path segments corresponding to the starting point and the end point and their corresponding behavior operations are intercepted from the historical paths passing through the one or two landmark patterns; the selected path segment contains a circular path or a non-circular path.
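  • the repetition-count based selection described above could be sketched as follows; treating each travelled path as a hashable key and an empty operation list as "no behavior operation performed" are illustrative assumptions.

```python
from collections import Counter

def select_historical_paths(history, repeat_threshold):
    """history: list of (path_key, behavior_ops) records, where path_key is a
    hashable identifier of a travelled path passing through the landmark
    pattern and behavior_ops lists the behavior operations performed on it
    (empty if none were performed).  Returns the paths that were repeated
    more than the threshold and carried at least one valid behavior
    operation, together with those operations."""
    # drop records where the robot merely passed through without operating
    effective = [(path, ops) for path, ops in history if ops]
    repeats = Counter(path for path, _ in effective)
    selected = {}
    for path_key, ops in effective:
        if repeats[path_key] > repeat_threshold:
            selected.setdefault(path_key, set()).update(ops)
    return selected   # {path_key: {behavior operations}} -> candidate working modes
```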
  • FIG. 12 shows a plurality of historical paths recorded by the mobile robot, at least some of the historical paths correspond to at least one behavioral operation, and the number of repetitions of the historical paths L1, L2 and L3 is greater than the preset repetition number threshold according to statistics.
  • the mobile robot selects the historical path L1 according to the above selection method with any landmark pattern in Pic1 and Pic2 as the starting point and another landmark pattern as the terminal.
  • the behavior operations used in each historical path where the non-ring path segment L1' is located include cleaning operations.
  • the used behavior operations in each historical path where the annular path segment L2' is located include a sweeping operation or a mopping operation, then the working mode associated with the landmark pattern Pic1 includes: performing a sweeping operation along the annular path segment L2', The mopping operation is performed along the circular path segment L2', and the sweeping operation is performed along the non-circular path segment L1'; the working mode associated with the landmark pattern Pic2 includes performing the cleaning operation along the non-circular path segment L1'.
  • the mobile robot triggers the operation of learning the corresponding working mode when a landmark pattern is identified, so as to construct the association relationship.
  • the mobile robot establishes at least one working mode associated with the landmark pattern when it has identified the landmark pattern and is in a gear where the association relationship can be established.
  • the mobile robot determines whether it is in a gear that can establish the associated relationship according to the gear of a switchable gear selector.
  • the gear selector is provided with a plurality of gears, which include at least two of the learning gears of the working mode, the construction map gears, the manual working gears, the automatic working gears, and the like.
  • the map marking scheme exemplified in the aforementioned steps S110-S130, S210, etc. can be executed by the mobile robot in a learning gear such as a working mode, a map building gear, or a manual working gear.
  • taking the identified landmark pattern as an example, the mobile robot uses the location of the landmark pattern as the starting point, learns the path along which at least one behavior operation is performed until a landmark pattern is recognized again, and sets the location of the re-identified landmark pattern as the end point of the path.
  • the re-identified landmark pattern may be the same as or different from the landmark pattern at the starting point, so that the working mode corresponding to the landmark pattern includes the learned path and at least one behavior operation thereof.
  • the mobile robot takes the location of the identified landmark pattern as the starting point, and learns the path along which it moves under manual operation until a landmark pattern is recognized again; the location of the re-identified landmark pattern is set as the end point of the path.
  • the re-identified landmark pattern may be the same as or different from the landmark pattern at the starting point, and through the human-computer interaction between the mobile robot and the user, the user inputs at least one behavior operation corresponding to the path; thus, the mobile robot determines at least one working mode corresponding to the landmark pattern, wherein the user can input one or two behavior operations belonging to one working mode.
  • the mobile robot takes the location of the identified landmark pattern as the starting point, and asks the user whether a new working mode is to be added; when receiving an input operation in which the user selects to add one, the mobile robot learns, from the starting point, the path along which at least one behavior operation is performed until a landmark pattern is recognized again, and the location of the re-identified landmark pattern is set as the end point of the path.
  • the re-identified landmark pattern may be the same as or different from the landmark pattern at the starting point, so that the working mode corresponding to the landmark pattern includes the learned path and at least one behavior operation thereof.
  • providing the switchable gear selector on the mobile robot is only an example.
  • the gear selector of the mobile robot can also be provided on the terminal device side, with the corresponding gear sent through wireless communication.
  • the gear selector may be a hardware and software structure including mechanical buttons/knobs and circuits, or a hardware and software structure that enables the mobile robot to determine the gear by detecting user operation instructions by means of a human-computer interaction device.
  • the relationship between the landmark pattern and the working state may or may not consider the starting and ending positions of its path; in other words, the path in the working mode may be directed or undirected.
  • the paths in the corresponding multiple working modes can be a path starting from the landmark pattern and a path ending with the landmark pattern, as long as the paths and behavior operations in the working modes do not completely overlap.
  • its corresponding at least one working mode only includes a path starting from the landmark pattern.
  • the mobile robot also interacts with peripheral devices, wherein a peripheral device refers to a device used in conjunction with the mobile robot to supply energy, pick up/place items, release/recycle waste, etc., and is provided with a positioning component used to help the mobile robot accurately determine the relative positional relationship between itself and the peripheral device.
  • the peripheral devices include parking devices (such as charging piles or garbage collection devices, etc.), or mobile replenishment/recycling devices, and the like.
  • the positioning component is provided with a landmark pattern, and the geometric feature/pattern template corresponding to the landmark pattern is pre-stored in the mobile robot; the corresponding working mode is related to the functions that the peripheral device can provide and to the event interruption mechanism generated inside the mobile robot, and the landmark pattern is pre-associated with the working mode.
  • the mobile robot switches from one gear to another gear corresponding to the event interruption mechanism based on the internally generated event interruption mechanism, and when it recognizes the landmark pattern on the corresponding peripheral device, the associated working mode includes: adjusting the positional relationship between the mobile robot and the positioning component and establishing the interface connection with the peripheral device.
  • an example of the event interruption mechanism includes a mechanism for generating an interruption based on at least one of the following events: an event generated by the power management system of the mobile robot when the detected power level is lower than a preset power level threshold, and/or an event generated by the garbage monitoring system of the mobile robot when the collected garbage volume reaches a preset garbage volume threshold.
  • the preset designated landmark pattern is a preset landmark pattern identified as a parking device (such as a charging pile or a garbage collection device, etc.) of the mobile robot.
  • a pattern template or geometric shape corresponding to the landmark pattern of the parking device and its positional relationship are preset in the mobile robot.
  • when a corresponding event interruption mechanism is generated inside the mobile robot (i.e., the mobile robot switches to the gear corresponding to the event interruption mechanism), the navigation movement to find the parking device is performed; when the mobile robot recognizes the corresponding landmark pattern, it is determined that the working mode includes the path from the location where the landmark pattern is recognized to the execution of the homing operation, and the homing operation is performed.
  • the mobile robot predetermines that at least one landmark pattern in the map data is the landmark pattern of the parking device through interaction with the user, and when a corresponding event interruption mechanism is generated inside the mobile robot, the current position is used as the starting point and the corresponding landmark pattern is located The position is the end point, and the navigation movement to find the parking device is performed.
  • the mobile robot recognizes the corresponding landmark pattern, it is determined that the working mode includes the path from the location where the landmark pattern is recognized to the execution of the homing operation and the execution of the homing operation.
  • FIG. 13 is a schematic flowchart of a control method for the mobile robot of the present application , the mobile robot executes the control method shown in FIG. 13 based on the acquired light intensity lattice data.
  • step S310 light intensity lattice data is acquired.
  • the light sensing device provided on the mobile robot is the light sensing device mentioned in the foregoing examples.
  • the light sensing device is a one-dimensional light sensing device that uses the light reflection principle for measurement, or a two-dimensional light sensing device that uses the light reflection principle to perform image imaging; correspondingly, the acquired light intensity lattice data is one-dimensional data. or two-dimensional data.
  • the light sensing device can be fixed on the mobile robot, or can be moved on the mobile robot.
  • the light sensing device is arranged on a driving part, so as to acquire the light intensity lattice data within the corresponding detection range during the movement of the driving part; wherein, the quantity of the acquired light intensity lattice data can be one or more acquired during the movement of the driving part, or the acquired light intensity lattice data is formed by splicing a plurality of light intensity lattice data obtained during the movement of the driving part.
  • the mobile robot further performs the step of obtaining a switchable control gear, so as to perform step S310 according to the obtained switchable control gear. For example, when the mobile robot receives a signal corresponding to the automatic cleaning gear, step S310 is executed. For another example, when the power supply system/garbage collection system in the mobile robot sends out a corresponding event interruption, the navigation movement of homing is automatically performed, and when the mobile robot moves to the vicinity of the corresponding peripheral device, step S310 is performed. In other examples, the mobile robot performs step S310 in real time.
  • the mobile robot can control the driving part to move, or the mobile robot can move as a whole, so that the light sensing device performs step S310 at least once within the corresponding detection range and obtains at least one light intensity lattice data that can be used when the subsequent step S320 is executed.
  • the step S310 includes the step S311.
  • step S311 using map data, the pose information between the mobile robot and its nearby landmark patterns is determined, so as to obtain the light intensity lattice data according to the pose information.
  • the mobile robot determines the current positioning information by using the aforementioned positioning schemes such as publication numbers CN107907131A, CN111220148A, or CN109643127A; in the map data, the first landmark position information of the landmark pattern is queried within a preset radius range centered on the current positioning information of the mobile robot, and the orientation information between the mobile robot and the corresponding first landmark position information is determined; if the orientation information falls within the viewing angle range of the light sensing device, light intensity lattice data is acquired, or the whole mobile robot or the driving component is controlled to move in the corresponding direction according to the orientation information so as to acquire at least one light intensity lattice data during the movement; thereby, the efficiency of acquiring the landmark pattern is improved.
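  • a rough sketch of the nearby-landmark query in step S311 follows: landmark positions within a preset radius of the robot's current pose are examined, and the light intensity lattice data is acquired only when the bearing to a landmark falls inside the sensor's viewing angle (otherwise the robot or driving part would be turned first); the data structures and limits here are assumptions for illustration.

```python
import math

def landmark_in_view(robot_pose, landmark_positions, radius, half_fov_rad):
    """robot_pose: (x, y, heading_rad) in map coordinates.
    Returns (landmark_xy, bearing) for the first landmark lying within
    `radius` of the robot whose bearing relative to the robot's heading is
    inside the viewing angle of the light sensing device, else None."""
    x, y, heading = robot_pose
    for lx, ly in landmark_positions:
        dx, dy = lx - x, ly - y
        if math.hypot(dx, dy) > radius:
            continue
        bearing = math.atan2(dy, dx) - heading
        bearing = math.atan2(math.sin(bearing), math.cos(bearing))  # wrap to [-pi, pi]
        if abs(bearing) <= half_fov_rad:
            return (lx, ly), bearing   # acquire light intensity lattice data
    return None                         # rotate the robot/driving part and retry
```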
  • step S320 a pattern area corresponding to the landmark pattern is extracted from the acquired light intensity lattice data.
  • the execution process of step S320 is the same as or similar to the execution process of step S120 in the foregoing examples, and the extracted landmark pattern is matched with the landmark patterns pre-marked in the map data to confirm that they are consistent.
  • the mobile robot extracts a pattern area corresponding to one of the landmark patterns from the light intensity lattice data acquired in step S310 according to the geometric feature/pattern template representing the landmark pattern in step S120.
  • for example, the mobile robot matches the landmark features extracted from the light intensity lattice data with the landmark features in the first landmark data marked in step S120 or step S210 in the foregoing examples; if the matching degree of the matched landmark features relative to the landmark features in the first landmark data corresponding to a landmark pattern is greater than a preset matching degree threshold, it is determined that the light intensity lattice data includes a pattern area corresponding to the landmark pattern.
  • An example of the matching degree includes at least one of the following: the ratio of the number of matched landmark features to the total number of first landmark data representing the landmark pattern, the difference in number, the similarity, the position error, and the like.
  • the mobile robot extracts landmark features from the light intensity lattice data acquired in step S310, and compares the extracted landmark features with the landmark features in each first landmark data in each pre-marked landmark pattern in the map data Matching is performed, and if the ratio of the number of landmark features matched in one of the landmark patterns to the total number of landmark features in the first landmark data corresponding to a landmark pattern is greater than a preset matching number threshold, then determine the corresponding light intensity lattice The data includes the pattern area corresponding to the landmark pattern; otherwise, step S310 is performed.
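  • the ratio-based matching condition just described can be expressed compactly; the feature representation, the pairwise matcher, and the threshold are assumptions consistent with the description rather than the prescribed implementation.

```python
def contains_landmark(extracted_features, first_landmark_features,
                      match_one, ratio_threshold):
    """extracted_features: landmark features pulled from the current light
    intensity lattice data; first_landmark_features: features of the first
    landmark data describing one marked landmark pattern; match_one(a, b)
    returns True when two features match.  The pattern area is considered
    present when the matched fraction exceeds the preset threshold."""
    if not first_landmark_features:
        return False
    matched = sum(
        any(match_one(extracted, reference) for extracted in extracted_features)
        for reference in first_landmark_features
    )
    return matched / len(first_landmark_features) > ratio_threshold
```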
  • the mobile robot may determine its approximate positioning position in the map data according to the landmark pattern determined in step S320 and the landmark location information in the map data, for example, by determining a position range near the corresponding landmark pattern, and step S340 is executed accordingly.
  • the mobile robot further executes step S330.
  • step S330 obtain environmental measurement data, and use the environmental measurement data and the map data to determine the landmark location information of the landmark pattern identified by the mobile robot in the map data; wherein the environmental measurement data includes a light intensity lattice data.
  • the environmental measurement data is the same as the environmental measurement data mentioned in the foregoing examples, and will not be repeated here.
  • the mobile robot also extracts from the light intensity lattice data at least the landmark features corresponding to the second landmark data in the map data, and combines the landmark location information of each second landmark data in the map data to determine the landmark location information, in the map data, of the landmark pattern recognized by the mobile robot. For example, the mobile robot extracts the landmark features of the first landmark data and the second landmark data from the light intensity lattice data in step S320, matches the landmark features in the extracted second landmark data with the landmark features in the second landmark data marked in the map data near the same landmark pattern, and determines, according to the matching degree, the landmark location information in the map data of the landmark pattern identified in step S320.
  • in another example, the mobile robot extracts each landmark feature of the first landmark data and the second landmark data from the light intensity lattice data in step S320, and matches each extracted landmark feature with the landmark features in the first landmark data corresponding to the same landmark pattern marked in the map data and with the landmark features in the second landmark data in its vicinity; the landmark location information in the map data of the landmark pattern identified in step S320 is determined according to the matching degree.
  • the matching methods in the above examples include calculating the similarity of the descriptors of the landmark features, and even calculating the error of the positional relationship between the landmark features.
  • determine the approximate positioning position of the mobile robot in the map data such as determining its position range near the corresponding landmark pattern, thereby executing the step S340.
  • the mobile robot performs a positioning operation in real time during the movement to determine its positioning information in the map data, determines the landmark position information in the map data of the identified landmark patterns located in its vicinity according to the positioning information, and executes Step S340.
  • in step S340, control information of the mobile robot is generated based on the work mode corresponding to the determined landmark pattern; wherein, the control information is used to control the mobile robot to perform the corresponding behavior operation along the path that includes the landmark location information according to the work mode.
  • the control information includes control information for controlling navigation and movement of the mobile device and control information corresponding to at least one behavior operation.
  • the behavior operation includes the mobile robot performing a movement operation related to the location of the landmark pattern, and the behavior operation performed during/at the end of the navigation movement.
  • the behavioral operation includes performing a homing movement operation when the mobile robot is substantially facing the landmark pattern on the docking device.
  • the behavioral operations include: cleaning and/or mopping the floor.
  • the behavior operations include: picking up and/or releasing goods, and the like.
  • the behavior operation and movement may be performed synchronously or in a sequential order. For example, pick up after navigating the route to the destination.
  • each action may be executed synchronously or in a sequential order.
  • in one example in which the behavioral operations include cleaning and mopping operations, the control information includes cleaning control information and mopping control information, which are synchronously output to the cleaning system and the mopping system of the mobile robot respectively.
  • in another example in which the behavioral operations include cleaning and mopping operations, the control information includes cleaning control information and mopping control information, and the corresponding control information is output to the cleaning system and the mopping system of the mobile robot in a time-sharing manner, in the order of cleaning first and then mopping the floor.
  • the number of working modes corresponding to the landmark pattern may be one or more. In some examples, if the number of working modes corresponding to the landmark pattern is one, the mobile robot generates corresponding control information according to the path and behavior operation in the corresponding working mode. In other examples, if the number of working modes corresponding to the landmark pattern is multiple, the step S340 includes the step S341.
  • step S341 corresponding control information is generated based on the working mode selected by the user.
  • using the human-computer interaction device of the mobile robot or the terminal equipment that communicates with the mobile robot, the mobile robot displays each working mode corresponding to the landmark pattern to the user and selects one of the working modes based on the user's trigger operation.
  • the corresponding working mode is determined based on a user's trigger operation on a terminal device that shares map data with the mobile robot.
  • the mobile robot and the terminal device share the map data marked with the landmark pattern.
  • when the mobile robot recognizes that the landmark pattern corresponds to multiple working modes, the mobile robot sends to the terminal device the information that enables the terminal device to display the multiple working modes corresponding to the landmark pattern; the information includes the landmark location information of the corresponding landmark pattern in the map data, or each working mode, etc.
  • the terminal device displays the multiple corresponding working modes according to the shared map data, and feeds back the working mode selected by the user's triggering operation to the mobile robot; according to the received working mode, the mobile robot generates the control information for moving along the path and sends it to the mobile device of the mobile robot, and, according to the timing of the execution of the behavior operations, respectively sends the control information used to control one or more behavior operations to the corresponding behavior operation devices of the mobile robot.
  • the human-computer interaction device includes a display screen and an instruction input component; the display screen and the instruction input component can be integrated hardware such as a touch screen, or discrete hardware such as an LED screen and buttons.
  • the mobile robot displays multiple working modes corresponding to the landmark pattern to the user, and generates each control information in the corresponding working mode according to the triggering operation of one of the working modes triggered by the user.
  • an example of the way of displaying multiple working modes in any of the above examples includes: displaying the combinations of behavior operations and paths in each working mode to the user in the displayed map data by at least one of different colors, different line styles, split-screen display, and other ways.
  • in the case of determining the working mode, the mobile robot starts to transfer to the corresponding working mode from its current position, and executes the control of the corresponding movement and behavior operations.
  • the mobile robot generates control information for guiding the mobile robot to move along the path in the working mode by determining the relative positional relationship between the mobile robot and the corresponding landmark pattern, and generates behavior operations corresponding to the working mode. control information.
  • the mobile robot moves to the path in the corresponding working mode according to the relative positional relationship between the mobile robot and the landmark pattern, moves along the path, and performs the corresponding behavior operation.
  • the method of determining the relative positional relationship includes: determining the relative positional relationship between the mobile robot and the corresponding landmark pattern by using at least one type of positional information mapped into the map data, which will be described in detail in step S342 below.
  • in a work mode in which the mobile robot moves autonomously, such as performing automatic cleaning tasks, and completes behavioral operations during the movement, the mobile robot generates control information for autonomous movement and control information for behavioral operations according to a preset path corresponding to the work mode.
  • the landmark pattern is any landmark pattern pre-marked in the map data by the mobile robot.
• taking the circular path L2' in FIG. 12 as an example, the mobile robot performs a sweeping operation while moving along the circular path L2'; when the mobile robot recognizes the landmark pattern, it determines, from its relative positional relationship with that landmark pattern, its deviation from the circular path, and, taking the current position as the starting point, outputs in real time to the mobile device the control information for moving along the circular path through iterative positioning-based navigation, and outputs in real time to the cleaning device the control information used for cleaning.
• the control information used for cleaning includes control information for controlling the rotation speed of the side brush, control information for controlling the suction force of the fan, etc.; this control information may depend on the position of the mobile robot.
• for example, when the current position of the mobile robot approaches an obstacle, the corresponding control information includes control information for increasing the rotation speed of the side brush, and the like.
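• A minimal sketch of position-dependent cleaning control is shown below; the thresholds, scaling factor, and parameter names are assumptions for illustration, not values from the disclosure.

```python
def cleaning_control(distance_to_obstacle_m: float,
                     base_side_brush_rpm: float = 120.0,
                     base_fan_suction: float = 0.6) -> dict:
    """Return cleaning control information for the robot's current position."""
    near_obstacle = distance_to_obstacle_m < 0.3   # assumed proximity threshold
    return {
        # speed up the side brush when the current position approaches an obstacle
        "side_brush_rpm": base_side_brush_rpm * (1.5 if near_obstacle else 1.0),
        "fan_suction": base_fan_suction,
    }

# Example: issued in real time while following the circular path L2'.
info = cleaning_control(distance_to_obstacle_m=0.2)
# -> {'side_brush_rpm': 180.0, 'fan_suction': 0.6}
```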
• in a working mode in which the mobile robot moves autonomously and performs a behavior operation upon reaching a preset position, such as a homing operation, the mobile robot generates, according to the working mode, a path from the current position to the preset position together with the control information for autonomous movement, and, once positioned at the preset position, generates the control information for performing the behavior operation such as the homing operation.
• in this case the landmark pattern is usually a pattern pre-set on a peripheral device and clearly distinct from the surrounding environment; its physical size can be used to determine its placement orientation in the map data. For example, the corner positions of a rectangle in the landmark pattern and their side-length values determine its position in the map data, which facilitates determining the path between the mobile robot and the landmark pattern.
  • the step S340 includes steps S342 and S343.
• in step S342, the relative positional relationship between the mobile robot and the corresponding landmark pattern is determined using at least one type of positional information mapped into the map data.
  • the mobile robot determines the relative positional relationship between the landmark pattern set on the peripheral device and the mobile robot according to at least one type of position information in the map data determined when the mobile robot moves to the vicinity of the peripheral device.
  • the location information includes: location information of the current mobile robot in the map data, and/or landmark location information of the landmark feature in the acquired light intensity lattice data in the map data, and the like.
  • use positioning schemes such as publication numbers CN107907131A, CN111220148A, or CN109643127A to determine the positioning information of the mobile robot in the map data, and determine the relative positional relationship between the mobile robot and the landmark pattern according to the landmark location information of the landmark pattern in the map data.
• as another example, the landmark location information of at least some of the landmark features in the light intensity lattice data is determined by matching against the first landmark data and/or the second landmark data in the map data; since the mobile robot and the light sensing device move as a whole, the pixel position deviation between the pattern area corresponding to the landmark pattern and the matched landmark features in the light intensity lattice data reflects the relative positional relationship between the mobile robot and the landmark pattern, so the mobile robot calculates this relative positional relationship from the corresponding data it can extract.
• in step S343, based on the relative positional relationship, a path for moving to the position at the time of the homing operation, together with its control information, is generated.
  • the mobile robot uses the map data and the relative position relationship to generate a path from the current position to the position during the homing operation, and outputs corresponding control information according to the path.
• the position at the time of the homing operation is defined by a preset angular deviation between the heading angle of the mobile robot and the landmark pattern, or by that angular deviation together with a preset distance between the mobile robot and the landmark pattern.
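• A minimal sketch of steps S342-S343 follows, assuming the robot pose and the landmark location are already expressed in map coordinates; the function names and preset values are hypothetical, and the goal-pose construction is only one way to realize a preset angular deviation and distance relative to the landmark pattern.

```python
import math

def relative_position(robot_pose, landmark_pos):
    """Relative positional relationship (distance, angular deviation) between the
    mobile robot and the landmark pattern, from positions in the map data.
    robot_pose: (x, y, heading_rad); landmark_pos: (x, y)."""
    dx = landmark_pos[0] - robot_pose[0]
    dy = landmark_pos[1] - robot_pose[1]
    distance = math.hypot(dx, dy)
    angular_deviation = math.atan2(dy, dx) - robot_pose[2]
    return distance, angular_deviation

def homing_goal(landmark_pos, landmark_facing_rad,
                preset_distance=0.5, preset_angular_deviation=0.0):
    """Goal pose in front of the landmark pattern, defined by a preset distance
    and a preset angular deviation (assumed values for illustration)."""
    gx = landmark_pos[0] + preset_distance * math.cos(landmark_facing_rad)
    gy = landmark_pos[1] + preset_distance * math.sin(landmark_facing_rad)
    goal_heading = landmark_facing_rad + math.pi + preset_angular_deviation
    return gx, gy, goal_heading
```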
  • the step S340 includes S344 and S345.
• in step S344, the relative positional relationship between the mobile robot and the landmark pattern is determined by constructing a mapping relationship between the landmark pattern and the corresponding pattern area in the light intensity lattice data.
• when acquiring the light intensity lattice data, the mobile robot also acquires the shooting parameters of the light sensing device, uses these shooting parameters to construct a mapping relationship between the extracted pattern area and the corresponding landmark pattern, and determines the relative positional relationship between the mobile robot and the landmark pattern according to that mapping relationship.
  • the mapping relationship and the shooting parameters are as described in the foregoing step S131, and will not be repeated here.
• since the mobile robot moves as a whole, the relative positional relationship between the pattern area in the acquired light intensity lattice data and the landmark pattern corresponds to the relative positional relationship between the mobile robot and the landmark pattern; therefore, the landmark pattern and the pattern area are mapped into the same coordinate system, and the mapping relationship is determined when the calculated degree of overlap between the two satisfies the behavior operation condition.
  • the behavior operation condition includes, for example, a homing operation condition, or a grab/fetch operation condition, and the like.
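• Below is a minimal sketch of mapping the landmark pattern and the extracted pattern area into the same pixel coordinate system and scoring their overlap; the planar pose model, the pinhole-style projection, and the mean-deviation overlap measure are simplified assumptions rather than the disclosure's exact mapping relationship.

```python
import math

def project_point_1d(x, y, theta, t_x, f_x, c_x):
    """Project a landmark measurement point (x, y) onto the 1-D pixel axis under
    an assumed planar pose (theta, t_x); assumes the point lies in front of the
    sensor (positive depth)."""
    xc = math.cos(theta) * x - math.sin(theta) * y + t_x
    yc = math.sin(theta) * x + math.cos(theta) * y   # depth component
    return f_x * xc / yc + c_x

def overlap_deviation(pattern_points, observed_pixels, theta, t_x, f_x, c_x):
    """Mean pixel deviation between the projected positions Pc_i' and the observed
    pixel positions Pc_i; a smaller value means a better overlap."""
    devs = [abs(project_point_1d(x, y, theta, t_x, f_x, c_x) - u_obs)
            for (x, y), u_obs in zip(pattern_points, observed_pixels)]
    return sum(devs) / len(devs)
```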
  • FIG. 14 shows a schematic diagram of a mobile robot in a working mode including a homing operation, wherein the path in the working mode is related to the relative positional relationship between the mobile robot and the landmark pattern.
  • the starting point of the path is the position of the mobile robot when the landmark pattern is photographed at any position, and the end point of the path is the position when the mobile robot determines that the homing operation can be performed by adjusting the pose.
• the homing operation conditions include that the angular deviation between the mobile robot and the landmark pattern does not exceed the angular-deviation threshold for performing the homing operation, and may further include that the distance between the mobile robot and the landmark pattern does not exceed a distance threshold.
  • the homing operation condition can be predicted/calibrated in advance by mapping the physical size of the landmark pattern to the pixel size in the light intensity lattice data.
• the mobile robot uses the shooting parameters to construct the coordinate position, in the camera coordinate system, of each measurement point i on the landmark pattern, and uses the conversion relationship between the camera coordinate system and the coordinate system of the light intensity lattice data (for example, matrix conversion coefficients set according to the intrinsic and extrinsic parameters of the light sensing device) to map measurement point i to the pixel position Pc_i′ in the coordinate system of the light intensity lattice data; there is a pixel position deviation between Pc_i′ and the pixel position Pc_i of the corresponding measurement point i in the acquired light intensity lattice data, and this deviation arises from the relative positional relationship between the mobile robot and the landmark pattern.
• in some specific examples, when multiple measurement points are selected on the landmark pattern, the mobile robot may determine, from the overall angular deviation between the pixel positions Pc_i′ and Pc_i, the relative positional relationship between the robot and the landmark pattern in terms of angular deviation; the overall angular deviation is determined on the basis that the error between the angular deviation of each pair Pc_i′, Pc_i and the angular deviation in the homing operation condition does not exceed a preset error threshold.
• in other specific examples, the degree of overlap between the pixel positions Pc_i′ and Pc_i is calculated, and the distance and angular deviation that the mobile robot must move so that this degree of overlap does not exceed a preset overlap-degree threshold are computed, thereby determining a relative positional relationship that includes both distance and angular deviation.
  • the overlapping degree threshold is determined based on the angular deviation error threshold, or the angular deviation error threshold and the distance threshold in the homing operation condition.
• for one-dimensional light intensity lattice data: u_i is the coordinate value of measurement point i mapped onto the coordinate axis of the light intensity lattice data; u_i′ is the coordinate value, on that axis, of the pixel position corresponding to measurement point i in the pattern area; f_x is the measurement distance of the light sensing device; c_x is the position of the optical center of the light sensing device on that coordinate axis; and (x_i, y_i) is the coordinate position of measurement point i on the landmark pattern, where, as one-dimensional data, the y_i are all constants.
• for two-dimensional light intensity lattice data: N is the number of measurement points; u_i is the coordinate value of measurement point i mapped into the coordinate system of the light intensity lattice data; u_i′ is the coordinate value, in that coordinate system, of the pixel position corresponding to measurement point i in the pattern area; f_x and f_y are the focal lengths of the light sensing device; c_x and c_y are the positions of the optical center of the light sensing device in that coordinate system; and (x_i, y_i) is the coordinate position of measurement point i on the landmark pattern.
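• The projection and pose equations themselves appear in the original publication only as images; the form below is an assumed reconstruction consistent with the symbol definitions above (a planar rigid transform followed by a pinhole-style projection and a least-squares fit), not the verbatim formulas.

For one-dimensional light intensity lattice data, each measurement point $(x_i, y_i)$ on the landmark pattern is transformed by the planar pose $(\theta, t_x)$ and projected onto the measurement axis:

$$\begin{bmatrix} x_i^{c} \\ y_i^{c} \end{bmatrix} = \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix} \begin{bmatrix} x_i \\ y_i \end{bmatrix} + \begin{bmatrix} t_x \\ 0 \end{bmatrix}, \qquad u_i = f_x \, \frac{x_i^{c}}{y_i^{c}} + c_x,$$

and the relative positional relationship is obtained by minimizing the reprojection error over the measurement points:

$$(\theta, t_x) = \underset{\theta,\, t_x}{\arg\min} \; \sum_{i=1}^{N} \left( u_i - u_i' \right)^{2}.$$

For two-dimensional light intensity lattice data, the transform carries a translation $(t_x, t_y)$, the projection uses $f_x, f_y, c_x, c_y$ (with the second image coordinate defined analogously to $u_i$), and $(\theta, t_x, t_y)$ is solved by the same least-squares criterion.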
• in step S345, based on the relative positional relationship, a path for moving to the position at the time of the homing operation, together with its control information, is generated.
• to ensure that the mobile robot is actually at a position where the homing operation can be performed, its position and/or posture relative to the landmark pattern may be adjusted according to the real-time relative positional relationship; the homing operation conditions therefore further include that the number of times the mobile robot adjusts its pose does not exceed a preset count threshold, and/or that the relative positional relationship between the mobile robot and the landmark pattern does not exceed a preset pose-alignment threshold.
• for example, the mobile robot uses the relative positional relationship to generate a corresponding path and its control information; while adjusting its pose according to that path, it also acquires light intensity lattice data in real time and, by executing steps S320-S344, updates the relative positional relationship, then updates the path and its control information according to the updated relationship, until the obtained relative positional relationship satisfies the homing operation condition.
• the mobile robot then generates the control information for performing the homing operation and controls the mobile device to perform the homing movement.
  • the execution process of the step S345 is the same as or similar to the execution process of the step S343, and will not be described in detail here.
• when the mobile robot has moved to a position that satisfies the homing operation conditions, it generates the control information for performing the homing operation and controls the mobile device to perform the homing movement.
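• A minimal sketch of the iterative adjust-and-check loop of step S345 is given below; the method names (acquire_lattice_data, extract_relative_position, move_toward, perform_homing) and the thresholds are assumptions standing in for steps S310-S344 and the homing actuation, not the disclosed implementation.

```python
def iterative_homing(robot, angle_threshold_rad=0.05, distance_threshold_m=0.05,
                     max_adjustments=10):
    """Re-estimate the relative position to the landmark pattern and adjust the
    pose until the homing operation condition is met or the count limit is hit."""
    for _ in range(max_adjustments):                 # preset count threshold
        lattice = robot.acquire_lattice_data()                    # cf. S310/S320
        distance, angle = robot.extract_relative_position(lattice)    # cf. S344
        if abs(angle) <= angle_threshold_rad and distance <= distance_threshold_m:
            robot.perform_homing()    # generate homing-operation control information
            return True
        robot.move_toward(distance, angle)   # update path and control information
    return False                             # condition not met within the limit
```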
• by marking the landmark pattern in the map data, constructing the association relationship between landmark patterns and working modes, and then using that association relationship for control, the mobile robot can autonomously execute the corresponding behavior operations in a working mode.
• this enables the mobile robot to work efficiently and autonomously in a complex working environment, in other words, to complete tasks with high quality within the effective range of the workplace and thereby avoid performing full-coverage autonomous work across the whole workplace.
• the present application also provides a map updating system, a working mode setting system, and a control system; each is a software system that can be configured in a mobile robot, or in a network system formed by a mobile robot and a server system, in which case the mobile robot and the server system cooperatively execute the modules of each system to realize its function.
  • FIG. 15 shows a map updating system for marking landmark patterns in map data according to the present application.
  • the map updating system is used for constructing a map or updating a map.
  • the map updating system includes: an acquisition module 401 , a landmark pattern recognition module 402 , and a marking module 403 .
  • the acquisition module is used to acquire light intensity lattice data
  • the landmark pattern recognition module is used to determine the pattern area that includes the corresponding landmark pattern in the acquired light intensity lattice data
• and the marking module is used to determine, according to at least one type of position information already recorded in the map data, the landmark location information of the landmark pattern in the map data
• wherein the recorded at least one type of position information corresponds to the landmark data in the light intensity lattice data and/or to the shooting position (also called positioning information) at which the light intensity lattice data was acquired.
  • the respective execution processes of the acquisition module, the landmark pattern recognition module, and the marking module correspond to the execution manners of the foregoing examples in steps S110-S130, which will not be described in detail here.
• in other examples, the execution process of the landmark pattern recognition module corresponds to the execution manner of the examples in step S210 above, which is not described in detail here; and the marking module is used to update each second landmark data selected by the landmark pattern recognition module so that it is recorded as first landmark data in the map data.
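• As a structural illustration only (class and method names are assumptions, not the disclosed implementation), the three modules of the map updating system of FIG. 15 can be sketched as follows:

```python
class MapUpdatingSystem:
    """Sketch of FIG. 15: acquisition, landmark pattern recognition, and marking modules."""

    def __init__(self, light_sensor, map_data):
        self.light_sensor = light_sensor   # source of light intensity lattice data
        self.map_data = map_data           # map with recorded position information

    def acquire(self):
        # acquisition module: read one frame of light intensity lattice data
        return self.light_sensor.read_lattice_data()

    def recognize(self, lattice):
        # landmark pattern recognition module: return the pattern area that
        # matches a landmark pattern, or None if no landmark pattern is present
        return self.map_data.match_landmark_pattern(lattice)

    def mark(self, pattern_area, robot_pose):
        # marking module: determine the landmark location from the recorded
        # position information and record it as first landmark data
        location = self.map_data.locate(pattern_area, robot_pose)
        self.map_data.add_first_landmark(pattern_area, location)
        return location
```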
  • the present application also provides an association system for constructing an association relationship between a landmark pattern and a working mode.
  • the working mode setting system includes at least one of the following: a first setting module and/or a second setting module.
  • the first setting module is used to construct the association relationship by means of human-computer interaction.
  • the execution process of the first setting module may correspond to the foregoing execution process of constructing the association relationship through human-computer interaction, which will not be described in detail here.
  • the second setting module is configured to establish an association relationship between the corresponding landmark pattern and the corresponding at least one working mode by learning the working mode corresponding to the landmark pattern.
  • the execution process of the second setting module corresponds to the foregoing execution process of establishing the association relationship between the corresponding landmark pattern and the corresponding at least one work mode by learning the work mode corresponding to the landmark pattern, which will not be described in detail here.
  • FIG. 16 shows a control system of the present application for controlling a mobile robot by utilizing the relationship between the landmark pattern and the working mode.
  • the control system includes: an acquisition module 411, a landmark pattern recognition module 412, and a control module 413; in some embodiments, it also includes a positioning module (not shown).
• the acquisition module is used to acquire the light intensity lattice data; the landmark pattern recognition module is used to extract the pattern area corresponding to the landmark pattern from the acquired light intensity lattice data; and the control module is used to generate the control information of the mobile robot based on the working mode corresponding to the determined landmark pattern, wherein the control information is used to control the mobile robot to perform the corresponding behavior operations along the path in that working mode.
• the positioning module is used to acquire environmental measurement data and to use the environmental measurement data together with the map data to determine the landmark location information, in the map data, of the landmark pattern identified by the mobile robot, so that this landmark location information can be used to locate the identified landmark pattern; wherein the environmental measurement data includes the light intensity lattice data.
  • the execution process of the acquisition module corresponds to the execution process of step S310 in the foregoing example
  • the execution process of the landmark pattern recognition module corresponds to the execution process of step S320 in the foregoing example
• the execution process of the control module corresponds to the execution process of step S340 in the foregoing examples
• the execution process of the positioning module corresponds to the execution process of step S330 in the foregoing examples; these are not described in detail here.
• the present application also provides a computer readable and writable storage medium storing at least one program which, when called, executes and implements at least one embodiment of the map marking method, the method for establishing the association relationship between landmark patterns and working modes, and the control method shown in FIGS. 7-13 above.
  • the functions, if implemented in the form of software functional units and sold or used as independent products, may be stored in a computer-readable storage medium.
• in essence, the technical solution of the present application, or the part that contributes to the prior art, or a part of the technical solution, can be embodied in the form of a software product.
• the computer software product is stored in a storage medium and includes several instructions used to enable the mobile robot equipped with the storage medium to execute all or part of the steps of the methods described in the various embodiments of the present application.
• the computer readable and writable storage medium may include read-only memory, random access memory, EEPROM, CD-ROM or other optical disk storage devices, magnetic disk storage devices or other magnetic storage devices, flash memory, a USB stick, a removable hard disk, or any other medium that can be used to store the desired program code in the form of instructions or data structures and that can be accessed by a computer. Also, any connection is properly termed a computer-readable medium.
• for example, if the instructions are sent from a website, server, or other remote source using coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium.
  • computer-readable storage media and data storage media do not include connections, carrier waves, signals, or other transitory media, but are instead intended to be non-transitory, tangible storage media.
• Disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers.
  • the functions described by the computer programs of the methods described herein may be implemented in hardware, software, firmware, or any combination thereof.
• when implemented in software, the functions may be stored on, or transmitted over, a computer-readable medium as one or more instructions or code.
• the steps of the methods or algorithms disclosed herein may be embodied in processor-executable software modules, which may reside on a tangible, non-transitory computer readable and writable storage medium.
  • Tangible, non-transitory computer-readable storage media can be any available media that can be accessed by a computer.
• each block in the flowchart or block diagrams may represent a module, segment, or portion of code that contains one or more executable instructions for implementing the specified logical function(s).
  • the functions noted in the blocks may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved.
• each block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations, can be implemented by dedicated hardware-based systems that perform the specified functions or operations, or by a combination of dedicated hardware and computer instructions.

Abstract

一种移动机器人的标记、关联和控制方法、系统及存储介质。其中控制方法包括:获取光强度点阵数据;从所述光强度点阵数据中提取对应地标图案的图案区域;其中,所述地标图案在地图数据中标记有相应的地标位置信息;基于所确定的地标图案所对应的工作模式,生成所述移动机器人的控制信息;其中,所述控制信息用于控制所述移动机器人按照所述工作模式下的包含所述地标位置信息的路径执行相应行为操作。利用所述关联关系控制移动机器人自主地在工作模式下执行相应行为操作,实现了移动机器人在复杂的工作环境下高效地自主工作。

Description

移动机器人的标记、关联和控制方法、系统及存储介质 技术领域
本申请涉及移动机器人技术领域,具体的涉及一种移动机器人的标记、关联和控制方法、系统及存储介质。
背景技术
移动机器人由于具有自主移动功能,在机场、火车站、仓储、酒店等大型室内场合应用颇多。例如,商用清洁机器人、搬运/配送机器人、迎宾机器人等,利用自主移动功能实现清洁、搬运、引路等功能。
受室内场地复杂度等外部因素,以及移动机器人所能提供的功能等内部因素的影响,移动机器人有时无法充分、高效地施展其所能提供的功能,导致这类移动机器人在相应市场上的使用受到局限。
发明内容
鉴于以上所述相关技术的缺点,本申请的目的在于提供一种移动机器人的标记、关联和控制方法、控制系统及存储介质,用以克服上述相关技术中存在移动机器人不便于在复杂环境内对环境进行甄别以执行相应功能的技术问题。
为实现上述目的及其他相关目的,本申请公开的第一方面提供一种地标图案在地图数据中的标记方法,包括:获取光强度点阵数据;确认所获取的光强度点阵数据中包含对应地标图案的图案区域;其中,所述地标图案用于对应于移动机器人的至少一种工作模式;根据地图数据中已记录的至少一种位置信息,确定所述地标图案在地图数据中的地标位置信息;其中,所述已记录的至少一种位置信息对应于所述光强度点阵数据中的地标数据和/或对应于移动机器人获取所述光强度点阵数据时的定位信息。
本申请公开的第二方面提供一种建立工作模式和地标图案关联的方法,包括:获取对应地图数据中地标位置信息的地标图案;以及获得途经所述地标图案所在地标位置信息的至少一条路径及其行为操作;其中,所述地标位置信息是利用第一方面所述的标记方法标记的;建立所述地标图案与每一种工作模式之间的关联关系并予以保存,其中,所述工作模式包括其中一条路径及其行为操作。
本申请公开的第三方面提供一种移动机器人的控制方法,包括:获取光强度点阵数据;从所述光强度点阵数据中提取对应地标图案的图案区域;其中,所述地标图案在地图数据中标 记有相应的地标位置信息;基于所确定的地标图案所对应的工作模式,生成所述移动机器人的控制信息;其中,所述控制信息用于控制所述移动机器人按照所述工作模式下的包含所述地标位置信息的路径执行相应行为操作。
本申请公开的第四方面提供一种移动机器人的控制装置,包括:存储单元,存储有地图数据和至少一个程序;处理单元,调用并执行所述至少一个程序,以协调所述存储装置执行并实现以下任一种方法:如第一方面所述的标记方法、如第二方面所述的关联方法、或者如第三方面所述的控制方法。
本申请公开的第五方面提供一种移动机器人,包括:光感应装置,用于获取光强度点阵数据;移动装置,用于受控执行移动操作;如第四方面所述的控制装置,与所述光感应装置和移动装置相连,以基于获取自所述光感应装置的光强度点阵数据控制移动装置。
本申请公开的第六方面提供一种计算机可读存储介质,其特征在于,存储至少一种程序,所述至少一种程序被处理器运行时控制所述存储介质所在设备执行如第一方面所述的标记方法、如第二方面所述的关联方法、或者如第三方面所述的控制方法。
综上所述,本申请公开的移动机器人的标记、关联和控制方法、控制系统及存储介质,通过将地标图案标记在地图数据中,以及构建地标图案和工作模式之间的关联关系,更利用所述关联关系控制移动机器人自主地在工作模式下执行相应行为操作,实现了移动机器人在复杂的工作环境下高效地自主工作,换言之在工作场所内的有效范围内高质量完成任务,以避免在工作场所内执行全覆盖范围式的自主工作。
本领域技术人员能够从下文的详细描述中容易地洞察到本申请的其它方面和优势。下文的详细描述中仅显示和描述了本申请的示例性实施方式。如本领域技术人员将认识到的,本申请的内容使得本领域技术人员能够对所公开的具体实施方式进行改动而不脱离本申请所涉及发明的精神和范围。相应地,本申请的附图和说明书中的描述仅仅是示例性的,而非为限制性的。
附图说明
本申请所涉及的发明的具体特征如所附权利要求书所显示。通过参考下文中详细描述的示例性实施方式和附图能够更好地理解本申请所涉及发明的特点和优势。对附图简要说明书如下:
图1A-1D显示为利用几何形状构成地标图案的示例图。
图2显示为超市内一水产区的简易示意图。
图3显示为本申请移动机器人的结构框图。
图4显示为本申请配置有驱动部件的移动机器人整机的结构示意图。
图5显示为本申请所述移动机器人上的驱动部件的结构示意图。
图6显示为本申请设有地标图案的充电桩的结构示意图。
图7显示为本申请第一地标数据的标记方法的流程示意图。
图8A-8B显示为摄取到地标图案的光强度点阵数据。
图9A-9C,其分别显示为以卡通图案为例的地标图案、对应的图案掩膜、图案轮廓线条。
图10A-10B分别显示为移动机器人所获取的对应图1A的一维光强度点阵数据和二维光强度点阵数据。
图11显示为在两个地图区域中经统计筛选出的由星型表示的各第二地标数据,以三角形组合的方式构建的地标图案。
图12显示为从移动机器人所记录的多条历史路径中筛选出的三条历史路径。
图13显示为移动机器人的一种控制方法的流程示意图。
图14显示为移动机器人在包含归位操作的工作模式下的场景示意图。
图15显示为将地标图案标记在地图数据中的地图更新系统的架构示意图。
图16显示为利用所述地标图案与工作模式之间关联关系对移动机器人进行控制的控制系统的架构示意图。
具体实施方式
以下由特定的具体实施例说明本申请的实施方式,熟悉此技术的人士可由本说明书所揭露的内容轻易地了解本申请的其他优点及功效。
在下述描述中,参考附图,附图描述了本申请的若干实施例。应当理解,还可使用其他实施例,并且可以在不背离本公开的精神和范围的情况下进行模块或单元组成、电气以及操作上的改变。下面的详细描述不应该被认为是限制性的,并且本申请的实施例的范围仅由公布的专利的权利要求所限定。这里使用的术语仅是为了描述特定实施例,而并非旨在限制本申请。
如同在本文中所使用的,单数形式“一”、“一个”和“该”旨在也包括复数形式,除非上下文中有相反的指示。应当进一步理解,术语“包含”、“包括”表明存在所述的特征、步骤、操作、元件、组件、项目、种类、和/或组,但不排除一个或多个其他特征、步骤、操作、元件、组件、项目、种类、和/或组的存在、出现或添加。此处使用的术语“或”和“和/或”被解释为包括性的,或意味着任一个或任何组合。因此,“A、B或C”或者“A、B和/或C”意味着“以下任一个:A;B;C;A和B;A和C;B和C;A、B和C”。仅当元件、功能、步骤或操作的组合 在某些方式下内在地互相排斥时,才会出现该定义的例外。
对于火车站、机场候机厅、大型超市等环境复杂的场所来说,其具有工作范围大、场景重复度高等特点,这使得移动机器人一方面预存储的地图数据中相似地标的稀疏性较差,移动机器人通过识别场地的装修环境来实现定位的能力较弱;另一方面,由于场地空旷,移动机器人所配置的环境探测装置能力不足、或周边环境遮挡严重,以至于会出现未能连续探测可匹配地标等情况,使得移动机器人在自主移动时出现实际路径与规划路径不符等情况。
以商用清洁机器人在候机厅执行清洁任务为例,候机厅的装修环境的相似度极高,商用清洁机器人利用如VSLAM技术从图像中提取装修环境所形成的地标数据并结合惯导进行定位时,其所构建的地图数据中包含较为密集的高相似度的地标数据,这使得当商用清洁机器人所在周围环境出现遮挡、光线较强/较暗等情况时,商用清洁机器人中难于在所获取的图像和地图数据中匹配出有效表示移动机器人拍摄位置的地标数据,如此,该商用清洁机器人可能因为无法获得定位数据而中断导航移动,进而中断所执行的清洁工作。
为了有利于移动机器人在复杂的工作场所中获取到更多可供识别地理位置的地标,在一些实施方式中,预先在工作场所内贴设标识码,利用标识码中经编码处理的二进制数据,向移动机器人提供位置数据。其中,所述标识码举例包括如条形码、或二维码等利用颜色块描述经编码的二进制数据的图案码。为了确定工作场所中标识码所表示的地理位置,在一些示例中,操作人员预先按照标识码所对应的标签,将各标识码贴设在移动机器人工作的工作场所的指定位置,对应地,各标识码是利用各标签的二进制数据进行编码得到的图案码。例如,标签为001编号的饮水机,操作人员将相应的标识码贴在相应饮水机上并在地图数据中标记相应饮水机的位置信息,在移动机器人自主移动期间,其通过解码图像中所拍摄到的该标识码,得到编号为001的饮水机的信息,以及关联得到该饮水机的位置信息。通过上述示例的方式,移动机器人能够判断在拍摄到标识码时其位于相应位置信息附近,依据该判断结果,移动机器人可大致确定是否仍在预设的导航路线上、或者是否已严重偏离预设的导航路线。
对移动机器人来说,通过解码标识码中的二维码数据的方式来确认标识码所表示的标签,以及根据标签确定标识码所贴的位置的方式,一定程度地提高了移动机器人在复杂工作场所内导航移动的准确性,但仍不利于平衡移动机器人的工作效率和工作范围。以商用清洁机器人为例,若为了提高商用清洁机器人的工作范围,则需要在工作场所贴更多的标识码,由此通过有效覆盖相应的工作范围来辅助机器人定位,这使得标识码影响相应工作场所的美观,或影响工作场所常规功能,例如干扰旅客在候机厅内寻找方向指示的指示牌等。不仅如此,采用人工方式在地图数据中标记标识码的方式,使得标识码的实际位置与标记位置误差较大, 不利于准确定位。例如,若标识码设置在岔路口附近,则该标识码可能误导移动机器人的定位。
为了既提高移动机器人的工作效率,又能够向移动机器人提供用于准确定位的地标数据,本申请还提供了一种利用预先构建的地标图案和工作模式的关联关系,控制移动机器人所执行的操作的及系统。其中,所述关联关系是用于供移动机器人根据地标图案在地图中的位置、途经该位置的路径而确定相应行为操作的。为了便于将地标图案与路径、工作模式进行关联,所述地标图案布置在工作场所中与工作模式关联的功能区域。例如,所述工作场所为一超市,所述移动机器人为商用清洁机器人,带有地标图案的物品布置在超市中的不同类型商品的区域;所述区域举例包括生鲜区域、零食区域、洗漱用品区域等;根据不同区域所摆放的商品类型,商用清洁机器人的路径和清洁方式不同,鉴于生鲜区域常有水渍,在生鲜区域,商用清洁机器人围绕生鲜摆放柜台的路径执行拖地操作;鉴于零食区域和洗漱用品区域常有顾客光顾,在零食区域和洗漱用品区域,商用清洁机器人围绕零食和洗漱用品摆放柜台的路径执行采用清扫操作和拖地操作。又如,所述工作场所为一候机大厅,所述移动机器人为商用清洁机器人,带有地标图案的物品布置在候机大厅中候机休息的区域和办理登机手续的区域;在候机休息的区域和办理登机手续的区域内,商用清洁机器人沿不同路径执行清扫/拖地操作。
为了标识工作场所中不同路径,所述地标图案设置在工作场所的各区域内移动机器人所移动的路径上。所述地标图案可唯一标识与移动机器人的工作模式相对应的路径上的位置,或者标识对应多个工作模式的多条路径的交汇处位置等。
在一些示例中,所述地标图案位于预先布置在工作场所不同位置的物品上。其中,所述地标图案明显区别于工作场所中的装修环境所形成的图案、或明显区别于工作场所中指示标识图案等,其主要是利用图形、纹理、颜色中至少一种或多种组合而成的图案。其中,所述装修环境中的图案包括为了表达装修风格而设置的图案,例如,墙纸、墙围、座椅上的花纹等。所述指示图案举例包括:候机厅的询问图案、各航空公司的商标图案、超市的优惠图案等。所述物品举例为贴纸、贴牌、立牌、挂牌等;或者融合地标图案的瓷砖等装饰物品。所述地标图案包括预设的至少一种几何形状。所述几何形状包括如矩形、折线型、三角形、圆形、椭圆形、线段、曲线中的至少一种或组合等基本几何形状。所述地标图案上的各几何形状利用至少一种反射系数的材料拼成的。其中,所述几何形状可以是利用高反光材料构成的,或者利用低反光材料构成的、再或者利用高反光材料和低反光材料组合而成的。其中,所述高反光材料和低反光材料是基于移动机器人的光感应装置对其敏感光波长的光强而选取的。为了辨识地标图案,在一个设置有地标图案的物品上至少采用两种不同光反射系数的材料(又 简称反光材料),其中,所述地标图案可以是通过高反光材料或低反光材料构成。为防止误识别,所述几何形状的物理尺寸为预设的,例如预设以下各示例几何形状的各物理尺寸:三角形的各边长、三角形中两个边和一个角、圆形半径、矩形边长、线段长度、曲线的长度和曲率等。例如,利用高反光材料制作的一种几何形状和利用低反光材料制作的另一种几何形状拼接成地标图案,其中,两种几何形状可以相同或不同,其拼接方式包括无缝拼接或有缝拼接等。
请参阅图1A-图1D,其显示为利用几何形状构成地标图案的示例图,其中,图1A中的几何形状包括矩形A1和A2,矩形A1和A2之间留有缝隙,为利于辨识矩形A1、A2、以及矩形A1和A2之间的缝隙,矩形A1和A2采用相对高反光材料,该缝隙采用相对低反光材料;或者矩形A1和A2采用相对低反光材料,该缝隙采用相对高反光材料。图1B中的几何形状包括矩形B1和圆形B2,矩形B1和圆形B2之间留有缝隙,为利于辨识矩形B1和圆形B2、以及矩形B1和圆形B2之间的缝隙,矩形B1和圆形B2采用相对高反光材料,该缝隙采用相对低反光材料;或者与图示不同地,矩形B1和圆形B2采用相对低反光材料,该缝隙采用相对高反光材料。图1C中的几何形状包括环状C1和C2,环状C1的外轮廓是矩形、内轮廓为椭圆形,环状C2的外轮廓和内轮廓均为圆形,环状C1和C2之间留有缝隙,为利于辨识环状C1和C2的外轮廓所形成的形状和内轮廓所形成的形状,环状C1和C2采用相对高反光材料,其外围周边、环状内和缝隙采用相对低反光材料;或者与图示不同地,环状C1和C2采用相对第一低反光材料,其外围周边采用相对第二低反光材料,和其内轮廓所形成的形状采用相对高反光材料,其中,第一低反光材料低于第二低反光材料的光反射系数,以便凸显内轮廓所围成的形状的边界。图1D中的几何形状包括利用线条D1、D2和D3围成的不封闭的三角形,为利于辨识线条D1、D2和D3,线条D1、D2和D3采用相对高反光材料,其外围周边和所围成的非封闭三角形采用相对低反光材料。
在另一些示例中,所述地标图案为移动机器人通过识别在工作场所内的区域所摆放的物体及所述物体周围环境中的多个地标而确定的图案。换言之,所述地标图案是依据移动机器人对光强度点阵数据进行图像处理而得到相应工作场所内一具体位置的多个地标所构成的图案。例如,所述工作场所为一超市,超市中水产功能的区域内所摆放的鱼缸、水龙头、鱼类商品标牌、灯罩等,移动机器人通过图像识别的形状、纹理中的一种或多种组合等得到由至少一种上述物品表面上的点/线等地标所构成的地标图案。例如,所述地标图案是利用多个地标构建出上述至少一种几何图形或几何图形的组合而确定的。请参阅图2所示,其显示为超市内一水产区的简易示意图,其中星型标记为经移动机器人识别而构建的三角形的地标图案, 其中,星型标记为本示例中为表示移动机器人所识别的地标在水产区的位置而进行的图示标记,其不表示在水产区内设有星型图案。
移动机器人根据不同时段在同一功能区域中的不同位置所拍摄到的图像和/或所测量的测量数据等提取反应功能区域内具体位置的且稳定存在的地标数据,这些地标数据所对应的地标构成地标图案。其中,所述地标的光反射能力区别于其周边环境的光反射能力,以形成与周边环境的光反射差异。所述地标举例包括但不限于:宣传牌的边框上的点/线、印刷的文字的笔画/线/点、灯罩的轮廓上的点/线等。
例如,移动机器人配置有补光灯,以配合在多次导航移动期间工作场所内获取各拍摄位置的光强度点阵数据,由此便于利用工作场所内的对补光灯的光波具有强反射能力/弱反射能力的地标,获取包含地标的光强度点阵数据,并通过对光强度点阵数据的数据处理得到相应的地标数据。其中,多次导航移动包括以下至少一种:在工作场所休业/营业期间所执行的导航移动、在日光充足/日光不足时候所执行的导航移动等。其中,所述地标数据包括:包含地标的光强度点阵数据、描述地标的特征向量(简称为地标特征)、地标在地图数据中的地标位置信息等,与所述地标数据关联的还有拍摄的位置信息和包含地标的光强度点阵数据等。利用上述各数据及其位置信息的机器学习,移动机器人从中选取用于构成地标图案的多个地标数据。其中,所述地标图案为对应所选出的地标数据的地标构成的。
借鉴上述示例中的所述地标图案是基于各地标数据在地图数据中的地标位置信息所围成的几何图形或几何图形的组合图形得到的;或者所述地标图案是基于各地标数据在光强度点阵数据中所围成的几何图形或几何图形的组合图形得到的。由此可见,所述地标图案是基于移动机器人对所获取的地标数据选择而确定的。
在此,所述选择地标数据的方式举例包括以下至少一种:通过统计拍摄到各地标数据的拍摄位置而选择能够拍摄到同一地标的拍摄位置最多的地标数据;从多个地标数据中选择能够对应于同一光强度数据的地标数据。例如,利用地图数据中所建立的地标数据与拍摄位置、所拍摄的光强度数据三者之间的关联关系,对各地标数据所对应的拍摄位置、各拍摄位置之间的位置关系等进行统计处理,对所统计的拍摄同一地标数据的拍摄位置的数量,各拍摄位置之间位置关系远近、位置角度等进行评价,从中选出用于构成地标图案的多个地标数据。又如,利用地图数据中所建立的地标数据与拍摄位置、所拍摄的光强度数据三者之间的关联关系,确定对应各光强度数据拍摄到的多组地标数据{P11、P12、P13、P14}、{P21、P22、P13、P14、P15}、{P11、P21、P12、P13}、{P21、P12、P13、P16},对所确定的多个地标数据在不同位置处被拍摄到的次数、能够落入同一光强度数据中的次数进行统计处理,对所统计的拍 摄位置的数量、落入同一光强度数据中的概率等进行评价,从中选出用于构成地标图案的多个地标数据{P21、P12、P13}。
上述各示例所选出的多个地标数据能够对应到工作场所中的多个地标,所述多个地标所围成的几何图形(或几何图形的组合)构成在工作场所中的地标图案。
在又一些示例中,所述地标图案是结合前述两个示例而得到的。为便于描述,前述明显区别于工作场所的环境和功能指示图案的地标图案称为第一地标图案,以及前述利用工作场所内的地标所构成的地标图案称为第二地标图案。结合第一地标图案和第二地标图案得到的地标图案称为第三地标图案。在此,构成第三地标图案的地标包含所述第一地标图案中至少部分几何形状上的点/线,以及第二地标图案中的地标。
需要说明的是,若工作场所中布置多个第一地标图案,各所述第一地标图案之间不完全相同,且相同的第一地标图案所分布的位置需具备稀疏性,以防止移动机器人在定位不精准时无法准确依据所识别的第一地标图案确定相应的工作模式。例如,在预设半径范围(如5m范围)内,不会布置相同图案的第一地标图案。
还需要说明的是,鉴于上述地标图案并非一定是布置在工作场所且明显区别于工作场所已有环境中的几何图形或几何图形的组合而预设的,为区分于用于关联工作模式的地标图案所对应的地标数据以及其他地标所对应的地标数据,后续说明中将地标图案所对应的地标数据称为第一地标数据,其他地标所对应的地标数据称为第二地标数据。另外,在尚未确定所识别的地标数据为第一地标数据之前,也将其称为第二地标数据。其中其他地标包括工作场所内通过环境装修、建筑布局、功能区域划分而存在的、可被移动机器人识别的与地理位置关联的地标。例如,在移动机器人第二位置的屋顶/周围环境上所形成的可被移动机器人识别出地标特征的地标。如利用屋顶的装饰图案、照明装置、探测感应装置,和/或周围环境中的宣传/指示标牌、桌椅、围栏等所形成的地标。
为了将上述任一示例所描述的地标图案与移动机器人的工作模式进行关联,所述移动机器人包含提供相应工作模式以及识别各地标图案的硬件装置和软件装置。
请参阅图3,其显示为移动机器人的结构框图。所述移动机器人包括:光感应装置12、移动装置14、控制装置11等。根据移动机器人所能提供的工作模式中的行为操作,所述移动机器人还包括:行为操作装置13。以移动机器人为商用清洁机器人为例,所述行为操作装置13包括以下至少一种:清扫组件、拖地组件、或磨光组件等。
其中,所述光感应装置用于获取光强度点阵数据。其中,根据光感应装置以点/线/面等感应光强信号的方式,所对应得到的光强度点阵数据可以是一维数据或二维数据。
在一些示例中,所述光感应装置为利用光反射原理进行测量的一维光感应装置。例如,所述光感应装置包括以下至少一种线激光感应装置、或可运动的单点激光感应装置等。其中,线激光感应装置获取具有一定可探测范围的一维光强度点阵数据。例如,所述线激光感应装置包含排列成直线状/弧线状的多个激光收发器,各激光收发器所发射出的点阵激光束位于一平面内。所述可运动的单点激光感应装置根据其运动行程得到具有对应可探测范围的一维光强度点阵数据。例如,该单点激光感应装置在大于0度且小于等于360度的可探测范围内旋转,以获得在其旋转面内所获取的一维光强度点阵数据,所述一维光强度点阵数据中各点的光强度值对应其可探测范围内的各角度值。
在又一些示例中,所述光感应装置为利用光反射原理进行图像成像的二维光感应装置。例如,所述光感应装置包括以下至少一种多线激光感应装置、摄像装置、红外摄像装置等。其中,所述多线激光感应装置获取具有一定可探测范围的多组一维光强度点阵数据,以构成二维的光强度点阵数据。所述光感应装置包含集成有CCD、CMOS的单目摄像装置或双目摄像装置等,以得到颜色图像(即二维的光强度点阵数据)。所述红外摄像装置包括红外发光二极管和CCD芯片,以感应到红外光的颜色图像(即二维的光强度点阵数据),其可与如ToF感应器等其他测量传感器集成在一起,以同时输出如深度图像数据等其他测量数据和二维的光强度点阵数据。
在再一些示例中,设置在移动机器人上的二维光感应装置还配置有驱动部件,以驱动光感应装置在移动机器人所提供的可探测范围内转动和/或平移。其中,所述可探测范围包含二维光感应装置的固有视角范围以及被驱动部件带动而产生的转动量程和/或平移量程的总和。
请参考图4和图5,其分别显示为配置有驱动部件的移动机器人整机的结构示意图,以及所述驱动部件的结构示意图。其中,移动机器人1包含有主体10、可视的外壳100、供装配驱动部件的载体102,其中,所述载件102设有供露出所述光感应装置101的镂空结构103。在此,在驱动部件的控制下,光感应装置101可沿A和/或B方向移动。如图5所示,所述驱动部件202包括:可活动件2021及驱动器2022。
具体的,所述可活动件2021连接并能活动至带动所述光感应装置(未予图示)。所述光感应装置与可活动件2021之间可以是定位连接或通过传动结构连接。其中,所述定位连接包括:卡合连接、铆接、粘接、及焊接中的任意一种或多种。在定位连接的示例中,例如图5所示,可活动件2021例如为可以横向转动的驱动杆,而所述光感应装置具有与该驱动杆形状配合地套合的凹孔(未图示),只要驱动杆和凹孔的截面非圆形,则光感应装置就可以随驱动杆进行横向转动;在一些传动结构的示例中,所述可活动件2021例如为丝杆,丝杆上的连接座 该随丝杆转动而平移,所述连接座供与所述光感应装置201固定,以使得所述光感应装置201能随之运动。在一些传动结构的示例中,所述光感应装置201与可活动件2021之间也可以通过齿部、齿轮、齿条、齿链等中的一种或多种连接,以实现可活动件2021对于光感应装置201的带动。
若设置在所述驱动部件上的线激光感应装置可获取一维的光强度点阵数据,则在在驱动部件的带动下,线激光感应装置可获得更大可探测范围的一维的光强度点阵数据。若设置在所述驱动部件上的光感应装置可获取二维的光强度点阵数据,则在在驱动部件的带动下,光感应装置可在可探测范围内获得重叠视角区域的多个二维的光强度点阵数据,或者获得具有可探测范围内的一个二维的光强度点阵数据。
所述移动装置用于受控执行移动操作。于实际的实施方式中,移动装置可包括行走机构和驱动机构,其中,所述行走机构可设置于移动机器人的底部,所述驱动机构内置于所述移动机器人的壳体内。进一步地,所述行走机构可采用行走轮方式,在一种实现方式中,所述行走机构可例如包括至少两个万向行走轮,由所述至少两个万向行走轮实现前进、后退、转向、以及旋转等移动。在其他实现方式中,所述行走机构可例如包括两个直行行走轮和至少一个辅助转向轮的组合,其中,在所述至少一个辅助转向轮未参与的情形下,所述两个直行行走轮主要用于前进和后退,而在所述至少一个辅助转向轮参与并与所述两个直行行走轮配合的情形下,就可实现转向和旋转等移动。所述驱动机构可例如为驱动电机,利用所述驱动电机可驱动所述行走机构中的行走轮实现移动。在具体实现上,所述驱动电机可例如为可逆驱动电机,且所述驱动电机与所述行走轮的轮轴之间还可设置有变速机构。
所述行为操作装置用于按照移动机器人所移动的位置执行某一工作模式下的行为操作。以移动机器人为商用清洁机器人为例,所述行为操作装置又称为清洁装置,其具有清扫和拖地功能。
在实施例中,所述清洁装置包括拖地组件(未予图示),其中,所述拖地组件用于受控执行拖地操作。所述拖地组件包括:拖垫、拖垫承载体、喷雾装置、洒水装置等。所述拖地组件用于在拖地模式下受控执行拖地操作。
所述清洁装置还包括清扫组件(未予图示),所述清扫组件用于受控执行扫地操作。所述清扫组件可包括位于壳体底部的边刷、滚刷以及与用于控制所述边刷的边刷电机和用于控制所述滚刷的滚刷电机,其中,所述边刷的数量可为至少两个,分别对称设置于移动机器人壳体前端的相对两侧,所述边刷可采用旋转式边刷,可在所述边刷电机的控制下作旋转。所述滚刷位于移动机器人的底部中间处,可在所述滚刷电机的控制下作旋转转动进行清扫工作, 将垃圾由清洁地面扫入并通过收集入口输送到吸尘组件内。所述吸尘组件可包括集尘室、风机,其中,所述集尘室内置于壳体,所述风机用于提供吸力以将垃圾吸入集尘室中。所述清洁装置并不以此为限。
对于具备自动归位操作行为(简称归位操作)的移动机器人来说,与移动机器人配套使用的还包括:充电桩和/或垃圾回收装置。其中,所述垃圾回收装置包括固体垃圾回收装置和/或污水回收装置。
相应地,在充电桩、垃圾回收装置所在位置也设有上述提及的地标图案,该地标图案用于与移动机器人的归位工作模式相关联。在此应用中,所述移动机器人还包括家用清洁机器人,其用于在家庭场所内执行清洁操作,以及执行归位操作。
以充电桩为例,请参阅图6,其显示为设有地标图案的充电桩的结构示意图,其中,所述充电桩包括:本体和暴露于本体外的电信号端子22、在本体的表面设有地标图案21。该地标图案对应于移动机器人的归位操作,移动机器人通过预先设置的归为操作和地标图案的对应关系构建归为操作所属的工作模式与相应的地标图案之间的关联关系。
需要说明的是,所述充电桩的结构仅为示例,所述充电桩还可以是利用无线充电原理而提供充电功能的充电桩,其本体上也可设有用于标识充电桩位置的地标图案。
还需要说明的是,与充电桩的本体上设置地标图案的方式类似,垃圾回收装置的本体表面也设有地标图案,以便关联包含归为操作的工作模式与地标图案。
所述控制装置用于标记地标图案,构建各地标图案与工作模式之间的关联关系,以及利用所述关联关系对移动机器人的移动装置、行为操作装置等进行控制。所述控制系统包括存储单元、接口单元、和处理单元。
所述接口单元用于接收自所述光感应装置所摄取的光强度点阵数据。根据实际移动机器人所配置的光感应装置,所述接口单元与至少一个光感应装置相连,用于从相应光感应装置读取在其可探测范围内的物体表面反射光所形成的光强度点阵数据。所述接口单元还与移动装置和/或行为执行装置相连,以输出控制移动装置和/或行为执行装置的控制指令,例如,所述接口单元与驱动边刷、滚刷、拖地组件、或者行走机构的驱动电机相连,来输出所述控制指令,以控制边刷、滚刷、或者行走机构的转动。所述控制指令是处理单元基于关联关系而生成的。所述接口单元包括但不限于:如HDMI接口或USB接口的串行接口,或并行接口等。
所述存储单元用于存储至少一种程序,所述至少一种程序可供所述处理单元执行移动机器人的控制方法、地标标记方法等使各硬件装置协同执行的指令。所述存储单元还存储有地 图数据。其中,所述地图数据为利用栅格/向量等将移动机器人所移动过的工作场所映射到一坐标系下的数字化表达,其包括但不限于:各类地标数据及其地标位置信息、移动机器人在移动期间所拍摄的光强度点阵数据及其拍摄位置信息、移动机器人在移动期间所测量的障碍物轮廓及其障碍物位置信息、移动机器人通过对障碍物轮廓的数据处理而得到的工作场所的边界数据及其位置信息、可供移动机器人移动的区域的边界数据及其位置信息等。其中,所述各类地标数据包括前述提及的第一地标数据和第二地标数据。上述各数据/信息等之间根据数据关联需求而构建数据关联关系,以便利用其中部分数据/信息索引出其他数据/信息。例如,各类地标数据均与光强度点阵数据具有数据关联关系,利用各类地标数据可查询到拍摄相应地标数据的光强度点阵数据及其拍摄位置等。
在此,存储单元包括但不限于:只读存储器(Read-Only Memory,简称ROM)、随机存取存储器(Random Access Memory,简称RAM)、非易失性存储器(Nonvolatile RAM,简称NVRAM)。例如,存储单元包括闪存设备或其他非易失性固态存储设备。在某些实施例中,存储单元还可以包括远离一个或多个处理单元的存储器,例如经由RF电路或外部端口以及通信网络访问的网络附加存储器,其中所述通信网络可以是因特网、一个或多个内部网、局域网、广域网、存储局域网等,或其适当组合。存储器控制器可控制设备的诸如CPU和外设接口之类的其他组件对存储器的访问。
处理单元与所述接口单元和存储单元相连。所述处理单元包括一个或多个处理器(CPU)。处理单元可操作地与存储单元执行数据读写操作。处理单元执行诸如获取光强度点阵数据、暂存地标特征、执行地标特征匹配计算、根据反馈自移动装置和/或行为执行装置等的测量数据而输出控制指令等。所述处理单元包括一个或多个通用微处理器、一个或多个专用处理器(ASIC)、一个或多个数字信号处理器(Digital Signal Processor,简称DSP)、一个或多个现场可编程逻辑阵列(Field Programmable Gate Array,简称FPGA)、或它们的任何组合。处理单元还与I/O端口和人机交互装置可操作地耦接,该人机交互装置可使得用户能够与移动机器人进行交互。例如,输入预设高度值等配置操作。因此,人机交互装置可包括按钮、键盘、鼠标、触控板等。该I/O端口可使得移动机器人能够与如移动装置和/或行为执行装置等各种其他电子设备进行交互,所述其他电子设备包括但不限于:所述移动机器人中移动装置中的电机,或移动机器人中专用于控制移动装置和清洁装置的从处理器,如微控制单元(Microcontroller Unit,简称MCU)。
为了利用地标图案和工作模式之间的关联关系执行相应行为操作,移动机器人预先存储所述地图数据中包含有第一地标数据,以及预先构建其中至少部分第一地标数据与工作模式 之间的数据关联关系,由此确定对应第一地标数据的地标图案与工作模式之间的关联关系。
请参阅图7,其显示为一种第一地标数据的标记方法的流程示意图,其用于将移动机器人在移动期间所识别到的地标图案转换成第一地标数据并记录在地图数据中。
在此,所述第一地标数据的标记方法可执行于移动机器人构建地图数据的阶段;或者执行于移动机器人更新地图数据的阶段。例如,在构建地图数据期间,移动机器人将其移动的初始位置映射到地图数据中预设的定位位置,如地图数据中的坐标原点,并开始获取不同位置的光强度点阵数据,以及测量移动机器人相对于第一位置Pos1的后一位置Pos2处的位置和姿态变化,利用在不同位置(Pos1和Pos2)所拍摄的光强度点阵数据中相匹配的地标特征及其在各光强度点阵数据中的像素位置之间的位置偏差,以及相对于第一位置Pos1而言移动机器人在位置和姿态上的变化数据(又称相对位姿数据)等,确定后一位置Pos2在地图数据中的拍摄位置信息,以及确定相匹配的地标特征的地标位置信息;将所拍摄的光强度点阵数据及其拍摄位置信息(又称定位信息)记录在地图数据中,以及将基于相匹配的地标特征及其地标位置信息而确定的第一地标数据或第二地标数据记录在地图数据中。当移动机器人利用上述方式遍历了工作场所时,移动机器人得到对应工作场所的地图数据,其中,地图数据中包含了对应地标图案的第一地标数据和未对应地标图案的第二地标数据。又如,在移动机器人导航移动期间,移动机器人在第二位置拍摄第一光强度数据并提取其中的多个地标特征A,将第一光强度数据及其多个地标特征A与地图数据中的各第二光强度数据及其各第一地标数据和各第二地标数据中的地标特征进行匹配,并根据相匹配的地标特征及其在各光强度点阵数据中的像素偏差确定第二位置在地图数据中的拍摄位置信息(又称定位信息);根据地图数据更新的实时性要求,所述移动机器人还实时或延时执行:依据在不同位置均拍摄到的该多个地标特征A中未记录在地图数据中的第一地标数据或第二地标数据,更新在地图数据(如将相应的第一地标数据或第二地标数据添加至地图数据中);和/或依据地图数据中匹配次数低于预设匹配次数阈值的第一地标数据或第二地标数据,更新在地图数据(如删除地图数据中的相应第一地标数据或第二地标数据)。
如图7所示,所述移动机器人执行以下步骤S110-S130,以实现将第一地标数据标记在地图数据中。在此,对应第一地标数据的地标图案通常设置在与工作场所中的工作区域相关联的位置处、或与工作区域内的工作路径相关联的位置处,为此,将第一地标数据标记在地图数据中有助于移动机器人进行定位以及选择执行相关行为操作。
在步骤S110中,获取光强度点阵数据。其中,光强度点阵数据来自于存储装置。例如,通过数据库的读取操作或利用调用存储器的读取指令,读取存储装置中一存储地址中的光强 度点阵数据。
或者,所述光强度点阵数据可来自于移动机器人所配置的光感应装置。根据移动机器人所配置的光感应装置的类型,所述光强度点阵数据为一维数据或二维数据。根据移动机器人所配置的光感应装置的可探测范围大于或等于光感应装置的视角范围,所述光强度点阵数据为根据至少一次拍摄得到的。
以利用可运动的单点激光感应装置获取到一维的光强度点阵数据为例,若所拍摄的光强度点阵数据包含地标图案,其中,该地标图案是利用高反光材料制得几何形状E11和E12组合而成的,则光强度点阵数据如图8A所示。
以利用摄像装置获取到二维的光强度点阵数据为例,若所拍摄的光强度点阵数据包含地标图案,其中,该地标图案是利用高反光材料制得几何形状E21和E22组合而成的,则光强度点阵数据如图8B所示。
若利用光感应装置移动期间所获取的多个光强度点阵数据拼接而成的一光强度点阵数据,则可获得与图8A和8B类似且视角范围更宽的一维或二维数据。在此未予图示。
需要说明的是,与高反光材料类似,地标图案也可以采用地反光材料制得,移动机器人所得到的光强度点阵数据与图8A和8B所示的光强度相反,在此不再详述。还需要说明的是,为便于描述一维光强度点阵数据和二维光强度点阵数据,对图8A和8B进行了夸张/截取/滤光等处理,而非一维光强度点阵数据和二维光强度点阵数据表示仅如图所示。
在步骤S120中,确认所获取的光强度点阵数据中包含对应地标图案的图案区域。
在此,移动机器人通过对所获取的光强度点阵数据中各像素位置的光强度值进行图案筛选,选出光强度点阵数据中符合图案筛选条件的各像素位置所构成的待确认的图案区域;通过对所述待确认的图案区域进行图像分析确定其是否与一地标图案相匹配,若相匹配,则确定光强度点阵数据中包含对应地标图案的图案区域,并执行步骤S130,反之则重新执行步骤S110以获取新的光强度点阵数据。
其中,所述图案筛选条件用于去除光强度点阵数据中明显不符合地标图案的图案规则的光强度值,以使筛选后的光强度点阵数据按照的各像素位置的光强度值形成至少一个连通域(又称待确认的图案区域),为此,所述图案筛选条件至少包括基于光强度强弱而设置的筛选条件。还可以包括为了去除图像噪声而设置的筛选条件。例如,去除连通域面积小于预设的连通域面积阈值的待确认的图案区域的筛选条件。又如,去除连通域形状明显区别于地标图案所对应的图案模板形状的待确认的图案区域的筛选条件。
在利用图案筛选条件而筛选后的光强度点阵数据中,移动机器人得到基于所保留的/所滤 除的各像素位置围成的待确认的图案区域,其举例包括由一个封闭轮廓所形成的图案区域、或包含多个封闭轮廓的组合所形成的图案区域。移动机器人通过对待确认的图案区域进行图像分析,以确定图案区域所对应的地标图案。
需要说明的是,上述提及的图像筛选过程也可以根据移动机器人的光感应装置所提供的光强度点云数据的数据量、或移动机器人所执行操作的实际环境情况予以省略。例如,移动机器人对获取的光强度点云数据中基于相邻像素位置的光强度值的突变条件而划分出图案区域或图案区域的组合,并利用识别条件进行地标图案的识别。
在一些实施例中,移动机器人根据反映地标图案的图案模板识别从所述光强度点阵数据筛选出的待确定的图案区域,以确认所获取的光强度点阵数据中是否包含对应地标图案的图案区域。其中,所述图案模板举例包括地标图案中轮廓、和/或构成地标图案在光强度点阵数据中所呈现的图像特征。例如,所述图案模板包括:所述地标图案的轮廓在光强度点阵数据中所呈现的图像特征,如角特征、线特征、轮廓特征及其任意组合;和/或根据地标图案的轮廓而设置的图案线条或图案掩膜。
以所述地标图案为利用具有一定光反射系数的材料所制成的图案为例,其中,所述地标图案所使用的材料的光反射强度区别于其周边材料的光反射强度,以利于移动机器人进行识别。移动机器人中预设有对应所述地标图案的图案模板,当移动机器人获取到光强度点阵数据时,根据光强度值进行划分,得到待确定的图案区域,通过分析所述图案模板与待确定的图案区域的相似性,得到所获取的光强度点阵数据中是否包含地标图案的确认结果。其中,所述图案模板是基于地标图案的轮廓而设置的图像数据,其包括但不限于描述地标图案的轮廓的图案线条或至少一个图案掩膜。请参阅图9A和图9B,其分别显示为以卡通图案为例的地标图案,及其对应的图案掩膜,对应的图案模板包括对应该卡通图案外轮廓的图案掩膜、以及对应该卡通图案中局部轮廓的图案掩膜等。又如,仍以图9A中所示的卡通图案为例,并结合图9C,其显示为对应的图案模板为构成卡通图案的轮廓线条。利用上述任一示例所提供的图案模板,移动机器人按照光强度点阵数据中相邻像素的光强度值上的突变条件或按照图案筛选条件,从所获取的光强度点阵数据中提取至少一个待确定的图案区域,移动机器人利用所述图案模板对所得到的各待确定的图案区域的相似度进行匹配,以从中确定光强度点阵数据中包含对应地标图案的图案区域。
上述各示例中所述的利用相似度确定所述光强度点阵数据中包含对应地标图案的图案区域的方式可采用图像相似性算法,例如,利用直方图比较算法、或利用图像特征比较算法等。
上述示例中所描述的地标图案及其确定光强度点阵数据中包含地标数据的方式还可以用 于地标数据为利用一个或多个几何形状的组合而成的情况。为适配移动机器人在不同工作场所中布置的不同地标图案,移动机器人存储对应多个地标图案的图案模板。如上述图9A-9C的示例可见,移动机器人所存储的不同图像模板的数量包含可适配各类工作场所中所需布置的地标图案的数量,使得移动机器人和配有地标图案的附属物品在出厂时即彼此适配。
为减少移动机器人所存储的地标图案的图案模板的数量,提高地标图案的灵活处置能力,在所述地标图案为利用具有一定光反射系数的材料所制成的多个几何形状组合而成的图案的示例中,移动机器人还可以预存有各几何形状在光强度点阵数据所呈现的几何特征,换言之,各所述几何特征是基于对应于组成地标图案的几何形状,以及光感应装置所能提供的光强度点阵数据的点阵维度而确定的。其中,所述几何特征举例包括:几何形状的轮廓在图像中的特征。例如,所述几何特征包括根据地标图案中各几何图形的轮廓而设置的各图案模板。又如,地标图案中包含矩形,所述矩形在一维的光强度点阵数据中的几何特征包括:长度不少于预设像素数量的线段的图像特征。所述几何特征还举例包括:图像中反映几何形状中的几何数据或几何数据之间关联关系等的特征等。例如,地标图案中包含三角形,所述三角形在二维的光强度点阵数据中的几何特征包括:对应呈现与三角形的至少两个角度一致的图像特征。
在一些示例中,移动机器人通过执行以下步骤,确定所述光强度点阵数据中包含对应地标图案的图案区域。
在步骤S121中,根据预存储的几何特征,从所获取的光强度点阵数据中提取包含至少一种几何特征;其中,所述几何特征对应用于组成地标图案的几何形状。
在步骤S122中,根据所提取的各几何特征确定相应的地标图案。
在此,移动机器人根据预存储的几何特征对所获取的光强度点阵数据进图像匹配,根据所匹配的几何特征所对应的各几何形状、或者根据所匹配的各几何特征所对应的各几何形状及其之间的摆放位置关系,确定相应的地标图案。移动机器人中预存储有包含直角等腰三角形、正方形、矩形、和圆形等多种几何形状的几何特征,利用所存储各几何特征对所获取的光强度点阵数据进行遍历式的相似度计算;根据各相似度计算结果从中提取出直角等腰三角形和圆形的几何特征,以及根据光强度点阵数据中直角等腰三角形和圆形的像素位置关系,确定地标图案中直角等腰三角形和圆形的摆放位置关系为直角等腰三角形的最长边与圆形相邻,由此确定地标图案为由直角等腰三角形和圆形构成的,且直角等腰三角形的最长边与圆形相邻。
在另一些示例中,确定所述光强度点阵数据中包含对应地标图案的图案区域的方式还可 以采用如步骤S123-S125的方案,即通过将地标图案和光强度点阵数据转换至同一坐标系(如地图坐标系或图像坐标系)下,进行相似度匹配,以确定光强度点阵数据中包含对应地标图案的图案区域。
在步骤S123中,从所述光强度点阵数据中提取待确定的图案区域。
在步骤S124中,判断所述地标图案与待确定的图案区域在同一坐标系下的相似度是否符合匹配条件,若是,则执行步骤S125,即确定所述光强度点阵数据中包含对应地标图案的图案区域;若否,则重新执行步骤S110以获取新的光强度点阵数据。
在此,移动机器人预存有地标图案的物理尺寸,并获取在拍摄光强度点阵数据时光感应装置的拍摄参数,如单点光束/结构光的飞行时间、或光学焦距等。移动机器人按照预设的突变条件或图案筛选条件提取光强度点阵数据中的待确定的图案区域,利用上述数据移动机器人将各图案区域转换至世界坐标系下,或将地标图案转换至基于各图案区域构建的图像坐标系下,在同一坐标系下进行相似度匹配,以确定所述光强度点阵数据中包含对应地标图案的图案区域。
以移动机器人所获取的光强度点阵数据为一维数据为例,请参阅图1A并结合图10A,其中图10A显示为移动机器人所获取的光强度点阵数据。其中,地标图案举例为多个宽度不同的矩形拼接而成的图案。移动机器人预存有各矩形宽度的物理尺寸、各矩阵之间间隔的物理尺寸、各宽度矩形的排列顺序中的至少一种等。移动机器人按照预设的突变条件或图案筛选条件提取光强度点阵数据中待确定的图案区域包括:光强度值大于预设光强度阈值的像素点所组成的第一线段区域,提取光强度点阵数据中第一线段区域之间间隔的第二线段区域,其中第一线段区域和第二线段区域均属于图案区域。移动机器人还获取对应当前拍摄光强度点阵数据的光束飞行时间,并利用飞行时间计算出移动机器人至反射光束的物体表面测量点之间的距离D和角度α,以及光感应装置的内部参数(如像素间距等),构建预存的地标图案与所提取的各图案区域之间的映射关系,以将地标图案中各物理尺寸所表示的物理线段映射到图案区域所在一维坐标轴下,或者将所提取的各图案区域映射到所述物理线段的一维坐标轴下;通过统计各物理线段与各图案区域之间的位置偏差,确定所述光强度点阵数据中包含/不包含地标图案。其中,统计各物理线段与各图案区域之间的位置偏差的方式举例包括:根据各第一线段区域和第二线段区域之间的位置关系,分别对各第一线段区域与和第二线段区域的两端点与各物理线段的两端进行均方差/均值等统计计算,若所得到的统计结果满足预设的匹配条件,则确定所述光强度点阵数据中包含对应地标图案的图案区域,反之,则确定所述光强度点阵数据中不包含地标图案。
以移动机器人所获取的光强度点阵数据为二维数据为例,请参阅图1A并结合图10B,其中图10B为移动机器人所获取的光强度点阵数据。其中,地标图案举例为多个宽度不同的矩形拼接而成的图案。移动机器人预存有各矩形宽度、长度的物理尺寸、各矩阵之间间隔的物理尺寸、各宽度矩形的排列顺序中的至少一种等。移动机器人按照预设的突变条件或图案筛选条件提取光强度点阵数据中的待确定的图案区域包括:光强度值大于预设光强度阈值的像素点所组成的矩形区域,其中矩形区域属于图案区域。移动机器人还获取对应当前拍摄光强度点阵数据的物方焦距和像方焦距等,以及光感应装置的内部参数(如像方焦距、物方焦距、视角范围等),利用上述各数据,构建预存的地标图案与所提取的各图案区域之间的映射关系,以将依据预设的各物理尺寸所表示的地标图案映射到图案区域所在图像二维坐标系下,或者将所提取的各图案区域映射到所述地标图案的二维坐标系下;通过统计地标图案中的各矩形与各图案区域之间的位置偏差,确定所述光强度点阵数据中包含/不包含地标图案。其中,统计各矩形与各图案区域之间的位置偏差的方式举例包括:根据各矩形之间的位置关系,分别对各图案区域的端点坐标与各矩形的端点坐标进行基于距离的方差或均方值等统计计算,若所得到的统计结果满足预设的匹配条件,则确定所述光强度点阵数据中包含对应地标图案的图案区域,反之,则确定所述光强度点阵数据中不包含对应地标图案的图案区域。其中,所述匹配条件时基于统计方式而确定的,例如,若利用方差进行距离统计,则匹配条件为各端点坐标之间距离的方差统计结果与均值之间的偏差小于预设偏差阈值。
若移动机器人中预存有多个地标图案,则从各地标图案中选择若干地标图案的物理尺寸进行上述处理,若根据各统计结果确定待确定的图案区域满足位置偏差最小且符合匹配条件,则确定所述光强度点阵数据中包含对应地标图案的图案区域。其中,所选择的若干地标图案的物理尺寸举例包括:选择所有地标图案,或者根据与所提取的图案区域相匹配的几何特征/图案模板等选取包含相应几何特征/图案模板的地标图案等。移动机器人利用所选出的至少一个地标图案的物理尺寸进行逐一地坐标计算和位置偏差的统计,判断各统计结果中位置偏差最小值是否匹配条件,若是,则确定所述光强度点阵数据中包含对应地标图案的图案区域,并执行步骤S130,反之,则执行步骤S110以获取新的光强度点阵数据。
在一些示例中,移动机器人基于预先在地图数据中划分的地图区域,从位于各地图区域内的地标数据中选择尽量多的第二地标数据以组合成唯一表示相应地图区域的第一地标数据。其中,划分地图区域的方式举例包括以下任一种或多种:人工划分,依据地图区域各中心位置之间的距离不小于预设距离阈值的方式进行划分,依据统计移动机器人的历史路径/指定路径的重叠程度和/或交叉位置进行划分等。
其中,选择第二地标数据的方式举例包括以下至少一种:选择包含具有唯一描述的第二地标数据、依据第二地标数据在地图数据中的位置关系而选择具有唯一几何形状(或组合)的第二地标数据、对能够位于单个光强度点阵数据中第二地标数据进行统计以选择在统计峰值区间内的第二地标数据。选择第二地标数据的方式还举例包括结合各地图区域在地图数据中的方位和地图区域中的第二地标数据以得到唯一表示相应地图区域的第一地标数据。
例如,移动机器人对在地图区域中各第二地标数据及其各自对应的光强度点阵数据进行统计,以筛选出能够被单一光强度点阵数据获取的多个第二地标数据,如此有利于当移动机器人移动至相应位置附近时,能够在一个光强度点阵数据中获取尽量完整的地标图案;利用不同地图区域中所筛选出的多个第二地标数据在地图数据中所形成的几何形状/几何形状组合,得到唯一的标识该地图区域的地标图案。请参阅图11,其显示为两个地图区域中经统计筛选出的各第二地标数据(由星型表示),以三角形组合的方式构建的地标图案。所述地标图案利用至少一个三角形组成,不同地标图案的各三角形形状及其位置关系各不相同,则移动机器人根据利用各第二地标数据在两个地图区域内分别构建出的不同三角形及其位置关系,确定将所筛选出的不同地图区域的第二地标数据的组合作为对应地图区域的第一地标数据组合,即所述第一地标数据的组合在相应物理空间内的测量点所构成设在相应物理空间内的地标图案,其余第二地标数据(由圆形表示)仍为用于定位的地标数据。
又如,移动机器人对各地图区域进行编码,以及对在相应地图区域中各第二地标数据及其各自对应的光强度点阵数据进行统计,以筛选出更容易被单一光强度点阵数据获取的多个第二地标数据,将所述地图区域的编码和所筛选出的第二地标数据进行组合,以形成唯一的标识该地图区域的地标图案。如此得到的地标图案为限于地图区域内的地标图案,这使得不同地图区域内可能存在相同的地标图案。为增加相同地标图案在不同地图区域的稀疏度,还可以根据地图区域之间的距离,如不相邻的地图区域,选择容忍不同地图区域对应相同地标图案的第一地标数据;或者,基于几何形状(或组合),和/或保留具有唯一描述的第二地标数据等方式对所筛选出的各地图区域的第二地标数据进行二次筛选,以得到描述地标图案的第一地标数据或第一地标数据的组合。
当地标图案被分布在工作场所的不同位置时,移动机器人在构建地图或更新地图期间将位于工作场所各区域的地标作为地标图案记录在地图数据中,以便对应于不同的工作模式。
与前述一些示例中专门设计的明显区别于周围环境的地标图案相比,依据地图数据中唯一的第二地标数据或第二地标数据的组合而确定的地标图案可以让移动机器人更适应不可预测的工作场所。例如,商用清洁机器人在出厂时不可预测其工作场所是超市或候车大厅或商 场,在商用清洁机器人构建其工作场所的地图数据时,可根据工作场所的实际情况由移动机器人设定分布在工作场所的至少一个地标图案。
在步骤S130中,根据地图数据中已记录的至少一种位置信息,确定地标图案在地图数据中的地标位置信息;其中,所述已记录的至少一种位置信息对应于所述光强度点阵数据中的地标数据和/或对应于获取所述光强度点阵数据时的拍摄位置(又称定位信息)。
在此,在构建地图阶段或是更新地图阶段,移动机器人在移动过程中向地图数据中所添加的新的数据是在地图数据中已记录的各种数据的基础上确定的,新的数据包括地标图案在地图数据中的地标位置信息、移动机器人获取所述光强度点阵数据时的拍摄位置、所述光强度点阵数据、光强度点阵数据中描述地标图案的描述信息等。其中,在地图数据中尚未记录所述地标图案时,地图数据中所记录的各地标数据视为前述提及的第二地标数据。
在此,移动机器人基于地图数据中已记录的至少一种位置信息构建光强度点阵数据的坐标系(含坐标轴)与世界坐标系之间的映射关系,利用所述映射关系建立光强度点阵数据中对应地标图案的图案区域在地图数据中的位置信息,即确定相应的地标图案在地图数据中的地标位置信息。其中,所述位置信息是基于移动机器人执行步骤S110时的定位信息、和/或执行步骤S110时所获取的光强度点阵数据中的第二地标数据而确定的。所述映射关系由矩阵参数、距离之间的比例系数、坐标偏移量中的至少一种表示。其中,所述环境测量数据为移动机器人在时间和/或空间上所测得的反映其与周围环境障碍物之间的距离、方位等数据,和/或反映移动机器人在工作场所内移动的距离、航向角等数据。所述环境测量数据举例但不限于以下一种或多种:在第一位置和第二位置分别拍摄到的光强度点阵数据,在第一位置和第二位置分别拍摄到的深度数据,从第一位置移动至第二位置的移动数据(例如移动距离和/或航向角)中的至少一种等。其中,第一位置举例为从地图数据中经查询得到的拍摄位置信息/测量位置信息、在移动机器人的移动期间历次获取光强度点阵数据时的某一拍摄位置信息、在移动机器人的移动期间历次获取深度数据时的某一测量位置信息等。第二位置为执行步骤S110时所对应的拍摄位置信息。
在一些示例中,以地图数据中已记录第一位置和第二位置所对应的定位信息为例,移动机器人得到所述第一位置和第二位置之间的物理距离,以及确定两个位置处的光强度点阵数据中对应同一地标图案的各图案区域之间的像素位置偏差,移动机器人利用像素位置偏差和物理距离构建所述映射关系,并根据已知的第一位置和第二位置,将所述图像区域的像素位置映射到所述地图数据中,以得到地标图案的地标位置信息。
在另一些示例中,以地图数据中已记录第二地标数据为例,移动机器人在第二位置拍摄 的光强度点阵数据中提取对应第二地标数据的地标特征,并利用所提取的地标特征及其在地图数据中的地标位置信息所构建的映射关系,以及该地标特征与地标图案所对应的图案特征之间的像素位置偏差,将所述图像区域的像素位置映射到所述地图数据中,以得到地标图案的地标位置信息。
在又一些示例中,以地图数据中已记录至少一种位置信息为例,移动机器人还预存储地标图案的物理尺寸,移动机器人将地标图案标记在地图数据中的方式包括S131:通过构建在第二位置(即定位信息)处时所述地标图案与光强度点阵数据中图案区域之间的映射关系,确定移动机器人和/或光强度点阵数据中的地标特征各自在所述地图数据中的位置信息与所述地标图案之间的相对位置关系;以及将所述地标图案标记在地图数据中的地标位置信息。其中,所述映射关系是基于移动机器人在第二位置处相对于地标图案的位姿以及光感应装置拍摄时的拍摄参数而确定的。所述映射关系包括但不限于以下至少一种:基于地标图案中的轮廓与光强度点阵数据中的对应图案区域的轮廓之间的映射关系;基于地标图案中各几何图形上的边/角与光强度点阵数据中的对应图案区域中的几何特征之间的映射关系;基于地标图案中的测量点与光强度点阵数据中的对应图案区域的像素块之间的映射关系。其中,测量点包括可被识别为地标的测量点,或按照预设规则提取的测量点等,其中,按照预设规则提取的测量点举例包括基于轮廓上的等间隔而确定的测量点和/或基于可被提取的轮廓特征线/点而确定的测量点等。
其中,所述光感应装置的拍摄参数包括为获取光强度点阵数据而调整光感应装置中的参数,如转动角度等;根据光感应装置的类型所述拍摄参数还举例包括以下任一种光束飞行时长、焦距等。所述光感应装置的参数还包括如视角范围、像方焦距、光感应装置的光轴在光强度点阵数据中的像素位置等反映光感应装置固有特性的固定参数。
移动机器人执行步骤S110时还获取的光感应装置中的拍摄参数,并利用拍摄参数构建地标图案上各测量点i在相机坐标系中的坐标位置,并利用相机坐标系与光强度点阵数据所在坐标系之间的转换关系,将测量点i映射到光强度点阵数据所在坐标系中的像素位置Pc i′;其中,像素位置Pc i′与所获取的光强度点阵数据中对应测量点i的像素位置Pc i之间具有像素位置偏差;该像素位置偏差是基于移动机器人与地标图案之间的相对位置关系而产生的;其中,由于移动机器人在整体移动时,其获取的光强度点阵数据中的地标特征所对应的地标与地标图案之间的相对位置关系与移动机器人与地标图案之间的相对位置关系是对应的,故而,利用所述像素位置偏差以及移动机器人的第二位置和/或光强度点阵数据中的地标特征各自在所述地图数据中的位置信息,在地图数据中标记测量点i的地标位置信息。通过将地标图案 的不同测量点标记在地图数据中,即将地标图案标记在地图数据中。所述地图数据中包含有对应该地标图案的第一地标数据。
以光感应装置所摄取的光强度点阵数据为一维数据为例,利用公式:
Figure PCTCN2020106899-appb-000001
以及
Figure PCTCN2020106899-appb-000002
将地标图案中的测量点i转换至光强度点阵数据所在坐标轴中;再利用公式:
Figure PCTCN2020106899-appb-000003
确定移动机器人与地标图案之间的相对位置关系(θ,t x)。其中,u i为测量点i映射到光强度点阵数据所在坐标轴上的坐标值,u i′为光强度点阵数据中图案区域上对应测量点i的像素位置在坐标轴上的坐标值,f x为光感应装置的测量距离,c x为光感应装置的光心在所述坐标轴上的位置,
Figure PCTCN2020106899-appb-000004
为测量点i在相机坐标系下的坐标位置;(x i,y i)为测量点i在地标图案上的坐标位置。作为一维数据,
Figure PCTCN2020106899-appb-000005
y i均为常数。
以光感应装置所摄取的光强度点阵数据为二维数据为例,利用公式:
Figure PCTCN2020106899-appb-000006
Figure PCTCN2020106899-appb-000007
以及
Figure PCTCN2020106899-appb-000008
将地标图案中的测量点i转换至光强度点阵数据所在坐标系中;再利用公式:
Figure PCTCN2020106899-appb-000009
确定移动机器人与地标图案之间的相对位置关系(θ,t x,t y)。其中,N为测量点的数量,u i为测量点i映射到光强度点阵数据所在坐标系上的坐标值,u i′为光强度点阵数据中图案区域上对应测量点i的像素位置在坐标系上的坐标值,f x,f y为光感应装置的焦距,c x、c y为光感应装置的光心在所述坐标系上的位置,
Figure PCTCN2020106899-appb-000010
为测量点i在相机坐标系下的坐标位置,(x i,y i)为测量点i在地标图案上的坐标位置。
在上述任一确定相对位置关系的示例中,若已知移动机器人的第二位置在地图数据中的定位信息,则利用该相对位置关系以及定位信息,确定各测量点在地图数据中的地标位置信息;若已知地标图案中任一测量点在地图数据中的地标位置信息,则利用该相对位置关系以及地标的地标位置信息,确定其他测量点在地图数据中的地标位置信息。在此基础上,若还已知地标图案的物理尺寸,则还利用已确定的其中部分测量点在地图数据中对应的地标位置信息和所述物理尺寸,确定剩余测量点在地图数据中的地标位置信息等。
当地标图案被分布在工作场所的不同位置时,利用上述各步骤,移动机器人在构建地图或更新地图期间将所识别到的位于工作场所各位置的地标图案记录在地图数据中,以便对应于不同的工作模式。
在又一些实施例中,所述地标图案是根据移动机器人在工作场所中的一位置范围内能够识别到的唯一的第二地标数据或第二地标数据的组合而确定的。在构建地图/更新地图过程中, 移动机器人在沿途获取的各光强度点阵数据中提取地标特征,以及根据移动机器人各定位信息确定地标特征在地图数据中的地标位置信息,以形成第二地标数据。其具体方式举例但不限于利用如公开号CN107907131A、CN111220148A、或CN109643127A等定位方案确定移动机器人的第二位置在地图数据中的定位信息,并借助定位期间所确定的各地标特征,确定相对于定位信息的各地标特征的地标位置信息。其中,上述各定位方案中的图像对应于本申请中的二维的光强度点阵数据。当移动机器人获得已构建/更新的至少部分工作场所的地图数据时,执行步骤S210:在地图数据中选取具有唯一性的第二地标数据或第二地标数据的组合,以形成第一地标数据或第一地标数据的组合,该第一地标数据或第一地标数据的组合对应一地标图案。如此得到分布在地图数据中的多个地标图案。
为了将移动机器人的工作模式与上述各示例所标记的各地标图案建立关联关系,各地标图案被布置在与工作模式对应的路径上。例如,预先人工地将带有地标图案的贴纸贴在相应路径上。又如,依据工作模式中包含的路径关联位于相应路径上的地标图案。本申请提供一种建立工作模式和地标图案关联的方法,其包括:获取对应地图数据中地标位置信息的地标图案;以及获得途经所述地标图案所在地标位置信息的至少一条路径及其行为操作;建立所述地标图案与每一种工作模式之间的关联关系并予以保存,其中,所述工作模式包括其中一条路径及其行为操作。
在一些示例中,所述关联关系可通过人机交互的方式构建而得。在此,移动机器人导航移动所使用的地图数据中包含有地标数据,移动机器人将所述地图数据以地图界面的方式展示给用户,用户通过在所述地图界面上设置路径和行为操作以生成工作模式,为移动机器人建立地标图案与至少一个工作模式的关联关系。
在此,所述用户通过利用移动机器人上的交互装置进行人机交互;或者用户通过一终端设备与移动机器人进行人机交互。
在一些具体示例中,利用人机交互的方式构建所述关联关系的方式包括:展示包含至少一处地标位置信息的地图界面;以及获取用户输入的工作模式;其中,所述工作模式中的路径途经至少一处所述地标位置信息。
以用户通过终端设备与移动机器人进行人机交互为例,例如,移动机器人还包括无线通信模块,用户通过配置终端设备中的应用程序(APP)与移动机器人建立通信连接,终端设备通过通信连接获取地图数据并显示在地图界面中,为便于用户识别地标图案,终端设备还利用图标、符号等地标标签将对应地标图案的地标位置信息标记在地图界面中。用户操作终端设备在地图界面上构建/确认包含地标标签的路径,并设置对应路径的行为操作。其中,经用 户操作确认的路径及其行为操作构成一种工作模式,由此构建地标图案与工作模式之间的关联关系。所述关联关系通过终端设备反馈给移动机器人。又如,移动机器人在利用上述步骤S110-S130或者S210中所提供的标记地标图案的各示例中,还在识别出地标图案时,通过所述地图界面向用户展示地标图案及其在地图数据中的地标位置信息,用户通过操作终端设备构建以至少一个地标图案所在位置为起点/终端的路径及其至少一种行为操作,由此构建地标图案与至少一种工作模式之间的关联关系。
在另一些具体示例中,展示包含至少一条路径及其途经的至少一处地标位置信息的地图界面;以及获取用户输入对应各条路径的行为操作信息;生成包含一条路径及其对应行为操作的工作模式。在上一具体示例中所提及的地图界面中还展示途经地标图案的至少一条路径,用户通过选择相应路径输入对一个的行为操作信息,由此确定移动机器人在相应路径上的工作模式。其中,所选择的路径可以是利用人机交互输入的或者利用机器学习获得的。
在另一些示例中,移动机器人利用机器学习获得工作模式中的路径、或工作模式中的路径和行为操作,以建立相应地标图案与至少一个工作模式之间的关联关系。
在一些具体示例中,移动机器人从各历史路径及其对应的至少一种行为操作中选取包括地标图案所在位置的多条历史路径及其各自行为操作,依据所选出的多条历史路径及其各自行为操作的组合,设置各工作模式与地标图案之间的关联关系。其中,选择途经地标图案的多条历史路径及其行为操作的方式包括:通过统计途经地标图案的历史路径及其行为操作的方式进行选择,和/或从途经至少一个地标图案的各历史路径中选择多条历史路径及其行为操作。其中所述途经至少一个地标图案之间的历史路径包括利用至少一个地标图案而构成的环形路径、和/或利用至少两个不同地标图案而构成的非环形路径等。
其中,通过统计途经地标图案的历史路径及其行为操作的方式进行选择的方式举例包括:统计途经所述地标图案的多条历史路径的重复次数,选择重复次数多于预设重复次数阈值的历史路径及其行为操作;和/或去除未执行行为操作时所产生的历史路径。例如,移动机器人去除仅在移动期间途经所述地标图案的历史路径,并通过统计包含有效行为操作的各历史路径,构建其中的工作模式与地标图案建立关联关系。
从途经至少一个地标图案的各历史路径中选择多条历史路径及其行为操作的方式举例包括:从途经至少两个地标图案的各历史路径中选择多条历史路径及其行为操作;和/或从途经一个地标图案的包含环形路径段的历史路径中选择多条历史路径及其行为操作。例如,以位于相邻和/或单个工作区域内任意一个或两个地标图案为起点和终点,从途经该一个或两个地标图案的各历史路径中截取对应起点和终点的路径段及其对应的行为操作;其中,所选的路 径段包含环形路径、或非环形路径。
请参阅图12所示,其显示为移动机器人所记录的多条历史路径,至少部分历史路径对应至少一种行为操作,经统计得到历史路径L1、L2和L3的重复次数大于预设重复次数阈值,其中历史路径L1途经地标图案Pic1和Pic2,历史路径L2两次途经Pic1,则移动机器人根据上述选择方式选择历史路径L1中以Pic1和Pic2中任一地标图案为起点另一地标图案为终端的非环形路径段L1’,以及选择历史路径L2中以地标图案Pic1为起点和终点的环形路径段L2’。
利用上述收集的历史路径及其行为操作,确定各地标图案与工作模式之间的关联关系。仍以图12为例,在非环形路径段L1’所在各历史路径中其使用过的行为操作包括清扫操作。以及在环形路径段L2’所在各历史路径中其使用过的行为操作包括清扫操作、或拖地操作,则所述地标图案Pic1所关联的工作模式包括:沿环形路径段L2’执行清扫操作,沿环形路径段L2’执行拖地操作,以及沿非环形路径段L1’执行清扫操作;所述地标图案Pic2所关联的工作模式包括沿非环形路径段L1’执行清扫操作。
在另一具体示例中,移动机器人在所识别出的地标图案时触发执行学习相应工作模式的操作,以构建所述关联关系。在此,移动机器人在所识别的地标图案、以及处于可建立所述关联关系的档位时,建立与地标图案关联的至少一种工作模式。
在此,移动机器人根据一可切换的档位选择器的档位确定其是否处于可建立所述关联关系的档位。其中,档位选择器上设有多个档位,其包括工作模式的学习档位、构建地图档位、手动工作档位、自动工作档位等中的至少两种。其中,前述步骤S110-S130、S210等示例的地图标记方案可藉由移动机器人在如工作模式的学习档位、构建地图档位、或手动工作档位下执行。
为构建地标图案与至少一种工作模式之间的关联关系,以移动机器人在工作模式的学习档位下移动并识别出在地图数据中所标记的地标图案为例,移动机器人以所识别出的地标图案所在位置为起点,学习执行至少一种行为操作时的路径,直至再次识别到一地标图案为止,并将再次识别出的地标图案所在位置设为该路径的终点。其中,再次识别出的地标图案可与起点处的地标图案相同或不同,由此得到该地标图案所对应的工作模式包括经学习得到的路径及其至少一种行为操作。
仍以移动机器人在工作模式的学习档位下移动并识别出在地图数据中所标记的地标图案为例,移动机器人以所识别出的地标图案所在位置为起点,学习移动机器人在人工操作下移动的路径,直至再次识别到一地标图案为止,并将再次识别出的地标图案所在位置设为该路 径的终点。其中,再次识别出的地标图案可与起点处的地标图案相同或不同,以及藉由移动机器人与用户的人机交互方式,由用户输入对应该路径的至少一种行为操作,由此,移动机器人确定该地标图案所对应的至少一种工作模式。其中,用户可输入属于一个工作模式的一种或两种行为操作。
以移动机器人在除自动工作档位以外的任一档位下移动并识别出在地图数据中所标记的地标图案为例,移动机器人以所识别出的地标图案所在位置为起点,向用户展示是否增加新的工作模式,当接收到用户的选择增加的输入操作时,移动机器人学习从该起点开始执行至少一种行为操作时的路径,直至再次识别到一地标图案为止,并将再次识别出的地标图案所在位置设为该路径的终点。其中,再次识别出的地标图案可与起点处的地标图案相同或不同,由此得到该地标图案所对应的工作模式包括经学习得到的路径及其至少一种行为操作。
需要说明的是,上述移动机器人上设置有可切换的档位选择器的描述仅为举例,事实上,移动机器人的档位选择器可设置在终端设备侧并通过无线通信方式将相应档位发送给移动机器人。其中,所述档位选择器可以为包含机械按钮/旋钮和电路而构成的硬件和软件结构、或借助人机交互装置而检测用户操作指令而令移动机器人确定档位的硬件和软件结构。
还需要说明的是,地标图案与工作状态之间的关联关系可以考虑/不考虑其路径的起止位置;换言之,所述工作模式中的路径可以是有向或无向的。例如,对于同一地标图案来说,其对应的多个工作模式中的路径可以是以该地标图案为起点的路径和以该地标图案为终点的路径,只要其工作模式中的路径和行为操作不完全重叠即可。又如,对于同一地标图案来说,其对应的至少一个工作模式只包含以该地标图案为起点的路径。
在再一些示例中,移动机器人还与外围装置进行交互,其中,所述外围装置是指为配合移动机器人补给能量、拿/放物品、释放/回收废料等而配套使用的装置,其包含与移动机器人配合使用的定位部件,以供帮助移动机器人准确地确定其与外围装置之间的相对位置关系。所述外围装置举例包括停泊装置(如充电桩或垃圾回收装置等)、或移动补给/回收装置等。其中,所述定位部件上设有地标图案,该地标图案所对应的几何特征/图案模板被预先存储在移动机器人中,其对应的工作模式与外围装置所能提供的功能、以及移动机器人内部产生的事件中断机制相关,该地标图案与所述工作模式预先关联。移动机器人基于内部产生的事件中断机制从一个档位切换至相应事件中断机制所对应的另一个档位,以及在识别出相应外围装置上的地标图案时,执行所关联的工作模式为:调整移动机器人与定位部件之间的位置关系和与外围装置建立接口连接。所述事件中断机制举例包括基于以下至少一种事件而产生中断的机制:移动机器人的电源管理系统根据所检测到的电源电量低于预设电量阈值时所产生 的事件,和/或移动机器人的垃圾监测系统根据所监测到的所收集的垃圾体积达到预设垃圾体积阈值时所产生的事件等。其中,所述预设指定的地标图案为预设的被识别成移动机器人的停泊装置(如充电桩或垃圾回收装置等)的地标图案。例如,移动机器人中预设有对应停泊装置的地标图案的图案模板或几何形状及其位置关系,当移动机器人内部产生相应的事件中断机制(即切换至事件中断机制所对应的档位)时,以当前位置为起点、相应地标图案所在位置为终点,执行寻找停泊装置的导航移动,当移动机器人识别出相应的地标图案时,确定工作模式包含从识别出地标图案所在位置至可执行归位操作的路径和执行归位操作。又如,移动机器人通过与用户的交互预先确定地图数据中至少一个地标图案为所述停泊装置的地标图案,当移动机器人内部产生相应的事件中断机制时,以当前位置为起点、相应地标图案所在位置为终点,执行寻找停泊装置的导航移动,当移动机器人识别出相应的地标图案时,确定工作模式包含从识别出地标图案所在位置至可执行归位操作的路径和执行归位操作。
利用上述任一种示例所构建的地标图案与工作模式之间的关联关系,当移动机器人处于自动工作档位时,请参阅图13,其显示为本申请移动机器人的一种控制方法的流程示意图,移动机器人基于所获取的光强度点阵数据而执行如图13所示的控制方法。
在步骤S310中,获取光强度点阵数据。
在此,所述移动机器人上设置的光感应装置为如前述示例中提及的光感应装置。例如,光感应装置为利用光反射原理进行测量的一维光感应装置,或者为利用光反射原理进行图像成像的二维光感应装置;对应的,获取到的光强度点阵数据为一维数据或二维数据。
在此,所述光感应装置可以固定在移动机器人上,或者可在移动机器人上移动。例如,光感应装置设置在一驱动部件上,以在驱动部件移动期间获取相应探测范围内的光强度点阵数据;其中,所获取的光强度点阵数据的数量可以是在驱动部件移动期间所获取的一个或多个,或者,所获取的光强度点阵数据为将在驱动部件移动期间所获取的多个光强度点阵数据拼接而成的。
在一些示例中,移动机器人还执行获得可切换的控制档位的步骤,以根据所获取的可切换的控制档位,执行步骤S310。例如,当移动机器人接收到对应自动清洁档位的信号时,执行步骤S310。又如,当移动机器人中的供电系统/垃圾回收系统发出相应的事件中断时,自动执行归位的导航移动,并在移动至相应外围装置附近时,执行步骤S310。在另一些示例中,移动机器人实时执行步骤S310。
为实现所获取的一个光强度点阵数据对应更宽泛的探测范围,移动机器人可控制驱动部件进行移动;或者通过移动机器人整体移动,使得光感应装置在相应探测范围内至少一次执 行步骤S310,得到可供后续步骤S320执行时使用的至少一个光强度点阵数据。
为提高移动机器人获取到包含地标图案的光强度点阵数据的效率,所述步骤S310包括步骤S311。
在步骤S311中,利用地图数据,确定所述移动机器人与其附近的地标图案之间的位姿信息,以根据所述位姿信息获取所述光强度点阵数据。
在此,移动机器人利用如前所述的公开号CN107907131A、CN111220148A、或CN109643127A等定位方案确定当前的定位信息;在所述地图数据中以所述移动机器人当前的定位信息为中心预设半径范围内查询地标图案的第一地标位置信息,以及确定所述移动机器人与相应第一地标定位信息之间的方位信息,若所述方位信息落入所述光感应装置的视角范围内,获取光强度点阵数据;或者按照所述方位信息控制移动机器人整体或驱动部件向相应方向移动以获取在移动期间的至少一光强度点阵数据。由此提高获取地标图案的效率。
在步骤S320中,从所获取的光强度点阵数据中提取对应地标图案的图案区域。
在一些示例中,所述步骤S320的执行过程与前述示例中的步骤S120的执行过程相同或相似,并将其与地图数据中预先标记的地标图案进行匹配,以确认所提取的地标图案与地图数据中所标记的地标图案相符。例如,移动机器人按照步骤S120中的表示地标图案的几何特征/图案模板等方式,从步骤S310中所获取的光强度点阵数据中提取其中一地标图案所对应的图案区域。
在又一些示例中,移动机器人将光强度点阵数据中所提取的地标特征与利用前述示例中的步骤S120中或步骤S210中所标记的第一地标数据中的地标特征进行匹配,若相匹配的地标特征相对于对应一地标图案的第一地标数据中地标特征的匹配程度大于预设的匹配程度阈值,则确定光强度点阵数据中包含对应地标图案的图案区域。所述匹配程度举例包括以下至少一种:相匹配的地标特征的数量与表示地标图案的第一地标数据的总数的占比、数量差、相似度、位置误差等。
例如,移动机器人从步骤S310中所获取的光强度点阵数据中提取地标特征,并将所提取的各地标特征与地图数据中预先标记的各地标图案中的各第一地标数据中的地标特征进行匹配,若与其中一地标图案中相匹配的地标特征的数量相对于对应一地标图案的第一地标数据中地标特征的总数的比例大于预设的匹配数量阈值,则确定相应光强度点阵数据中包含对应地标图案的图案区域;反之再执行步骤S310。
在一些示例中,若地图数据中的各地标图案各不相同,则所述移动机器人可根据步骤S320所确定的地标图案及其在地图数据中的地标位置信息,确定所述移动机器人在地图数据 中的大致定位位置,如确定其位于相应地标图案附近的位置范围,由此执行步骤S340。
在另一些示例中,若地图数据中包含部分相同的地标图案,则所述移动机器人还执行步骤S330。
在步骤S330中,获取环境测量数据,以及利用所述环境测量数据与地图数据,确定移动机器人所识别的地标图案在地图数据中的地标位置信息;其中,所述环境测量数据包括光强度点阵数据。在此,所述环境测量数据与前述示例中所提及的环境测量数据相同,在此不再重述。
例如,移动机器人还从光强度点阵数据中提取至少对应地图数据中的第二地标数据的各地标特征,以及结合各第二地标数据在地图数据中的地标位置信息,确定移动机器人所识别的地标图案在地图数据中的地标位置信息。例如,移动机器人利用步骤S320中从光强度点阵数据中提取各第一地标数据和第二地标数据的各地标特征,将所提取的第二地标数据中的各地标特征与地图数据中标记的对应相同的各地标图案附近的第二地标数据中的地标特征进行匹配,根据匹配程度确定步骤S320中识别出的地标图案在地图数据中的地标位置信息。又如,移动机器人利用步骤S320中从光强度点阵数据中提取各第一地标数据和第二地标数据的各地标特征,将所提取的各第一地标数据和第二地标数据中的各地标特征与利用地图数据中标记的对应相同的各地标图案的第一地标数据和其附近的第二地标数据中的地标特征进行匹配;根据匹配程度确定步骤S320中识别出的地标图案在地图数据中的地标位置信息。其中,上述各示例中的匹配方式举例包括计算各地标特征的描述子的相似性,甚至还包括计算各地标特征之间位置关系的误差等。在确定步骤S320中识别出的地标图案在地图数据中的地标位置信息时,确定所述移动机器人在地图数据中的大致定位位置,如确定其位于相应地标图案附近的位置范围,由此执行步骤S340。
又如,移动机器人在移动期间实时执行定位操作,以确定其在地图数据中的定位信息,根据该定位信息确定位于其附近的被识别出的地标图案在地图数据中的地标位置信息,并执行步骤S340。
在步骤S340中,基于所确定的地标图案所对应的工作模式,生成所述移动机器人的控制信息;其中,所述控制信息用于控制所述移动机器人按照所述工作模式下的包含所述地标位置信息的路径执行相应行为操作。所述控制信息中包含用于控制移动装置导航移动的控制信息和对应至少一种行为操作的控制信息。
其中,所述行为操作包括移动机器人执行与地标图案所在位置相关的移动操作、在导航移动期间/导航移动结束时执行的行为操作。例如,所述行为操作包括:当移动机器人基本正 对于停泊装置上的地标图案时,执行归位移动操作。又如,以移动机器人为商用清洁机器人为例,所述行为操作包括:清扫和/或拖地。再如,以移动机器人为仓储的配货机器人为例,所述行为操作包括:取货和/或放货等。
其中,所述行为操作和移动可以是同步执行或具有先后顺序。例如,沿路径导航至终点后执行取货操作。当所述行为操作的控制信息为多个时,各行为可以是同步执行,或具有先后顺序。例如,行为操作包括清扫和拖地操作,所述控制信息中包括清扫控制信息和拖地控制信息,并同步分别输出至移动机器人的清扫系统和拖地系统。又如,行为操作包括清扫和拖地操作,所述控制信息中包括清扫控制信息和拖地控制信息,并按照先清扫后拖地的顺序将相应控制信息分时地输出至移动机器人的清扫系统和拖地系统。
在此,所述地标图案所对应的工作模式的数量可以为一个或多个。在一些示例中,若地标图案所对应的工作模式的数量为一个,移动机器人根据相应的工作模式中的路径和行为操作生成相应的控制信息。在另一些示例中,若地标图案所对应的工作模式的数量为多个,所述步骤S340包括步骤S341。
在步骤S341中,基于用户所选择的工作模式生成相应控制信息。在此,利用移动机器人的人机交互装置、或与移动机器人通信的终端设备,移动机器人将地标图案所对应的各工作模式展示给用户,并基于用户的触发操作,选择其中一个工作模式。
以移动机器人将对应同一地标图案的多个工作模式发送至终端设备为例,基于用户对与所述移动机器人共享地图数据的一终端设备的触发操作,确定相应的工作模式。
在此,移动机器人与终端设备共享标记有地标图案的地图数据,移动机器人在识别出地标图案对应多个工作模式时,将能够使终端设备展示对应地标图案的多个工作模式的信息发送至终端设备;该信息包括相应的地标图案在地图数据中的地标位置信息、或各工作模式等。终端设备依据所共享的地图数据展示对应的多个工作模式,并根据用户触发其中一个工作模式的触发操作,将相应工作模式反馈给移动机器人;移动机器人依据所接收的工作模式生成沿其中路径移动的控制信息,并发送至移动机器人的移动装置;以及根据行为操作执行的时机,将用于控制某一或多种行为操作的控制信息分别发送至移动机器人相应的行为操作装置。
以移动机器人在本地人机交互装置上显示所识别的地标图案对应的多个工作模式为例,其中,所述人机交互装置包含一显示屏和指令输入部件,所述显示屏和指令输入部件可以是如触摸屏等的集成硬件,或者如LED屏和按键等分立的硬件。移动机器人将对应地标图案的多个工作模式展示给用户,并根据用户触发其中一个工作模式的触发操作,生成相应工作模式下的各控制信息。
其中,上述任一示例中展示多个工作模式的方式举例包括:在所展示的地图数据中,将依据各工作模式中行为操作和路径的组合方式以不同颜色、不同线条、或分屏展示中的至少一种等方式展示给用户。
在确定工作模式的情况下,移动机器人从其所在位置开始转入相应的工作模式,并执行相应的移动和行为操作的控制。
在此,移动机器人通过确定移动机器人与相应的地标图案之间的相对位置关系,生成用于引导移动机器人沿所述工作模式中的路径移动的控制信息,以及生成对应所述工作模式中行为操作的控制信息。换言之,移动机器人根据其与地标图案之间的相对位置关系移动至相应工作模式中的路径上,并沿路径移动,以及执行相应的行为操作。其中,确定相对位置关系的方式包括:至少利用映射到地图数据中的至少一种位置信息,确定移动机器人与相应的地标图案之间的相对位置关系,该方式将在下述步骤S341中详述。
在如移动机器人执行自动清洁任务等自主移动并在移动期间完成行为操作的工作模式下,移动机器人按照工作模式所对应的预设路径生成用于自主移动的控制信息以及用于行为操作的控制信息。其中,所述地标图案为移动机器人预先标记在地图数据中的任一种地标图案。
如图12中的环形路径L2’,其中,移动机器人在沿所述环形路径L2’移动期间执行清扫操作,移动机器人在识别出所述地标图案时根据其与地标图案之前的相对位置关系,确定其在所述环形路径上的偏差,并以当前位置为起点通过迭代式定位地导航移动方式实时地向移动装置输出沿所述环形路径移动的控制信息,以及实时地向清扫装置输出用于清扫的控制信息。其中,所述用于清扫的控制信息包括:用于控制边刷旋转速度的控制信息、用于控制风机的吸力的控制信息等;其中,所述用于清扫的控制信息可与移动机器人所在位置相关,例如,所述用于清扫的控制信息在移动机器人当前位置接近障碍物,则相应的控制信息包含加快边刷旋转速度的控制信息等。
在如移动机器人执行归位操作等自主移动并在移动至预设位置时执行行为操作的工作模式下,移动机器人按照所述工作模式生成从当前位置至预设位置的路径及其用于自主移动的控制信息,以及当定位在预设位置时,生成用于执行如归位操作等行为操作的控制信息。
其中,所述地标图案通常为预先设置在外围装置上的明显区别于周围环境的地标图案来说,利用地标图案的物理尺寸可确定其在地图数据中摆放方位,例如,根据地标图案中矩形的各角位置及其边长值确定其在地图数据中的位置,由此便于确定移动机器人与地标图案之间的路径。
在一些示例中,所述步骤S340包括:步骤S342和S343。
在步骤S342中,至少利用映射到地图数据中的至少一种位置信息,确定移动机器人与相应的地标图案之间的相对位置关系。
移动机器人根据其移动至外围装置附近时所确定的所述地图数据中的至少一种位置信息,确定设置在外围装置上的地标图案相对于移动机器人之间的相对位置关系。其中,所述位置信息包括:当前移动机器人在地图数据中的定位信息、和/或所获取的光强度点阵数据中地标特征在地图数据中的地标位置信息等。
例如,利用如公开号CN107907131A、CN111220148A、或CN109643127A等定位方案确定移动机器人在地图数据中的定位信息,以及根据地图数据中地标图案所在地标位置信息,确定移动机器人与地标图案之间的相对位置关系。又如,光强度点阵数据中至少部分地标特征的地标位置信息是基于与地图数据中的第一地标数据和/或第二地标数据进行匹配确定的,以及由于移动机器人和光感应装置是整体移动,故而光强度点阵数据中对应地标图案的图案区域与所匹配的地标特征之间的像素位置偏差反映移动机器人与地标图案之间的相对位置关系;利用上述原理,移动机器人利用所能提取的相应数据计算所述相对位置关系。
在步骤S343中,根据所述相对位置关系,生成用于移动至归位操作时的位置的路径及其控制信息。
在此,移动机器人利用地图数据和所述相对位置关系,生成从当前位置至归位操作时的位置的路径,并依据所述路径输出相应的控制信息。其中,所述归位操作时的位置包括根据预设的移动机器人的航向角与地标图案之间的角度偏差、或者所述角度偏差以及移动机器人与地标图案之间的距离。
In still other examples, step S340 includes steps S344 and S345.
In step S344, the relative positional relationship between the mobile robot and the landmark pattern is determined by constructing a mapping relationship between the landmark pattern and the corresponding pattern region in the light intensity lattice data.
Here, when acquiring the light intensity lattice data, the mobile robot also acquires the imaging parameters of the light sensing device, constructs the mapping relationship between the extracted pattern region and the corresponding landmark pattern using these imaging parameters, and determines the relative positional relationship between the mobile robot and the landmark pattern according to the mapping relationship. The mapping relationship and the imaging parameters are as described in the aforementioned step S131 and are not repeated here.
Because the mobile robot moves as a whole, the relative positional relationship between the pattern region in the light intensity lattice data it acquires and the landmark pattern corresponds to the relative positional relationship between the mobile robot and the landmark pattern. Therefore, the mapping relationship is determined by mapping the landmark pattern and the pattern region into the same coordinate system and determining when their degree of overlap satisfies a behavior-operation condition. Examples of the behavior-operation condition include a homing-operation condition, or a pick-up/retrieval operation condition.
Referring to FIG. 14, which is a schematic diagram of a scenario of the mobile robot in a working mode that includes a homing operation, the path in this working mode is related to the relative positional relationship between the mobile robot and the landmark pattern. The start point of the path is the position of the mobile robot when the landmark pattern is captured at any location, and the end point of the path is the position at which the mobile robot, after adjusting its pose, is determined to be able to perform the homing operation.
To this end, the homing-operation condition includes that, in order to perform the homing operation, the angular deviation between the mobile robot and the landmark pattern does not exceed an angular-deviation threshold, and may further include that the distance between the mobile robot and the landmark pattern does not exceed a distance threshold. The homing-operation condition can be predicted/calibrated in advance by mapping the physical dimensions of the landmark pattern to pixel dimensions in the light intensity lattice data.
Using the imaging parameters, the mobile robot constructs the coordinate position of a measurement point i of the landmark pattern in the camera coordinate system and, using the transformation relationship between the camera coordinate system and the coordinate system of the light intensity lattice data (for example, matrix transformation coefficients set according to the intrinsic and extrinsic parameters of the light sensing device), maps the measurement point i to a pixel position Pc_i' in the coordinate system of the light intensity lattice data. There is a pixel position deviation between the pixel position Pc_i' and the pixel position Pc_i of the corresponding measurement point i in the acquired light intensity lattice data; this pixel position deviation arises from the relative positional relationship between the mobile robot and the landmark pattern.
In some specific examples, where multiple measurement points are selected on the landmark pattern, in order to compute the relative positional relationship the mobile robot may determine the angular-deviation component of the relative positional relationship between the mobile robot and the landmark pattern according to the overall angular deviation between the pixel positions Pc_i' and Pc_i. The overall angular deviation is determined on the basis that the error between the angular deviation of each pair of pixel positions Pc_i' and Pc_i and the angular deviation in the homing-operation condition does not exceed a preset error threshold.
In other specific examples, where multiple measurement points are selected on the landmark pattern, in order to compute the relative positional relationship, the distance and angular deviation that the mobile robot needs to move are computed such that the degree of overlap between the pixel positions Pc_i' and Pc_i does not exceed a preset overlap threshold, thereby determining a relative positional relationship that includes distance and angular deviation. The overlap threshold is determined based on the angular-deviation error threshold in the homing-operation condition, or on the angular-deviation error threshold and the distance threshold.
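By way of illustration only, the following sketch checks a homing-operation condition of the kind described above from the projected pixel positions Pc_i' and the observed pixel positions Pc_i of the measurement points; the way the overall angular deviation is estimated and the threshold values are assumptions of the sketch.

```python
# Minimal sketch: decide whether the homing-operation condition is met from
# the deviation between projected (Pc_i') and observed (Pc_i) pixel positions.
import numpy as np

def homing_condition_met(pc_projected, pc_observed,
                         angle_dev_thresh_rad=np.deg2rad(2.0),
                         mean_offset_thresh_px=3.0):
    pc_projected = np.asarray(pc_projected, dtype=float)   # shape (N, 2)
    pc_observed = np.asarray(pc_observed, dtype=float)     # shape (N, 2)
    offsets = pc_observed - pc_projected
    mean_offset = float(np.linalg.norm(offsets.mean(axis=0)))
    # overall angular deviation estimated from the rotation of the point set about its centroid
    centered_p = pc_projected - pc_projected.mean(axis=0)
    centered_o = pc_observed - pc_observed.mean(axis=0)
    angles = (np.arctan2(centered_o[:, 1], centered_o[:, 0]) -
              np.arctan2(centered_p[:, 1], centered_p[:, 0]))
    overall_angle = float(np.arctan2(np.sin(angles).mean(), np.cos(angles).mean()))
    return abs(overall_angle) <= angle_dev_thresh_rad and mean_offset <= mean_offset_thresh_px
```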
Taking as an example the case where the light intensity lattice data captured by the light sensing device is one-dimensional data, the measurement point i of the landmark pattern is transformed into the coordinate axis of the light intensity lattice data using the formulas
x_i^c = x_i cosθ - y_i sinθ + t_x
and
u_i = f_x · (x_i^c / y_i^c) + c_x;
the relative positional relationship (θ, t_x) between the mobile robot and the landmark pattern is then determined using the formula
(θ, t_x) = argmin_{θ, t_x} Σ_i (u_i - u_i')^2,
where u_i is the coordinate value of measurement point i mapped onto the coordinate axis of the light intensity lattice data, u_i' is the coordinate value, on that axis, of the pixel position in the pattern region of the light intensity lattice data corresponding to measurement point i, f_x is the measurement distance of the light sensing device, c_x is the position of the optical center of the light sensing device on the coordinate axis, (x_i^c, y_i^c) is the coordinate position of measurement point i in the camera coordinate system, and (x_i, y_i) is the coordinate position of measurement point i on the landmark pattern. As one-dimensional data, y_i^c and y_i are both constants.
Taking as an example the case where the light intensity lattice data captured by the light sensing device is two-dimensional data, the measurement point i of the landmark pattern is transformed into the coordinate system of the light intensity lattice data using the formulas
x_i^c = x_i cosθ - y_i sinθ + t_x,
y_i^c = x_i sinθ + y_i cosθ + t_y,
and
u_i = f_x · (x_i^c / z_i^c) + c_x,  v_i = f_y · (y_i^c / z_i^c) + c_y;
the relative positional relationship (θ, t_x, t_y) between the mobile robot and the landmark pattern is then determined using the formula
(θ, t_x, t_y) = argmin_{θ, t_x, t_y} (1/N) Σ_{i=1..N} [ (u_i - u_i')^2 + (v_i - v_i')^2 ],
where N is the number of measurement points, (u_i, v_i) is the coordinate value of measurement point i mapped into the coordinate system of the light intensity lattice data, (u_i', v_i') is the coordinate value, in that coordinate system, of the pixel position in the pattern region of the light intensity lattice data corresponding to measurement point i, f_x and f_y are the focal lengths of the light sensing device, c_x and c_y give the position of the optical center of the light sensing device in that coordinate system, (x_i^c, y_i^c, z_i^c) is the coordinate position of measurement point i in the camera coordinate system, and (x_i, y_i) is the coordinate position of measurement point i on the landmark pattern.
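By way of illustration only, the following sketch estimates the relative positional relationship (θ, t_x, t_y) by least-squares minimisation of the reprojection error, in line with the formulation above; the pinhole model with a constant landmark depth, the intrinsic parameters, and the example measurement points are assumptions of the sketch rather than values prescribed by the present application.

```python
# Minimal sketch: estimate (theta, t_x, t_y) from measurement points of the landmark
# pattern and their observed pixel positions in the light intensity lattice data.
import numpy as np
from scipy.optimize import least_squares

def project(points_xy, theta, tx, ty, fx, fy, cx, cy, depth):
    """In-plane rigid transform of the landmark measurement points, then pinhole projection."""
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    cam = points_xy @ R.T + np.array([tx, ty])          # camera-frame x, y
    u = fx * cam[:, 0] / depth + cx
    v = fy * cam[:, 1] / depth + cy
    return np.stack([u, v], axis=1)

def estimate_pose(points_xy, observed_uv,
                  fx=600.0, fy=600.0, cx=320.0, cy=240.0, depth=1.0):
    def residual(p):
        theta, tx, ty = p
        return (project(points_xy, theta, tx, ty, fx, fy, cx, cy, depth) - observed_uv).ravel()
    sol = least_squares(residual, x0=np.zeros(3))
    return sol.x                                        # (theta, t_x, t_y)

# usage: synthesize observations from a known pose and recover it
pts = np.array([[-0.1, -0.1], [0.1, -0.1], [0.1, 0.1], [-0.1, 0.1]])
obs = project(pts, 0.05, 0.02, -0.01, 600, 600, 320, 240, 1.0)
print(estimate_pose(pts, obs))
```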
In step S345, a path for moving to the position for the homing operation, and the corresponding control information, are generated according to the relative positional relationship.
To ensure that the mobile robot is indeed at a position from which the homing operation can be performed, its position and/or attitude relative to the landmark pattern may be adjusted according to the real-time relative positional relationship. Accordingly, the homing-operation condition further includes: the number of pose adjustments of the mobile robot does not exceed a preset count threshold, and/or the relative positional relationship between the mobile robot and the landmark pattern does not exceed a preset pose-alignment threshold. For example, the mobile robot generates a path and its control information using the relative positional relationship; while adjusting the pose of the mobile robot according to that path, it also acquires light intensity lattice data in real time, updates the relative positional relationship by performing steps S320-S344, and updates the path and its control information according to the updated relative positional relationship, until the resulting relative positional relationship satisfies the homing-operation condition. The mobile robot then generates control information for performing the homing operation and controls the movement device to perform the homing movement.
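By way of illustration only, the following sketch shows such an adjust-and-recheck loop bounded by a preset number of pose adjustments; acquire_relative_pose and move_towards are hypothetical callbacks standing in for steps S320-S344 and for the movement device, respectively.

```python
# Minimal sketch: iteratively adjust the pose until the homing condition is met
# or a preset number of adjustments has been exceeded.

def homing_loop(acquire_relative_pose, move_towards,
                angle_thresh=0.05, dist_thresh=0.03, max_adjustments=10):
    for attempt in range(max_adjustments):
        theta, tx, ty = acquire_relative_pose()   # e.g. estimate_pose(...) from the sketch above
        dist = (tx ** 2 + ty ** 2) ** 0.5
        if abs(theta) <= angle_thresh and dist <= dist_thresh:
            return True                           # homing condition satisfied
        move_towards(theta, tx, ty)               # update the path / control information
    return False                                  # give up after max_adjustments
```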
As another example, the execution of step S345 is the same as or similar to that of step S343 and is not detailed again here. When the mobile robot has moved to a position satisfying the homing-operation condition, the mobile robot generates control information for performing the homing operation and controls the movement device to perform the homing movement.
With the above examples, by marking landmark patterns in the map data, constructing the association relationship between landmark patterns and working modes, and further using this association relationship to control the mobile robot to autonomously perform the corresponding behavior operations in a working mode, the mobile robot works autonomously and efficiently in complex working environments; in other words, it completes tasks with high quality within the effective range of the workplace and avoids full-coverage autonomous work throughout the workplace.
The present application further provides a map updating system, a working-mode setting system, and a control system. Each system is a software system that can be configured in a mobile robot; the software system may also be configured in a network system formed by the mobile robot and a server system, in which the mobile robot and the server system cooperatively execute the modules of each system to realize the functions of the corresponding system.
Referring to FIG. 15, which shows a map updating system of the present application for marking a landmark pattern in map data, the map updating system is used to construct or update a map. The map updating system includes: an acquisition module 401, a landmark pattern recognition module 402, and a marking module 403.
The acquisition module is used to acquire light intensity lattice data; the landmark pattern recognition module is used to determine that the acquired light intensity lattice data contains a pattern region corresponding to a landmark pattern; and the marking module is used to determine the landmark position information of the landmark pattern in the map data according to at least one type of position information already recorded in the map data, where the recorded at least one type of position information corresponds to the landmark data in the light intensity lattice data and/or to the shooting position (also referred to as positioning information) at which the light intensity lattice data was acquired.
In some examples, the execution processes of the acquisition module, the landmark pattern recognition module, and the marking module correspond to the implementations of the examples of the aforementioned steps S110-S130 and are not detailed again here.
In other examples, differing from the foregoing, the execution process of the landmark pattern recognition module corresponds to the implementations of the examples of step S210 in the foregoing examples and is not detailed again here; and the marking module is used to update the second landmark data selected by the landmark pattern recognition module so that they are recorded in the map data as first landmark data.
The present application further provides an association system for constructing the association relationship between landmark patterns and working modes. The working-mode setting system includes at least one of the following: a first setting module and/or a second setting module.
The first setting module is used to construct the association relationship by way of human-machine interaction. The execution process of the first setting module may correspond to the aforementioned process of constructing the association relationship by human-machine interaction and is not detailed again here.
The second setting module is used to establish the association relationship between a landmark pattern and at least one corresponding working mode by learning the working mode corresponding to that landmark pattern. The execution process of the second setting module corresponds to the aforementioned process of establishing such an association relationship by learning and is not detailed again here.
Referring to FIG. 16, which shows a control system of the present application that controls a mobile robot using the association relationship between landmark patterns and working modes, the control system includes: an acquisition module 411, a landmark pattern recognition module 412, and a control module 413; in some embodiments it further includes a localization module (not shown).
The acquisition module is used to acquire light intensity lattice data; the landmark pattern recognition module is used to extract, from the acquired light intensity lattice data, a pattern region corresponding to a landmark pattern; the control module is used to generate control information of the mobile robot based on the working mode corresponding to the determined landmark pattern, where the control information is used to control the mobile robot to perform the corresponding operations along the path in that working mode. The localization module is used to additionally acquire environment measurement data and, using the environment measurement data and the map data, to determine the landmark position information in the map data of the landmark pattern recognized by the mobile robot, so that the position of the recognized landmark pattern can be determined using the landmark position information; the environment measurement data includes the light intensity lattice data.
Here, the execution process of the acquisition module corresponds to that of step S310 in the foregoing examples; the execution process of the landmark pattern recognition module corresponds to that of step S320; the execution process of the control module corresponds to that of step S340; and the execution process of the localization module corresponds to that of step S330. These are not detailed again here.
The present application further provides a computer-readable/writable storage medium storing at least one program which, when invoked, executes and implements at least one of the embodiments described above for the map marking method, the method for constructing the association relationship between landmark patterns and working modes, and the control method shown in FIGS. 7-13.
If the functions are implemented in the form of software functional units and sold or used as independent products, they may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or a part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes a number of instructions for enabling a mobile robot in which the storage medium is installed to perform all or part of the steps of the methods described in the embodiments of the present application.
In the embodiments provided in the present application, the computer-readable/writable storage medium may include read-only memory, random access memory, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, flash memory, USB flash drives, removable hard disks, or any other medium that can be used to store desired program code in the form of instructions or data structures and that can be accessed by a computer. In addition, any connection may properly be termed a computer-readable medium. For example, if the instructions are transmitted from a website, server, or other remote source using coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. It should be understood, however, that computer-readable/writable storage media and data storage media do not include connections, carrier waves, signals, or other transitory media, but are instead directed to non-transitory, tangible storage media. Disk and disc, as used in this application, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers.
In one or more exemplary aspects, the functions described by the computer program of the methods of the present application may be implemented in hardware, software, firmware, or any combination thereof. When implemented in software, these functions may be stored on or transmitted over a computer-readable medium as one or more instructions or code. The steps of the methods or algorithms disclosed in the present application may be embodied in processor-executable software modules, which may reside on a tangible, non-transitory computer-readable/writable storage medium. A tangible, non-transitory computer-readable/writable storage medium may be any available medium that can be accessed by a computer.
The flowcharts and block diagrams in the above-mentioned figures of the present application illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, program segment, or portion of code which contains one or more executable instructions for implementing the specified logical functions. It should also be noted that, in some alternative implementations, the functions noted in the blocks may occur in an order different from that noted in the figures. For example, two blocks shown in succession may in fact be executed substantially in parallel, or they may sometimes be executed in the reverse order, depending on the functionality involved. It should also be noted that each block of the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by special-purpose hardware-based systems that perform the specified functions or operations, or by combinations of special-purpose hardware and computer instructions.
The above embodiments merely illustrate the principles of the present application and their effects, and are not intended to limit the present application. Anyone familiar with this technology may modify or change the above embodiments without departing from the spirit and scope of the present application. Therefore, all equivalent modifications or changes made by those of ordinary skill in the art without departing from the spirit and technical concepts disclosed in the present application shall still be covered by the claims of the present application.

Claims (36)

  1. A method for marking a landmark pattern in map data, comprising:
    acquiring light intensity lattice data;
    confirming that the acquired light intensity lattice data contains a pattern region corresponding to a landmark pattern; wherein the landmark pattern is used to correspond to at least one working mode of a mobile robot;
    determining landmark position information of the landmark pattern in the map data according to at least one type of position information recorded in the map data; wherein the recorded at least one type of position information corresponds to landmark data in the light intensity lattice data and/or corresponds to positioning information of the mobile robot when acquiring the light intensity lattice data.
  2. The method for marking a landmark pattern in map data according to claim 1, wherein the step of confirming that the acquired light intensity lattice data contains a pattern region corresponding to a landmark pattern comprises: confirming that the acquired light intensity lattice data contains a pattern region corresponding to the landmark pattern by using the similarity between the landmark pattern and a pattern region to be determined that is selected from the light intensity lattice data.
  3. The method for marking a landmark pattern in map data according to claim 2, wherein the step of confirming, by using the similarity between the landmark pattern and the pattern region to be determined selected from the light intensity lattice data, that the acquired light intensity lattice data contains a pattern region corresponding to the landmark pattern comprises:
    recognizing the pattern region to be determined in the light intensity lattice data according to a pattern template reflecting the landmark pattern; and/or determining, by using the similarity between the landmark pattern and the pattern region to be determined in the same coordinate system, that the light intensity lattice data contains a pattern region corresponding to the landmark pattern; wherein the pattern region to be determined is extracted from the light intensity lattice data.
  4. The method for marking a landmark pattern in map data according to claim 1, wherein the step of confirming that the acquired light intensity lattice data contains a pattern region corresponding to a landmark pattern comprises:
    extracting at least one geometric feature from the acquired light intensity lattice data according to pre-stored geometric features;
    determining the corresponding landmark pattern according to the extracted geometric features; wherein the geometric features correspond to the geometric shapes used to compose the landmark pattern.
  5. The method for marking a landmark pattern in map data according to claim 1, wherein the step of determining landmark position information of the landmark pattern in the map data according to at least one type of position information recorded in the map data comprises:
    determining the relative positional relationship between the landmark pattern and the respective position information in the map data of the mobile robot and/or of the landmark features in the light intensity lattice data, by constructing a mapping relationship between the landmark pattern and the pattern region in the light intensity lattice data at the positioning information of the mobile robot;
    marking the landmark pattern at the landmark position information in the map data.
  6. The method for marking a landmark pattern in map data according to claim 1, wherein the step of confirming that the acquired light intensity lattice data contains a pattern region corresponding to a landmark pattern comprises: selecting, in the map data, at least one piece of second landmark data that is unique in the light intensity lattice data, so as to form first landmark data or a combination of first landmark data, the first landmark data or the combination of first landmark data corresponding to one landmark pattern; wherein the at least one piece of second landmark data is landmark data used by the mobile robot for indoor localization.
  7. The method for marking a landmark pattern in map data according to claim 1, wherein there are multiple landmark patterns; identical landmark patterns are arranged based on the sparsity of the same landmark pattern in different map regions, or the landmark patterns are different from one another.
  8. The method for marking a landmark pattern in map data according to claim 1, wherein the landmark pattern comprises at least one of: a pattern formed by at least one geometric shape or a combination thereof, a pattern clearly distinguishable from the surrounding environment, or a pattern formed by landmarks in the environment.
  9. A method for establishing an association between a working mode and a landmark pattern, comprising:
    acquiring a landmark pattern corresponding to landmark position information in map data; and obtaining at least one path passing through the landmark position information where the landmark pattern is located, together with its behavior operations; wherein the landmark position information is marked using the method according to any one of claims 1-8;
    establishing an association relationship between the landmark pattern and each working mode and saving it, wherein a working mode includes one of the paths and its behavior operations.
  10. The method for establishing an association between a working mode and a landmark pattern according to claim 9, wherein the step of obtaining at least one path passing through the landmark position information where the landmark pattern is located, together with its behavior operations, is performed in at least one of the following ways: by way of human-machine interaction; by using machine learning to obtain the path in a working mode, or the path and behavior operations in a working mode; or by way of interacting with a peripheral device marked with the landmark pattern.
  11. The method for establishing an association between a working mode and a landmark pattern according to claim 10, wherein the step of obtaining, by way of human-machine interaction, at least one path passing through the landmark position information where the landmark pattern is located, together with its behavior operations, comprises:
    displaying a map interface containing at least one piece of landmark position information; and acquiring a working mode input by a user; wherein the path in the working mode passes through at least one piece of the landmark position information; and/or
    displaying a map interface containing at least one path and at least one piece of landmark position information through which it passes; acquiring behavior-operation information input by the user corresponding to each path; and generating a working mode containing one path and its corresponding behavior operations.
  12. The method for establishing an association between a working mode and a landmark pattern according to claim 10, wherein the step of obtaining, by machine learning, the path in a working mode, or the path and behavior operations in a working mode, comprises:
    selecting, from historical paths and their corresponding at least one behavior operation, multiple historical paths that include the location of the landmark pattern and their respective behavior operations, so as to determine at least one working mode; and/or
    triggering, upon recognition of the landmark pattern, learning of the path in the corresponding working mode, or of the path and behavior operations in the working mode.
  13. The method for establishing an association between a working mode and a landmark pattern according to claim 9, wherein there are multiple landmark patterns; identical landmark patterns are arranged based on the sparsity of the same landmark pattern in different map regions, or the landmark patterns are different from one another.
  14. The method for establishing an association between a working mode and a landmark pattern according to claim 9, wherein the landmark pattern comprises at least one of: a pattern formed by at least one geometric shape or a combination thereof, a pattern clearly distinguishable from the surrounding environment, or a pattern formed by landmarks in the environment.
  15. A control method for a mobile robot, comprising:
    acquiring light intensity lattice data;
    extracting, from the light intensity lattice data, a pattern region corresponding to a landmark pattern; wherein the landmark pattern is marked with corresponding landmark position information in map data;
    generating control information of the mobile robot based on the working mode corresponding to the determined landmark pattern; wherein the control information is used to control the mobile robot to perform the corresponding behavior operations along the path, in that working mode, that contains the landmark position information.
  16. The control method for a mobile robot according to claim 15, wherein the number of working modes corresponding to the landmark pattern is at least one; when the number is multiple, the corresponding control information is generated based on the working mode selected by a user.
  17. The control method for a mobile robot according to claim 15, wherein the working mode is determined based on a user's trigger operation on a terminal device sharing map data with the mobile robot, determined based on a user's trigger operation on the mobile robot, or autonomously selected and determined by the mobile robot according to a pre-established association relationship.
  18. The control method for a mobile robot according to claim 15, wherein the start point and/or end point of the path in the working mode is set based on the landmark position information of the landmark pattern in the map data.
  19. The control method for a mobile robot according to claim 15, further comprising the steps of: acquiring environment measurement data, and determining, using the environment measurement data and the map data, the landmark position information in the map data of the landmark pattern recognized by the mobile robot; wherein the environment measurement data includes the light intensity lattice data.
  20. The control method for a mobile robot according to claim 15, wherein the mobile robot is provided with a light sensing device, and the light intensity lattice data is obtained while controlling the light sensing device to move.
  21. The control method for a mobile robot according to claim 15, wherein the step of acquiring light intensity lattice data comprises: determining, using the map data, pose information between the mobile robot and a landmark pattern in its vicinity; and acquiring the light intensity lattice data according to the pose information.
  22. The control method for a mobile robot according to claim 15, wherein the step of extracting, from the light intensity lattice data, a pattern region corresponding to a landmark pattern comprises: confirming that the acquired light intensity lattice data contains a pattern region corresponding to the landmark pattern by using the similarity between a preset landmark pattern and a pattern region to be determined that is selected from the light intensity lattice data.
  23. The control method for a mobile robot according to claim 22, wherein the step of confirming, by using the similarity between a preset landmark pattern and the pattern region to be determined selected from the light intensity lattice data, that the acquired light intensity lattice data contains a pattern region corresponding to the landmark pattern comprises:
    recognizing the pattern region to be determined in the light intensity lattice data according to a pattern template reflecting the landmark pattern; and/or
    determining, by using the similarity between multiple preset landmark patterns and the pattern region to be determined in the same coordinate system, that the light intensity lattice data contains a pattern region corresponding to a landmark pattern; wherein the pattern region to be determined is extracted from the light intensity lattice data.
  24. The control method for a mobile robot according to claim 15, wherein the step of extracting, from the light intensity lattice data, a pattern region corresponding to a landmark pattern comprises:
    extracting at least one geometric feature from the acquired light intensity lattice data according to pre-stored geometric features;
    determining the corresponding landmark pattern according to the extracted geometric features; wherein the geometric features correspond to the geometric shapes used to compose the landmark pattern.
  25. The control method for a mobile robot according to claim 15, wherein there are multiple landmark patterns; identical landmark patterns are arranged based on the sparsity of the same landmark pattern in different map regions, or the landmark patterns are different from one another.
  26. The control method for a mobile robot according to claim 15, wherein the landmark pattern comprises at least one of: a pattern formed by at least one geometric shape or a combination thereof, a pattern clearly distinguishable from the surrounding environment, or a pattern formed by landmarks in the environment.
  27. The control method for a mobile robot according to claim 15, wherein the step of generating the control information of the mobile robot based on the working mode corresponding to the determined landmark pattern comprises:
    generating, by determining the relative positional relationship between the mobile robot and the corresponding landmark pattern, control information for guiding the mobile robot to move along the path in the working mode, and generating control information corresponding to the behavior operations in the working mode.
  28. The control method for a mobile robot according to claim 15, wherein the landmark pattern is arranged on a peripheral device, and the step of generating the control information of the mobile robot based on the working mode corresponding to the determined landmark pattern comprises:
    determining the relative positional relationship between the mobile robot and the landmark pattern by constructing a mapping relationship between the landmark pattern and the corresponding pattern region in the light intensity lattice data;
    generating, according to the relative positional relationship, a path for moving to the position for the homing operation and its control information.
  29. The control method for a mobile robot according to claim 28, further comprising: continuously adjusting the pose of the mobile robot according to the relative positional relationship until, under the mapping relationship at the corresponding position, the degree of overlap between the landmark pattern and the pattern region satisfies a preset homing localization condition.
  30. The control method for a mobile robot according to claim 28, wherein the peripheral device comprises a charging dock or a waste collection device.
  31. A control device for a mobile robot, comprising:
    a storage unit storing map data and at least one program;
    a processing unit which invokes and executes the at least one program so as to coordinate with the storage unit in performing and implementing any one of the following methods: the marking method according to any one of claims 1-8, the association method according to any one of claims 9-14, or the control method according to any one of claims 15-30.
  32. A mobile robot, comprising:
    a light sensing device for acquiring light intensity lattice data;
    a movement device for performing movement operations under control;
    the control device according to claim 31, connected to the light sensing device and the movement device, so as to control the movement device to move based on the light intensity lattice data acquired from the light sensing device.
  33. The mobile robot according to claim 32, wherein the light sensing device comprises: a one-dimensional light sensing device that performs measurement using the principle of light reflection, or a two-dimensional light sensing device that performs image formation using the principle of light reflection.
  34. The mobile robot according to claim 32, wherein the mobile robot comprises a commercial cleaning robot.
  35. The mobile robot according to claim 32, further comprising a peripheral device provided with a landmark pattern and cooperating with the mobile robot.
  36. A computer-readable storage medium storing at least one program which, when run by a processor, controls the device in which the storage medium is located to perform the marking method according to any one of claims 1-8, the association method according to any one of claims 9-14, or the control method according to any one of claims 15-30.
PCT/CN2020/106899 2020-08-04 2020-08-04 Marking, association and control method and system for mobile robot, and storage medium WO2022027252A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/106899 WO2022027252A1 (zh) 2020-08-04 2020-08-04 Marking, association and control method and system for mobile robot, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/106899 WO2022027252A1 (zh) 2020-08-04 2020-08-04 Marking, association and control method and system for mobile robot, and storage medium

Publications (1)

Publication Number Publication Date
WO2022027252A1 true WO2022027252A1 (zh) 2022-02-10

Family

ID=80118679

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/106899 WO2022027252A1 (zh) 2020-08-04 2020-08-04 Marking, association and control method and system for mobile robot, and storage medium

Country Status (1)

Country Link
WO (1) WO2022027252A1 (zh)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150157182A1 (en) * 2013-10-31 2015-06-11 Lg Electronics Inc. Mobile robot and operating method thereof
CN106239504A (zh) * 2016-07-29 2016-12-21 北京小米移动软件有限公司 清洁机器人及其控制方法
CN109839921A (zh) * 2017-11-24 2019-06-04 中国电信股份有限公司 视觉定位导航方法、装置以及终端
CN107981790A (zh) * 2017-12-04 2018-05-04 深圳市沃特沃德股份有限公司 室内区域划分方法及扫地机器人
CN111476892A (zh) * 2020-03-09 2020-07-31 珠海格力电器股份有限公司 路径确定方法、装置、终端设备及存储介质

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115258106A (zh) * 2022-08-08 2022-11-01 中国舰船研究设计中心 一种船载无人潜器回收方法

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20947908

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20947908

Country of ref document: EP

Kind code of ref document: A1