CN109431381B - Robot positioning method and device, electronic device and storage medium - Google Patents

Info

Publication number
CN109431381B
Authority
CN
China
Prior art keywords
image data
pose information
historical
robot
current
Prior art date
Legal status
Active
Application number
CN201811268293.6A
Other languages
Chinese (zh)
Other versions
CN109431381A (en)
Inventor
曹晶瑛
罗晗
王磊
薛英男
蔡为燕
吴震
Current Assignee
Beijing Stone Innovation Technology Co ltd
Original Assignee
Beijing Stone Innovation Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Beijing Stone Innovation Technology Co ltd filed Critical Beijing Stone Innovation Technology Co ltd
Priority to CN202210535950.9A priority Critical patent/CN114847803B/en
Priority to CN201811268293.6A priority patent/CN109431381B/en
Publication of CN109431381A publication Critical patent/CN109431381A/en
Application granted granted Critical
Publication of CN109431381B publication Critical patent/CN109431381B/en

Classifications

    • A HUMAN NECESSITIES
    • A47 FURNITURE; DOMESTIC ARTICLES OR APPLIANCES; COFFEE MILLS; SPICE MILLS; SUCTION CLEANERS IN GENERAL
    • A47L DOMESTIC WASHING OR CLEANING; SUCTION CLEANERS IN GENERAL
    • A47L 11/00 Machines for cleaning floors, carpets, furniture, walls, or wall coverings
    • A47L 11/24 Floor-sweeping machines, motor-driven
    • A47L 11/28 Floor-scrubbing machines, motor-driven
    • A47L 11/40 Parts or details of machines not provided for in groups A47L11/02 - A47L11/38, or not restricted to one of these groups, e.g. handles, arrangements of switches, skirts, buffers, levers
    • A47L 11/4011 Regulation of the cleaning machine by electric means; Control systems and remote control systems therefor
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V 10/75 Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/10 Terrestrial scenes

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
  • Manipulator (AREA)

Abstract

The disclosure relates to a robot positioning method and device, an electronic device, and a computer-readable storage medium. The method may include: determining, from current image data acquired by an image acquisition unit, historical image data that matches the current image data, the historical image data having been acquired by the image acquisition unit at a historical moment; obtaining historical pose information of the robot at the time the historical image data was acquired; determining current pose information of the robot from ranging data currently acquired by a ranging unit; and locating the current position of the robot according to the historical pose information and the current pose information. The disclosed positioning scheme improves positioning accuracy and thereby improves the working efficiency of the robot.

Description

Robot positioning method and device, electronic device and storage medium
Technical Field
The present disclosure relates to the field of robot technologies, and in particular, to a method and an apparatus for positioning a robot, an electronic device, and a storage medium.
Background
With the development of technology, various robots with an autonomous moving function have appeared, such as automatic cleaning devices, e.g., automatic floor sweeping robots, automatic floor mopping robots, and the like. The automatic cleaning device may automatically perform the cleaning operation by actively sensing the surrounding environment. For example, in the related art, a map of an environment that needs to be cleaned currently is constructed by SLAM (simultaneous localization and mapping), and a cleaning operation is performed according to the constructed map.
However, the positioning methods in the related art suffer from inaccurate positioning, which tends to reduce the working efficiency of the robot.
Disclosure of Invention
The present disclosure provides a positioning method and apparatus for a robot, an electronic device, and a computer-readable storage medium, to solve the deficiencies in the related art.
According to a first aspect of the embodiments of the present disclosure, there is provided a positioning method of a robot, the robot being configured with an image acquisition unit and a ranging unit; the method comprises the following steps:
determining historical image data matched with the current image data according to the current image data acquired by the image acquisition unit, wherein the historical image data is acquired by the image acquisition unit at a historical moment;
acquiring historical pose information corresponding to the robot when the historical image data is acquired;
determining the current pose information of the robot according to the ranging data currently acquired by the ranging unit;
and positioning the current position of the robot according to the historical pose information and the current pose information.
Optionally, the method
further comprising: determining whether a hijacking event occurs to the robot;
the positioning the current position of the robot according to the historical pose information and the current pose information comprises: and when the robot is determined to have the hijack event, positioning the position of the robot after the hijack event according to the historical pose information and the current pose information.
Optionally, the determining whether the robot has a hijacking event includes:
and when the image data acquired by the image acquisition unit and/or the ranging data acquired by the ranging unit are mutated, determining that the robot is hijacked.
Optionally, the determining, according to the current image data acquired by the image acquisition unit, historical image data matched with the current image data includes:
when the similarity between the current image data and the historical image data exceeds a preset threshold, determining that the historical image data is matched with the current image data.
Optionally, the determining, according to the current image data acquired by the image acquisition unit, historical image data matched with the current image data includes:
when the current image data and the historical image data both contain one or more same collected objects, determining that the historical image data matches the current image data.
Optionally, the positioning the current position of the robot according to the historical pose information and the current pose information includes:
determining historical pose information of the target matched with the current pose information in the historical pose information;
and positioning the current position of the robot in a map constructed according to the ranging unit according to the historical pose information of the target.
Optionally, the positioning the current position of the robot according to the historical pose information and the current pose information includes:
acquiring a three-dimensional environment composition, wherein the three-dimensional environment composition is constructed by the historical image data and the historical pose information;
determining pose information corresponding to the current image data based on the three-dimensional environment composition;
and positioning the current position of the robot in the map constructed according to the ranging unit according to the determined pose information.
Optionally, the three-dimensional environment composition is constructed in advance according to the acquired historical image data and the historical pose information; or the three-dimensional environment composition is constructed based on the historical image data and the historical pose information after determining the historical image data matched with the current image data.
Optionally, the method further includes:
judging whether the current pose information has errors or not;
and when the current pose information is judged to have errors, correcting the current pose information by using the historical pose information.
Optionally, the determining whether the current pose information has an error includes:
and when the distance measuring unit is blocked or the robot slips at present, judging that the current pose information has errors.
Optionally, the determining whether the current pose information has an error includes:
and when the current pose information is not matched with any one of the historical pose information, judging that the current pose information has errors.
Optionally, the method further includes:
in the moving process of the robot executing specific operation, searching historical image data matched with the image data acquired by the image acquisition unit, and counting corresponding matching times;
and when the proportion of the matching times of any historical image data to the times of executing the specific operation is smaller than a preset threshold value, deleting the any historical image data and the corresponding pose information of the robot when the any historical image data is collected.
Optionally, all historical image data and historical pose information are stored in a preset database; the method further comprises the following steps:
when an updating instruction for the preset database is received, determining an open area according to a map constructed by the robot in the moving process;
and acquiring image data and corresponding pose information through the image acquisition unit in the open area so as to update the preset database.
Optionally, the robot is configured with corresponding cleaning strategies for different scene types; the method further comprises the following steps:
and in the process of executing cleaning operation by the robot, cleaning by adopting a corresponding cleaning strategy according to a scene recognition result aiming at the image data collected by the image collecting unit.
According to a second aspect of the embodiments of the present disclosure, there is provided a positioning apparatus of a robot, the robot being configured with an image acquisition unit and a ranging unit; the device comprises:
the image data determining unit is used for determining historical image data matched with the current image data according to the current image data acquired by the image acquiring unit, and the historical image data is acquired by the image acquiring unit at a historical moment;
the pose acquisition unit is used for acquiring historical pose information corresponding to the robot when the historical image data is acquired;
the pose determining unit is used for determining the current pose information of the robot according to the ranging data currently acquired by the ranging unit;
and the positioning unit is used for positioning the current position of the robot according to the historical pose information and the current pose information.
Optionally, the device
further comprising: the hijacking event determining unit is used for determining whether the robot has a hijacking event or not;
the positioning unit includes: and the first positioning subunit is used for positioning the position of the robot after the hijack event occurs according to the historical pose information and the current pose information when the hijack event occurs in the robot.
Optionally, the hijacking event determining unit includes:
and the hijacking event determining subunit is used for determining that the robot has a hijacking event when the image data acquired by the image acquisition unit and/or the ranging data acquired by the ranging unit mutate.
Optionally, the positioning unit includes:
the first determining subunit is used for determining historical pose information of the target matched with the current pose information in the historical pose information;
and the second positioning subunit is used for positioning the current position of the robot in the map constructed according to the ranging unit according to the historical pose information of the target.
Optionally, the positioning unit includes:
the acquisition subunit acquires a three-dimensional environment composition, wherein the three-dimensional environment composition is constructed by the historical image data and the historical pose information;
a second determination subunit that determines pose information corresponding to the current image data based on the three-dimensional environment composition;
and the third positioning subunit positions the current position of the robot in the map constructed according to the ranging unit according to the determined pose information.
Optionally, the three-dimensional environment composition is constructed in advance according to the acquired historical image data and the historical pose information; or the three-dimensional environment composition is constructed based on the historical image data and the historical pose information after determining the historical image data matched with the current image data.
Optionally, the method further includes:
a determination unit configured to determine whether the current pose information has an error;
and the correction unit corrects the current pose information by using the historical pose information when the current pose information is judged to have errors.
Optionally, the determining unit includes:
and the first judging subunit judges that the current pose information has errors when the distance measuring unit is blocked or the robot slips.
Optionally, the determining unit includes:
and the second judging subunit judges that the current pose information has errors when the current pose information is not matched with any one of the historical pose information.
Optionally, the method further includes:
the statistical unit is used for searching historical image data matched with the image data acquired by the image acquisition unit in the moving process of the robot executing specific operation and counting corresponding matching times;
and the deleting unit is used for deleting any historical image data and the corresponding pose information of the robot when any historical image data is collected when the proportion of the matching times of any historical image data to the times of executing the specific operation is smaller than a preset threshold value.
Optionally, all historical image data and historical pose information are stored in a preset database; the device further comprises:
the open area determining unit is used for determining the open area according to a map constructed by the robot in the moving process when an updating instruction aiming at the preset database is received;
and the updating unit is used for acquiring image data and corresponding pose information through the image acquisition unit in the open area so as to update the preset database.
Optionally, the robot is configured with corresponding cleaning strategies for different scene types; the device further comprises:
and the strategy adjusting unit adopts a corresponding cleaning strategy to clean according to a scene recognition result aiming at the image data collected by the image collecting unit in the process that the robot executes the cleaning operation.
Optionally, the image data determination unit includes:
a first image data determination subunit that determines that the history image data matches the current image data when a similarity between the current image data and the history image data exceeds a preset threshold.
Optionally, the image data determination unit includes:
and a second image data determination subunit that determines that the history image data matches the current image data when the current image data and the history image data each include one or more same captured objects.
According to a third aspect of the embodiments of the present disclosure, there is provided a robot configured with an image acquisition unit and a ranging unit; the robot further includes:
a processor;
a memory for storing processor-executable instructions;
wherein the processor implements the method as in any of the above embodiments by executing the executable instructions.
According to a fourth aspect of the embodiments of the present disclosure, there is provided a computer-readable storage medium having stored thereon computer instructions, wherein the instructions, when executed by a processor, implement the steps of the method as in any one of the above embodiments.
The technical scheme provided by the embodiment of the disclosure can have the following beneficial effects:
according to the embodiment, the image acquisition unit is configured on the robot with the ranging unit, so that the robot can acquire image data in the moving process and establish a mapping relation with pose information during image data acquisition. In the subsequent moving process, the current position of the robot can be positioned by taking the historical pose information corresponding to the historical image data matched with the current image data as reference and taking the historical pose information and the current pose information determined by the ranging unit as basis, so that the self-positioning accuracy of the robot is improved, and the working efficiency of the robot is further improved.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure.
FIG. 1 is a schematic diagram illustrating a robotic hijacking event, according to an exemplary embodiment.
Fig. 2 is a flow chart illustrating a method for positioning a robot in accordance with an exemplary embodiment.
Fig. 3-4 are flow diagrams illustrating another method for positioning a robot in accordance with an exemplary embodiment.
FIG. 5 is a flow chart illustrating a method of repositioning a robot in accordance with an exemplary embodiment.
FIG. 6 is a flow chart illustrating another method of repositioning a robot in accordance with an exemplary embodiment.
Fig. 7 is a flowchart illustrating a method of checking current pose information according to an exemplary embodiment.
FIG. 8 is a flow chart illustrating a cleaning operation performed by an automated cleaning apparatus according to one exemplary embodiment.
FIG. 9 is a block diagram illustrating a positioning device of a robot in accordance with an exemplary embodiment.
Fig. 10-21 are block diagrams illustrating another robot positioning device according to an exemplary embodiment.
Fig. 22 is a schematic diagram illustrating a positioning apparatus for a robot according to an exemplary embodiment.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present application; rather, they are merely examples of apparatuses and methods consistent with certain aspects of the present application, as recited in the appended claims.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in this application and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It is to be understood that although the terms first, second, third, etc. may be used herein to describe various information, such information should not be limited by these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information and, similarly, second information may also be referred to as first information without departing from the scope of the present application. The word "if" as used herein may be interpreted as "when", "upon", or "in response to determining", depending on the context.
In the related art, take a sweeping robot as an example: an LDS (Laser Distance Sensor) is typically deployed on the sweeping robot, and the robot is positioned during cleaning using a SLAM algorithm. However, positioning by LDS alone is of limited accuracy; moreover, when the sweeping robot is hijacked, the LDS may fail to detect the hijacking event. This is explained below with reference to fig. 1.
Referring to fig. 1, fig. 1 is a schematic diagram illustrating a hijacking event occurring in a sweeping robot according to an exemplary embodiment. As shown in fig. 1, when the sweeping robot 100 performs a cleaning operation in the square environment 10, if the sweeping robot 100 is moved from position A to position B (i.e., a hijacking event occurs), the sweeping robot 100 cannot determine from the LDS that the hijacking event has occurred, because the ranging data at positions A and B are similar. Since the sweeping robot 100 instead concludes that it is still at position A (while it is actually at position B, i.e., a relocation error), it continues to perform the cleaning operation according to the cleaning strategy for position A (for example, the cleaning route remains unchanged), which reduces cleaning efficiency.
Accordingly, the present disclosure solves the above-mentioned technical problems in the related art by improving a positioning manner of a robot having an autonomous moving function. The following examples are given for illustrative purposes.
The robot 100 provided by the present disclosure may be (but is not limited to) an automatic cleaning device such as a sweeping robot, a mopping robot or a sweeping and mopping integrated robot, and the robot 100 may include a machine body, a sensing system, a control system, a driving system, a cleaning system, an energy system and a human-computer interaction system.
Wherein:
the machine body includes a forward portion and a rearward portion and has an approximately circular shape (circular front to back), but may have other shapes including, but not limited to, an approximately D-shape with a front to back circle.
The sensing system includes a position determining device located above the machine body, a bumper located at a forward portion of the machine body, cliff sensors and ultrasonic sensors, infrared sensors, magnetometers, accelerometers, gyroscopes, odometers, and like sensing devices, providing various positional and kinematic state information of the machine to the control system. The position determining device includes, but is not limited to, a camera, a laser distance measuring device (LDS). The following describes how position determination is performed by taking a laser distance measuring device of the triangulation method as an example. The basic principle of the triangulation method is based on the geometric relation of similar triangles, and is not described herein.
The laser ranging device includes a light emitting unit and a light receiving unit. The light emitting unit may include a light source that emits light, and the light source may include a light emitting element, such as an infrared or visible Light Emitting Diode (LED) that emits infrared light or visible light. Preferably, the light source may be a light emitting element that emits a laser beam. In the present embodiment, a Laser Diode (LD) is taken as an example of the light source. In particular, a light source using a laser beam may make the measurement more accurate than other lights due to the monochromatic, directional, and collimation characteristics of the laser beam. For example, infrared or visible light emitted by a Light Emitting Diode (LED) is affected by ambient environmental factors (e.g., color or texture of an object) as compared to a laser beam, and may be reduced in measurement accuracy. The Laser Diode (LD) may be a spot laser for measuring two-dimensional position information of an obstacle, or a line laser for measuring three-dimensional position information of an obstacle within a certain range.
The light receiving unit may include an image sensor on which a light spot reflected or scattered by an obstacle is formed. The image sensor may be a set of a plurality of unit pixels of a single row or a plurality of rows. These light receiving elements can convert optical signals into electrical signals. The image sensor may be a Complementary Metal Oxide Semiconductor (CMOS) sensor or a Charge Coupled Device (CCD) sensor, and is preferably a Complementary Metal Oxide Semiconductor (CMOS) sensor due to cost advantages. Also, the light receiving unit may include a light receiving lens assembly. Light reflected or scattered by the obstruction may travel through a light receiving lens assembly to form an image on the image sensor. The light receiving lens assembly may comprise a single or multiple lenses.
The base may support the light emitting unit and the light receiving unit, which are disposed on the base and spaced apart from each other by a certain distance. In order to measure obstacles in all 360 degrees around the robot, the base may be rotatably disposed on the main body, or the base itself may remain fixed while a rotating element rotates the emitted and received light. The angular speed of the rotating element can be obtained by arranging an optical coupling element and a coded disc: the optical coupling element senses the tooth gaps on the coded disc, and the instantaneous angular speed is obtained by dividing the angular spacing of the tooth gaps by the time taken to sweep across that spacing. The denser the tooth gaps on the coded disc, the higher the accuracy and precision of the measurement, but the more precise the structure and the higher the computational load; conversely, the sparser the tooth gaps, the lower the accuracy and precision, but the simpler the structure, the smaller the computational load, and the lower the cost.
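For reference, the instantaneous-angular-speed computation described above reduces to a single ratio. A minimal sketch follows; the function name, the fixed gap pitch, and the example numbers are illustrative assumptions, not values from the disclosure.

```python
import math

def instantaneous_angular_speed(gap_pitch_rad: float, gap_transit_time_s: float) -> float:
    """Angular speed of the rotating base estimated from a single tooth-gap transit:
    the angular spacing between adjacent gaps divided by the time to sweep it."""
    return gap_pitch_rad / gap_transit_time_s

# Example: a coded disc with 360 gaps (1-degree pitch) swept in 0.5 ms
omega = instantaneous_angular_speed(math.radians(1.0), 0.5e-3)  # about 34.9 rad/s
```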
A data processing device connected to the light receiving unit, such as a DSP, records the obstacle distance values at each angle relative to the robot's 0-degree heading and transmits them to a data processing unit in the control system, such as an application processor (AP) containing a CPU. The CPU runs a particle-filter-based positioning algorithm to obtain the current position of the robot and builds a map from that position for navigation. The positioning algorithm preferably uses simultaneous localization and mapping (SLAM).
Although a laser distance measuring device based on the triangulation method can, in principle, measure distances at essentially unlimited range beyond a certain minimum distance, long-range measurement, for example beyond 6 meters, is difficult to realize in practice. This is mainly because the pixel size on the sensor of the light receiving unit is limited, and the measurement is also constrained by the photoelectric conversion speed of the sensor, the data transmission speed between the sensor and the connected DSP, and the computation speed of the DSP. Measured values are also affected by temperature in ways the system cannot tolerate, mainly because thermal expansion of the structure between the light emitting unit and the light receiving unit changes the angle between the incident and emergent light, and the light emitting and receiving units themselves also suffer from temperature drift. After the laser ranging device has been used for a long time, deformation accumulated from temperature changes, vibration, and other factors also seriously affects the measurement result. The accuracy of the measurement directly determines the accuracy of the map, is the basis for the robot's further strategies, and is therefore particularly important.
The forward portion of the machine body may carry a bumper. While the drive wheel modules propel the robot over the ground during cleaning, the bumper detects one or more events (or objects) in the travel path of the robot 100, such as an obstacle or a wall, via a sensor system such as an infrared sensor, and the robot can control the drive wheel modules to respond to those events (or objects), for example by moving away from the obstacle.
The control system is arranged on a circuit main board in the machine body and includes non-transitory memory, such as a hard disk, flash memory, and random access memory, and communication and computing processors, such as a central processing unit and an application processor. The application processor uses a positioning algorithm, such as SLAM, to draw an instant map of the environment in which the robot is located according to the obstacle information fed back by the laser ranging device. Combined with the distance and speed information fed back by the buffer, cliff sensors, ultrasonic sensors, infrared sensors, magnetometer, accelerometer, gyroscope, odometer, and other sensing devices, the control system comprehensively judges the current working state of the sweeper, for example crossing a threshold, moving onto a carpet, approaching a cliff, being stuck above or below, having a full dust box, or being picked up, and gives a specific next action strategy for each situation, so that the robot works more in line with the owner's requirements and provides a better user experience. Furthermore, the control system can plan the most efficient and reasonable cleaning path and cleaning mode based on the instant map drawn by SLAM, greatly improving the cleaning efficiency of the robot.
The drive system may steer the robot 100 across the ground based on drive commands having distance and angle information, such as x, y, and theta components. The drive system includes drive wheel modules that can control both the left and right wheels, preferably including a left drive wheel module and a right drive wheel module, respectively, for more precise control of the motion of the machine. The left and right drive wheel modules are opposed along a transverse axis defined by the body. In order for the robot to be able to move more stably or with greater mobility over the ground, the robot may include one or more driven wheels, including but not limited to universal wheels. The driving wheel module comprises a traveling wheel, a driving motor and a control circuit for controlling the driving motor, and can also be connected with a circuit for measuring driving current and a milemeter. The driving wheel module can be detachably connected to the main body, and is convenient to disassemble, assemble and maintain. The drive wheel may have a biased drop-type suspension system movably secured, e.g., rotatably attached, to the robot body and receiving a spring bias biased downward and away from the robot body. The spring bias allows the drive wheels to maintain contact and traction with the floor with a certain landing force while the cleaning elements of the robot 100 also contact the floor 10 with a certain pressure.
The cleaning system may be a dry cleaning system and/or a wet cleaning system. For a dry cleaning system, the main cleaning function comes from a sweeping system composed of a rolling brush structure, a dust box structure, a fan structure, an air outlet, and the connecting parts between them. The rolling brush, which has a certain interference with the ground, sweeps the garbage on the floor and carries it to the front of the dust suction opening between the rolling brush and the dust box; the suction generated by the fan and passing through the dust box then draws the garbage into the dust box. The dust removal capability of the sweeper can be characterized by the dust pick-up efficiency (DPU), which is influenced by the structure and material of the rolling brush, by the wind utilization rate of the air duct formed by the dust suction port, the dust box, the fan, the air outlet, and the connecting parts between them, and by the type and power of the fan, making it a complicated system design problem. Compared with an ordinary plug-in vacuum cleaner, improving dust removal capability is more significant for a cleaning robot with limited energy, because it directly and effectively reduces the energy requirement: a machine that can clean 80 square meters of floor on one charge can be developed into one that cleans 180 square meters or more on one charge. The service life of the battery also increases greatly as the number of charges decreases, so the user needs to replace the battery less often. More intuitively and importantly, dust removal capability is the most obvious and important part of the user experience, since the user directly judges whether the sweeping/mopping is clean. The dry cleaning system can also include an edge brush having an axis of rotation that is angled relative to the floor, for moving debris into the rolling brush area of the cleaning system.
The energy system includes a rechargeable battery, such as a nickel-metal hydride battery or a lithium battery. The rechargeable battery can be connected to a charging control circuit, a battery pack charging temperature detection circuit, and a battery under-voltage monitoring circuit, all of which are connected to the single-chip microcomputer control circuit. The host charges by connecting to the charging pile through charging electrodes arranged on the side or underside of the machine body.
The human-machine interaction system includes keys on the host panel for the user to select functions; it may further include a display screen and/or indicator lights and/or a loudspeaker that show the user the current state of the machine or the selected function; and it may further include a mobile phone client program. For path-navigation cleaning equipment, the mobile phone client can show the user a map of the environment in which the equipment is located and the machine's position, providing richer and more user-friendly function items.
The robot provided by the present disclosure is configured with an image acquisition unit and a ranging unit; the image acquisition unit is used for acquiring image data, and the ranging unit is used for acquiring ranging data. The image acquisition unit and the distance measurement unit can be contained in the position determination device of the sensing system. For example, the image acquisition unit may be a camera and the ranging unit may be a laser ranging device. For another example, the image acquisition unit and the ranging unit may be integrated in a camera; for example, a depth-sensing camera having a TOF (Time of flight) function, or a camera using a 3D structured light technique may be employed. Of course, the present disclosure does not limit the specific hardware form of the image acquisition unit and the ranging unit.
Based on the structure of the robot, the present disclosure provides a positioning method of the robot. As shown in fig. 2, the method may include the steps of:
in step 202, according to the current image data acquired by the image acquisition unit, historical image data matched with the current image data is determined, and the historical image data is acquired by the image acquisition unit at a historical time.
In the present embodiment, the image data acquired at the "historical time" may be understood as image data acquired during a previous run of the robot, i.e., movement that precedes the current time. Taking an automatic cleaning device as an example (of course, the scheme is not limited to automatic cleaning devices and may apply to any other robot with an autonomous moving function), the image data acquired by the automatic cleaning device during its first cleaning run may be used as the historical image data; alternatively, the image data collected by the automatic cleaning device before the current cleaning run (i.e., during historical cleaning runs) is used as the historical image data. It should be noted that the positioning solution of the present disclosure assumes that the automatic cleaning device is cleaning the same environment.
In the present embodiment, "matching" may, in one case, be understood as the degree of similarity (or matching degree) between the current image data and the historical image data exceeding a certain threshold. In another case, "matching" can be understood as the current image data and the historical image data both containing one or more of the same captured objects.
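As a hedged illustration of the two matching criteria just described, the sketch below combines them into a single predicate; the threshold value and all names are assumptions for the example, not fixed by the disclosure.

```python
def images_match(similarity: float,
                 current_objects: set[str],
                 historical_objects: set[str],
                 similarity_threshold: float = 0.8) -> bool:
    """Either criterion from the description: similarity above a preset
    threshold, or at least one captured object appearing in both frames."""
    return (similarity >= similarity_threshold
            or bool(current_objects & historical_objects))
```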
In step 204, historical pose information corresponding to the robot when the historical image data is acquired is obtained.
In this embodiment, the robot records its pose information when acquiring image data during movement and establishes a mapping relationship between the image data and the pose information. The pose information may include parameters describing the relative position between the robot and the captured object (i.e., the object photographed by the image acquisition unit to obtain the image data), such as the distance and angle between the robot and the object, as well as the robot's posture. Take an automatic cleaning device configured with an LDS and a camera (i.e., the LDS is the ranging unit and the camera is the image acquisition unit) as an example: the camera and the LDS work simultaneously to acquire their respective data. For instance, during the cleaning run at the "historical time", while acquiring image data through the camera, the automatic cleaning device uses the ranging data acquired by the LDS to build, via the SLAM algorithm, a map of the environment in which the cleaning operation is performed and determines its current position in that map; at the same time, it obtains its current attitude information from other sensing devices (such as a gyroscope, an accelerometer, and an electronic compass), and the pose information is determined from the position information and the attitude information.
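One way to picture the mapping relationship between image data and pose information is a keyed record store, as in the sketch below; the field names, the Pose layout, and the per-frame match counter (reused in a later sketch) are illustrative assumptions, not structures prescribed by the disclosure.

```python
from dataclasses import dataclass

@dataclass
class Pose:
    x: float        # position in the LDS/SLAM map (m)
    y: float
    theta: float    # heading from gyroscope/compass fusion (rad)

# preset database: one entry per historical frame, keyed by frame id
historical_db: dict[int, dict] = {}

def record_frame(frame_id: int, image, pose: Pose) -> None:
    """Store an image together with the pose the robot held when it was taken."""
    historical_db[frame_id] = {"image": image, "pose": pose, "match_count": 0}
```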
It should be noted that, as described above with respect to "matching", there may be a plurality of image data matching a certain image data, and the plurality of image data correspond to different angles, positions, orientations, and the like. For example, the robotic cleaning device may shoot the same end table at different angles, distances, and poses during the cleaning process.
In step 206, the current pose information of the robot is determined according to the ranging data currently acquired by the ranging unit.
In this embodiment, the current pose information of the robot may be determined from the ranging data together with the robot's current attitude information. Taking an automatic cleaning device configured with an LDS as an example, the device uses the ranging data collected by the LDS to build, via the SLAM algorithm, a map of the environment in which the cleaning operation is performed and determines its position in that map. At the same time, it obtains its current attitude information from other sensing devices (such as a gyroscope, an accelerometer, and an electronic compass), and the current pose information is then determined from the position information and the attitude information.
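A minimal sketch of how the current pose information could be assembled from the two sources named above, the SLAM-derived position and the attitude from the inertial sensors; the representation as an (x, y, theta) tuple is an assumption for the example.

```python
def current_pose(slam_position: tuple[float, float],
                 imu_heading_rad: float) -> tuple[float, float, float]:
    """Pose = position from the LDS/SLAM map plus heading from the
    gyroscope/accelerometer/electronic compass."""
    x, y = slam_position
    return (x, y, imu_heading_rad)
```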
In step 208, the current position of the robot is located according to the historical pose information and the current pose information.
In this embodiment, based on the fact that the robot can acquire image data through the image acquisition unit in the moving process and establish a mapping relationship with pose information corresponding to the acquired image data, in the subsequent moving process of the robot, historical pose information corresponding to historical image data matched with current image data can be used as a reference and is used together with the determined current pose information as a basis to position the current position of the robot, so that the positioning accuracy is improved, and the working efficiency of the robot is further improved.
As can be seen from the above description of step 202, there may be a plurality of historical image data matching the current image data; in other words, the historical pose information acquired in step 204 may also include a plurality of historical poses. Then, the following two ways may be included for how to perform positioning using the acquired plurality of historical pose information and the current pose information (the current pose information determined in step 206):
in an embodiment, historical pose information of a target in the historical pose information, which matches the current pose information, may be determined, and then the current position of the robot in the map constructed according to the ranging unit may be located according to the historical pose information of the target.
In another embodiment, a three-dimensional environment composition may be obtained, where the three-dimensional environment composition is constructed from the historical image data and the historical pose information, the pose information corresponding to the current image data is determined based on the three-dimensional environment composition, and then the current position of the robot in a map constructed from the ranging unit (constructed from all the ranging data collected by the ranging unit) may be located according to the determined pose information. The three-dimensional map may be generated in real time during positioning, or may be generated in advance. In other words, in one case, the three-dimensional environment composition is constructed in advance according to the acquired historical image data and the historical pose information; in another case, the three-dimensional environment composition is constructed based on the historical image data and the historical pose information after determining the historical image data matching the current image data.
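One plausible realization of the second option, not spelled out in the disclosure, is to match 2D features of the current image against the 3D points of the environment composition and solve a Perspective-n-Point problem; the sketch below assumes such 2D-3D correspondences are already available and uses OpenCV's RANSAC PnP solver.

```python
import numpy as np
import cv2

def pose_from_environment_composition(points_3d: np.ndarray,   # (N, 3) map points
                                       points_2d: np.ndarray,   # (N, 2) matched pixels
                                       camera_matrix: np.ndarray,
                                       dist_coeffs: np.ndarray):
    """Estimate the camera pose that explains where known 3D map points
    appear in the current image (RANSAC-robust Perspective-n-Point)."""
    if len(points_3d) < 4:
        return None  # PnP needs at least four correspondences
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        points_3d.astype(np.float32),
        points_2d.astype(np.float32),
        camera_matrix, dist_coeffs)
    if not ok:
        return None
    rotation, _ = cv2.Rodrigues(rvec)   # 3x3 rotation of the camera
    return rotation, tvec               # pose relative to the map frame
```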
According to the positioning scheme of the robot, the accurate repositioning of the position of the robot after the hijack event occurs can be supported for the condition that the hijack event occurs to the robot. As an exemplary embodiment, on the basis of the above-mentioned embodiment shown in fig. 2, before step 202, the following steps may be further included: determining whether the robot has a hijacking event. Then, in the case of a hijacking event of the robot, the current image data in step 202 above should be understood as the image data collected by the image collecting unit after the hijacking event of the robot; similarly, the distance measurement data currently collected by the distance measurement unit in step 206 should be understood as the distance measurement data collected by the distance measurement unit after the robot has the hijacking event, that is, the current pose information should be understood as the current pose information determined according to the distance measurement data collected by the distance measurement unit after the robot has the hijacking event. Therefore, in case of a hijacking event of the robot, the positioning operation performed in step 208 may further comprise: and when the robot is determined to have the hijack event, positioning the position of the robot after the hijack event according to the historical pose information and the current pose information. For details on how to perform the positioning operation, reference may be made to the related description in step 208, which is not described herein again.
To determine whether a hijacking event has occurred, the data collected by the image acquisition unit and the ranging unit configured on the robot can be consulted. For example, when a robot is hijacked, the image data collected by the image acquisition unit changes suddenly, and the ranging data collected by the ranging unit also changes suddenly. Therefore, as an exemplary embodiment, when the image data collected by the image acquisition unit and/or the ranging data collected by the ranging unit change suddenly, it may be determined that a hijacking event has occurred. Using the change in the image data acquired by the image acquisition unit as an additional basis for judging whether the robot has been hijacked enables accurate detection of hijacking events and facilitates subsequent relocation.
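A minimal sketch of the "sudden change" test; the specific difference measures and thresholds below are assumptions, since the disclosure only requires that a mutation in either data stream triggers the determination.

```python
import numpy as np

def hijack_detected(prev_image: np.ndarray, curr_image: np.ndarray,
                    prev_ranges: np.ndarray, curr_ranges: np.ndarray,
                    image_jump: float = 60.0, range_jump: float = 0.5) -> bool:
    """Flag a hijacking event when consecutive camera frames and/or consecutive
    LDS scans differ far more than normal motion would explain."""
    image_change = float(np.mean(np.abs(curr_image.astype(float) -
                                        prev_image.astype(float))))
    range_change = float(np.median(np.abs(curr_ranges - prev_ranges)))
    return image_change > image_jump or range_change > range_jump
```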
In the positioning solution of the robot provided by the present disclosure, the current pose information (i.e., the current pose information in step 206) may be checked, and the current pose information may be corrected after checking that there is an error in the current pose information. As an exemplary embodiment, on the basis of the embodiment shown in fig. 2, it may be determined whether the current pose information has an error, and when it is determined that the current pose information has an error, the current pose information is corrected by using the historical pose information, so as to improve the accuracy of the map constructed according to the ranging unit.
In one embodiment, when the ranging unit is blocked or the robot slips, it can be determined that the current pose information is wrong; in another embodiment, when the current pose information does not match any of the historical pose information, it may be determined that there is an error in the current pose information.
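The two error checks just listed could be combined as in the following sketch; the tolerance values and the pose representation are illustrative assumptions.

```python
import math

def pose_has_error(lds_blocked: bool, wheel_slip: bool,
                   current_pose: tuple[float, float, float],
                   historical_poses: list[tuple[float, float, float]],
                   dist_tol: float = 0.3, angle_tol: float = 0.35) -> bool:
    """Either check from the description: the ranging unit is blocked / the robot
    slips, or the current pose matches none of the historical poses."""
    if lds_blocked or wheel_slip:
        return True
    x, y, theta = current_pose
    for hx, hy, htheta in historical_poses:
        if math.hypot(x - hx, y - hy) <= dist_tol and abs(theta - htheta) <= angle_tol:
            return False   # at least one historical pose matches
    return True            # no match: treat the current pose as erroneous
```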
In the embodiment shown in fig. 2, the accuracy of positioning is improved by taking the historical pose information corresponding to the historical image data matching the current image data as a reference. Therefore, whether the historical image data and the historical pose information can reflect the actual position of the robot is important. Accordingly, the historical image data and the historical pose information can be maintained in the following manner.
In one embodiment, in a moving process of the robot performing a specific operation (taking an automatic cleaning device as an example, the moving process is a cleaning process of the automatic cleaning device), historical image data matched with the image data acquired by the image acquisition unit is searched, and corresponding matching times are counted. Based on the statistics of the matching times, when the ratio of the matching times of any historical image data to the number of times of performing the specific operation (taking the automatic cleaning device as an example, the number of times is the cleaning times of the automatic cleaning device) is smaller than a preset threshold, the any historical image data and the corresponding pose information of the robot when the any historical image data is collected can be deleted.
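A hedged sketch of the pruning rule, assuming the keyed record store from the earlier sketch (with a per-frame match counter) and an externally tracked count of completed cleaning runs.

```python
def prune_stale_frames(historical_db: dict, runs_completed: int,
                       min_match_ratio: float = 0.1) -> None:
    """Delete frames (and their stored poses) whose match count over the number
    of completed cleaning runs falls below the preset threshold."""
    stale = [fid for fid, rec in historical_db.items()
             if runs_completed > 0
             and rec["match_count"] / runs_completed < min_match_ratio]
    for fid in stale:
        del historical_db[fid]
```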
In another embodiment, all of the historical image data and the historical pose information may be stored in a preset database. When an update instruction for the preset database is received, an open area can be determined according to a map (for example, the map can be constructed according to a distance measurement unit) constructed in the moving process of the robot, and image data and corresponding pose information are collected through the image collection unit in the open area so as to update the preset database. The updating instruction can be issued to the robot by a user through a mobile terminal (which establishes communication connection with the robot); alternatively, the update instruction may be generated by the robot according to a preset update cycle.
In this embodiment, based on that the robot can collect images of a cleaned environment in the moving process, the cleaning strategy of the robot (for example, the sweeping robot) can be adjusted accordingly according to the difference of scene types of the cleaned environment, so as to improve the cleaning efficiency and the user experience. For example, the robot is configured with corresponding cleaning strategies for different scene types; then, during the cleaning operation performed by the robot, a corresponding cleaning strategy may be adopted for cleaning according to the scene recognition result for the image data acquired by the image acquisition unit.
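As an illustration of scene-dependent cleaning strategies, the sketch below maps recognized scene labels to strategy parameters; the labels and parameters are invented for the example and are not taken from the disclosure.

```python
# Illustrative only: scene labels and strategy parameters are assumptions.
CLEANING_STRATEGIES = {
    "carpet":   {"suction": "max",    "mop": False, "passes": 2},
    "hardwood": {"suction": "normal", "mop": True,  "passes": 1},
    "kitchen":  {"suction": "high",   "mop": True,  "passes": 2},
}

def strategy_for(scene_label: str) -> dict:
    """Pick the cleaning strategy configured for the recognised scene type."""
    return CLEANING_STRATEGIES.get(scene_label,
                                   {"suction": "normal", "mop": False, "passes": 1})
```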
For the convenience of understanding, the following takes a robot as an example of an automatic cleaning device (of course, the robot is not limited to the automatic cleaning device, and may also be any other robot with an autonomous moving function), and the positioning scheme of the robot of the present disclosure is described in detail with reference to the drawings and specific scenarios.
Referring to fig. 3, fig. 3 is a flowchart illustrating another positioning method for a robot according to an exemplary embodiment. As shown in fig. 3, the method applied to an automatic cleaning device (configured with a camera and an LDS, i.e. the image acquisition unit is a camera and the distance measurement unit is an LDS) may comprise the following steps:
in step 302, current image data is acquired.
In step 304, historical image data that matches the current image data is determined.
In this embodiment, the automatic cleaning device may use the image data acquired during its first cleaning run as historical image data; alternatively, the image data collected before the current cleaning run (i.e., during historical cleaning runs of the same environment) is used as the historical image data. A mapping relationship is established between the historical image data and the pose information held when that image data was collected, and the mapping is stored in a preset database. The pose information may include parameters describing the relative position between the automatic cleaning device and the captured object (i.e., the object photographed by the camera to obtain the image data), such as the distance, the angle, and the robot's posture. For example, both the camera and the LDS are in a working state during cleaning, and the pose information includes position information and attitude information. The automatic cleaning device uses the ranging data collected by the LDS, while collecting image data through the camera, to build a map of the environment in which the cleaning operation is performed via the SLAM algorithm and to determine its position in that map; at the same time, it obtains its attitude information from other sensing devices (such as a gyroscope, an accelerometer, and an electronic compass), and the pose information is then determined from the position information and the attitude information.
Note that when determining the historical image data that matches the current image data, "matching" may, in one case, be understood as the degree of similarity (or matching degree) between the current image data and the historical image data exceeding a certain threshold. For example, an image matching algorithm (mean absolute difference, sum of absolute differences, normalized cross-correlation, etc.) or a machine learning model may be used, and the image data in the preset database whose similarity with the current image data exceeds a preset threshold is taken as the "historical image data matching the current image data" in step 304. In another case, "matching" can be understood as the current image data and the historical image data both containing one or more of the same captured objects. For example, the objects contained in the current image data (e.g., a cup, a tea table, a television) may be identified (for instance, by a neural-network-based or wavelet-moment-based image recognition method), and historical image data in the preset database that also contains the identified objects (there may be several objects, and the rule may require all or only some of them to appear) is taken as the "historical image data matching the current image data" in step 304. In yet another case, the conditions of the above two cases may be combined as the basis for determining a "match": for example, a match may be determined when the similarity between the current image data and the historical image data exceeds a certain threshold and/or the two contain one or more of the same captured objects.
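The paragraph above names sum-of-absolute-differences and normalized product correlation among the candidate matchers; minimal reference implementations of both scores (for equally sized grayscale arrays) might look like this.

```python
import numpy as np

def normalized_cross_correlation(a: np.ndarray, b: np.ndarray) -> float:
    """Normalised product correlation between two equally sized grayscale
    images; values near 1 indicate a strong match."""
    a = a.astype(float) - a.mean()
    b = b.astype(float) - b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float((a * b).sum() / denom) if denom else 0.0

def sum_of_absolute_differences(a: np.ndarray, b: np.ndarray) -> float:
    """SAD score; smaller means more similar."""
    return float(np.abs(a.astype(float) - b.astype(float)).sum())
```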
In step 306, historical pose information of the automatic cleaning device at the time the historical image data was acquired is obtained.
In this embodiment, historical pose information corresponding to the historical image data determined in step 304 may be searched for in a preset database based on the established mapping relationship.
In step 308, current pose information of the automatic cleaning device is determined according to the ranging data currently acquired by the LDS.
In this embodiment, the automatic cleaning device may use the ranging data collected by the LDS to build a map of the environment being cleaned through a SLAM algorithm and determine its current position in that map. Meanwhile, it acquires its own current attitude information from other sensing devices (such as a gyroscope, an accelerometer, and an electronic compass), and then determines the current pose information from the position information and the attitude information.
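As a minimal sketch of how the current pose might be assembled from the two sources just described, reusing the `Pose` record from the earlier sketch; the interface and field names are assumptions.

```python
import math

def fuse_current_pose(slam_position, heading_rad: float) -> Pose:
    """Assemble the current pose: (x, y) comes from the LDS-based SLAM map,
    the heading from the attitude sensors (gyroscope / accelerometer / compass)."""
    x, y = slam_position
    # Normalize the heading to the range (-pi, pi] before storing it.
    yaw = math.atan2(math.sin(heading_rad), math.cos(heading_rad))
    return Pose(x=x, y=y, yaw=yaw)
```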
In step 310, historical pose information of the target in the historical pose information that matches the current pose information is determined.
In this embodiment, there may be several pieces of historical image data that match the current image data; accordingly, the historical pose information acquired in step 306 may also include several historical poses. For example, the pieces of historical image data determined in step 304 may all contain the same tea table, photographed by the automatic cleaning device at different distances, angles, and postures.
When determining the target historical pose information that matches the current pose information, the historical pose information acquired in step 306 can be compared with the current pose information one by one, and the historical pose information that is close to (or even identical with) the current pose information is taken as the target historical pose information. For example, a historical pose whose distance differs from the distance in the current pose information by less than a preset distance threshold, and whose angle differs from the angle in the current pose information by less than a preset angle threshold, may be taken as the target historical pose information.
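A sketch of the threshold-based selection just described, assuming pose objects with `x`, `y`, `yaw` attributes as in the earlier sketch; the threshold values, and the use of map position and heading rather than distance/angle to the photographed object, are illustrative assumptions.

```python
import math

DIST_THRESHOLD = 0.2                # metres  (illustrative preset threshold)
ANGLE_THRESHOLD = math.radians(10)  # radians (illustrative preset threshold)

def angle_diff(a: float, b: float) -> float:
    """Smallest absolute difference between two headings, in radians."""
    return abs((a - b + math.pi) % (2 * math.pi) - math.pi)

def select_target_pose(current, candidates):
    """Among the historical poses of step 306, pick the one closest to the
    current pose whose positional and angular differences both stay within
    the preset thresholds; return None if no candidate qualifies."""
    best, best_dist = None, float("inf")
    for hist in candidates:
        dist = math.hypot(hist.x - current.x, hist.y - current.y)
        ang = angle_diff(hist.yaw, current.yaw)
        if dist <= DIST_THRESHOLD and ang <= ANGLE_THRESHOLD and dist < best_dist:
            best, best_dist = hist, dist
    return best
```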
In step 312, the current position of the automatic cleaning device in the map constructed according to the ranging unit is located according to the historical pose information of the target.
In this embodiment, the automatic cleaning device may use the SLAM algorithm to construct a map of the environment in which the cleaning operation is performed, and then locate the current position of the automatic cleaning device according to the historical pose information of the target. The automatic cleaning device can then determine a route for performing the cleaning operation further based on the map and the current location.
Positioning accuracy is improved by using, as a reference, the historical pose information corresponding to the historical image data matched with the current image data. Whether the historical image data and historical pose information still reflect the actual situation of the environment is therefore critical, and the preset database can be maintained in the following ways.
In one embodiment, during each cleaning run of the automatic cleaning device, the historical image data matched with the image data collected by the camera can be looked up and the corresponding number of matches counted. A cleaning run is the process from the moment the automatic cleaning device starts cleaning the environment until the cleaning is completed, and each run increments the cleaning count. When the ratio of the match count of any piece of historical image data to the cleaning count falls below a preset threshold, that historical image data and the pose information recorded when it was collected may be deleted.
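A sketch of this pruning rule, assuming each database record carries the `match_count` field of the earlier sketch, incremented whenever the record is returned as a match during a cleaning run; the field name and the threshold value are assumptions.

```python
def prune_history(history_db, cleaning_count: int, ratio_threshold: float = 0.3):
    """Keep only records that were matched often enough across cleaning runs;
    a record that is rarely matched (e.g. a piece of furniture that has been
    moved) no longer reflects the environment and is dropped together with
    its recorded pose information."""
    if cleaning_count == 0:
        return history_db
    return [rec for rec in history_db
            if rec.match_count / cleaning_count >= ratio_threshold]
```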
In another embodiment, when an update instruction for the preset database is received, an open area may be determined from a map built by the automatic cleaning device during cleaning (for example, a map built from the LDS data), and the automatic cleaning device then collects image data and the corresponding pose information with its camera in that open area to update the preset database. For example, for the same cup, the automatic cleaning device may rotate to photograph the cup at different angles and postures while recording the corresponding pose information. The update instruction may be issued to the automatic cleaning device by a user through a mobile terminal or a server (which establishes a communication connection with the automatic cleaning device), or it may be generated by the automatic cleaning device itself according to a preset update cycle.
Referring to fig. 4, fig. 4 is a flowchart illustrating another positioning method for a robot according to an exemplary embodiment. As shown in fig. 4, the method applied to an automatic cleaning device (configured with a camera and an LDS, i.e. the image acquisition unit is a camera and the distance measurement unit is an LDS) may comprise the following steps:
in step 402, current image data is acquired.
In step 404, historical image data that matches the current image data is determined.
In step 406, historical pose information of the automatic cleaning device at the time the historical image data was acquired is obtained.
In the present embodiment, the specific process of steps 402-406 is similar to that of steps 302-306, and is not described herein again.
In step 408, a three-dimensional environment composition is acquired.
In this embodiment, the three-dimensional environment composition is constructed from the historical image data determined in step 404 and the historical pose information acquired in step 406. In one case, the three-dimensional environment composition is built as the historical image data and historical pose information are collected; in other words, the three-dimensional map already exists and is simply read in step 408. In the other case, the three-dimensional environment composition is built only after the historical image data matched with the current image data has been determined; in other words, the three-dimensional map is constructed from the historical image data obtained in step 404 and the historical pose information obtained in step 406.
In step 410, pose information corresponding to the current image data is determined based on the three-dimensional environment composition.
In this embodiment, the pose information corresponding to the current image data can be determined from the three-dimensional environment composition by a PnP (Perspective-n-Point) algorithm. Of course, other pose-estimation algorithms may be used, and the disclosure is not limited in this respect.
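For illustration, a PnP solve of the kind mentioned above could look as follows with OpenCV's `solvePnP`; how the 2D-3D correspondences between the current frame and the three-dimensional environment composition are obtained (e.g., by feature matching) is assumed and not shown, and the function name and inputs are illustrative.

```python
import cv2
import numpy as np

def pose_from_pnp(object_points: np.ndarray, image_points: np.ndarray,
                  camera_matrix: np.ndarray, dist_coeffs: np.ndarray):
    """Solve a PnP problem: 3D points taken from the three-dimensional
    environment composition and their 2D projections in the current frame
    yield the camera pose corresponding to the current image data."""
    ok, rvec, tvec = cv2.solvePnP(object_points, image_points,
                                  camera_matrix, dist_coeffs,
                                  flags=cv2.SOLVEPNP_ITERATIVE)
    if not ok:
        return None
    R, _ = cv2.Rodrigues(rvec)        # world-to-camera rotation
    position = (-R.T @ tvec).ravel()  # camera centre expressed in the map frame
    return position, R
```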
In step 412, the current position of the automatic cleaning device in the map constructed according to the LDS is located according to the determined pose information.
In this embodiment, the map may be constructed according to all the distance measurement data (including the distance measurement data currently acquired according to the LDS) acquired by the LDS in the cleaning process. After determining the current position, the automatic cleaning device may further determine a route for performing the cleaning operation according to the map and the current position.
As can be seen from the embodiments shown in fig. 3 to 4, because the automatic cleaning device can acquire image data through the image acquisition unit during cleaning and establish a mapping relationship with the pose information at acquisition time, the device can, in subsequent cleaning runs, locate its current position by taking the historical pose information corresponding to the historical image data matched with the current image data as a reference, together with the determined current pose information. This improves positioning accuracy and, in turn, the cleaning efficiency of the automatic cleaning device.
Referring to fig. 5, fig. 5 is a flowchart illustrating a robot repositioning method according to an exemplary embodiment. As shown in fig. 5, the method applied to an automatic cleaning device (configured with a camera and an LDS, i.e. the image acquisition unit is a camera and the distance measurement unit is an LDS) may comprise the following steps:
in step 502, the currently collected data is obtained.
In this embodiment, the collected data includes current image data collected by the camera and distance measurement data currently collected by the LDS.
In step 504, it is determined whether an abrupt change occurs in the acquired data; if so, the process proceeds to step 506; otherwise, the process returns to step 502.
In this embodiment, when the automatic cleaning device is subjected to a hijacking event (to be understood as the device no longer moving along the route, or at the speed, of a normally executed cleaning operation, for example because a user forcibly carries it away from its cleaning position during cleaning), the image data collected by the camera and the ranging data collected by the LDS both change abruptly. Whether a hijacking event has occurred can therefore be judged by checking whether the currently collected image data and/or ranging data change abruptly. Adding the change in the camera's image data as a criterion makes the detection of hijacking events more precise and facilitates the subsequent relocation. An abrupt change in image data means an abrupt change in the information contained in temporally adjacent image frames; for example, the proportion of shared content between adjacent frames changes by more than a certain threshold, or adjacent frames no longer contain the same photographed object. The disclosure does not limit the algorithm used to detect such a change. Similarly, an abrupt change in ranging data means an abrupt change between temporally adjacent range scans (a single scan may contain range readings for all angles around the automatic cleaning device); the disclosure likewise does not limit the algorithm used to detect it.
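The following sketch shows one way such abrupt-change checks might be implemented, using the fraction of re-found ORB features for the image stream and the mean per-angle range jump for the LDS; both criteria and all thresholds are illustrative assumptions, since the disclosure does not prescribe a particular algorithm, and the frames are assumed to be 8-bit grayscale images.

```python
import cv2
import numpy as np

_orb = cv2.ORB_create(nfeatures=500)
_bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def image_changed_abruptly(prev_frame: np.ndarray, curr_frame: np.ndarray,
                           min_overlap: float = 0.2) -> bool:
    """Abrupt change if too few features of the previous frame are found again."""
    kp_p, des_p = _orb.detectAndCompute(prev_frame, None)
    kp_c, des_c = _orb.detectAndCompute(curr_frame, None)
    if des_p is None or des_c is None or len(kp_p) == 0:
        return True
    return len(_bf.match(des_p, des_c)) / len(kp_p) < min_overlap

def ranging_changed_abruptly(prev_scan: np.ndarray, curr_scan: np.ndarray,
                             max_mean_jump: float = 0.5) -> bool:
    """Abrupt change if the per-angle range readings jump, on average,
    by more than max_mean_jump metres between consecutive LDS scans."""
    return float(np.mean(np.abs(curr_scan - prev_scan))) > max_mean_jump

def hijack_detected(prev_frame, curr_frame, prev_scan, curr_scan) -> bool:
    """Step 504: a hijacking event is assumed when either data stream changes abruptly."""
    return (image_changed_abruptly(prev_frame, curr_frame)
            or ranging_changed_abruptly(prev_scan, curr_scan))
```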
Taking fig. 1 of the related art as an example: even if the ranging data at positions A and B are similar (i.e., show no abrupt change), the surroundings of the two positions may differ, e.g., the sweeping robot 100 can capture the television 101 with its camera at position A but not at position B. When the sweeping robot 100 is moved from position A to position B, the image data collected by the camera therefore changes abruptly (the image data collected at position A differs greatly from that collected at position B), so it can be determined that a hijacking event has occurred, avoiding the erroneous conclusion of the related art (which would decide that no hijacking event occurred).
In step 506, historical image data that matches the current image data is determined.
In this embodiment, in the case that a hijacking event occurs for the automatic cleaning device, the current image data collected in step 502 should be understood as image data collected by the camera after the hijacking event occurs for the automatic cleaning device; similarly, the distance measurement data currently collected by the LDS in step 502 should be understood as the distance measurement data collected by the LDS after the automatic cleaning device has a hijacking event.
In step 508, historical pose information of the automatic cleaning device at the time the historical image data was acquired is obtained.
In step 510, pose information of the automatic cleaning device after being hijacked is obtained.
In step 512, historical pose information of targets in the historical pose information, which are matched with the hijacked pose information, is determined.
In this embodiment, it should be understood by those skilled in the art that, in the case of a hijacking event for the automatic cleaning device, the pose information of the hijacked automatic cleaning device is the current pose information of the automatic cleaning device determined according to the ranging data currently collected by the LDS in step 502.
In step 514, the position of the automatic cleaning device after being hijacked is positioned according to the historical pose information of the target.
In the present embodiment, it will be appreciated by those skilled in the art that in the event of a hijacking event for the automatic cleaning device, step 514 is similar to step 312 described above, i.e. the current position in the map constructed from the ranging units is located. After the hijacked position is determined, the automatic cleaning equipment can further re-plan a route for executing cleaning operation according to the constructed map and the hijacked position.
It should be noted that the principles of steps 506-514 are the same as those of steps 304-312, and are not described herein again.
Referring to fig. 6, fig. 6 is a flow chart illustrating another method for repositioning a robot in accordance with an exemplary embodiment. As shown in fig. 6, the method applied to an automatic cleaning device (configured with a camera and an LDS, i.e. the image acquisition unit is the camera and the distance measurement unit is the LDS) may comprise the following steps:
in step 602, the currently collected data is obtained.
In this embodiment, the collected data includes current image data collected by the camera and distance measurement data currently collected by the LDS.
In step 604, it is determined whether an abrupt change occurs in the acquired data; if so, the process proceeds to step 606; otherwise, the process returns to step 602.
In the present embodiment, the steps 602-604 are similar to the steps 502-504, and are not described herein again.
In step 606, historical image data that matches the current image data is determined.
In this embodiment, in the case that a hijacking event occurs for the automatic cleaning device, the current image data collected in step 602 should be understood as image data collected by the camera after the hijacking event occurs for the automatic cleaning device; similarly, the distance measurement data currently collected by the LDS in step 602 should be understood as the distance measurement data collected by the LDS after the automatic cleaning device has a hijacking event.
In step 608, historical pose information of the automatic cleaning device at the time the historical image data was acquired is obtained.
In step 610, a corresponding three-dimensional environment composition is obtained.
In step 612, pose information corresponding to the current image data is determined based on the three-dimensional environment composition.
In step 614, the position of the automatic cleaning device after being hijacked is positioned according to the determined pose information.
In the present embodiment, it should be understood by those skilled in the art that in the case of a hijacking event for an automatic cleaning device, the current location is the hijacked location, i.e., step 614 is similar to step 412 described above. After the hijacked position is determined, the automatic cleaning equipment can further re-plan a route for executing cleaning operation according to the constructed map and the hijacked position.
It should be noted that the principles of steps 606-614 are the same as those of steps 404-412 described above and are not repeated here. By equipping the automatic cleaning device, which already has a ranging unit, with an image acquisition unit, the device can acquire image data during cleaning and establish a mapping relationship with the pose information at acquisition time. In subsequent cleaning runs, the current position can then be located by taking the historical pose information corresponding to the historical image data matched with the current image data as a reference, together with the current pose information determined by the ranging unit. This improves the self-positioning accuracy of the automatic cleaning device and, in turn, its cleaning efficiency.
As can be seen from the embodiments shown in fig. 5 to 6, by adding the change condition of the image data collected by the camera to the basis for determining whether the hijacking event occurs in the automatic cleaning device, the precise detection of the hijacking event can be realized, thereby facilitating the subsequent relocation. Referring to fig. 7, fig. 7 is a flowchart illustrating checking current pose information according to an exemplary embodiment. As shown in fig. 7, the method applied to the automatic cleaning device in any of the above embodiments may include the following steps:
in step 702, current pose information is obtained.
In this embodiment, the current pose information is the current pose information (the current pose information of the automatic cleaning apparatus determined according to the ranging data currently acquired by the LDS) in any of the above embodiments.
In step 704, judging whether the current pose information has an error, and if so, turning to step 706; otherwise, the procedure returns to step 702.
In this embodiment, the current pose information may be judged to be erroneous in either of two cases: when the LDS is currently occluded (e.g., its ranging data remains unchanged over time) or the automatic cleaning device slips (e.g., the accelerometer and odometer readings do not match); or when the current pose information does not match any of the historical pose information.
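A sketch of these error checks under stated assumptions: the tolerances, the speed comparison between odometer and accelerometer, and the `pose_matches` predicate are illustrative, not specified by the disclosure.

```python
import numpy as np

def lds_occluded(recent_scans, tol: float = 1e-3) -> bool:
    """The LDS is likely occluded if its recent scans stay (almost) identical."""
    scans = np.asarray(recent_scans, dtype=float)
    return bool(np.all(np.abs(scans - scans[0]) < tol))

def wheel_slip(odometer_speed: float, accel_speed: float, tol: float = 0.15) -> bool:
    """Slip is suspected when the odometer and the accelerometer-derived speed
    disagree by more than tol m/s."""
    return abs(odometer_speed - accel_speed) > tol

def pose_has_error(recent_scans, odometer_speed, accel_speed,
                   current_pose, historical_poses, pose_matches) -> bool:
    """Combine the checks of step 704: LDS occluded, wheel slip, or the current
    pose matching none of the historical poses (pose_matches is a predicate)."""
    no_match = not any(pose_matches(current_pose, h) for h in historical_poses)
    return (lds_occluded(recent_scans)
            or wheel_slip(odometer_speed, accel_speed)
            or no_match)
```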
In step 706, historical pose information is obtained.
In this embodiment, the historical pose information is the historical pose information in any of the above embodiments (e.g., the historical pose information in step 306).
In step 708, the current pose information is corrected using the historical pose information.
In this embodiment, when it is determined that the current pose information is wrong, the current pose information is corrected by using the historical pose information, which is beneficial to improving the accuracy of the map constructed according to the LDS.
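A minimal sketch of such a correction, assuming pose objects with `x`, `y`, `yaw` attributes as in the earlier sketch; with `weight=1.0` the erroneous current pose is simply replaced by the matched historical pose, and the blending option (with its naive yaw averaging) is an illustrative assumption.

```python
def correct_pose(current, target_hist, weight: float = 1.0) -> Pose:
    """Correct an erroneous current pose with the matched historical pose.
    With weight=1.0 the current pose is simply replaced; smaller weights blend
    the two (the yaw blend ignores angle wrap-around for brevity)."""
    return Pose(
        x=weight * target_hist.x + (1.0 - weight) * current.x,
        y=weight * target_hist.y + (1.0 - weight) * current.y,
        yaw=weight * target_hist.yaw + (1.0 - weight) * current.yaw,
    )
```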
Fig. 8 is a flow chart illustrating a cleaning operation performed by an automatic cleaning apparatus according to an exemplary embodiment, and as shown in fig. 8, the method applied to the automatic cleaning apparatus in any of the above embodiments may include the following steps:
in step 802, current image data is acquired.
In step 804, a scene type of the current image data is identified.
In this embodiment, scene recognition may rely on a machine learning model trained on sample data labelled with scene types; or image data matching the current image data may be looked up in a preset scene-type database (which stores image data for scene types such as living room, toilet, bedroom, and so on), with the scene type of the retrieved image data taken as the scene type of the current image data; or the user may drive the automatic cleaning device through each scene to capture image data and label the corresponding scene type, so that the device can later recognize the scene type of the environment to be cleaned from those labels. Of course, the disclosure does not limit the way in which scene types are identified.
In step 806, a corresponding cleaning strategy is looked up.
In this embodiment, because the automatic cleaning device can capture images of the environment being cleaned with its camera during the cleaning process, its cleaning strategy can be adjusted according to the scene type of that environment, which improves cleaning efficiency and the user experience.
For example, for different scene types of the environment to be cleaned, cleaning strategies such as those shown in Table 1 may be configured:
Scene type      Cleaning strategy
Living room     Powerful cleaning mode
Toilet          No cleaning required
Bedroom         Silent mode
…               …

TABLE 1
Of course, table 1 is only an example of configuring a cleaning strategy, and a specific cleaning strategy can be flexibly set by a user according to actual situations, which is not limited by the present disclosure. Therefore, different cleaning requirements of users can be met by configuring corresponding cleaning strategies according to different scene types, and user experience is improved.
In step 808, cleaning is performed in accordance with the found cleaning strategy.
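A sketch of how the scene-type-to-strategy lookup of Table 1 might be represented; the dictionary keys, strategy fields, and fallback strategy are illustrative assumptions.

```python
CLEANING_STRATEGIES = {
    "living room": {"mode": "powerful", "skip": False},  # strong suction
    "toilet":      {"mode": None,       "skip": True},   # no cleaning required
    "bedroom":     {"mode": "silent",   "skip": False},  # quiet operation
}

DEFAULT_STRATEGY = {"mode": "standard", "skip": False}

def strategy_for(scene_type: str) -> dict:
    """Step 806: look up the cleaning strategy configured for the recognised
    scene type (cf. Table 1), falling back to a default for unknown scenes."""
    return CLEANING_STRATEGIES.get(scene_type, DEFAULT_STRATEGY)
```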
Corresponding to the embodiment of the positioning method of the automatic cleaning equipment, the disclosure also provides an embodiment of a positioning device of the automatic cleaning equipment.
FIG. 9 is a block diagram illustrating a positioning device of a robot in accordance with an exemplary embodiment. Referring to fig. 9, the robot is configured with an image acquisition unit and a ranging unit; the apparatus includes an image data determination unit 901, a pose acquisition unit 902, a pose determination unit 903, and a positioning unit 904.
The image data determining unit 901 is configured to determine historical image data matched with the current image data according to the current image data acquired by the image acquiring unit, wherein the historical image data is acquired by the image acquiring unit at a historical moment;
the pose acquisition unit 902 is configured to acquire historical pose information corresponding to the robot when the historical image data is acquired;
the pose determination unit 903 is configured to determine current pose information of the robot according to ranging data currently acquired by the ranging unit;
the positioning unit 904 is configured to position the current position of the robot according to the historical pose information and the current pose information.
As shown in fig. 10, fig. 10 is a block diagram of a positioning apparatus of another robot according to an exemplary embodiment, which is based on the foregoing embodiment shown in fig. 9, and further includes: a hijacking event determining unit 905, the positioning unit 904 comprising: a first positioning subunit 9041.
The hijacking event determining unit 905 is configured to determine whether a hijacking event occurs to the robot;
the first positioning subunit 9041 is configured to, when it is determined that the robot has a hijacking event, position the robot after the hijacking event according to the historical pose information and the current pose information.
As shown in fig. 11, fig. 11 is a block diagram of another positioning apparatus of a robot according to an exemplary embodiment, which is based on the foregoing embodiment shown in fig. 10, and the hijacking event determining unit 905 includes: the hijacking event determines sub-unit 9051.
The hijacking event determining subunit 9051 is configured to determine that the robot has a hijacking event when the image data collected by the image collecting unit and/or the distance measuring data collected by the distance measuring unit suddenly change.
As shown in fig. 12, fig. 12 is a block diagram of another positioning apparatus for a robot according to an exemplary embodiment, which is based on the foregoing embodiment shown in fig. 9, and the positioning unit 904 includes: a first determination subunit 9042 and a second location subunit 9043.
The first determining subunit 9042 is configured to determine target historical pose information that matches the current pose information from the historical pose information;
the second positioning subunit 9043 is configured to position the current position of the robot in the map constructed according to the ranging unit according to the target historical pose information.
It should be noted that, the structures of the first determining subunit 9042 and the second positioning subunit 9043 in the apparatus embodiment shown in fig. 12 may also be included in the apparatus embodiment shown in fig. 10, and the present disclosure is not limited thereto.
As shown in fig. 13, fig. 13 is a block diagram of another positioning apparatus for a robot according to an exemplary embodiment, which is based on the foregoing embodiment shown in fig. 9, and the positioning unit 904 includes: an acquisition sub-unit 9044, a second determination sub-unit 9045, and a third positioning sub-unit 9046.
The acquiring subunit 9044 is configured to acquire a three-dimensional environment composition constructed from the historical image data and the historical pose information;
the second determining subunit 9045 is configured to determine pose information corresponding to the current image data based on the three-dimensional environment composition;
the third positioning subunit 9046 is configured to position the current position of the robot in the map constructed according to the ranging unit according to the determined pose information.
Optionally, the three-dimensional environment composition is constructed in advance according to the acquired historical image data and the historical pose information; or the three-dimensional environment composition is constructed based on the historical image data and the historical pose information after determining the historical image data matched with the current image data.
It should be noted that the structures of the acquisition subunit 9044, the second determination subunit 9045, and the third positioning subunit 9046 in the apparatus embodiment shown in fig. 13 may also be included in the apparatus embodiment shown in fig. 10, and the disclosure is not limited thereto.
As shown in fig. 14, fig. 14 is a block diagram of another positioning apparatus for a robot according to an exemplary embodiment, which is based on the foregoing embodiment shown in fig. 9, and further includes: a decision unit 906 and a correction unit 907.
The determination unit 906 is configured to determine whether there is an error in the current pose information;
the correcting unit 907 is configured to correct the current pose information using the historical pose information when it is determined that there is an error in the current pose information.
It should be noted that the structures of the determination unit 906 and the correction unit 907 in the device embodiment shown in fig. 14 may also be included in the device embodiment shown in fig. 10, and the present disclosure is not limited thereto.
As shown in fig. 15, fig. 15 is a block diagram of another positioning apparatus for a robot according to an exemplary embodiment, where the determining unit 906 includes, based on the foregoing embodiment shown in fig. 14: a first decision subunit 9061.
The first determination subunit 9061 is configured to determine that there is an error in the current pose information when the ranging unit is currently blocked or the robot slips.
As shown in fig. 16, fig. 16 is a block diagram of another positioning apparatus for a robot according to an exemplary embodiment, where the determining unit 906 includes, based on the foregoing embodiment shown in fig. 14: a second decision subunit 9062.
The second determination subunit 9062 is configured to determine that there is an error in the current pose information when the current pose information does not match any of the historical pose information.
As shown in fig. 17, fig. 17 is a block diagram of a positioning apparatus of another robot according to an exemplary embodiment, which is based on the foregoing embodiment shown in fig. 9, and further includes: a statistics unit 908 and a deletion unit 909.
The counting unit 908 is configured to search historical image data matched with the image data collected by the image collecting unit and count corresponding matching times during the movement of the robot performing a specific operation;
the deletion unit 909 is configured to delete any one of the history image data and the pose information corresponding to the robot when the any one of the history image data is acquired when a ratio of the number of times of matching of the any one of the history image data to the number of times of performing the specific operation is smaller than a preset threshold.
It should be noted that the structures of the statistics unit 908 and the deletion unit 909 in the apparatus embodiment shown in fig. 17 may also be included in the apparatus embodiments shown in fig. 10 and fig. 14, and the present disclosure is not limited thereto.
As shown in fig. 18, fig. 18 is a block diagram of another positioning apparatus for a robot according to an exemplary embodiment, which is based on the foregoing embodiment shown in fig. 9, and all the historical image data and the historical pose information are stored in a preset database; further comprising: an open area determining unit 910 and an updating unit 911.
The open area determining unit 910 is configured to determine an open area according to a map constructed by the robot during movement when an update instruction for the preset database is received;
the updating unit 911 is configured to acquire image data and corresponding pose information through the image acquisition unit in the open area to update the preset database.
It should be noted that the structures of the clear area determining unit 910 and the updating unit 911 in the apparatus embodiment shown in fig. 18 may also be included in the apparatus embodiments of fig. 10, fig. 14 and fig. 17, and the disclosure is not limited thereto.
As shown in fig. 19, fig. 19 is a block diagram of a positioning device of another robot according to an exemplary embodiment, which is based on the foregoing embodiment shown in fig. 9, and the robot is configured with corresponding cleaning strategies for different scene types; further comprising: a policy adjustment unit 912.
The strategy adjustment unit 912 is configured to adopt a corresponding cleaning strategy for cleaning according to the scene recognition result of the image data collected by the image collection unit during the cleaning operation performed by the robot.
It should be noted that the configuration of the policy adjustment unit 912 in the device embodiment shown in fig. 19 may also be included in the device embodiments of fig. 10, fig. 14, fig. 17 and fig. 18, and the present disclosure is not limited thereto.
As shown in fig. 20, fig. 20 is a block diagram of a positioning apparatus of another robot according to an exemplary embodiment, which is based on the foregoing embodiment shown in fig. 9, and the image data determination unit 901 includes a first image data determination subunit 9011.
The first image data determination subunit 9011 is configured to determine that the history image data matches the current image data when a degree of similarity (or degree of matching) between the current image data and the history image data exceeds a preset threshold.
As shown in fig. 21, fig. 21 is a block diagram of a positioning apparatus of another robot according to an exemplary embodiment, which is based on the foregoing embodiment shown in fig. 9, and the image data determination unit 901 includes a second image data determination subunit 9012.
The second image data determination subunit 9012 is configured to determine that the history image data matches the current image data when the current image data and the history image data each contain one or more of the same captured objects.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
For the device embodiments, since they substantially correspond to the method embodiments, reference may be made to the partial description of the method embodiments for relevant points. The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules can be selected according to actual needs to achieve the purpose of the disclosed solution. One of ordinary skill in the art can understand and implement it without inventive effort.
Correspondingly, the disclosure also provides a robot, which is provided with an image acquisition unit and a ranging unit; the robot further includes: a processor; and a memory for storing processor-executable instructions; wherein the processor, by executing the executable instructions, implements the positioning method of the robot described in any of the above embodiments. For example, the method may include: determining historical image data matched with the current image data according to the current image data acquired by the image acquisition unit, wherein the historical image data is acquired by the image acquisition unit at a historical moment; acquiring historical pose information corresponding to the robot when the historical image data is acquired; determining the current pose information of the robot according to the ranging data currently acquired by the ranging unit; and positioning the current position of the robot according to the historical pose information and the current pose information.
Accordingly, the present disclosure also provides a terminal comprising a memory, and one or more programs, wherein the one or more programs are stored in the memory, and configured to be executed by one or more processors, the one or more programs including instructions for implementing the positioning method of the robot as described in any of the above embodiments, such as the method may comprise: determining historical image data matched with the current image data according to the current image data acquired by the image acquisition unit, wherein the historical image data is acquired by the image acquisition unit at a historical moment; acquiring historical pose information corresponding to the robot when the historical image data is acquired; determining the current pose information of the robot according to the ranging data currently acquired by the ranging unit; and positioning the current position of the robot according to the historical pose information and the current pose information.
Fig. 22 is a block diagram illustrating a positioning apparatus 2200 for a robot, according to an example embodiment. For example, the apparatus 2200 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, an exercise device, a personal digital assistant, and the like.
Referring to fig. 22, the apparatus 2200 may include one or more of the following components: a processing component 2202, a memory 2204, a power component 2206, a multimedia component 2208, an audio component 2210, an input/output (I/O) interface 2212, a sensor component 2214, and a communication component 2216.
The processing component 2202 generally controls overall operation of the apparatus 2200, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 2202 may include one or more processors 2220 to execute instructions to perform all or part of the steps of the methods described above. Further, the processing component 2202 may include one or more modules that facilitate interaction between the processing component 2202 and other components. For example, the processing component 2202 can include a multimedia module to facilitate interaction between the multimedia component 2208 and the processing component 2202.
The memory 2204 is configured to store various types of data to support operations at the apparatus 2200. Examples of such data include instructions for any application or method operating on device 2200, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 2204 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), Electrically Erasable Programmable Read Only Memory (EEPROM), Erasable Programmable Read Only Memory (EPROM), Programmable Read Only Memory (PROM), Read Only Memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The power supply component 2206 provides power to the various components of the device 2200. The power components 2206 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the apparatus 2200.
The multimedia component 2208 includes a screen that provides an output interface between the device 2200 and the user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 2208 includes a front facing camera and/or a rear facing camera. The front camera and/or the rear camera may receive external multimedia data when the device 2200 is in an operating mode, such as a shooting mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
Audio component 2210 is configured to output and/or input audio signals. For example, audio component 2210 includes a Microphone (MIC) configured to receive external audio signals when apparatus 2200 is in an operating mode, such as a call mode, a record mode, and a voice recognition mode. The received audio signal may further be stored in the memory 2204 or transmitted via the communication component 2216. In some embodiments, audio component 2210 also includes a speaker for outputting audio signals.
The I/O interface 2212 provides an interface between the processing component 2202 and a peripheral interface module, which may be a keyboard, click wheel, buttons, and the like. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor assembly 2214 includes one or more sensors for providing various aspects of state assessment for the apparatus 2200. For example, the sensor assembly 2214 may detect an open/closed state of the apparatus 2200, the relative positioning of components, such as a display and keypad of the apparatus 2200, the sensor assembly 2214 may also detect a change in position of the apparatus 2200 or a component of the apparatus 2200, the presence or absence of user contact with the apparatus 2200, orientation or acceleration/deceleration of the apparatus 2200, and a change in temperature of the apparatus 2200. The sensor assembly 2214 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor assembly 2214 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 2214 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 2216 is configured to facilitate wired or wireless communication between the apparatus 2200 and other devices. The apparatus 2200 may access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof. In one exemplary embodiment, the communication component 2216 receives a broadcast signal or broadcast associated information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 2216 also includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the apparatus 2200 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described methods.
In an exemplary embodiment, a non-transitory computer readable storage medium comprising instructions, such as the memory 2204 comprising instructions, executable by the processor 2220 of the device 2200 to perform the above-described method is also provided. For example, the non-transitory computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (28)

1. The robot positioning method is characterized in that the robot is provided with an image acquisition unit, a distance measurement unit and a sensing device; the method comprises the following steps:
determining historical image data matched with the current image data according to the current image data acquired by the image acquisition unit, wherein the historical image data is acquired by the image acquisition unit at a historical moment;
acquiring historical pose information corresponding to the robot when the historical image data is acquired;
determining current pose information of the robot according to the ranging data acquired by the ranging unit and the current attitude information of the robot acquired by the sensing device; the pose information comprises the distance between the robot and the object shot by the image acquisition unit and attitude parameters;
positioning the current position of the robot according to the historical pose information and the current pose information, comprising:
determining historical pose information of the target matched with the current pose information in the historical pose information;
and positioning the current position of the robot in a map constructed according to the ranging unit according to the historical pose information of the target.
2. The method of claim 1,
further comprising: determining whether a hijacking event occurs to the robot;
the positioning the current position of the robot according to the historical pose information and the current pose information comprises: and when the robot is determined to have the hijack event, positioning the position of the robot after the hijack event according to the historical pose information and the current pose information.
3. The method of claim 2, wherein the determining whether the robot has a hijacking event comprises:
and when the image data acquired by the image acquisition unit and/or the ranging data acquired by the ranging unit are mutated, determining that the robot is hijacked.
4. The method of claim 1, wherein determining historical image data matching the current image data from the current image data acquired by the image acquisition unit comprises:
when the similarity between the current image data and the historical image data exceeds a preset threshold, determining that the historical image data is matched with the current image data.
5. The method of claim 1, wherein determining historical image data matching the current image data from the current image data acquired by the image acquisition unit comprises:
when the current image data and the historical image data both contain one or more same collected objects, determining that the historical image data matches the current image data.
6. The method of claim 1, wherein locating the current position of the robot based on the historical pose information and the current pose information further comprises:
acquiring a three-dimensional environment composition, wherein the three-dimensional environment composition is constructed by the historical image data and the historical pose information;
determining pose information corresponding to the current image data based on the three-dimensional environment composition;
and positioning the current position of the robot in the map constructed according to the ranging unit according to the determined pose information.
7. The method according to claim 6, wherein the three-dimensional environment composition is constructed in advance according to the collected historical image data and the historical pose information; or the three-dimensional environment composition is constructed based on the historical image data and the historical pose information after determining the historical image data matched with the current image data.
8. The method of claim 1, further comprising:
judging whether the current pose information has errors or not;
and when the current pose information is judged to have errors, correcting the current pose information by using the historical pose information.
9. The method according to claim 8, wherein the determining whether there is an error in the current pose information includes:
and when the distance measuring unit is blocked or the robot slips at present, judging that the current pose information has errors.
10. The method according to claim 8, wherein the determining whether there is an error in the current pose information includes:
and when the current pose information is not matched with any one of the historical pose information, judging that the current pose information has errors.
11. The method of claim 1, further comprising:
searching historical image data matched with the image data acquired by the image acquisition unit in the moving process of the robot executing specific operation, and counting corresponding matching times;
and when the proportion of the matching times of any historical image data to the times of executing the specific operation is smaller than a preset threshold value, deleting the any historical image data and the corresponding pose information of the robot when the any historical image data is collected.
12. The method according to claim 1, characterized in that all historical image data and historical pose information are stored in a preset database; the method further comprises the following steps:
when an updating instruction for the preset database is received, determining an open area according to a map constructed by the robot in the moving process;
and acquiring image data and corresponding pose information in the open area so as to update the preset database.
13. The method of claim 1, wherein the robot is configured with corresponding cleaning strategies for different scene types; the method further comprises the following steps:
and in the process of executing cleaning operation by the robot, cleaning by adopting a corresponding cleaning strategy according to a scene recognition result aiming at the image data collected by the image collecting unit.
14. The positioning device of the robot is characterized in that the robot is provided with an image acquisition unit, a distance measurement unit and a sensing device; the device comprises: the image data determining unit is used for determining historical image data matched with the current image data according to the current image data acquired by the image acquiring unit, and the historical image data is acquired by the image acquiring unit at a historical moment;
the pose acquisition unit is used for acquiring historical pose information corresponding to the robot when the historical image data is acquired;
the pose determining unit is used for determining the current pose information of the robot according to the ranging data acquired by the ranging unit and the current attitude information of the robot acquired by the sensing device; the pose information comprises the distance between the robot and the object shot by the image acquisition unit and attitude parameters;
a positioning unit configured to position a current position of the robot according to the historical pose information and the current pose information, the positioning unit including:
the first determining subunit is used for determining historical pose information of the target matched with the current pose information in the historical pose information;
and the second positioning subunit is used for positioning the current position of the robot in the map constructed according to the ranging unit according to the historical pose information of the target.
15. The apparatus of claim 14,
further comprising: the hijacking event determining unit is used for determining whether the robot has a hijacking event or not;
the positioning unit includes: and the first positioning subunit is used for positioning the position of the robot after the hijack event occurs according to the historical pose information and the current pose information when the hijack event occurs in the robot.
16. The apparatus of claim 15, wherein the hijacking event determining unit comprises:
and the hijacking event determining subunit is used for determining that the robot has a hijacking event when the image data acquired by the image acquisition unit and/or the ranging data acquired by the ranging unit mutate.
17. The apparatus of claim 14, wherein the positioning unit further comprises:
the acquisition subunit acquires a three-dimensional environment composition, wherein the three-dimensional environment composition is constructed by the historical image data and the historical pose information;
a second determination subunit that determines pose information corresponding to the current image data based on the three-dimensional environment composition;
and the third positioning subunit positions the current position of the robot in the map constructed according to the ranging unit according to the determined pose information.
18. The device according to claim 17, wherein the three-dimensional environment composition is constructed in advance according to the collected historical image data and the historical pose information; or the three-dimensional environment composition is constructed based on the historical image data and the historical pose information after determining the historical image data matched with the current image data.
19. The apparatus of claim 14, further comprising:
a determination unit configured to determine whether the current pose information has an error;
and the correction unit corrects the current pose information by using the historical pose information when the current pose information is judged to have errors.
20. The apparatus according to claim 19, wherein the determination unit comprises:
and the first judging subunit judges that the current pose information has errors when the ranging unit is blocked or the robot slips.
21. The apparatus according to claim 19, wherein the determination unit comprises:
and the second judgment subunit judges that the current pose information is wrong when the current pose information is not matched with any one of the historical pose information.
22. The apparatus of claim 14, further comprising:
the statistical unit is used for searching historical image data matched with the image data acquired by the image acquisition unit in the moving process of the robot executing specific operation and counting corresponding matching times;
and the deleting unit is used for deleting any historical image data and the corresponding pose information of the robot when any historical image data is collected when the proportion of the matching times of any historical image data to the times of executing the specific operation is smaller than a preset threshold value.
23. The apparatus according to claim 14, wherein all historical image data and historical pose information are stored in a preset database; the device further comprises:
the open area determining unit is used for determining the open area according to a map constructed by the robot in the moving process when an updating instruction aiming at the preset database is received;
and the updating unit is used for acquiring image data and corresponding pose information through the image acquisition unit in the open area so as to update the preset database.
24. The apparatus of claim 14, wherein the robot is configured with corresponding cleaning strategies for different scene types; the device further comprises:
and the strategy adjusting unit adopts a corresponding cleaning strategy to clean according to a scene recognition result aiming at the image data collected by the image collecting unit in the process that the robot executes the cleaning operation.
25. The apparatus according to claim 14, wherein the image data determining unit includes:
a first image data determination subunit determining that the history image data matches the current image data when a similarity between the current image data and the history image data exceeds a preset threshold.
26. The apparatus according to claim 14, wherein the image data determination unit comprises:
and a second image data determination subunit that determines that the history image data matches the current image data when the current image data and the history image data each include one or more same captured objects.
27. A robot, characterized in that the robot is provided with an image acquisition unit and a distance measurement unit; the robot further includes:
a processor;
a memory for storing processor-executable instructions;
wherein the processor implements the method of any one of claims 1-13 by executing the executable instructions.
28. A computer-readable storage medium having stored thereon computer instructions, which, when executed by a processor, carry out the steps of the method according to any one of claims 1 to 13.
CN201811268293.6A 2018-10-29 2018-10-29 Robot positioning method and device, electronic device and storage medium Active CN109431381B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202210535950.9A CN114847803B (en) 2018-10-29 2018-10-29 Positioning method and device of robot, electronic equipment and storage medium
CN201811268293.6A CN109431381B (en) 2018-10-29 2018-10-29 Robot positioning method and device, electronic device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811268293.6A CN109431381B (en) 2018-10-29 2018-10-29 Robot positioning method and device, electronic device and storage medium

Related Child Applications (1)

Application Number Title Priority Date Filing Date
CN202210535950.9A Division CN114847803B (en) 2018-10-29 2018-10-29 Positioning method and device of robot, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN109431381A (en) 2019-03-08
CN109431381B (en) 2022-06-07

Family

ID=65550206

Family Applications (2)

Application Number Title Priority Date Filing Date
CN201811268293.6A Active CN109431381B (en) 2018-10-29 2018-10-29 Robot positioning method and device, electronic device and storage medium
CN202210535950.9A Active CN114847803B (en) 2018-10-29 2018-10-29 Positioning method and device of robot, electronic equipment and storage medium

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN202210535950.9A Active CN114847803B (en) 2018-10-29 2018-10-29 Positioning method and device of robot, electronic equipment and storage medium

Country Status (1)

Country Link
CN (2) CN109431381B (en)

Families Citing this family (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111696157B (en) * 2019-03-12 2024-06-18 北京京东尚科信息技术有限公司 Image repositioning determination method, system, device and storage medium
CN109993794A (en) * 2019-03-29 2019-07-09 北京猎户星空科技有限公司 A kind of robot method for relocating, device, control equipment and storage medium
CN111860058A (en) * 2019-04-29 2020-10-30 杭州萤石软件有限公司 Method and system for positioning indoor functional area of mobile robot
CN110414353B (en) * 2019-06-24 2023-06-20 炬星科技(深圳)有限公司 Robot startup positioning and operation repositioning method, electronic equipment and storage medium
CN112205937B (en) * 2019-07-12 2022-04-05 北京石头世纪科技股份有限公司 Automatic cleaning equipment control method, device, equipment and medium
CN112414391B (en) * 2019-08-20 2024-06-18 北京京东乾石科技有限公司 Repositioning method and device for robot
CN112444251B (en) * 2019-08-29 2023-06-13 长沙智能驾驶研究院有限公司 Vehicle driving position determining method and device, storage medium and computer equipment
CN112766023B (en) * 2019-11-04 2024-01-19 北京地平线机器人技术研发有限公司 Method, device, medium and equipment for determining gesture of target object
CN112880691B (en) * 2019-11-29 2022-12-02 北京魔门塔科技有限公司 Global positioning initialization method and device
CN111784661B (en) * 2019-12-31 2023-09-05 山东信通电子股份有限公司 Adjustment method, device, equipment and medium of transmission line detection equipment
CN111239761B (en) * 2020-01-20 2021-12-28 西安交通大学 Method for indoor real-time establishment of two-dimensional map
CN111220148A (en) * 2020-01-21 2020-06-02 珊口(深圳)智能科技有限公司 Mobile robot positioning method, system and device and mobile robot
CN113436309A (en) * 2020-03-23 2021-09-24 南京科沃斯机器人技术有限公司 Scene reconstruction method, system and device and sweeping robot
CN111443033A (en) * 2020-04-26 2020-07-24 武汉理工大学 Floor sweeping robot carpet detection method
CN112041634A (en) * 2020-08-07 2020-12-04 苏州珊口智能科技有限公司 Mobile robot positioning method, map building method and mobile robot
CN112013840B (en) * 2020-08-19 2022-10-28 安克创新科技股份有限公司 Sweeping robot and map construction method and device thereof
CN112212853A (en) * 2020-09-01 2021-01-12 北京石头世纪科技股份有限公司 Robot positioning method and device, and storage medium
CN112418046B (en) * 2020-11-17 2023-06-23 武汉云极智能科技有限公司 Exercise guiding method, storage medium and system based on cloud robot
CN112631303B (en) * 2020-12-26 2022-12-20 北京云迹科技股份有限公司 Robot positioning method and device and electronic equipment
CN112987764B (en) * 2021-02-01 2024-02-20 鹏城实验室 Landing method, landing device, unmanned aerial vehicle and computer-readable storage medium
CN114941448B (en) * 2021-02-07 2023-09-05 广东博智林机器人有限公司 Mortar cleaning method, device, system and storage medium
CN115177178A (en) * 2021-04-06 2022-10-14 美智纵横科技有限责任公司 Cleaning method, cleaning device and computer storage medium
CN113256710B (en) * 2021-05-21 2022-08-02 深圳市慧鲤科技有限公司 Method and device for displaying foresight in game, computer equipment and storage medium
CN113554754A (en) * 2021-07-30 2021-10-26 中国电子科技集团公司第五十四研究所 Indoor positioning method based on computer vision
CN116965745A (en) * 2022-04-22 2023-10-31 追觅创新科技(苏州)有限公司 Coordinate repositioning method and system and cleaning robot

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106203341A (en) * 2016-07-11 2016-12-07 百度在线网络技术(北京)有限公司 A kind of Lane detection method and device of unmanned vehicle
CN107340522A (en) * 2017-07-10 2017-11-10 浙江国自机器人技术有限公司 A kind of method, apparatus and system of laser radar positioning
CN107796397A (en) * 2017-09-14 2018-03-13 杭州迦智科技有限公司 A kind of Robot Binocular Vision localization method, device and storage medium
CN107845114A (en) * 2017-11-10 2018-03-27 北京三快在线科技有限公司 Construction method, device and the electronic equipment of map
CN108125622A (en) * 2017-12-15 2018-06-08 珊口(上海)智能科技有限公司 Control method, system and the clean robot being applicable in
CN108544494A (en) * 2018-05-31 2018-09-18 珠海市微半导体有限公司 A kind of positioning device, method and robot based on inertia and visual signature

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN205671994U (en) * 2015-12-16 2016-11-09 小米科技有限责任公司 Automatic cleaning equipment
CN107357286A (en) * 2016-05-09 2017-11-17 两只蚂蚁公司 Vision positioning guider and its method
CN106092104B (en) * 2016-08-26 2019-03-15 深圳微服机器人科技有限公司 A kind of method for relocating and device of Indoor Robot
US10593060B2 (en) * 2017-04-14 2020-03-17 TwoAntz, Inc. Visual positioning and navigation device and method thereof

Also Published As

Publication number Publication date
CN109431381A (en) 2019-03-08
CN114847803A (en) 2022-08-05
CN114847803B (en) 2024-04-16

Similar Documents

Publication Publication Date Title
CN109431381B (en) Robot positioning method and device, electronic device and storage medium
CN109947109B (en) Robot working area map construction method and device, robot and medium
CN114521836B (en) Automatic cleaning equipment
CN109890573B (en) Control method and device for mobile robot, mobile robot and storage medium
WO2021212926A1 (en) Obstacle avoidance method and apparatus for self-walking robot, robot, and storage medium
JP7356566B2 (en) Mobile robot and its control method
WO2019144541A1 (en) Cleaning robot
CN110623606B (en) Cleaning robot and control method thereof
KR101366860B1 (en) Mobile robot and controlling method of the same
US9597804B2 (en) Auto-cleaning system, cleaning robot and method of controlling the cleaning robot
EP3308911A1 (en) Mobile robot and method of controlling same
CN113284240A (en) Map construction method and device, electronic equipment and storage medium
CN109932726B (en) Robot ranging calibration method and device, robot and medium
CN106175606A (en) Robot and the method for the autonomous manipulation of realization, device
KR20180039438A (en) Guidance robot for airport and method thereof
CN106239517A (en) Robot and the method for the autonomous manipulation of realization, device
CN111990930B (en) Distance measuring method, distance measuring device, robot and storage medium
CN211022482U (en) Cleaning robot
EP4209754A1 (en) Positioning method and apparatus for robot, and storage medium
US20220280007A1 (en) Mobile robot and method of controlling the same
CN109920425B (en) Robot voice control method and device, robot and medium
JP2013246589A (en) Space information generation device, space information use system, space information generation method, control program, and recording medium
CN114296447B (en) Self-walking equipment control method and device, self-walking equipment and storage medium
CN210673215U (en) Multi-light-source detection robot
CN112269379B (en) Obstacle identification information feedback method

Legal Events

Date Code Title Description
PB01 Publication
CB02 Change of applicant information

Address after: 100192 No. 6016, 6017, 6018, Block C, No. 8 Heiquan Road, Haidian District, Beijing
Applicant after: Beijing Roborock Technology Co.,Ltd.
Address before: 100192 No. 6016, 6017, 6018, Block C, No. 8 Heiquan Road, Haidian District, Beijing
Applicant before: BEIJING ROCKROBO TECHNOLOGY Co.,Ltd.

SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20220422
Address after: 102200 No. 8008, floor 8, building 16, yard 37, Chaoqian Road, Changping Park, Zhongguancun Science and Technology Park, Changping District, Beijing
Applicant after: Beijing Stone Innovation Technology Co.,Ltd.
Address before: 100192 No. 6016, 6017, 6018, Block C, No. 8 Heiquan Road, Haidian District, Beijing
Applicant before: Beijing Roborock Technology Co.,Ltd.

GR01 Patent grant