CN114983302A - Attitude determination method and device, cleaning equipment, storage medium and electronic device


Info

Publication number
CN114983302A
CN114983302A
Authority
CN
China
Prior art keywords
determining
image
point cloud
target
cleaning device
Prior art date
Legal status
Granted
Application number
CN202210742705.5A
Other languages
Chinese (zh)
Other versions
CN114983302B (en)
Inventor
韩松杉
盛腾飞
王睿麟
Current Assignee
Dreame Innovation Technology Suzhou Co Ltd
Original Assignee
Dreame Innovation Technology Suzhou Co Ltd
Priority date
Filing date
Publication date
Application filed by Dreame Innovation Technology Suzhou Co Ltd filed Critical Dreame Innovation Technology Suzhou Co Ltd
Priority to CN202210742705.5A priority Critical patent/CN114983302B/en
Publication of CN114983302A publication Critical patent/CN114983302A/en
Application granted granted Critical
Publication of CN114983302B publication Critical patent/CN114983302B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • A47L11/24 Floor-sweeping machines, motor-driven
    • A47L11/40 Parts or details of machines not provided for in groups A47L11/02 - A47L11/38, or not restricted to one of these groups, e.g. handles, arrangements of switches, skirts, buffers, levers
    • A47L11/4002 Installations of electric equipment
    • A47L11/4008 Arrangements of switches, indicators or the like
    • A47L11/4011 Regulation of the cleaning machine by electric means; Control systems and remote control systems therefor
    • G06V10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G06V20/64 Three-dimensional objects
    • G06V2201/07 Target detection
    • Y02E10/50 Photovoltaic [PV] energy

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The embodiment of the invention provides an attitude determination method and device, a cleaning device, a storage medium and an electronic device, wherein the method comprises the following steps: detecting a first image shot by a first camera device arranged on the cleaning device, and determining a first ROI (region of interest) of a first plane included in the first image; determining first 3D point cloud information corresponding to the first ROI; correcting the angular velocity of the cleaning device based on the first 3D point cloud information; and determining the attitude of the cleaning device based on the corrected angular velocity. The method and the device effectively solve the problem in the related art that attitude estimation relies on an accelerometer and therefore suffers from large estimation errors.

Description

Attitude determination method and device, cleaning equipment, storage medium and electronic device
[ technical field ]
The invention relates to the field of communication, in particular to a method and a device for determining a posture, a cleaning device, a storage medium and an electronic device.
[ background of the invention ]
With the rapid development of artificial intelligence technology, more and more intelligent cleaning devices (such as sweepers, floor scrubbers, vacuum cleaners, etc.) are entering people's lives, making daily life increasingly convenient.
In the related art, body attitude estimation is a necessary prerequisite for an autonomously navigating sweeper (the sweeper is taken as an example here, but the discussion also applies to other cleaning equipment, such as a scrubber) to perform mapping, planning and obstacle avoidance. The body attitude of an existing autonomously navigating sweeper mainly comprises a yaw angle, a pitch angle and a roll angle based on a global coordinate system. The yaw angle is mainly estimated from a single-line laser radar, a camera, gyroscope information and the like, while the pitch angle and the roll angle are mainly estimated from accelerometer readings. However, the acceleration readings of an accelerometer are affected by factory calibration, offset, noise, instability, aging, the device's own motion acceleration and the like, so the estimates of the pitch angle and the roll angle in the global coordinate system have large errors; moreover, these estimates drift after long-time use, further enlarging the estimation error.
For the problem in the related art that attitude estimation relies on an accelerometer and therefore has a large estimation error, no effective solution has been proposed so far.
[ summary of the invention ]
The embodiment of the invention provides a method and a device for determining an attitude, a cleaning device, a storage medium and an electronic device, which are used for solving the problem that estimation errors are large due to the fact that an accelerometer is required to estimate the attitude in the related technology.
According to an aspect of the present invention, there is provided a method of determining a pose, including: detecting a first image shot by first camera equipment arranged on cleaning equipment, and determining a first ROI (region of interest) of a first plane included in the first image; determining first 3D point cloud information corresponding to the first ROI area; correcting the angular velocity of the cleaning device based on the first 3D point cloud information; determining an attitude of the cleaning device based on the corrected angular velocity of the cleaning device.
In one exemplary embodiment, determining the first 3D point cloud information corresponding to the first ROI area includes: under the condition that a target sensor is determined to be included in the cleaning equipment, determining 3D point cloud information of a target area acquired by the target sensor, wherein the first image is an image obtained by shooting the target area, and the difference between the shooting time of the first image and the time of acquiring the 3D point cloud information of the target area by the target sensor is smaller than a preset threshold value; registering all pixel points included in the first image with the 3D point cloud information of the target area; determining the first 3D point cloud information corresponding to the first ROI area based on a registration result.
In one exemplary embodiment, determining the first 3D point cloud information corresponding to the first ROI area comprises: detecting a second image captured by the first image capturing apparatus in a case where it is determined that the object sensor is not included in the cleaning apparatus, and determining a second ROI area of a second plane included in the second image, wherein the second image is an image of a frame previous to the first image; extracting a first 2D feature point of the first ROI region, and extracting a second 2D feature point of the second ROI region; matching the first 2D feature points with the second 2D feature points based on a feature point matching mode; determining the first 3D point cloud information corresponding to the first ROI area based on a matching result.
In one exemplary embodiment, determining the first 3D point cloud information corresponding to the first ROI region based on the matching result includes: determining plane constraints of the first camera equipment, and determining a target homography matrix based on the matching result and the plane constraints; determining a rotation matrix and a translation vector without scale between the first image and the second image based on the target homography matrix; determining an absolute position increment between the first image and the second image based on a first sensor and/or a first algorithm of the cleaning device; determining a scaled translation vector based on the un-scaled translation vector and the absolute position increment; triangularizing the scaled translation vector and the matching result to determine first 3D point cloud information corresponding to the first ROI area.
In one exemplary embodiment, detecting a first image captured by a first image capturing apparatus provided on a cleaning apparatus, determining a first ROI region of a first plane included in the first image includes: detecting the first image by using a neural network model, and determining the first ROI area of the first plane included in the first image, wherein the neural network model is obtained by training an initial neural network model by using training data, the training data comprises a plurality of groups of data, and each group of data comprises the training image and the ROI area of the plane included in the training image.
In one exemplary embodiment, the correcting the angular velocity of the cleaning device based on the first 3D point cloud information comprises: screening a target 3D point cloud from the first 3D point cloud according to a target screening mode, and determining a first plane equation based on the target 3D point cloud; determining a target maximum 3D point cloud from the target 3D point cloud, and generating a target plane equation through least square optimization based on the target maximum 3D point cloud and a Huber robust kernel function, wherein the target maximum 3D point cloud is the 3D point cloud which is less than a preset distance threshold from the first plane equation; and correcting the angular velocity based on the target plane equation.
In an exemplary embodiment, correcting the angular velocity based on the target plane equation includes: determining a quaternion of a last-time attitude of the cleaning device, and determining a second plane equation of the last-time attitude based on the quaternion of the last-time attitude; determining a plane equation difference between the target plane equation and the second plane equation, and obtaining a gyroscope angular velocity measurement of the cleaning device; and determining an angular velocity correction amount based on the plane equation difference, and correcting the gyroscope angular velocity measurement value based on the angular velocity correction amount to obtain a corrected angular velocity.
In one exemplary embodiment, after determining an angular velocity correction amount based on the plane equation difference value and correcting the gyro angular velocity measurement value based on the angular velocity correction amount to obtain a corrected angular velocity, the method further includes: updating the quaternion of the attitude at the previous moment based on the corrected angular velocity to obtain an updated quaternion of the attitude at the current moment; carrying out normalization processing on the quaternion of the current time attitude, and converting the quaternion after the normalization processing into an Euler angle; determining an updated pose of the cleaning device based on the Euler angle.
In one exemplary embodiment, before detecting a first image captured by a first image capturing apparatus provided on a cleaning apparatus and determining a first ROI region of a first plane included in the first image, the method further includes: calibrating a parameter of a sensor included in the cleaning device, the parameter being at least one of: an internal reference matrix and a distortion model of the first camera device; an internal reference to a gyroscope of the cleaning device; external reference between the first camera device coordinate system and the gyroscope coordinate system; an external reference between the gyroscope coordinate system and a body coordinate system of the cleaning device; a point cloud reference model of a target sensor included in the cleaning device; an external reference between the target sensor coordinate system and the gyroscope coordinate system.
According to another aspect of the present invention, there is also provided an attitude determination apparatus including: the device comprises a first determining module, a second determining module and a control module, wherein the first determining module is used for detecting a first image shot by a first camera arranged on the cleaning device and determining a first ROI (region of interest) of a first plane included in the first image; the second determining module is used for determining first 3D point cloud information corresponding to the first ROI; a correction module for correcting the angular velocity of the cleaning device based on the first 3D point cloud information; a third determination module to determine a pose of the cleaning device based on the corrected angular velocity of the cleaning device.
According to another aspect of the present invention, there is also provided a cleaning apparatus including the attitude determination device described in the above-described embodiment of the apparatus.
According to another embodiment of the present invention, there is also provided a computer-readable storage medium including a stored program, wherein the program when executed performs the method described in any of the above embodiments.
According to another embodiment of the present invention, there is also provided an electronic apparatus, including a memory and a processor, the memory having a computer program stored therein, the processor being configured to execute the method described in any of the above embodiments by the computer program.
By the method, a first image shot by the first camera device arranged on the cleaning device can be detected to determine a first ROI of a first plane included in the first image; first 3D point cloud information corresponding to the first ROI can then be determined, the angular velocity of the cleaning device can be corrected based on the first 3D point cloud information, and the attitude of the cleaning device can be determined based on the corrected angular velocity. In this way, the angular velocity of the cleaning device is corrected by the acquired first 3D point cloud information and the attitude is determined from the corrected angular velocity, which achieves the purpose of improving the accuracy and stability of attitude estimation, and effectively solves the problem in the related art that the pitch angle and the roll angle of the global coordinate system are mainly estimated by relying on an accelerometer, resulting in large estimation errors.
[ description of the drawings ]
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the invention without limiting the invention. In the drawings:
FIG. 1 is a block diagram of a hardware architecture of a method for determining an attitude according to an embodiment of the present invention;
FIG. 2 is a flow chart of a method of determining an attitude according to an embodiment of the invention;
fig. 3 is a block diagram of the configuration of the posture determination apparatus according to the embodiment of the present invention.
[ detailed description ]
The invention will be described in detail hereinafter with reference to the accompanying drawings in conjunction with embodiments. It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order.
The method embodiments provided in the embodiments of the present application may be executed in a mobile device or a similar computing device. Taking the example of the method running on a mobile device, fig. 1 is a hardware structure block diagram of a method for determining a posture according to an embodiment of the present invention. As shown in fig. 1, the mobile device may include one or more (only one shown in fig. 1) processors 102 (the processors 102 may include, but are not limited to, a processing device such as a microprocessor MCU or a programmable logic device FPGA) and a memory 104 for storing data, which in an exemplary embodiment may also include a transmission device 106 for communication functions and an input-output device 108. It will be understood by those skilled in the art that the structure shown in fig. 1 is merely illustrative and is not intended to limit the structure of the mobile device. For example, the mobile device may also include more or fewer components than shown in FIG. 1, or have a different configuration with equivalent functionality to that shown in FIG. 1 or with more functionality than that shown in FIG. 1.
The memory 104 may be used to store a computer program, for example, a software program and a module of application software, such as a computer program corresponding to the gesture determination method in the embodiment of the present invention, and the processor 102 executes various functional applications and data processing by running the computer program stored in the memory 104, so as to implement the above-mentioned method. The memory 104 may include high speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 104 may further include memory located remotely from the processor 102, which may be connected to the mobile device over a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission device 106 is used for receiving or transmitting data via a network. Specific examples of the network described above may include a wireless network provided by a communication provider of the mobile device. In one example, the transmission device 106 includes a Network adapter (NIC), which can be connected to other Network devices through a base station so as to communicate with the internet. In one example, the transmission device 106 may be a Radio Frequency (RF) module, which is used for communicating with the internet in a wireless manner.
The invention is illustrated below with reference to examples:
in the present embodiment, a method for determining an attitude is provided, as shown in fig. 2, the method includes the following steps:
s202, detecting a first image shot by first camera equipment arranged on cleaning equipment, and determining a first ROI (region of interest) of a first plane included in the first image;
s204, determining first 3D point cloud information corresponding to the first ROI area;
s206, correcting the angular speed of the cleaning equipment based on the first 3D point cloud information;
and S208, determining the posture of the cleaning equipment based on the corrected angular speed of the cleaning equipment.
The above operations may be performed by a controller or a decision module, by a device with shooting and computing capabilities (e.g., an intelligent sweeper, an intelligent scrubber, etc.), or by another processing device or processing unit with similar processing capabilities, wherein the controller or other executing body may exist separately or may be integrated in the cleaning device. The following description takes the controller performing the above operations as an example (this is only an exemplary description; in actual operation, other devices or modules may also perform the above operations):
in the above embodiments, the first camera device includes, but is not limited to, an RGB color camera, a grayscale camera, etc., the first plane and the cleaning device operation plane may be in parallel relationship, the first plane may have multiple types of planes, such as a floor, a carpet, a desktop, a cabinet table, etc., therefore, the first ROI areas of the multiple types of first planes included in the first image may be determined simultaneously, for example, when the first plane in the first image includes three types of planes, such as a floor, a desktop, and a cabinet table, the floor, the desktop, and the cabinet table in the first image may be detected simultaneously, and the first ROI areas corresponding to the floor, the desktop, and the cabinet table may be determined simultaneously, and priorities of the different types of first planes may be set preferentially, for example, when the first plane in the first image includes three types of planes, such as a floor, a desktop, and a cabinet table, etc., may be set preferentially, When the desktop or the counter is used, the priority of the first plane may be set as ground > desktop > cabinet top, and the three first planes included in the first image may be detected according to the setting of ground > desktop > cabinet top, and corresponding first ROI areas may be sequentially determined, and in addition, there may be a plurality of first images, so that the plurality of first images may be simultaneously detected, and the first ROI areas of one or more first planes included in the plurality of first images may be simultaneously determined, and the first images taken first may be preferentially detected according to the order in which the plurality of first images are taken by the first image capturing device to determine the first ROI areas of one or more first planes included in the first image, it should be noted that the above-mentioned illustration of the type of the first plane and the priority of the first image is only an exemplary embodiment, the type of the first plane and the priority of the first image are not limited to the above examples.
In the above embodiment, there may be a plurality of first ROI areas, and the first 3D point cloud information corresponding to the plurality of first ROI areas may be determined simultaneously. For example, when the first plane included in the first image is the floor and detecting the first image yields a plurality of first ROI areas of the floor, the first 3D point cloud information corresponding to those areas may be determined at the same time. The corresponding first 3D point cloud information may also be determined according to the priority with which the first ROI areas were determined; for example, when the first plane in the first image includes floor, desktop and cabinet top, the priority may be set as floor > desktop > cabinet top, the three first planes detected in that order and the corresponding first ROI areas determined in sequence, and the first 3D point cloud information then determined preferentially for the first ROI areas determined first. It should be noted that this illustration of the priority of the first ROI areas is only an exemplary embodiment, and the priority of the first ROI areas is not limited to this example.
In the above-described embodiment, a first image taken by a first camera device provided on the cleaning device may be detected to determine a first ROI area of a first plane included in the first image; first 3D point cloud information corresponding to the first ROI area is then determined, the angular velocity of the cleaning device is corrected based on the first 3D point cloud information, and the attitude of the cleaning device is determined based on the corrected angular velocity. With the invention, the angular velocity of the cleaning device can be corrected by the acquired first 3D point cloud information and the attitude determined from the corrected angular velocity, which improves the accuracy and stability of attitude estimation and effectively solves the problem in the related art that attitude estimation relies on an accelerometer and therefore has large estimation errors.
In one exemplary embodiment, determining the first 3D point cloud information corresponding to the first ROI area comprises: under the condition that a target sensor is determined to be included in the cleaning device, determining 3D point cloud information of a target area acquired by the target sensor, wherein the first image is an image obtained by shooting the target area, and the difference between the shooting time of the first image and the time at which the target sensor acquired the 3D point cloud information of the target area is smaller than a predetermined threshold; registering all pixel points included in the first image with the 3D point cloud information of the target area; and determining the first 3D point cloud information corresponding to the first ROI area based on the registration result. In this embodiment, the predetermined threshold is a value that can be set in advance, for example 3, 5, 10 or 15 milliseconds; when the predetermined threshold is 10 milliseconds, the difference between the shooting time of the first image and the time of acquiring the 3D point cloud information of the target area needs to be less than 10 milliseconds. The target sensor includes, but is not limited to, a 3D point cloud sensor, a TOF sensor, an RGBD sensor, etc. It should be noted that these threshold settings are only exemplary; in practical applications other predetermined thresholds may be adopted and adjusted according to the actual situation. In addition, the registration of the first 3D point cloud with the first image may adopt, but is not limited to, conversion and alignment based on the extrinsic parameters between the coordinate systems of the 3D point cloud sensor and the first camera device.
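As a concrete illustration of this registration step, the sketch below projects sensor-frame points into the image with an assumed intrinsic matrix K and extrinsics (R, t), then keeps the points whose pixels fall inside the first ROI. All numeric values, the ROI rectangle and the variable names are illustrative placeholders, not parameters taken from the patent:

```python
import numpy as np

# Assumed (placeholder) intrinsics and point-cloud-sensor -> camera extrinsics.
K = np.array([[600.0,   0.0, 320.0],
              [  0.0, 600.0, 240.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)
t = np.array([0.02, 0.0, 0.0])      # meters

# Synthetic sensor-frame 3D points standing in for the target-area point cloud.
rng = np.random.default_rng(0)
points = rng.uniform([-1.0, -1.0, 0.5], [1.0, 1.0, 3.0], size=(1000, 3))

x_min, y_min, x_max, y_max = 100, 80, 220, 160   # first ROI rectangle (pixels)

cam_pts = points @ R.T + t          # registration: transform into the camera frame
uv = cam_pts @ K.T
uv = uv[:, :2] / uv[:, 2:3]         # pinhole projection to pixel coordinates

in_roi = ((uv[:, 0] >= x_min) & (uv[:, 0] < x_max) &
          (uv[:, 1] >= y_min) & (uv[:, 1] < y_max) &
          (cam_pts[:, 2] > 0))      # keep points in front of the camera
first_roi_cloud = cam_pts[in_roi]   # "first 3D point cloud" of the first ROI
print(first_roi_cloud.shape)
```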
In one exemplary embodiment, determining the first 3D point cloud information corresponding to the first ROI area includes: detecting a second image captured by the first image capturing apparatus in a case where it is determined that the target sensor is not included in the cleaning apparatus, and determining a second ROI area of a second plane included in the second image, wherein the second image is the image of the frame previous to the first image; extracting a first 2D feature point of the first ROI region, and extracting a second 2D feature point of the second ROI region; matching the first 2D feature points with the second 2D feature points based on a feature point matching mode; and determining the first 3D point cloud information corresponding to the first ROI area based on the matching result. In the present embodiment, the feature point matching mode includes an optical flow method, a descriptor matching method, and the like, or any other method with a 2D feature matching function.
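As an illustration of optical-flow-based matching between the two ROI areas, the sketch below uses OpenCV's Lucas-Kanade tracker; the file names and the ROI rectangle are hypothetical placeholders:

```python
import cv2
import numpy as np

# Two consecutive grayscale frames; the file names are placeholders.
prev_img = cv2.imread("frame_prev.png", cv2.IMREAD_GRAYSCALE)
curr_img = cv2.imread("frame_curr.png", cv2.IMREAD_GRAYSCALE)

# 2D feature points inside the (made-up) second ROI of the previous frame.
roi_mask = np.zeros_like(prev_img)
roi_mask[80:160, 100:220] = 255
p0 = cv2.goodFeaturesToTrack(prev_img, maxCorners=200, qualityLevel=0.01,
                             minDistance=7, mask=roi_mask)

# Track them into the current frame (optical-flow matching).
p1, status, _err = cv2.calcOpticalFlowPyrLK(prev_img, curr_img, p0, None)
good_prev = p0[status.flatten() == 1]
good_curr = p1[status.flatten() == 1]
print(f"{len(good_curr)} matched 2D feature points")
```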
In one exemplary embodiment, determining the first 3D point cloud information corresponding to the first ROI region based on the matching result includes: determining plane constraints of the first camera device, and determining a target homography matrix based on the matching result and the plane constraints; determining a rotation matrix and a scale-free translation vector between the first image and the second image based on the target homography matrix; determining an absolute position increment between the first image and the second image based on a first sensor and/or a first algorithm of the cleaning device; determining a scaled translation vector based on the scale-free translation vector and the absolute position increment; and triangulating the scaled translation vector and the matching result to determine the first 3D point cloud information corresponding to the first ROI area. In this embodiment, edge and/or wrongly matched 2D feature points may first be screened out based on the matching result and the RANSAC method; more accurate matched 2D feature points may then be obtained from the remaining matches through the plane constraint, and the target homography matrix determined from these more accurate matches, thereby improving the accuracy and robustness of the 2D feature point matching. The first sensor includes, but is not limited to, a code disc (wheel encoder) sensor, a wheel speed meter sensor, and the like; the first algorithm includes, but is not limited to, an ICP algorithm over two adjacent frames of a 2D laser sensor. For example, the absolute position increment between the first image and the second image may be obtained from the code discs of the left and right wheels of the sweeper, or calculated between two frames by the two-adjacent-frame ICP algorithm of the 2D laser sensor. If there is no sensor such as a code disc or wheel speed meter and no other algorithm (i.e., the first algorithm) to calculate an accurate translation scale, an approximate scale may be used in place of the absolute scale estimate; for example, the commanded speed of the sweeper may be obtained from its operating state and multiplied by the time interval between the two frames to approximate the absolute position increment. If neither an accurate nor an approximate translation vector between the two frames can be obtained, alternative methods such as point cloud depth normalization may be used: the scale of a translation vector is first specified at random, point cloud triangulation is then performed based on this randomly scaled translation vector to obtain the 3D coordinates of all interior points, and, based on the size of the operating scene of the sweeper (the sweeper is taken as an example here; it may also be other cleaning equipment, e.g., a scrubber), the average depth of the current frame's 3D point cloud is assumed to be N meters (e.g., 2.5, 3 or 3.5 meters), and the 3D point clouds of all interior points obtained above are normalized so that their average depth is N meters.
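Continuing the previous sketch, the homography, scale-free decomposition, scale recovery and triangulation chain might look as follows with OpenCV. Selection among the candidate decomposition solutions (cheirality checks, etc.) is omitted, and good_prev, good_curr, K and the 0.05 m position increment are assumptions carried over from the earlier sketches:

```python
import cv2
import numpy as np

H, inlier_mask = cv2.findHomography(good_prev, good_curr, cv2.RANSAC, 3.0)

# Decompose into candidate rotations and scale-free translations.
n_solutions, Rs, ts, normals = cv2.decomposeHomographyMat(H, K)
R, t_unit = Rs[0], ts[0]            # candidate selection omitted for brevity

# Recover scale from an absolute position increment (e.g. wheel odometry).
delta_p = 0.05                      # meters between the two frames; placeholder
t_scaled = t_unit / np.linalg.norm(t_unit) * delta_p

# Triangulate the inlier matches to get the ROI's 3D point cloud.
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([R, t_scaled.reshape(3, 1)])
pts4d = cv2.triangulatePoints(P1, P2,
                              good_prev.reshape(-1, 2).T.astype(np.float64),
                              good_curr.reshape(-1, 2).T.astype(np.float64))
pts3d = (pts4d[:3] / pts4d[3]).T    # first 3D point cloud of the first ROI
```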
It should be noted that the above-mentioned illustration of the matching 2D feature points and the above-mentioned absolute position increment is only an exemplary embodiment, and the above-mentioned matching 2D feature points and the above-mentioned absolute position increment are not limited to the above-mentioned illustration.
In one exemplary embodiment, detecting a first image captured by a first image capturing apparatus provided on a cleaning apparatus, determining a first ROI region of a first plane included in the first image includes: detecting the first image by using a neural network model, and determining the first ROI area of the first plane included in the first image, wherein the neural network model is obtained by training an initial neural network model by using training data, the training data comprises a plurality of groups of data, and each group of data comprises the training image and the ROI area of the plane included in the training image. In this embodiment, the initial neural network model may be trained in advance by using training data, where the training data used in the training may be from a database, for example, images of the detected ROI region including the plane are collected in advance and form a database, and the first image is identified based on the trained neural network model.
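The patent does not prescribe a particular network. Purely as an illustration, the sketch below runs a hypothetical TorchScript segmentation model ("plane_seg.pt", trained on image/plane-ROI pairs as described) and extracts the bounding box of the detected plane; the model file, class index and file name are all assumptions:

```python
import torch
import cv2
import numpy as np

model = torch.jit.load("plane_seg.pt").eval()     # hypothetical trained model

img = cv2.imread("frame_curr.png")                # BGR uint8; placeholder file
x = torch.from_numpy(img).permute(2, 0, 1).float().unsqueeze(0) / 255.0

with torch.no_grad():
    logits = model(x)                             # (1, n_classes, H, W) assumed
mask = logits.argmax(dim=1)[0].numpy() == 1       # class 1 = "plane", assumed

ys, xs = np.nonzero(mask)
if len(xs):                                       # bounding box of the first ROI
    roi = (xs.min(), ys.min(), xs.max(), ys.max())
```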
In one exemplary embodiment, correcting the angular velocity of the cleaning device based on the first 3D point cloud information comprises: screening a target 3D point cloud from the first 3D point cloud according to a target screening mode, and determining a first plane equation based on the target 3D point cloud; determining a target maximum 3D point cloud from the target 3D point cloud, and generating a target plane equation through least-squares optimization based on the target maximum 3D point cloud and a Huber robust kernel function, wherein the target maximum 3D point cloud consists of the 3D points whose distance from the plane defined by the first plane equation is less than a predetermined distance threshold; and correcting the angular velocity based on the target plane equation. In this embodiment, the target screening mode includes the RANSAC method and the like, and of course other methods with a screening function may also be used. The first plane equation containing the target maximum 3D point cloud may be used as an initial value, and a Huber robust kernel function added, to generate the target plane equation through least-squares optimization; by applying the Huber robust kernel function, which down-weights the outlier points in the 3D point cloud, the optimal target plane equation may be determined from the target maximum 3D point cloud, effectively reducing the influence of outlier data and of observation noise in the 3D point cloud.
In the above embodiment, the predetermined distance threshold is a preset value, and may be set to be 2 cm, 5 cm, 10 cm, and so on, for example, when the predetermined distance threshold is 5 cm, 3D point clouds less than 5 cm away from the first plane equation are determined from the target 3D point clouds, and then these 3D point clouds may be determined as the target maximum 3D point clouds.
In one exemplary embodiment, modifying the angular velocity based on the target plane equation comprises: determining a quaternion of a last-time attitude of the cleaning device, and determining a second plane equation of the last-time attitude based on the quaternion of the last-time attitude; determining a plane equation difference between the target plane equation and the second plane equation, and obtaining a gyroscope angular velocity measurement of the cleaning device; and determining an angular velocity correction amount based on the plane equation difference, and correcting the gyroscope angular velocity measurement value based on the angular velocity correction amount to obtain a corrected angular velocity. In this embodiment, the second plane equation of the previous time attitude may be determined based on the quaternion of the previous time attitude of the cleaning apparatus, and then the plane equation difference between the optimal target plane equation and the second plane equation may be calculated, and the gyro angular velocity measurement value of the previous time attitude of the cleaning apparatus may be corrected based on the plane equation difference, so as to obtain the corrected angular velocity.
In an exemplary embodiment, after determining an angular velocity correction amount based on the plane equation difference, and correcting the gyroscope angular velocity measurement value based on the angular velocity correction amount to obtain a corrected angular velocity, the method further comprises: updating the quaternion of the previous moment based on the corrected angular velocity to obtain an updated quaternion of the current moment attitude; carrying out normalization processing on the updated quaternion of the current time attitude, and converting the quaternion after the normalization processing into an Euler angle; determining an updated pose of the cleaning device based on the Euler angle. In this embodiment, the quaternion gradually loses the normalization characteristic due to factors such as calculation errors, and therefore, it is necessary to perform normalization processing on the updated quaternion (i.e., the normalization processing), and convert the quaternion after the normalization processing into a yaw angle, a pitch angle, a roll angle (i.e., the euler angle), and the like.
In one exemplary embodiment, before detecting a first image captured by a first image capturing apparatus provided on a cleaning apparatus, and determining a first ROI region of a first plane included in the first image, the method further includes: calibrating a parameter of a sensor included in the cleaning device, the parameter being at least one of: an internal parameter matrix and a distortion model of the first camera device; an internal reference to a gyroscope of the cleaning device; external reference between the first camera device coordinate system and the gyroscope coordinate system; an external reference between the gyroscope coordinate system and a body coordinate system of the cleaning device; a point cloud reference model of a target sensor included in the cleaning device; an external reference between the target sensor coordinate system and the gyroscope coordinate system. In this embodiment, the internal parameters of the gyroscope of the cleaning device include, but are not limited to, 3-axis or single-axis offsets, scale factors, the external parameters between the first camera device coordinate system and the gyroscope coordinate system include, but are not limited to, rotation matrices and translations, the external parameters between the gyroscope coordinate system and the body coordinate system of the cleaning device include, but are not limited to, rotation matrices and translations, the point cloud internal parameter model of the target sensor included in the cleaning device, and the external parameters between the target sensor coordinate system and the gyroscope coordinate system include, but are not limited to, rotation matrices and translations.
In the above embodiment, the calibration of the parameters of the sensors included in the cleaning device may be performed before the cleaning device leaves the factory, may be performed by an administrator or maintenance person after the cleaning device has been put into use, or may be adjusted according to the actual application conditions of the cleaning device, etc.
It is to be understood that the above-described embodiments are only a few, but not all, embodiments of the present invention.
The following takes the above cleaning device as a sweeping robot as an example, and the present invention is specifically described with reference to specific embodiments:
the method comprises the following steps: and calibrating the internal parameter and the external parameter off line. External reference and internal reference calibration can be performed on each sensor included in the sweeper required to be used in advance, and the following parameters are mainly calibrated:
1. The camera (corresponding to the first image pickup apparatus described above) intrinsic matrix and distortion model. Depending on the selected camera mathematical model, the corresponding intrinsic matrix and distortion model differ; taking a pinhole model as an example, the intrinsic matrix K comprises the focal lengths f_x and f_y, the principal point offset coefficients x_0 and y_0, the axis tilt coefficient s, etc. (a numerical sketch follows after this list):

$$ K = \begin{bmatrix} f_x & s & x_0 \\ 0 & f_y & y_0 \\ 0 & 0 & 1 \end{bmatrix} $$

The distortion model may be of the following form, where [x_c; y_c]^T are the coordinates of the distorted point, [x_corrected; y_corrected]^T are the coordinates of the corrected (undistorted) point, r is the distance of the point from the distortion center, the parameters [k_1, k_2, k_3] are the polynomial parameters of the radial distortion, and the parameters [p_1, p_2] are the tangential distortion coefficients:

$$ \begin{aligned} x_{corrected} &= x_c\,(1 + k_1 r^2 + k_2 r^4 + k_3 r^6) + 2 p_1 x_c y_c + p_2 (r^2 + 2 x_c^2) \\ y_{corrected} &= y_c\,(1 + k_1 r^2 + k_2 r^4 + k_3 r^6) + p_1 (r^2 + 2 y_c^2) + 2 p_2 x_c y_c \end{aligned} $$
2. The internal reference of the gyroscope comprises: 3-axis or uniaxial offset, scale factor, etc.;
3. the external reference of the camera coordinate system and the gyroscope coordinate system comprises: rotation matrix and translation;
4. the external parameters of the gyroscope coordinate system and the body coordinate system comprise: rotation matrix and translation;
5. point cloud reference models for TOF sensors (e.g., the sweeper's own TOF sensor);
6. the external references to the TOF sensor coordinate system and the gyroscope coordinate system include: rotation matrix and translation.
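As referenced in item 1, here is a small numeric sketch of the pinhole intrinsic matrix and the polynomial distortion correction; all coefficient values are made-up placeholders, and the one-shot polynomial correction is the common approximation rather than an exact inversion:

```python
import numpy as np

f_x, f_y, x_0, y_0, s = 600.0, 600.0, 320.0, 240.0, 0.0   # placeholder intrinsics
k1, k2, k3 = -0.28, 0.07, 0.0                              # radial coefficients (made up)
p1, p2 = 1e-4, -1e-4                                       # tangential coefficients (made up)

K = np.array([[f_x,   s, x_0],
              [0.0, f_y, y_0],
              [0.0, 0.0, 1.0]])

def correct_point(u, v):
    """One-shot polynomial correction of a distorted pixel (u, v)."""
    x_c, y_c, _ = np.linalg.inv(K) @ np.array([u, v, 1.0])   # normalized coordinates
    r2 = x_c**2 + y_c**2
    radial = 1.0 + k1*r2 + k2*r2**2 + k3*r2**3
    x_corr = x_c*radial + 2*p1*x_c*y_c + p2*(r2 + 2*x_c**2)
    y_corr = y_c*radial + p1*(r2 + 2*y_c**2) + 2*p2*x_c*y_c
    u_corr, v_corr, _ = K @ np.array([x_corr, y_corr, 1.0])  # back to pixels
    return u_corr, v_corr

print(correct_point(500.0, 120.0))
```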
Step two: plane detection
1. Detecting an object
The detection target is a plane in the image. Such a plane should be in a parallel relationship with the plane of travel of the cleaning device, which may be of various types of planes, such as floors, carpets, table tops, cabinet tops, and the like.
2. Detection algorithm
(1) Detection of 2D images by camera to obtain ROI area
The detection and segmentation algorithm based on the camera image adopts a deep learning method. Compared with traditional methods, deep learning has obvious advantages in recognition rate, accuracy and semantics for images. The invention does not limit the specific implementation of the plane detection and segmentation algorithm for the camera image; the detection result of the algorithm is an ROI area or range on the image plane.
(2) Computing 3D point clouds of ROI regions
According to whether a point cloud sensor (corresponding to the target sensor) is provided, the method is divided into two cases:
a. If the camera is not equipped with a 3D point cloud sensor:

First, 2D feature points (corresponding to the second 2D feature points) are extracted from the ROI region (corresponding to the second ROI region) of the previous frame image (corresponding to the second image).

Second, 2D feature points (corresponding to the first 2D feature points) are extracted from the ROI region (corresponding to the first ROI region) of the current frame image (corresponding to the first image).

Third, the feature points of the previous frame and the current frame are matched; the method is not limited to an optical flow method, a descriptor matching method, and the like.

Fourth, based on the 2D feature point matching result, a homography matrix (corresponding to the target homography matrix) is obtained using the epipolar constraint (corresponding to the plane constraint) of the camera; the RANSAC method (random sample consensus, which iteratively estimates the parameters of a mathematical model from a data set containing "outliers") may be used to suppress outer points and screen inner points.

Fifth, the homography matrix is decomposed into a rotation matrix and a scale-free translation vector between the two frames of images.

Sixth, a scaled translation vector is obtained from other sensors or algorithms of the sweeper; for example, the absolute position increment between the two frames of images may be obtained from the code discs of the left and right wheels of the sweeper, or calculated by the ICP algorithm over two adjacent frames of a 2D laser sensor. If no sensor such as a code disc or wheel speed meter (corresponding to the first sensor) and no other algorithm (corresponding to the first algorithm) is available, the accurate translation scale cannot be calculated, and an approximate scale may be used instead of the absolute scale; for example, the commanded speed of the sweeper is obtained from its running state and, based on a constant-velocity assumption, multiplied by the inter-frame time to approximate the absolute position increment between the two frames.

Seventh, triangulation is performed according to the relative pose of the two adjacent frames and the 2D coordinates of the matched interior points to obtain the 3D coordinates of all interior points, where the relative pose of the two adjacent frames refers to the rotation matrix and the scaled translation vector between them.
In the sixth step, if neither an accurate nor an approximate translation vector between the two frames can be obtained, alternative methods such as point cloud depth normalization may be used: a scale for the translation vector is specified at random, point cloud triangulation is performed based on this randomly scaled translation vector to obtain the 3D coordinates of all interior points, and, based on the size of the operating scene of the sweeper, the average depth of the current frame's 3D point cloud is assumed to be N meters (e.g., 2.5, 3 or 3.5 meters); the obtained 3D point clouds of all interior points are then normalized so that their average depth is N meters.
b. If the camera is equipped with a 3D point cloud sensor, for example an RGBD camera or a TOF sensor: First, a 3D point cloud based on the point cloud sensor coordinate system is obtained through the point cloud sensor. Second, the correspondence between each pixel in the camera image and each point of the point cloud sensor is obtained according to the registration of the camera and the point cloud sensor. Third, the 3D point cloud of the corresponding region is extracted according to the pixel range of the ROI in the camera image.
If there are k ROI areas in the current frame image, k (k ≥ 1) groups of 3D point clouds of the corresponding areas can be obtained.
3. RANSAC Algorithm detailed solution
The data consists of "interior points" (inliers), i.e., the data that make up the model parameters, and "exterior points" (outliers), i.e., the data that do not fit the model. RANSAC assumes that, given a data set that contains interior points (possibly together with outliers), there is a procedure that can estimate a model fitting the interior points. Basic idea and flow: the model is estimated by iteratively sampling the data set until the required model is obtained. The concrete implementation steps are as follows:
a. selecting a minimum data set which can be used for estimating a model;
b. computing a data model using the data set;
c. bringing all data into the model and counting the number of "interior points" (i.e., the data that fit the currently estimated model within a certain error range);
d. comparing the number of "interior points" of the current model with that of the best model derived so far, and recording the model parameters corresponding to the maximum number of "interior points" and that number;
e. repeating the first four steps until the iteration limit is reached or the current model is the required model (the number of interior points exceeds a certain number).
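A compact sketch of steps a-e, specialized to the plane model A*x+B*y+C*z-1=0 used in the next subsection; the iteration count and distance threshold are illustrative values:

```python
import numpy as np

def ransac_plane(points, iters=200, dist_thresh=0.02, rng=None):
    """Fit A*x + B*y + C*z - 1 = 0 to a 3D point cloud; return params and inlier mask."""
    rng = rng or np.random.default_rng(0)
    best_params, best_inliers = None, None
    for _ in range(iters):
        sample = points[rng.choice(len(points), 3, replace=False)]   # a. minimal set
        try:
            params = np.linalg.solve(sample, np.ones(3))             # b. model [A, B, C]
        except np.linalg.LinAlgError:
            continue                                                 # degenerate sample
        dist = np.abs(points @ params - 1) / np.linalg.norm(params)  # c. point-plane distance
        inliers = dist < dist_thresh
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_params, best_inliers = params, inliers              # d. keep the best model
    return best_params, best_inliers                                 # e. after all iterations
```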
4. Solving a plane equation of a point cloud
Firstly, carrying out external point elimination and internal point screening on the point cloud extracted in the last step through a Randac process, and obtaining a plane equation parameter corresponding to the maximum internal point number;
Second, using the screened interior points, with the plane equation corresponding to the maximum number of interior points (corresponding to the first plane equation) as the initial value, a Huber robust kernel function is added and the optimal plane equation P_opt (corresponding to the target plane equation above) is obtained through least-squares optimization. The plane equation is expressed by the following formula, where A, B, C are the corresponding plane equation parameters:
A*x+B*y+C*z-1=0
If 3D point clouds of k corresponding areas exist in the current frame image, k plane equations can be obtained:

A1*x+B1*y+C1*z-1=0
A2*x+B2*y+C2*z-1=0
......
Ai*x+Bi*y+Ci*z-1=0
......
Ak*x+Bk*y+Ck*z-1=0
where i ranges over [1, 2, ..., k], and Ai, Bi and Ci are the parameters corresponding to the i-th plane equation.
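The RANSAC estimate can then be refined on the near-plane points with a Huber-robustified least squares, for example with SciPy; the threshold and f_scale values below are illustrative:

```python
import numpy as np
from scipy.optimize import least_squares

def refine_plane(points, init_params, dist_thresh=0.02):
    """Refine [A, B, C] on the near-plane points with a Huber robust kernel."""
    dist = np.abs(points @ init_params - 1) / np.linalg.norm(init_params)
    close = points[dist < dist_thresh]            # "target maximum 3D point cloud"

    def residuals(p):
        return close @ p - 1                      # algebraic plane residuals

    # loss='huber' applies the Huber robust kernel; f_scale sets its knee point.
    result = least_squares(residuals, init_params, loss='huber', f_scale=0.01)
    return result.x                               # optimal plane equation P_opt

# Usage, chaining with the RANSAC sketch above:
# params0, inliers = ransac_plane(cloud)
# P_opt = refine_plane(cloud, params0)
```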
Step three: pose estimation method based on plane equation
First, the attitude at the previous moment q_last is expressed using a quaternion, i.e. q_last = q_0 + q_1*i + q_2*j + q_3*k, where q_0 is the real-part parameter and [q_1, q_2, q_3] are the three imaginary-part parameters.

Here i, j and k are the three imaginary units of the quaternion, which satisfy the following relation:

$$ i^2 = j^2 = k^2 = ijk = -1 $$
Second, using the previous-moment attitude q_last, the plane equation at the previous moment (corresponding to the second plane equation) P_last is obtained: D*x+E*y+F*z-1=0, where D, E and F are the coefficients of the three dimensions of the plane equation, and their correspondence to the quaternion parameters is:

D = q_1*q_3 - q_0*q_2
E = q_0*q_1 + q_2*q_3
F = q_0*q_0 + q_3*q_3 - 0.5
Third, the difference between the optimal plane equation P_opt and the previous-moment plane equation P_last is calculated and defined as e = [e_x, e_y, e_z], where e_x, e_y and e_z are the components of the plane difference (corresponding to the plane equation difference above) in the x, y and z dimensions:

e_x = B*F + C*E
e_y = C*D + A*F
e_z = A*E + B*D
If there are k plane equations from the 3D point clouds of the k corresponding regions in the current frame image, the difference is defined as e = r_1*e_1 + r_2*e_2 + ... + r_i*e_i + ... + r_k*e_k, where i ranges over [1, 2, ..., k] and e_i = [e_xi, e_yi, e_zi] is the difference between the i-th plane equation and the previous-moment plane equation P_last:

e_xi = Bi*F + Ci*E
e_yi = Ci*D + Ai*F
e_zi = Ai*E + Bi*D

Here r_i is the uncertainty weight of the i-th plane equation, representing the confidence of the i-th plane; it is, for example, proportional to the number of points in the 3D point cloud of the i-th plane.
Fourth, the gyroscope angular velocity is corrected. The angular velocity signal w_sensor is obtained from the gyroscope sensor, and the plane-equation difference e obtained in the previous step is added to the angular velocity components through proportional and integral operations to obtain the corrected angular velocity w_correct = [w_x, w_y, w_z]:

w_correct = w_sensor + k_prop*e + k_integ*e*Δt

where [w_x, w_y, w_z] are the x, y and z components of the corrected angular velocity, k_prop is the proportional coefficient, k_integ is the integral coefficient, and Δt is the gyroscope sampling period.
Fifth, the attitude is updated. The corrected angular velocity w_correct is used to update the previous-moment attitude q_last; the updated quaternion q_correct is calculated as:

q_correct = q_last + Δq

where Δq = [Δq_0, Δq_1, Δq_2, Δq_3] is the update increment, Δq_0 is the real-part parameter of the increment, and [Δq_1, Δq_2, Δq_3] are its three imaginary-part parameters:

Δq_0 = -(q_1*w_x + q_2*w_y + q_3*w_z)*Δt/2
Δq_1 = (q_0*w_x + q_2*w_z - q_3*w_y)*Δt/2
Δq_2 = (q_0*w_y - q_1*w_z + q_3*w_x)*Δt/2
Δq_3 = (q_0*w_z + q_1*w_y - q_2*w_x)*Δt/2
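The fifth step maps directly onto a few lines of code; the increment formulas are exactly those above:

```python
import numpy as np

def update_quaternion(q_last, w_correct, dt):
    """Fifth step: q_correct = q_last + dq, with dq as defined above."""
    q0, q1, q2, q3 = q_last
    wx, wy, wz = w_correct
    dq = 0.5 * dt * np.array([
        -(q1*wx + q2*wy + q3*wz),
        q0*wx + q2*wz - q3*wy,
        q0*wy - q1*wz + q3*wx,
        q0*wz + q1*wy - q2*wx,
    ])
    return np.asarray(q_last) + dq
```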
Sixth, the normalized quaternion q_norm = [q_n0, q_n1, q_n2, q_n3] corresponding to the current-moment quaternion is calculated, where q_n0 is the real-part parameter and [q_n1, q_n2, q_n3] are the three imaginary-part parameters. The calculation formula is:

$$ q_{nj} = \frac{q_j}{\sqrt{q_0^2 + q_1^2 + q_2^2 + q_3^2}}, \qquad j = 0, 1, 2, 3 $$
Seventh, the normalized current-moment quaternion q_norm is converted into the current-moment Euler angles, where ψ is the yaw angle, θ is the pitch angle and φ is the roll angle. The Euler angles can be obtained from the quaternion through the following relations:

$$ \psi = \arctan\frac{2(q_0 q_3 + q_1 q_2)}{1 - 2(q_2^2 + q_3^2)}, \quad \theta = \arcsin\bigl(2(q_0 q_2 - q_3 q_1)\bigr), \quad \phi = \arctan\frac{2(q_0 q_1 + q_2 q_3)}{1 - 2(q_1^2 + q_2^2)} $$

Since the ranges of arctan and arcsin are [-π/2, π/2], this does not cover all orientations of the Euler angles (for the pitch angle, [-π/2, π/2] is already sufficient); therefore, for the yaw and roll angles, atan2 needs to be used instead of arctan:

$$ \psi = \operatorname{atan2}\bigl(2(q_0 q_3 + q_1 q_2),\; 1 - 2(q_2^2 + q_3^2)\bigr), \quad \phi = \operatorname{atan2}\bigl(2(q_0 q_1 + q_2 q_3),\; 1 - 2(q_1^2 + q_2^2)\bigr) $$

where the range of atan2 is (-π, π].
Through the above description of the embodiments, those skilled in the art can clearly understand that the method according to the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but the former is a better implementation mode in many cases. Based on such understanding, the technical solutions of the present invention or portions thereof contributing to the prior art may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (which may be a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present invention.
In this embodiment, a device for determining a posture is further provided. The device is used to implement the foregoing embodiments and preferred embodiments; what has already been described is not repeated here. As used below, the term "module" may be a combination of software and/or hardware that implements a predetermined function. Although the devices described in the embodiments below are preferably implemented in software, an implementation in hardware, or a combination of software and hardware, is also possible and contemplated.
Fig. 3 is a block diagram of a configuration of an attitude determination apparatus according to an embodiment of the present invention, and as shown in fig. 3, the apparatus includes:
a first determining module 32, configured to detect a first image captured by a first imaging device disposed on a cleaning device, and determine a first ROI area of a first plane included in the first image;
a second determining module 34, configured to determine first 3D point cloud information corresponding to the first ROI area;
a correction module 36 for correcting the angular velocity of the cleaning device based on the first 3D point cloud information;
a third determination module 38 for determining the attitude of the cleaning device based on the corrected angular velocity of the cleaning device.
In an exemplary embodiment, the second determining module 34 includes: a first determining unit, configured to determine, in a case where the cleaning device is determined to include a target sensor, 3D point cloud information of a target area acquired by the target sensor, where the first image is an image obtained by shooting the target area, and the difference between the shooting time of the first image and the time at which the target sensor acquires the 3D point cloud information of the target area is smaller than a preset threshold; a first matching unit, configured to register all pixel points included in the first image with the 3D point cloud information of the target area; a second determining unit, configured to determine the first 3D point cloud information corresponding to the first ROI area based on the registration result.
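A purely illustrative way to realize such a registration, assuming the TOF-to-camera extrinsics R, t and the intrinsic matrix K come from the calibration described below and that lens distortion has already been removed, is to project every TOF point into the image and keep the points landing inside the detected ROI:

```python
import numpy as np

def points_in_roi(points_tof, R, t, K, roi):
    # points_tof: (N, 3) points in the TOF frame; roi: (x0, y0, x1, y1)
    p_cam = points_tof @ R.T + t              # into the camera frame
    in_front = p_cam[:, 2] > 0                # discard points behind the camera
    uv = p_cam @ K.T
    uv = uv[:, :2] / uv[:, 2:3]               # pinhole projection to pixels
    x0, y0, x1, y1 = roi
    inside = ((uv[:, 0] >= x0) & (uv[:, 0] < x1) &
              (uv[:, 1] >= y0) & (uv[:, 1] < y1))
    return points_tof[in_front & inside]
```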
In an exemplary embodiment, the second determining module 34 includes: a third determination unit configured to detect a second image captured by the first image capturing apparatus, which is an image of a previous frame of the first image, and determine a second ROI region of a second plane included in the second image, when it is determined that the target sensor is not included in the cleaning apparatus; the extraction unit is used for extracting a first 2D feature point of the first ROI area and extracting a second 2D feature point of the second ROI area; the second matching unit is used for matching the first 2D feature points with the second 2D feature points based on a feature point matching mode; a fourth determining unit, configured to determine the first 3D point cloud information corresponding to the first ROI area based on a matching result.
In an exemplary embodiment, the fourth determining unit includes: a first determining subunit, configured to determine a plane constraint of the first image capturing apparatus, and determine a target homography matrix based on the matching result and the plane constraint; a second determining subunit configured to determine a rotation matrix and a translation vector without scale between the first image and the second image based on the target homography matrix; a third determination subunit for determining an absolute position increment between the first image and the second image based on a first sensor and/or a first algorithm of the cleaning device; a fourth determining subunit, configured to determine a scaled translation vector based on the non-scaled translation vector and the absolute position increment; and the fifth determining subunit is used for triangulating the translation vector with the scale and the matching result so as to determine first 3D point cloud information corresponding to the first ROI.
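This homography-based chain could be sketched with OpenCV as follows; pts1 and pts2 (matched 2D feature points of the second and first image), K (the intrinsic matrix) and delta_p (the absolute position increment from the first sensor and/or first algorithm) are assumed inputs, and the choice among the up-to-four decomposition candidates is deliberately left out:

```python
import cv2
import numpy as np

H, mask = cv2.findHomography(pts1, pts2, cv2.RANSAC, 3.0)   # plane-induced homography
_, rotations, translations, normals = cv2.decomposeHomographyMat(H, K)
R, t_unscaled = rotations[0], translations[0]               # one of up to 4 candidates

# scale the translation by the measured absolute displacement
scale = np.linalg.norm(delta_p) / max(np.linalg.norm(t_unscaled), 1e-9)
t_scaled = scale * t_unscaled

# triangulate the matches with the scaled relative pose
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([R, t_scaled.reshape(3, 1)])
pts4d = cv2.triangulatePoints(P1, P2, pts1.T.astype(float), pts2.T.astype(float))
pts3d = (pts4d[:3] / pts4d[3]).T                            # ROI 3D point cloud
```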
In an exemplary embodiment, the first determining module 32 includes: a fifth determining unit, configured to detect the first image by using a neural network model, and determine the first ROI region of the first plane included in the first image, where the neural network model is a model obtained after an initial neural network model is trained by using training data, the training data includes multiple sets of data, and each set of data includes a training image and a ROI region of a plane included in the training image.
In an exemplary embodiment, the correction module 36 includes: a sixth determining unit, configured to screen a target 3D point cloud from the first 3D point cloud according to a target screening manner, and determine a first plane equation based on the target 3D point cloud; a generating unit, configured to determine a target maximum 3D point cloud from the target 3D point cloud, and generate a target plane equation through least-squares optimization based on the target maximum 3D point cloud and a Huber robust kernel function, where the target maximum 3D point cloud is the 3D point cloud whose distance from the first plane equation is less than a preset distance threshold; and a correcting unit, configured to correct the angular velocity based on the target plane equation.
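An illustrative sketch of the inlier screening plus Huber-robust refit, with SciPy's loss='huber' standing in for the Huber robust kernel function; the plane parameterization [A, B, C, D] and the helper name are assumptions:

```python
import numpy as np
from scipy.optimize import least_squares

def fit_plane_huber(points, plane0, inlier_thresh):
    plane0 = np.asarray(plane0, dtype=float)       # initial plane [A, B, C, D]

    def residuals(p, pts):
        n, d = p[:3], p[3]
        return (pts @ n + d) / np.linalg.norm(n)   # signed point-plane distances

    # target maximum 3D point cloud: points closer than the distance threshold
    inliers = points[np.abs(residuals(plane0, points)) < inlier_thresh]
    result = least_squares(residuals, plane0, args=(inliers,),
                           loss='huber', f_scale=inlier_thresh)
    return result.x                                # refined target plane equation
```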
In an exemplary embodiment, the correction unit includes: a sixth determining subunit, configured to determine a quaternion of a last time posture of the cleaning apparatus, and determine a second plane equation of the last time posture based on the quaternion of the last time posture; an acquisition unit for determining a plane equation difference between the target plane equation and the second plane equation and acquiring a gyroscope angular velocity measurement of the cleaning device; and the correction subunit is used for determining an angular velocity correction amount based on the plane equation difference value, and correcting the gyroscope angular velocity measurement value based on the angular velocity correction amount to obtain a corrected angular velocity.
In an exemplary embodiment, the apparatus further comprises: an updating module, configured to, after the angular velocity correction amount is determined based on the plane equation difference and the gyroscope angular velocity measurement is corrected to obtain the corrected angular velocity, update the quaternion of the previous-moment attitude based on the corrected angular velocity to obtain the updated quaternion of the current-moment attitude; a conversion module, configured to normalize the quaternion of the current-moment attitude and convert the normalized quaternion into Euler angles; a fourth determination module, configured to determine the updated pose of the cleaning device based on the Euler angles.
In an exemplary embodiment, the apparatus further includes: a calibration module, configured to calibrate a parameter of a sensor included in a cleaning device before detecting a first image captured by a first imaging device provided on the cleaning device and determining a first ROI region of a first plane included in the first image, where the parameter is at least one of: an internal parameter matrix and a distortion model of the first camera device; an internal reference to a gyroscope of the cleaning device; external reference between the first camera device coordinate system and the gyroscope coordinate system; an external reference between the gyroscope coordinate system and a body coordinate system of the cleaning device; a point cloud reference model of a target sensor included in the cleaning device; an external reference between the target sensor coordinate system and the gyroscope coordinate system.
There is also provided in this embodiment a cleaning apparatus which may include any of the attitude determination means described above.
It should be noted that, the above modules may be implemented by software or hardware, and for the latter, the following may be implemented, but not limited to: the modules are all positioned in the same processor; alternatively, the modules are respectively located in different processors in any combination.
Embodiments of the present invention also provide a computer-readable storage medium having a computer program stored thereon, wherein the computer program is arranged to perform the steps of any of the above-mentioned method embodiments when executed.
In the present embodiment, the above-mentioned computer-readable storage medium may be configured to store a computer program for executing the steps of:
s1, detecting a first image shot by a first camera device arranged on a cleaning device, and determining a first ROI (region of interest) of a first plane included in the first image;
s2, determining first 3D point cloud information corresponding to the first ROI area;
s3, correcting the angular speed of the cleaning equipment based on the first 3D point cloud information;
s4, determining the posture of the cleaning device based on the corrected angular speed of the cleaning device.
In an exemplary embodiment, the computer-readable storage medium may include, but is not limited to: various media capable of storing a computer program, such as a USB flash drive, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.
Embodiments of the present invention further provide an electronic device, comprising a memory in which a computer program is stored and a processor configured to execute the computer program to perform the steps in any of the above method embodiments.
In an exemplary embodiment, the electronic apparatus may further include a transmission device and an input/output device, wherein the transmission device is connected to the processor, and the input/output device is connected to the processor.
In an exemplary embodiment, the processor may be configured to perform the following steps by a computer program:
s1, detecting a first image shot by a first camera device arranged on a cleaning device, and determining a first ROI (region of interest) of a first plane included in the first image;
s2, determining first 3D point cloud information corresponding to the first ROI area;
s3, correcting the angular speed of the cleaning equipment based on the first 3D point cloud information;
s4, determining the posture of the cleaning device based on the corrected angular speed of the cleaning device.
According to the determination method provided by the present invention, plane detection is performed on a horizontal plane using the camera and/or TOF sensor data carried by the sweeper (the sweeper is taken as an example here; the method also applies to other cleaning equipment such as a scrubber), a plane constraint is constructed, and the 3-axis or single-axis angular velocity data of the gyroscope is then fused with it to estimate the attitude of the sweeper.
It will be apparent to those skilled in the art that the various modules or steps of the invention described above may be implemented using a general-purpose computing device; they may be centralized on a single computing device or distributed across a network of computing devices, and they may be implemented using program code executable by the computing devices, such that they may be stored in a memory device and executed by a computing device. In some cases, the steps shown or described may be performed in an order different from that described herein, or they may be separately fabricated as individual integrated circuit modules, or multiple ones of them may be fabricated as a single integrated circuit module. Thus, the present invention is not limited to any specific combination of hardware and software.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the principle of the present invention should be included in the protection scope of the present invention.

Claims (13)

1. A method of pose determination, comprising:
detecting a first image shot by first camera equipment arranged on cleaning equipment, and determining a first ROI (region of interest) of a first plane included in the first image;
determining first 3D point cloud information corresponding to the first ROI area;
correcting the angular velocity of the cleaning device based on the first 3D point cloud information;
determining an attitude of the cleaning device based on the corrected angular velocity of the cleaning device.
2. The pose determination method of claim 1, wherein determining the first 3D point cloud information corresponding to the first ROI area comprises:
under the condition that a target sensor is determined to be included in the cleaning equipment, determining 3D point cloud information of a target area acquired by the target sensor, wherein the first image is an image obtained by shooting the target area, and the difference between the shooting time of the first image and the time of acquiring the 3D point cloud information of the target area by the target sensor is smaller than a preset threshold value;
registering all pixel points included in the first image with the 3D point cloud information of the target area;
determining the first 3D point cloud information corresponding to the first ROI area based on a registration result.
3. The pose determination method of claim 1, wherein determining the first 3D point cloud information corresponding to the first ROI area comprises:
detecting a second image captured by the first image capturing apparatus in a case where it is determined that the target sensor is not included in the cleaning apparatus, and determining a second ROI area of a second plane included in the second image, wherein the second image is an image of a frame previous to the first image;
extracting a first 2D feature point of the first ROI region, and extracting a second 2D feature point of the second ROI region;
matching the first 2D feature points with the second 2D feature points based on a feature point matching mode;
determining the first 3D point cloud information corresponding to the first ROI area based on a matching result.
4. The method of determining the pose as recited in claim 3, wherein determining the first 3D point cloud information corresponding to the first ROI area based on the matching result comprises:
determining plane constraints of the first camera equipment, and determining a target homography matrix based on the matching result and the plane constraints;
determining a rotation matrix and a translation vector without scale between the first image and the second image based on the target homography matrix;
determining an absolute position increment between the first image and the second image based on a first sensor and/or a first algorithm of the cleaning device;
determining a scaled translation vector based on the non-scaled translation vector and the absolute position increment;
triangularizing the scaled translation vector and the matching result to determine first 3D point cloud information corresponding to the first ROI area.
5. The method according to claim 1, wherein detecting a first image captured by a first imaging device provided on a cleaning device and determining a first ROI region of a first plane included in the first image comprises:
detecting the first image by using a neural network model, and determining the first ROI area of the first plane included in the first image, wherein the neural network model is obtained by training an initial neural network model by using training data, the training data comprises a plurality of groups of data, and each group of data comprises the training image and the ROI area of the plane included in the training image.
6. The pose determination method of claim 1, wherein correcting the angular velocity of the cleaning device based on the first 3D point cloud information comprises:
screening a target 3D point cloud from the first 3D point cloud according to a target screening mode, and determining a first plane equation based on the target 3D point cloud;
determining a target maximum 3D point cloud from the target 3D point cloud, and generating a target plane equation through least-squares optimization based on the target maximum 3D point cloud and a Huber robust kernel function, wherein the target maximum 3D point cloud is the 3D point cloud whose distance from the first plane equation is less than a preset distance threshold;
and correcting the angular velocity based on the target plane equation.
7. The pose determination method of claim 6, wherein correcting the angular velocity based on the target plane equation comprises:
determining a quaternion of a last-time attitude of the cleaning device, and determining a second plane equation of the last-time attitude based on the quaternion of the last-time attitude;
determining a plane equation difference between the target plane equation and the second plane equation, and obtaining a gyroscope angular velocity measurement of the cleaning device;
and determining an angular velocity correction amount based on the plane equation difference, and correcting the gyroscope angular velocity measurement value based on the angular velocity correction amount to obtain a corrected angular velocity.
8. The attitude determination method according to claim 7, wherein after determining an angular velocity correction amount based on the plane equation difference, and correcting the gyro angular velocity measurement value based on the angular velocity correction amount to obtain a corrected angular velocity, the method further comprises:
updating the quaternion of the attitude at the previous moment based on the corrected angular velocity to obtain the updated quaternion of the attitude at the current moment;
carrying out normalization processing on the quaternion of the current time attitude, and converting the quaternion after the normalization processing into an Euler angle;
determining an updated pose of the cleaning device based on the Euler angle.
9. The method according to claim 1, wherein before detecting a first image captured by a first imaging device provided on a cleaning device and determining a first ROI region of a first plane included in the first image, the method further comprises:
calibrating a parameter of a sensor included in the cleaning device, the parameter being at least one of:
an internal reference matrix and a distortion model of the first camera device;
an internal reference to a gyroscope of the cleaning device;
external reference between the first camera device coordinate system and the gyroscope coordinate system;
an external reference between the gyroscope coordinate system and a body coordinate system of the cleaning device;
a point cloud reference model of a target sensor included in the cleaning device;
an external reference between the target sensor coordinate system and the gyroscope coordinate system.
10. An attitude determination apparatus, comprising:
the first determination module is used for detecting a first image shot by first camera equipment arranged on the cleaning equipment and determining a first ROI (region of interest) of a first plane included in the first image;
the second determination module is used for determining first 3D point cloud information corresponding to the first ROI area;
a correction module for correcting the angular velocity of the cleaning device based on the first 3D point cloud information;
a third determination module to determine a pose of the cleaning device based on the corrected angular velocity of the cleaning device.
11. A cleaning device characterized by comprising the attitude determination device according to claim 10.
12. A computer-readable storage medium, comprising a stored program, wherein the program is operable to perform the method of any one of claims 1 to 9.
13. An electronic device comprising a memory and a processor, characterized in that the memory has stored therein a computer program, the processor being arranged to execute the method of any of claims 1 to 9 by means of the computer program.
CN202210742705.5A 2022-06-28 2022-06-28 Gesture determining method and device, cleaning equipment, storage medium and electronic device Active CN114983302B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210742705.5A CN114983302B (en) 2022-06-28 2022-06-28 Gesture determining method and device, cleaning equipment, storage medium and electronic device

Publications (2)

Publication Number Publication Date
CN114983302A true CN114983302A (en) 2022-09-02
CN114983302B CN114983302B (en) 2023-08-08

Family

ID=83037170

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210742705.5A Active CN114983302B (en) 2022-06-28 2022-06-28 Gesture determining method and device, cleaning equipment, storage medium and electronic device

Country Status (1)

Country Link
CN (1) CN114983302B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112346453A (en) * 2020-10-14 2021-02-09 深圳市杉川机器人有限公司 Automatic robot recharging method and device, robot and storage medium
CN112752028A (en) * 2021-01-06 2021-05-04 南方科技大学 Pose determination method, device and equipment of mobile platform and storage medium
WO2021221333A1 (en) * 2020-04-29 2021-11-04 주식회사 모빌테크 Method for predicting position of robot in real time through map information and image matching, and robot
US20210347378A1 (en) * 2020-05-11 2021-11-11 Amirhosein Nabatchian Method and system for generating an importance occupancy grid map
CN114325634A (en) * 2021-12-23 2022-04-12 中山大学 Method for extracting passable area in high-robustness field environment based on laser radar
CN114663526A (en) * 2022-03-17 2022-06-24 深圳市优必选科技股份有限公司 Obstacle detection method, obstacle detection device, robot and computer-readable storage medium

Also Published As

Publication number Publication date
CN114983302B (en) 2023-08-08

Similar Documents

Publication Publication Date Title
CN112598757B (en) Multi-sensor time-space calibration method and device
Panahandeh et al. Vision-aided inertial navigation based on ground plane feature detection
JP3880702B2 (en) Optical flow detection apparatus for image and self-position recognition system for moving object
CN109708649B (en) Attitude determination method and system for remote sensing satellite
US20160165140A1 (en) Method for camera motion estimation and correction
CN108332752B (en) Indoor robot positioning method and device
CN104704384A (en) Image processing method, particularly used in a vision-based localization of a device
CN112230242A (en) Pose estimation system and method
CN112184824A (en) Camera external parameter calibration method and device
CN112880687A (en) Indoor positioning method, device, equipment and computer readable storage medium
CN113066127B (en) Visual inertial odometer method and system for calibrating equipment parameters on line
CN112183171A (en) Method and device for establishing beacon map based on visual beacon
KR101737950B1 (en) Vision-based navigation solution estimation system and method in terrain referenced navigation
CN113674412B (en) Pose fusion optimization-based indoor map construction method, system and storage medium
CN111524194A (en) Positioning method and terminal for mutual fusion of laser radar and binocular vision
CN113899364B (en) Positioning method and device, equipment and storage medium
CN104848861A (en) Image vanishing point recognition technology based mobile equipment attitude measurement method
CN114494629A (en) Three-dimensional map construction method, device, equipment and storage medium
CN111025330B (en) Target inclination angle detection method and device based on depth map
CN112179373A (en) Measuring method of visual odometer and visual odometer
CN112837314A (en) Fruit tree canopy parameter detection system and method based on 2D-LiDAR and Kinect
CN114983302B (en) Gesture determining method and device, cleaning equipment, storage medium and electronic device
CN115930948A (en) Orchard robot fusion positioning method
CN116184430A (en) Pose estimation algorithm fused by laser radar, visible light camera and inertial measurement unit
CN114842224A (en) Monocular unmanned aerial vehicle absolute vision matching positioning scheme based on geographical base map

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant