CN114557640B - Data processing method and device of cleaning robot and cleaning robot - Google Patents


Info

Publication number
CN114557640B
CN114557640B (application CN202210154792.2A)
Authority
CN
China
Prior art keywords
information
obstacle
cleaning robot
data
image stream
Prior art date
Legal status
Active
Application number
CN202210154792.2A
Other languages
Chinese (zh)
Other versions
CN114557640A (en)
Inventor
彭冬旭
王行知
郑卓斌
王立磊
Current Assignee
Beijing Xinming Information Technology Co ltd
Original Assignee
Guangzhou Baole Software Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Guangzhou Baole Software Technology Co ltd
Priority to CN202210154792.2A
Publication of CN114557640A
Application granted
Publication of CN114557640B
Legal status: Active


Classifications

    • A47L11/24 Floor-sweeping machines, motor-driven
    • A47L11/4002 Installations of electric equipment
    • A47L11/4008 Arrangements of switches, indicators or the like
    • A47L11/4061 Steering means; means for avoiding obstacles; details related to the place where the driver is accommodated
    • G01S17/931 Lidar systems specially adapted for anti-collision purposes of land vehicles
    • A47L2201/04 Automatic control of the travelling movement; automatic obstacle detection
    • A47L2201/06 Control of the cleaning action for autonomous devices; automatic detection of the surface condition before, during or after cleaning


Abstract

The application relates to a data processing method and device for a cleaning robot, and to the cleaning robot itself. The method comprises: acquiring image stream data; performing obstacle recognition on the image stream data based on an AI recognition algorithm and a stereoscopic vision geometric algorithm to obtain initial obstacle information; screening the initial obstacle information to obtain optimized obstacle information; marking the optimized obstacle information to obtain marking information; and, when a positioning optimization condition is met, processing the marking information to obtain optimized positioning data. Obstacles are thereby accurately identified and can be accurately avoided, the probability of missed scanning in low-obstacle areas is reduced, the machine is accurately positioned, and the accuracy of the positioning data is improved.

Description

Data processing method and device of cleaning robot and cleaning robot
Technical Field
The present disclosure relates to the field of cleaning robots, and in particular, to a data processing method and apparatus for a cleaning robot, and a cleaning robot.
Background
As robotics becomes more widespread, robots are used across many industries. Cleaning robots with sweeping and similar functions, for example, are increasingly common in homes and businesses. Technological progress has driven cleaning robots to evolve from the original random and pure-gyroscope types to laser and pure-vision types. A laser cleaning robot usually maps with a lidar fixed on top of the machine, a structure that prevents the robot from reaching low-obstacle areas, such as under a bedroom bed or under a living-room sofa or tea table, causing large areas to be missed during scanning. A laser cleaning robot with a liftable lidar can largely solve this problem, but during the action change from the completion of lowering to the completion of lifting it must rely solely on the odometry computed from the gyroscope and the wheel encoders, which accumulates error, so the lidar's position after lifting is inaccurate.
During implementation, the inventors found that the conventional technology has at least the following problem: when an existing cleaning robot attempts to reduce missed scanning in low-obstacle areas, its positioning error is large and its positioning data are inaccurate.
Disclosure of Invention
In view of the above, it is necessary to provide a data processing method and device for a cleaning robot, and a cleaning robot, to address the problems of large positioning error, inaccurate positioning data, single-purpose data processing, and a low degree of intelligent integration in the operation of conventional cleaning robots.
In order to achieve the above object, an embodiment of the present invention provides a data processing method of a cleaning robot, including the steps of:
acquiring image stream data;
performing obstacle recognition processing on the image stream data based on an AI recognition algorithm and a stereoscopic vision geometric algorithm to obtain initial obstacle information;
screening the initial obstacle information to obtain optimized obstacle information;
marking the optimized obstacle information to obtain marking information;
and when the positioning optimization condition is met, processing the marking information to obtain optimized positioning data.
In one embodiment, the location optimization conditions include:
detecting that the laser radar of the cleaning robot is in a switching state; the switching state is the period from the moment the laser radar begins to lower until the moment it finishes rising.
In one embodiment, the step of marking the optimized obstacle information includes:
performing coordinate conversion processing on the optimized obstacle information to obtain obstacle position information and obstacle range information;
according to the obstacle position information and the obstacle range information, marking the obstacles which are all positioned at the same position in a preset time period as static obstacles;
sequentially extracting key information of each static obstacle based on a preset interval distance to obtain each key information;
acquiring radar point cloud data, and processing each piece of key information together with the radar point cloud data to obtain marking information; the radar point cloud data are acquired by the laser radar of the cleaning robot.
In one embodiment, the step of screening the initial obstacle information includes:
screening the initial obstacle information based on a preset frame number as a group to obtain screened obstacle information;
and processing the screened obstacle information based on an interpolation algorithm to obtain optimized obstacle information.
In one embodiment, before the step of acquiring the image stream data, the method further comprises the steps of:
and correcting the camera based on the preset correction parameters.
In one embodiment, the initial obstacle information includes a center distance of the obstacle from the machine, an obstacle radius, an obstacle center point angle, an obstacle category and probability information, a serial number of the obstacle, and time stamp information.
In one embodiment, after the step of acquiring the image stream data, the method further comprises the steps of:
processing the image stream data based on an ultra-lightweight keypoint detection algorithm to obtain human body posture characteristic information;
and sending first warning information to the mobile terminal according to the human body posture characteristic information.
In one embodiment, after the step of acquiring the image stream data, the method further comprises the steps of:
performing human body feature processing on the image stream data based on an AI recognition algorithm and a stereoscopic vision geometric algorithm to obtain human body gait feature information;
performing calibration processing on human gait feature information according to the radar point cloud and the ToF algorithm to obtain calibrated human gait feature information;
and sending second warning information to the mobile terminal according to the calibrated human gait characteristic information.
On the other hand, the embodiment of the invention also provides a data processing device of the cleaning robot, which comprises:
a data acquisition unit for acquiring image stream data;
the initial recognition unit is used for carrying out obstacle recognition processing on the image stream data based on an AI recognition algorithm and a stereoscopic vision geometric algorithm to obtain initial obstacle information;
the screening unit is used for screening the initial obstacle information to obtain optimized obstacle information;
the marking unit is used for marking the optimized obstacle information to obtain marking information;
and the positioning optimization unit is used for processing the marking information when the positioning optimization condition is met, so as to obtain optimized positioning data.
On the other hand, an embodiment of the invention further provides a cleaning robot, comprising a cleaning robot main body and a controller arranged on the cleaning robot main body; the controller is configured to execute the data processing method of the cleaning robot described above.
One of the above technical solutions has the following advantages and beneficial effects:
in each embodiment of the data processing method of the cleaning robot, image stream data are acquired; obstacle recognition is performed on the image stream data based on an AI recognition algorithm and a stereoscopic vision geometric algorithm to obtain initial obstacle information; the initial obstacle information is screened to obtain optimized obstacle information; the optimized obstacle information is marked to obtain marking information; and, when the positioning optimization condition is met, the marking information is processed to obtain optimized positioning data. Obstacles are thus accurately identified and can be accurately avoided, missed scanning of low-obstacle areas is reduced, the machine is accurately positioned, and the accuracy of the positioning data is improved. The data processing method can be applied to a cleaning robot with a liftable radar: it identifies and localizes obstacles, solving not only obstacle avoidance but also the missed scanning that other laser machines suffer because their bodies are too tall to enter under a low obstacle, and it also achieves accurate positioning, reducing positioning error and improving the accuracy of the positioning data.
Drawings
FIG. 1 is a schematic view of an application environment of a data processing method of a cleaning robot in one embodiment;
FIG. 2 is a first flow chart of a data processing method of the cleaning robot in one embodiment;
FIG. 3 is a second flow diagram of a data processing method of the cleaning robot in one embodiment;
FIG. 4 is a flow chart of an obstacle marking process step in one embodiment;
FIG. 5 is a flowchart illustrating steps of a human body posture recognition process according to an embodiment;
FIG. 6 is a flow chart of human gait recognition processing steps in one embodiment;
fig. 7 is a block diagram of a data processing apparatus of a cleaning robot in one embodiment.
Detailed Description
In order that those skilled in the art may better understand the solution of the present application, the technical solutions in the embodiments of the present application are described below clearly and completely with reference to the accompanying drawings. The described embodiments are evidently only some, not all, of the embodiments of the present application. All other embodiments obtained by one of ordinary skill in the art based on the embodiments herein without inventive effort shall fall within the scope of protection of the present application.
It should be noted that the terms "first," "second," and the like in the description and claims of the present application and the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate in order to describe the embodiments of the present application described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
In addition, the term "plurality" means two or more.
The data processing method of the cleaning robot of the present application can be applied to the application environment shown in fig. 1. The cleaning robot comprises a controller 102 and a cleaning robot main body 104, the controller 102 being connected to the cleaning robot main body 104. The controller 102 can be used to acquire image stream data; perform obstacle recognition on the image stream data based on an AI recognition algorithm and a stereoscopic vision geometric algorithm to obtain initial obstacle information; screen the initial obstacle information to obtain optimized obstacle information; mark the optimized obstacle information to obtain marking information; and, when the positioning optimization condition is met, process the marking information to obtain optimized positioning data. The cleaning robot may be a cleaning robot with a sweeping function. The cleaning robot further comprises a liftable laser radar mechanism, which can scan while being raised and lowered, and a camera, which can be used to capture image stream data of the current environment.
In order to solve the problems of large positioning error and inaccurate positioning data in the working process of the existing cleaning robot, in one embodiment, as shown in fig. 2, a data processing method of the cleaning robot is provided, and the method is applied to the controller 102 in fig. 1 for illustration, and includes the following steps:
step S210, acquiring image stream data.
The image stream data can be acquired through a camera arranged on the cleaning robot. The camera may be, but is not limited to, a monocular camera or a binocular camera. Illustratively, the image stream data includes at least one frame of image data.
Specifically, a camera arranged on the cleaning robot captures images of the current environment in real time to obtain the image stream data; segments of the image stream data are captured and transmitted to the controller, which receives them.
In one example, the controller may actively send a data request instruction to the camera, and the camera may transmit image stream data to the controller according to the data request instruction.
Step S220, performing obstacle recognition processing on the image stream data based on the AI (Artificial Intelligence) recognition algorithm and the stereoscopic vision geometric algorithm, to obtain initial obstacle information.
The AI recognition algorithm can perform target recognition on the image based on the convolutional neural network, so as to obtain corresponding target object information. The stereoscopic geometry algorithm can be used to process images, and thus reconstruct three-dimensional geometry information of a scene.
For example, the controller performs AI identification processing on the image stream data based on an AI identification algorithm in combination with the database obtained by training, so as to obtain corresponding obstacle identification information. The controller can process the image stream data based on a stereoscopic vision geometric algorithm to construct three-dimensional geometric information of the obstacle. The controller obtains initial obstacle information according to the obstacle identification information and the three-dimensional geometric information of the obstacle.
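For illustration only, fusing a single AI detection with the stereo depth at its center into the initial obstacle fields listed later (center distance, radius, center-point angle, category and probability, serial number, timestamp) might be sketched as follows; the detection dictionary layout and the small-angle radius estimate are assumptions, not taken from the patent:

```python
import math
from dataclasses import dataclass

@dataclass
class InitialObstacle:
    center_distance: float  # distance from the machine to the obstacle center
    radius: float           # estimated obstacle radius
    center_angle: float     # bearing of the obstacle center, in radians
    category: str           # AI-recognized class
    probability: float      # recognition confidence
    serial: int             # serial number of the obstacle
    timestamp: float        # acquisition time of the source frame

def fuse_detection(detection, depth_at_center, timestamp, serial):
    """Combine one AI detection with the stereo depth at its center.

    `detection` is a hypothetical dict: {'angle': bearing in rad,
    'half_width': angular half-width in rad, 'category': str, 'prob': float}.
    The radius is approximated by projecting the angular half-width
    onto the measured depth.
    """
    radius = depth_at_center * math.tan(detection['half_width'])
    return InitialObstacle(
        center_distance=depth_at_center,
        radius=radius,
        center_angle=detection['angle'],
        category=detection['category'],
        probability=detection['prob'],
        serial=serial,
        timestamp=timestamp,
    )
```
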
Step S230, screening the initial obstacle information to obtain optimized obstacle information.
The controller can process each frame of the image stream data in turn to obtain the initial obstacle information of the corresponding frame. The controller can then screen the initial obstacle information, retaining the stably identified obstacle information, to obtain the optimized obstacle information.
For example, the controller may compare the corresponding characteristic parameter value of the initial obstacle information with a preset threshold condition, and confirm the initial obstacle information, the corresponding characteristic parameter value of which satisfies the preset threshold condition, as the stably recognized obstacle information.
Step S240, marking the optimized obstacle information to obtain marked information.
The controller can perform marking processing on the optimized obstacle information, marking the obstacles that remain at the same position over a preset time to obtain the marking information; the controller can then confirm a special marked point for the current position based on the marking information.
And step S250, when the positioning optimization condition is met, processing the marking information to obtain optimized positioning data.
For example, when the cleaning robot encounters a low-obstacle region, the laser radar mechanism can be controlled to descend, reducing the overall height of the cleaning robot so that it can scan and clean the low-obstacle region. When the controller detects that the laser radar mechanism is in the switching state, it judges that the positioning optimization condition is met; it can then process the marking information, performing position matching and auxiliary judgment according to it, and thereby obtain accurate, optimized positioning data.
In the above embodiment, image stream data are acquired; obstacle recognition is performed on the image stream data based on an AI recognition algorithm and a stereoscopic vision geometric algorithm to obtain initial obstacle information; the initial obstacle information is screened to obtain optimized obstacle information; the optimized obstacle information is marked to obtain marking information; and, when the positioning optimization condition is met, the marking information is processed to obtain optimized positioning data. Obstacles are thus accurately identified and can be accurately avoided, missed scanning of low-obstacle areas is reduced, the machine is accurately positioned, and the accuracy of the positioning data is improved. The data processing method can be applied to a cleaning robot with a liftable radar: it identifies and localizes obstacles, solving not only obstacle avoidance but also the missed scanning that other laser machines suffer because their bodies are too tall to enter under a low obstacle, and it also achieves accurate positioning, reducing positioning error and improving the accuracy of the positioning data.
In one embodiment, as shown in fig. 3, a data processing method of a cleaning robot is provided, and the method is applied to the controller 102 in fig. 1 for illustration, and includes the following steps:
in step S310, image stream data is acquired.
Step S320, performing obstacle recognition processing on the image stream data based on the AI recognition algorithm and the stereoscopic vision geometric algorithm to obtain initial obstacle information.
Step S330, screening the initial obstacle information to obtain optimized obstacle information.
Step S340, marking the optimized obstacle information to obtain marked information.
The specific descriptions of the steps S310, S320, S330 and S340 are referred to the descriptions of the above embodiments, and are not repeated here.
Step S350, when the laser radar of the cleaning robot is detected to be in a switching state, processing the marking information to obtain optimized positioning data; the switching state is the period from the moment the laser radar begins to lower until the moment it finishes rising.
The controller can monitor the working state of the laser radar; when the laser radar is detected to be in the switching state, position matching and auxiliary judgment can be performed according to the marking information to obtain the optimized positioning data, solving the problem that an existing cleaning robot cannot accurately determine its position while the laser radar is being lifted.
The controller may monitor the lifting action of the laser radar in real time and, upon detecting that the laser radar is in the switching process from the moment lowering begins to the moment lifting finishes, process the marking information to obtain the optimized positioning data.
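As a minimal sketch of this gate (the state names and the matcher interface are assumptions; the patent only specifies that the marking information is processed while the laser radar is in its switching state):

```python
from enum import Enum, auto

class LidarState(Enum):
    LOWERED = auto()
    SWITCHING = auto()  # from the start of lowering to the end of lifting
    RAISED = auto()

def maybe_optimize_position(state, marking_info, odometry_pose, matcher):
    """Run marker-based position matching only during the switching phase,
    when gyro/wheel-encoder odometry alone would accumulate error.
    `matcher` is a hypothetical callable: (marking_info, pose) -> corrected pose.
    """
    if state is LidarState.SWITCHING:
        return matcher(marking_info, odometry_pose)
    return odometry_pose  # outside the switching phase, keep the odometry pose
```

Outside the switching phase the odometry pose is returned unchanged, matching the idea that optimization is triggered only when the positioning optimization condition is met.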
In the above embodiment, based on the recognition and localization of obstacles, the method not only solves the obstacle-avoidance problem but also solves the missed-scanning problem that existing cleaning robots, being too tall, cannot enter under low obstacles. In addition, by performing position-matching optimization when the laser radar is detected to be rising, accurate positioning data are obtained: obstacles are accurately identified and thus accurately avoided, missed scanning of low-obstacle areas is reduced, the machine is accurately positioned, and the accuracy of the positioning data is improved.
It should be noted that the cleaning coverage of the cleaning robot of the present application can be improved by at least 10%; the exact improvement depends on the number of low obstacles in the user's home environment.
In one embodiment, as shown in fig. 4, the method for marking the optimized obstacle information to obtain marked information includes the following steps:
step S410, performing coordinate transformation processing on the optimized obstacle information to obtain obstacle position information and obstacle range information.
Step S420, according to the obstacle position information and the obstacle range information, the obstacles which are all positioned at the same position in the preset time period are marked as static obstacles.
Step S430, based on the preset interval distance, extracting key information of each static obstacle in sequence to obtain each key information.
Step S440, acquiring radar point cloud data, and processing each piece of key information together with the radar point cloud data to obtain the marking information; the radar point cloud data are acquired by the laser radar of the cleaning robot.
Specifically, the controller may perform coordinate conversion on the optimized obstacle information, for example converting it from the machine coordinate system to the world coordinate system, to obtain the obstacle position information and obstacle range information in the world coordinate system. The controller may determine, over a preset period, whether the obstacle position information and obstacle range information change; if they do not change within the preset time period, the corresponding obstacle is judged to be stationary, and obstacles that remain at the same position throughout the preset time period are marked as stationary obstacles. As the machine moves, the controller extracts the key information of each stationary obstacle once per preset interval distance travelled, i.e., the information of the stationary obstacles around the machine is sampled at every preset distance, yielding the pieces of key information. The controller can acquire the radar point cloud data collected by the laser radar and process each piece of key information together with it to obtain the marking information, which serves as a special marker of the current position. When the laser radar cannot determine its position because of lifting, position matching and auxiliary judgment can be performed according to the corresponding marking information, achieving positioning optimization: accurate positioning data are obtained, the accuracy of the positioning data is improved, and the probability of missed scanning in low-obstacle areas is reduced.
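The coordinate conversion and the stationary-obstacle test described above can be sketched as follows (illustrative only; the 5 cm tolerance is an assumed value, and the patent does not give the transform explicitly):

```python
import math

def machine_to_world(pose, local_point):
    """Convert a point from the machine coordinate system to the world
    coordinate system. `pose` is the machine's (x, y, theta) in the world."""
    x, y, theta = pose
    lx, ly = local_point
    return (x + lx * math.cos(theta) - ly * math.sin(theta),
            y + lx * math.sin(theta) + ly * math.cos(theta))

def is_stationary(world_track, tolerance=0.05):
    """An obstacle whose world position stays within `tolerance` (meters)
    of its first observation for the whole preset period is marked
    stationary. `world_track` is the list of (x, y) world positions
    observed over that period."""
    x0, y0 = world_track[0]
    return all(math.hypot(x - x0, y - y0) <= tolerance for x, y in world_track)
```
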
In one embodiment, the step of screening the initial obstacle information includes:
screening the initial obstacle information based on a preset frame number as a group to obtain screened obstacle information; and processing the screened obstacle information based on an interpolation algorithm to obtain optimized obstacle information.
The controller screens the initial obstacle information in groups of a preset number of frames, for example screening for stably identified obstacles in groups of five frames, to obtain the screened obstacle information. The controller can then process the screened obstacle information with an interpolation algorithm to obtain the optimized obstacle information, preliminarily predicting the position of each obstacle.
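A sketch of this screening, assuming detections are tracked by serial number; the minimum-hits threshold and the linear form of the interpolation are assumptions (the patent only names "an interpolation algorithm"):

```python
def screen_by_frames(frame_detections, group_size=5, min_hits=3):
    """Keep obstacles detected in at least `min_hits` frames of each
    `group_size`-frame group, i.e. the 'stably identified' obstacles.
    `frame_detections` is a list of per-frame sets of obstacle serial numbers."""
    stable = set()
    for i in range(0, len(frame_detections), group_size):
        counts = {}
        for frame in frame_detections[i:i + group_size]:
            for serial in frame:
                counts[serial] = counts.get(serial, 0) + 1
        stable |= {serial for serial, c in counts.items() if c >= min_hits}
    return stable

def interpolate_distance(d_prev, d_next, t):
    """Fill a missing distance between two screened observations by
    linear interpolation (t in [0, 1])."""
    return d_prev + (d_next - d_prev) * t
```
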
In one embodiment, before the step of acquiring the image stream data, the method further comprises the steps of: and correcting the camera based on the preset correction parameters.
The preset correction parameters can be stored in the memory of the cleaning robot in advance; after the camera is powered on and started, it is corrected, improving the quality of the images it captures.
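The patent does not state which correction model the preset parameters describe. As one common possibility, correcting a normalized image point with stored radial distortion coefficients might look like this (the Brown radial model and the fixed-point inversion are assumptions):

```python
def distort(xn, yn, k1, k2):
    """Map an ideal normalized point to its distorted position using the
    preset radial coefficients k1, k2 (Brown model, radial terms only)."""
    r2 = xn * xn + yn * yn
    f = 1.0 + k1 * r2 + k2 * r2 * r2
    return xn * f, yn * f

def undistort(xd, yd, k1, k2, iterations=10):
    """Invert the radial model by fixed-point iteration: the direction
    needed when correcting points actually observed by the camera."""
    xn, yn = xd, yd
    for _ in range(iterations):
        r2 = xn * xn + yn * yn
        f = 1.0 + k1 * r2 + k2 * r2 * r2
        xn, yn = xd / f, yd / f
    return xn, yn
```

For mild distortion the iteration converges in a handful of steps, so the correction can run per-frame on an embedded controller.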
In one embodiment, the initial obstacle information includes a center distance of the obstacle from the machine, an obstacle radius, an obstacle center point angle, an obstacle category and probability information, a sequence number of the obstacle, and time stamp information.
In one embodiment, as shown in fig. 5, after the step of acquiring the image stream data, the steps further include:
step S510, processing the image stream data based on the ultra-light weight key point detection algorithm to obtain the human body posture characteristic information.
Step S520, according to the human body posture characteristic information, the first warning information is sent to the mobile terminal.
Specifically, the controller can perform AI recognition in real time; after a human body is captured entering the field of view, it performs human posture detection based on an ultra-lightweight keypoint detection algorithm, processing the image stream data to obtain the human body posture characteristic information. When the controller judges from the posture characteristic information that the posture has changed from normal to fallen, it sends the first warning information to the mobile terminal to alert the user. This provides an intelligent security function: elderly people or children at home can be watched over, preventing a fall from going undetected. The mobile terminal may be, but is not limited to, a mobile phone, a tablet computer, or a smart bracelet.
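The patent does not detail how posture features map to the normal-to-fallen judgment; one crude illustrative heuristic over 2D keypoints (the keypoint names and thresholds below are assumptions) might be:

```python
def looks_fallen(keypoints):
    """Flag a possible fall from 2D keypoints (image coordinates, with y
    increasing downward): the head is no longer clearly above the hips
    AND the body's bounding box is wider than it is tall.
    `keypoints` maps a joint name to an (x, y) pair."""
    head_y = keypoints['head'][1]
    hip_y = keypoints['hip'][1]
    xs = [p[0] for p in keypoints.values()]
    ys = [p[1] for p in keypoints.values()]
    width = max(xs) - min(xs)
    height = max(ys) - min(ys)
    return head_y >= hip_y - 0.1 * height and width > height
```

A real detector would smooth this decision over several frames before sending the first warning information, to avoid alarms from transient poses such as bending down.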
In the above embodiment, posture detection based on AI recognition and action recognition based on the ultra-lightweight key point detection algorithm allow an early warning to be sent to the user's mobile terminal in time when a falling posture is detected. This provides an intelligent security function and improves the degree of intelligent integration of the cleaning robot through fusion processing of the data.
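The transition logic from keypoints to a warning can be sketched with a crude geometric heuristic. This is only a stand-in for the patent's keypoint model: a real system would classify posture from a trained ultra-lightweight network, and the bounding-box ratio used here is an assumption.

```python
def posture_from_keypoints(keypoints):
    """Classify posture from 2D body keypoints (x, y) in image coordinates.

    Heuristic stand-in: if the keypoint bounding box is much wider than
    it is tall, the person is likely lying down (fallen).
    """
    xs = [p[0] for p in keypoints]
    ys = [p[1] for p in keypoints]
    width = max(xs) - min(xs)
    height = max(ys) - min(ys)
    return "fallen" if width > 1.5 * height else "upright"

def maybe_send_first_warning(prev_posture, curr_posture, send):
    # Warn only on the transition from a normal posture to a fall,
    # matching the "normal to falling" condition in the text.
    if prev_posture == "upright" and curr_posture == "fallen":
        send("first warning: possible fall detected")
```

A vertical column of keypoints classifies as upright, a horizontal spread as fallen, and the warning fires exactly once at the transition.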
In one embodiment, as shown in fig. 6, after the step of acquiring the image stream data, the steps further include:
Step S610: perform human body feature processing on the image stream data based on an AI recognition algorithm and a stereoscopic vision geometric algorithm to obtain human gait feature information.
Step S620: calibrate the human gait feature information according to the radar point cloud and a ToF algorithm to obtain calibrated human gait feature information.
Step S630: send second warning information to the mobile terminal according to the calibrated human gait feature information.
Specifically, the controller performs human body AI recognition on the image stream data within the field of view, based on an AI recognition algorithm combined with a trained database, to obtain corresponding human body recognition information. The controller can process the image stream data with a stereoscopic vision geometric algorithm to construct three-dimensional geometric information of the human gait, and from the recognition information and this three-dimensional geometric information obtain the human gait feature information. The controller can then calibrate the gait feature information according to the radar point cloud and a ToF algorithm, obtaining calibrated human gait feature information from which the position of the human body is judged. According to the calibrated gait feature information, the controller can send second warning information to the mobile terminal, reminding the user that a stranger may have entered and providing an intelligent security function. The mobile terminal may be, but is not limited to, a mobile phone, a tablet computer, or a smart bracelet.
In one example, the human gait feature information includes shoe length, step length, step frequency, and similar features. The controller can store the user's authenticated, valid gait feature information in advance, compare the calibrated gait feature information obtained from processing against it, and, when the comparison result indicates a stranger's gait features, send the second warning information to the mobile terminal to remind the user that a stranger may have entered.
In the above embodiment, by recognizing human gait features, the controller determines that a stranger has entered the field of view when a stranger's gait feature information is detected and sends the second warning information to the mobile terminal. This reminds the user that a stranger may have entered so that further measures can be taken, provides an intelligent security function, and improves the degree of intelligent integration of the cleaning robot through fusion processing of the data.
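The comparison against enrolled gait profiles can be sketched as a tolerance check over the features named in the text (shoe length, step length, step frequency). The feature keys, tolerance, and matching rule below are illustrative assumptions, not the patent's method.

```python
def is_known_user(observed, enrolled_profiles, tol=0.15):
    """Return True if observed gait features match any enrolled user profile.

    observed / profiles: dicts with keys such as "shoe_length_cm",
    "step_length_cm", "cadence_hz" (names are illustrative).
    A match requires every feature to lie within a relative tolerance;
    no match means the gait belongs to a stranger and a second warning
    would be sent to the mobile terminal.
    """
    for profile in enrolled_profiles:
        if all(abs(observed[k] - profile[k]) <= tol * profile[k] for k in profile):
            return True
    return False
```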
It should be understood that, although the steps in the flowcharts of fig. 2-6 are shown sequentially as indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated herein, the steps are not strictly limited to this order of execution and may be executed in other orders. Moreover, at least some of the steps in fig. 2-6 may include multiple sub-steps or stages that are not necessarily performed at the same moment but may be performed at different moments, and whose order of execution is not necessarily sequential; they may be performed in turn or alternately with at least a portion of the sub-steps or stages of other steps.
In one embodiment, as shown in fig. 7, there is also provided a data processing apparatus of a cleaning robot, including:
a data acquisition unit 710 for acquiring image stream data.
The initial recognition unit 720 is configured to perform obstacle recognition processing on the image stream data based on the AI recognition algorithm and the stereoscopic geometric algorithm, so as to obtain initial obstacle information.
A screening unit 730, configured to screen the initial obstacle information to obtain optimized obstacle information.
A marking unit 740, configured to perform marking processing on the optimized obstacle information to obtain marking information.
A positioning optimization unit 750, configured to process the marking information to obtain optimized positioning data when the positioning optimization condition is met.
For the specific definition of the data processing apparatus of the cleaning robot, reference may be made to the definition of the data processing method of the cleaning robot above, which will not be repeated here. The modules in the data processing apparatus described above may be implemented in whole or in part by software, hardware, or a combination thereof. The modules may be embedded in hardware form in the controller of the cleaning robot, may be independent of that controller, or may be stored in software form in a memory of the cleaning robot, so that the controller can call and execute the operations corresponding to each module.
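The five units above form a straightforward pipeline. The sketch below only wires placeholder callables together to show the data flow between units 710-750; the class and parameter names are assumptions.

```python
class CleaningRobotDataPipeline:
    """Illustrative wiring of the apparatus units (710-750) into one pipeline."""

    def __init__(self, acquire, recognize, screen, mark, optimize):
        self.acquire = acquire      # data acquisition unit 710
        self.recognize = recognize  # initial recognition unit 720
        self.screen = screen        # screening unit 730
        self.mark = mark            # marking unit 740
        self.optimize = optimize    # positioning optimization unit 750

    def run(self, positioning_condition_met):
        image_stream = self.acquire()
        initial_info = self.recognize(image_stream)
        optimized_info = self.screen(initial_info)
        marking_info = self.mark(optimized_info)
        # Positioning optimization runs only when its condition is met,
        # mirroring unit 750 in the text.
        if positioning_condition_met:
            return self.optimize(marking_info)
        return None
```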
In one embodiment, there is also provided a cleaning robot including a cleaning robot main body and a controller provided on the cleaning robot main body; the controller is configured to execute the data processing method of the cleaning robot described above.
Wherein the cleaning robot may be a cleaning robot having a sweeping function. The cleaning robot main body is provided with a camera and a liftable laser radar mechanism.
The controller is used for executing the following steps of the data processing method of the cleaning robot:
acquiring image stream data; performing obstacle recognition processing on the image stream data based on an AI recognition algorithm and a stereoscopic vision geometric algorithm to obtain initial obstacle information; screening the initial obstacle information to obtain optimized obstacle information; marking the optimized obstacle information to obtain marking information; and when the positioning optimization condition is met, processing the marking information to obtain optimized positioning data.
The above embodiment can be applied to a cleaning robot with a liftable radar. By recognizing and locating obstacles, it not only solves obstacle avoidance but also solves the missed-cleaning problem of other laser machines whose bodies are too tall to pass under low obstacles, while still achieving accurate positioning, reducing positioning error, and improving the accuracy of the positioning data.
In one embodiment, there is also provided a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the data processing method of the cleaning robot of any one of the above.
In one example, the computer program when executed by a processor performs the steps of:
acquiring image stream data; performing obstacle recognition processing on the image stream data based on an AI recognition algorithm and a stereoscopic vision geometric algorithm to obtain initial obstacle information; screening the initial obstacle information to obtain optimized obstacle information; marking the optimized obstacle information to obtain marking information; and when the positioning optimization condition is met, processing the marking information to obtain optimized positioning data.
Those skilled in the art will appreciate that implementing all or part of the above-described methods may be accomplished by a computer program stored on a non-transitory computer-readable storage medium, which, when executed, may include the steps of the method embodiments described above. Any reference to memory, storage, a database, or other medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include Read-Only Memory (ROM), Programmable ROM (PROM), Erasable Programmable ROM (EPROM), Electrically Erasable Programmable ROM (EEPROM), or flash memory. Volatile memory can include Random Access Memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDR SDRAM), Enhanced SDRAM (ESDRAM), Synchronous Link DRAM (SLDRAM), Rambus Direct RAM (RDRAM), Direct Rambus Dynamic RAM (DRDRAM), and Rambus Dynamic RAM (RDRAM), among others.
The technical features of the above-described embodiments may be combined arbitrarily. For brevity, not all possible combinations of these technical features are described; however, any combination of these technical features that involves no contradiction should be considered within the scope of this specification.
The above examples merely represent a few embodiments of the present application, which are described in relative detail, but they should not be construed as limiting the scope of the invention. It should be noted that various modifications and improvements could be made by those skilled in the art without departing from the spirit of the present application, and these would fall within the scope of protection of the present application. Accordingly, the scope of protection of the present application shall be determined by the appended claims.

Claims (9)

1. A data processing method of a cleaning robot, comprising the steps of:
acquiring image stream data;
performing obstacle recognition processing on the image stream data based on an AI recognition algorithm and a stereoscopic vision geometric algorithm to obtain initial obstacle information;
screening the initial obstacle information to obtain optimized obstacle information;
performing coordinate conversion processing on the optimized obstacle information to obtain obstacle position information and obstacle range information; according to the obstacle position information and the obstacle range information, marking the obstacles which remain at the same position during a preset time period as static obstacles; sequentially extracting key information of each static obstacle based on a preset interval distance to obtain each piece of key information; acquiring radar point cloud data, and processing each piece of key information and the radar point cloud data to obtain marking information;
and when the condition of the switching process between the falling time and the rising time of the laser radar is met, performing position matching and auxiliary judgment according to the corresponding marking information to obtain optimized positioning data.
2. The data processing method of the cleaning robot according to claim 1, wherein the radar point cloud data is acquired by a laser radar of the cleaning robot.
3. The data processing method of a cleaning robot according to claim 1, wherein the step of screening the initial obstacle information includes:
screening the initial obstacle information based on a preset frame number as a group to obtain screened obstacle information;
and processing the screened obstacle information based on an interpolation algorithm to obtain the optimized obstacle information.
4. The data processing method of a cleaning robot according to claim 1, further comprising, before the step of acquiring image stream data, the steps of:
and correcting the camera based on the preset correction parameters.
5. The data processing method of a cleaning robot according to claim 1, wherein the initial obstacle information includes a center distance of an obstacle from a machine, an obstacle radius, an obstacle center point angle, an obstacle category and probability information, a serial number of the obstacle, and time stamp information.
6. The data processing method of a cleaning robot according to claim 1, further comprising, after the step of acquiring the image stream data, the steps of:
processing the image stream data based on an ultra-light key point detection algorithm to obtain human body posture characteristic information;
and sending first warning information to the mobile terminal according to the human body posture characteristic information.
7. The data processing method of a cleaning robot according to claim 6, further comprising, after the step of acquiring the image stream data, the steps of:
performing human body feature processing on the image stream data based on an AI recognition algorithm and a stereoscopic vision geometric algorithm to obtain human body gait feature information;
performing calibration processing on the human gait feature information according to the radar point cloud and the ToF algorithm to obtain calibrated human gait feature information;
and sending second warning information to the mobile terminal according to the calibrated human gait characteristic information.
8. A data processing apparatus of a cleaning robot, comprising:
a data acquisition unit for acquiring image stream data;
the initial recognition unit is used for carrying out obstacle recognition processing on the image stream data based on an AI recognition algorithm and a stereoscopic vision geometric algorithm to obtain initial obstacle information;
the screening unit is used for screening the initial obstacle information to obtain optimized obstacle information;
the marking unit is used for carrying out coordinate conversion processing on the optimized obstacle information to obtain obstacle position information and obstacle range information; according to the obstacle position information and the obstacle range information, marking the obstacles which remain at the same position during a preset time period as static obstacles; sequentially extracting key information of each static obstacle based on a preset interval distance to obtain each piece of key information; acquiring radar point cloud data, and processing each piece of key information and the radar point cloud data to obtain marking information;
and the positioning optimization unit is used for carrying out position matching and auxiliary judgment according to the corresponding marking information when the condition of the switching process from the beginning of the descending time to the completion of the ascending time of the laser radar is met, so as to obtain optimized positioning data.
9. A cleaning robot, comprising a cleaning robot main body and a controller arranged on the cleaning robot; the controller is for performing the data processing method of the cleaning robot according to any one of claims 1 to 7.
CN202210154792.2A 2022-02-21 2022-02-21 Data processing method and device of cleaning robot and cleaning robot Active CN114557640B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210154792.2A CN114557640B (en) 2022-02-21 2022-02-21 Data processing method and device of cleaning robot and cleaning robot

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210154792.2A CN114557640B (en) 2022-02-21 2022-02-21 Data processing method and device of cleaning robot and cleaning robot

Publications (2)

Publication Number Publication Date
CN114557640A CN114557640A (en) 2022-05-31
CN114557640B true CN114557640B (en) 2023-08-01

Family

ID=81714360

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210154792.2A Active CN114557640B (en) 2022-02-21 2022-02-21 Data processing method and device of cleaning robot and cleaning robot

Country Status (1)

Country Link
CN (1) CN114557640B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113822914A (en) * 2021-09-13 2021-12-21 中国电建集团中南勘测设计研究院有限公司 Method for unifying oblique photography measurement model, computer device, product and medium
CN114051628A (en) * 2020-10-30 2022-02-15 华为技术有限公司 Method and device for determining target object point cloud set

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3428885A4 (en) * 2016-03-09 2019-08-14 Guangzhou Airob Robot Technology Co., Ltd. Map construction method, and correction method and apparatus
CN112748721A (en) * 2019-10-29 2021-05-04 珠海市一微半导体有限公司 Visual robot and cleaning control method, system and chip thereof
CN111427357A (en) * 2020-04-14 2020-07-17 北京石头世纪科技股份有限公司 Robot obstacle avoidance method and device and storage medium
CN111990929B (en) * 2020-08-26 2022-03-22 北京石头世纪科技股份有限公司 Obstacle detection method and device, self-walking robot and storage medium
CN113017492A (en) * 2021-02-23 2021-06-25 江苏柯林博特智能科技有限公司 Object recognition intelligent control system based on cleaning robot
CN113655789A (en) * 2021-08-04 2021-11-16 东风柳州汽车有限公司 Path tracking method, device, vehicle and storage medium

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114051628A (en) * 2020-10-30 2022-02-15 华为技术有限公司 Method and device for determining target object point cloud set
CN113822914A (en) * 2021-09-13 2021-12-21 中国电建集团中南勘测设计研究院有限公司 Method for unifying oblique photography measurement model, computer device, product and medium

Also Published As

Publication number Publication date
CN114557640A (en) 2022-05-31

Similar Documents

Publication Publication Date Title
CN106296578B (en) Image processing method and device
JP5787642B2 (en) Object holding device, method for controlling object holding device, and program
WO2021063128A1 (en) Method for determining pose of active rigid body in single-camera environment, and related apparatus
JPWO2018146769A1 (en) Position control device and position control method
CN107016348B (en) Face detection method and device combined with depth information and electronic device
CN113848943B (en) Grid map correction method and device, storage medium and electronic device
US20120155706A1 (en) Range image generation apparatus, position and orientation measurement apparatus, range image processing apparatus, method of controlling range image generation apparatus, and storage medium
CN112000051A (en) Livestock breeding management system based on Internet of things
CN111243229A (en) Old people falling risk assessment method and system
US20230271325A1 (en) Industrial internet of things systems for monitoring collaborative robots with dual identification, control methods and storage media thereof
CN111684382A (en) Movable platform state estimation method, system, movable platform and storage medium
CN113593035A (en) Motion control decision generation method and device, electronic equipment and storage medium
CN114557640B (en) Data processing method and device of cleaning robot and cleaning robot
JP3952460B2 (en) Moving object detection apparatus, moving object detection method, and moving object detection program
CN109313708B (en) Image matching method and vision system
JP2019012497A (en) Portion recognition method, device, program, and imaging control system
JP7375806B2 (en) Image processing device and image processing method
CN111445411B (en) Image denoising method, image denoising device, computer equipment and storage medium
JP7300331B2 (en) Information processing device for machine learning, information processing method for machine learning, and information processing program for machine learning
CN111553850A (en) Three-dimensional information acquisition method and device based on binocular stereo vision
CN113671458A (en) Target object identification method and device
CN110855891A (en) Method and device for adjusting camera shooting angle based on human body posture and robot
JP4664805B2 (en) Face edge detection device, face edge detection method, and program
WO2022208973A1 (en) Information processing device, information processing method, and program
CN117214966B (en) Image mapping method, device, equipment and medium of millimeter wave security inspection imaging equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20240226

Address after: Unit 613, 6th Floor, Building 7, No.12, Jingsheng North 1st Street, Economic and Technological Development Zone (Tongzhou), Tongzhou District, Beijing, 100000

Patentee after: Beijing Xinming Information Technology Co.,Ltd.

Country or region after: China

Address before: Room 1202, building 3, poly Duhui, No. 290, Hanxi Avenue East, Zhongcun street, Panyu District, Guangzhou, Guangdong 511496

Patentee before: Guangzhou Baole Software Technology Co.,Ltd.

Country or region before: China