CN114557640A - Cleaning robot and data processing method and device thereof - Google Patents


Info

Publication number
CN114557640A
Authority
CN
China
Prior art keywords
information
obstacle
cleaning robot
data
optimized
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210154792.2A
Other languages
Chinese (zh)
Other versions
CN114557640B (en)
Inventor
彭冬旭
王行知
郑卓斌
王立磊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Xinming Information Technology Co ltd
Original Assignee
Guangzhou Baole Software Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Baole Software Technology Co ltd
Priority to CN202210154792.2A
Publication of CN114557640A
Application granted
Publication of CN114557640B
Legal status: Active
Anticipated expiration

Classifications

    • A HUMAN NECESSITIES
    • A47 FURNITURE; DOMESTIC ARTICLES OR APPLIANCES; COFFEE MILLS; SPICE MILLS; SUCTION CLEANERS IN GENERAL
    • A47L DOMESTIC WASHING OR CLEANING; SUCTION CLEANERS IN GENERAL
    • A47L11/00 Machines for cleaning floors, carpets, furniture, walls, or wall coverings
    • A47L11/24 Floor-sweeping machines, motor-driven
    • A47L11/40 Parts or details of machines not provided for in groups A47L11/02 - A47L11/38, or not restricted to one of these groups, e.g. handles, arrangements of switches, skirts, buffers, levers
    • A47L11/4002 Installations of electric equipment
    • A47L11/4008 Arrangements of switches, indicators or the like
    • A47L11/4061 Steering means; Means for avoiding obstacles; Details related to the place where the driver is accommodated
    • A47L2201/00 Robotic cleaning machines, i.e. with automatic control of the travelling movement or the cleaning operation
    • A47L2201/04 Automatic control of the travelling movement; Automatic obstacle detection
    • A47L2201/06 Control of the cleaning action for autonomous devices; Automatic detection of the surface condition before, during or after cleaning
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88 Lidar systems specially adapted for specific applications
    • G01S17/93 Lidar systems specially adapted for specific applications for anti-collision purposes
    • G01S17/931 Lidar systems specially adapted for specific applications for anti-collision purposes of land vehicles

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Electromagnetism (AREA)
  • General Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
  • Manipulator (AREA)

Abstract

The application relates to a data processing method and device for a cleaning robot, and to the cleaning robot itself. The method comprises the steps of: acquiring image stream data; performing obstacle recognition processing on the image stream data based on an AI recognition algorithm and a stereoscopic vision geometric algorithm to obtain initial obstacle information; screening the initial obstacle information to obtain optimized obstacle information; marking the optimized obstacle information to obtain marking information; and, when a positioning optimization condition is met, processing the marking information to obtain optimized positioning data. This enables accurate obstacle recognition and therefore accurate obstacle avoidance, reduces the probability of missed scanning in low-obstacle areas, and achieves accurate positioning of the machine, improving the accuracy of the positioning data.

Description

Cleaning robot and data processing method and device thereof
Technical Field
The present disclosure relates to the field of cleaning robots, and more particularly, to a method and an apparatus for processing data of a cleaning robot, and a cleaning robot.
Background
With the increasing popularity of robot technology, robots are widely used in various industries; for example, cleaning robots with floor-sweeping and similar functions are becoming common in homes and businesses. Technological progress has driven successive upgrades: cleaning robots have developed from the initial random-walk and gyroscope-only types into laser-based and pure-vision types. Existing laser-based cleaning robots generally fix the laser radar on top of the machine for mapping, a structure that prevents the robot from reaching low-obstacle areas, such as under a bedroom bed or under a living-room sofa or tea table, causing large areas to be missed during scanning. A laser-based cleaning robot with an elevating-radar structure solves this problem well, but during the period in which its laser radar changes from starting to descend to completing its ascent, the robot relies only on odometry computed from the gyroscope and wheel-set encoders, which accumulates error, leaving the position inaccurate after the laser radar rises.
In the implementation process, the inventors found that the conventional technology has at least the following problem: in order to reduce missed scanning of low-obstacle areas, existing cleaning robots suffer from large positioning errors and inaccurate positioning data during operation.
Disclosure of Invention
Therefore, it is necessary to provide a data processing method and device for a cleaning robot, and a cleaning robot, to solve the problems of large positioning error, inaccurate positioning data, single-purpose data processing, and a low degree of intelligent integration of existing cleaning robots during operation.
In order to achieve the above object, an embodiment of the present invention provides a data processing method for a cleaning robot, including the steps of:
acquiring image stream data;
performing obstacle identification processing on the image stream data based on an AI identification algorithm and a stereoscopic vision geometric algorithm to obtain initial obstacle information;
screening the initial obstacle information to obtain optimized obstacle information;
marking the optimized obstacle information to obtain marking information;
and when the positioning optimization condition is met, processing the marking information to obtain optimized positioning data.
In one embodiment, the positioning optimization condition includes:
detecting that a laser radar of the cleaning robot is in a switching state; the switching state is the switching process of the laser radar from the time of starting descending to the time of finishing ascending.
In one embodiment, the step of marking the optimized obstacle information comprises:
carrying out coordinate conversion processing on the optimized obstacle information to obtain obstacle position information and obstacle range information;
marking the obstacle objects which are positioned at the same position in a preset time period as static obstacles according to the obstacle position information and the obstacle range information;
sequentially extracting key information of each static obstacle based on a preset interval distance to obtain each key information;
acquiring radar point cloud data, and processing each key information and the radar point cloud data to obtain marking information; the radar point cloud data is acquired by laser radar of the cleaning robot.
In one embodiment, the step of screening the initial obstacle information comprises:
screening each initial obstacle information group based on a preset frame number to obtain screened obstacle information;
and processing the screened obstacle information based on an interpolation algorithm to obtain optimized obstacle information.
In one embodiment, before the step of acquiring the image stream data, the method further comprises the following steps:
and correcting the camera based on preset correction parameters.
In one embodiment, the initial obstacle information includes center distance of the obstacle from the machine, obstacle radius, obstacle center point angle, obstacle category and probability information, serial number of the obstacle, and time stamp information.
In one embodiment, after the step of acquiring the image stream data, the method further comprises the following steps:
processing the image stream data based on an ultra-light key point detection algorithm to obtain human posture characteristic information;
and sending first warning information to the mobile terminal according to the human body posture characteristic information.
In one embodiment, after the step of acquiring the image stream data, the method further comprises the following steps:
performing human body feature processing on the image stream data based on an AI (artificial intelligence) recognition algorithm and a stereoscopic vision geometric algorithm to obtain human gait feature information;
calibrating the human gait feature information according to the radar point cloud and the ToF algorithm to obtain the calibrated human gait feature information;
and sending second warning information to the mobile terminal according to the calibrated human gait feature information.
On the other hand, an embodiment of the present invention further provides a data processing apparatus for a cleaning robot, including:
a data acquisition unit for acquiring image stream data;
the initial identification unit is used for carrying out obstacle identification processing on the image stream data based on an AI (artificial intelligence) identification algorithm and a stereoscopic vision geometric algorithm to obtain initial obstacle information;
the screening unit is used for screening the initial obstacle information to obtain optimized obstacle information;
the marking unit is used for marking the optimized obstacle information to obtain marking information;
and the positioning optimization unit is used for processing the marking information when the positioning optimization condition is met to obtain optimized positioning data.
On the other hand, an embodiment of the present invention further provides a cleaning robot, comprising a cleaning robot main body and a controller arranged on the cleaning robot main body; the controller is used for executing the data processing method of the cleaning robot described above.
One of the above technical solutions has the following advantages and beneficial effects:
In each embodiment of the data processing method of the cleaning robot: image stream data is acquired; obstacle recognition processing is performed on the image stream data based on an AI recognition algorithm and a stereoscopic vision geometric algorithm to obtain initial obstacle information; the initial obstacle information is screened to obtain optimized obstacle information; the optimized obstacle information is marked to obtain marking information; and when the positioning optimization condition is met, the marking information is processed to obtain optimized positioning data. This achieves accurate obstacle recognition and therefore accurate obstacle avoidance, reduces missed scanning of low-obstacle areas, and realizes accurate positioning of the machine, improving the accuracy of the positioning data. The data processing method can be applied to a cleaning robot with an elevating-radar function: through obstacle recognition and positioning processing, it avoids obstacles while also solving the missed-scanning problem of other laser machines whose bodies are too tall to pass under low obstacles, and at the same time achieves accurate positioning, reducing positioning error and improving positioning accuracy.
Drawings
FIG. 1 is a schematic diagram of an application environment of a data processing method of a cleaning robot according to an embodiment;
FIG. 2 is a first flowchart of a data processing method of a cleaning robot according to an embodiment;
FIG. 3 is a second flowchart of a data processing method of the cleaning robot in one embodiment;
FIG. 4 is a schematic flow chart of the obstacle marking process step in one embodiment;
FIG. 5 is a flowchart illustrating the human gesture recognition processing steps in one embodiment;
FIG. 6 is a flow chart illustrating the steps of human gait recognition processing in one embodiment;
fig. 7 is a block diagram of a data processing device of the cleaning robot in one embodiment.
Detailed Description
In order to make the technical solutions better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only partial embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
It should be noted that the terms "first," "second," and the like in the description and claims of this application and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It should be understood that the data so used may be interchanged under appropriate circumstances, such that the embodiments of the application described herein can be implemented in orders other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
In addition, the term "plurality" shall mean two or more.
The data processing method of the cleaning robot provided by the application can be applied to the application environment shown in fig. 1. The cleaning robot comprises a controller 102 and a cleaning robot main body 104, the controller 102 being connected with the cleaning robot main body 104. The controller 102 can be used for: acquiring image stream data; performing obstacle recognition processing on the image stream data based on an AI recognition algorithm and a stereoscopic vision geometric algorithm to obtain initial obstacle information; screening the initial obstacle information to obtain optimized obstacle information; marking the optimized obstacle information to obtain marking information; and, when the positioning optimization condition is met, processing the marking information to obtain optimized positioning data. The cleaning robot may be a cleaning robot with a floor-sweeping function. The cleaning robot further comprises an elevating laser radar mechanism, which can be raised and lowered while scanning, and a camera, which may be used to collect image stream data of the current environment.
In order to solve the problems of large positioning error and inaccurate positioning data of the existing cleaning robot during working, in an embodiment, as shown in fig. 2, a data processing method of the cleaning robot is provided, which is exemplified by applying the method to the controller 102 in fig. 1, and includes the following steps:
step S210, image stream data is acquired.
The image stream data can be acquired by a camera arranged on the cleaning robot. The camera may be, but is not limited to, a monocular camera or a binocular camera. Illustratively, the image stream data includes at least one frame of image data.
Specifically, the camera arranged on the cleaning robot shoots images of the current environment in real time to obtain image stream data; segments of the image stream data are captured and transmitted to the controller, which then receives the image stream data.
In one example, the controller may actively send a data request instruction to the camera, and the camera transmits the image stream data to the controller according to the data request instruction.
Step S220, performing obstacle recognition processing on the image stream data based on an AI (Artificial Intelligence) recognition algorithm and a stereoscopic vision geometric algorithm, to obtain initial obstacle information.
The AI recognition algorithm can perform target recognition on images based on a convolutional neural network to obtain information about the corresponding target objects. The stereoscopic vision geometric algorithm can be used to process images to reconstruct the three-dimensional geometry of the scene.
For example, the controller performs AI recognition processing on the image stream data based on an AI recognition algorithm on the obstacles in the field of view in combination with the trained database, and may further obtain corresponding obstacle recognition information. The controller can process the image stream data based on a stereoscopic vision geometric algorithm to construct and obtain the three-dimensional geometric information of the obstacle. The controller can further obtain initial obstacle information according to the obstacle identification information and the three-dimensional geometric information of the obstacle.
And step S230, screening the initial obstacle information to obtain optimized obstacle information.
The controller can process each frame of image in the image stream data in sequence to obtain the initial obstacle information of the corresponding frame. The controller can then screen each item of initial obstacle information, keeping the items that are stably recognized, to obtain the optimized obstacle information.
For example, the controller may compare the characteristic parameter values of each item of initial obstacle information against a preset threshold condition and determine the items whose characteristic parameter values satisfy the condition to be stably recognized obstacle information.
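The threshold comparison described above can be sketched as a simple filter; the parameter names and threshold values below are assumptions for illustration, since the patent does not specify them:

```python
def filter_stable(observations, min_probability=0.8, min_radius_m=0.02):
    """Keep only detections whose characteristic parameter values clear the
    preset thresholds; parameters and values are illustrative."""
    return [o for o in observations
            if o["probability"] >= min_probability
            and o["radius_m"] >= min_radius_m]

raw = [
    {"probability": 0.93, "radius_m": 0.06},   # stable detection: kept
    {"probability": 0.41, "radius_m": 0.05},   # low confidence: dropped
    {"probability": 0.90, "radius_m": 0.005},  # implausibly small: dropped
]
stable = filter_stable(raw)
```

Detections that fail any one threshold are discarded, so only the stably recognized obstacle information survives into the optimization step.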
And step S240, marking the optimized obstacle information to obtain marked information.
The controller can mark the optimized obstacle information, marking obstacles that remain at the same position within a preset time to obtain the marking information; based on the marking information, the controller can then determine these obstacles as special marker points for the current position.
And step S250, processing the mark information when the positioning optimization condition is met to obtain optimized positioning data.
For example, when the cleaning robot encounters a low-obstacle area, it can lower its overall height by controlling the laser radar mechanism to descend, so that it can scan and clean the low-obstacle area. When the controller detects that the laser radar mechanism is in the switching state, it judges that the positioning optimization condition is met; it can then process the marking information, performing position matching and auxiliary judgment according to the marking information to obtain accurate optimized positioning data.
In the above embodiment: image stream data is acquired; obstacle recognition processing is performed on the image stream data based on an AI recognition algorithm and a stereoscopic vision geometric algorithm to obtain initial obstacle information; the initial obstacle information is screened to obtain optimized obstacle information; the optimized obstacle information is marked to obtain marking information; and when the positioning optimization condition is met, the marking information is processed to obtain optimized positioning data. This achieves accurate obstacle recognition and therefore accurate obstacle avoidance, reduces missed scanning of low-obstacle areas, and realizes accurate positioning of the machine, improving the accuracy of the positioning data. The data processing method can be applied to a cleaning robot with an elevating-radar function: through obstacle recognition and positioning processing, it avoids obstacles while also solving the missed-scanning problem of other laser machines whose bodies are too tall to pass under low obstacles, and at the same time achieves accurate positioning, reducing positioning error and improving positioning accuracy.
In one embodiment, as shown in fig. 3, a data processing method for a cleaning robot is provided, which is described by taking the method as an example applied to the controller 102 in fig. 1, and includes the following steps:
in step S310, image stream data is acquired.
Step S320, obstacle recognition processing is carried out on the image stream data based on the AI recognition algorithm and the stereoscopic vision geometric algorithm, and initial obstacle information is obtained.
And step S330, screening the initial obstacle information to obtain optimized obstacle information.
And step S340, marking the optimized obstacle information to obtain marked information.
For the detailed description of the steps S310, S320, S330 and S340, refer to the description of the above embodiments, and are not repeated herein.
Step S350, when detecting that the laser radar of the cleaning robot is in a switching state, processing the marking information to obtain optimized positioning data; the switching state is the switching process of the laser radar from the time of starting descending to the time of finishing ascending.
The controller can monitor the working state of the laser radar. When it detects that the laser radar is in the switching state, it can perform position matching and auxiliary judgment according to the marking information to obtain optimized positioning data, solving the problem in existing cleaning robots that the laser radar cannot accurately determine the position because of its lifting motion.
For example, the controller can monitor the lifting action of the laser radar in real time, and when the laser radar is detected to be in the lifting action change process from the beginning of descending to the completion of ascending, the marking information is processed, and then the optimized positioning data is obtained.
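The switching-state condition above amounts to an edge-to-edge window check: the fallback applies from the start of descent until the ascent completes. A minimal sketch in Python; the state names and positioning-source labels are illustrative, not from the patent:

```python
# Illustrative lift states for the elevating laser radar. The patent defines
# the switching state as the span from the start of descent to the
# completion of the subsequent ascent.
LOWERING, LOWERED, RAISING, RAISED = "lowering", "lowered", "raising", "raised"

def in_switching_state(lidar_state: str) -> bool:
    """True anywhere in the descend-to-finish-rising cycle, i.e. while the
    laser radar cannot provide a reliable scan."""
    return lidar_state in (LOWERING, LOWERED, RAISING)

def positioning_source(lidar_state: str) -> str:
    # While switching, fall back to matching against the marking information;
    # otherwise trust normal laser-radar localization.
    return "marker_matching" if in_switching_state(lidar_state) else "laser_radar"
```

Only the fully raised state uses laser-radar localization; every other state in the cycle triggers the marking-information fallback.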
In the above embodiment, based on obstacle recognition and positioning processing, the method not only solves obstacle avoidance but also solves the missed-scanning problem of existing cleaning robots whose bodies are too tall to pass under low obstacles. In addition, by performing position-matching optimization when the laser radar is detected to be rising, accurate positioning data is obtained: obstacles are accurately recognized and can therefore be accurately avoided, missed scanning of low-obstacle areas is reduced, accurate positioning of the machine is achieved, and the accuracy of the positioning data is improved.
It should be noted that the cleaning coverage of the cleaning robot of the present application can be improved by at least 10%, the specific figure depending on the number of low obstacles in the user's home environment.
In one embodiment, as shown in fig. 4, the marking process is performed on the optimized obstacle information to obtain the marking information, and the method includes the following steps:
and step S410, performing coordinate conversion processing on the optimized obstacle information to obtain obstacle position information and obstacle range information.
And step S420, marking the obstacles which are positioned at the same position in a preset time period as static obstacles according to the obstacle position information and the obstacle range information.
And step S430, sequentially extracting key information of each static obstacle based on the preset interval distance to obtain each key information.
Step S440, radar point cloud data are obtained, and all key information and the radar point cloud data are processed to obtain marking information; the radar point cloud data are acquired by laser radar of the cleaning robot.
Specifically, the controller may perform coordinate conversion on the optimized obstacle information, for example converting it from the machine coordinate system to the world coordinate system, to obtain obstacle position information and obstacle range information in the world coordinate system. The controller may check, on a preset cycle, whether the obstacle position information and obstacle range information change; if they do not change within a preset time period, the corresponding obstacle is determined to be stationary, and obstacles located at the same position within the preset time period are marked as stationary obstacles. Based on a preset interval distance of the machine's movement, the controller sequentially extracts key information of each stationary obstacle, i.e. it extracts the key information of the stationary obstacles around the machine once every preset distance, obtaining each item of key information. The controller may then acquire the radar point cloud data collected by the laser radar, process each item of key information together with the radar point cloud data to obtain the marking information, and use the marking information as special marker points for the current position. When the laser radar cannot determine the position because of its lifting motion, position matching and auxiliary judgment can be performed according to the corresponding marking information, optimizing the positioning, yielding accurate positioning data, improving the accuracy of the positioning data, and reducing the probability of missed scanning in low-obstacle areas.
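The machine-to-world conversion described above is a standard 2-D rigid transform; a minimal sketch follows, with the pose representation (position plus yaw) as an assumption, since the patent does not specify one:

```python
import math

def machine_to_world(x_m, y_m, robot_x, robot_y, robot_yaw):
    """Convert a point from the machine (robot-local) coordinate system to
    the world coordinate system given the robot pose: rotate by the robot's
    yaw, then translate by its world position."""
    c, s = math.cos(robot_yaw), math.sin(robot_yaw)
    return (robot_x + c * x_m - s * y_m,
            robot_y + s * x_m + c * y_m)

# An obstacle 1 m straight ahead of a robot at (2, 3) facing +90 degrees
wx, wy = machine_to_world(1.0, 0.0, 2.0, 3.0, math.pi / 2)
```

With the same transform applied each cycle, an obstacle whose world coordinates stay fixed while the robot moves can be classified as stationary.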
In one embodiment, the step of screening the initial obstacle information comprises:
screening each initial obstacle information group based on a preset frame number to obtain screened obstacle information; and processing the screened obstacle information based on an interpolation algorithm to obtain optimized obstacle information.
The controller groups the initial obstacle information based on a preset frame count, for example taking every five frames as one group and screening out the stably recognized obstacles within each group, thereby obtaining the screened obstacle information. The controller can then process the screened obstacle information with an interpolation algorithm to obtain the optimized obstacle information, so that the position of the corresponding obstacle can be preliminarily predicted.
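The grouping and interpolation steps can be sketched as follows; the five-frame group size comes from the example above, while linear interpolation is an assumption, since the patent does not fix the interpolation scheme:

```python
def group_by_frames(detections, frames_per_group=5):
    """Split per-frame detections into consecutive groups of a preset
    frame count (five here, as in the example in the text)."""
    return [detections[i:i + frames_per_group]
            for i in range(0, len(detections), frames_per_group)]

def interpolate_position(t, t0, p0, t1, p1):
    """Linear interpolation between two screened detections, giving a
    preliminary position estimate at an intermediate time."""
    w = (t - t0) / (t1 - t0)
    return p0 + w * (p1 - p0)

groups = group_by_frames(list(range(12)), 5)        # 3 groups: 5, 5, 2 frames
x = interpolate_position(1.5, 1.0, 0.80, 2.0, 0.90)  # estimate midway
```

The interpolated value fills in obstacle positions between screened groups, which is what allows the obstacle's position to be preliminarily predicted.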
In one embodiment, before the step of acquiring the image stream data, the method further comprises the steps of: and correcting the camera based on preset correction parameters.
The preset correction parameters can be stored in advance in a memory of the cleaning robot. After the camera is powered on and started, it is first corrected, improving the quality of the images it captures.
In one embodiment, the initial obstacle information includes center distance of the obstacle from the machine, obstacle radius, obstacle center point angle, obstacle category and probability information, serial number of the obstacle, and timestamp information.
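The fields listed above can be modeled as a simple per-frame record. A sketch in Python; the field names and units are illustrative, since the patent lists only the kinds of information carried, not a data layout:

```python
from dataclasses import dataclass

@dataclass
class ObstacleObservation:
    """One obstacle detection from a single image frame; field names and
    units are illustrative assumptions."""
    center_distance_m: float   # distance from the obstacle center to the machine
    radius_m: float            # estimated obstacle radius
    center_angle_deg: float    # bearing angle of the obstacle center point
    category: str              # AI-recognized obstacle category
    probability: float         # recognition confidence in [0, 1]
    serial_number: int         # serial number for tracking across frames
    timestamp_s: float         # timestamp of the source frame

obs = ObstacleObservation(0.85, 0.06, -12.5, "shoe", 0.93, 7, 1024.5)
```

Keeping the serial number and timestamp per record is what lets later steps match the same obstacle across frames and check whether it stays at the same position over a preset period.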
In one embodiment, as shown in fig. 5, after the step of acquiring the image stream data, the following steps are further included:
step S510, processing image stream data based on the ultra-light key point detection algorithm to obtain human posture characteristic information.
And step S520, sending first warning information to the mobile terminal according to the human body posture characteristic information.
Specifically, the controller can perform AI recognition in real time; after detecting that a human body has entered the field of view, it performs human posture detection based on an ultra-light key-point detection algorithm, processing the image stream data to obtain human posture characteristic information. When the controller judges, based on the human posture characteristic information, that the posture has changed from "normal" to "fallen", it sends the first warning information to the mobile terminal to alert the user. This provides an intelligent-security function and can watch over households with elderly people or children, preventing a fall from going unnoticed. The mobile terminal may be, but is not limited to, a mobile phone, a tablet computer, or a smart band.
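The "normal"-to-"fallen" trigger described above can be sketched as an edge-triggered check, so a warning fires once on the transition rather than on every frame in which the person is down; posture labels and function names are illustrative:

```python
def should_send_fall_alert(previous_posture: str, current_posture: str) -> bool:
    """Trigger the first warning only on the 'normal' -> 'fallen'
    transition, so a persistent fallen posture does not repeat alerts."""
    return previous_posture == "normal" and current_posture == "fallen"

alerts = []
history = ["normal", "normal", "fallen", "fallen"]
for prev, cur in zip(history, history[1:]):
    if should_send_fall_alert(prev, cur):
        alerts.append("first_warning")  # would be pushed to the mobile terminal
```

Only the single normal-to-fallen edge in the history produces a warning.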
In the above embodiment, posture detection based on AI identification is combined with action identification based on the ultra-light key point detection algorithm. When a fall is detected from the human posture characteristics, an early warning can be sent to the user's mobile terminal in a timely manner, providing an intelligent security function. Through this fusion processing of data, the degree of intelligent integration of the cleaning robot is improved.
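The warning logic reduces to watching for a posture-label transition. The sketch below assumes the posture detector emits a label per frame; the labels ("normal", "fall") and the warning callback are hypothetical placeholders, not the patent's actual interface.

```python
def monitor_posture(posture_stream, send_warning):
    """Send first warning information whenever the detected posture
    changes from "normal" to "fall". The labels and callback are
    placeholders for the controller's posture-detection output."""
    prev, alerts = None, 0
    for posture in posture_stream:
        if prev == "normal" and posture == "fall":
            send_warning("first warning information: fall detected")
            alerts += 1
        prev = posture
    return alerts
```

Triggering only on the transition, rather than on every "fall" frame, avoids flooding the mobile terminal while the person remains on the ground.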
In one embodiment, as shown in fig. 6, after the step of acquiring the image stream data, the method further comprises the following steps:
Step S610, performing human body characteristic processing on the image stream data based on an AI (artificial intelligence) recognition algorithm and a stereoscopic vision geometric algorithm to obtain human gait characteristic information.
Step S620, calibrating the human gait feature information according to the radar point cloud and a ToF algorithm to obtain calibrated human gait feature information.
Step S630, sending second warning information to the mobile terminal according to the calibrated human gait feature information.
Specifically, the controller performs AI identification of human bodies within the field of view on the image stream data, based on an AI identification algorithm combined with a database obtained by training, to obtain corresponding human body identification information. The controller can process the image stream data based on a stereoscopic vision geometric algorithm to construct three-dimensional geometric information of the human gait. From the human body identification information and this three-dimensional geometric information, the controller can obtain human gait feature information. The controller can then calibrate the human gait feature information according to the radar point cloud and a ToF algorithm to obtain calibrated human gait feature information and determine the position of the human body. According to the calibrated human gait feature information, the controller can send second warning information to the mobile terminal to remind the user that a stranger may have entered, thereby providing an intelligent security function. The mobile terminal may be, but is not limited to, a mobile phone, a tablet computer, a smart band, and the like.
In one example, the human gait feature information includes information such as shoe length, step size, and step frequency. The controller can pre-store valid human gait feature information authenticated by the user, compare the calibrated human gait feature information obtained by processing against it, and, when the gait features of a stranger are detected, send second warning information to the mobile terminal according to the comparison result to remind the user that a stranger may have entered.
In this embodiment, human gait features are identified; when the gait feature information of a stranger is detected, it is determined that a stranger has entered the field of view, and second warning information is sent to the mobile terminal to remind the user so that further measures can be taken. This provides an intelligent security function, and through the fusion processing of data the degree of intelligent integration of the cleaning robot is improved.
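A minimal sketch of the enrolled-gait comparison: features are treated as a (shoe length, step size, step frequency) tuple and matched within a relative tolerance. The feature ordering, tolerance value, and matching rule are all assumptions for illustration; the patent does not specify them.

```python
def is_stranger(observed, enrolled, tol=0.15):
    """Compare calibrated gait features (shoe length, step size,
    step frequency) against enrolled user profiles; flag a stranger
    when no enrolled profile matches within the relative tolerance."""
    def matches(a, b):
        return all(abs(x - y) <= tol * max(abs(y), 1e-6)
                   for x, y in zip(a, b))
    return not any(matches(observed, profile) for profile in enrolled)
```

In practice the second warning would be sent only after several consecutive stranger verdicts, to keep one noisy measurement from raising a false alarm.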
It should be understood that although the steps in the flowcharts of fig. 2-6 are shown in an order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated otherwise herein, there is no strict restriction on the order in which these steps are performed, and they may be performed in other orders. Moreover, at least some of the steps in fig. 2-6 may include multiple sub-steps or stages that are not necessarily performed at the same moment but may be performed at different moments, and these sub-steps or stages are not necessarily performed sequentially but may be performed in turn or alternately with other steps or with at least some of the sub-steps or stages of other steps.
In one embodiment, as shown in fig. 7, there is also provided a data processing apparatus of a cleaning robot, including:
a data acquisition unit 710 for acquiring image stream data.
An initial recognition unit 720, configured to perform obstacle recognition processing on the image stream data based on an AI recognition algorithm and a stereoscopic vision geometric algorithm, to obtain initial obstacle information.
A screening unit 730, configured to screen the initial obstacle information to obtain optimized obstacle information.
A marking unit 740, configured to mark the optimized obstacle information to obtain marking information.
A positioning optimization unit 750, configured to process the marking information when the positioning optimization condition is met, to obtain optimized positioning data.
For specific limitations of the data processing apparatus of the cleaning robot, reference may be made to the above limitations of the data processing method of the cleaning robot, which are not repeated here. The modules in the data processing apparatus described above may be implemented wholly or partially by software, hardware, or a combination thereof. The modules may be embedded in, or independent of, the controller of the cleaning robot in hardware form, or stored in a memory of the cleaning robot in software form, so that the controller can invoke them and execute the operations corresponding to each module.
In one embodiment, there is also provided a cleaning robot including a cleaning robot main body and a controller provided on the cleaning robot main body; the controller is configured to execute the data processing method of the cleaning robot described above.
The cleaning robot can be a cleaning robot with a sweeping function. The cleaning robot main body is provided with a camera and a liftable laser radar mechanism.
The controller is used for executing the following steps of the data processing method of the cleaning robot:
acquiring image stream data; performing obstacle identification processing on the image stream data based on an AI identification algorithm and a stereoscopic vision geometric algorithm to obtain initial obstacle information; screening the initial obstacle information to obtain optimized obstacle information; marking the optimized obstacle information to obtain marking information; and when the positioning optimization condition is met, processing the marking information to obtain optimized positioning data.
The above embodiment can be applied to a cleaning robot with a liftable radar. By identifying and locating obstacles, it not only achieves obstacle avoidance but also solves the missed-sweeping problem of other laser machines, which cannot pass under low obstacles because their bodies are too tall. It can also achieve accurate positioning, reducing positioning error and improving the accuracy of the positioning data.
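The five steps the controller executes can be chained as a pipeline skeleton. The sketch below injects each stage as a callable; the stage names are hypothetical placeholders for the processing described above, not the patent's actual interfaces.

```python
def data_processing_pipeline(stream, recognize, screen, mark, optimize,
                             lidar_switching):
    """Chain the five method steps: acquire -> identify -> screen ->
    mark -> optimize positioning. Stages are passed in as callables so
    the skeleton stays independent of any concrete algorithm."""
    initial = recognize(stream)      # AI + stereo-geometry obstacle recognition
    optimized = screen(initial)      # frame screening + interpolation
    marks = mark(optimized)          # static-obstacle marking with radar point cloud
    if lidar_switching():            # positioning optimization condition met
        return marks, optimize(marks)
    return marks, None               # lidar not switching: marks only
```

The guard mirrors claim 2: positioning optimization runs only while the lidar is in its lowering-to-raising switching state.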
In one embodiment, there is also provided a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the data processing method of the cleaning robot of any one of the above.
In one example, the computer program when executed by the processor implements the steps of:
acquiring image stream data; performing obstacle identification processing on the image stream data based on an AI identification algorithm and a stereoscopic vision geometric algorithm to obtain initial obstacle information; screening the initial obstacle information to obtain optimized obstacle information; marking the optimized obstacle information to obtain marking information; and when the positioning optimization condition is met, processing the marking information to obtain optimized positioning data.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program instructing related hardware; the program can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium used in the embodiments provided herein can include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous link DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
All possible combinations of the technical features of the above embodiments are not described, for the sake of brevity; however, as long as there is no contradiction between these combinations of technical features, they should be considered within the scope of this specification.
The above embodiments express only several implementations of the present application, and their description is relatively specific and detailed, but they should not be construed as limiting the scope of the invention. It should be noted that a person skilled in the art can make several variations and improvements without departing from the concept of the present application, all of which fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (10)

1. A data processing method of a cleaning robot is characterized by comprising the following steps:
acquiring image stream data;
performing obstacle identification processing on the image stream data based on an AI identification algorithm and a stereoscopic vision geometric algorithm to obtain initial obstacle information;
screening the initial obstacle information to obtain optimized obstacle information;
marking the optimized obstacle information to obtain marked information;
and when the positioning optimization condition is met, processing the marking information to obtain optimized positioning data.
2. The data processing method of a cleaning robot according to claim 1, wherein the positioning optimization condition includes:
detecting that a laser radar of the cleaning robot is in a switching state; the switching state is a switching process of the laser radar from a descending starting time to a rising finishing time.
3. The data processing method of a cleaning robot according to claim 2, wherein the step of performing a marking process on the optimized obstacle information includes:
performing coordinate conversion processing on the optimized obstacle information to obtain obstacle position information and obstacle range information;
according to the obstacle position information and the obstacle range information, marking the obstacles which are positioned at the same position in a preset time period as static obstacles;
sequentially extracting key information of each static obstacle based on a preset spacing distance to obtain each key information;
acquiring radar point cloud data, and processing each piece of key information and the radar point cloud data to obtain the marking information; the radar point cloud data is acquired by laser radar of the cleaning robot.
4. The data processing method of a cleaning robot according to claim 2, wherein the step of screening the initial obstacle information includes:
screening each initial obstacle information group based on a preset frame number to obtain screened obstacle information;
and processing the screened obstacle information based on an interpolation algorithm to obtain the optimized obstacle information.
5. The data processing method of a cleaning robot according to claim 1, further comprising, before the step of acquiring image stream data, the steps of:
and correcting the camera based on preset correction parameters.
6. The data processing method of a cleaning robot according to claim 1, wherein the initial obstacle information includes a center distance of the obstacle from the machine, an obstacle radius, an obstacle center point angle, obstacle category and probability information, a serial number of the obstacle, and time stamp information.
7. The data processing method of a cleaning robot according to claim 1, further comprising, after the step of acquiring the image stream data, the steps of:
processing the image stream data based on an ultra-light key point detection algorithm to obtain human posture characteristic information;
and sending first warning information to the mobile terminal according to the human body posture characteristic information.
8. The data processing method of a cleaning robot according to claim 7, further comprising, after the step of acquiring image stream data, the steps of:
carrying out human body characteristic processing on the image stream data based on an AI (artificial intelligence) recognition algorithm and a stereoscopic vision geometric algorithm to obtain human body gait characteristic information;
calibrating the human gait feature information according to the radar point cloud and the ToF algorithm to obtain the calibrated human gait feature information;
and sending second warning information to the mobile terminal according to the calibrated human gait feature information.
9. A data processing device of a cleaning robot, comprising:
a data acquisition unit for acquiring image stream data;
the initial identification unit is used for carrying out obstacle identification processing on the image stream data based on an AI (artificial intelligence) identification algorithm and a stereoscopic vision geometric algorithm to obtain initial obstacle information;
the screening unit is used for screening the initial obstacle information to obtain optimized obstacle information;
the marking unit is used for marking the optimized obstacle information to obtain marking information;
and the positioning optimization unit is used for processing the marking information when the positioning optimization condition is met to obtain optimized positioning data.
10. A cleaning robot, characterized by comprising a cleaning robot main body and a controller arranged on the cleaning robot main body; the controller is configured to perform the data processing method of the cleaning robot of any one of claims 1 to 8.
CN202210154792.2A 2022-02-21 2022-02-21 Data processing method and device of cleaning robot and cleaning robot Active CN114557640B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210154792.2A CN114557640B (en) 2022-02-21 2022-02-21 Data processing method and device of cleaning robot and cleaning robot

Publications (2)

Publication Number Publication Date
CN114557640A true CN114557640A (en) 2022-05-31
CN114557640B CN114557640B (en) 2023-08-01

Family

ID=81714360

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210154792.2A Active CN114557640B (en) 2022-02-21 2022-02-21 Data processing method and device of cleaning robot and cleaning robot

Country Status (1)

Country Link
CN (1) CN114557640B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017152390A1 (en) * 2016-03-09 2017-09-14 广州艾若博机器人科技有限公司 Map construction method, and correction method and apparatus
CN111990929A (en) * 2020-08-26 2020-11-27 北京石头世纪科技股份有限公司 Obstacle detection method and device, self-walking robot and storage medium
CN112748721A (en) * 2019-10-29 2021-05-04 珠海市一微半导体有限公司 Visual robot and cleaning control method, system and chip thereof
CN113017492A (en) * 2021-02-23 2021-06-25 江苏柯林博特智能科技有限公司 Object recognition intelligent control system based on cleaning robot
WO2021208530A1 (en) * 2020-04-14 2021-10-21 北京石头世纪科技股份有限公司 Robot obstacle avoidance method, device, and storage medium
CN113655789A (en) * 2021-08-04 2021-11-16 东风柳州汽车有限公司 Path tracking method, device, vehicle and storage medium
CN113822914A (en) * 2021-09-13 2021-12-21 中国电建集团中南勘测设计研究院有限公司 Method for unifying oblique photography measurement model, computer device, product and medium
CN114051628A (en) * 2020-10-30 2022-02-15 华为技术有限公司 Method and device for determining target object point cloud set

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
翟敬梅; 刘坤; 徐晓: "Design and method of an autonomous navigation system for indoor mobile robots", no. 04 *
陆柽堂; 林桂锋; 林位龙; 黄程怀; 孙宝福: "On implementing an optimized path algorithm for sweeping robots", no. 05 *

Also Published As

Publication number Publication date
CN114557640B (en) 2023-08-01

Similar Documents

Publication Publication Date Title
CN109919132B (en) Pedestrian falling identification method based on skeleton detection
JP6554169B2 (en) Object recognition device and object recognition system
CN105283129B (en) Information processor, information processing method
JP6973258B2 (en) Image analyzers, methods and programs
US20210375036A1 (en) Three-dimensional reconstruction method, apparatus and system, model training method and storage medium
CN107016348B (en) Face detection method and device combined with depth information and electronic device
KR100695945B1 (en) The system for tracking the position of welding line and the position tracking method thereof
CN104458748A (en) Aluminum profile surface defect detecting method based on machine vision
CN110378182B (en) Image analysis device, image analysis method, and recording medium
US20230271325A1 (en) Industrial internet of things systems for monitoring collaborative robots with dual identification, control methods and storage media thereof
WO2023165505A1 (en) Tooth correction effect evaluation method, apparatus, device, and storage medium
CN111243229A (en) Old people falling risk assessment method and system
CN112190258B (en) Seat angle adjusting method and device, storage medium and electronic equipment
KR101379438B1 (en) Monitorinr system and method for wire drawing machine
CN110855891A (en) Method and device for adjusting camera shooting angle based on human body posture and robot
CN114557640A (en) Cleaning robot and data processing method and device thereof
CN109313708B (en) Image matching method and vision system
CN114199127A (en) Automobile part size detection system and method based on machine vision
CN110263754B (en) Method and device for removing shading of off-screen fingerprint, computer equipment and storage medium
CN112748721A (en) Visual robot and cleaning control method, system and chip thereof
CN110338835A (en) A kind of intelligent scanning stereoscopic monitoring method and system
CN116245929A (en) Image processing method, system and storage medium
CN112515662B (en) Sitting posture assessment method, device, computer equipment and storage medium
CN115321322A (en) Control method, device and equipment for elevator car door and storage medium
CN114648493A (en) Fastening piece fastening method and system and computer device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20240226

Address after: Unit 613, 6th Floor, Building 7, No.12, Jingsheng North 1st Street, Economic and Technological Development Zone (Tongzhou), Tongzhou District, Beijing, 100000

Patentee after: Beijing Xinming Information Technology Co.,Ltd.

Country or region after: China

Address before: Room 1202, building 3, poly Duhui, No. 290, Hanxi Avenue East, Zhongcun street, Panyu District, Guangzhou, Guangdong 511496

Patentee before: Guangzhou Baole Software Technology Co.,Ltd.

Country or region before: China