Disclosure of Invention
Therefore, it is necessary to provide a data processing method and device for a cleaning robot, and the cleaning robot itself, which solve the problems of large positioning error, inaccurate positioning data, single-purpose data processing, and a low degree of intelligent integration in existing cleaning robots during operation.
In order to achieve the above object, an embodiment of the present invention provides a data processing method for a cleaning robot, including the steps of:
acquiring image stream data;
performing obstacle identification processing on the image stream data based on an AI identification algorithm and a stereoscopic vision geometric algorithm to obtain initial obstacle information;
screening the initial obstacle information to obtain optimized obstacle information;
marking the optimized obstacle information to obtain marking information;
and when the positioning optimization condition is met, processing the marking information to obtain optimized positioning data.
In one embodiment, the positioning optimization condition includes:
detecting that a laser radar of the cleaning robot is in a switching state; the switching state is the period from the moment the laser radar starts to descend until it finishes ascending.
In one embodiment, the step of marking the optimized obstacle information comprises:
carrying out coordinate conversion processing on the optimized obstacle information to obtain obstacle position information and obstacle range information;
marking the obstacle objects which are positioned at the same position in a preset time period as static obstacles according to the obstacle position information and the obstacle range information;
sequentially extracting key information of each static obstacle based on a preset interval distance to obtain each key information;
acquiring radar point cloud data, and processing each key information and the radar point cloud data to obtain marking information; the radar point cloud data is acquired by laser radar of the cleaning robot.
In one embodiment, the step of screening the initial obstacle information comprises:
screening each initial obstacle information group based on a preset frame number to obtain screened obstacle information;
and processing the screened obstacle information based on an interpolation algorithm to obtain optimized obstacle information.
In one embodiment, before the step of acquiring the image stream data, the method further comprises the following steps:
and correcting the camera based on preset correction parameters.
In one embodiment, the initial obstacle information includes center distance of the obstacle from the machine, obstacle radius, obstacle center point angle, obstacle category and probability information, serial number of the obstacle, and time stamp information.
In one embodiment, after the step of acquiring the image stream data, the method further comprises the following steps:
processing the image stream data based on an ultra-light key point detection algorithm to obtain human posture characteristic information;
and sending first warning information to the mobile terminal according to the human body posture characteristic information.
In one embodiment, after the step of acquiring the image stream data, the method further comprises the following steps:
carrying out human body characteristic processing on the image stream data based on an AI (artificial intelligence) recognition algorithm and a stereoscopic vision geometric algorithm to obtain human body gait characteristic information;
calibrating the human gait feature information according to the radar point cloud and the ToF algorithm to obtain the calibrated human gait feature information;
and sending second warning information to the mobile terminal according to the calibrated human gait feature information.
On the other hand, an embodiment of the present invention further provides a data processing apparatus for a cleaning robot, including:
a data acquisition unit for acquiring image stream data;
the initial identification unit is used for carrying out obstacle identification processing on the image stream data based on an AI (artificial intelligence) identification algorithm and a stereoscopic vision geometric algorithm to obtain initial obstacle information;
the screening unit is used for screening the initial obstacle information to obtain optimized obstacle information;
the marking unit is used for marking the optimized obstacle information to obtain marking information;
and the positioning optimization unit is used for processing the marking information when the positioning optimization condition is met to obtain optimized positioning data.
On the other hand, an embodiment of the present invention further provides a cleaning robot, which comprises a cleaning robot main body and a controller arranged on the cleaning robot main body; the controller is used for executing the data processing method of the cleaning robot described above.
One of the above technical solutions has the following advantages and beneficial effects:
in each embodiment of the data processing method for the cleaning robot, image stream data is acquired; obstacle identification processing is performed on the image stream data based on an AI identification algorithm and a stereoscopic vision geometric algorithm to obtain initial obstacle information; the initial obstacle information is screened to obtain optimized obstacle information; the optimized obstacle information is marked to obtain marking information; and when the positioning optimization condition is met, the marking information is processed to obtain optimized positioning data. Obstacles can thus be identified accurately and avoided accurately, missed sweeping of low-obstacle areas can be reduced, and the machine can be positioned accurately, which improves the accuracy of the positioning data. The data processing method can be applied to a cleaning robot with a liftable radar, which identifies and positions obstacles; while avoiding obstacles, it also solves the missed sweeping that occurs when other laser machines cannot enter the space under low obstacles because the machine body is too tall, achieves accurate positioning, reduces positioning error, and improves positioning accuracy.
Detailed Description
In order to make the technical solutions better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only partial embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
It should be noted that the terms "first," "second," and the like in the description and claims of this application and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It should be understood that the data so used may be interchanged under appropriate circumstances such that embodiments of the application described herein may be used. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
In addition, the term "plurality" means two or more.
The data processing method of the cleaning robot provided by the present application can be applied to the application environment shown in fig. 1. The cleaning robot comprises a controller 102 and a cleaning robot main body 104, and the controller 102 is connected with the cleaning robot main body 104. The controller 102 can be used for acquiring image stream data; performing obstacle identification processing on the image stream data based on an AI identification algorithm and a stereoscopic vision geometric algorithm to obtain initial obstacle information; screening the initial obstacle information to obtain optimized obstacle information; marking the optimized obstacle information to obtain marking information; and, when the positioning optimization condition is met, processing the marking information to obtain optimized positioning data. The cleaning robot may be a cleaning robot having a sweeping function. The cleaning robot further comprises a liftable laser radar mechanism, which can scan while being raised and lowered, and a camera, which can be used to collect image stream data of the current environment.
In order to solve the problems of large positioning error and inaccurate positioning data of the existing cleaning robot during working, in an embodiment, as shown in fig. 2, a data processing method of the cleaning robot is provided, which is exemplified by applying the method to the controller 102 in fig. 1, and includes the following steps:
step S210, image stream data is acquired.
The image stream data can be acquired by a camera arranged on the cleaning robot. The camera may be, but is not limited to, a monocular camera or a binocular camera. Illustratively, the image stream data includes at least one frame of image data.
Specifically, a camera arranged on the cleaning robot captures images of the current environment in real time to obtain image stream data; the image stream data is clipped and transmitted to the controller, which then receives it.
In one example, the controller may actively send a data request instruction to the camera, and the camera transmits the image stream data to the controller according to the data request instruction.
Step S220, performing obstacle recognition processing on the image stream data based on an AI (Artificial Intelligence) recognition algorithm and a stereoscopic vision geometric algorithm, to obtain initial obstacle information.
The AI identification algorithm can perform target identification on the image based on a convolutional neural network, thereby obtaining information on the corresponding target object. The stereoscopic vision geometric algorithm can be used to process images to reconstruct the three-dimensional geometry of the scene.
For example, the controller performs AI recognition processing on the obstacles in the field of view in the image stream data, based on an AI recognition algorithm combined with a trained database, to obtain corresponding obstacle recognition information. The controller can process the image stream data based on a stereoscopic vision geometric algorithm to construct the three-dimensional geometric information of the obstacle. The controller can then obtain the initial obstacle information from the obstacle recognition information and the three-dimensional geometric information of the obstacle.
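By way of illustration, the fusion of a single AI detection with stereoscopic geometry can be sketched as follows. The camera parameters, detection format, and field names below are illustrative assumptions rather than values from the present application; the record fields mirror the initial obstacle information listed later (center distance, obstacle radius, center point angle, category and probability, serial number, timestamp):

```python
import math
import time

# Hypothetical stereo parameters (assumptions, not from the source):
FOCAL_PX = 700.0    # focal length, in pixels
BASELINE_M = 0.06   # stereo baseline, in metres
IMG_CX = 320.0      # principal point x, in pixels

def initial_obstacle_info(det, disparity_px, serial):
    """Fuse one AI detection with stereo geometry into an initial
    obstacle record whose fields mirror those listed in the text."""
    depth_m = FOCAL_PX * BASELINE_M / disparity_px        # Z = f * B / d
    # Horizontal offset of the box centre from the optical axis.
    dx_m = (det["cx_px"] - IMG_CX) * depth_m / FOCAL_PX
    return {
        "center_distance_m": math.hypot(depth_m, dx_m),
        "radius_m": det["width_px"] * depth_m / FOCAL_PX / 2.0,
        "center_angle_deg": math.degrees(math.atan2(dx_m, depth_m)),
        "category": det["category"],
        "probability": det["score"],
        "serial": serial,
        "timestamp": time.time(),
    }

det = {"cx_px": 400.0, "width_px": 70.0, "category": "sock", "score": 0.91}
info = initial_obstacle_info(det, disparity_px=35.0, serial=1)
```

In this sketch the depth follows the standard stereo triangulation relation, while the category and probability come directly from the AI detector.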
And step S230, screening the initial obstacle information to obtain optimized obstacle information.
The controller can process each frame of image in the image stream data in sequence to obtain initial obstacle information for the corresponding frame. The controller can then screen the initial obstacle information, keeping the stably recognized entries, to obtain the optimized obstacle information.
For example, the controller may compare the corresponding characteristic parameter value of the initial obstacle information with a preset threshold condition, and determine the initial obstacle information whose corresponding characteristic parameter value satisfies the preset threshold condition as the stably recognized obstacle information.
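A minimal sketch of such a threshold-based stability check might look like the following; the threshold values and record fields are assumptions for illustration:

```python
def stably_recognized(frames, min_score=0.8, max_jitter_m=0.05):
    """Keep an obstacle only if its confidence stays above a threshold
    and its measured centre distance barely jitters across frames.
    (Threshold values here are illustrative assumptions.)"""
    scores = [f["probability"] for f in frames]
    dists = [f["center_distance_m"] for f in frames]
    return (min(scores) >= min_score
            and max(dists) - min(dists) <= max_jitter_m)

frames = [
    {"probability": 0.90, "center_distance_m": 1.20},
    {"probability": 0.88, "center_distance_m": 1.22},
    {"probability": 0.91, "center_distance_m": 1.21},
]
```

An obstacle passing this check would be treated as stably recognized; one whose score dips or whose distance jumps between frames would be discarded.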
And step S240, marking the optimized obstacle information to obtain marked information.
The controller can mark the optimized obstacle information: obstacles located at the same position within a preset time are marked to obtain marking information, and the controller can then treat these obstacles as special mark points for the current position based on the marking information.
And step S250, processing the mark information when the positioning optimization condition is met to obtain optimized positioning data.
For example, when the cleaning robot encounters a low-obstacle area, the overall height of the cleaning robot can be reduced by controlling the laser radar mechanism to descend, so that the cleaning robot can scan and clean the low-obstacle area. When the controller detects that the laser radar mechanism is in the switching state, it judges that the positioning optimization condition is met; it can then process the marking information, performing position matching and auxiliary judgment according to the marking information to obtain accurate optimized positioning data.
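The positioning optimization condition can be sketched as a simple state check; the state names below are illustrative assumptions, with the switching state covering everything from the start of descent to the completion of ascent:

```python
from enum import Enum, auto

class LidarState(Enum):
    UP = auto()          # fully raised, scanning normally
    DESCENDING = auto()  # lowering to pass under a low obstacle
    DOWN = auto()        # fully lowered
    ASCENDING = auto()   # rising back to the raised position

# The "switching state" spans from the start of descent until ascent
# completes, so every state except fully-raised UP qualifies.
SWITCHING = {LidarState.DESCENDING, LidarState.DOWN, LidarState.ASCENDING}

def positioning_optimization_met(state):
    """Return True while mark-based positioning should substitute for
    the lidar, i.e. while the lidar is mid-lift."""
    return state in SWITCHING
```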
In the above embodiment, image stream data is acquired; obstacle identification processing is performed on the image stream data based on an AI identification algorithm and a stereoscopic vision geometric algorithm to obtain initial obstacle information; the initial obstacle information is screened to obtain optimized obstacle information; the optimized obstacle information is marked to obtain marking information; and when the positioning optimization condition is met, the marking information is processed to obtain optimized positioning data. Obstacles can thus be identified accurately and avoided accurately, missed sweeping of low-obstacle areas can be reduced, and the machine can be positioned accurately, which improves the accuracy of the positioning data. The data processing method can be applied to a cleaning robot with a liftable radar, which identifies and positions obstacles; while avoiding obstacles, it also solves the missed sweeping that occurs when other laser machines cannot enter the space under low obstacles because the machine body is too tall, achieves accurate positioning, reduces positioning error, and improves positioning accuracy.
In one embodiment, as shown in fig. 3, a data processing method for a cleaning robot is provided, which is described by taking the method as an example applied to the controller 102 in fig. 1, and includes the following steps:
in step S310, image stream data is acquired.
Step S320, obstacle recognition processing is carried out on the image stream data based on the AI recognition algorithm and the stereoscopic vision geometric algorithm, and initial obstacle information is obtained.
And step S330, screening the initial obstacle information to obtain optimized obstacle information.
And step S340, marking the optimized obstacle information to obtain marked information.
For a detailed description of steps S310, S320, S330 and S340, refer to the description of the above embodiments, which is not repeated here.
Step S350, when detecting that the laser radar of the cleaning robot is in the switching state, processing the marking information to obtain optimized positioning data; the switching state is the period from the moment the laser radar starts to descend until it finishes ascending.
The controller can monitor the operating state of the laser radar. When it detects that the laser radar is in the switching state, it can perform position matching and auxiliary judgment according to the marking information to obtain the optimized positioning data, solving the problem in existing cleaning robots that the laser radar cannot accurately determine the position because of its lifting motion.
For example, the controller can monitor the lifting action of the laser radar in real time; when it detects that the laser radar is in the lifting process from the start of descent to the completion of ascent, it processes the marking information to obtain the optimized positioning data.
In the above embodiment, based on the identification and positioning of obstacles, the method not only achieves obstacle avoidance but also solves the missed sweeping of existing cleaning robots that cannot enter the space under low obstacles because the machine body is too tall. In addition, by performing position-matching optimization when the laser radar is detected to be rising, accurate positioning data are obtained: obstacles are identified accurately and can therefore be avoided accurately, missed sweeping of low-obstacle areas is reduced, the machine is positioned accurately, and the accuracy of the positioning data is improved.
It should be noted that the cleaning effect of the cleaning robot of the present application can be improved by at least 10%, the exact figure depending on the number of low obstacles in the user's home environment.
In one embodiment, as shown in fig. 4, the marking processing of the optimized obstacle information to obtain the marking information includes the following steps:
and step S410, performing coordinate conversion processing on the optimized obstacle information to obtain obstacle position information and obstacle range information.
And step S420, marking the obstacles which are positioned at the same position in a preset time period as static obstacles according to the obstacle position information and the obstacle range information.
And step S430, sequentially extracting key information of each static obstacle based on the preset interval distance to obtain each key information.
Step S440, radar point cloud data are obtained, and all key information and the radar point cloud data are processed to obtain marking information; the radar point cloud data are acquired by laser radar of the cleaning robot.
Specifically, the controller may perform coordinate conversion processing on the optimized obstacle information, for example converting it from the machine coordinate system to the world coordinate system, to obtain the obstacle position information and obstacle range information in the world coordinate system. The controller may determine, on a preset period, whether the obstacle position information and the obstacle range information have changed; if they have not changed within the preset time period, the corresponding obstacle is determined to be a stationary obstacle, and any obstacle located at the same position within the preset time period is marked as a stationary obstacle. The controller sequentially extracts key information of each stationary obstacle based on a preset interval distance of the machine's movement, that is, it extracts the key information of the stationary obstacles around the machine every preset distance, thereby obtaining each piece of key information. The controller can acquire the radar point cloud data collected by the laser radar, process each piece of key information in combination with the radar point cloud data to obtain the marking information, and use the marking information as special mark points for the current position. When the laser radar cannot determine the position because of its lifting motion, position matching and auxiliary judgment can be carried out according to the corresponding marking information, optimizing the positioning, yielding accurate positioning data, improving the accuracy of the positioning data, and reducing the probability of missed sweeping in low-obstacle areas.
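Two of the steps above, the conversion to the world coordinate system and the marking of stationary obstacles, can be sketched as follows; the pose format and the position tolerance are illustrative assumptions, not values from the present application:

```python
import math

def to_world(robot_pose, dist_m, angle_deg):
    """Convert an obstacle observed in the robot frame (range and
    bearing) into world coordinates, given the robot's world pose
    (x, y, heading in degrees). Pose format is an assumption."""
    x, y, heading = robot_pose
    theta = math.radians(heading + angle_deg)
    return (x + dist_m * math.cos(theta), y + dist_m * math.sin(theta))

def mark_static(world_positions, tol_m=0.05):
    """Mark an obstacle as stationary if all its world positions over
    the preset period stay within a small tolerance (value assumed)."""
    xs = [p[0] for p in world_positions]
    ys = [p[1] for p in world_positions]
    return (max(xs) - min(xs) <= tol_m) and (max(ys) - min(ys) <= tol_m)
```

An obstacle whose converted positions drift beyond the tolerance would be treated as moving and excluded from the mark points.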
In one embodiment, the step of screening the initial obstacle information comprises:
screening each initial obstacle information group based on a preset frame number to obtain screened obstacle information; and processing the screened obstacle information based on an interpolation algorithm to obtain optimized obstacle information.
The controller screens the initial obstacle information in groups based on a preset number of frames, for example screening the stably identified obstacles in groups of five frames, to obtain the screened obstacle information. The controller can then process the screened obstacle information with an interpolation algorithm to obtain the optimized obstacle information, so that the position of the corresponding obstacle can be preliminarily predicted.
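A minimal sketch of the frame-group screening and the interpolation step might look like the following; the record format, the group size of five, and the use of linear interpolation are assumptions for illustration:

```python
def screen_groups(records, group_size=5):
    """Group per-frame obstacle records (e.g. five frames per group,
    as in the text) and keep one representative per stable group.
    A record of None stands for a frame with no stable detection."""
    kept = []
    for i in range(0, len(records) - group_size + 1, group_size):
        group = records[i:i + group_size]
        if all(r is not None for r in group):   # stably recognized
            kept.append(group[-1])
    return kept

def interpolate_distance(t, samples):
    """Linearly interpolate the centre distance between screened
    (time, distance) samples; a minimal stand-in for the
    interpolation algorithm mentioned in the text."""
    for (t0, d0), (t1, d1) in zip(samples, samples[1:]):
        if t0 <= t <= t1:
            return d0 + (d1 - d0) * (t - t0) / (t1 - t0)
    raise ValueError("t outside sampled range")
```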
In one embodiment, before the step of acquiring the image stream data, the method further comprises the steps of: and correcting the camera based on preset correction parameters.
The preset correction parameters can be stored in advance in a memory of the cleaning robot; after the camera is powered on and started, it is corrected first, which improves the quality of the images it captures.
In one embodiment, the initial obstacle information includes center distance of the obstacle from the machine, obstacle radius, obstacle center point angle, obstacle category and probability information, serial number of the obstacle, and timestamp information.
In one embodiment, as shown in fig. 5, after the step of acquiring the image stream data, the following steps are further included:
step S510, processing image stream data based on the ultra-light key point detection algorithm to obtain human posture characteristic information.
And step S520, sending first warning information to the mobile terminal according to the human body posture characteristic information.
Specifically, the controller can perform AI identification in real time; after capturing a human body entering the field of view, it performs human body posture detection based on an ultra-light key point detection algorithm, processing the image stream data to obtain the human body posture characteristic information. When the controller judges, based on the human body posture characteristic information, that the human posture has changed from "normal" to "fallen", it can send the first warning information to the mobile terminal to remind the user, providing an intelligent security function: in a household with elderly people or children, a fall will not go undiscovered. The mobile terminal may be, but is not limited to, a mobile phone, a tablet computer, a smart band, and the like.
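A crude sketch of fall detection from 2D keypoints, under the assumption (not stated in the source) that a fallen body appears wider than it is tall in image coordinates, might look like this; the keypoint names and the alert rule are illustrative:

```python
def posture_from_keypoints(kpts):
    """Classify posture from 2D keypoints (pixel coordinates, with y
    growing downward), a stand-in for the ultra-light keypoint
    detector's output. If the head-to-ankle vertical extent is
    smaller than the horizontal extent, treat the body as
    horizontal, i.e. fallen. Heuristic is an assumption."""
    head_y = kpts["head"][1]
    ankle_y = (kpts["l_ankle"][1] + kpts["r_ankle"][1]) / 2.0
    body_h = abs(ankle_y - head_y)
    body_w = abs(kpts["l_ankle"][0] - kpts["head"][0])
    return "fallen" if body_h < body_w else "normal"

def maybe_alert(prev, curr):
    """Send the first warning only on a normal -> fallen transition,
    matching the text's 'normal' to 'fallen' judgment."""
    return prev == "normal" and curr == "fallen"
```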
In the above embodiment, posture detection based on AI identification and action identification based on the ultra-light key point detection algorithm allow a timely warning to be sent to the user's mobile terminal when the detected human posture exhibits the characteristics of a fall, providing an intelligent security function; the fusion processing of data improves the degree of intelligent integration of the cleaning robot.
In one embodiment, as shown in fig. 6, after the step of acquiring the image stream data, the method further comprises the following steps:
and step S610, performing human body characteristic processing on the image stream data based on an AI (artificial intelligence) recognition algorithm and a stereoscopic vision geometric algorithm to obtain human body gait characteristic information.
And S620, calibrating the human gait feature information according to the radar point cloud and the ToF algorithm to obtain the calibrated human gait feature information.
And step S630, sending second warning information to the mobile terminal according to the calibrated human gait feature information.
Specifically, the controller performs human body AI identification processing within the field of view on the image stream data, based on an AI identification algorithm combined with a trained database, thereby obtaining corresponding human body identification information. The controller can process the image stream data based on a stereoscopic vision geometric algorithm to construct the three-dimensional geometric information of the human gait. The controller can then obtain the human gait feature information from the human body identification information and the three-dimensional geometric information of the gait. The controller can calibrate the human gait feature information according to the radar point cloud and the ToF algorithm to obtain the calibrated human gait feature information and then judge the position of the human body. The controller can send second warning information to the mobile terminal according to the calibrated human gait feature information to remind the user that a stranger may have entered, providing an intelligent security function. The mobile terminal may be, but is not limited to, a mobile phone, a tablet computer, a smart band, and the like.
In one example, the human gait feature information includes information such as shoe length, step size, and step frequency. The controller can pre-store valid human gait feature information authenticated by the user; it can then compare the calibrated human gait feature information obtained by processing with the valid information and, when the gait feature information of a stranger is detected, send second warning information to the mobile terminal according to the comparison result to remind the user that a stranger may have entered.
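The comparison against enrolled gait profiles can be sketched as follows; the feature names (shoe length, step size, step frequency, per the example above) and the relative tolerance are illustrative assumptions:

```python
def is_known_person(observed, enrolled, tol=0.15):
    """Compare calibrated gait features against pre-stored, user-
    authenticated profiles. A person is 'known' if every feature is
    within a relative tolerance of some enrolled profile; tolerance
    and feature keys are assumptions for illustration."""
    keys = ("shoe_len_m", "step_len_m", "step_freq_hz")
    for profile in enrolled:
        if all(abs(observed[k] - profile[k]) <= tol * profile[k]
               for k in keys):
            return True
    return False

enrolled = [{"shoe_len_m": 0.27, "step_len_m": 0.70, "step_freq_hz": 1.8}]
```

When this check fails for every enrolled profile, the gait belongs to a stranger and the second warning would be sent.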
In the above embodiment, by identifying human gait features, when the gait feature information of a stranger is detected, it is judged that a stranger has entered the field of view, and second warning information is sent to the mobile terminal to remind the user that a stranger may have entered, so that the user can take further measures; this provides an intelligent security function, and the fusion processing of data improves the degree of intelligent integration of the cleaning robot.
It should be understood that, although the various steps in the flow charts of figs. 2-6 are shown in the order indicated by the arrows, these steps are not necessarily performed in that order. Unless explicitly stated otherwise herein, there is no strict restriction on the order of these steps, and they may be performed in other orders. Moreover, at least some of the steps in figs. 2-6 may include multiple sub-steps or stages that are not necessarily performed at the same time but may be performed at different times; their order of performance is not necessarily sequential, and they may be performed in turn or alternately with other steps or with at least some of the sub-steps or stages of other steps.
In one embodiment, as shown in fig. 7, there is also provided a data processing apparatus of a cleaning robot, including:
a data acquisition unit 710 for acquiring image stream data.
An initial recognition unit 720, configured to perform obstacle recognition processing on the image stream data based on an AI recognition algorithm and a stereoscopic vision geometric algorithm, to obtain initial obstacle information.
And the screening unit 730 is configured to screen the initial obstacle information to obtain optimized obstacle information.
And a marking unit 740, configured to perform marking processing on the optimized obstacle information to obtain marking information.
And a positioning optimization unit 750, configured to process the mark information when the positioning optimization condition is met, so as to obtain optimized positioning data.
For specific limitations of the data processing device of the cleaning robot, reference may be made to the limitations of the data processing method of the cleaning robot above, which are not repeated here. The modules of the data processing device of the cleaning robot may be implemented wholly or partially by software, hardware, or a combination thereof. The modules may be embedded in, or independent of, the controller of the cleaning robot in hardware form, or stored in a memory of the cleaning robot in software form, so that the controller can call them and execute their corresponding operations.
In one embodiment, there is also provided a cleaning robot including a cleaning robot main body and a controller provided on the cleaning robot main body; the controller is used for executing the data processing method of the cleaning robot.
The cleaning robot can be a cleaning robot with a sweeping function. The cleaning robot main body is provided with a camera and a liftable laser radar mechanism.
The controller is used for executing the following steps of the data processing method of the cleaning robot:
acquiring image stream data; performing obstacle identification processing on the image stream data based on an AI identification algorithm and a stereoscopic vision geometric algorithm to obtain initial obstacle information; screening the initial obstacle information to obtain optimized obstacle information; marking the optimized obstacle information to obtain marking information; and, when the positioning optimization condition is met, processing the marking information to obtain optimized positioning data.
The above embodiment can be applied to a cleaning robot with a liftable radar. By identifying and positioning obstacles, it not only achieves obstacle avoidance but also solves the missed sweeping of other laser machines that cannot enter the space under low obstacles because the machine body is too tall; it can also achieve accurate positioning, reducing positioning error and improving the accuracy of the positioning data.
In one embodiment, there is also provided a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the data processing method of the cleaning robot of any one of the above.
In one example, the computer program when executed by the processor implements the steps of:
acquiring image stream data; performing obstacle identification processing on the image stream data based on an AI identification algorithm and a stereoscopic vision geometric algorithm to obtain initial obstacle information; screening the initial obstacle information to obtain optimized obstacle information; marking the optimized obstacle information to obtain marking information; and, when the positioning optimization condition is met, processing the marking information to obtain optimized positioning data.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program instructing the relevant hardware; the program can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, a database, or another medium used in the embodiments provided herein can include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), Programmable ROM (PROM), Electrically Programmable ROM (EPROM), Electrically Erasable Programmable ROM (EEPROM), or flash memory. Volatile memory can include Random Access Memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms, such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDR SDRAM), Enhanced SDRAM (ESDRAM), Synchronous Link DRAM (SLDRAM), Rambus Direct RAM (RDRAM), Direct Rambus Dynamic RAM (DRDRAM), and Rambus Dynamic RAM (RDRAM).
For the sake of brevity, not all possible combinations of the technical features of the above embodiments are described; however, as long as there is no contradiction between them, any combination of these technical features should be considered within the scope of this disclosure.
The above-mentioned embodiments express only several implementations of the present application, and their description is relatively specific and detailed, but they should not be construed as limiting the scope of the invention. It should be noted that a person skilled in the art can make several variations and modifications without departing from the concept of the present application, and these all fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.