CN113925391A - Tumble detection method and device based on cleaning robot and cleaning robot - Google Patents
- Publication number
- CN113925391A (application number CN202111095161.XA)
- Authority
- CN
- China
- Prior art keywords
- human body
- cleaning robot
- result
- image
- key point
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- A—HUMAN NECESSITIES
- A47—FURNITURE; DOMESTIC ARTICLES OR APPLIANCES; COFFEE MILLS; SPICE MILLS; SUCTION CLEANERS IN GENERAL
- A47L—DOMESTIC WASHING OR CLEANING; SUCTION CLEANERS IN GENERAL
- A47L11/00—Machines for cleaning floors, carpets, furniture, walls, or wall coverings
- A47L11/24—Floor-sweeping machines, motor-driven
- A47L11/28—Floor-scrubbing machines, motor-driven
- A47L11/40—Parts or details of machines not provided for in groups A47L11/02 - A47L11/38, or not restricted to one of these groups, e.g. handles, arrangements of switches, skirts, buffers, levers
- A47L11/4002—Installations of electric equipment
- A47L11/4011—Regulation of the cleaning machine by electric means; Control systems and remote control systems therefor
- A47L9/00—Details or accessories of suction cleaners, e.g. mechanical means for controlling the suction or for effecting pulsating action; Storing devices specially adapted to suction cleaners or parts thereof; Carrying-vehicles specially adapted for suction cleaners
- A47L9/28—Installation of the electric equipment, e.g. adaptation or attachment to the suction cleaner; Controlling suction cleaners by electric means
- A47L9/2805—Parameters or conditions being sensed
- A47L9/2836—Installation of the electric equipment, e.g. adaptation or attachment to the suction cleaner; Controlling suction cleaners by electric means characterised by the parts which are controlled
- A47L9/2852—Elements for displacement of the vacuum cleaner or the accessories therefor, e.g. wheels, casters or nozzles
- A47L2201/00—Robotic cleaning machines, i.e. with automatic control of the travelling movement or the cleaning operation
- A47L2201/04—Automatic control of the travelling movement; Automatic obstacle detection
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S17/00—Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
- G01S17/88—Lidar systems specially adapted for specific applications
Landscapes
- Engineering & Computer Science (AREA)
- Mechanical Engineering (AREA)
- Physics & Mathematics (AREA)
- Computer Networks & Wireless Communication (AREA)
- Electromagnetism (AREA)
- General Physics & Mathematics (AREA)
- Radar, Positioning & Navigation (AREA)
- Remote Sensing (AREA)
- Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
Abstract
The invention provides a fall detection method and device based on a cleaning robot, and a cleaning robot. The cleaning robot identifies a target image in real time to obtain an image recognition result; when the recognition result indicates that human body data are present, image description information of the target image is generated; when the description information contains human-body fall information, human-body key-point detection is performed on the target image to obtain a key-point detection result; when the key-point detection result indicates a human-body fall, the cleaning robot is controlled to navigate to the current position of the human body; finally, whether the human body has fallen is judged from the distance between the cleaning robot and the head of the human body. Compared with the prior art, the triple progressive judgment of image description detection, key-point detection and distance detection ensures the reliability of the fall detection result, giving higher accuracy and reliability, among other advantages.
Description
Technical Field
The invention relates to the technical field of cleaning robots, and in particular to a cleaning robot and a fall detection method and device based on the cleaning robot.
Background
With the continuous development of smart cities and smart homes, effectively protecting the elderly and children at home has become an important task. If an elderly person or a child falls while home alone and is not attended to in time, serious consequences may follow. One existing approach installs a camera connected to a smartphone at home and checks for accidents by remote monitoring; however, this requires a dedicated person to watch the feed continuously or check it regularly, so it neither meets real-time requirements nor avoids excessive labor cost. Combining artificial intelligence with home cameras has also been proposed, but a typical home has only one to three cameras, and even with carefully arranged blind-spot-free coverage, the intelligent analysis results are hard to put into practical use because image quality varies with distance and the algorithms are prone to misjudgment. These problems urgently need to be solved.
Therefore, how to detect and judge the fall state of a human body accurately, promptly and effectively has become a key technical problem that those skilled in the art urgently need to solve and have long researched.
Disclosure of Invention
The main purpose of the invention is to provide a fall detection method and device based on a cleaning robot, and a cleaning robot, so as to improve the accuracy and reliability of fall detection for users and reduce the probability of false detection.
To achieve the above technical objects, one or more embodiments of the present invention provide a fall detection method based on a cleaning robot, which may include, but is not limited to, one or more of the following steps. First, a target image is identified in real time by the cleaning robot to obtain an image recognition result; the target image includes an image captured by controlling a camera on the cleaning robot. Second, image description information of the target image is generated when human body data are present in the image recognition result. Third, when the image description information contains human-body fall information, human-body key-point detection is performed on the target image to obtain a key-point detection result. Then, when the key-point detection result indicates a human-body fall, the cleaning robot is controlled to navigate to the current position of the human body. Finally, whether the human body has fallen is finally judged from the distance between the cleaning robot and the head of the human body.
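The progressive judgment described above can be sketched as a short gating function. This is a minimal illustration, not the disclosed implementation: the stage inputs are assumed to come from the detectors discussed later, and the 35 cm threshold is only the example value given further below.

```python
def fall_detected(human_present, caption_says_fall, keypoints_say_fall,
                  head_distance_cm, threshold_cm=35.0):
    """Progressive gating: each cheaper stage must report a suspected
    fall before the next stage's result is consulted; the final word
    belongs to the close-range camera-to-head distance check."""
    if not (human_present and caption_says_fall and keypoints_say_fall):
        return False          # an earlier stage already ruled out a fall
    # Final judgment after the robot has navigated near the person:
    # a short camera-to-head distance means the head is near the floor.
    return head_distance_cm <= threshold_cm
```

A fall is confirmed only when all three checks agree, which is what keeps the false-positive rate low.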
To achieve the above technical objects, embodiments of the present invention can provide a cleaning robot including a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the cleaning robot-based fall detection method according to any one of the embodiments of the present invention when executing the computer program.
To achieve the above technical objects, some embodiments of the present invention may also provide a fall detection apparatus based on a cleaning robot, which may include, but is not limited to, a target image recognition module, a description information generation module, a key point detection module, a navigation travel control module, and a fall final judgment module.
The target image identification module is used for identifying a target image in real time to obtain an image identification result; the target image includes an image captured by controlling a camera on the cleaning robot.
The description information generation module is used for generating image description information of the target image when human body data are present in the image recognition result.
The key point detection module is used for performing human-body key-point detection on the target image when the image description information contains human-body fall information, so as to obtain a key-point detection result.
The navigation travel control module is used for controlling the cleaning robot to navigate to the position of the human body when the key-point detection result indicates a human-body fall.
The fall final judgment module is used for finally judging whether the human body has fallen from the distance between the cleaning robot and the head of the human body.
To achieve the above technical objects, the present invention can also provide a computer-readable storage medium having a computer program stored thereon, the computer program being executed by a processor to implement the cleaning robot-based fall detection method in any one of the embodiments of the present invention.
To achieve the above technical objects, embodiments of the present invention may also provide a computer program product, wherein when instructions of the computer program product are executed by a processor, the cleaning robot-based fall detection method according to any embodiment of the present invention is performed.
The invention has the following beneficial effects. The triple progressive judgment of image description detection, key-point detection and distance detection ensures the reliability of the fall detection result, and acquiring images with a mobile cleaning robot avoids having to install a large number of cameras to eliminate blind spots. Because the mobile cleaning robot navigates to the vicinity of the human body for the final judgment, shortening the distance between the camera and the human body greatly reduces the possibility of misjudgment and improves the accuracy and reliability of the fall detection result. After a fall is accurately confirmed, the relevant persons can be notified immediately, which effectively reduces the probability of further danger after an elderly person or a child falls. The invention thus monitors the fall state of the elderly or children in place of a person, with outstanding advantages such as good user experience and low implementation cost. In addition, the invention can be applied directly to cleaning robot equipment, giving the cleaning robot more intelligent functions closer to user needs.
Drawings
To illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings used in their description are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention, and those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a flow diagram of a cleaning robot-based fall detection method according to one or more embodiments of the present invention.
Fig. 2 is a schematic diagram of one implementation of the cleaning robot-based fall detection method according to one or more embodiments of the present invention.
Fig. 3 is a schematic structural diagram of a cleaning robot with a camera installed according to one or more embodiments of the present invention.
Fig. 4 is a diagram illustrating the labeled human-body key points in an image according to one or more embodiments of the present invention.
Fig. 5 is a schematic diagram illustrating transmission of generated image description information to an associated user according to one or more embodiments of the present invention.
The implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those skilled in the art from the given embodiments without creative effort fall within the protection scope of the present invention.
In view of the defects of existing intelligent fall detection schemes, such as a high misjudgment rate, high investment cost and difficulty in practical application, the invention provides a fall detection method and device based on a cleaning robot, and a cleaning robot, so as to overcome at least one problem of the existing fall detection technology.
As shown in fig. 1, and in conjunction with fig. 2, one or more embodiments of the present invention provide a fall detection method based on a cleaning robot. The method includes, but is not limited to, at least one of the following steps, described in detail below.
First, a target image is identified in real time to obtain an image recognition result. The target image includes an image captured by controlling the camera on the cleaning robot; the mobile cleaning robot has an image acquisition function and is provided with a camera for capturing images.
Then, image description information of the target image is generated when human body data are present in the image recognition result. In this way, the invention performs image description processing only on target images containing a human target, obtains the description information of those images, and proceeds to the judgment of the fall state.
As shown in fig. 5, some embodiments of the present invention describe only images that contain a marker, where the marker can be set by the user; in this embodiment the marker is a human body, so only images containing a human body are described. The images captured by the cleaning robot are thus processed in a targeted manner: image description is applied only to images containing a human body, which avoids the low processing speed and high computational demands of processing every image the robot collects. It can be understood that embodiments of the present invention can generate image descriptions for multiple targets of interest marked by the user; besides the marker, these can include other targets such as household pets or gas stoves, and the descriptions can be transmitted to the user's device so that the user can quickly learn about the targets of interest. The user can therefore grasp the real-time situation from the target description information without manually screening images or videos, reducing labor cost.
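The marker-based filtering strategy above can be sketched as follows; `detect_targets` and `describe_image` are hypothetical stand-ins for the robot's detection and captioning models, not part of the disclosure:

```python
def describe_if_marked(image, markers, detect_targets, describe_image):
    """Run the expensive image-description step only on frames in which
    a user-set marker (e.g. a human body) or another target of interest
    (e.g. a pet, a gas stove) is detected; skip all other frames."""
    detected = set(detect_targets(image))
    if detected.isdisjoint(markers):
        return None                      # no target of interest: skip
    return describe_image(image)
```

Skipping frames without a target of interest is what keeps the per-frame compute budget low on embedded robot hardware.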
Third, when the image description information contains human-body fall information, human-body key-point detection is performed on the target image to obtain a key-point detection result.
As shown in fig. 4, determining that the key-point detection result indicates a human-body fall specifically includes: obtaining a first vertical coordinate of the head key point and a second vertical coordinate of the foot key points of the human body, and calculating the difference d between the first and second vertical coordinates; the key-point detection result is determined to indicate a fall when the difference d is smaller than a third preset value. It should be understood that, in the embodiments of the present invention, the fall results determined from the image description and from the key-point detection are both possible fall results; that is, neither is the final judgment.
Fig. 4 illustrates 17 key points p0 to p16, with coordinates p0(x0, y0), p1(x1, y1), p2(x2, y2), p3(x3, y3), p4(x4, y4), p5(x5, y5), p6(x6, y6), p7(x7, y7), p8(x8, y8), p9(x9, y9), p10(x10, y10), p11(x11, y11), p12(x12, y12), p13(x13, y13), p14(x14, y14), p15(x15, y15) and p16(x16, y16). Each key point represents a main recognizable position on the human body; for example, p0(x0, y0) can represent the position of the nose, while p3(x3, y3) and p4(x4, y4) can represent the positions of the ears. The numbering can be set according to actual conditions and is not limited to this. In this embodiment, y0 may represent the first vertical coordinate (the head key point) and (y15 + y16)/2 the second vertical coordinate (the foot key points), so that the difference d = |(y15 + y16)/2 − y0|, and whether the human body may have fallen is judged by comparing d with the third preset value.
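The criterion d = |(y15 + y16)/2 − y0| can be written directly in code. The 17-point list layout follows the description of Fig. 4 (index 0 = nose, indices 15 and 16 = feet); the sample threshold in the usage note is an assumption, since the "third preset value" is application-dependent:

```python
def keypoint_fall_check(keypoints, third_preset_value):
    """keypoints: sequence of 17 (x, y) pixel coordinates, p0..p16.
    Computes d = |(y15 + y16)/2 - y0| and reports a possible fall
    when the vertical head-to-feet extent d is below the threshold."""
    y0 = keypoints[0][1]          # head (nose) vertical coordinate
    y15 = keypoints[15][1]        # left-foot vertical coordinate
    y16 = keypoints[16][1]        # right-foot vertical coordinate
    d = abs((y15 + y16) / 2.0 - y0)
    return d < third_preset_value
```

A standing person has a large head-to-feet vertical extent, so d is large; a person lying on the floor has head and feet at nearly the same image height, so d is small and the check fires.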
Then, when the key-point detection result indicates a human-body fall, the cleaning robot is controlled to navigate to the position of the human body. By exploiting the free movement of the cleaning robot, the invention shortens the distance between the camera and the human body, enabling more accurate fall detection and judgment.
Optionally, controlling the cleaning robot to navigate to the position of the human body in the embodiments of the present invention includes: identifying, from the target image, a first coordinate of the human body's position in the pixel coordinate system; determining a second coordinate of the position by converting from the pixel coordinate system to the camera coordinate system; and determining a third coordinate of the position by converting from the camera coordinate system to the map coordinate system, the third coordinate being directly usable for path navigation of the cleaning robot. Based on the generated third coordinate, the cleaning robot is controlled to move toward it until the distance between the robot and the third coordinate equals a second preset value; the second preset value can be set according to the needs of the application, for example 18 cm, but is not limited thereto.
It should be understood that the specific formulas or algorithms used for the pixel-to-camera and camera-to-map conversions can be set reasonably according to the parameters of the camera, the navigation map and the like, so as to achieve coordinate conversion; a detailed description is omitted here.
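As a minimal sketch of the two conversions, assuming a pinhole camera model with known intrinsics and a calibrated camera pose in the map frame (all parameter values here are illustrative, not from the disclosure):

```python
import numpy as np

def pixel_to_camera(u, v, depth, fx, fy, cx, cy):
    """Back-project pixel (u, v) with known depth (metres) into the
    camera frame using pinhole intrinsics fx, fy, cx, cy (pixels)."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.array([x, y, depth])

def camera_to_map(p_cam, R_map_cam, t_map_cam):
    """Rigid transform from the camera frame to the map frame given
    the robot's camera pose (rotation R, translation t) in the map."""
    return R_map_cam @ p_cam + t_map_cam
```

The resulting map-frame point is what the robot's path planner can navigate toward, stopping at the second preset distance from it.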
Finally, whether the human body has fallen is finally judged from the distance between the cleaning robot and the head of the human body; that is, in a specific implementation, the human body is confirmed to be in a fallen state only when all three judgment results indicate a fall.
Specifically, finally judging whether the human body has fallen from the distance between the cleaning robot and the head of the human body includes: determining the distance between the camera on the cleaning robot and the head, and judging the fall from that distance. The fall detection result is that the human body has fallen when the distance between the camera and the head is smaller than or equal to a first preset value; conversely, the fall detection result is that the human body has not fallen when the distance is greater than the first preset value. It is understood that the first preset value in the embodiments of the present invention can be set according to factors such as the size of the cleaning robot, the usage environment and the height of the target user, for example 35 cm, but is not limited thereto.
Optionally, determining the distance between the cleaning robot and the head of the human body may use binocular ranging: the process can include calibrating the binocular camera, rectifying the images, stereo matching, and computing depth information, so that the distance from the head to the camera is accurately determined. Alternatively, the embodiments of the present invention may acquire depth information directly in a 3D TOF (3D time-of-flight) mode, directly determining the distance from the head to the camera. Based on the present disclosure, a suitable ranging means can be selected according to product cost or design requirements; the specific means includes, but is not limited to, infrared ranging schemes and the like, as long as the purpose of distance measurement is achieved.
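For the binocular case, once the cameras are calibrated and the images rectified and matched, depth follows the standard rectified-stereo relation Z = f·B/d; the focal length and baseline in the usage note are illustrative values, not specified by this disclosure:

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Rectified-stereo depth: Z = f * B / d, where f is the focal
    length in pixels, B the baseline between the two cameras in metres,
    and d the disparity (horizontal pixel offset) of the same point
    between the left and right images."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a valid match")
    return focal_px * baseline_m / disparity_px
```

For example, with an assumed 700 px focal length and 6 cm baseline, a 120 px disparity on the head key point corresponds to a depth of 0.35 m, which would then be compared against the first preset value.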
As shown in fig. 2, one or more embodiments of the invention may further include the following step: controlling the camera on the cleaning robot to capture a face image after the final judgment, and performing image recognition on the face image. Specifically, faces at different angles can be progressively corrected with the PCN (Progressive Calibration Network) algorithm, a real-time rotation-invariant face detection and tracking method, to obtain a face recognition result. Of course, after the face is corrected with PCN, the invention may also use a classification algorithm (for example, a lightweight CNN such as MobileNetV2 or ShuffleNetV2, or a ResNet, where CNN refers to a Convolutional Neural Network) to recognize the age or identity of the face; the invention is certainly not limited to these, as long as the purpose of face detection is achieved. From the face recognition result, at least one of age information, identity information and real-time facial expression can be obtained, where the identity information includes whether the person is elderly or a child.
Optionally, one or more embodiments of the present invention may further include a step in which the cleaning robot intelligently asks for help: controlling the cleaning robot to send distress information after the final judgment. The distress information may include, but is not limited to, sending fall notification information, image description information or information about a dialed emergency call to a specified terminal; for example, the robot may directly dial 120 (the emergency number) and report the user's address, age and other conditions by intelligent voice to ask for help.
As shown in fig. 3, based on the same inventive concept as the fall detection method, the present invention also provides a cleaning robot equipped with a camera. The cleaning robot specifically includes a memory, a processor, and a computer program stored in the memory and executable on the processor; when the program is executed by the processor, the cleaning-robot-based fall detection method of any embodiment of the present invention is implemented. The fall detection method has been described in detail in this specification and is not repeated here.
It should be understood that the cleaning robot according to the embodiments of the present invention includes, but is not limited to, at least one intelligent cleaning device for cleaning scenarios, such as an intelligent sweeping robot, an intelligent mopping robot or an intelligent vacuum cleaner.
Compared with a traditional cleaning robot that only cleans, the cleaning robot of the invention combines artificial intelligence technology to provide image acquisition, image recognition and description, key-point detection, recognition-based target navigation, and fall detection. The robot thereby plays an important role as a member of the family: it monitors falls of the elderly or children in place of the user and raises an alarm immediately when a fall is recognized, giving an excellent user experience.
Based on the same technical concept as the fall detection method, the invention also provides a fall detection apparatus based on a cleaning robot. Corresponding to the detection method, the apparatus according to one or more embodiments specifically includes, but is not limited to, a target image recognition module, a description information generation module, a key point detection module, a navigation travel control module and a fall final judgment module, described in detail below.
The target image identification module can be used for identifying a target image in real time to obtain an image identification result; wherein the target image includes an image captured by controlling a camera on the cleaning robot.
The description information generation module is used for generating image description information of the target image according to the human body data in the image recognition result.
The key point detection module can be used for performing human-body key-point detection on the target image when the image description information contains human-body fall information, so as to obtain a key-point detection result. The module can specifically obtain a first vertical coordinate of the head key point and a second vertical coordinate of the foot key points, calculate the difference between the first and second vertical coordinates, and determine that the key-point detection result indicates a human-body fall (a possible fall result) when the difference is smaller than a third preset value.
The navigation travel control module can be used for controlling the cleaning robot to navigate to the position of the human body when the key-point detection result indicates a human-body fall. The module is specifically configured to identify a first coordinate of the human body's position in the pixel coordinate system from the target image, determine a second coordinate by converting from the pixel coordinate system to the camera coordinate system, and determine a third coordinate by converting from the camera coordinate system to the map coordinate system. The module then controls the cleaning robot to move toward the third coordinate of the human body's position until the distance between the robot and the third coordinate equals a second preset value.
The fall final judgment module is used for finally judging whether the human body has fallen according to the distance between the cleaning robot and the head of the human body. Specifically, the fall final judgment module determines the distance between the camera device on the cleaning robot and the head of the human body; when the distance is smaller than or equal to a first preset value, the fall detection result is judged to be that the human body has fallen; when the distance is greater than the first preset value, the fall detection result is judged to be that the human body has not fallen.
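A minimal sketch of this final distance check (the 0.5 m default is a hypothetical "first preset value"; the patent does not specify one):

```python
def final_fall_judgment(distance_m: float, first_preset_m: float = 0.5) -> bool:
    """Final judgment: the fall is confirmed only when the robot's camera
    device has closed to within first_preset_m of the detected head."""
    return distance_m <= first_preset_m
```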
Optionally, the fall detection apparatus in one or more embodiments of the present invention may further include a face recognition module. The face recognition module is configured to control the camera device on the cleaning robot to collect a face image when the final judgment result is that the human body has fallen, perform image recognition processing on the collected face image to obtain a face recognition result, and obtain at least one of age information and identity information according to the face recognition result.
Optionally, the fall detection apparatus in the embodiment of the present invention may further include a distress message sending module, where the distress message sending module is configured to control the cleaning robot to send out distress information when the final judgment result is that the human body has fallen.
Based on the same technical concept as the cleaning robot-based fall detection method, the present invention may also provide a computer-readable storage medium having a computer program stored thereon, the computer program being executed by a processor to implement the cleaning robot-based fall detection method in any of the embodiments of the present invention. The cleaning robot-based fall detection method has been described in detail elsewhere in this specification and is not repeated here.
Based on the same inventive concept, the present invention can also provide a computer program product that, when its instructions are executed by a processor of the cleaning robot, performs the cleaning robot-based fall detection method according to one or more embodiments of the present invention. The cleaning robot-based fall detection method has been described in detail elsewhere in this specification and is not repeated here.
In summary, the invention performs image acquisition, recognition, and description based on the cleaning robot, followed by key point detection and a further distance check, thereby ensuring the reliability and accuracy of the fall detection result through a multi-stage progressive judgment scheme. Compared with schemes that install one or more cameras indoors, the image acquisition mode based on a movable cleaning robot avoids the need to install a large number of cameras to eliminate blind spots. According to the embodiment of the invention, the intelligently movable cleaning robot is navigated to the vicinity of the human body to make the final fall judgment, so that shortening the distance between the robot's camera device and the human body greatly reduces the possibility of misjudgment and improves the accuracy and reliability of the fall detection result. The invention can also notify relevant persons, including family members or rescue personnel, immediately after a fall is accurately confirmed, effectively reducing the probability of further danger after an elderly person or child falls. The invention replaces dedicated personnel in effectively monitoring the fall state of the elderly or children, and has outstanding advantages such as very good user experience and low implementation cost. In addition, the technical scheme provided by the invention can be applied directly to cleaning robot equipment, endowing the cleaning robot with functions that are more intelligent and closer to users' actual needs.
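For illustration only, the multi-stage progressive judgment summarized above can be sketched as a single pipeline. Every callable here is a hypothetical placeholder for a robot subsystem (recognition, description, key point detection, navigation, range sensing), and both thresholds are assumptions; each cheaper stage must pass before the next, more expensive, stage runs:

```python
def fall_detection_pipeline(frame, recognize, describe, detect_keypoints,
                            navigate_to_person, measure_head_distance,
                            third_preset_px=50.0, first_preset_m=0.5):
    """Progressive judgment: return True only if every stage confirms a fall."""
    # Stage 1: real-time image recognition; stop if no human body data.
    result = recognize(frame)
    if not result.get("human_present"):
        return False
    # Stage 2: image description; stop if it contains no fall information.
    if "fall" not in describe(result):
        return False
    # Stage 3: key point check on the head-foot vertical gap.
    kp = detect_keypoints(frame)
    if abs(kp["head_y"] - kp["foot_y"]) >= third_preset_px:
        return False
    # Stage 4: navigate near the person, then make the final distance check.
    navigate_to_person(kp)
    return measure_head_distance() <= first_preset_m
```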
In the description herein, references to the description of the term "the present embodiment," "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present invention. In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present invention, "a plurality" means at least two, e.g., two, three, etc., unless specifically limited otherwise.
The logic and/or steps represented in the flowcharts or otherwise described herein, such as an ordered listing of executable instructions that can be considered to implement logical functions, can be embodied in any computer-readable storage medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. For the purposes of this description, a "computer-readable storage medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable storage medium include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a Random Access Memory (RAM), a Read-Only Memory (ROM), an Erasable Programmable Read-Only Memory (EPROM or flash memory), an optical fiber device, and a portable Compact Disc Read-Only Memory (CD-ROM). Additionally, the computer-readable storage medium may even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, via for instance optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
It should be understood that portions of the present invention may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented in software or firmware stored in memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, any one or a combination of the following techniques known in the art may be used: a discrete logic circuit having logic gates for implementing logic functions on data signals, an application-specific integrated circuit having appropriate combinational logic gates, a programmable gate array (PGA), a field-programmable gate array (FPGA), or the like.
The above description is only a preferred embodiment of the present invention, and is not intended to limit the scope of the present invention, and all modifications and equivalents of the present invention, which are made by the contents of the present specification and the accompanying drawings, or directly/indirectly applied to other related technical fields, are included in the scope of the present invention.
Claims (10)
1. A fall detection method based on a cleaning robot is characterized by comprising the following steps:
identifying a target image in real time to obtain an image identification result; the target image comprises an image acquired by controlling a camera device on the cleaning robot;
generating image description information of the target image according to the human body data in the image recognition result;
performing human body key point detection on the target image when the image description information contains human body fall information, to obtain a key point detection result;
controlling the cleaning robot to navigate to the position where the human body is located when the key point detection result is a human body fall result;
and finally judging whether the human body has fallen according to the distance between the cleaning robot and the head of the human body.
2. The cleaning robot-based fall detection method according to claim 1, wherein the finally judging whether the human body has fallen according to the distance between the cleaning robot and the head of the human body comprises:
determining a distance between a camera device on the cleaning robot and the head of the human body;
when the distance is smaller than or equal to a first preset value, the fall detection result is that the human body has fallen; or, when the distance is greater than the first preset value, the fall detection result is that the human body has not fallen.
3. The cleaning robot-based fall detection method according to claim 1 or 2, wherein the controlling the cleaning robot to navigate to the position where the human body is located includes:
identifying a first coordinate of a human body position in a pixel coordinate system based on the target image;
determining a second coordinate of the position of the human body according to the conversion from the pixel coordinate system to the camera coordinate system;
determining a third coordinate of the position of the human body according to the conversion from the camera coordinate system to a map coordinate system;
and controlling the cleaning robot to move towards the third coordinate so that the distance between the cleaning robot and the third coordinate is a second preset value.
4. The cleaning robot-based fall detection method according to claim 1 or 2, further comprising:
controlling the camera device on the cleaning robot to collect a face image when the final judgment result is that the human body has fallen;
carrying out image recognition processing on the face image to obtain a face recognition result;
and obtaining at least one of age information and identity information according to the face recognition result.
5. The cleaning robot-based fall detection method according to claim 1, wherein the determining that the key point detection result is a human body fall result comprises:
acquiring a first vertical coordinate of a key point of the head of the human body and a second vertical coordinate of a key point of the foot of the human body;
calculating a difference value between the first vertical coordinate and the second vertical coordinate;
and determining that the key point detection result is the human body fall result when the difference is smaller than a third preset value.
6. The cleaning robot-based fall detection method according to claim 1 or 2, further comprising:
and controlling the cleaning robot to send out distress information when the final judgment result is that the human body has fallen.
7. A cleaning robot comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, the processor implementing the cleaning robot-based fall detection method according to any one of claims 1 to 6 when executing the computer program.
8. A fall detection device based on a cleaning robot, characterized by comprising:
the target image identification module is used for identifying a target image in real time to obtain an image identification result; the target image comprises an image acquired by controlling a camera device on the cleaning robot;
the description information generation module is used for generating image description information of the target image according to the human body data in the image recognition result;
the key point detection module is used for performing human body key point detection on the target image when the image description information contains human body fall information, so as to obtain a key point detection result;
the navigation travel control module is used for controlling the cleaning robot to navigate to the position where the human body is located when the key point detection result is a human body fall result;
and the fall final judgment module is used for finally judging whether the human body has fallen according to the distance between the cleaning robot and the head of the human body.
9. A computer-readable storage medium, having stored thereon a computer program which is executed by a processor to implement the cleaning robot-based fall detection method according to any one of claims 1 to 6.
10. A computer program product, characterized in that when instructions in the computer program product are executed by a processor, the cleaning robot-based fall detection method according to any one of claims 1 to 6 is performed.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111095161.XA CN113925391A (en) | 2021-09-17 | 2021-09-17 | Tumble detection method and device based on cleaning robot and cleaning robot |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111095161.XA CN113925391A (en) | 2021-09-17 | 2021-09-17 | Tumble detection method and device based on cleaning robot and cleaning robot |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113925391A true CN113925391A (en) | 2022-01-14 |
Family
ID=79276129
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111095161.XA Pending CN113925391A (en) | 2021-09-17 | 2021-09-17 | Tumble detection method and device based on cleaning robot and cleaning robot |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113925391A (en) |
Citations (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN203759831U (en) * | 2014-02-17 | 2014-08-06 | 崔健雄 | Fall-down monitoring robot |
CN104574813A (en) * | 2014-12-19 | 2015-04-29 | 龙凤娇 | Radar lamp monitoring system and method |
WO2016155789A1 (en) * | 2015-03-31 | 2016-10-06 | Nec Europe Ltd. | Fall detection system and method |
WO2016181731A1 (en) * | 2015-05-11 | 2016-11-17 | コニカミノルタ株式会社 | Fall detecting device, fall detecting method, and device for monitoring person to be monitored |
CN106530616A (en) * | 2016-12-06 | 2017-03-22 | 上海斐讯数据通信技术有限公司 | Automatic alarm method and system |
CN207400702U (en) * | 2017-04-11 | 2018-05-25 | 李晓宇 | A kind of Intelligent robot for sweeping floor |
CN108986404A (en) * | 2018-07-10 | 2018-12-11 | 深圳市赛亿科技开发有限公司 | Water heater and its human body tumble monitoring method, electronic equipment, storage medium |
CN110458061A (en) * | 2019-07-30 | 2019-11-15 | 四川工商学院 | A kind of method and company robot of identification Falls in Old People |
CN111767812A (en) * | 2020-06-18 | 2020-10-13 | 浙江大华技术股份有限公司 | Fall detection method, fall detection device and storage device |
CN112784676A (en) * | 2020-12-04 | 2021-05-11 | 中国科学院深圳先进技术研究院 | Image processing method, robot, and computer-readable storage medium |
CN113043267A (en) * | 2019-12-26 | 2021-06-29 | 深圳市优必选科技股份有限公司 | Robot control method, device, robot and computer readable storage medium |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114831540A (en) * | 2022-04-19 | 2022-08-02 | 珠海格力电器股份有限公司 | Robot and method for using same |
WO2023202169A1 (en) * | 2022-04-19 | 2023-10-26 | 珠海格力电器股份有限公司 | Robot and method of using same |
CN114859749A (en) * | 2022-06-27 | 2022-08-05 | 忆月启函(盐城)科技有限公司 | Intelligent home management method and system based on Internet of things |
CN114859749B (en) * | 2022-06-27 | 2023-03-10 | 忆月启函(盐城)科技有限公司 | Intelligent home management method and system based on Internet of things |
CN115153353A (en) * | 2022-07-25 | 2022-10-11 | 珠海格力电器股份有限公司 | Control method and device of sweeping robot, sweeping robot and storage medium |
CN115153353B (en) * | 2022-07-25 | 2024-04-26 | 珠海格力电器股份有限公司 | Control method and device of sweeping robot, sweeping robot and storage medium |
CN116189238A (en) * | 2023-04-19 | 2023-05-30 | 国政通科技有限公司 | Human shape detection and identification fall detection method based on neural network |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN113925391A (en) | Tumble detection method and device based on cleaning robot and cleaning robot | |
CN110974088B (en) | Sweeping robot control method, sweeping robot and storage medium | |
CN114521836B (en) | Automatic cleaning equipment | |
JP5560794B2 (en) | Control device, control method and program | |
CN108403146B (en) | Three-dimensional ultrasonic imaging method and device based on multi-sensor information fusion | |
JP2019191145A (en) | Identification method for charging stand, device, robot, and computer readable storage | |
CN108122412B (en) | Method for monitoring robot to detect vehicle disorderly stop | |
EP4187348A1 (en) | Method and apparatus for movable robot to adjust pose of goods rack | |
CN109118811A (en) | Method, equipment and the computer readable storage medium of vehicle positioning stop position | |
CN114442624B (en) | Robot recharging control method, device and system | |
US10878228B2 (en) | Position estimation system | |
CN113965733A (en) | Binocular video monitoring method, system, computer equipment and storage medium | |
CN110163914B (en) | Vision-based positioning | |
CN116416518A (en) | Intelligent obstacle avoidance method and device | |
CN106264507A (en) | A kind of health monitoring system based on motion capture | |
CN110198471A (en) | Abnormality recognition method, device, smart machine and storage medium | |
CN117381783A (en) | Working boundary construction method of robot and robot | |
CN117077081A (en) | Human body pointing prediction method, device, robot and storage medium | |
JP2006041939A (en) | Monitor device and monitor program | |
CN115880428A (en) | Animal detection data processing method, device and equipment based on three-dimensional technology | |
CN110892449A (en) | Image processing method and device and mobile device | |
CN111222475B (en) | Pig tail biting detection method, device and storage medium | |
CN113947769A (en) | Traffic light detection method and device based on long-focus and short-focus double cameras and storage medium | |
CN108527366B (en) | Robot following method and device based on depth of field distance | |
CN113870524A (en) | Monitoring method, monitoring device and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||