CN115781668A - Method for controlling nursing robot, nursing robot and storage medium

Info

Publication number: CN115781668A
Application number: CN202211407765.8A
Authority: CN (China)
Prior art keywords: target, nursing, image, determining, nursing object
Legal status: Pending (the legal status is an assumption, not a legal conclusion)
Other languages: Chinese (zh)
Inventors: 夏舸, 楼卓
Current and original assignee: Uditech Co Ltd
Application filed by Uditech Co Ltd
Abstract

The invention discloses a method for controlling a nursing robot, a nursing robot, and a computer-readable storage medium, wherein the method comprises the following steps: determining the work area where the nursing robot is currently located, and acquiring the area attribute associated with that work area; when the area attribute is a private attribute, filtering the image information corresponding to the private position of a target nursing object from a target image acquired by the robot's camera device, wherein the target image includes an image of the target nursing object; and determining the user state of the target nursing object according to the filtered target image. When the work area in which the nursing robot is currently nursing is a private area, filtering the image information corresponding to the target nursing object's private position from the acquired target image protects the privacy of the target nursing object and improves the user experience.

Description

Method for controlling nursing robot, nursing robot and storage medium
Technical Field
The present disclosure relates to the field of robotics, and in particular to a method for controlling a nursing robot, a nursing robot, and a computer-readable storage medium.
Background
In the related art, the state of a nursing subject, such as an elderly person or a child, can be monitored in real time by installing a color camera in the places where the subject frequently appears, so that whether an accident has occurred can be determined from the monitoring result and an alarm can be raised in time when one has. However, in a private place such as a bedroom or a bathroom, monitoring with a color camera may expose the privacy of the nursing subject.
The above is intended only to assist in understanding the technical solution of the present invention and does not constitute an admission that it is prior art.
Disclosure of Invention
The embodiments of the present application provide a method for controlling a nursing robot, a nursing robot, and a computer-readable storage medium, aiming to solve the technical problem that the privacy of a monitored subject is exposed when a color camera is used for monitoring in private places such as bedrooms and toilets.
In order to achieve the above object, an embodiment of the present invention provides a method for controlling a nursing robot, including:
determining a current working area of the nursing robot, and acquiring an area attribute associated with the working area;
when the area attribute is a private attribute, filtering image information corresponding to a private position of a target nursing object in a target image acquired by the camera device, wherein the target image comprises an image of the target nursing object;
and determining the user state of the target nursing object according to the filtered target image.
Optionally, the target image includes an infrared layer and a visible light layer; after the step of determining the work area where the nursing robot is currently located and acquiring the area attribute associated with the work area, the method further includes:
when the area attribute is a public attribute, determining the brightness degree of the current working environment of the nursing robot;
when the working environment is judged to be a dark environment according to the brightness degree, determining the user state of the target nursing object according to the infrared image layer of the target image;
and when the working environment is judged to be a bright environment according to the brightness degree, determining the user state of the target nursing object according to the visible light layer of the target image.
Optionally, after the step of determining the user status of the target care subject according to the filtered target image, the method includes:
when the user state of the target nursing object is an early warning state, acquiring body characteristic data of the target nursing object acquired by intelligent wearable equipment, or acquiring body characteristic data of the target nursing object acquired based on the camera device;
and sending alarm information when the body characteristic data is not in a preset health range.
Optionally, the target image includes a depth layer, an infrared layer, and a visible light layer; the step of filtering image information corresponding to the private position of the target nursing object in the target image acquired by the camera device comprises the following steps:
obtaining the height of the target nursing object or the skeleton information of the target nursing object in the depth layer of the target image;
determining the coordinates of the private position of the target nursing object according to the height or the skeleton information;
and filtering the image information of the target nursing object private position on the infrared layer and/or the visible light layer based on the coordinates of the target nursing object private position.
Optionally, before the step of filtering image information corresponding to a private position of a target care subject in a target image acquired by the image capturing device, the method includes:
determining facial features and/or body type information of each object located in the target image based on an infrared layer or a depth layer in the target image;
and if the object is determined to be a child and/or an old person according to the facial features and/or the body type information of each object, taking the determined child and/or old person as the target nursing object.
Optionally, after the step of determining the user status of the target care subject according to the filtered target image, the method further includes:
determining an alarm level corresponding to the target nursing object according to the user state of the target nursing object;
executing a corresponding nursing strategy according to the alarm level corresponding to the target nursing object; if the user state of the target nursing object is a normal state, determining that the alarm level of the target nursing object is a first level, and executing a nursing strategy corresponding to the first level; if the user state of the target nursing object is an early warning state, determining that the warning level of the target nursing object is a second level, and executing a nursing strategy corresponding to the second level; and if the user state of the target nursing object is a falling state, determining that the alarm level of the target nursing object is a third level, and executing a nursing strategy corresponding to the third level.
Optionally, after the step of determining the user status of the target care subject according to the filtered target image, the method includes:
and displaying the image information of the target nursing object in the infrared layer or the visible light layer of the target image at a terminal.
Optionally, the step of displaying, at a terminal, image information of a target care subject in the infrared layer or the visible light layer of the target image further includes:
determining, according to the user state of the target nursing object, an emergency scheme corresponding to that user state;
and displaying the emergency scheme corresponding to the user state of the target nursing object on the terminal.
In order to achieve the above object, the present invention provides a nursing robot including: a memory, a processor, and a control program of the nursing robot stored on the memory and executable on the processor, wherein the control program implements the steps of the above control method of the nursing robot when executed by the processor.
In order to achieve the above object, the present invention further provides a computer-readable storage medium storing a control program for a nursing robot, which, when executed by a processor, implements the steps of the above control method for the nursing robot.
According to the control method of the nursing robot, the nursing robot, and the computer-readable storage medium, the work area where the nursing robot is located is determined and the area attribute associated with that work area is acquired. When the associated area attribute is a private attribute, the work area is a private area, so the image information corresponding to the private position of the target nursing object is filtered from the target image acquired by the camera device, wherein the target image includes an image of the target nursing object, and the user state of the target nursing object is then determined from the filtered target image. When the work area in which the nursing robot is currently nursing is a private area, filtering the image information corresponding to the target nursing object's private position from the acquired target image protects the subject's privacy and improves the user experience.
Drawings
FIG. 1 is a schematic flow chart illustrating a control method for a nursing robot according to an embodiment of the present invention;
fig. 2 is a detailed flowchart of step S10 in the second embodiment of the control method of the nursing robot according to the present invention;
fig. 3 is a detailed flowchart of step S20 in the third embodiment of the control method of the nursing robot according to the present invention;
fig. 4 is a detailed flowchart of step S30 in the fourth embodiment of the control method of the nursing robot according to the present invention;
fig. 5 is a schematic diagram of a terminal structure of a hardware operating environment according to an embodiment of the present invention.
The implementation, functional features and advantages of the present invention will be further described with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention. For a better understanding of the above technical solutions, exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
Referring to fig. 1, in an embodiment of the method for controlling a nursing robot according to the present invention, the method for controlling a nursing robot includes the steps of:
step S10: determining a work area where the nursing robot is located currently, and acquiring area attributes associated with the work area;
in this embodiment, the nursing robot includes an image pickup device. The user state of the target nursing object can be detected based on the target image acquired by the camera device. In order to protect the privacy of the target nursing object, when the working area where the robot is located is a privacy area, the image information of the privacy position of the target nursing object in the acquired target image can be filtered. Therefore, different area attributes can be set according to privacy degrees of different working areas, when the working area where the robot is located is determined, the area attribute associated with the working area can be obtained, and whether the collected target image needs to be filtered or not is determined according to the obtained area attribute. The current working area of the robot can be determined through the target image acquired by the camera device, or the current working area of the nursing robot can be determined through positioning the current position of the robot.
Step S20: when the area attribute is a private attribute, filtering image information corresponding to a private position of a target nursing object in a target image acquired by the camera device, wherein the target image comprises an image of the target nursing object;
in this embodiment, the target image acquired by the camera device includes a depth image layer, an infrared image layer, and a visible light image layer. The robot can use the corresponding image layers based on different requirements and scenes to nurse the target nursing object, so that nursing quality is improved. When the area attribute associated with the current working area of the nursing robot is a private attribute, the current working area is a private area, such as a toilet, a bathroom and the like, in order to protect the privacy of the target nursing object, image information corresponding to the private position of the target nursing object in the acquired target image needs to be filtered, at this time, the depth data in the depth map layer can be converted into point cloud data by acquiring the depth map layer in the target image, so that the coordinate of the private position of the target nursing object is acquired based on the point cloud data, and the image information corresponding to the private position of the target nursing object in the target image is filtered based on the coordinate of the private position. For example, to filter the image information of the position above the knee of the target nursing subject, the position coordinates of the knee of the target nursing subject in the acquired target image need to be determined first, so as to filter the image information of the position coordinates above the knee of the target nursing subject in the acquired target image. In some embodiments, the camera may be a depth camera through which depth images are collected, and the target care subject private location is filtered based on the collected depth images.
It is to be understood that the private location may be various body parts of the target care subject, such as a knee, a head, etc. The private positions can be arranged in a plurality of ways so as to filter corresponding sensitive areas on the body of the target nursing object.
Alternatively, the distance from the heel to each part of the human body can be estimated with a height-proportion algorithm; for example, for a body height of 160 cm, the heel-to-knee height is about 48 cm. The height of the target nursing object can therefore first be obtained from the depth layer of the target image, the coordinates of the subject's private position determined from that height, and the image information at the private position filtered from the infrared layer and/or the visible-light layer based on those coordinates.
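The height-proportion rule above can be expressed as a short sketch. The 0.30 ratio is back-derived from the example in the text (160 cm body height → 48 cm heel-to-knee height); ratios for other body parts would be further assumptions:

```python
# A minimal sketch of the height-proportion estimate described above.
KNEE_HEIGHT_RATIO = 0.30  # heel-to-knee distance as a fraction of body height

def estimate_knee_height_cm(body_height_cm):
    """Estimate the heel-to-knee distance from total body height."""
    return body_height_cm * KNEE_HEIGHT_RATIO

print(estimate_knee_height_cm(160.0))  # 48.0, matching the text's example
```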
Optionally, in another embodiment, skeleton information of the target nursing object is acquired from the depth layer of the target image, the coordinates of the subject's private position are determined from that skeleton information, and the image information at the private position is filtered from the infrared layer and/or the visible-light layer based on those coordinates. When the work area has the private attribute, sensitive information at the target nursing object's private position can thus be filtered efficiently and reliably from the spatial coordinate information, better protecting the subject's privacy.
For example, when the nursing robot is nursing a target nursing object, it determines its current work area. When that work area is determined to be a toilet, the area attribute associated with the toilet is obtained; since it is a private attribute, the height or skeleton information of the target nursing object is obtained from the depth layer of the target image. If the obtained height is 160 cm, the heel-to-knee height of 48 cm follows from the height algorithm, locating the subject's knee; alternatively, the skeleton corresponding to the knee is identified from the obtained skeleton information and the knee's position coordinates determined from it. Based on the determined knee coordinates, the image information at the subject's private position is then filtered from the infrared layer and/or the visible-light layer, so that the private position is absent from the finally output image and the target nursing object's privacy is protected.
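Putting the pieces together, a minimal sketch of the filtering step might mask every pixel row above the knee coordinate in the infrared and/or visible-light layers. The array layout (row 0 at the image top) and blanking to black are assumptions; a real system would derive `knee_row` from the depth point cloud or skeleton information as described above:

```python
# A minimal sketch, under assumed array layouts, of filtering the region
# above the knee in an image layer.
import numpy as np

def filter_above_knee(layer, knee_row):
    """Blank out all pixel rows above the knee row (row 0 is the image top)."""
    filtered = layer.copy()
    filtered[:knee_row, :] = 0  # replace the sensitive region with black pixels
    return filtered

# Example: a synthetic 480x640 visible-light layer with an assumed knee row.
visible_layer = np.random.randint(0, 256, size=(480, 640, 3), dtype=np.uint8)
safe_layer = filter_above_knee(visible_layer, knee_row=300)
```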
Step S30: and determining the user state of the target nursing object according to the filtered target image.
In this embodiment, the user state of the target nursing object is determined from the target image. The user states include an early-warning state, a falling state, and a normal state. When the target nursing object is detected to be in a falling state, alarm information can be sent directly. When the target nursing object is detected to have held a fixed posture for a time greater than or equal to a preset time, the subject may be in a coma, and the user state is identified as the early-warning state. At this point, to determine whether the subject is actually comatose and to avoid false alarms, body characteristic data of the target nursing object collected by a smart wearable device is acquired, or body characteristic data collected by the camera device is acquired, and alarm information is sent only when that data is outside a preset health range, i.e., when the subject really is comatose. The smart wearable device can be a bracelet, a watch, or another device suitable for being carried, and collects the subject's body characteristic data in real time, for example blood pressure, heart rate, and blood-oxygen saturation.
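The fixed-posture rule above can be sketched as a small timing monitor. The 60-second threshold stands in for the unspecified "preset time" and is purely an illustrative assumption:

```python
# A minimal sketch of the fixed-posture timing rule described above.
import time

POSTURE_HOLD_THRESHOLD_S = 60.0  # assumed stand-in for the "preset time"

class PostureMonitor:
    def __init__(self):
        self._posture = None  # last observed posture label
        self._since = 0.0     # timestamp when that posture was first seen

    def update(self, posture, now=None):
        """Return 'early_warning' if the same posture has been held too long."""
        now = time.monotonic() if now is None else now
        if posture != self._posture:
            self._posture, self._since = posture, now
        if now - self._since >= POSTURE_HOLD_THRESHOLD_S:
            return "early_warning"
        return "normal"

monitor = PostureMonitor()
print(monitor.update("lying", now=0.0))   # normal
print(monitor.update("lying", now=75.0))  # early_warning
```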
In some embodiments, the user state may be a behavioral action performed by the target nursing object or the area in which the subject is located. For example, the guardian can configure risk behaviors or risk areas for the target nursing object; when the subject is detected performing a risk behavior or entering a risk area, alarm information can be sent directly, so that the guardian can warn and stop the subject in time.
Optionally, since the body characteristic data of the elderly differs somewhat from that of healthy young people, judging by a common standard may cause frequent false alarms. The nursing subject can therefore undergo a physical examination, after which an authorized professional sets a preset health range for the relevant body characteristic data according to the examination results. The preset range cannot be changed by the user, only by the professional, which prevents it from being altered accidentally during the target nursing object's daily activities and triggering frequent false alarms. Alarm information is sent when the subject's body characteristic data is detected to be outside the preset health range.
Optionally, in further embodiments, the body characteristic data of the target nursing object may be acquired with the camera device the nursing robot is equipped with. A heart-rate detection function is provided by the flash lamp adjacent to the camera device: when the subject's user state is the early-warning state, the nursing robot prompts the subject by voice to place a hand close to the flash lamp and camera, detects the heart rate through the flash lamp and camera device, and detects the body temperature from the infrared layer of the acquired target image. When the subject's heart rate and body temperature are outside the preset health range, the subject really is comatose, and alarm information is sent.
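A minimal sketch of the preset-health-range check follows. The measurement names and example ranges are illustrative assumptions; the text specifies only that the range is set per subject by a professional after a physical examination:

```python
# A minimal sketch of the preset-health-range check; ranges are assumptions.
PRESET_HEALTH_RANGES = {
    "heart_rate_bpm": (55, 100),
    "body_temperature_c": (36.0, 37.5),
    "blood_oxygen_pct": (94, 100),
}

def vitals_out_of_range(vitals):
    """Return the names of measurements that fall outside their preset range."""
    alerts = []
    for name, value in vitals.items():
        low, high = PRESET_HEALTH_RANGES[name]
        if not low <= value <= high:
            alerts.append(name)
    return alerts

readings = {"heart_rate_bpm": 42, "body_temperature_c": 36.6, "blood_oxygen_pct": 97}
if vitals_out_of_range(readings):
    print("send alarm information")  # e.g. notify the guardian or hospital
```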
Illustratively, after the target nursing object undergoes a physical examination, a preset health range for the relevant body characteristic data is set by, or under the guidance of, a doctor. When the nursing robot nurses the subject, it determines its current work area; when that area is a toilet, the attribute associated with it is the private attribute, so the subject's height or skeleton information is obtained from the depth layer of the target image. If the obtained height is 160 cm, the heel-to-knee height of 48 cm follows from the height algorithm, locating the knee; alternatively, the knee's position coordinates are determined from the obtained skeleton information. Based on the determined knee coordinates, the image information at the subject's private position is filtered from the infrared layer and/or the visible-light layer, so that the private position is absent from the finally output image. The subject's user state is then determined from the filtered target image. When the subject is detected holding a fixed posture for longer than the preset time, the user state is identified as the early-warning state; body characteristic data collected by the smart wearable device, or by the camera device, is then acquired, and when that data is outside the preset health range, alarm information is sent to the guardian or a hospital to seek help.
In the technical scheme provided by this embodiment, the work area where the nursing robot is currently located is determined and the area attribute associated with it is acquired. When that attribute is a private attribute, the height or skeleton information of the target nursing object is obtained from the depth layer of the target image, the coordinates of the subject's private position are determined from it, and the image information at the private position is filtered from the infrared layer and/or the visible-light layer based on those coordinates, protecting the subject's privacy and improving the user experience. The subject's user state is then determined from the filtered target image; when it is the early-warning state, body characteristic data collected by the smart wearable device, or by the camera device, is further acquired, and alarm information is sent when that data is outside the preset health range, achieving a timely alarm.
Referring to fig. 2, in the second embodiment, based on the first embodiment, after the step S10, the method includes:
step S40: when the area attribute is a public attribute, determining the brightness degree of the current working environment of the nursing robot;
in this embodiment, when the area attribute associated with the work area where the nursing robot is currently located is a public attribute, it indicates that the work area where the robot is currently located is a public area, such as a hall, a balcony, or the like, and it is not necessary to filter the private position of the target nursing object in the captured target image. In order to better nurse a target nursing object, different layers can be adopted to detect the user state of the target nursing object in different environments, so that the nursing quality is improved.
Optionally, the light-intensity value of the nursing robot's current working environment can be obtained and the brightness of the environment determined from it; whether the environment is dark or bright is then judged from that brightness, so that the appropriate image is collected in each environment.
Step S50: when the working environment is judged to be a dark light environment according to the brightness degree, determining the user state of the target nursing object according to the infrared image layer of the target image;
in this embodiment, which type of layer is used to detect the target nursing object is determined according to the brightness degree of the current working environment of the nursing robot, so that the detected user state is more accurate. Under the dark light environment, the resolution ratio of the depth image layer in the acquired target image is too low, so that the target nursing object cannot be detected or the detection accuracy is low, and therefore, under the dark light environment, the target nursing object can be detected according to the infrared image layer of the target image so as to accurately detect the user state of the target nursing object, and therefore when the target nursing object is in an early warning state or a falling state, the target nursing object can be found and alarmed in time.
Step S60: and when the working environment is judged to be a bright environment according to the brightness degree, determining the user state of the target nursing object according to the visible light layer of the target image.
In this embodiment, in a bright environment the infrared layer of the acquired target image is overexposed, making it difficult to detect the target nursing object's user state; the subject is therefore detected from the visible-light layer of the target image, ensuring the user state is determined accurately and, when the subject is in an early-warning or falling state, found and alarmed in time.
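The brightness-based layer selection in steps S40–S60 reduces to a threshold test. The lux threshold below is an assumption; the text does not specify how the "brightness degree" is quantified:

```python
# A minimal sketch of brightness-based layer selection; the threshold is assumed.
DARK_ENVIRONMENT_LUX = 50.0  # assumed boundary between dark and bright

def select_detection_layer(light_intensity_lux):
    """Pick the target-image layer used for user-state detection."""
    if light_intensity_lux < DARK_ENVIRONMENT_LUX:
        return "infrared_layer"    # dark: use the infrared layer
    return "visible_light_layer"   # bright: infrared may be overexposed

print(select_detection_layer(10.0))   # infrared_layer
print(select_detection_layer(300.0))  # visible_light_layer
```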
Alternatively, the user state may be the physical condition of the target nursing object, such as coma or a fall, the behavioral actions the subject performs, or the area in which the subject is located. Because different nursing subjects have different physical conditions, risk behaviors and/or risk areas can be set according to the target nursing object's condition. When the subject performs a risk behavior and/or enters a risk area, the user state is in the early-warning state, and alarm information can be sent directly to the guardian and the subject. The alarm can take the form of an output voice reminding the target nursing object, and the acquired target image can be sent to the guardian so that the guardian is informed, achieving a timely alarm.
Illustratively, a target nursing subject suffering from Alzheimer's disease especially needs abnormal behaviors monitored, so specific risk areas and/or risk behaviors can be set. When the subject approaches a risk area such as a kitchen stove and/or performs a risk behavior such as climbing a ladder or going out alone, the user state is identified as the early-warning state, and a reminding voice is output to warn the subject and inform the guardian. For a subject with limited mobility, safe regions and risk regions can be set according to the subject's abilities; when the subject enters a risk region, the user state is identified as the early-warning state, and a voice is output to remind the subject and inform the guardian, achieving a timely alarm and improving the user experience.
Alternatively, obstacle avoidance may be performed based on the camera device. During nursing, the robot moves along with the target nursing object; the distance between them can be determined from the target image acquired by the camera device, so that a suitable distance is kept and collisions or tripping avoided. When the target nursing object moves beyond the monitoring range of the camera device, the moving direction is predicted from the subject's moving speed so that tracking continues. When the current user state is detected to be a falling state, a preset voice prompt can be broadcast, the subject's response recorded, and alarm information sent to the guardian or a hospital. If a network anomaly prevents the alarm information from being sent, or the subject does not respond and no guardian feedback arrives, the loudspeaker can be turned to maximum volume and an alarm sound emitted.
In the technical scheme provided by this embodiment, when the area attribute associated with the nursing robot's current work area is determined to be a public attribute, the work area is a public area and the image information at the target nursing object's private position need not be filtered. The brightness of the robot's current working environment is then determined; when the environment is judged dark from that brightness, the subject's user state is determined from the infrared layer of the target image, and when it is judged bright, from the visible-light layer. By determining the brightness of the current working environment and reading the corresponding layer of the target image in each environment, the user state of the target nursing object is determined and nursing quality is improved.
Referring to fig. 3, in the third embodiment, based on any one of the above embodiments, before step S20, the method includes:
step S70: determining facial features and/or body type information of each object located in the target image based on an infrared layer or a depth layer in the target image;
step S80: and if the object is determined to be a child and/or an old person according to the facial features and/or the body type information of each object, taking the determined child and/or old person as the target nursing object.
In this embodiment, the nursing robot can care for a plurality of nursing subjects simultaneously, for example elderly people and/or young children. After the camera device collects a target image, the facial features and body-type information of each person in the image can be determined from its infrared layer, and the elderly people and children in the current work area identified from them; once identified, they are taken as target nursing objects and tracked and nursed in real time.
In some embodiments, the body-type information of each person can instead be determined from the depth layer of the target image, so that children in the current work area are identified from body type; or the depth-layer data is converted into point-cloud data to determine each person's skeleton information, from which the elderly people and children in the current work area are identified.
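A minimal sketch of selecting target nursing objects from detected persons follows. The age-based classifier output and the cutoffs (children ≤ 12, elderly ≥ 65) are illustrative assumptions; the text leaves the concrete facial-feature/body-type model unspecified:

```python
# A minimal sketch of target selection; the age field and cutoffs are assumed.
from dataclasses import dataclass

@dataclass
class DetectedPerson:
    person_id: int
    estimated_age: int  # assumed output of a facial-feature/body-type classifier

def select_care_targets(persons):
    """Keep children and elderly people as target nursing objects."""
    return [p for p in persons if p.estimated_age <= 12 or p.estimated_age >= 65]

detected = [DetectedPerson(1, 8), DetectedPerson(2, 35), DetectedPerson(3, 72)]
print([p.person_id for p in select_care_targets(detected)])  # [1, 3]
```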
Optionally, to improve nursing quality, the target nursing object may be chosen according to each subject's behavior in different time periods and each subject's user state, and/or each subject's priority, so that the subject most in need is monitored first without neglecting the others. The priority of each subject can be set freely by the user according to the actual situation. For example, when one subject's user state has been normal for a certain period, the other subjects may be monitored preferentially; likewise, when an elderly person is resting or has been in the normal state for a sustained period, a child in the early-warning state may be monitored first. Of course, where space permits, a position from which several subjects can be monitored simultaneously may be preferred, so that multiple subjects are watched at once.
It is understood that the nursing robot or the server may previously store priorities corresponding to the nursing subjects, so that the nursing robot may acquire the priorities corresponding to the nursing subjects stored in advance or the priorities corresponding to the nursing subjects stored in the server to determine the target nursing subject. The priorities of the nursing subjects in different time periods or different user states may be different, which is not specifically limited in this embodiment.
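One way to sketch the priority-based choice of whom to monitor next is to rank subjects first by the urgency of their current user state, then by their configured priority. Both the urgency weights and the data layout are assumptions:

```python
# A minimal sketch of priority-based monitoring order; weights are assumed.
STATE_URGENCY = {"falling": 2, "early_warning": 1, "normal": 0}

def next_subject_to_monitor(subjects):
    """Pick the subject with the most urgent state; break ties by priority."""
    return max(subjects, key=lambda s: (STATE_URGENCY[s["state"]], s["priority"]))

subjects = [
    {"name": "grandfather", "state": "normal", "priority": 2},
    {"name": "child", "state": "early_warning", "priority": 1},
]
print(next_subject_to_monitor(subjects)["name"])  # child
```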
Optionally, the guardian may register the nursing subjects to be supervised in advance, with the facial features and/or body-type information of each subject acquired through the camera device and stored, so that during monitoring the features of each person in the acquired image are compared against the stored facial-feature and body-type sets to identify each nursing subject.
Optionally, in this embodiment, different processing may be performed for different user states of the nursing subject to carry out different nursing tasks. For example, when the target nursing object is an infant, music can be played to soothe it when it cries; when the subject is an elderly person, a corresponding reminding voice can be output when the subject falls or performs a risk behavior. The nursing subjects may be elderly people, children, pets, and so on, and the guardian can decide which subjects are to be cared for.
Illustratively, when the target nursing object is an infant and crying is detected, the infant's current user state is the early-warning state. Music can then be played through a loudspeaker and the cradle rocked to soothe the infant, while the current user state is recorded and a video captured; the recorded state and video are then sent to the terminal so that the guardian is informed, and after learning of the state the guardian can start a voice call to soothe the infant while heading to the scene. When the target nursing object is a household pet and the pet is detected escaping through a door or window, the state is the early-warning state: the user is reminded in time, and the footage is recorded and stored locally or uploaded to the cloud for convenient review. When an infant or pet is home alone with no adult present and a stranger enters, alarm information is sent to the guardian immediately, a preset deterrent voice is broadcast, and the elderly and children are told, through smart wearable devices such as a bracelet, to take shelter in a room.
In the technical scheme provided by this embodiment, before the image information corresponding to the target nursing object's private position is filtered from the target image acquired by the camera device, the facial features and/or body-type information of each person in the target image is determined from the image's infrared layer or depth layer; if a person is determined to be a child and/or an elderly person from those features, that person is taken as a target nursing object. A plurality of nursing subjects are thus cared for, improving nursing efficiency and user experience.
Referring to fig. 4, in the fourth embodiment, based on any one of the above embodiments, after step S30, the method further includes:
step S90: determining an alarm level corresponding to the target nursing object according to the user state of the target nursing object;
step S100: executing a corresponding nursing strategy according to the alarm level corresponding to the target nursing object; if the user state of the target nursing object is a normal state, determining that the alarm level of the target nursing object is a first level, and executing a nursing strategy corresponding to the first level; if the user state of the target nursing object is an early warning state, determining that the warning level of the target nursing object is a second level, and executing a nursing strategy corresponding to the second level; and if the user state of the target nursing object is a falling state, determining that the alarm level of the target nursing object is a third level, and executing a nursing strategy corresponding to the third level.
In this embodiment, the alarm level corresponding to the target nursing object may be determined from the subject's user state, and the nursing strategy corresponding to that alarm level then executed, achieving effective nursing and, when there are several target nursing objects, targeted nursing. The nursing strategy for each alarm level can be formulated according to actual requirements, or according to the subject's physical condition and behavior.
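The three-level mapping can be sketched as a simple dispatch. The strategy bodies are placeholders, since the text says each level's nursing strategy is formulated according to actual requirements:

```python
# A minimal sketch of the state -> alarm-level -> strategy dispatch.
from enum import IntEnum

class AlarmLevel(IntEnum):
    FIRST = 1   # normal state
    SECOND = 2  # early-warning state
    THIRD = 3   # falling state

STATE_TO_LEVEL = {
    "normal": AlarmLevel.FIRST,
    "early_warning": AlarmLevel.SECOND,
    "falling": AlarmLevel.THIRD,
}

def execute_nursing_strategy(user_state):
    """Map the detected user state to an alarm level and run its strategy."""
    level = STATE_TO_LEVEL[user_state]
    if level is AlarmLevel.FIRST:
        pass  # e.g. re-check at a preset interval, prioritize other subjects
    elif level is AlarmLevel.SECOND:
        pass  # e.g. collect body characteristic data, alarm if out of range
    else:
        pass  # e.g. voice inquiry, send image and injury info to the terminal
    return level

print(execute_nursing_strategy("falling").name)  # THIRD
```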
Optionally, after the user state of the target nursing object is determined from the filtered target image, the image information of the subject in the infrared layer or the visible-light layer of the target image may be displayed at the terminal, so that the guardian and/or hospital learns the subject's user state in time.
Optionally, after the step of displaying at the terminal the image information of the target nursing object in the infrared layer or the visible-light layer of the target image, the method further includes: determining, according to the subject's user state, an emergency scheme corresponding to that state, and displaying that emergency scheme on the terminal so that the guardian can carry out a rescue according to it.
For example, when the current user state of the target nursing object is detected to be the normal state and the corresponding alarm level is determined to be the first level, the executed nursing strategy may be to re-check the subject's user state at a preset interval and preferentially nurse other target nursing objects in the early-warning or falling state. When the current user state is detected to be the early-warning state and the alarm level is determined to be the second level, the executed strategy may be: acquire the subject's body characteristic data collected by the smart wearable device, or collected by the camera device; when the data is outside the preset health range, send alarm information to the guardian and/or hospital and send the subject's image information in the infrared or visible-light layer of the target image to the terminal for display, so that the guardian and/or hospital can prepare a rescue scheme in advance from the displayed information and rescue the subject promptly on arrival; when the data is within the preset health range, control the nursing robot to output a voice inquiry about the subject's condition and decide from the response whether alarm information needs to be sent, and if not, the subject is in the normal state, so the current early-warning state is changed to the normal state and the strategy corresponding to the normal state is executed instead. When the subject is detected to be in the falling state and the alarm level is determined to be the third level, the executed strategy may be: acquire the subject's current facial expression and body posture through the camera device, output a voice inquiry and record the response so as to determine the subject's injuries from the expression, posture, and response, and send the subject's image information in the infrared or visible-light layer of the acquired target image, together with the injury information, to the terminal for display, so that the guardian and/or hospital can carry out a rescue in time.
In the technical solution provided in this embodiment, the alarm level corresponding to the target nursing object is determined from the subject's user state, and the nursing strategy corresponding to that level is executed: if the user state is the normal state, the alarm level is determined to be the first level and the first-level strategy is executed; if it is the early-warning state, the level is the second and the second-level strategy is executed; if it is the falling state, the level is the third and the third-level strategy is executed. Effective nursing of the target nursing object is thereby achieved, and targeted nursing when there are several target nursing objects.
Referring to fig. 5, fig. 5 is a schematic terminal structure diagram of a hardware operating environment according to an embodiment of the present invention.
The terminal of the embodiment of the invention can be a nursing robot.
As shown in fig. 5, the terminal may include: a processor 1001, e.g. a CPU, a network interface 1004, a user interface 1003, a memory 1005, a communication bus 1002. The communication bus 1002 is used to implement connection communication among these components. The user interface 1003 may include a Display screen (Display) or the like. The network interface 1004 may optionally include a standard wired interface, a wireless interface (e.g., a WI-FI interface). The memory 1005 may be a high-speed RAM memory or a non-volatile memory (e.g., a magnetic disk memory). The memory 1005 may alternatively be a storage device separate from the processor 1001.
Those skilled in the art will appreciate that the terminal structure shown in fig. 5 is not intended to be limiting and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components.
As shown in fig. 5, a memory 1005, which is a kind of computer storage medium, may include an operating system, a network communication module, a user interface module, and a control program of the nursing robot.
In the terminal shown in fig. 5, the network interface 1004 is mainly used for connecting a background server and performing data communication with the background server; the processor 1001 may be configured to call a control program of the nursing robot stored in the memory 1005, and perform the following operations:
determining a current working area of the nursing robot, and acquiring an area attribute associated with the working area;
when the area attribute is a private attribute, filtering image information corresponding to a private position of a target nursing object in a target image acquired by the camera device, wherein the target image comprises an image of the target nursing object;
and determining the user state of the target nursing object according to the filtered target image.
Further, the processor 1001 may call the control program of the nursing robot stored in the memory 1005, and also perform the following operations:
after the step of determining the work area where the nursing robot is currently located and acquiring the area attribute associated with the work area, the method further includes:
when the area attribute is a public attribute, determining the brightness degree of the current working environment of the nursing robot;
when the working environment is judged to be a dark environment according to the brightness degree, determining the user state of the target nursing object according to the infrared image layer of the target image;
and when the working environment is judged to be a bright environment according to the brightness degree, determining the user state of the target nursing object according to the visible light layer of the target image.
Further, the processor 1001 may call the control program of the nursing robot stored in the memory 1005, and also perform the following operations:
when the user state of the target nursing object is an early warning state, acquiring body characteristic data of the target nursing object acquired by intelligent wearable equipment, or acquiring body characteristic data of the target nursing object acquired based on the camera device;
and sending alarm information when the body characteristic data is not in a preset health range.
Further, the processor 1001 may call the control program of the nursing robot stored in the memory 1005, and also perform the following operations:
obtaining the height of the target nursing object or the skeleton information of the target nursing object in the depth layer of the target image;
determining the coordinates of the private position of the target nursing object according to the height or the skeleton information;
and filtering the image information of the target nursing object private position on the infrared layer and/or the visible light layer based on the coordinate of the target nursing object private position.
Further, the processor 1001 may call the control program of the nursing robot stored in the memory 1005, and also perform the following operations:
determining facial features and/or body type information of each object located in the target image based on an infrared layer or a depth layer in the target image;
and if the object is determined to be a child and/or an old person according to the facial features and/or the body type information of each object, taking the determined child and/or old person as the target nursing object.
Further, the processor 1001 may call the control program of the nursing robot stored in the memory 1005, and also perform the following operations:
determining an alarm level corresponding to the target nursing object according to the user state of the target nursing object;
executing a corresponding nursing strategy according to the alarm level corresponding to the target nursing object; if the user state of the target nursing object is a normal state, determining that the alarm level of the target nursing object is a first level, and executing a nursing strategy corresponding to the first level; if the user state of the target nursing object is an early warning state, determining that the alarm level of the target nursing object is a second level, and executing a nursing strategy corresponding to the second level; and if the user state of the target nursing object is a falling state, determining that the alarm level of the target nursing object is a third level, and executing a nursing strategy corresponding to the third level.
Further, the processor 1001 may call the control program of the nursing robot stored in the memory 1005, and also perform the following operations:
and displaying the image information of the target nursing object in the infrared layer or the visible light layer of the target image at a terminal.
Further, the processor 1001 may call the control program of the nursing robot stored in the memory 1005, and also perform the following operations:
determining, according to the user state of the target nursing object, an emergency scheme corresponding to that user state;
and displaying the emergency scheme corresponding to the user state of the target nursing object on the terminal.
In order to achieve the above object, the present invention also provides a nursing robot, comprising: a memory, a processor, and a control program of the nursing robot stored on the memory and executable on the processor, wherein the control program implements the steps of the control method of the nursing robot described above when executed by the processor.
In order to achieve the above object, the present invention further provides a computer-readable storage medium storing a control program for a nursing robot, which, when executed by a processor, implements the steps of the control method for the nursing robot described above.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or system that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or system. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of additional like elements in the process, method, article, or system that comprises the element.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
Through the description of the foregoing embodiments, it is clear to those skilled in the art that the method of the foregoing embodiments may be implemented by software plus a necessary general hardware platform, and certainly may also be implemented by hardware, but in many cases, the former is a better implementation. Based on such understanding, the technical solution of the present invention or the portions contributing to the prior art may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) as described above and includes several instructions for causing a terminal device (which may be a nursing robot) to execute the method according to the embodiments of the present invention.
The above description is only a preferred embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes, which are made by using the contents of the present specification and the accompanying drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.

Claims (10)

1. A method for controlling a nursing robot, applied to a nursing robot including a camera device, the method comprising:
determining a work area where the nursing robot is located currently, and acquiring area attributes associated with the work area;
when the area attribute is a private attribute, filtering image information corresponding to a private position of a target nursing object in a target image acquired by the camera device, wherein the target image comprises an image of the target nursing object;
and determining the user state of the target nursing object according to the filtered target image.
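Illustrative, non-limiting sketch: the overall control flow of claim 1 might be organized as in the following Python fragment, where TargetImage, filter_private_regions, and classify_user_state are hypothetical placeholders for the perception modules the claim leaves abstract.

```python
from dataclasses import dataclass

@dataclass
class TargetImage:
    pixels: list                   # stand-in for real image data
    private_filtered: bool = False

def filter_private_regions(image: TargetImage) -> TargetImage:
    # Placeholder: a real implementation would mask or blur the private
    # positions located via the depth layer (compare claim 4 below).
    image.private_filtered = True
    return image

def classify_user_state(image: TargetImage) -> str:
    # Placeholder for a pose/fall classifier running on the (filtered) image.
    return "normal"

def care_cycle(area_attribute: str, image: TargetImage) -> str:
    """Claim-1 control flow: filter private positions only in private areas,
    then determine the user state from the (possibly filtered) target image."""
    if area_attribute == "private":
        image = filter_private_regions(image)
    return classify_user_state(image)

print(care_cycle("private", TargetImage(pixels=[])))
```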
2. The method of claim 1, wherein the target image includes an infrared layer and a visible light layer;
after the step of determining the work area where the nursing robot is currently located and acquiring the area attribute associated with the work area, the method further includes:
when the area attribute is a public attribute, determining the brightness level of the current working environment of the nursing robot;
when the working environment is judged to be a dark environment according to the brightness level, determining the user state of the target nursing object according to the infrared layer of the target image;
and when the working environment is judged to be a bright environment according to the brightness level, determining the user state of the target nursing object according to the visible light layer of the target image.
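Illustrative, non-limiting sketch: the dark/bright layer selection of claim 2 could be implemented with a simple threshold on measured ambient brightness. The mean-intensity heuristic and the threshold value below are assumptions, not values from the disclosure.

```python
# Hypothetical layer selection for public areas: use the infrared layer in
# the dark and the visible-light layer when it is bright.
def select_layer(infrared_layer, visible_layer, mean_brightness: float,
                 threshold: float = 50.0):
    """Return the layer to use for state estimation, given ambient brightness
    (e.g. mean pixel intensity of the visible image on a 0-255 scale)."""
    if mean_brightness < threshold:   # dark environment
        return infrared_layer
    return visible_layer              # bright environment

layer = select_layer(infrared_layer="IR", visible_layer="RGB", mean_brightness=20.0)
print(layer)  # -> "IR"
```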
3. The method of claim 1 or 2, wherein after the step of determining the user state of the target nursing object according to the filtered target image, the method further comprises:
when the user state of the target nursing object is an early warning state, acquiring body characteristic data of the target nursing object collected by an intelligent wearable device, or acquiring body characteristic data of the target nursing object collected based on the camera device;
and sending alarm information when the body characteristic data is not in a preset health range.
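Illustrative, non-limiting sketch: the check of body characteristic data against a preset health range in claim 3 could look like the following; the vital-sign names and numeric ranges are assumed examples only.

```python
# Hypothetical preset health ranges for body characteristic data.
HEALTH_RANGES = {
    "heart_rate_bpm": (50, 110),
    "body_temp_c": (35.5, 37.5),
}

def out_of_range(features: dict) -> list:
    """Return the names of measurements outside their preset health range."""
    alerts = []
    for name, value in features.items():
        low, high = HEALTH_RANGES.get(name, (float("-inf"), float("inf")))
        if not (low <= value <= high):
            alerts.append(name)
    return alerts

readings = {"heart_rate_bpm": 128, "body_temp_c": 36.8}
violations = out_of_range(readings)
if violations:
    print("ALARM:", violations)  # i.e. send alarm information
```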
4. The method of claim 1, wherein the target image includes a depth layer, an infrared layer, and a visible light layer;
the step of filtering image information corresponding to the private position of the target nursing object in the target image acquired by the camera device comprises the following steps:
obtaining, from the depth layer of the target image, the height of the target nursing object or skeleton information of the target nursing object;
determining the coordinates of the private position of the target nursing object according to the height or the skeleton information;
and filtering, based on the coordinates of the private position of the target nursing object, the image information at the private position on the infrared layer and/or the visible light layer.
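A minimal sketch of claim 4, assuming the private position can be bounded by an image band derived from the subject's height in the depth layer; the body proportions used below are illustrative guesses, not values from the patent.

```python
import numpy as np

def mask_private_region(layer: np.ndarray, top_px: int, height_px: int) -> np.ndarray:
    """Zero out a band of the image inferred from the subject's height.

    layer:     H x W single-channel image (infrared or visible-light layer)
    top_px:    row of the subject's head in the image
    height_px: subject's height in pixels
    """
    # Assumption: the private position lies roughly 45%-60% down the body.
    y0 = top_px + int(0.45 * height_px)
    y1 = top_px + int(0.60 * height_px)
    masked = layer.copy()
    masked[y0:y1, :] = 0   # blank the rows; a real system would bound columns too
    return masked

ir = np.random.randint(0, 255, (240, 320), dtype=np.uint8)
print(mask_private_region(ir, top_px=20, height_px=180).shape)
```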
5. The method of claim 1, wherein the step of filtering image information corresponding to the private position of the target nursing object in the target image acquired by the camera device comprises:
determining facial features and/or body type information of each object located in the target image based on an infrared layer or a depth layer in the target image;
and if an object is determined to be a child and/or an elderly person according to the facial features and/or the body type information of each object, taking the determined child and/or elderly person as the target nursing object.
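Illustrative, non-limiting sketch: the target selection of claim 5 could classify each detected person by body-type and age cues and keep children and the elderly as nursing targets. The height threshold, age cutoffs, and the detection fields (height_m, estimated_age) are assumed inputs from an upstream detector, not interfaces defined by the patent.

```python
def select_targets(detections: list) -> list:
    """Keep only children and elderly persons as target nursing objects."""
    targets = []
    for person in detections:
        is_child = person["height_m"] < 1.3 or person["estimated_age"] < 12
        is_elderly = person["estimated_age"] >= 65
        if is_child or is_elderly:
            targets.append(person["id"])
    return targets

people = [
    {"id": "p1", "height_m": 1.10, "estimated_age": 6},
    {"id": "p2", "height_m": 1.75, "estimated_age": 40},
    {"id": "p3", "height_m": 1.60, "estimated_age": 72},
]
print(select_targets(people))  # -> ['p1', 'p3']
```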
6. The method of claim 1, wherein after the step of determining the user state of the target nursing object according to the filtered target image, the method further comprises:
determining an alarm level corresponding to the target nursing object according to the user state of the target nursing object;
executing a corresponding nursing strategy according to the alarm level corresponding to the target nursing object, wherein: if the user state of the target nursing object is a normal state, the alarm level of the target nursing object is determined to be a first level, and a nursing strategy corresponding to the first level is executed; if the user state of the target nursing object is an early warning state, the alarm level of the target nursing object is determined to be a second level, and a nursing strategy corresponding to the second level is executed; and if the user state of the target nursing object is a falling state, the alarm level of the target nursing object is determined to be a third level, and a nursing strategy corresponding to the third level is executed.
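Illustrative, non-limiting sketch: the three-tier alarm logic of claim 6 maps each user state to a level and dispatches a nursing strategy. The concrete actions attached to each level below are assumptions for illustration.

```python
# Hypothetical mapping from user state to alarm level.
ALARM_LEVELS = {
    "normal": 1,         # first level:  routine care strategy
    "early_warning": 2,  # second level: closer monitoring / notify caregiver
    "fallen": 3,         # third level:  immediate alarm and assistance
}

def execute_care_strategy(user_state: str) -> int:
    """Determine the alarm level and run the matching nursing strategy."""
    level = ALARM_LEVELS[user_state]
    if level == 1:
        pass  # continue routine patrol
    elif level == 2:
        print("Notify the caregiver and recheck vital signs")
    else:
        print("Raise an alarm and move to the nursing object")
    return level

print(execute_care_strategy("early_warning"))  # -> 2
```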
7. The method of claim 1, wherein after the step of determining the user state of the target nursing object according to the filtered target image, the method further comprises:
and displaying the image information of the target nursing object in the infrared layer or the visible light layer of the target image at a terminal.
8. The method of claim 7, wherein after the step of displaying the image information of the target nursing object in the infrared layer or the visible light layer of the target image at a terminal, the method further comprises:
determining, according to the user state of the target nursing object, an emergency scheme corresponding to the user state of the target nursing object;
and displaying the emergency scheme corresponding to the user state of the target nursing object at the terminal.
9. A nursing robot, comprising: a memory, a processor, and a control program of a nursing robot stored on the memory and executable on the processor, wherein the control program, when executed by the processor, implements the steps of the control method of a nursing robot according to any one of claims 1 to 8.
10. A computer-readable storage medium, wherein the computer-readable storage medium has stored thereon a control program of a nursing robot, and the control program, when executed by a processor, implements the steps of the control method of a nursing robot according to any one of claims 1 to 8.
CN202211407765.8A 2022-11-10 2022-11-10 Method for controlling nursing robot, nursing robot and storage medium Pending CN115781668A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211407765.8A CN115781668A (en) 2022-11-10 2022-11-10 Method for controlling nursing robot, nursing robot and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211407765.8A CN115781668A (en) 2022-11-10 2022-11-10 Method for controlling nursing robot, nursing robot and storage medium

Publications (1)

Publication Number Publication Date
CN115781668A true CN115781668A (en) 2023-03-14

Family

ID=85436760

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211407765.8A Pending CN115781668A (en) 2022-11-10 2022-11-10 Method for controlling nursing robot, nursing robot and storage medium

Country Status (1)

Country Link
CN (1) CN115781668A (en)

Similar Documents

Publication Publication Date Title
US10904492B2 (en) Video monitoring system
US11369321B2 (en) Monitoring and tracking system, method, article and device
US9501919B2 (en) Method and system for monitoring the activity of a subject within spatial temporal and/or behavioral parameters
JP6915421B2 (en) Watching support system and its control method
JP6720909B2 (en) Action detection device, method and program, and monitored person monitoring device
US10262517B2 (en) Real-time awareness of environmental hazards for fall prevention
CA3030850C (en) Systems and methods for use in detecting falls utilizing thermal sensing
KR101715218B1 (en) System and method for detecting the patient's fall by analyzing image
US20210219873A1 (en) Machine vision to predict clinical patient parameters
KR20170000123A (en) Care system for patient suffering from alzheimer's disease
KR102258832B1 (en) Silver care system
JP6142975B1 (en) Monitored person monitoring apparatus and method, and monitored person monitoring system
Bauer et al. Modeling bed exit likelihood in a camera-based automated video monitoring application
CN115781668A (en) Method for controlling nursing robot, nursing robot and storage medium
Ianculescu et al. Improving the Elderly’s Fall Management through Innovative Personalized Remote Monitoring Solution
KR102608941B1 (en) Abnormal behavior detecting system using artificial intelligence
US11361647B2 (en) System and method of wirelessly tracking a walking assistance tool
WO2022036624A1 (en) Monitoring method and apparatus, electronic device, and storage medium
Frenken et al. Criteria for quality and safety while performing unobtrusive domestic mobility assessments using mobile service robots
JP2021174189A (en) Method of assisting in creating menu of service, method of assisting in evaluating user of service, program causing computer to execute the method, and information providing device
JP2020194392A (en) Program for notifying of information, information notification device, and method executed by computer for notifying of information
WO2018039952A1 (en) Smart caregiving guardrail for bed
WO2018039953A1 (en) Smart caregiving system
WO2020241021A1 (en) Care management method, program, care management device, and care management system
WO2020003953A1 (en) Program executed by computer, information processing device, and method executed by computer

Legal Events

Code Title
PB01 Publication
SE01 Entry into force of request for substantive examination