CN113096808A - Event prompting method and device, computer equipment and storage medium


Info

Publication number
CN113096808A
Authority
CN
China
Prior art keywords
target
target object
data
action
preset
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110446886.2A
Other languages
Chinese (zh)
Inventor
熊玮 (Xiong Wei)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
OneConnect Smart Technology Co Ltd
OneConnect Financial Technology Co Ltd Shanghai
Original Assignee
OneConnect Financial Technology Co Ltd Shanghai
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by OneConnect Financial Technology Co Ltd Shanghai
Priority to CN202110446886.2A
Publication of CN113096808A
Legal status: Pending


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G06V40/168 Feature extraction; Face representation
    • G06V40/174 Facial expression recognition
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H50/00 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/30 for calculating health indices; for individual health risk assessment


Abstract

The application relates to an event prompting method and device, computer equipment and a storage medium. The method comprises the following steps: receiving collected data from at least one pickup; extracting target action features and target pain expression features of a target object in the collected data; determining, based on the target action features and the target pain expression features, whether a preset disease event occurs to the target object; and if the preset disease event occurs, sending prompt information corresponding to the preset disease event to a contact device. By adopting the method and device, the accuracy of disease event identification can be improved and events can be handled more efficiently.

Description

Event prompting method and device, computer equipment and storage medium
Technical Field
The application relates to the field of computer technology, and in particular to an event prompting method and device, computer equipment and a storage medium.
Background
Smart home (home automation) takes the home as a platform and integrates facilities related to home life by means of integrated wiring, network communication, security, automatic control and audio/video technologies, building an efficient management system for household facilities and daily affairs, improving the safety, convenience, comfort and artistry of the home, and realizing an environment-friendly and energy-saving living environment.
Audio/video technology is an important part of the smart home and is widely applied in families in the form of home monitoring systems. An existing home monitoring system mainly records scenes of daily life through an image pickup (e.g., a camera) and can identify events that actually occur (e.g., disease events). However, its recognition accuracy is low, and missed or erroneous reminders easily occur.
Disclosure of Invention
The embodiments of the application provide an event prompting method and device, computer equipment and a storage medium, which can improve the accuracy of disease event identification. Moreover, because information is reported to the contact device once a preset disease event is identified, the user corresponding to the contact device can take timely measures to deal with the event, which facilitates efficient event handling.
In a first aspect, an embodiment of the present application provides an event prompting method, comprising:
receiving collected data from at least one pickup;
extracting target action characteristics and target pain expression characteristics of the target object in the acquired data;
determining whether a preset disease event occurs to the target object based on the target action feature and the target pain expression feature;
and if the preset disease event occurs, sending prompt information corresponding to the preset disease event to contact equipment.
In a second aspect, an embodiment of the present application provides an event prompting device, comprising:
a communication unit for receiving the collected data from the at least one pickup;
the storage unit is used for storing preset disease events;
the processing unit is used for extracting target action characteristics and target pain expression characteristics of the target object in the acquired data; determining whether the preset disease event occurs to the target object based on the target action feature and the target pain expression feature;
the communication unit is further configured to send prompt information corresponding to the preset disease event to the contact device if the preset disease event occurs.
In a third aspect, an embodiment of the present application provides a computer device, including a processor, a memory, a communication interface, and one or more programs, where the one or more programs are stored in the memory and configured to be executed by the processor, and include instructions for performing some or all of the steps described in the first aspect.
In a fourth aspect, the present application provides a computer-readable storage medium storing a computer program that causes a computer to perform some or all of the steps described in the first aspect.
The embodiment of the application has the following beneficial effects:
after the event prompting method, the event prompting device, the computer equipment and the storage medium are adopted, the collected data from at least one pickup device are received, the target action characteristic and the target pain expression characteristic of the target object in the collected data are extracted, and whether the target object has the preset disease event or not is determined based on the target action characteristic and the target pain expression characteristic. Therefore, the preset disease event is identified based on the target action characteristic and the target pain expression characteristic carried by the collected data collected by the pickup device, and the accuracy of identifying the disease event can be improved. And the information is reported to the contact equipment after the occurrence of the preset disease event is identified, so that a user corresponding to the contact equipment can take measures to deal with the occurrence of the event in time, and the processing efficiency of the event is improved conveniently.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present application; those skilled in the art can obtain other drawings based on these drawings without creative effort.
Wherein:
fig. 1 is a schematic flowchart of an event prompting method according to an embodiment of the present application;
fig. 2 is a schematic structural diagram of an event prompting device according to an embodiment of the present application;
fig. 3 is a schematic structural diagram of a computer device according to an embodiment of the present application.
Detailed Description
In order to make the technical solutions of the present application better understood, the technical solutions in the embodiments of the present application are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only a part of the embodiments of the present application, not all of them. All other embodiments obtained by a person skilled in the art based on the embodiments given herein without creative effort shall fall within the protection scope of the present application.
The terms "first," "second," and the like in the description and claims of the present application and in the above-described drawings are used for distinguishing between different objects and not for describing a particular order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
The embodiments of the application are applied to a monitoring system, which can be used in scenarios such as home monitoring, hospital monitoring, school monitoring, shopping-mall monitoring, prison monitoring and company monitoring, without limitation. The monitoring system comprises pickups, a processing device and a contact device. The type and number of pickups are not limited in the present application; the pickups may include an image pickup (e.g., a camera), a voice pickup (e.g., a microphone), a brain wave pickup, a blood glucose pickup, a blood oxygen pickup, a heart rate pickup, a temperature pickup, a motion pickup, etc. The pickups are used to collect data, for example image or video data collected by the image pickup, voice data collected by the voice pickup, brain wave data collected by the brain wave pickup, blood glucose data collected by the blood glucose pickup, blood oxygen saturation data collected by the blood oxygen pickup, heart rate data collected by the heart rate pickup, temperature data collected by the temperature pickup, and motion data collected by the motion pickup, without limitation.
It should be noted that the collected data are data collected after the target object has authorized the pickup; that is, when a pickup is not permitted to collect the target object's data, no data collection is performed, so as to avoid disclosing user privacy. For example, when the target object has authorized the image pickup and the heart rate pickup but not the voice pickup, the brain wave pickup or the blood oxygen pickup, the collected data include image or video data and heart rate data, but not voice, brain wave or blood oxygen data. The acquisition frequency can also be set according to the target object: for example, the image pickup can be set to acquire in real time, i.e., to continuously capture images or video within its acquisition range; as another example, a heart rate pickup for a non-cardiac patient can be set to collect at one-hour intervals, while one for a cardiac patient collects once per minute, etc.
The processing device can be used for processing the collected data to obtain a processing result. The processing device may also be configured to send the processing result to the contact device, or control the pickup device to perform data collection based on the processing result, which is not limited herein. The processing device and the contact device may include a Personal Computer (PC), a notebook computer, a mobile phone, an all-in-one machine, a palm computer, a tablet computer (pad), a server, an intelligent speaker, an intelligent television playing terminal, a vehicle-mounted terminal, or a portable device, and the like, which is not limited herein.
The connection relations among the processing device, the pickups and the contact device are not limited in the present application. A pickup and the processing device may be two separate devices connected wirelessly. For example, if the processing device is a smart speaker and the pickup is an image pickup, the image pickup may be connected to the smart speaker through a wireless network, and the data it collects are sent to the smart speaker over that network. A pickup may alternatively be part of the processing device; for example, when the processing device is a mobile phone and the pickups are the camera and microphone on that phone, the camera and microphone may be connected to the phone's processor by wires. The processing device and the contact device may likewise be separate devices connected wirelessly. For example, when both are associated with an application program, the processing device may send the result of processing the collected data to the contact device through the application program, thereby implementing wireless communication. The application program is not limited and may be a monitoring application, a health application, or the like. The processing device may alternatively be part of the contact device; for example, the processing device may be a processor in the contact device, and after obtaining the processing result, the processor may send it to the display of the contact device through an internal bus.
The devices in the monitoring system can be configured through the contact device or the processing device; for example, one image pickup may be arranged in each room, or different pickups may be set for different people (e.g., a heart rate pickup for a heart disease patient, a blood glucose pickup for a diabetes patient, a smart bracelet for children and the elderly, etc.). The devices in the monitoring system may also include lights, intelligent wheelchairs, intelligent sweeping devices, etc., which are not limited herein.
The preset disease event related to the embodiment of the application may include an event corresponding to at least one of the following diseases: heart disease, diabetes, stomach disease, cold, fever, liver disease, cardiovascular disease, cancer, kidney disease, oral disease, eye disease, pregnancy, etc., without limitation.
The event prompting method provided by the embodiments of the application can be executed by an event prompting device, which can be implemented in software and/or hardware and applied to the aforementioned processing device. The preset disease event is identified based on the target action features and target pain expression features carried by the data collected by the pickups, which can improve the accuracy of disease event identification. Moreover, because information is reported to the contact device once the preset disease event is identified, the user corresponding to the contact device can take timely measures to deal with the event, which facilitates efficient event handling.
Referring to fig. 1, fig. 1 is a schematic flowchart of the event prompting method provided by the present application. Taking application to the processing device as an example, the method may include the following steps S101 to S104:
s101: the processing device receives the acquired data from the at least one pickup.
The type and number of pickups are not limited in the present application; there may be a single pickup or multiple (two or more) pickups. When there is only one pickup, it is an image pickup. Data collection is as described above and is not repeated here. The type and number of items of collected data are likewise not limited; when there are multiple items, they may be data collected by different pickups, or at least two items of data collected by a single pickup. The processing device may receive the data currently acquired by all pickups, receive all data of the target object, or receive data according to the preset acquisition frequency of each pickup, etc., which is not limited herein.
Optionally, if one item of the collected data is abnormal health data, step S102 is executed.
In the embodiments of the present application, health data describe a health state, and abnormal health data are health data that fall outside the normal range or deviate from the expected variation trend. For example, the normal range of heart rate is 60 to 100 beats per minute; if the heart rate detected by the heart rate pickup is 120 beats per minute, or its variation is unstable, the data collected by the heart rate pickup are abnormal monitoring data. It can be understood that, when abnormal health data are monitored, extracting the target action features and target pain expression features of the target object from the data collected by each pickup helps verify the abnormal health data and improves the accuracy of the analysis. When the health data are normal, the other collected data are not processed and no features are extracted, which saves power.
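To make the range-and-trend check concrete, here is a minimal sketch in Python. The 60 to 100 beats-per-minute range and the 120 bpm example come from the text above; the jitter threshold, the function name and the sample-window handling are illustrative assumptions, not the patent's implementation.

```python
from statistics import pstdev

NORMAL_HEART_RATE = (60, 100)  # beats per minute, per the text above

def is_abnormal_heart_rate(samples, max_jitter=15.0):
    """Flag heart-rate data that leave the normal range or vary erratically."""
    latest = samples[-1]
    out_of_range = not (NORMAL_HEART_RATE[0] <= latest <= NORMAL_HEART_RATE[1])
    unstable = len(samples) >= 3 and pstdev(samples) > max_jitter
    return out_of_range or unstable

print(is_abnormal_heart_rate([72, 75, 120]))  # True: 120 bpm is out of range
print(is_abnormal_heart_rate([70, 72, 71]))   # False: steady and in range
```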
It should be noted that, when the data collected by one pickup are abnormal health data, the other pickups that are not currently collecting may also be started. The data involved in steps S101 and S102 then include the collected data containing the abnormal health data, and may also include the data collected by each pickup before, while, or after the abnormal health data were collected. In this way, feature extraction of the target action features and target pain expression features can draw on data collected by different pickups around the time of the anomaly, improving the accuracy of determining whether a preset disease event occurs to the target object.
Optionally, before step S101, if the processing device receives a pickup detection instruction sent by the contact device, the processing device controls a pickup corresponding to the pickup detection instruction to start.
The pickup detection instruction is used to instruct pickups to start; these may be all pickups in the monitoring system or designated pickups, which is not limited herein. For example, after leaving home, the user corresponding to the contact device may send a pickup detection instruction to the processing device, so as to start the corresponding pickups and remotely monitor the elderly or children at home.
Further, if the processing device receives a data uploading instruction sent by the contact device, the data of the pickup corresponding to the data uploading instruction is sent to the contact device.
The data uploading instruction is used to instruct uploading of pickup data; the pickups concerned may be all pickups in the monitoring system or designated ones, such as an image pickup, which is not limited herein. For example, after leaving home, the user corresponding to the contact device may send a data uploading instruction to the processing device, which then sends the data of the corresponding pickups to the contact device; the user can thus view the monitoring data as needed, implementing remote monitoring of the elderly or children at home. Optionally, when the pickup corresponding to the data uploading instruction is an image pickup, the contact device can view the monitoring images as needed.
S102: the processing equipment extracts the target action characteristic and the target pain expression characteristic of the target object in the collected data.
In the embodiment of the present application, the target object may be a person whose data is acquired, for example, a person corresponding to a face image acquired by an image pickup device, a person corresponding to a voiceprint feature in voice data acquired by a voice pickup device, a person wearing a heart rate pickup device to acquire heart rate data, and the like.
The target object is not limited in the present application. It may be a designated monitoring object, for example an elderly person, a child or another family member, or it may be any other object whose data are acquired, for example an acquaintance such as a relative, a neighbor or a community member who is not a designated monitoring object, or an unknown stranger. It can be understood that, when monitoring a designated monitoring object, feature extraction can focus on that object's information in the collected data, which improves the accuracy of the obtained target action features and target pain expression features; monitoring other captured, non-designated objects extends the disease-monitoring coverage of the monitored area. It should be noted that, when the target object is a non-designated monitoring object, the collected data should not touch on the user's privacy.
In the embodiments of the application, the target action features may be used to determine a limb action of the target object, such as jumping, curling up, raising a hand, raising a foot, twitching, falling down, clutching the abdomen, touching the forehead, and the like. The target action features may describe the limb action itself, or the amplitude or position of the limbs within the action, etc.
The target pain expression features may be used to determine a pain expression of the target object. A pain expression is a micro-expression, which is part of an involuntary psychological stress response lasting only one twenty-fifth to one fifth of a second. It is a very fleeting expression that arises from human instinct and is not consciously controlled, so it can reveal the true emotion a person is hiding. It can be understood that different diseases correspond to different micro-expressions; for example, the pain expression during a twitch differs from the pain expression during pregnancy.
When a disease strikes, a person may produce specific actions, such as falling down, cramping, or stroking the painful part. In the embodiments of the application, authorized images corresponding to different preset disease events can be obtained in advance, and feature extraction on these images yields the actions or action features corresponding to each preset disease event. After receiving the video data collected by the image pickup, the limb shape of the target object can be acquired from the images in the video data, so that whether a preset disease event occurs to the target object can be inferred from the action or action features corresponding to that limb shape.
In addition, a person's expression may change when ill because of physical discomfort. Images corresponding to different preset disease events can likewise be obtained in advance, and feature extraction on these images yields the pain expressions or pain expression features corresponding to each preset disease event. After receiving the video data collected by the image pickup, the face image of the target object can be acquired from the images in the video data, so that whether a preset disease event occurs to the target object can be inferred from the pain expression features corresponding to the face image.
If the voice data of the target object are suddenly interrupted or the sound changes irregularly, the target object may be exhibiting abnormal behavior, so whether a disease event occurs can be determined based on the action features corresponding to that abnormal behavior. Voice data may also carry disease characteristics: for example, speech disorder may occur during the onset of a brain disease such as stroke, hoarseness may occur with a respiratory disease such as a cold, groaning may occur when in pain, and the voice and emotional characteristics of a weakened body differ from those of a healthy one.
The motion data can be used to determine the body state and the limb actions (e.g., sitting posture, standing posture, etc.) of the target object, so that whether the target object has a preset disease event can be determined based on the parameters corresponding to the body state and/or the action characteristics corresponding to the limb actions.
Brain wave data can reflect the mental state of the target object. When a disease event occurs, the target object may become nervous, or the intentions and behaviors reflected in the brain wave data may differ from those in a healthy state, so the features presented by the brain wave data differ from the healthy baseline. Correlation features between brain wave data and disease events can therefore be obtained in advance and used to determine whether a preset disease event occurs.
Blood glucose data, blood oxygen saturation data, heart rate data, temperature data, etc. measure the health state, so whether the target object has abnormal health data can be determined from these data, and whether a preset disease event occurs can in turn be determined from the abnormal health data.
Based on this, in the embodiment of the present application, the action features and/or the painful expression features of different collected data corresponding to the preset disease event may be stored in advance, so that after the collected data collected by various pickups are received, the action features and/or the painful expression features of the target object may be acquired based on the respective collected data. It should be noted that the collected data stored above is data authorized by the user, so that invasion of privacy of the user can be avoided.
The collected data and the action features and/or pain expression features corresponding to the preset disease events can be stored in blocks created on a blockchain network. The blockchain is a novel application mode of computer technologies such as distributed data storage, point-to-point transmission, consensus mechanisms and encryption algorithms. A blockchain is essentially a decentralized database: a series of data blocks linked by cryptographic methods, where each block contains a batch of transaction records used to verify the validity (tamper resistance) of its information and to generate the next block. A blockchain may include an underlying platform layer, a platform product services layer and an application services layer. Storing data in this distributed way guarantees data security while enabling information sharing among different platforms.
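As an illustration of the block linkage just described, the following toy sketch chains feature records with SHA-256 hashes. It is a minimal illustration of the principle, not a real blockchain platform, and all record contents are made up.

```python
import hashlib
import json
import time

def make_block(records, prev_hash):
    """Bundle a batch of records with the previous block's hash."""
    header = {"time": time.time(), "prev_hash": prev_hash, "records": records}
    digest = hashlib.sha256(json.dumps(header, sort_keys=True).encode()).hexdigest()
    return {"header": header, "hash": digest}

# Each block stores the previous block's hash, so altering any stored feature
# record would invalidate every later block in the chain.
genesis = make_block([{"event": "heart_disease", "action": "fall"}], "0" * 64)
block_1 = make_block([{"event": "fever", "expression": "grimace"}], genesis["hash"])
```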
Before extracting the target motion characteristic and the target pain expression characteristic of the target object in the collected data, the collected data may be preprocessed, for example, filtered, grayscale converted, and the like, which is not limited herein. The method for extracting the target action features and the target pain expression features is not limited, and feature extraction can be performed through a first recognition model for recognizing the action features and through a second recognition model for recognizing the pain expression features. The first recognition model and the second recognition model may be separate models, or may be different branches of the same disease recognition model, and the like, and are not limited herein. The training sets of the first recognition model and the second recognition model can include images of patients with different etiologies, different age groups, different sexes, different regions, different characters and the like, so that the accuracy of feature extraction can be improved, and the accuracy of disease monitoring can be improved conveniently. It should be noted that the training sets of the first recognition model and the second recognition model are both authorized by the user. The training method of the first recognition model and the second recognition model is not limited, and weight adjustment can be performed through back propagation based on an error function obtained through forward propagation. The first recognition model can further recognize the disease event based on the action characteristics, and the second recognition model can further recognize the disease event based on the painful expression characteristics.
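Since the text allows the first and second recognition models to be branches of a single disease recognition model, a minimal PyTorch sketch of that shared-backbone design is shown below. The layer sizes, names and input shape are illustrative assumptions; the patent does not specify a network topology.

```python
import torch
import torch.nn as nn

class DiseaseRecognitionModel(nn.Module):
    """Shared backbone with an action branch and a pain-expression branch."""
    def __init__(self, feat_dim=128):
        super().__init__()
        self.backbone = nn.Sequential(  # shared image encoder
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.action_head = nn.Linear(32, feat_dim)      # "first recognition model"
        self.expression_head = nn.Linear(32, feat_dim)  # "second recognition model"

    def forward(self, frames):
        shared = self.backbone(frames)
        return self.action_head(shared), self.expression_head(shared)

model = DiseaseRecognitionModel()
action_feat, pain_feat = model(torch.randn(1, 3, 224, 224))
```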
In one possible example, the number of items of collected data is greater than or equal to 2, and step S102 includes the following steps A1 to A4:
a1: and carrying out target identification on each acquired data to obtain target data of the target object.
The method of target identification is not limited. When the collected data are video or image data, target identification may use face recognition to obtain the image frames containing the target object, which serve as the target data. When the collected data are voice data, target identification may use voiceprint recognition to obtain the sound clips of the target object, which serve as the target data. When the collected data are brain wave data, target identification may use biometric recognition to obtain the brain wave waveform of the target object, which serves as the target data, and so on.
A2: and identifying the action and the painful expression of the target object in the target data to obtain at least two action characteristics and at least two painful expression characteristics of the target object, wherein each action characteristic corresponds to one occurrence time, and each painful expression characteristic corresponds to one occurrence time.
It can be understood that different target data may contain at least one action feature and pain expression feature, each corresponding to an occurrence time. When the target data are long-duration data such as video, audio or brain wave data, the target data are split into a plurality of segments and the occurrence time of each segment is marked, so that the occurrence time of an action feature or pain expression feature can be determined from the occurrence time of the segment in which it appears. When the target data are instantaneous data such as image, temperature or blood oxygen data, the occurrence time of the action feature or pain expression feature is the acquisition time of the corresponding collected data.
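A minimal sketch of attaching occurrence times, assuming long-duration data are cut into fixed-length segments; the segment length and function name are illustrative assumptions.

```python
def split_with_timestamps(start_time, duration, segment_len):
    """Split a continuous recording into (segment_start, segment_end) spans."""
    spans, t = [], start_time
    while t < start_time + duration:
        spans.append((t, min(t + segment_len, start_time + duration)))
        t += segment_len
    return spans

# A 10-second recording starting at t=100.0, split into 2-second segments;
# a feature found in a segment inherits that segment's time span.
print(split_with_timestamps(100.0, 10.0, 2.0))
# [(100.0, 102.0), (102.0, 104.0), (104.0, 106.0), (106.0, 108.0), (108.0, 110.0)]
```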
The method of identifying the action features and pain expression features is not limited in the present application. Taking video data as the target data for illustration, in one possible example, step A2 includes the following steps A21 to A24:
a21: the processing equipment carries out face recognition on the video data to obtain at least two image frames corresponding to the target object.
The image frame is an image containing a target object, that is, each frame image in the video data is subjected to face recognition, and the image containing the target object is taken as the image frame.
A22: the processing device determines a face angle and a limb shape of the target object in the image frame.
The face angle can be described as the angle by which the face deviates from the front of the body: a straight line corresponding to the body and a center line corresponding to the face are determined in the image frame, and the face angle is the included angle between them. The straight line corresponding to the body may be determined from the rectangle formed by the left and right arms and the shoulders, or from the gap between the two legs, etc., which is not limited herein. The center line corresponding to the face may be determined from the triangle formed by the bridge of the nose and the brow bones, or from the line through the bridge of the nose and the peak of the lips, etc., which is likewise not limited herein.
The limb shape may be described as the hand, foot, and body motion and amplitude of the target object, which may be determined from images corresponding to the limb portions in the image frames.
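As a concrete reading of the face-angle description above, the sketch below computes the included angle between a body line and a face center line from 2-D direction vectors. The keypoint choices and the numeric values are illustrative assumptions.

```python
import math

def included_angle(v1, v2):
    """Included angle in degrees between two 2-D direction vectors."""
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    norm = math.hypot(*v1) * math.hypot(*v2)
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))

body_line = (0.0, 1.0)           # e.g. derived from the shoulder rectangle
face_center_line = (0.26, 0.97)  # e.g. nose bridge to lip peak, head tilted
print(round(included_angle(body_line, face_center_line)))  # ~15 degrees
```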
A23: and the processing equipment carries out pain expression recognition on each image frame based on the face angle to obtain at least two pain expression characteristics.
A24: the processing equipment performs motion recognition on each image frame based on the limb shape to obtain at least two motion characteristics.
It can be understood that, in steps A21 to A24, face recognition is first performed on the video data to obtain at least two image frames containing the target object. The face angle and limb shape of the target object in each frame are then determined, pain expression recognition is performed based on the face angle, and action recognition is performed based on the limb shape, which improves the accuracy of both recognitions.
A3: the processing device selects a target motion feature of the target object from the at least two motion features based on the continuity of the occurrence time of each motion feature.
The method of selecting the target action features is not limited. All action features may be selected and then merged according to the continuity of their occurrence times to obtain the target action features. Further, the action features may be merged per body part (e.g., hand, foot, torso) to obtain the target action feature corresponding to each part.
The target action characteristic can also be selected from partial action characteristics, for example, the target action characteristic is selected based on an abnormal probability of the action characteristic, the abnormal probability is used for describing a probability of an abnormal action, and the abnormal action refers to an action caused by an instinct reaction of a target object in an abnormal state. The abnormal probability may be determined based on the continuity of the occurrence time of the motion feature, the continuity of the motion occurrence, and the like, and is not limited herein. It can be understood that when the abnormal probability is greater than a specified threshold, the probability that the target object has the preset disease event is greater, and therefore, the action feature corresponding to the abnormal probability is selected as the target action feature, and the accuracy of determining the target action feature can be further improved.
Or the acquisition distance of the target data can be determined based on the occurrence time continuity of the action features, and then the confidence of the action features is determined based on the type, the acquisition distance and the like of the target data, so that the action features with the confidence greater than a specified threshold value are selected as the target action features. It can be understood that the action can reflect the position of the target object, so that the acquisition distance of the target data corresponding to the action feature is determined based on the occurrence time and the continuity of the occurrence time of the action feature, and the accuracy rate of determining the acquisition distance can be improved. The confidence degrees of the motion characteristics which can be identified by different types of collected data are different, and the confidence degrees of different collection distances are different. Therefore, the confidence of each action characteristic is determined based on the type and the acquisition distance of the acquired data, so that the target action characteristic of the target object is selected and obtained according to the confidence, and the accuracy of determining the target action characteristic can be further improved.
In step A3, the target action features of the target object are selected based on the continuity of the occurrence time of each action feature. That is to say, selection relies on the temporal characteristics of the action, which further improves the accuracy of determining the target action features.
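A minimal sketch of continuity-based selection: action features whose occurrence times form a closely spaced run are kept and merged, while isolated one-off detections are dropped. The gap and run-length thresholds and the names are illustrative assumptions.

```python
def select_by_continuity(features, max_gap=1.0, min_run=2):
    """features: list of (timestamp, feature_name), sorted by timestamp."""
    runs, current = [], [features[0]]
    for prev, item in zip(features, features[1:]):
        if item[0] - prev[0] <= max_gap:
            current.append(item)
        else:
            runs.append(current)
            current = [item]
    runs.append(current)
    # Keep only runs long enough to look like a sustained abnormal action.
    return [run for run in runs if len(run) >= min_run]

observed = [(0.0, "fall"), (0.4, "fall"), (0.9, "twitch"), (7.5, "raise_hand")]
print(select_by_continuity(observed))
# [[(0.0, 'fall'), (0.4, 'fall'), (0.9, 'twitch')]]  ('raise_hand' is isolated)
```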
A4: the processing device selects a target pain expression feature of the target object from the at least two pain expression features based on the continuity of the occurrence time of each pain expression feature.
The method for selecting the target pain expression characteristics is not limited, all pain expression characteristics can be selected, and the pain expression characteristics are combined according to the continuity of occurrence time to obtain the target pain expression characteristics. Furthermore, the target painful expressive features corresponding to the parts (such as eyes, face, mouth, etc.) corresponding to the painful expressive features can be determined by combining the painful expressive features.
The present application may also select some of the pain expression features as the target pain expression features, for example based on the abnormal probability of each pain expression feature; alternatively, the acquisition distance of the target data may be determined based on the continuity of the occurrence times of the pain expression features, and the confidence of each feature determined from the type and acquisition angle of the target data, etc. Refer to the description of the target action features above; the details are not repeated here.
S103: the processing device determines whether a preset disease event occurs in the target object based on the target motion characteristic and the target pain expression characteristic.
In the embodiments of the present application, if a preset disease event occurs, step S104 is executed; otherwise, the processing device continues to receive collected data from the pickups. The way of determining whether the target object has a preset disease event is not limited. In one possible example, step S103 includes the following steps B1 to B4:
b1: the processing device determines a first probability of occurrence of a preset disease event for the target object based on the target action characteristics.
Wherein the first probability is used for describing the probability that the action of the target object is possible to generate the preset disease event. The determination may be performed based on the first recognition model, or the matching may be performed based on the target motion characteristic and a preset motion characteristic corresponding to a preset disease event. In one possible example, step B1 includes the following steps B11 and B12, wherein:
b11: the processing device determines preset action characteristics corresponding to preset disease events.
The preset action characteristic is an action which may be generated when a preset disease event occurs, for example, when a heart attack occurs, the preset action characteristic may be a characteristic corresponding to an action such as falling, coma and the like.
B12: the processing equipment obtains a first probability of a preset disease event of the target object based on the matching degree between the target action characteristic and the preset action characteristic.
The method of obtaining the matching degree between the target action feature and the preset action feature is not limited: the similarity and the association degree between them may first be obtained, and the matching degree then computed from the similarity and the association degree.
The method for obtaining the first probability according to the matching degree between the target action characteristic and the preset action characteristic is not limited, the matching degree can be used as the first probability, and the determination can be performed according to the preset mapping relation between the matching degree and the first probability.
It can be understood that, in steps B11 and B12, the preset action features corresponding to the preset disease events are determined, and then the first probability that the target object has the preset disease events is obtained based on the matching degree between the target action features and the preset action features, so that the accuracy of determining the first probability can be improved.
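A minimal sketch of steps B11 and B12, assuming the features are numeric vectors: the matching degree mixes cosine similarity with an association score, and the first probability is taken directly from the matching degree. The weighting and the mapping are illustrative assumptions; the patent leaves both open.

```python
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def first_probability(target_feat, preset_feat, association, w_sim=0.7):
    """Matching degree from similarity and association degree (step B12)."""
    match_degree = (w_sim * cosine_similarity(target_feat, preset_feat)
                    + (1 - w_sim) * association)
    # Simplest mapping: use the matching degree itself as the probability.
    return max(0.0, min(1.0, match_degree))

# Target action close to the preset "fall" action, with a strong association:
print(first_probability((0.9, 0.1), (1.0, 0.0), association=0.8))  # ~0.94
```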
B2: the processing device determines a second probability of the target object occurring the preset disease event based on the target painful expressive feature.
Wherein the second probability is used for describing the probability that the preset disease event may occur in the pain expression of the target object. The determination may be performed based on the second recognition model, or the matching may be performed based on the target pain expression feature and the preset pain expression feature corresponding to the preset disease event, and so on, refer to the description of step B1, and will not be described herein again.
The preset action features and preset pain expression features are not limited; they can be set based on the age, sex, character, region, etiology, etc. of the target object, that is, they can carry the characteristics of the target object, which improves the accuracy of determining the first probability and the second probability.
B3: the processing device determines a target probability of the target object occurring a preset disease event based on the first probability and the second probability.
The target probability may be a weighted value of the first probability and the second probability, or may be a maximum value or a minimum value between the first probability and the second probability, and the like, which is not limited herein. The preset weight values of the first probability and the second probability may be determined based on the correlation value between the preset disease event and the action or the painful expression, which is not limited in the present application.
B4: and when the target probability is greater than a preset threshold value, the processing equipment determines that a preset disease event occurs in the target object.
The preset threshold is not limited in the present application. In one possible example, a risk level of a preset disease event is determined; the preset threshold is determined based on the risk level.
The risk level is used to describe a level at which a preset disease event threatens the life of the target object, and may be determined according to the name of the disease, the part of the disease, the age, the body state, and other conditions of the target object, which are not limited herein.
It can be understood that the risk level of the preset disease event is determined first, and then the preset threshold for judging whether the preset disease event occurs is determined based on the risk level, so that the flexibility and accuracy of setting the preset threshold can be improved, and the processing efficiency of the preset disease event can be improved conveniently.
It can be understood that, if the preset disease event were determined from the target action features or the target pain expression features alone, misidentifications could occur, for example for the elderly and children. In steps B1 to B4, the first and second probabilities of the target object having a preset disease event are determined from the target action features and the target pain expression features respectively, the target probability is determined from the first and second probabilities, and whether the preset disease event occurs is then decided by comparing the target probability with the preset threshold, which improves the accuracy of determining the occurrence of the event.
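A minimal sketch of steps B3 and B4 together with the risk-level threshold of the preceding paragraphs: the target probability is a weighted sum of the two probabilities, and the threshold is looked up from an assumed risk-level table. All weights, levels and threshold values are illustrative assumptions.

```python
RISK_THRESHOLDS = {"high": 0.5, "medium": 0.7, "low": 0.85}  # assumed values

def disease_event_occurs(p_action, p_expression, risk_level, w_action=0.5):
    """Steps B3 and B4: fuse the two probabilities and compare to a threshold."""
    target_prob = w_action * p_action + (1 - w_action) * p_expression
    return target_prob > RISK_THRESHOLDS[risk_level]

# A higher-risk event uses a lower threshold, so weaker evidence triggers it:
print(disease_event_occurs(0.6, 0.5, "high"))  # True  (0.55 > 0.50)
print(disease_event_occurs(0.6, 0.5, "low"))   # False (0.55 < 0.85)
```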
The preset disease event is not limited in the present application, and the preset disease event may be determined based on the basic information such as the medical history or the type of the target object, or may be understood as monitoring the disease event according to the type of the disease that may occur in the target object. The preset disease event can also be determined based on a preset disease corresponding to the target action characteristic (or the target pain expression characteristic), for example, if the target action characteristic is a fall, the disease type corresponding to the action characteristic of the fall is determined.
In one possible example, after the processing device extracts the target action feature and the target pain expression feature of the target object in the collected data, the processing device acquires basic information of the target object; the processing equipment determines a disease type corresponding to the target action characteristic; the processing device determines a preset disease event based on the basic information and the disease type of the target object.
The basic information includes information such as age, sex, home address, contact information, work unit, hobbies, eating habits and the like of the target object. The disease type is used for describing classification of diseases corresponding to preset disease events, and may be classified according to names of the diseases, parts of the diseases, risk levels of the diseases, and the like.
It can be understood that the basic information can determine the living habits of the target object or the health states corresponding to the living habits, and the preset disease events needing to be monitored are determined based on the basic information of the target object and the disease types corresponding to the target action characteristics, so that the accuracy of disease monitoring can be improved conveniently.
Furthermore, disease monitoring can proceed in order of the probability of each preset disease event. For example, when the target object is a heart disease patient, whether a heart disease event occurs is determined first, and then whether an event such as a cold occurs. When the target object is a pregnant woman, whether she is about to go into labor or a miscarriage event occurs is determined first, and then whether an event such as a cold occurs.
Optionally, after determining the preset disease events based on the basic information and the disease type of the target object, the processing device determines the identification priority of each preset disease event; the processing device determines a target probability of the preset disease event of the target object based on the identification priority of each preset disease event and the target pain expression characteristics.
The identification priority describes the order in which the preset disease events are checked. It can be understood that checking the preset disease events in the priority order determined from the basic information of the target object and the disease type corresponding to the target action features, using the target pain expression features, reduces identification time and improves identification efficiency.
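A minimal sketch of priority-ordered identification: candidate preset disease events are checked in priority order and the search stops at the first confirmed event, which is where the time saving comes from. The priority values and the check function are illustrative assumptions.

```python
def identify_in_priority_order(candidates, check):
    """candidates: (priority, event) pairs; lower priority value = checked first."""
    for _, event in sorted(candidates):
        if check(event):  # e.g. a probability test on the pain expression features
            return event
    return None

# For a heart patient, heart disease is checked before a cold:
candidates = [(2, "cold"), (1, "heart_disease")]
print(identify_in_priority_order(candidates, lambda e: e == "heart_disease"))
```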
S104: and the processing equipment sends prompt information corresponding to the preset disease event to the contact equipment.
The prompt information is not limited, and the prompt information may include text information corresponding to a preset disease event, and may further include an image or a video corresponding to the preset disease event.
In the method as shown in fig. 1, a processing device receives the collected data from at least one pickup, extracts the target motion characteristic and the target pain expression characteristic of the target object in the collected data, and determines whether the target object has a preset disease event based on the target motion characteristic and the target pain expression characteristic. Therefore, the preset disease event is identified based on the target action characteristic and the target pain expression characteristic carried by the collected data collected by the pickup device, and the accuracy of identifying the disease event can be improved. And the information is reported to the contact equipment after the occurrence of the preset disease event is identified, so that a user corresponding to the contact equipment can take measures to deal with the occurrence of the event in time, and the processing efficiency of the event is improved conveniently.
Optionally, when the preset disease event is an emergency call event, if no response from the contact device has been received by the end of a preset time period after the processing device sends the prompt information, the processing device sends a rescue request to a target hospital.
An emergency call event covers falls and other events requiring emergency hospitalization, and may be determined based on the risk level of the preset disease event. The preset time period may be set based on the risk coefficient of the preset disease event; for example, the preset time period for a life-threatening event is 10 seconds, while for an event that is not life-threatening but still requires prompt treatment it is 5 minutes. The target hospital may be a hospital close to the current location of the target object, a hospital the target object frequently visits, and the like, which is not limited herein.
It can be understood that, when the preset disease event is an emergency call event and no response from the contact device has arrived by the end of the preset time period after the prompt information is sent, sending a rescue request to the target hospital improves the timeliness of rescue.
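A minimal sketch of the escalation flow: send the prompt, poll for a response until the preset period expires (10 seconds, per the text's life-threatening example), then request rescue. The callback names and polling scheme are illustrative assumptions.

```python
import time

def prompt_with_escalation(send_prompt, response_received, send_rescue_request,
                           timeout_s=10.0, poll_s=0.5):
    """Send the prompt, then escalate to the hospital if no reply arrives in time."""
    send_prompt()                      # notify the contact device
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if response_received():        # contact device answered in time
            return "handled_by_contact"
        time.sleep(poll_s)
    send_rescue_request()              # no answer: escalate to the target hospital
    return "escalated_to_hospital"
```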
The method of the embodiments of the present application is set forth above in detail and the apparatus of the embodiments of the present application is provided below.
Referring to fig. 2, fig. 2 is a schematic structural diagram of the event prompting device provided by the present application. As shown in fig. 2, the event prompting device 200 includes:
a communication unit 202 for receiving the collected data from the at least one pickup;
a storage unit 203 for storing preset disease events;
the processing unit 201 is configured to extract a target action feature and a target pain expression feature of the target object in the collected data; determining whether the preset disease event occurs to the target object based on the target action feature and the target pain expression feature;
the communication unit 202 is further configured to send a prompt message corresponding to the preset disease event to a contact device if the preset disease event occurs.
In a possible example, the number of pieces of collected data is greater than or equal to 2, and the processing unit 201 is specifically configured to: perform target identification on each piece of collected data to obtain target data of the target object; identify the action and the pain expression of the target object in the target data to obtain at least two action features and at least two pain expression features of the target object, where each action feature corresponds to one occurrence time and each pain expression feature corresponds to one occurrence time; select a target action feature of the target object from the at least two action features based on the continuity of the occurrence time of each action feature; and select a target pain expression feature of the target object from the at least two pain expression features based on the continuity of the occurrence time of each pain expression feature.
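One plausible reading of selection "based on the continuity of the occurrence time", sketched below in Python, is that the candidate whose timestamps form the longest run without large gaps becomes the target feature; the 1-second gap tolerance and the data layout are assumptions made for the example:

def longest_continuous_span(timestamps, max_gap=1.0):
    # Length of the longest run of timestamps whose neighbouring gaps do
    # not exceed max_gap (the gap tolerance is an assumed parameter).
    ts = sorted(timestamps)
    best = span = 0.0
    start = ts[0]
    for prev, cur in zip(ts, ts[1:]):
        if cur - prev > max_gap:
            start = cur          # run broken; start a new one
        span = cur - start
        best = max(best, span)
    return best

def select_target_feature(candidates):
    # candidates: {feature name: [occurrence times]}
    return max(candidates, key=lambda f: longest_continuous_span(candidates[f]))

features = {"clutch_chest": [1.0, 1.5, 2.0, 2.4], "wave_hand": [0.5, 5.0]}
print(select_target_feature(features))   # clutch_chest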
In a possible example, the target data includes video data, and the processing unit 201 is specifically configured to: perform face recognition on the video data to obtain at least two image frames corresponding to the target object; determine a face angle and a limb shape of the target object in each image frame; perform pain expression recognition on each image frame based on the face angle to obtain the at least two pain expression features; and perform action recognition on each image frame based on the limb shape to obtain the at least two action features.
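The per-frame analysis could be organized as in the following Python sketch. The estimator and recognizer callables are hypothetical stand-ins; the patent does not prescribe particular models:

def analyze_frames(frames, face_angle_of, limb_shape_of, recognize_pain, recognize_motion):
    # frames: list of (timestamp, image) pairs containing the target object.
    pain_features, motion_features = [], []
    for t, frame in frames:
        angle = face_angle_of(frame)   # face angle steers expression recognition
        shape = limb_shape_of(frame)   # limb shape steers action recognition
        pain_features.append((t, recognize_pain(frame, angle)))
        motion_features.append((t, recognize_motion(frame, shape)))
    return motion_features, pain_features

frames = [(0.0, "frame0"), (0.5, "frame1")]
print(analyze_frames(frames,
                     lambda f: 15.0,                 # stub face-angle estimator
                     lambda f: "crouched",           # stub limb-shape estimator
                     lambda f, a: "grimace",         # stub pain-expression classifier
                     lambda f, s: "clutch_chest"))   # stub action classifier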
In one possible example, the processing unit 201 is specifically configured to: determine a first probability that the preset disease event occurs to the target object based on the target action feature; determine a second probability that the preset disease event occurs to the target object based on the target pain expression feature; determine a target probability that the preset disease event occurs to the target object based on the first probability and the second probability; and determine that the preset disease event occurs to the target object when the target probability is greater than a preset threshold.
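The patent leaves the form of the combination open; one common choice, shown below purely as an illustration, is a weighted average of the two probabilities compared against the preset threshold (the weights and the threshold value are assumptions):

def target_probability(p_action, p_expression, w_action=0.6, w_expression=0.4):
    # Weighted average of the first and second probabilities; the weights
    # are illustrative assumptions.
    return w_action * p_action + w_expression * p_expression

p = target_probability(0.8, 0.7)   # first probability 0.8, second 0.7
print(p > 0.65)                    # about 0.76 > preset threshold, so the event occurs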
In one possible example, the processing unit 201 is specifically configured to: determine a preset action feature corresponding to the preset disease event; and obtain the first probability that the preset disease event occurs to the target object based on the degree of matching between the target action feature and the preset action feature.
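The degree of matching is likewise not fixed by the patent; the sketch below represents features as vectors and uses cosine similarity, both of which are assumptions made for the example:

import math

def cosine_match(a, b):
    # Degree of matching between two feature vectors, in [-1, 1].
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

target_action = [0.9, 0.1, 0.4]   # extracted target action feature
preset_action = [1.0, 0.0, 0.5]   # preset action feature of the disease event
first_probability = max(0.0, cosine_match(target_action, preset_action))
print(round(first_probability, 3))   # about 0.994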
In one possible example, the processing unit 201 is further configured to: determine a hazard level of the preset disease event; and determine the preset threshold based on the hazard level.
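A plausible mapping, assumed here for illustration, lowers the threshold as the hazard level rises so that detection errs toward alerting for dangerous events:

THRESHOLD_BY_HAZARD = {"high": 0.5, "medium": 0.65, "low": 0.8}

def preset_threshold(hazard_level):
    # A more hazardous event uses a lower threshold, making detection
    # more sensitive; the levels and values are assumptions for the example.
    return THRESHOLD_BY_HAZARD[hazard_level]

print(preset_threshold("high"))   # 0.5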
In one possible example, the processing unit 201 is further configured to: obtain basic information of the target object; determine a disease type corresponding to the target action feature; and determine the preset disease event based on the basic information of the target object and the disease type.
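The combination of basic information and disease type could be as simple as a rule table; the rule below is invented for illustration and is not drawn from the patent:

def resolve_event(basic_info, disease_type):
    # Invented example rule: chest pain in a target object with a history
    # of heart disease is treated as the higher-risk preset event.
    if disease_type == "chest_pain" and "heart_disease" in basic_info["history"]:
        return "suspected_heart_attack"   # an emergency call event
    return disease_type

print(resolve_event({"age": 72, "history": ["heart_disease"]}, "chest_pain"))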
For the detailed processes executed by each unit in the event prompting device 200, reference may be made to the execution steps in the foregoing method embodiments, which are not repeated here.
Referring to fig. 3, fig. 3 is a schematic structural diagram of a computer device according to an embodiment of the present application. The computer device comprises the aforementioned pickup, processing device, and contact device. As shown in fig. 3, the computer device 300 includes a processor 310, a memory 320, a communication interface 330, and one or more programs 340. The related functions implemented by the communication unit 202 shown in fig. 2 may be implemented by the communication interface 330, the related functions implemented by the storage unit 203 shown in fig. 2 may be implemented by the memory 320, and the related functions implemented by the processing unit 201 shown in fig. 2 may be implemented by the processor 310.
The one or more programs 340 are stored in the memory 320 and configured to be executed by the processor 310, the programs 340 including instructions for:
receiving collected data from at least one pickup;
extracting a target action feature and a target pain expression feature of a target object from the collected data;
determining whether a preset disease event occurs to the target object based on the target action feature and the target pain expression feature;
and if the preset disease event occurs, sending prompt information corresponding to the preset disease event to a contact device.
In one possible example, the number of pieces of collected data is greater than or equal to 2, and in the aspect of extracting the target action feature and the target pain expression feature of the target object from the collected data, the program 340 is specifically configured to execute the following steps:
performing target identification on each piece of collected data to obtain target data of the target object;
identifying the action and the painful expression of the target object in the target data to obtain at least two action characteristics and at least two painful expression characteristics of the target object, wherein each action characteristic corresponds to one occurrence time, and each painful expression characteristic corresponds to one occurrence time;
selecting a target action characteristic of the target object from at least two action characteristics based on the continuity of the occurrence time of each action characteristic;
and selecting a target pain expression characteristic of the target object from at least two pain expression characteristics based on the continuity of the occurrence time of each pain expression characteristic.
In one possible example, where the target data includes video data, the program 340 is specifically configured to perform the following steps in the aspect of identifying the motion and the painful expression of the target object in the target data to obtain at least two motion features and at least two painful expression features of the target object:
performing face recognition on the video data to obtain at least two image frames corresponding to the target object;
determining a face angle and a limb shape of the target object in the image frame;
carrying out pain expression recognition on each image frame based on the face angle to obtain at least two pain expression characteristics;
and performing motion recognition on each image frame based on the limb shape to obtain at least two motion characteristics.
In one possible example, in the aspect of determining whether the preset disease event occurs to the target object based on the target action feature and the target pain expression feature, the program 340 is specifically configured to execute the following steps:
determining a first probability of occurrence of a preset disease event for the target object based on the target action characteristics;
determining a second probability of the target object occurring the preset disease event based on the target painful expression features;
determining a target probability of the target subject occurring the preset disease event based on the first probability and the second probability;
and when the target probability is greater than a preset threshold value, determining that a preset disease event occurs in the target object.
In one possible example, in terms of the determining a first probability of occurrence of a preset disease event in the target subject based on the target action feature, the program 340 is specifically configured to execute the following steps:
determining a preset action characteristic corresponding to a preset disease event;
and obtaining the first probability that the preset disease event occurs to the target object based on the degree of matching between the target action feature and the preset action feature.
In one possible example, the program 340 further includes instructions for performing the following steps:
determining a hazard level of the preset disease event;
determining the preset threshold based on the hazard level.
In one possible example, after the extracting the target action feature and the target pain expression feature of the target object from the collected data, the program 340 is further configured to execute the following steps:
acquiring basic information of the target object;
determining a disease type corresponding to the target action characteristic;
determining the preset disease event based on the basic information of the target object and the disease type.
Embodiments of the present application also provide a computer storage medium, where the computer storage medium stores a computer program that causes a computer to execute some or all of the steps of any method described in the method embodiments; the computer includes a pickup, a processing device, and a contact device.
Embodiments of the present application also provide a computer program product, which includes a non-transitory computer-readable storage medium storing a computer program operable to cause a computer to execute some or all of the steps of any method recited in the method embodiments. The computer program product may be a software installation package, and the computer includes a pickup, a processing device, and a contact device.
It should be noted that, for simplicity of description, the above method embodiments are described as a series of action combinations; however, those skilled in the art should know that the present application is not limited by the described order of actions, as some steps may be performed in other orders or concurrently. Further, those skilled in the art should also know that the embodiments described in this specification are all preferred embodiments, and the actions and modules involved are not necessarily required by the present application.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative; the division into units is merely a logical functional division, and other divisions are possible in actual implementation: at least one unit or component may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, devices, or units, and may be electrical or in other forms.
The units described as separate parts may or may not be physically separate, and parts shown as units may or may not be physical units; they may be located in one place or distributed over at least one network unit. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the embodiments of the present application may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software program module.
If the integrated unit is implemented in the form of a software program module and sold or used as an independent product, it may be stored in a computer-readable memory. Based on such an understanding, the technical solution of the present application may be embodied in the form of a software product stored in a memory, which includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods of the embodiments of the present application. The aforementioned memory includes various media capable of storing program code, such as a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by a program instructing associated hardware; the program may be stored in a computer-readable memory, which may include a flash disk, a ROM, a RAM, a magnetic disk, an optical disk, or the like.
The embodiments of the present application have been described in detail above. Specific examples have been used herein to illustrate the principles and implementations of the present application, and the above description of the embodiments is only intended to help understand the method and core idea of the present application. Meanwhile, a person skilled in the art may, according to the idea of the present application, make changes to the specific implementations and the application scope. In summary, the content of this specification should not be construed as limiting the present application.

Claims (10)

1. An event prompting method is characterized by comprising the following steps:
receiving collected data from at least one pickup;
extracting a target action feature and a target pain expression feature of a target object from the collected data;
determining whether a preset disease event occurs to the target object based on the target action feature and the target pain expression feature;
and if the preset disease event occurs, sending prompt information corresponding to the preset disease event to a contact device.
2. The method according to claim 1, wherein the number of pieces of collected data is greater than or equal to 2, and the extracting the target action feature and the target pain expression feature of the target object from the collected data comprises:
performing target identification on each piece of collected data to obtain target data of the target object;
identifying the action and the pain expression of the target object in the target data to obtain at least two action features and at least two pain expression features of the target object, wherein each action feature corresponds to one occurrence time, and each pain expression feature corresponds to one occurrence time;
selecting the target action feature of the target object from the at least two action features based on the continuity of the occurrence time of each action feature;
and selecting the target pain expression feature of the target object from the at least two pain expression features based on the continuity of the occurrence time of each pain expression feature.
3. The method of claim 2, wherein the target data comprises video data, and wherein the identifying the action and the pain expression of the target object in the target data to obtain at least two action features and at least two pain expression features of the target object comprises:
performing face recognition on the video data to obtain at least two image frames corresponding to the target object;
determining a face angle and a limb shape of the target object in each image frame;
performing pain expression recognition on each image frame based on the face angle to obtain the at least two pain expression features;
and performing action recognition on each image frame based on the limb shape to obtain the at least two action features.
4. The method according to any one of claims 1-3, wherein the determining whether the preset disease event occurs to the target object based on the target action feature and the target pain expression feature comprises:
determining a first probability that the preset disease event occurs to the target object based on the target action feature;
determining a second probability that the preset disease event occurs to the target object based on the target pain expression feature;
determining a target probability that the preset disease event occurs to the target object based on the first probability and the second probability;
and when the target probability is greater than a preset threshold, determining that the preset disease event occurs to the target object.
5. The method of claim 4, wherein the determining a first probability that the preset disease event occurs to the target object based on the target action feature comprises:
determining a preset action feature corresponding to the preset disease event;
and obtaining the first probability that the preset disease event occurs to the target object based on the degree of matching between the target action feature and the preset action feature.
6. The method of claim 4, further comprising:
determining a hazard level of the preset disease event;
determining the preset threshold based on the hazard level.
7. The method according to any one of claims 1-3, wherein after the extracting the target action feature and the target pain expression feature of the target object from the collected data, the method further comprises:
acquiring basic information of the target object;
determining a disease type corresponding to the target action feature;
determining the preset disease event based on the basic information of the target object and the disease type.
8. An event prompting device, comprising:
a communication unit for receiving the collected data from the at least one pickup;
the storage unit is used for storing preset disease events;
the processing unit is used for extracting target action characteristics and target pain expression characteristics of the target object in the acquired data; determining whether the preset disease event occurs to the target object based on the target action feature and the target pain expression feature;
the communication unit is further configured to send prompt information corresponding to the preset disease event to the contact device if the preset disease event occurs.
9. A computer device, comprising a processor, a memory, a communication interface, and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the processor, the one or more programs comprising instructions for performing the steps in the method of any one of claims 1-7.
10. A computer-readable storage medium, wherein the computer-readable storage medium stores a computer program, and the computer program causes a computer to execute the method of any one of claims 1-7.
CN202110446886.2A 2021-04-23 2021-04-23 Event prompting method and device, computer equipment and storage medium Pending CN113096808A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110446886.2A CN113096808A (en) 2021-04-23 2021-04-23 Event prompting method and device, computer equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110446886.2A CN113096808A (en) 2021-04-23 2021-04-23 Event prompting method and device, computer equipment and storage medium

Publications (1)

Publication Number Publication Date
CN113096808A true CN113096808A (en) 2021-07-09

Family

ID=76680041

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110446886.2A Pending CN113096808A (en) 2021-04-23 2021-04-23 Event prompting method and device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113096808A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113781278A (en) * 2021-09-10 2021-12-10 未鲲(上海)科技服务有限公司 Event prompting method, device, equipment and storage medium based on feature recognition
CN113871019A (en) * 2021-12-06 2021-12-31 江西易卫云信息技术有限公司 Disease public opinion monitoring method, system, storage medium and equipment
CN114415528A (en) * 2021-12-08 2022-04-29 珠海格力电器股份有限公司 Intelligent household equipment reminding method and device, computer equipment and storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108234956A (en) * 2018-02-05 2018-06-29 龙马智芯(珠海横琴)科技有限公司 Medical care monitoring method, device and system, equipment
CN109255468A (en) * 2018-08-07 2019-01-22 北京优酷科技有限公司 A kind of method and server of risk prediction
CN109918989A (en) * 2019-01-08 2019-06-21 平安科技(深圳)有限公司 The recognition methods of personage's behavior type, device, medium and equipment in monitored picture
CN110069652A (en) * 2018-08-30 2019-07-30 Oppo广东移动通信有限公司 Reminding method, device, storage medium and wearable device
WO2020024400A1 (en) * 2018-08-02 2020-02-06 平安科技(深圳)有限公司 Class monitoring method and apparatus, computer device, and storage medium
CN111814775A (en) * 2020-09-10 2020-10-23 平安国际智慧城市科技股份有限公司 Target object abnormal behavior identification method, device, terminal and storage medium


Similar Documents

Publication Publication Date Title
US20200337631A1 (en) Systems, environment and methods for identification and analysis of recurring transitory physiological states and events using a portable data collection device
CN113096808A (en) Event prompting method and device, computer equipment and storage medium
CN110024038B (en) System and method for synthetic interaction with users and devices
JP6101684B2 (en) Method and system for assisting patients
US11164596B2 (en) Sensor assisted evaluation of health and rehabilitation
CN102149319B (en) Alzheimer's cognitive enabler
US20220054092A1 (en) Eyewear with health assessment, risk monitoring and recovery assistance
US20150099946A1 (en) Systems, environment and methods for evaluation and management of autism spectrum disorder using a wearable data collection device
CN105007808B (en) Access duration control system and method
EP2012655A1 (en) Interactive patient monitoring system using speech recognition
US11383130B2 (en) Pelvic floor muscle training device and system
CN110598611B (en) Nursing system, patient nursing method based on nursing system and readable storage medium
WO2015145424A1 (en) A system for conducting a remote physical examination
KR20160095464A (en) Contents Recommend Apparatus For Digital Signage Using Facial Emotion Recognition Method And Method Threof
US11635816B2 (en) Information processing apparatus and non-transitory computer readable medium
JP2018195164A (en) Analysis device, analysis program, and analysis method
WO2020049185A1 (en) Systems and methods of pain treatment
CN110610754A (en) Immersive wearable diagnosis and treatment device
JP2021090668A (en) Information processing device and program
EP3376462A1 (en) Behavior detection device, behavior detection method, and behavior detection program
CN113069718B (en) Control method of treadmill, cloud server, system and readable storage medium
KR20220021227A (en) Health care system and service method based on iris analysis
US20180113991A1 (en) Interactive Apparatus and Devices for Personal Symptom Management and Therapeutic Treatment Systems
Bellodi et al. Dialogue support for memory impaired people
EP4328928A1 (en) Method and device for controlling improved cognitive function training app

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40051212

Country of ref document: HK