CN108922085B - Monitoring method, device, monitoring equipment and storage medium


Info

Publication number
CN108922085B
CN108922085B
Authority
CN
China
Prior art keywords
person
monitored
calling
area
sub
Prior art date
Legal status
Active
Application number
CN201810791295.7A
Other languages
Chinese (zh)
Other versions
CN108922085A (en)
Inventor
赵海杰
秦林婵
黄通兵
Current Assignee
Beijing 7Invensun Technology Co Ltd
Original Assignee
Beijing 7Invensun Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing 7Invensun Technology Co Ltd filed Critical Beijing 7Invensun Technology Co Ltd
Priority to CN201810791295.7A priority Critical patent/CN108922085B/en
Publication of CN108922085A publication Critical patent/CN108922085A/en
Priority to PCT/CN2019/085945 priority patent/WO2020015439A1/en
Application granted granted Critical
Publication of CN108922085B publication Critical patent/CN108922085B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G08 SIGNALLING
    • G08B SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B21/00 Alarms responsive to a single specified undesired or abnormal condition and not otherwise provided for
    • G08B21/02 Alarms for ensuring the safety of persons
    • G08B21/04 Alarms for ensuring the safety of persons responsive to non-activity, e.g. of elderly persons
    • G08B21/0407 Alarms for ensuring the safety of persons responsive to non-activity, e.g. of elderly persons based on behaviour analysis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18 Eye characteristics, e.g. of the iris
    • G PHYSICS
    • G08 SIGNALLING
    • G08B SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B3/00 Audible signalling systems; Audible personal calling systems
    • G08B3/10 Audible signalling systems; Audible personal calling systems using electric transmission; using electromagnetic transmission
    • G08B3/1008 Personal calling arrangements or devices, i.e. paging systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Evolutionary Biology (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Ophthalmology & Optometry (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Electromagnetism (AREA)
  • Psychiatry (AREA)
  • Psychology (AREA)
  • Social Psychology (AREA)
  • Gerontology & Geriatric Medicine (AREA)
  • Business, Economics & Management (AREA)
  • Emergency Management (AREA)
  • Eye Examination Apparatus (AREA)
  • Alarm Systems (AREA)

Abstract

The invention discloses a monitoring method, a monitoring device, monitoring equipment and a storage medium. The method comprises the following steps: acquiring a face image of a person in a monitored area; if the face image matches a preset image, determining that the person corresponding to the face image is the person to be monitored; determining the sight line of the person to be monitored according to the eye image in the face image of the person to be monitored; and if the sight line of the person to be monitored gazes, for longer than an early warning duration, at any sub-area corresponding to a calling function within a preset area for triggering calls, determining a calling level according to the gazed sub-area, or according to the gazed sub-area and the physical state of the person to be monitored, and making the corresponding call. With this method, identity authentication can be performed by matching the face image of the person to be monitored against the preset image, which improves calling accuracy; moreover, targeted calls can be made according to the sight line of the person to be monitored, or according to both the sight line and the physical state of the person to be monitored.

Description

Monitoring method, device, monitoring equipment and storage medium
Technical Field
The embodiments of the present invention relate to the field of communications technologies, and in particular, to a monitoring method, an apparatus, a monitoring device, and a storage medium.
Background
At present, as the pace of life accelerates, mental and physical strain increases and the incidence of various diseases keeps rising. When a patient who has lost the ability to speak and move after an operation, or for other reasons, is hospitalized or recuperating, medical staff usually need to watch the patient at all times so that the patient can be rescued promptly when his or her condition deteriorates.
In existing monitoring methods for patients who cannot easily speak and whose mobility is restricted, an alarm is usually raised during monitoring by judging how long the person to be monitored gazes at a preset area, so as to notify medical staff. This approach lacks specificity during monitoring and cannot summon medical staff effectively, which increases the workload of the medical staff.
Disclosure of Invention
The monitoring method, monitoring device, monitoring equipment and storage medium provided by the invention enable effective calling and reduce the workload of medical staff.
In a first aspect, an embodiment of the present invention provides a monitoring method, including:
acquiring a face image of a person in a monitored area;
if the face image is matched with a preset image, determining that a person corresponding to the face image is a person to be monitored;
determining the sight line of the person to be monitored according to the eye image in the face image of the person to be monitored;
and if the sight line of the person to be monitored gazes, for longer than an early warning duration, at any sub-area corresponding to a calling function within a preset area for triggering calls, determining a calling level according to the gazed sub-area, or according to the gazed sub-area and the physical state of the person to be monitored, and making the corresponding call.
In a second aspect, an embodiment of the present invention further provides a monitoring device, including:
the acquisition module is used for acquiring a face image of a person in a monitored area;
the personnel determining module is used for determining that the personnel corresponding to the face image is the personnel to be monitored when the face image is matched with a preset image;
the sight line determining module is used for determining the sight line of the person to be monitored according to the eye image in the face image of the person to be monitored;
and the calling module is used for determining a calling level and making the corresponding call, according to the gazed sub-area or according to the gazed sub-area and the physical state of the person to be monitored, when the sight line of the person to be monitored gazes at any sub-area corresponding to a calling function within a preset area for triggering calls for longer than the early warning duration.
In a third aspect, an embodiment of the present invention further provides a monitoring device, including:
one or more processors;
storage means for storing one or more programs;
the one or more programs are executed by the one or more processors, so that the one or more processors implement the monitoring method provided by the embodiment of the invention.
In a fourth aspect, an embodiment of the present invention further provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the monitoring method provided in the embodiment of the present invention.
The embodiments of the invention provide a monitoring method, a monitoring device, monitoring equipment and a storage medium. With this technical scheme, the face image of a person in the monitored area can be identified before the sight line of the person to be monitored is analyzed, so that calls triggered by persons who are not to be monitored are avoided and calling accuracy is improved. In addition, when the sight line of the person to be monitored gazes at any sub-area corresponding to a calling function within the preset area for triggering calls for longer than the early warning duration, a targeted call can be made according to the gazed sub-area, or according to the gazed sub-area and the physical state of the person to be monitored, which reduces the workload of medical staff, relieves the suffering of the person to be monitored, and effectively improves the experience of both the person to be monitored and the medical staff.
Drawings
Fig. 1 is a schematic flow chart of a monitoring method according to an embodiment of the present invention;
fig. 2a is a schematic flowchart of a monitoring method according to a second embodiment of the present invention;
fig. 2b is a schematic view of an application scenario of the monitoring method according to the second embodiment of the present invention;
fig. 2c is a schematic diagram of a preset image provided in the second embodiment of the present invention;
fig. 3 is a schematic structural diagram of a monitoring device according to a third embodiment of the present invention;
fig. 4 is a schematic structural diagram of a monitoring device according to a fourth embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting of the invention. It should be further noted that, for the convenience of description, only some of the structures related to the present invention are shown in the drawings, not all of the structures.
Before discussing exemplary embodiments in more detail, it should be noted that some exemplary embodiments are described as processes or methods depicted as flowcharts. Although a flowchart may describe the operations (or steps) as a sequential process, many of the operations can be performed in parallel, concurrently or simultaneously. In addition, the order of the operations may be re-arranged. The process may be terminated when its operations are completed, but may have additional steps not included in the figure. The processes may correspond to methods, functions, procedures, subroutines, and the like.
Example one
Fig. 1 is a schematic flow chart of a monitoring method according to a first embodiment of the present invention. The method is applicable to monitoring a person to be monitored, for example a person with impaired speech and limited mobility, so that a call can be made for that person when he or she has a call request. The method can be performed by the monitoring device provided by the embodiments of the present invention, where the device can be implemented in software and/or hardware and is generally integrated on monitoring equipment. In this embodiment, the monitoring device can be integrated into existing equipment, such as a computer, a television or a medical instrument; the monitoring equipment can also be dedicated equipment that only implements the monitoring method of this embodiment. The monitoring equipment includes, but is not limited to: a camera, a speaker, a processor and the like.
As shown in fig. 1, a monitoring method according to a first embodiment of the present invention includes the following steps:
s101, obtaining a face image of a person in a monitored area.
In this embodiment, the monitored area can be understood as the area that the image acquisition device is able to capture. Only face images of persons within the monitored area are processed, which effectively reduces image processing time and improves subsequent recognition efficiency.
It should be noted that the monitoring method in this embodiment is applicable to persons to be monitored who have impaired speech and limited mobility. Correspondingly, the position of the person to be monitored relative to the monitoring equipment may stay constant or may change. When the position of the person changes relative to the monitoring equipment, the monitored area may cover the maximum range within which the person to be monitored can move.
In the step, the face image of the person in the monitored area can be acquired through an image acquisition device (such as a camera). The embodiment can perform monitoring analysis based on the face image after the face image is acquired.
Generally, in this step, the face image of the person in the monitored area may be obtained directly by the image acquisition device, or a whole-body image of the person, including environmental information, may be obtained first and the face image then extracted from it. It is understood that the face image may include background information (e.g., the surroundings of the person to be monitored).
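The embodiment does not prescribe a concrete detection algorithm for this step. Below is a minimal sketch, assuming Python with OpenCV and its bundled Haar cascade standing in for the image acquisition device and face extractor (all names here are illustrative, not part of the disclosed equipment):

    import cv2

    # Haar cascade shipped with OpenCV; any face detector could stand in here.
    face_detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    def acquire_face_images(frame):
        """Crop the face regions out of one camera frame.

        The frame may show the whole monitored area; only the face crops
        are passed on, which keeps later processing cheap, as noted above.
        """
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = face_detector.detectMultiScale(gray, scaleFactor=1.1,
                                               minNeighbors=5)
        return [frame[y:y + h, x:x + w] for (x, y, w, h) in faces]

    # Read one frame from the default camera and extract any faces in it.
    capture = cv2.VideoCapture(0)
    ok, frame = capture.read()
    face_images = acquire_face_images(frame) if ok else []
    capture.release()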
S102, if the face image is matched with a preset image, determining that a person corresponding to the face image is a person to be monitored.
In this embodiment, the preset image may be understood as a pre-recorded face image of the person to be monitored. The preset image can be used for identity authentication of a person to be monitored.
After the face image of the person in the monitored area is obtained, it can be matched against the preset image to complete identity authentication, so as to judge whether the person in the monitored area is the person to be monitored and to avoid monitoring persons who are not to be monitored. In this embodiment, the obtained face image is considered to match the preset image when the two are consistent.
Consistency may mean that all feature information in the face image agrees with the corresponding feature information in the preset image, that the main feature information agrees with the corresponding feature information in the preset image, or that the proportion of feature information agreeing with the preset image reaches a certain threshold. Feature information can be understood as features that identify the face image, such as eye features, mouth features or facial features. Main feature information can be understood as features capable of uniquely identifying a face image, such as eye features.
Specifically, in this step, whether the face image matches the preset image may be determined by matching the feature information in the acquired face image of the person in the monitored area against the feature information in the preset image. If they match, the person can be determined to be the person to be monitored, and monitoring of that person can begin. The feature information may include eye features, such as spots, filaments, coronas and/or crypts of the eyes.
Taking recognition by eye features as an example: when the number and size of the spots at corresponding positions in the eye image (the eye image being contained in the face image) and the preset image are the same, and/or the shape and size of the corona region are the same, and/or the size and shape of the crypts are the same, the eye image may be considered to match the preset image. It is understood that a preset deviation range is allowed when determining whether the face image matches the preset image.
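For illustration only, the deviation-tolerant comparison just described might look like the following sketch, assuming the feature information has already been reduced to named numeric measurements (the feature names, tolerance and threshold are assumptions, not values fixed by the embodiment):

    def features_match(observed, preset, tolerance=0.15, required_ratio=0.8):
        """Match observed eye features against the enrolled preset.

        Each measurement may deviate from the preset by a relative
        tolerance, mirroring the preset deviation range allowed above;
        the identity is accepted when enough features agree.
        """
        matched = sum(
            1 for name, preset_value in preset.items()
            if name in observed
            and abs(observed[name] - preset_value) <= tolerance * abs(preset_value))
        return matched / len(preset) >= required_ratio

    # Hypothetical enrolled eye features of the person to be monitored.
    preset = {"spot_count": 7, "corona_diameter": 42.0, "crypt_area": 130.0}
    observed = {"spot_count": 7, "corona_diameter": 40.5, "crypt_area": 128.0}
    print(features_match(observed, preset))  # True -> person to be monitored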
S103, determining the sight line of the person to be monitored according to the eye image in the face image of the person to be monitored.
After the person to be monitored is determined, this step can judge the sight line corresponding to the eye image in the face image of the person to be monitored, so as to judge whether the person has a call request. It should be noted that the face image in this step should include the person's eyes, so that the sight line can be determined based on the eye image within the face image.
Generally, this step first locates the eye image within the face image of the person to be monitored and then determines the sight line from the eye image. When determining the sight line from the eye image, the eye movement features in the eye image can be analyzed to obtain the sight line of the person to be monitored.
Generally, when an ordinary camera is used to acquire the face image of the person to be monitored, this step can determine the sight line based on the eye features in the eye image of the face image together with the eye features recorded when the person gazes at the preset area; when an infrared camera is used (equipped with an infrared lamp that forms a light spot on the person's eyes), the sight line of the person to be monitored can be determined by comparing the positional relationship between the center of the light spot and the center of the pupil.
Eye movement features can be understood as feature information of the eye, determined from eyeball physiological information and/or eyeball information in an eye image. Eye features may include, but are not limited to: pupil position, pupil shape, iris position, iris shape, eyelid position, canthus position, and/or light spot (also known as Purkinje spot) position.
When the position of the person to be monitored is fixed relative to the monitoring equipment, this embodiment can analyze only the eye features in the eye image; when the position of the person changes relative to the monitoring equipment, the analysis can also be combined with the person's position information so as to determine the sight line accurately. The position information of the person to be monitored can be obtained by analyzing the face image. Generally, the position of the image acquisition device is fixed, so the face images acquired when the person to be monitored is at different positions will differ somewhat.
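A sketch of the infrared (light spot versus pupil center) variant described above, with an assumed linear calibration in place of whatever mapping a real implementation would fit (the numbers are placeholders, not measured data):

    import numpy as np

    def estimate_gaze_point(pupil_center, glint_center, gain, offset):
        """Map the pupil-glint vector to a point on the gazed surface.

        The glint (corneal reflection of the infrared lamp) stays nearly
        fixed while the pupil moves with the eye, so the vector between
        the two centers indexes the sight line; gain and offset would be
        fitted while the person gazes at known calibration points.
        """
        vector = np.asarray(pupil_center, float) - np.asarray(glint_center, float)
        return gain @ vector + offset

    gain = np.array([[12.0, 0.0], [0.0, 12.0]])   # screen pixels per unit offset
    offset = np.array([640.0, 360.0])             # assumed screen center
    gaze = estimate_gaze_point((102.0, 85.0), (98.0, 88.0), gain, offset)
    print(gaze)  # approximate gaze coordinates on the monitored surface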
S104, if the sight line of the person to be monitored gazes at any sub-area corresponding to a calling function within the preset area for triggering calls for longer than the early warning duration, determining the calling level according to the gazed sub-area, or according to the gazed sub-area and the physical state of the person to be monitored, and making the corresponding call.
In this embodiment, the preset area can be understood as a designated area on the monitoring equipment that can be used to trigger calls. Sub-areas are areas contained within the preset area; different sub-areas may correspond to different calls, and different calls may correspond to different functions. The early warning duration can be understood as a time threshold for the person to be monitored gazing at the preset area. The calling level can be understood as a priority determined from the gazed sub-area alone, or set jointly from the physical state of the person to be monitored and the sub-area. The physical state can be understood as the person's state of illness, for example a cold, a post-operative state or fracture rehabilitation. Different physical states combined with the gazed sub-area may yield different calling levels.
It will be appreciated that the preset area may comprise a plurality of sub-areas, and different sub-areas may correspond to different calling levels. The specific position, number and shape of the preset area are not limited here, nor are the number and shape of the sub-areas it contains; those skilled in the art can set them according to actual requirements.
Generally, when the person to be monitored has a call request, he or she can gaze at the preset area to raise the alarm. Here, the monitoring equipment can be understood as the equipment that runs the monitoring method provided by this embodiment. If the duration for which the person to be monitored gazes at any sub-area of the preset area exceeds the early warning duration, subsequent monitoring processing can be carried out; if not, face images of persons in the monitored area can continue to be acquired until the gaze duration on some sub-area exceeds the early warning duration.
Specifically, in this step, whether the person to be monitored gazes at any sub-area of the preset area can be determined by matching the gaze region corresponding to the person's sight line against each sub-area of the preset area. If the person gazes at some sub-area, it can further be determined whether the gaze duration exceeds the early warning duration.
If the gaze duration of the person to be monitored on any sub-area of the preset area exceeds the early warning duration, the person can be considered to have a call request; the calling level can then be determined based on the gazed sub-area, or on the gazed sub-area and the physical state of the person to be monitored, and the corresponding call made based on the determined level. If the gaze duration on every sub-area stays within the early warning duration, or the person does not gaze at any sub-area of the preset area, face images of persons in the monitored area can continue to be acquired so that monitoring continues.
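The gaze-duration check just described reduces to a dwell timer that resets whenever the sight line leaves the sub-area. A minimal sketch (the three-second early warning duration is an assumed value):

    import time

    WARNING_DURATION = 3.0  # seconds; assumed early warning duration

    class DwellTimer:
        """Track how long the sight line stays inside one sub-area."""

        def __init__(self):
            self.current_area = None
            self.entered_at = None

        def update(self, area_id, now=None):
            """Feed the sub-area currently gazed at (or None); return the
            sub-area once it has been gazed at beyond the warning duration."""
            now = time.monotonic() if now is None else now
            if area_id != self.current_area:      # gaze moved: restart timing
                self.current_area = area_id
                self.entered_at = now
                return None
            if area_id is not None and now - self.entered_at > WARNING_DURATION:
                return area_id                     # call request detected
            return None

The timer would be fed once per acquired frame with the sub-area hit by the estimated gaze point, and monitoring simply continues while it keeps returning None.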
After the sub-area gazed at by the person to be monitored is determined, the corresponding calling level can be determined directly from that sub-area and the corresponding call made; alternatively, the calling level can be determined from the gazed sub-area together with the physical state of the person to be monitored. When the calling level is determined from the sub-area alone, each sub-area can correspond to a calling level, so once the sub-area is determined the level follows. When the calling level is determined from the sub-area and the physical state, each combination of sub-area and physical state can correspond to a calling level, so once the sub-area is determined the level is obtained by also taking the physical state into account.
The physical state of the person to be monitored can be determined from the person's medical record, or obtained through monitoring devices that measure physical state. When determining the calling level, combining the physical state of the person to be monitored allows more accurate, targeted calling. For example, different persons to be monitored may be in different physical states; when they gaze at the same sub-area, the appropriate calling level can be determined in combination with their physical states. Suppose the preset area contains sub-area A and sub-area B. When the person to be monitored is in a post-operative state and gazes at sub-area A, a higher calling level can be triggered: if calling levels are classified into first, second and third classes (where first class may be the highest), a first-class call may be triggered. When the person is in a rehabilitation state and gazes at sub-area A, a lower calling level, such as a third-class call, can be triggered. In this step, after the calling level is determined according to the gazed sub-area, or according to the gazed sub-area and the physical state of the person to be monitored, the corresponding call can be made based on the determined level. It is understood that different calling levels may correspond to different calling modes, such as ringing or lighting, and may also correspond to different medical staff; for example, a doctor is called when the calling level is high and a nurse when it is low.
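Since the embodiment leaves the concrete correspondences open, the two ways of determining the calling level can be sketched as simple table lookups; the sub-areas, states and levels below are the illustrative ones from the example above, with 1 as the highest level:

    # Hypothetical correspondence between (sub-area, physical state) and level.
    LEVELS_WITH_STATE = {
        ("A", "post-operative"): 1,
        ("A", "rehabilitation"): 3,
        ("B", "post-operative"): 2,
        ("B", "rehabilitation"): 3,
    }
    LEVELS_BY_AREA = {"A": 1, "B": 2}  # sub-area-only correspondence

    def determine_call_level(sub_area, body_state=None):
        """Pick the calling level from the gazed sub-area alone, or from
        the sub-area combined with the physical state when it is known."""
        if body_state is None:
            return LEVELS_BY_AREA[sub_area]
        return LEVELS_WITH_STATE[(sub_area, body_state)]

    print(determine_call_level("A", "post-operative"))  # 1: first-class call
    print(determine_call_level("A", "rehabilitation"))  # 3: third-class call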
The embodiment of the invention provides a monitoring method by which the face image of a person in the monitored area can be identified before the sight line of the person to be monitored is analyzed, so that calls triggered by persons who are not to be monitored are avoided and calling accuracy is improved. In addition, when the sight line of the person to be monitored gazes at any sub-area corresponding to a calling function within the preset area for longer than the early warning duration, a targeted call can be made according to the gazed sub-area, or according to the gazed sub-area and the physical state of the person to be monitored, which reduces the workload of medical staff, relieves the suffering of the person to be monitored, and effectively improves the experience of both the person to be monitored and the medical staff.
Example two
Fig. 2a is a schematic flow chart of a monitoring method according to a second embodiment of the present invention, which is refined on the basis of the above embodiment. In this embodiment, the operation "if the face image matches a preset image, determining that the person corresponding to the face image is the person to be monitored" is further embodied as: extracting feature information from the face image; and if the feature information matches preset feature information, determining that the person corresponding to the face image is the person to be monitored.
Further, in this embodiment, determining the sight line of the person to be monitored according to the eye image in the face image of the person to be monitored is refined as: determining the position information of the person to be monitored according to the face image of the person and a reference image; and determining the sight line according to the position information and the eye movement features in the eye image of the person to be monitored.
On the basis of the above, determining the calling level according to the gazed sub-area, or according to the gazed sub-area and the physical state of the person to be monitored, and making the corresponding call is specifically refined as: searching a preset level data table, determining the calling level corresponding to the gazed sub-area and the physical state of the person to be monitored, and making the corresponding call; or searching a preset level data table, determining the calling level corresponding to the gazed sub-area, and making the corresponding call.
Further, this embodiment is refined so that, before the face image of the person in the monitored area is acquired, the method further includes: recording a face image containing environmental information as a reference image while the person to be monitored gazes at a reference point.
Further, this embodiment is refined so that, after the calling level is determined and the corresponding call is made, the method further includes: sending a corresponding voice reminder to the person to be monitored according to the determined calling level. For details not described in this embodiment, please refer to embodiment one.
As shown in fig. 2a, a monitoring method provided by the second embodiment of the present invention includes the following steps:
s201, when the face image containing the environmental information is recorded as the reference image when the person to be monitored looks at the reference point.
In this embodiment, the reference point may be understood as a point displayed on the monitoring device at which the person to be monitored looks. The reference image may be understood as an image acquired when the person to be monitored gazes at the reference point, on the basis of which reference image the position information of the person to be monitored can be determined.
Generally, before the person to be monitored is placed under monitoring, the face image captured while he or she gazes straight at the reference point can be pre-recorded as the reference image. It should be noted that this face image may contain the background environment around the person's face. Using a face image that includes environmental information as the reference image allows this step to determine the person's position information more effectively.
S202, acquiring a face image of a person in the monitored area.
S203, extracting feature information from the face image.
In this embodiment, after the face image of the person in the monitored area is acquired, the feature information may be extracted from the face image through a feature extraction technique in this step, so as to identify the identity of the person to be monitored based on the extracted feature information. The feature information may include eye features, mouth features, face features, or the like.
S204, judging whether the feature information matches the preset feature information; if so, executing S205; if not, returning to S202.
In this embodiment, the preset feature information may be understood as feature data in a preset image. It is understood that the preset feature information corresponds to the feature information. For example, when the feature information is an eye feature, the preset feature information may be a preset eye feature; when the feature information is a mouth feature, the preset feature information may be a preset mouth feature; when the feature information is a facial feature, the preset feature information may be a preset facial feature.
After the feature information is extracted, the step may further determine whether the feature information matches the preset feature information. If so, the monitoring operation can be further executed, i.e. the step S205 is executed; if not, the method may return to continue acquiring the face image of the person in the monitored area, i.e., execute S202.
S205, determining that the person corresponding to the face image is the person to be monitored.
In this embodiment, if the feature information matches with the preset feature information, the person corresponding to the face image may be determined as a person to be monitored, so as to monitor the person to be monitored.
S206, determining the position information of the person to be monitored according to the face image and the reference image of the person to be monitored.
In this embodiment, when determining the sight line of the person to be monitored based on the eye image in the face image of the person to be monitored, the step may first perform feature comparison between the face image of the person to be monitored and the reference image, and determine the current position information of the person to be monitored, so as to determine the sight line of the person to be monitored based on the position information.
Specifically, this step may calculate the positional deviation between the face image of the person to be monitored and the reference image so as to determine the person's position information. When determining the positional deviation, the deviation of each piece of feature information can be computed from its position parameters in the face image and in the reference image.
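A sketch of this positional deviation, assuming the feature information has been reduced to matched landmark coordinates in both images (the coordinates below are invented for illustration):

    import numpy as np

    def position_offset(landmarks, reference_landmarks):
        """Average displacement of matching facial landmarks between the
        current face image and the pre-recorded reference image, used as
        the position information of the person to be monitored."""
        current = np.asarray(landmarks, float)
        reference = np.asarray(reference_landmarks, float)
        return (current - reference).mean(axis=0)

    reference = [(300.0, 220.0), (360.0, 221.0), (330.0, 260.0)]
    current = [(310.0, 228.0), (370.0, 229.0), (340.0, 268.0)]
    print(position_offset(current, reference))  # ~[10. 8.]: moved right/down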
S207, determining the sight line of the person to be monitored according to the position information and the eye movement characteristics in the eye image of the person to be monitored.
This step can determine the sight line of the person to be monitored by analyzing the position information together with the eye movement features in the person's eye image. Eye movement features describe how the eyes move, such as saccade information (saccade peak, velocity, count and/or average amplitude), tremor information (tremor frequency and/or duration) and/or fixation information (fixation duration, fixation frequency, fixation position and/or time of first fixation). The eye movement features may be determined from pupil position, pupil shape, pupil diameter, iris position, iris shape, eyelid position, canthus position, light spot (also known as Purkinje spot) features, and/or eyelid features.
Specifically, when determining the sight line of the person to be monitored, the step may first determine the sight line to be corrected of the person to be monitored based on the eye movement characteristics in the eye image, and then correct the sight line to be corrected based on the determined position information, so as to obtain the sight line of the person to be monitored. In addition, the step may also determine the sight line of the person to be monitored directly based on the determined position information, the eye movement characteristics in the eye image, and a predetermined sight line model. The predetermined sight line model establishes the corresponding relation of the position information, the eye movement characteristics and the sight line, and the sight line can be determined directly based on the position information and the eye movement characteristics.
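The first variant (correcting a sight line that assumed the reference position) might look like the following sketch; the linear compensation and its sensitivity factor are assumptions standing in for whichever correction or sight line model an implementation actually uses:

    import numpy as np

    def corrected_gaze(raw_gaze, offset, sensitivity=0.5):
        """Correct a gaze point estimated from eye movement features alone.

        raw_gaze is the gaze point as if the person were still at the
        reference position; offset is the positional deviation computed
        from the face image and the reference image in S206.
        """
        return np.asarray(raw_gaze, float) - sensitivity * np.asarray(offset, float)

    print(corrected_gaze(raw_gaze=(688.0, 324.0), offset=(10.0, 8.0)))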
It can be understood that, in this embodiment, the image capturing device may be a general camera, and may also be an infrared camera (the infrared camera is provided with an infrared lamp). When a face image is acquired using a general camera, a line of sight can be determined based on the eye features in the face image and the reference pupil center. The reference pupil center can be obtained by correction before the monitoring device is used by the person to be monitored, or can be pre-stored reference data acquired based on the iris characteristics of the person to be monitored.
S208, judging whether the sight line of the person to be monitored has gazed at any sub-area corresponding to a calling function within the preset area for triggering calls for longer than the early warning duration; if so, executing S209; otherwise, executing S202.
In this embodiment, after determining the sight line of the person to be monitored, the step may determine whether the sight line is looking at any sub-area corresponding to different call functions in the preset area for triggering the call. If the line of sight of the person to be monitored gazes at any sub-area in the preset area, it may be further determined whether the gazing duration exceeds the warning duration, and if so, S209 may be performed. Accordingly, if the line of sight of the person to be monitored does not look at any sub-area in the preset area, the process may return to S202. If the sight line of the person to be monitored is watched on one sub-area in the preset area, but the watching duration does not exceed the early warning duration, the method may return to S202 to continue monitoring.
S209, searching the preset level data table, determining the calling level corresponding to the gazed sub-area and the physical state of the person to be monitored, and making the corresponding call; or searching the preset level data table, determining the calling level corresponding to the gazed sub-area and making the corresponding call.
In this embodiment, the preset level data table can be understood as the correspondence between sub-areas of the preset area, physical states and calling levels, or as the correspondence between sub-areas of the preset area and calling levels.
After determining that the sight line of the person to be monitored has gazed at some sub-area of the preset area for longer than the early warning duration, this step can search the preset level data table according to the gazed sub-area, or according to the gazed sub-area and the physical state of the person to be monitored, determine the corresponding calling level, and make the corresponding call based on that level.
The calling levels can be set according to different physical states combined with different sub-areas, or according to different sub-areas alone, and different calling levels can correspond to different contact persons. After the calling level is determined, the corresponding call can be sent to different terminals so as to summon the corresponding personnel. The communication mode is not limited here; those skilled in the art may transmit the data in a wired or wireless manner according to specific requirements.
S210, sending a corresponding voice reminder to the person to be monitored according to the determined calling level.
In this embodiment, after the corresponding call is made, this step can send a voice reminder matching the calling level to the person to be monitored through a voice playback device (such as a speaker), according to the determined calling level. For example, the voice can inform the person that the corresponding medical staff have been notified and, in combination with the calling level, remind the person how to cope with the currently triggered situation on his or her own, such as breathing deeply or closing the eyes and relaxing. It is understood that this step may also display the corresponding reminder as text on a display.
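The reminder itself can be as simple as a per-level lookup handed to the playback device; in this sketch the prompt texts and the playback function are placeholders (a real device would hand the text to a text-to-speech engine or play a recorded clip):

    # Hypothetical prompt per calling level; level 1 is the highest.
    PROMPTS = {
        1: "Your doctor, nurse and family have been notified. Try to breathe deeply.",
        2: "Your doctor has been notified and is on the way.",
        3: "Your nurse has been notified. Please close your eyes and relax.",
    }

    def voice_prompt(call_level, play):
        """Send the reminder matching the calling level to the speaker."""
        play(PROMPTS[call_level])

    voice_prompt(2, play=print)  # stand-in playback: just print the text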
Fig. 2b is a schematic view of an application scenario of the monitoring method according to the second embodiment of the present invention. In fig. 2b, the person in the monitored area (within the acquisition area of the image acquisition device 221b) is the person to be monitored 21b. As shown in fig. 2b, the monitoring equipment 22b obtains the face image of the person to be monitored 21b in real time through the image acquisition device 221b, and then analyzes the obtained face image to obtain the calling level. The wireless communication module 222b then sends the calls corresponding to the different calling levels to the corresponding terminals. Finally, the corresponding voice reminder is played to the person to be monitored 21b through the speaker 223b. It should be noted that the positional relationship between the devices in this application scenario is only illustrative, and the specific positions are not limited. The wireless communication module 222b shows only one communication mode, and the specific mode is not limited here.
The number of terminals shown in fig. 2b is not limited, and different terminals can call different persons. If the calling level is low, the monitoring equipment 22b may send a call request through the wireless communication module 222b to the first terminal 231b, which can call the nurse responsible for the person to be monitored 21b; if the calling level is higher, the monitoring equipment 22b can send the call request to the first terminal 231b and the second terminal 232b, where the second terminal 232b can call the doctor in charge of the person to be monitored 21b. If the calling level reaches the highest level, the monitoring equipment 22b may send the call request to the first terminal 231b, the second terminal 232b and the third terminal 233b, where the third terminal 233b can call the family of the person to be monitored 21b.
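Following that scenario, the dispatch step can be sketched as a mapping from calling level to the set of terminals that must be notified (the terminal names and the send function are illustrative; any wired or wireless transport would do, as noted above):

    # Terminals notified per level: first -> nurse, second -> doctor,
    # third -> family; level 1 is the highest, as in the scenario above.
    TERMINALS_BY_LEVEL = {
        3: ["first_terminal"],
        2: ["first_terminal", "second_terminal"],
        1: ["first_terminal", "second_terminal", "third_terminal"],
    }

    def dispatch_call(call_level, send):
        """Send a call request to every terminal the level requires."""
        for terminal in TERMINALS_BY_LEVEL[call_level]:
            send(terminal, {"call_level": call_level})

    dispatch_call(1, send=lambda t, msg: print(t, msg))  # highest: all three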
Fig. 2c is a schematic diagram of a preset image provided in the second embodiment of the present invention. Fig. 2c shows one eye; the number of eyes is not limited here, and the eye image in this embodiment may include both eyes to improve accuracy. The preset image is taken to be an eye image as an example; it is understood that the preset image may also be a face image. As shown in fig. 2c, the preset image may be a pre-recorded eye image of the person to be monitored, and the preset feature information in the eye image may include pupil features and iris features.
In the eye image, the ring-shaped part in the middle layer of the eyeball (between the black pupil and the white sclera) is the iris 22c; the small circular hole at the center of the iris 22c (the innermost circle of the eye) is the pupil 21c.
Illustratively, the specific flow of the monitoring method provided by this embodiment is as follows:
First, a face image captured while the person to be monitored gazes at the reference point can be recorded as the reference image; this face image contains environmental information. Then the monitoring equipment acquires face images of persons in the monitored area in real time through an image acquisition device (such as a camera), extracts feature information from each acquired face image, and compares the extracted feature information with the preset feature information. If the degree of match between the two is within a certain range, the feature information is considered to match the preset feature information.
If the feature information matches the preset feature information, the person corresponding to the face image can be determined to be the person to be monitored, and the position information of the person is obtained by analyzing the face image and the reference image. After the position information is determined, the sight line of the person to be monitored is determined based on the position information and the eye movement features in the person's eye image. If the sight line gazes at any sub-area of the preset area for longer than the early warning duration, the calling level is determined according to the gazed sub-area, or according to the gazed sub-area and the physical state of the person to be monitored, and the corresponding call is made.
The monitoring method provided by the second embodiment of the invention gives concrete form to the operations of determining the person to be monitored, determining the sight line and making the call, and additionally covers recording the reference image and giving voice reminders. With this method, the identity of persons in the monitored area can be recognized based on the preset image, improving monitoring accuracy. In addition, when determining the call request of the person to be monitored, the person's position information can be determined from the reference image and the face image, so that the calling level is determined based on the position information and the eye movement features, a targeted call is made, and the workload of medical staff is effectively reduced. After the call, the monitoring equipment sends the person to be monitored a corresponding voice reminder informing him or her that the medical staff have been notified, which relieves discomfort and effectively improves the experience of both the person to be monitored and the medical staff.
Example three
Fig. 3 is a schematic structural diagram of a monitoring device according to a third embodiment of the present invention, applicable to monitoring a person to be monitored, for example a person with impaired speech and limited mobility, so that a call can be made for that person when he or she has a call request. The device may be implemented in software and/or hardware and is typically integrated on the monitoring equipment.
As shown in fig. 3, the monitoring device comprises: an acquiring module 31, a person determining module 32, a sight line determining module 33, and a calling module 34.
The acquiring module 31 is configured to acquire a face image of a person in a monitored area;
the person determining module 32 is configured to determine, when the face image matches a preset image, that a person corresponding to the face image is a person to be monitored;
the sight line determining module 33 is configured to determine a sight line of the person to be monitored according to an eye image in the face image of the person to be monitored;
and the calling module 34 is configured to determine the calling level according to the gazed sub-area, or according to the gazed sub-area and the physical state of the person to be monitored, and make the corresponding call, when the sight line of the person to be monitored gazes at any sub-area corresponding to a calling function within the preset area for triggering calls for longer than the early warning duration.
In this embodiment, the monitoring device first obtains the face image of the person in the monitored area through the acquiring module 31; next, when the face image matches the preset image, the person determining module 32 determines that the person corresponding to the face image is the person to be monitored; then, the sight line determining module 33 determines the sight line of the person to be monitored according to the eye image in the face image of the person to be monitored; finally, when the sight line of the person to be monitored gazes at any sub-area corresponding to a calling function within the preset area for triggering calls for longer than the early warning duration, the calling module 34 determines the calling level according to the gazed sub-area, or according to the gazed sub-area and the physical state of the person to be monitored, and makes the corresponding call.
The monitoring device provided by this embodiment can identify the face image of a person in the monitored area before the sight line of the person to be monitored is analyzed, so that calls triggered by persons who are not to be monitored are avoided and calling accuracy is improved. In addition, when the sight line of the person to be monitored gazes at any sub-area corresponding to a calling function within the preset area for longer than the early warning duration, a targeted call can be made according to the gazed sub-area, or according to the gazed sub-area and the physical state of the person to be monitored, which reduces the workload of medical staff, relieves the suffering of the person to be monitored, and effectively improves the experience of both the person to be monitored and the medical staff.
Further, the person determination module 32 includes: a feature extraction unit configured to extract feature information from the face image; and the personnel determining unit is used for determining that the personnel corresponding to the face image is the personnel to be monitored when the characteristic information is matched with the preset characteristic information.
On the basis of the above, the sight line determining module 33 is specifically configured to: determine the position information of the person to be monitored according to the face image of the person and the reference image; and determine the sight line of the person to be monitored according to the position information and the eye movement features in the eye image of the person to be monitored.
On the basis of the above technical solution, the calling module 34 is specifically configured to: when the sight line of the person to be monitored gazes at any sub-area corresponding to a calling function within the preset area for triggering calls for longer than the early warning duration, search the preset level data table, determine the calling level corresponding to the gazed sub-area and the physical state of the person to be monitored, and make the corresponding call; or search the preset level data table, determine the calling level corresponding to the gazed sub-area, and make the corresponding call.
Further, the monitoring device is refined to include: a reference image entry module 35, configured to record, before the face image of the person in the monitored area is acquired, a face image containing environmental information as the reference image while the person to be monitored gazes at the reference point.
Further, the monitoring device is refined to include: a voice reminding module 36, configured to send the person to be monitored a corresponding voice reminder according to the determined calling level, after the calling level has been determined and the corresponding call made when the sight line of the person to be monitored gazes at any sub-area corresponding to a calling function within the preset area for triggering calls for longer than the early warning duration.
The monitoring device can execute the monitoring method provided by any embodiment of the invention, and has corresponding functional modules and beneficial effects of the execution method.
Example four
Fig. 4 is a schematic structural diagram of a monitoring device according to a fourth embodiment of the present invention. As shown in fig. 4, a monitoring device provided in a fourth embodiment of the present invention includes: one or more processors 41 and storage 42; the monitoring device may have one or more processors 41, and one processor 41 is illustrated in fig. 4; storage 42 is used to store one or more programs; the one or more programs are executable by the one or more processors 41 to cause the one or more processors 41 to implement a monitoring method according to any of the embodiments of the present invention.
The monitoring device may further comprise: a camera 43, an output device 44 and a communication device 45.
The processor 41, the storage device 42, the camera 43, the output device 44 and the communication device 45 in the monitoring device may be connected by a bus or other means, and the bus connection is taken as an example in fig. 4.
The storage device 42 of the monitoring apparatus is used as a computer-readable storage medium for storing one or more programs, which may be software programs, computer-executable programs, and modules, such as program instructions/modules corresponding to the monitoring method provided in one or two embodiments of the present invention (for example, the modules in the monitoring apparatus shown in fig. 3 include the acquiring module 31, the person determining module 32, the line-of-sight determining module 33, and the calling module 34, and further include the reference image entry module 35 and the voice reminding module 36). The processor 41 executes various functional applications and data processing of the monitoring device by executing software programs, instructions and modules stored in the storage device 42, so as to implement the monitoring method in the above-mentioned method embodiment.
The storage device 42 may include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data created according to use of the device, and the like. Further, the storage 42 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid state storage device. In some examples, storage 42 may further include memory located remotely from processor 41, which may be connected to the device over a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The camera 43 can be used to record an eye image of the person to be monitored as the preset image and to acquire eye images of persons in the monitored area. The camera 43 is one input device; besides it, the terminal device may include other input devices, such as a microphone or a keyboard. The output device 44 may include display devices such as a display screen, a speaker, or an indicator light. The communication device 45 may send calls corresponding to different calling levels to different terminals.
And, when the one or more programs included in the monitoring equipment are executed by the one or more processors 41, the programs perform the following operations: acquiring a face image of a person in a monitored area; if the face image matches a preset image, determining that the person corresponding to the face image is the person to be monitored; determining the sight line of the person to be monitored according to the eye image in the face image of the person to be monitored; and if the sight line of the person to be monitored gazes, for longer than the early warning duration, at any sub-area corresponding to a calling function within a preset area for triggering calls, determining the calling level according to the gazed sub-area, or according to the gazed sub-area and the physical state of the person to be monitored, and making the corresponding call.
Furthermore, an embodiment of the present invention provides a computer-readable storage medium on which a computer program is stored; when executed by a processor, the program performs a monitoring method comprising: acquiring a face image of a person in the monitored area; if the face image matches a preset image, determining that the person corresponding to the face image is the person to be monitored; determining the line of sight of the person to be monitored according to the eye image in the face image of the person to be monitored; and, if the line of sight of the person to be monitored gazes at any of the sub-areas corresponding to different calling functions in a preset area for triggering a call for longer than the early-warning duration, determining the call level according to the gazed sub-area, or according to the gazed sub-area and the physical state of the person to be monitored, and performing the corresponding call.
Optionally, when executed by the processor, the program may further implement the technical solution of the monitoring method provided in any embodiment of the present invention. From the above description of the embodiments, it will be apparent to those skilled in the art that the present invention may be implemented by software together with the necessary general-purpose hardware, or by hardware alone, although the former is the preferred implementation in many cases. Based on this understanding, the technical solutions of the present invention may be embodied in the form of a software product stored in a computer-readable storage medium, such as a floppy disk, a read-only memory (ROM), a random access memory (RAM), a flash memory (FLASH), a hard disk or an optical disk of a computer, and the software product includes several instructions for enabling a computer device (which may be a personal computer, a server or a network device) to execute the methods of the embodiments of the present invention.
It should be noted that the foregoing describes only preferred embodiments of the present invention and the technical principles employed. Those skilled in the art will understand that the present invention is not limited to the particular embodiments described herein, and that various obvious changes, rearrangements and substitutions can be made without departing from the scope of the invention. Therefore, although the present invention has been described in some detail through the above embodiments, it is not limited to them, and it may include other equivalent embodiments without departing from its spirit; its scope is determined by the appended claims.

Claims (10)

1. A monitoring method, comprising:
acquiring a face image of a person in a monitored area;
if the face image is matched with a preset image, determining that a person corresponding to the face image is a person to be monitored;
determining the line of sight of the person to be monitored according to the eye image in the face image of the person to be monitored;
if the line of sight of the person to be monitored gazes at any of the sub-areas corresponding to different calling functions in a preset area for triggering a call for longer than the early-warning duration, determining the call level according to the gazed sub-area and the physical state of the person to be monitored and performing the corresponding call, wherein the preset area comprises a plurality of sub-areas, different sub-areas correspond to different call levels, the physical state is obtained through monitoring equipment for monitoring the physical state, the preset area is a designated area on the monitoring equipment, the designated area is used for triggering a call, and the person to be monitored is a person with a speech impairment and limited mobility.
2. The method according to claim 1, wherein, if the face image matches a preset image, determining that the person corresponding to the face image is the person to be monitored comprises:
extracting feature information from the face image;
and, if the feature information matches preset feature information, determining that the person corresponding to the face image is the person to be monitored.
3. The method according to claim 1, wherein determining the line of sight of the person to be monitored according to the eye image in the face image of the person to be monitored comprises:
determining the position information of the person to be monitored according to the face image and the reference image of the person to be monitored;
and determining the line of sight of the person to be monitored according to the position information and the eye movement characteristics in the eye image of the person to be monitored.
4. The method of claim 1, wherein determining the call level according to the gazed sub-area, or according to the gazed sub-area and the physical state of the person to be monitored, and performing the corresponding call comprises:
searching a preset level data table, determining the call level corresponding to the gazed sub-area and the physical state of the person to be monitored, and performing the corresponding call; or
searching a preset level data table, determining the call level corresponding to the gazed sub-area, and performing the corresponding call.
5. The method of claim 1, further comprising, before acquiring the face image of the person in the monitored area:
recording, while the person to be monitored looks at a reference point, a face image containing environmental information as a reference image.
6. The method according to claim 1, wherein, after the call level is determined according to the gazed sub-area, or according to the gazed sub-area and the physical state of the person to be monitored, and the corresponding call is performed when the line of sight of the person to be monitored gazes at any of the sub-areas corresponding to different calling functions in the preset area for triggering a call for longer than the early-warning duration, the method further comprises:
sending a corresponding voice prompt to the person to be monitored according to the determined call level.
7. A monitoring device, comprising:
the acquiring module is used for acquiring a face image of a person in a monitored area;
the person determining module is used for determining, when the face image matches a preset image, that the person corresponding to the face image is the person to be monitored;
the line-of-sight determining module is used for determining the line of sight of the person to be monitored according to the eye image in the face image of the person to be monitored;
the calling module is used for determining the call level according to the gazed sub-area and the physical state of the person to be monitored and performing the corresponding call when the line of sight of the person to be monitored gazes at any of the sub-areas corresponding to different calling functions in a preset area for triggering a call for longer than the early-warning duration, wherein the preset area comprises a plurality of sub-areas, different sub-areas correspond to different call levels, the physical state is obtained through monitoring equipment for monitoring the physical state, the preset area is a designated area on the monitoring equipment, the designated area is used for triggering a call, and the person to be monitored is a person with a speech impairment and limited mobility.
8. The apparatus of claim 7, wherein the person determining module comprises:
a feature extraction unit configured to extract feature information from the face image;
and a person determining unit configured to determine that the person corresponding to the face image is the person to be monitored when the feature information matches the preset feature information.
9. A monitoring device, comprising:
one or more processors;
storage means for storing one or more programs;
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the monitoring method of any one of claims 1-6.
10. A computer-readable storage medium on which a computer program is stored which, when executed by a processor, carries out the monitoring method according to any one of claims 1-6.
CN201810791295.7A 2018-07-18 2018-07-18 Monitoring method, device, monitoring equipment and storage medium Active CN108922085B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201810791295.7A CN108922085B (en) 2018-07-18 2018-07-18 Monitoring method, device, monitoring equipment and storage medium
PCT/CN2019/085945 WO2020015439A1 (en) 2018-07-18 2019-05-08 Monitoring method and apparatus, monitoring device, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810791295.7A CN108922085B (en) 2018-07-18 2018-07-18 Monitoring method, device, monitoring equipment and storage medium

Publications (2)

Publication Number Publication Date
CN108922085A (en) 2018-11-30
CN108922085B (en) 2020-12-18

Family

ID=64416215

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810791295.7A Active CN108922085B (en) 2018-07-18 2018-07-18 Monitoring method, device, monitoring equipment and storage medium

Country Status (2)

Country Link
CN (1) CN108922085B (en)
WO (1) WO2020015439A1 (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108922085B (en) * 2018-07-18 2020-12-18 北京七鑫易维信息技术有限公司 Monitoring method, device, monitoring equipment and storage medium
EP3712900A1 (en) * 2019-03-20 2020-09-23 Stryker European Holdings I, LLC Technique for processing patient-specific image data for computer-assisted surgical navigation
CN111210592A (en) * 2020-01-07 2020-05-29 珠海爬山虎科技有限公司 Video identification monitoring method, computer device and computer readable storage medium
CN115097933A (en) * 2022-06-13 2022-09-23 华能核能技术研究院有限公司 Concentration determination method and device, computer equipment and storage medium


Family Cites Families (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10080529B2 (en) * 2001-12-27 2018-09-25 Medtronic Minimed, Inc. System for monitoring physiological characteristics
US20160284202A1 (en) * 2006-07-17 2016-09-29 Eloquence Communications, Inc. Method and system for advanced patient communication
US20090275807A1 (en) * 2008-05-02 2009-11-05 General Electric Company Method for managing alarms in a physiological monitoring system
US7983935B1 (en) * 2010-03-22 2011-07-19 Ios Health Systems, Inc. System and method for automatically and iteratively producing and updating patient summary encounter reports based on recognized patterns of occurrences
US8665096B2 (en) * 2010-12-21 2014-03-04 General Electric Company Alarm control method, physiological monitoring apparatus, and computer program product for a physiological monitoring apparatus
US8593275B2 (en) * 2011-03-08 2013-11-26 General Electric Company Wireless monitoring system and method with dual mode alarming
JP5345660B2 (en) * 2011-09-08 2013-11-20 本田技研工業株式会社 In-vehicle device identification device
CN203630908U (en) * 2013-12-12 2014-06-04 浙江中医药大学 Ward beeper controlled by eye movement signal
US9665198B2 (en) * 2014-05-06 2017-05-30 Qualcomm Incorporated System and method for optimizing haptic feedback
US9465981B2 (en) * 2014-05-09 2016-10-11 Barron Associates, Inc. System and method for communication
CN104216521B * 2014-09-10 2017-11-28 苏州德品医疗科技股份有限公司 Eye-movement calling method and system for wards
WO2016071244A2 (en) * 2014-11-06 2016-05-12 Koninklijke Philips N.V. Method and system of communication for use in hospitals
CN106296796B (en) * 2015-06-04 2019-08-13 北京智谷睿拓技术服务有限公司 Information processing method, information processing unit and user equipment
CN204814294U * 2015-06-18 2015-12-02 苏州德品医疗科技股份有限公司 Eye-movement calling system for disabled patients in the ICU
JP6685664B2 (en) * 2015-07-10 2020-04-22 パラマウントベッド株式会社 Patient status reporting device, patient status reporting system, and reporting method in patient status reporting device
JP6536324B2 (en) * 2015-09-30 2019-07-03 富士通株式会社 Gaze detection system, gaze detection method and gaze detection program
US10353475B2 (en) * 2016-10-03 2019-07-16 Microsoft Technology Licensing, Llc Automated E-tran application
CN107133612A * 2017-06-06 2017-09-05 河海大学常州校区 Intelligent ward based on image processing and speech recognition technology and operation method thereof
CN107616797A * 2017-08-25 2018-01-23 深圳职业技术学院 Critically ill patient calling system
CN108922085B (en) * 2018-07-18 2020-12-18 北京七鑫易维信息技术有限公司 Monitoring method, device, monitoring equipment and storage medium

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9244528B2 (en) * 2012-04-05 2016-01-26 Lc Technologies, Inc. Gaze based communications for locked-in hospital patients
CN105788114A (en) * 2014-12-17 2016-07-20 南通市第人民医院 System for ward on-demand calling
CN106504271A (en) * 2015-09-07 2017-03-15 三星电子株式会社 Method and apparatus for eye tracking
CN105472179A (en) * 2015-12-10 2016-04-06 深圳市鑫德亮电子有限公司 Method and system for controlling nurse station host to answer calls differentially according to nursing levels
CN107527005A * 2016-06-21 2017-12-29 通用汽车环球科技运作有限责任公司 Apparatus and method for determining user intent based on gaze information
CN106200961A (en) * 2016-07-10 2016-12-07 上海青橙实业有限公司 Mobile terminal, wearable device and input method
JP2018045386A (en) * 2016-09-13 2018-03-22 株式会社デンソー Line-of-sight measurement device
CN107957775A (en) * 2016-10-18 2018-04-24 阿里巴巴集团控股有限公司 Data object exchange method and device in virtual reality space environment
CN107329562A (en) * 2017-05-18 2017-11-07 北京七鑫易维信息技术有限公司 Monitoring method and device
CN107252310A * 2017-06-08 2017-10-17 湖南暄程科技有限公司 Comprehensive hospital monitoring system

Also Published As

Publication number Publication date
CN108922085A (en) 2018-11-30
WO2020015439A1 (en) 2020-01-23

Similar Documents

Publication Publication Date Title
CN108922085B (en) Monitoring method, device, monitoring equipment and storage medium
CN117991885A (en) Display system
US20200000334A1 (en) Monitoring neurological functional status
US8764194B2 (en) Handheld computing device for administering a gaze nystagmus test
US20180174489A1 (en) Cardiopulmonary resuscitation guidance method, computer program product and system
WO2014140834A1 (en) Systems and methods for audible facial recognition
JP2014003593A (en) Recognition and feedback of facial and vocal emotions
CN107993630A (en) Display terminal eye care method, device, terminal and storage medium
WO2019019805A1 (en) Field of view detection method and system based on head-mounted detection device, and detection apparatus
KR20200104758A (en) Method and apparatus for determining a dangerous situation and managing the safety of the user
JP3786952B2 (en) Service providing apparatus, disappointment determination apparatus, and disappointment determination method
US10936060B2 (en) System and method for using gaze control to control electronic switches and machinery
KR20200104759A (en) System for determining a dangerous situation and managing the safety of the user
CN104484588A (en) Iris security authentication method with artificial intelligence
CN106681509A (en) Interface operating method and system
US11317800B2 (en) Method of monitoring eye strain and related optical system
KR101728707B1 (en) Method and program for controlling electronic device by wearable glass device
WO2017016941A1 (en) Wearable device, method and computer program product
CN112089970A (en) User state monitoring method, neck massager and device
US20220189626A1 (en) Systems and methods for detecting and addressing quality issues in remote therapy sessions
KR101727155B1 (en) Smart glasses using brain wave
CN112784655A (en) Living body detection method and device based on gazing information and detection equipment
CN108594873B (en) Bathtub control method and device
JP2021033676A (en) Information processing apparatus and program
KR20160016149A (en) System and method for preventing drowsiness by wearable glass device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant