CN114724071A - Information detection method and device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN114724071A
CN114724071A (application CN202210404600.9A)
Authority
CN
China
Prior art keywords
target
detection
matching
user
target user
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210404600.9A
Other languages
Chinese (zh)
Inventor
张殿炎
强丽丽
郝石磊
于博文
旷章辉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Sensetime Technology Development Co Ltd
Original Assignee
Beijing Sensetime Technology Development Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Sensetime Technology Development Co Ltd filed Critical Beijing Sensetime Technology Development Co Ltd
Priority to CN202210404600.9A
Publication of CN114724071A
Legal status: Pending


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/22 Matching criteria, e.g. proximity measures
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G06N 3/08 Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • Molecular Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The present disclosure provides an information detection method, an information detection apparatus, an electronic device, and a storage medium. The method includes: in response to a target user arriving at an image acquisition area in a hospital scene, acquiring a first video to be detected of the target user under a determined living body detection index, where the living body detection index includes a detection index sequence composed of a plurality of detection colors and/or a detection index sequence composed of a plurality of detection actions; performing target detection on the first video to be detected, and determining a first detection result corresponding to the target user under the living body detection index; in a case that the first detection result indicates that the detection is passed, matching a target video frame corresponding to the target user in the first video to be detected against candidate face images in a face library corresponding to the target area, to obtain a matching result; and, in response to the matching result indicating a successful match, controlling a target device to perform a target operation.

Description

Information detection method and device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of living body (liveness) detection technologies, and in particular, to an information detection method, an information detection apparatus, an electronic device, and a storage medium.
Background
Hospitals are places that people frequently visit in daily life. Generally, when a user seeks treatment at a hospital, staff need to verify the user's information through credentials such as a social security card or a registration slip, and the user can be seen only after the verification is passed. However, this manual verification method is cumbersome and inefficient; it can also cause crowding in the hospital, which lowers the safety of the hospital scene.
Disclosure of Invention
In view of the above, the present disclosure provides at least an information detection method, an information detection apparatus, an electronic device, and a storage medium.
In a first aspect, the present disclosure provides an information detection method, including:
in response to a target user arriving at an image acquisition area in a hospital scene, acquiring a first video to be detected of the target user under a determined living body detection index; wherein the target user includes at least one of a visiting user and an accompanying user; the image acquisition area is an acquisition area corresponding to an image acquisition device in any target area in the hospital scene; and the living body detection index includes at least one of a detection index sequence composed of a plurality of detection colors and a detection index sequence composed of a plurality of detection actions;
performing target detection on the first video to be detected, and determining a first detection result corresponding to the target user under the living body detection index;
in a case that the first detection result indicates that the detection is passed, matching a target video frame corresponding to the target user in the first video to be detected against candidate face images in a face library corresponding to the target area, to obtain a matching result;
and in response to the matching result indicating a successful match, controlling a target device to perform a target operation.
With the above method, when the target user arrives at an image acquisition area in the hospital scene, such as a hospital entrance area, a triage room area, or a consulting room area, a first video to be detected of the target user under the determined living body detection index can be acquired. In a case that the first detection result corresponding to the first video to be detected indicates that the detection is passed, the target video frame corresponding to the target user in the first video to be detected is matched against the candidate face images in the face library corresponding to the target area to obtain a matching result; and in response to the matching result indicating a successful match, the target device is controlled to perform the target operation. Automatic verification of the target user's information is thereby realized; compared with manual verification, the efficiency and accuracy of information verification are improved, crowding in the hospital is alleviated, and the operation efficiency of the hospital scene is improved. The method can also reduce contact between medical staff and patients, improving the safety of the hospital scene.
Meanwhile, by performing target detection on the first video to be detected, it can be determined whether the first video to be detected was acquired in real time. This alleviates the safety problem of a user passing face matching with a pre-recorded video or with face images of other people, deters users other than the target user from breaking into a target area of the hospital, and improves the safety of the target areas in the hospital.
In a possible implementation, the target area is a hospital entrance area in the hospital scene, and the face library corresponding to the target area stores a face image of each registered user;
the matching of the target video frame corresponding to the target user in the first video to be detected and the candidate face images in the face library corresponding to any one of the target areas to obtain a matching result includes:
taking each face image stored in the face library corresponding to the hospital entrance area as a first candidate face image, and matching the target video frame corresponding to the target user against each first candidate face image to obtain a first matching value between the target user and each first candidate face image;
in a case that any first matching value is greater than or equal to a set first matching threshold, determining that the matching result is a successful match;
and in a case that each first matching value is smaller than the set first matching threshold, determining that the matching result is a match failure.
Here, by arranging the image acquisition device in the hospital entrance area, after the first detection result indicates that the detection is passed, the target video frame corresponding to the target user can be matched against each face image in the face library corresponding to the hospital entrance area to obtain a matching result, realizing face matching for the target user. After the matching succeeds, the target device is controlled to perform the target operation; for example, an access control device in the hospital entrance area can be controlled to open, improving the safety of the hospital scene.
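As an illustration only, the first-matching step above might be sketched as follows, assuming face features are compared by cosine similarity; the feature vectors, the similarity measure, and the 0.8 threshold are hypothetical choices, not values given in the disclosure:

```python
# Hypothetical sketch of the first-matching step: compare the target video
# frame's face feature against every candidate feature in the entrance-area
# face library. Feature extraction, cosine similarity, and the 0.8 threshold
# are assumptions made for illustration.
from typing import List


def cosine_similarity(a: List[float], b: List[float]) -> float:
    """Cosine similarity between two face feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(x * x for x in b) ** 0.5
    return dot / (na * nb) if na and nb else 0.0


def match_against_library(target_feature: List[float],
                          candidate_features: List[List[float]],
                          threshold: float = 0.8) -> bool:
    """Return True (successful match) if any first matching value reaches
    the set first matching threshold; otherwise False (match failure)."""
    return any(cosine_similarity(target_feature, c) >= threshold
               for c in candidate_features)
```

In practice the feature extractor would be a trained face-recognition network; the structure above only mirrors the "any value above the threshold succeeds, all below fails" rule of the passage.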
In a possible implementation, the target area is a triage room area in the hospital scene, and the face library corresponding to the target area stores face images of visiting users who are scheduled to be seen via the triage room on the current date, and face images of the accompanying users corresponding to those visiting users;
the matching of the target video frame corresponding to the target user in the first video to be detected and the candidate face images in the face library corresponding to any one of the target areas to obtain a matching result includes:
taking each face image stored in the face library corresponding to the triage room area as a second candidate face image, and matching the target video frame corresponding to the target user against each second candidate face image to obtain a second matching value between the target user and each second candidate face image;
in a case that any second matching value is greater than or equal to a set second matching threshold, determining that the matching result is a successful match;
and in a case that each second matching value is smaller than the set second matching threshold, determining that the matching result is a match failure.
Here, by arranging the image acquisition device in the triage room area, after the first detection result indicates that the detection is passed, the target video frame corresponding to the target user can be matched against each face image in the face library corresponding to the triage room area to obtain a matching result, realizing face matching for the target user. After the matching succeeds, the target device is controlled to perform the target operation, realizing automatic triage for the target user and improving triage efficiency.
In a possible implementation, the controlling, in response to the matching result indicating a successful match, the target device to perform a target operation includes:
in response to the matching result indicating a successful match, acquiring identification information corresponding to the target user;
and controlling the target device to update a current identification information sequence based on the identification information corresponding to the target user, to obtain an updated identification information sequence.
In this embodiment of the present disclosure, in response to the matching result indicating a successful match, the identification information corresponding to the target user is acquired, and the target device is controlled to update the current identification information sequence with this identification information to obtain an updated identification information sequence. Users can then be seen in the order indicated by the updated identification information sequence, realizing automatic triage for the target user, improving triage efficiency, and reducing the consumption of paper resources such as triage slips. Contact between medical staff and visiting users is also reduced, improving the safety of the hospital.
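A minimal sketch of this queue update, under the assumptions (not stated in the disclosure) that the identification information sequence behaves as a first-in-first-out queue and that a user already in the queue is not re-appended:

```python
# Hedged sketch of updating the current identification information sequence
# after a successful match. FIFO ordering and duplicate skipping are
# illustrative assumptions.
from collections import deque


def update_identification_sequence(current_sequence: deque,
                                   user_id: str) -> deque:
    """Append the matched user's identification to the triage queue,
    skipping it if the user is already queued."""
    if user_id not in current_sequence:
        current_sequence.append(user_id)
    return current_sequence
```

A display device in the triage room could then render the queue head as the next user to be seen.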
In a possible embodiment, the method further comprises:
generating, in a case that the matching result indicates a match failure, prompt information for prompting the target user to register;
and in response to a successful registration operation of the target user, storing the acquired face image corresponding to the target user into the face library corresponding to the triage room area.
In this embodiment of the present disclosure, when the matching result indicates a match failure, prompt information for prompting the user to perform a registration operation can be generated, flexibly prompting the target user and realizing interaction between the target device and the target user, so that the target user can perform subsequent operations according to the prompt information, improving the target user's treatment efficiency. Meanwhile, in response to the target user's registration operation succeeding, the acquired face image corresponding to the target user is stored in the face library corresponding to the triage room area, providing data support for subsequent triage operations for the target user.
In a possible implementation, the target area is a consulting room area in the hospital scene, and the face library corresponding to the target area stores face images of users to be seen who have completed triage, and face images of the accompanying users corresponding to those users;
the matching of the target video frame corresponding to the target user in the first video to be detected and the candidate face images in the face library corresponding to any one of the target areas to obtain a matching result includes:
taking the face image of a current user stored in the face library corresponding to the consulting room area as a third candidate face image, where the current user is the visiting user who should enter the consulting room area at the current moment, or that visiting user together with the accompanying user corresponding to that visiting user;
matching the target video frame corresponding to the target user against the third candidate face image to obtain a third matching value;
and in a case that the third matching value is greater than or equal to a set third matching threshold, determining that the matching result is a successful match.
Here, by arranging the image acquisition device in the consulting room area, after the first detection result indicates that the detection is passed, the target video frame corresponding to the target user can be matched against the face image of the current user in the face library corresponding to the consulting room area to obtain a matching result, realizing control over which users enter the consulting room area and improving the safety of the consulting room area.
In one embodiment, before the acquiring, in response to the target user arriving at the image acquisition area in the hospital scene, the first video to be detected of the target user under the determined living body detection index, the method further includes:
in response to a registration operation triggered by the target user, acquiring a second video to be detected of the target user under the determined living body detection index;
performing target detection on the second video to be detected, and determining a third detection result corresponding to the target user under the determined living body detection index;
and generating, in response to the third detection result indicating that the detection is passed, indication information indicating that the registration is successful.
In this embodiment of the present disclosure, when the target user performs a registration operation, target detection is performed on the target user's second video to be detected under the living body detection index, and indication information indicating a successful registration is generated after the detection is passed, completing the registration operation. This alleviates the safety problem of a non-living-body user performing the registration operation and improves the safety of the hospital scene.
In a possible implementation, the matching, in a case that the first detection result indicates that the detection is passed, the target video frame corresponding to the target user in the first video to be detected against the candidate face images in the face library corresponding to the target area to obtain a matching result includes:
in a case that the first detection result indicates that the detection is passed, performing living body detection on a plurality of frames of face images included in the first detection result, and determining a second detection result corresponding to the target user, where each frame of face image is matched with one detection index in the living body detection index;
and in a case that the second detection result indicates that the target user is a living body, matching the target video frame in the first video to be detected against the candidate face images in the face library corresponding to the target area, to obtain the matching result.
Here, after the first detection result indicates that the detection is passed, living body detection can be performed on the plurality of frames of face images to determine whether the target user in the first video to be detected is a living body, obtaining a second detection result corresponding to the target user. The target video frame is matched against the candidate face images only when the second detection result indicates that the target user is a living body. This alleviates the safety problem of a user constructing the first video to be detected from images of other people and then passing face matching with a target video frame from that video, improving the safety of the hospital scene.
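The gating order described above (target detection first, then living body detection, then face matching) can be sketched as follows; the three callables are placeholders for models the disclosure does not specify:

```python
# Hedged sketch of the staged verification pipeline: face matching runs only
# when both the first detection result and the second (liveness) result pass.
# The callables stand in for unspecified detection and matching models.
from typing import Callable, List


def detect_then_match(frames: List,
                      first_detection: Callable[[List], bool],
                      liveness_check: Callable[[List], bool],
                      face_match: Callable[[List], bool]) -> bool:
    """Return True only if target detection, liveness detection, and face
    matching all succeed, in that order; short-circuit on any failure."""
    if not first_detection(frames):
        return False
    if not liveness_check(frames):
        return False
    return face_match(frames)
```

The short-circuit structure mirrors the passage: a failed earlier stage means the later (and typically more expensive) matching stage never runs.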
In a possible implementation, the performing living body detection on the plurality of frames of face images included in the first detection result and determining a second detection result corresponding to the target user includes:
performing living body detection on each frame of face image included in the first detection result to obtain an intermediate detection result corresponding to each frame of face image;
and determining that the second detection result indicates that the target user is a living body, in a case that the number of face images whose intermediate detection results indicate that the target user is a living body is greater than or equal to a target number.
Here, living body detection is performed on the plurality of frames of face images included in the first detection result to obtain an intermediate detection result corresponding to each frame of face image, and the intermediate detection results of the plurality of frames can then be used to flexibly determine the second detection result corresponding to the target user.
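The frame-counting rule above amounts to a simple vote over per-frame results; in the sketch below, the default target number of 3 is an assumed value, not one given in the disclosure:

```python
# Sketch of the multi-frame liveness vote: the target user is judged to be a
# living body when the number of frames whose intermediate detection result
# is "live" reaches the target number. The default of 3 is an assumption.
from typing import List


def second_detection_result(intermediate_results: List[bool],
                            target_number: int = 3) -> bool:
    """True iff at least `target_number` frames were judged to be live."""
    return sum(intermediate_results) >= target_number
```

Requiring several live frames rather than one makes the decision robust to an occasional misclassified frame.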
For descriptions of the effects of the apparatus, the electronic device, and the storage medium, reference is made to the description of the method above; details are not repeated here.
In a second aspect, the present disclosure provides an information detecting apparatus, comprising:
an acquisition module, configured to acquire, in response to a target user arriving at an image acquisition area in a hospital scene, a first video to be detected of the target user under a determined living body detection index; wherein the target user includes at least one of a visiting user and an accompanying user; the image acquisition area is an acquisition area corresponding to an image acquisition device in any target area in the hospital scene; and the living body detection index includes at least one of a detection index sequence composed of a plurality of detection colors and a detection index sequence composed of a plurality of detection actions;
a determining module, configured to perform target detection on the first video to be detected and determine a first detection result corresponding to the target user under the living body detection index;
a matching module, configured to match, in a case that the first detection result indicates that the detection is passed, a target video frame corresponding to the target user in the first video to be detected against candidate face images in a face library corresponding to the target area, to obtain a matching result;
and a control module, configured to control, in response to the matching result indicating a successful match, a target device to perform a target operation.
In a third aspect, the present disclosure provides an electronic device comprising: a processor, a memory and a bus, wherein the memory stores machine-readable instructions executable by the processor, the processor and the memory communicate via the bus when the electronic device is running, and the machine-readable instructions, when executed by the processor, perform the steps of the information detection method according to the first aspect or any one of the embodiments.
In a fourth aspect, the present disclosure provides a computer-readable storage medium having a computer program stored thereon, where the computer program is executed by a processor to perform the steps of the information detection method according to the first aspect or any one of the embodiments.
In order to make the aforementioned objects, features and advantages of the present disclosure more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
To illustrate the technical solutions of the embodiments of the present disclosure more clearly, the drawings required in the embodiments are briefly described below. The drawings herein are incorporated in and form a part of the specification, illustrate embodiments consistent with the present disclosure, and, together with the description, serve to explain the technical solutions of the present disclosure. It should be understood that the following drawings depict only certain embodiments of the disclosure and are therefore not to be considered limiting of its scope; those of ordinary skill in the art can derive other related drawings from these drawings without creative effort.
Fig. 1 is a schematic flow chart illustrating an information detection method provided by an embodiment of the present disclosure;
fig. 2 is a schematic flowchart illustrating a registration operation in an information detection method according to an embodiment of the present disclosure;
fig. 3 is a schematic diagram illustrating an architecture of an information detection apparatus provided in an embodiment of the present disclosure;
fig. 4 shows a schematic structural diagram of an electronic device provided in an embodiment of the present disclosure.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present disclosure more clear, the technical solutions of the embodiments of the present disclosure will be described clearly and completely with reference to the drawings in the embodiments of the present disclosure, and it is obvious that the described embodiments are only a part of the embodiments of the present disclosure, not all of the embodiments. The components of the embodiments of the present disclosure, as generally described and illustrated in the figures herein, could be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present disclosure, presented in the figures, is not intended to limit the scope of the claimed disclosure, but is merely representative of selected embodiments of the disclosure. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the disclosure without making creative efforts, shall fall within the protection scope of the disclosure.
Hospitals are places that people frequently visit in daily life. Generally, when a user seeks treatment at a hospital, staff need to verify the user's information through credentials such as a social security card or a registration slip, and the user can be seen only after the verification is passed. For example, after the user arrives at the consulting room where the user needs to be seen, staff need to perform a triage operation using the user's registration slip before the user can be seen. However, this manual verification method is cumbersome and inefficient; it can also cause crowding in the hospital, which lowers the safety of the hospital scene. Meanwhile, checking information against physical registration slips wastes resources such as paper.
In order to alleviate the above problems, the embodiments of the present disclosure provide an information detection method and apparatus, an electronic device, and a storage medium.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures.
It should be noted that if the technical solution of the present application involves personal information, a product applying the technical solution clearly informs users of the personal information processing rules and obtains their individual consent before processing the personal information. If the technical solution involves sensitive personal information, the product obtains individual consent before processing the sensitive personal information and also meets the requirement of "express consent". For example, at a personal information collection device such as a camera, a clear and prominent sign is set up to inform users that they are entering a personal information collection range and that personal information will be collected; a person who voluntarily enters the collection range is regarded as consenting to the collection of his or her personal information. Alternatively, on a device that processes personal information, individual authorization is obtained, with the personal information processing rules communicated via a prominent sign or notice, through means such as a pop-up window or by asking the person to upload his or her own personal information. The personal information processing rules may include information such as the personal information processor, the purpose of processing, the processing method, and the types of personal information to be processed.
For the convenience of understanding the embodiments of the present disclosure, a detailed description will be first given of an information detection method disclosed in the embodiments of the present disclosure. The main body of execution of the information detection method provided by the embodiment of the present disclosure is generally a computer device with certain computing capability, and the computer device includes, for example: a terminal device or a server; for example, the server may be a local server or a cloud server; the terminal device can be a mobile device or a computing device. In some possible implementations, the information detection method may be implemented by a processor calling computer readable instructions stored in a memory.
Referring to fig. 1, a schematic flow chart of an information detection method provided in the embodiment of the present disclosure is shown, where the method includes S101-S104, specifically:
S101, in response to a target user arriving at an image acquisition area in a hospital scene, acquiring a first video to be detected of the target user under a determined living body detection index; wherein the target user includes at least one of a visiting user and an accompanying user; the image acquisition area is an acquisition area corresponding to an image acquisition device in any target area in the hospital scene; and the living body detection index includes at least one of a detection index sequence composed of a plurality of detection colors and a detection index sequence composed of a plurality of detection actions.
S102, performing target detection on the first video to be detected, and determining a first detection result corresponding to the target user under the living body detection index.
S103, in a case that the first detection result indicates that the detection is passed, matching the target video frame corresponding to the target user in the first video to be detected against the candidate face images in the face library corresponding to the target area, to obtain a matching result.
S104, in response to the matching result indicating a successful match, controlling the target device to perform the target operation.
With the above method, when the target user arrives at an image acquisition area in the hospital scene, such as a hospital entrance area, a triage room area, or a consulting room area, a first video to be detected of the target user under the determined living body detection index can be acquired. In a case that the first detection result corresponding to the first video to be detected indicates that the detection is passed, the target video frame corresponding to the target user in the first video to be detected is matched against the candidate face images in the face library corresponding to the target area to obtain a matching result; and in response to the matching result indicating a successful match, the target device is controlled to perform the target operation. Automatic verification of the target user's information is thereby realized; compared with manual verification, the efficiency and accuracy of information verification are improved, crowding in the hospital is alleviated, and the operation efficiency of the hospital scene is improved. The method can also reduce contact between medical staff and patients, improving the safety of the hospital scene.
Meanwhile, by performing target detection on the first video to be detected, whether the first video to be detected was acquired in real time can be determined. This mitigates the security problem of a match succeeding when a user performs face matching with a pre-recorded video or with face images of other people, deters users other than the target user from breaking into the target area of the hospital, and improves the safety of the target area in the hospital.
S101 to S104 will be specifically described below.
For S101:
in a hospital scene, when a visiting user and an accompanying user arrive at the gate of the hospital, the information of the visiting user and the accompanying user can be verified at the gate, and after the verification succeeds, the visiting user and the accompanying user can enter the hospital. After entering the hospital, if the visiting user has not registered, registration can be performed manually or on registration equipment in the hospital; after the registration succeeds, the visiting user can go to the triage room corresponding to the registration operation for triage, and when it is the visiting user's turn to be seen, the visiting user and the corresponding accompanying user can go to the consulting room for the visit. The process of verifying the information of the visiting user or the accompanying user at the hospital gate can be: determining whether the medical insurance card information of the visiting user matches the face image of the visiting user, whether the identity information of the visiting user matches the face image of the visiting user, and the like.
In practice, the target area may be any one of a hospital entrance area, a triage room area, a clinic area, a registration area, and the like in a hospital scene. An image acquisition device can be arranged in the target area, and an acquisition area corresponding to the image acquisition device is included in the target area, so that the image acquisition device can acquire an image of the target area.
After the target user arrives at the image acquisition area corresponding to the image acquisition equipment of any target area in the hospital scene, a living body detection index corresponding to the target user is generated in response to the target user arriving at the image acquisition area in the hospital scene. The living body detection index comprises a detection index sequence formed by a plurality of detection colors and/or a detection index sequence formed by a plurality of detection actions. The number of detection colors and detection actions in the detection index sequence can be set according to the required security level; for example, the higher the security level, the larger the number of detection colors and detection actions. The content of the detection colors and detection actions can be set according to actual needs. For example, the detection action may be: opening the mouth, nodding, turning left, turning right, and the like. The target user may be a visiting user, or may be the visiting user and an accompanying user corresponding to the visiting user.
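As a minimal Python sketch of how such a randomized living body detection index could be generated, the snippet below maps a security level to a sequence length. The candidate color and action pools, the function name `generate_liveness_index`, and the level-to-length mapping are illustrative assumptions; the patent only states that a higher security level corresponds to longer detection index sequences.

```python
import random

# Hypothetical candidate pools; the exact sets are not fixed by the text.
DETECTION_COLORS = ["red", "green", "blue", "yellow", "purple"]
DETECTION_ACTIONS = ["open_mouth", "nod", "turn_left", "turn_right", "blink"]

def generate_liveness_index(security_level, use_colors=True, use_actions=True):
    """Generate a random living body detection index.

    A higher security level yields longer detection index sequences
    (an illustrative linear mapping, not specified in the source).
    """
    length = 2 + security_level
    index = {}
    if use_colors:
        # Sampling with replacement: the same color may repeat in the sequence.
        index["colors"] = random.choices(DETECTION_COLORS, k=length)
    if use_actions:
        index["actions"] = random.choices(DETECTION_ACTIONS, k=length)
    return index
```

Because the sequence is generated per arrival, a pre-recorded video is unlikely to match the challenge, which is the property the target detection step relies on.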
After the living body detection index corresponding to the target user is generated, the living body detection index can be issued to the image acquisition device in the target area, which acquires the first video to be detected of the target user under the living body detection index; the execution subject can then acquire the first video to be detected from the image acquisition device. The image acquisition device can be a camera, an access control device, a mobile device with a camera, or the like.
For example, when the living body detection index includes a color sequence (i.e., a detection index sequence) formed by a plurality of detection colors, the screen of the image acquisition device flashes colors according to the color sequence indicated by the living body detection index, and the first video to be detected of the target user during the color flashing process is acquired.
When the living body detection index comprises an action sequence (namely a detection index sequence) formed by a plurality of detection actions, the image acquisition equipment sequentially sends action indication information of the detection actions, so that a target user completes the specified action according to the action indication information, and a first video to be detected of the target user in the process of completing the specified action is acquired.
In order to improve the accuracy of target detection, the living body detection index may include both a color sequence made up of a plurality of detection colors and an action sequence made up of a plurality of detection actions. In this case, the screen of the image acquisition equipment can flash colors according to the color sequence indicated by the living body detection index while the equipment sequentially sends the action indication information of each detection action, so as to acquire a first video to be detected of the target user completing the specified actions during the color flashing.
For S102:
in implementation, target detection can be performed on each video frame in the first video to be detected to determine the first detection result corresponding to the target user under the living body detection index. For example, each video frame may be input into a neural network for detection to determine the detection index corresponding to that video frame; then, according to the detection indexes respectively corresponding to the video frames, whether the detection index sequence corresponding to the first video to be detected matches the detection index sequence in the determined living body detection index is determined, and the first detection result corresponding to the target user is obtained: if the sequences match, the first detection result is that the detection is passed; if not, the first detection result is that the detection is not passed.
For example, when the living body detection index includes a detection index sequence formed by a plurality of detection colors and does not include a detection index sequence formed by a plurality of detection actions, each video frame included in the first video to be detected may be input into the dazzle color detection neural network, so as to obtain a target color corresponding to each video frame. For example, the dazzle color detection neural network may output the probability of the video frame under each preset color, and select the color with the highest probability as the target color corresponding to the video frame. Determining a target color sequence corresponding to the first video to be detected according to the target colors respectively corresponding to the video frames, and further determining whether the target color sequence corresponding to the first video to be detected is matched with a detection index sequence formed by a plurality of detection colors included in the living body detection index; if so, determining that a first detection result corresponding to the target user is a detection pass; and if not, determining that the first detection result corresponding to the target user is that the detection is failed.
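The per-frame color selection and sequence comparison described above can be sketched as follows. Collapsing consecutive duplicate frame colors into runs is an assumed implementation detail (many frames are captured while each single color is displayed), and all names are hypothetical; the dazzle color detection neural network is stood in for by pre-computed per-frame probability dictionaries.

```python
def frame_target_color(color_probs):
    """Select the preset color with the highest probability as the
    target color of the frame, as the text describes."""
    return max(color_probs, key=color_probs.get)

def collapse_runs(seq):
    """Collapse consecutive duplicate frame colors into one entry,
    since several consecutive frames are captured per flashed color."""
    out = []
    for c in seq:
        if not out or out[-1] != c:
            out.append(c)
    return out

def color_detection_result(per_frame_probs, expected_colors):
    """Return True (detection passed) iff the target color sequence of the
    video matches the detection index sequence of detection colors."""
    target_seq = collapse_runs([frame_target_color(p) for p in per_frame_probs])
    return target_seq == list(expected_colors)
```

For instance, frames whose highest-probability colors are red, red, blue collapse to the sequence [red, blue], which passes only against the expected sequence [red, blue].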
When the living body detection index includes a detection index sequence formed by a plurality of detection actions and does not include a detection index sequence formed by a plurality of detection colors, the action indication information of each detection action can be displayed in sequence, and a current first video to be detected of the target user executing the corresponding action based on the action indication information is acquired. When it is detected that the action information included in the current first video to be detected does not match the detection action currently displayed by the image acquisition equipment, the first detection result corresponding to the target user is determined to be a detection failure, and the information detection process corresponding to the target user ends. When it is detected that the action information included in the current first video to be detected matches the detection action currently displayed by the image acquisition equipment, the image acquisition equipment is controlled to display the action indication information of the next detection action in the living body detection index, and the step of acquiring the current first video to be detected of the target user executing the corresponding action is returned to, until each detection action in the living body detection index has been displayed. When the action information included in each current first video to be detected matches the detection action then displayed by the image acquisition equipment, that is, the detected action sequence is consistent with the detection index sequence formed by the plurality of detection actions in the living body detection index, the first detection result corresponding to the target user is determined to be that the detection is passed.
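The step-by-step action check above is essentially an early-exit loop over the action sequence. The sketch below uses a hypothetical `detect_action` callable standing in for the action detection neural network: given the currently displayed detection action, it returns the action recognized in the current first video to be detected. All names and the string return values are illustrative assumptions.

```python
def run_action_sequence(expected_actions, detect_action):
    """Display each detection action in turn, check the recognized action,
    and end the information detection process on the first mismatch."""
    for action in expected_actions:
        # (In the real flow: show the action indication information and
        #  capture the current first video to be detected here.)
        recognized = detect_action(action)
        if recognized != action:
            return "detection_failed"
    # Every detection action matched, i.e. the detected action sequence is
    # consistent with the detection index sequence.
    return "detection_passed"
```

A user replaying a fixed recording would typically fail at the first action whose indication does not match the recording.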
The action detection neural network can be used for detecting whether action information included in the current video to be detected is matched with a detection action currently displayed by the image acquisition equipment.
When the living body detection index comprises a detection index sequence formed by a plurality of detection colors and a detection index sequence formed by a plurality of detection actions, the image acquisition equipment can be controlled to sequentially display action indication information of each detection action in the living body detection index and synchronously flicker and display each detection color in the living body detection index, and then the image acquisition equipment can acquire a current first video to be detected of a target user executing a specified action corresponding to the action indication information under the mapping of each detection color displayed in a flickering mode.
In implementation, the dazzle color detection neural network and the action detection neural network can perform dazzle color detection and action detection on the current first video to be detected in parallel; alternatively, the action detection neural network may perform action detection first, followed by dazzle color detection with the dazzle color detection neural network; or dazzle color detection may be performed first, followed by action detection. For the action detection and the dazzle color detection, reference may be made to the above description, which is not repeated here.
For S103 and S104:
when the first detection result indicates that the detection is passed, a candidate face image in a face library corresponding to any target area may be determined, for example, when the target area is a hospital entrance/exit area, the face library corresponding to the target area may store a registered face image of any user; and each face image in the face library can be used as a candidate face image. And matching the target video frame corresponding to the target user in the first video to be detected with the candidate face images corresponding to the target area respectively to obtain a matching result. The matching result may include a matching success or a matching failure. For example, a face matching neural network may be used to determine the matching result between the target video frame and each candidate face image.
In implementation, when the first video to be detected is subjected to target detection to obtain the first detection result, the quality score of each video frame in the first video to be detected can also be obtained. The quality score is related to information such as the lighting, sharpness, face angle, and occluded face area of the video frame; the video frame with the maximum quality score is selected as the target video frame corresponding to the target user.
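The source names the factors that influence the quality score but not the scoring function itself, so the weighting below is purely illustrative; frames are represented as dictionaries of assumed, pre-computed normalized attributes.

```python
def quality_score(frame):
    """Toy quality score over the factors named in the text: lighting,
    sharpness, face angle, and occluded face area (weights are assumptions)."""
    return (frame["lighting"] + frame["sharpness"]
            - abs(frame["face_angle"]) / 90.0
            - frame["occlusion_ratio"])

def select_target_frame(frames):
    """The frame with the maximum quality score is taken as the target
    video frame corresponding to the target user."""
    return max(frames, key=quality_score)
```

Selecting the highest-quality frame here matters because the same frame is later matched against the candidate face images, where lighting and occlusion directly affect matching values.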
Before the first detection result is obtained, a face image corresponding to each detection index (for example, each detection color and/or detection action) in the living body detection index may be determined based on the quality score corresponding to each video frame, so that the first detection result is generated by using the face image and the dazzle color detection result or action detection result corresponding to each detection index.
In an implementation manner, when the first detection result indicates that the detection is passed, matching a target video frame corresponding to the target user in the first video to be detected with candidate face images in a face library respectively corresponding to the any target area to obtain a matching result includes:
Step A1, under the condition that the first detection result indicates that the detection is passed, performing living body detection on the multiple frames of face images included in the first detection result, and determining a second detection result corresponding to the target user; wherein each frame of face image matches one detection index in the living body detection index.
Step A2, when the second detection result indicates that the target user belongs to a living body, matching the target video frame in the first video to be detected with the candidate face images in the face library corresponding to any one of the target areas respectively to obtain a matching result.
In step a1, when the living body detection index includes a detection index sequence made up of a plurality of detection colors, for each detection color, a quality score of each video frame matching the detection color is determined. And selecting the video frame with the maximum quality score from the video frames matched with the detection color as a face image corresponding to the detection color and used for living body detection. And then the face image which corresponds to each detection color and is used for carrying out living body detection can be obtained. For example, if the detection index sequence is composed of 5 detection colors, 5 frames of face images for live body detection can be obtained. And generating a first detection result based on the face images corresponding to the detection colors, so that the first detection result comprises a plurality of frames of face images.
When the living body detection index includes a detection index sequence composed of a plurality of detection actions, for each detection action, a quality score of each video frame matching the detection action is determined. And selecting the video frame with the maximum quality score from the video frames matched with the detection action as a face image corresponding to the detection action and used for carrying out living body detection. Further, a face image for performing living body detection corresponding to each detection action can be obtained.
When the living body detection index comprises a detection index sequence formed by a plurality of detection actions and a detection index sequence formed by a plurality of detection colors, a face image for living body detection corresponding to each detection color and a face image for living body detection corresponding to each detection action can be obtained. For example, when the living body detection index includes a detection index sequence composed of 5 detection colors and a detection index sequence composed of 5 detection actions, 10 frames of face images for living body detection can be obtained.
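The per-index selection described in the preceding paragraphs can be sketched as a grouped maximum: frames are grouped by the detection index (color or action) they match, and the highest-quality frame of each group becomes that index's face image for living body detection. The dictionary representation of a frame (keys `index` and `quality`) is an assumption for illustration.

```python
def best_frames_per_index(frames):
    """For each detection index, keep the video frame with the maximum
    quality score as the face image used for living body detection."""
    best = {}
    for f in frames:
        cur = best.get(f["index"])
        if cur is None or f["quality"] > cur["quality"]:
            best[f["index"]] = f
    return best
```

With 5 detection colors and 5 detection actions, the result would contain 10 entries, matching the 10 frames of face images mentioned in the example above.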
In implementation, when the first detection result indicates that the detection is passed, living body detection may be performed on each determined frame of face image to determine the second detection result corresponding to the target user. For example, an anti-spoofing (Hack) detection neural network for preventing illegal intrusion may perform Hack detection on each frame of face image to determine the second detection result corresponding to the target user. The second detection result includes that the target user belongs to a living body or that the target user does not belong to a living body.
Here, the video frames with the highest quality scores in the first video to be detected are selected as the face images for living body detection, so that the accuracy of the living body detection performed on the multiple frames of face images is improved.
In an implementation manner, in step A1, performing living body detection on the multiple frames of face images included in the first detection result and determining the second detection result corresponding to the target user may include:
Step A11, performing living body detection on each frame of face image included in the first detection result to obtain an intermediate detection result corresponding to each frame of face image.
Step A12, determining that the second detection result corresponding to the target user is that the target user belongs to a living body when the intermediate detection result corresponding to the target face image in the multiple frames of face images indicates that the target user belongs to a living body, and/or when the number of face images whose corresponding intermediate detection result indicates that the target user belongs to a living body is greater than or equal to a target number.
When the method is implemented, the living body detection can be carried out on each frame of face image, and an intermediate detection result corresponding to each frame of face image is obtained.
In the first case, when the intermediate detection result corresponding to the target face image in the multiple frames of face images indicates that the target user belongs to a living body, it may be determined that the second detection result corresponding to the target user is that the target user belongs to a living body. The target face image can be the face image with the highest quality score among the multiple frames of face images. In the second case, after the intermediate detection results corresponding to the frames of face images are obtained, the number of face images whose intermediate detection result indicates that the target user belongs to a living body may be counted; when this number is greater than or equal to the target number, the second detection result corresponding to the target user is determined to be that the target user belongs to a living body. In the third case, when the intermediate detection result corresponding to the target face image indicates that the target user belongs to a living body and the number of face images whose intermediate detection result indicates that the target user belongs to a living body is greater than or equal to the target number, the second detection result corresponding to the target user is determined to be that the target user belongs to a living body.
In implementation, the ratio between the number of face images whose intermediate detection result indicates that the target user belongs to a living body and the total number of the multiple frames of face images may also be determined; when the ratio is greater than or equal to a set target ratio, the second detection result corresponding to the target user is determined to be that the target user belongs to a living body.
Illustratively, when the living body detection index includes a detection index sequence formed by 5 detection colors, a face image corresponding to each detection color is obtained, that is, 5 frames of face images are obtained. Living body detection is performed on each frame of face image respectively to obtain the corresponding intermediate detection results, that is, 5 intermediate detection results. When the intermediate detection result corresponding to the target face image indicates that the target user belongs to a living body, and/or when the intermediate detection results corresponding to at least 3 frames of face images (3 being the target number) indicate that the target user belongs to a living body, the second detection result corresponding to the target user is determined to be that the target user belongs to a living body.
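The aggregation variants described above (target-face-image result, count threshold, ratio threshold, and their combinations) can be condensed into one function. This sketch adopts the "or" reading, where any single satisfied criterion suffices; the function name and the boolean list representation of the intermediate detection results are assumptions.

```python
def aggregate_liveness(intermediate, target_index=None,
                       target_count=None, target_ratio=None):
    """Decide the second detection result from per-frame intermediate results.

    intermediate  : list of booleans, True = frame judged to be a living body.
    target_index  : position of the target face image (highest quality score).
    target_count  : minimum number of live frames (the target number).
    target_ratio  : minimum fraction of live frames (the target ratio).
    Criteria set to None are skipped; any satisfied criterion yields True.
    """
    if target_index is not None and intermediate[target_index]:
        return True
    live = sum(intermediate)
    if target_count is not None and live >= target_count:
        return True
    if target_ratio is not None and live / len(intermediate) >= target_ratio:
        return True
    return False
```

The stricter "and" variant (third case above) would instead require both the target-image criterion and the count criterion before returning True.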
In this case, the living body detection is performed on the multiple frames of face images included in the first detection result to obtain an intermediate detection result corresponding to each frame of face image, and then the intermediate detection result corresponding to the multiple frames of face images can be used to flexibly determine the second detection result corresponding to the target user.
In step a2, in implementation, when the second detection result indicates that the target user belongs to a living body, matching the target video frame in the first video to be detected with the candidate face images in the face library corresponding to any one of the target areas, respectively, to obtain a matching result; when the second detection result indicates that the target user does not belong to the living body, subsequent matching operation is not performed, and warning information prompting that the target user does not belong to the living body can be generated.
After the first detection result indicates that the detection is passed, living body detection can be performed on the multiple frames of face images to determine whether the target user in the first video to be detected belongs to a living body, obtaining the second detection result corresponding to the target user. The target video frame is matched with the candidate face images only when the second detection result indicates that the target user belongs to a living body. This mitigates the security problem of a user constructing the first video to be detected from images of other people and then performing face matching with a target video frame of that video, and improves the safety of the hospital scene.
The hospital scene may include a hospital entrance area, a triage room area, a clinic area, and the like, and the following description will be made of a case where the target area is the hospital entrance area, the target area is the triage room area, and the target area is the clinic area.
First, a case where the target area is a hospital entrance area will be described.
When the target area is a hospital entrance area in the hospital scene, the face library corresponding to the target area stores the face image of any registered user.
In S103, matching the target video frame corresponding to the target user in the first video to be detected with the candidate face images in the face library corresponding to the any target area, respectively, to obtain a matching result, which may include:
and taking each frame of face image stored in a face library corresponding to the entrance and exit area of the hospital as a first candidate face image, and respectively matching a target video frame corresponding to the target user with each frame of the first candidate face image to obtain a first matching value between the target user and each frame of the first candidate face image.
When any first matching value is larger than or equal to a set first matching threshold value, the matching result is successful; and under the condition that each first matching value is smaller than the set first matching threshold, the matching result is matching failure.
In response to the target user arriving at the image acquisition area of the hospital entrance area in the hospital scene, the first video to be detected of the target user is acquired, and after the first detection result is obtained, the target video frame in the first video to be detected is determined; the target video frame is the video frame with the highest quality score in the first video to be detected.
A corresponding face library may be set for the hospital entrance and exit area, and the face library may store the face images of any registered users. The face library may further store the user information corresponding to each face image (the user information being the information of the user to whom the face image belongs). Registered users include, but are not limited to: patients who need a visit, the accompanying users corresponding to those patients, and medical staff. The face images in the face library corresponding to the hospital entrance and exit area and the corresponding user information may be obtained from an overall face library corresponding to the hospital scene. The overall face library may store the face images and user information of each user present in the hospital scene. Users present in the hospital scene may include, but are not limited to: users with a historical visit, registered users who have not yet visited, users currently visiting, medical care personnel, hospital administrators, and the like. The user information may include: name, gender, age, identification card number, medical insurance card number, registration information, and the like.
Or the face library corresponding to the entrance and exit area of the hospital may store the face image and the user information of the visiting user who needs to visit on the current date, and the face image and the user information of the accompanying user corresponding to the visiting user who needs to visit on the current date.
When the method is implemented, each frame of face image in a face library corresponding to a hospital entrance area is used as a first candidate face image, and a target video frame corresponding to a target user is respectively matched with each frame of the first candidate face image to obtain a first matching value. For example, a face matching neural network may be used to perform matching between the target video frame and the first candidate face image. When the number of the first candidate face images is multiple, multiple first matching values can be obtained.
If any first matching value is greater than or equal to the set first matching threshold, the matching result is that the matching is successful; if each obtained first matching value is smaller than the set first matching threshold, that is, no first matching value among the plurality of first matching values is greater than or equal to the first matching threshold, the matching result is a matching failure. For example, the first matching threshold may be 0.7 (out of a full score of 1), 7 (out of a full score of 10), and so on.
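The thresholding logic above reduces to checking whether any first matching value reaches the first matching threshold. In the sketch below, `similarity` stands in for the face matching neural network and is assumed to return a matching value in [0, 1]; the 0.7 threshold follows the example in the text, and the function and return-string names are hypothetical.

```python
def match_against_library(similarity, target_frame, candidates, threshold=0.7):
    """Match the target video frame against each first candidate face image;
    succeed if any first matching value reaches the first matching threshold."""
    scores = [similarity(target_frame, c) for c in candidates]
    if any(s >= threshold for s in scores):
        return "match_success"
    return "match_failure"
```

The same logic applies to the second candidate face images of the triage room area, with the second matching threshold substituted.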
The image acquisition equipment is arranged in the hospital entrance and exit area. After the first detection result indicates that the detection is passed, the target video frame corresponding to the target user can be matched with each frame of face image in the face library corresponding to the hospital entrance and exit area to obtain the matching result, realizing face matching for the target user. After the matching succeeds, the target equipment is controlled to execute the target operation; for example, the access control equipment in the hospital entrance and exit area can be controlled to open, improving the safety of the hospital scene.
In one embodiment, controlling the target device to perform the target operation in response to the matching result being that the matching is successful includes: controlling the target equipment to open in response to the matching result being that the matching is successful.
When the target area is a hospital entrance area, the target device may be an access control device such as an automatic door or a gate installed in the hospital entrance area. In implementation, in response to the matching result that the matching is successful, a control instruction for controlling the target device can be generated, and the control instruction is used for controlling the target device to be opened, so that the successfully matched target user can enter the hospital, the user who enters and exits the hospital can be controlled, meanwhile, the purpose of controlling the flow of people in the hospital can be achieved, and the safety of a hospital scene is improved.
In one embodiment, the method further comprises: acquiring user information of the target user and a face image corresponding to the target user under the condition that the matching result is matching failure; and storing the user information and the face image corresponding to the target user into the face library corresponding to the entrance and exit area of the hospital.
Under the condition that the matching result is a matching failure, the user information of the target user and the face image corresponding to the target user are acquired. In one mode, the user information and the face image of the target user may be obtained automatically through the equipment; for example, first prompt information may be generated and displayed by the equipment, prompting the target user to fill in user information such as name, gender, age, identification card number, and medical insurance card number, and the face image corresponding to the target user can be acquired through the equipment. In another mode, the user information and the face image of the target user may be obtained manually.
And storing the acquired user information and the acquired face image corresponding to the target user into a face library corresponding to the entrance and exit area of the hospital so that the target user can pass through the entrance and exit area of the hospital again in a face matching mode.
Here, when the matching result is that the matching fails, the user information of the target user and the face image corresponding to the target user may be acquired, the user information and the face image corresponding to the target user are stored in the face library corresponding to the entrance and exit area of the hospital, and information registration of the target user is realized, so that information detection can be performed subsequently by using the acquired user information and the face image.
Second, a case where the target area is a triage room area will be described.
When the target area is a triage room area in the hospital scene, the face library corresponding to the triage room area may store the face images respectively corresponding to the visiting users who need to visit the triage room on the current date and the accompanying users corresponding to those visiting users.
In implementation, there may be a plurality of face libraries corresponding to the triage room area, each corresponding to one date; that is, each face library stores the face images of the visiting users who need to visit on the corresponding date and the face images of the accompanying users corresponding to those visiting users. For each current date, the face library corresponding to that date may be determined. For example, the triage room area may correspond to a face library 1 matched with January 1, a face library 2 matched with January 2, a face library 3 matched with January 3, and so on. When the current date is January 1, the face library corresponding to the triage room area is determined to be face library 1.
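The date-keyed selection of a triage-room face library amounts to a dictionary lookup by current date. The sketch below assumes the libraries are held in a mapping from `datetime.date` to a library handle (here just a string), and that a missing date yields `None` as a fallback; both are illustrative choices not specified in the source.

```python
import datetime

def library_for_date(date_to_library, current_date):
    """Return the triage-room face library matching the current date,
    or None when no library was prepared for that date (assumed fallback)."""
    return date_to_library.get(current_date)
```

Keeping one library per date keeps the candidate set small, so the target video frame is only compared against users actually expected at the triage room that day.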
In S103, matching the target video frame corresponding to the target user in the first video to be detected with the candidate face images in the face library corresponding to the any target area, respectively, to obtain a matching result, which may include:
each frame of face image stored in the face library corresponding to the triage room area is taken as a second candidate face image, and the target video frame corresponding to the target user is matched with each frame of second candidate face image, obtaining a second matching value between the target user and each frame of second candidate face image.
When any second matching value is greater than or equal to a set second matching threshold, the matching result is a matching success.
When every second matching value is smaller than the set second matching threshold, the matching result is a matching failure.
Here, in response to the target user arriving at the image acquisition area of the triage room area in the hospital scene, the first video to be detected of the target user is acquired, and after the first detection result is obtained, the target video frame is determined from the first video to be detected.
A corresponding face library can be set for the triage room area, storing the face images of the visiting users who need to see a doctor in that triage room on the current date and the face images of their accompanying users. The face images in this face library, and the user information corresponding to them, are obtained from the overall face library corresponding to the hospital scene.
In implementation, each frame of face image in the face library corresponding to the triage room area is used as a second candidate face image, and a target video frame corresponding to a target user is respectively matched with each frame of second candidate face image to obtain a second matching value. For example, a face matching neural network may be used to perform matching between the target video frame and the second candidate face image. When the number of the second candidate face images is multiple, multiple second matching values can be obtained.
If any second matching value is greater than or equal to the set second matching threshold, the matching result is a matching success; if every obtained second matching value is smaller than the set second matching threshold, the matching result is a matching failure. The second matching threshold may be determined according to the security level of the triage room area, and may be the same as or different from the first matching threshold. For example, if the security level of the triage room area is higher than that of the hospital entrance and exit area, the second matching threshold is greater than the first matching threshold.
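The success/failure rule above can be sketched as follows (the function name and the plain list of match values are assumptions for illustration; in the source, the values come from a face matching neural network):

```python
def matching_result(match_values, match_threshold):
    """Matching succeeds if any candidate's match value reaches the threshold,
    and fails only when every value falls below it."""
    if any(v >= match_threshold for v in match_values):
        return "success"
    return "failure"
```

The same rule applies with the first matching threshold at the hospital entrance and exit area; only the threshold value, chosen per the area's security level, differs.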
By arranging the image acquisition device in the triage room area, after the first detection result indicates that detection has passed, the target video frame corresponding to the target user can be matched against each frame of face image in the face library corresponding to the triage room area to obtain a matching result, realizing face matching for the target user; after a successful match, the target device is controlled to execute the target operation, realizing automatic triage for the target user and improving triage efficiency.
In one embodiment, in response to the matching result being a successful matching, the target device is controlled to perform a target operation, including: responding to the matching result that the matching is successful, and acquiring identification information corresponding to the target user; and controlling the target equipment to update the current identification information sequence based on the identification information corresponding to the target user to obtain the updated identification information sequence.
In response to the matching result being a matching success, the identification information corresponding to the target user is acquired. The identification information may be one or more items of the user information; for example, it may include a name, an age, registration information (a number generated at registration), and the like. The target device is then controlled to update the current identification information sequence based on the identification information corresponding to the target user, obtaining an updated identification information sequence. For example, if the identification information corresponding to the target user is Zhang San (name)-06 (registration information) and the current identification information sequence is Zhang Yi-01, Zhang Er-02, then the updated identification information sequence is Zhang Yi-01, Zhang Er-02, Zhang San-06, so that the medical staff in the consulting room can examine each user in the order indicated by the updated identification information sequence.
Wherein, when the target area is a diagnosis room area, the target device can be a diagnosis instrument.
When the target user is the first visiting user of the consulting room, the current identification information sequence is an initialized empty sequence; the triage instrument (that is, the target device) is controlled to update this empty sequence to obtain the updated identification information sequence, in which the identification information of the target user occupies the first position and the later positions are empty.
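A minimal sketch of this sequence update (the list representation of the sequence and the function name are assumptions; the names follow the transliterated example above):

```python
def update_identification_sequence(current_sequence, identification):
    """Append the target user's identification info (name-registration number)
    to the current sequence, as the triage instrument would on a successful match."""
    return current_sequence + [identification]

# First visitor: the current sequence is the initialized empty sequence.
queue = update_identification_sequence([], "Zhang Yi-01")
queue = update_identification_sequence(queue, "Zhang Er-02")
queue = update_identification_sequence(queue, "Zhang San-06")
```

Medical staff would then call users in the order of `queue`, replacing paper triage slips.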
In the embodiment of the disclosure, the identification information corresponding to the target user is obtained by responding to the matching result as the matching is successful, and the target device is controlled to update the current identification information sequence by using the identification information to obtain the updated identification information sequence, so that the user can see and diagnose according to the user sequence indicated by the updated identification information sequence, the automatic triage operation of the target user is realized, the triage efficiency is improved, and the consumption of paper resources such as a triage list is reduced. Meanwhile, the contact between medical care personnel and the personnel in the clinic is reduced, and the safety of the hospital is improved.
In one embodiment, the method further includes: generating prompt information for prompting the target user to register when the matching result is a matching failure; and, in response to a successful registration operation by the target user, storing the acquired face image corresponding to the target user into the face library corresponding to the triage room area.
When matching fails, it is determined that the face image corresponding to the target user is not stored in the face library corresponding to the triage room area, meaning the target user may not have registered for an appointment with a doctor in this consulting room. On this basis, when the matching result is a matching failure, prompt information prompting the target user to perform a registration operation can be generated. The target device (such as a triage instrument) can be controlled to display the prompt information, or the prompt information can be played by voice.
After the target user registers successfully, in response to the successful registration operation, the face image corresponding to the target user is acquired and stored into the face library corresponding to the triage room area. The face image can be obtained during the registration operation, or retrieved from the overall face library using the user information.
In implementation, the user information corresponding to the face image may also be stored into the face library corresponding to the triage room area, so that after the target user is successfully triaged, the user information and face image corresponding to the target user can be acquired from that library and stored into the face library of the clinic area associated with the triage room.
In the embodiment of the disclosure, when the matching result is that the matching fails, prompt information for prompting the user to perform registration operation can be generated, the target user is flexibly prompted, and interaction between the target device and the target user is realized, so that the target user can perform subsequent operation according to the prompt information, and the diagnosis efficiency of the target user is improved. Meanwhile, in response to the success of the registration operation of the target user, the acquired face image corresponding to the target user is stored in a face library corresponding to the triage room area, so that data support is provided for the subsequent triage operation on the target user.
Third, a case where the target area is a clinic area will be described.
When the target area is a clinic area in a hospital scene, the face library corresponding to the clinic area stores the face image of the user to be diagnosed who has undergone triage and the face image corresponding to the accompanying user corresponding to the user to be diagnosed.
In implementation, in the triage room area, after the target device is controlled to update the current identification information sequence based on the identification information corresponding to the target user to obtain the updated identification information sequence, the face image and user information corresponding to the target user can be determined, based on that identification information, from the face library corresponding to the triage room area, and then stored into the face library corresponding to the clinic area.
In S103, the matching the target video frame corresponding to the target user in the first video to be detected and the candidate face images in the face library corresponding to any target area respectively to obtain a matching result may include:
taking the face image corresponding to the current user stored in the face library corresponding to the clinic area as a third candidate face image; the current user is the current visiting user who should enter the clinic area at the current moment, or the current visiting user who should enter the clinic area at the current moment together with the accompanying user corresponding to that visiting user;
matching the target video frame corresponding to the target user with the third candidate face image to obtain a third matching value;
and when the third matching value is greater than or equal to a set third matching threshold, the matching result is successful.
In implementation, the face image of the current user who needs to enter the clinic area at the current time may be determined from the face library corresponding to the clinic area, and the face image corresponding to the current user may be used as the third candidate face image. The current user may include a current visiting user, or the current visiting user and a co-attending user corresponding to the current visiting user.
The target video frame corresponding to the target user is matched with the third candidate face image to obtain a third matching value. If the third matching value is greater than or equal to the set third matching threshold, the matching result is a matching success; if the third matching value is smaller than the set third matching threshold, the matching result is a matching failure. The third matching threshold can be set according to the required security level.
Here, by arranging the image acquisition device in the clinic area, after the first detection result is that the detection is passed, the target video frame corresponding to the target user can be used to match with the face image of the current user in the face library corresponding to the clinic area, so as to obtain a matching result, thereby realizing control over the user entering the clinic area and improving the safety of the clinic area.
In one embodiment, in response to the matching result being a successful matching, the target device is controlled to perform a target operation, including: and controlling the target equipment to be started in response to the matching result being successful in matching.
In response to the matching result being a matching success, a control instruction is generated and sent to the target device, and the control instruction is used to control the target device to open. When the target area is a clinic area, the target device may be an access control device provided in the clinic area, such as an automatic door or a gate.
If the matching result is a matching failure, the matching operation can be retried; if matching fails multiple times, indication information prompting the target user to seek manual processing can be generated.
In this embodiment, in response to the matching result being a matching success, the target device is controlled to open, thereby controlling which users enter the clinic area: for example, the current visiting user can enter the clinic area, but users who are not the current visiting user cannot, protecting the visiting user's privacy and improving the safety of medical visits.
In one implementation, referring to fig. 2, before the first to-be-detected video of the target user under the determined liveness detection index is acquired in response to the target user arriving at an image acquisition area in the hospital scene, the method further includes:
s201, responding to a registration operation triggered by the target user, and acquiring a second video to be detected of the target user under the determined living body detection index.
S202, performing target detection on the second video to be detected, and determining a third detection result corresponding to the target user under the determined living body detection index.
S203, responding to the third detection result indicating that the detection is passed, and generating indication information indicating that the registration is successful.
Before the target user goes to a hospital for a doctor, registration operation can be performed. In response to the registration operation triggered by the target user, a live body detection index can be determined for the target user, and a second video to be detected of the target user under the determined live body detection index is acquired. The second video to be detected can be acquired by the mobile device when the target user performs registration operation on the mobile device such as a mobile phone and a computer, and the second video to be detected can be acquired from the mobile device.
In implementation, in response to the target user triggering registration operation, a living body detection index corresponding to the target user is generated, and the generated living body detection index can be issued to the mobile device, so that the mobile device can display the living body detection index and acquire a second video to be detected of the target user under the living body detection index.
And carrying out target detection on the second video to be detected to obtain a third detection result. The living body detection index may include a detection index sequence composed of a plurality of detection colors, and/or a detection index sequence composed of a plurality of detection actions. When the living body detection index comprises a detection index sequence formed by a plurality of detection colors, the target detection may comprise dazzling color detection, that is, whether the color sequence included in the second video to be detected is consistent with the detection index sequence included in the living body detection index is determined, and if so, the third detection result is that the detection is passed.
When the living body detection index comprises a detection index sequence formed by a plurality of detection actions, the target detection can comprise action detection, namely determining whether each action sequence included in the second video to be detected is matched with the detection index sequence included in the living body detection index, and if the action sequences are matched, determining that the third detection result is detection passing.
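Both the dazzle-color check and the action check reduce to comparing the sequence detected in the video with the issued detection index sequence; a hedged sketch, assuming an exact ordered comparison:

```python
def sequence_detection_passed(detected_sequence, index_sequence):
    """Target detection passes only when the colors (or actions) detected in the
    second video to be detected match the liveness detection index sequence in order."""
    return detected_sequence == index_sequence
```

Because the index sequence is generated per registration and only issued to the target user's device, a replayed or pre-recorded video is unlikely to reproduce it in order.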
In response to the third detection result indicating that detection has passed, indication information indicating successful registration is generated. Meanwhile, the target video frame with the highest quality score in the second video to be detected can be stored into the overall face library corresponding to the hospital scene; the quality scores of the video frames in the second video to be detected are obtained during target detection.
In implementation, after the third detection result is that the detection is passed, performing living body detection on at least one frame of face image corresponding to each detection index in the second video to be detected to obtain a living body detection result, and if the living body detection result indicates that the target user belongs to a living body, generating indication information indicating that the registration is successful.
After the indication information indicating successful registration is generated, second prompt information prompting acquisition of the accompanying user's information can be generated. In response to a trigger operation on the second prompt information, the user information and face image of the accompanying person are acquired; the accompanying person's user information and face image can then be stored, in association with the visiting user's face image and user information, into the overall face library corresponding to the hospital scene.
The obtaining process of the face image of the accompanying person can be as follows: the accompanying person is subjected to at least one of colorful detection, motion detection and living body detection, and user information and face images of the accompanying person are obtained after the detection is passed.
In the embodiment of the disclosure, when the target user performs registration operation, the target detection is performed on the second to-be-detected video of the target user under the living body detection index, after the detection is passed, indication information indicating that the registration is successful is generated, the registration operation is completed, the safety problem caused by the registration operation performed by the non-living body user is relieved, and the safety of the hospital scene is improved.
It will be understood by those skilled in the art that in the method of the present invention, the order of writing the steps does not imply a strict order of execution and any limitations on the implementation, and the specific order of execution of the steps should be determined by their function and possible inherent logic.
Based on the same concept, an embodiment of the present disclosure further provides an information detection apparatus, as shown in fig. 3, which is an architecture schematic diagram of the information detection apparatus provided in the embodiment of the present disclosure, and includes an obtaining module 301, a determining module 302, a matching module 303, and a control module 304, specifically:
the acquisition module 301 is configured to acquire a first video to be detected of a target user under a determined living body detection index in response to the target user arriving at an image acquisition area in a hospital scene; wherein the target user comprises at least one of a visiting user and a attending user; the image acquisition area is an acquisition area corresponding to image acquisition equipment in any target area in the hospital scene; the in-vivo detection index includes: at least one of a detection index sequence composed of a plurality of detection colors and a detection index sequence composed of a plurality of detection actions;
a determining module 302, configured to perform target detection on the first video to be detected, and determine a first detection result corresponding to the target user under the living body detection indicator;
a matching module 303, configured to, when the first detection result indicates that the detection is passed, match a target video frame corresponding to the target user in the first video to be detected with candidate face images in a face library corresponding to any one of the target areas, respectively, to obtain a matching result;
and the control module 304 is configured to control the target device to execute the target operation in response to the matching result being that the matching is successful.
In a possible implementation manner, the any target area is a hospital entrance area in the hospital scene, and a face library corresponding to the any target area stores a face image of any registered user;
the matching module 303, when matching a target video frame corresponding to the target user in the first video to be detected with the candidate face images in the face library corresponding to any one of the target areas, respectively, and obtaining a matching result, is configured to:
taking each frame of face image stored in a face library corresponding to the entrance and exit area of the hospital as a first candidate face image, and respectively matching a target video frame corresponding to the target user with each frame of the first candidate face image to obtain a first matching value between the target user and each frame of the first candidate face image;
when any first matching value is larger than or equal to a set first matching threshold value, the matching result is successful;
and under the condition that each first matching value is smaller than the set first matching threshold, the matching result is matching failure.
In a possible implementation manner, any one target area is a diagnosis room area in the hospital scene, and the face library corresponding to any one target area stores face images of a visiting user who needs to visit the diagnosis room on the current date and face images respectively corresponding to accompanying users corresponding to the visiting user;
the matching module 303, when matching the target video frame corresponding to the target user in the first video to be detected with the candidate face images in the face library corresponding to any one of the target areas, respectively, to obtain a matching result, is configured to:
taking each frame of face image stored in a face library corresponding to the diagnosis room area as a second candidate face image, and respectively matching a target video frame corresponding to the target user with each frame of the second candidate face image to obtain a second matching value between the target user and each frame of the second candidate face image;
when any second matching value is larger than or equal to a set second matching threshold value, the matching result is successful;
and under the condition that each second matching value is smaller than the set second matching threshold, the matching result is matching failure.
In one possible implementation, the control module 304, when controlling the target device to perform the target operation in response to the matching result being a matching success, is configured to:
responding to the matching result that the matching is successful, and acquiring identification information corresponding to the target user;
and controlling the target equipment to update the current identification information sequence based on the identification information corresponding to the target user to obtain the updated identification information sequence.
In a possible implementation manner, the apparatus further includes a first storage module 305, where the first storage module 305 is configured to:
generating prompt information for prompting the target user to register under the condition that the matching result is that the matching is failed;
and responding to the successful registration operation of the target user, and storing the acquired face image corresponding to the target user into a face library corresponding to the diagnosis room area.
In a possible implementation manner, any one target area is a clinic area in the hospital scene, and the face library corresponding to any one target area stores face images of users to be treated who have undergone triage, and face images respectively corresponding to accompanying users corresponding to the users to be treated;
the matching module 303, when matching the target video frame corresponding to the target user in the first video to be detected with the candidate face images in the face library corresponding to any one of the target areas, respectively, to obtain a matching result, is configured to:
taking the face image corresponding to the current user stored in the face library corresponding to the clinic area as a third candidate face image; the current user is the current visiting user who should enter the clinic area at the current moment, or the current visiting user who should enter the clinic area at the current moment together with the accompanying user corresponding to that visiting user;
matching the target video frame corresponding to the target user with the third candidate face image to obtain a third matching value;
and when the third matching value is greater than or equal to a set third matching threshold, the matching result is successful.
In one possible implementation, the control module 304, when controlling the target device to perform the target operation in response to the matching result being a matching success, is configured to:
and controlling the target equipment to be started in response to the matching result being successful in matching.
In one possible embodiment, before the acquiring a first video to be detected of the target user under the determined living body detection index in response to the target user arriving at an image acquisition area in a hospital scene, the apparatus further includes: a generating module 306, the generating module 306 configured to:
responding to registration operation triggered by the target user, and acquiring a second video to be detected of the target user under the determined living body detection index;
performing target detection on the second video to be detected, and determining a third detection result corresponding to the target user under the determined in-vivo detection index;
and generating indication information indicating successful registration in response to the third detection result indicating that the detection is passed.
In a possible implementation manner, the matching module 303, when the first detection result indicates that the detection is passed, is configured to match a target video frame corresponding to the target user in the first video to be detected with candidate face images in a face library respectively corresponding to any one of the target areas, and obtain a matching result, configured to:
under the condition that the first detection result indicates that the detection is passed, performing living body detection on a plurality of frames of face images included in the first detection result, and determining a second detection result corresponding to the target user; each frame of face image is matched with one detection index in the living body detection indexes;
and under the condition that the second detection result indicates that the target user belongs to a living body, matching the target video frame in the first video to be detected with the candidate face images in the face library corresponding to any one target area respectively to obtain a matching result.
In a possible implementation manner, the matching module 303, when performing living body detection on multiple frames of face images included in the first detection result and determining a second detection result corresponding to the target user, is configured to:
performing living body detection on each frame of face image included in the first detection result to obtain an intermediate detection result corresponding to each frame of face image;
and determining that the second detection result corresponding to the target user belongs to the living body under the condition that the corresponding intermediate detection result indicates that the number of the face images of the target user belongs to the living body is greater than or equal to the target number.
In some embodiments, the functions of, or the modules included in, the apparatus provided in the embodiments of the present disclosure may be used to execute the method described in the above method embodiments; for specific implementation, refer to the description of the above method embodiments, which, for brevity, is not repeated here.
Based on the same technical concept, the embodiment of the disclosure also provides an electronic device. Referring to fig. 4, a schematic structural diagram of an electronic device provided in the embodiment of the present disclosure includes a processor 401, a memory 402, and a bus 403. The memory 402 is used for storing execution instructions and includes a memory 4021 and an external memory 4022; the memory 4021 is also referred to as an internal memory, and is configured to temporarily store operation data in the processor 401 and data exchanged with the external memory 4022 such as a hard disk, the processor 401 exchanges data with the external memory 4022 through the memory 4021, and when the electronic device 400 operates, the processor 401 communicates with the memory 402 through the bus 403, so that the processor 401 executes the following instructions:
responding to the situation that a target user arrives at an image acquisition area in a hospital scene, and acquiring a first video to be detected of the target user under a determined living body detection index; wherein the target user comprises at least one of a visiting user and a attending user; the image acquisition area is an acquisition area corresponding to image acquisition equipment in any target area in the hospital scene; the in-vivo detection index includes: at least one of a detection index sequence composed of a plurality of detection colors and a detection index sequence composed of a plurality of detection actions;
performing target detection on the first video to be detected, and determining a first detection result corresponding to the target user under the living body detection index;
under the condition that the first detection result indicates that the detection is passed, matching a target video frame corresponding to the target user in the first video to be detected with candidate face images in a face library corresponding to any target area respectively to obtain a matching result;
and controlling the target equipment to execute the target operation in response to the matching result being successful in matching.
The specific processing flow of the processor 401 may refer to the description of the above method embodiment, and is not described herein again.
In addition, an embodiment of the present disclosure further provides a computer-readable storage medium on which a computer program is stored; when executed by a processor, the computer program performs the steps of the information detection method described in the above method embodiments. The storage medium may be a volatile or non-volatile computer-readable storage medium.
An embodiment of the present disclosure further provides a computer program product carrying program code; instructions included in the program code may be used to execute the steps of the information detection method in the above method embodiments. For details, reference may be made to the above method embodiments, which are not repeated here.
The computer program product may be implemented by hardware, software, or a combination thereof. In an alternative embodiment, the computer program product is embodied as a computer storage medium; in another alternative embodiment, it is embodied as a software product, such as a Software Development Kit (SDK).
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working process of the apparatus described above may refer to the corresponding process in the foregoing method embodiments and is not repeated here. In the several embodiments provided in the present disclosure, it should be understood that the disclosed apparatus and method may be implemented in other ways. The apparatus embodiments described above are merely illustrative. For example, the division of units is only one logical division, and other divisions are possible in actual implementation; for example, a plurality of units or components may be combined, or some features may be omitted or not implemented. In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be an indirect coupling or communication connection between devices or units through some communication interfaces, and may be electrical, mechanical, or in other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, functional units in the embodiments of the present disclosure may be integrated into one processing unit, each unit may exist physically alone, or two or more units may be integrated into one unit.
If the functions are implemented in the form of software functional units and sold or used as a stand-alone product, they may be stored in a processor-executable non-transitory computer-readable storage medium. Based on such an understanding, the technical solution of the present disclosure may be embodied in the form of a software product stored in a storage medium, including several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods described in the embodiments of the present disclosure. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
If the technical solution of this application involves personal information, a product applying the technical solution clearly informs users of the personal information processing rules and obtains the individual's separate consent before processing the personal information. If the technical solution involves sensitive personal information, the product obtains the individual's separate consent before processing the sensitive personal information and also satisfies the requirement of "express consent". For example, a personal information collection device such as a camera is provided with a clear and prominent sign informing people that they are entering the personal information collection range and that personal information will be collected; if a person voluntarily enters the collection range, this is regarded as consent to the collection of their personal information. Alternatively, on a device that processes personal information, where the personal information processing rules are communicated by means of a prominent sign or notice, personal authorization is obtained through a pop-up message, by asking the person to upload their personal information themselves, or the like. The personal information processing rules may include information such as the personal information processor, the purpose of processing, the processing method, and the types of personal information to be processed.
The above are only specific embodiments of the present disclosure, but the protection scope of the present disclosure is not limited thereto. Any change or substitution that a person skilled in the art can readily conceive of within the technical scope disclosed herein shall be covered by the protection scope of the present disclosure. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.
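The embodiments above leave the form of the "matching value" open. One common realization, offered here purely as an assumption of this sketch (the disclosure does not specify the matching metric), is the cosine similarity between face feature vectors:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two face feature vectors; a common (but
    here merely assumed) way to compute the matching value that is
    compared against the set matching threshold."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def best_match(target_feature, candidate_features, threshold):
    """Respectively match the target frame's feature against each candidate
    face image's feature; matching succeeds if any value reaches the
    threshold. Returns (success, best matching value)."""
    values = [cosine_similarity(target_feature, c) for c in candidate_features]
    best = max(values, default=0.0)
    return best >= threshold, best
```

Under this assumption, claims 2, 3, and 6 differ only in which face library supplies `candidate_features` and which threshold is set.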

Claims (12)

1. An information detection method, comprising:
in response to a target user arriving at an image acquisition area in a hospital scene, acquiring a first video to be detected of the target user under a determined liveness detection index; wherein the target user includes at least one of a visiting user and an accompanying user; the image acquisition area is an acquisition area corresponding to an image acquisition device in any target area in the hospital scene; and the liveness detection index includes at least one of a detection index sequence composed of a plurality of detection colors and a detection index sequence composed of a plurality of detection actions;
performing target detection on the first video to be detected, and determining a first detection result corresponding to the target user under the liveness detection index;
when the first detection result indicates that the detection is passed, respectively matching a target video frame corresponding to the target user in the first video to be detected with candidate face images in a face library corresponding to any target area, to obtain a matching result; and
in response to the matching result indicating a successful match, controlling a target device to execute a target operation.
2. The method according to claim 1, wherein any target area is a hospital entrance area in the hospital scene, and the face library corresponding to any target area stores a face image of each registered user;
wherein the respectively matching the target video frame corresponding to the target user in the first video to be detected with the candidate face images in the face library corresponding to any target area, to obtain the matching result, includes:
taking each frame of face image stored in the face library corresponding to the hospital entrance area as a first candidate face image, and respectively matching the target video frame corresponding to the target user with each frame of first candidate face image, to obtain a first matching value between the target user and each frame of first candidate face image;
determining that the matching result is a successful match when any first matching value is greater than or equal to a set first matching threshold; and
determining that the matching result is a matching failure when each first matching value is less than the set first matching threshold.
3. The method according to claim 1 or 2, wherein any target area is a triage room area in the hospital scene, and the face library corresponding to any target area stores face images of visiting users who are scheduled for a visit on the current date and face images of the accompanying users corresponding to the visiting users;
wherein the respectively matching the target video frame corresponding to the target user in the first video to be detected with the candidate face images in the face library corresponding to any target area, to obtain the matching result, includes:
taking each frame of face image stored in the face library corresponding to the triage room area as a second candidate face image, and respectively matching the target video frame corresponding to the target user with each frame of second candidate face image, to obtain a second matching value between the target user and each frame of second candidate face image;
determining that the matching result is a successful match when any second matching value is greater than or equal to a set second matching threshold; and
determining that the matching result is a matching failure when each second matching value is less than the set second matching threshold.
4. The method according to claim 3, wherein the controlling the target device to execute the target operation in response to the matching result indicating a successful match includes:
in response to the matching result indicating a successful match, acquiring identification information corresponding to the target user; and
controlling the target device to update a current identification information sequence based on the identification information corresponding to the target user, to obtain an updated identification information sequence.
5. The method according to claim 3 or 4, wherein the method further comprises:
generating prompt information for prompting the target user to register when the matching result is a matching failure; and
in response to a successful registration operation by the target user, storing the acquired face image corresponding to the target user into the face library corresponding to the triage room area.
6. The method according to any one of claims 1 to 5, wherein any target area is a clinic area in the hospital scene, and the face library corresponding to any target area stores face images of users to be treated who have undergone triage and face images of the accompanying users corresponding to the users to be treated;
wherein the respectively matching the target video frame corresponding to the target user in the first video to be detected with the candidate face images in the face library corresponding to any target area, to obtain the matching result, includes:
taking the face image, stored in the face library corresponding to the clinic area, that corresponds to a current user as a third candidate face image; wherein the current user is the current visiting user who should enter the clinic area at the current moment, or the current visiting user who should enter the clinic area at the current moment together with the accompanying user corresponding to that visiting user;
matching the target video frame corresponding to the target user with the third candidate face image, to obtain a third matching value; and
determining that the matching result is a successful match when the third matching value is greater than or equal to a set third matching threshold.
7. The method according to any one of claims 1 to 6, wherein, before the acquiring the first video to be detected of the target user under the determined liveness detection index in response to the target user arriving at the image acquisition area in the hospital scene, the method further comprises:
in response to a registration operation triggered by the target user, acquiring a second video to be detected of the target user under the determined liveness detection index;
performing target detection on the second video to be detected, and determining a third detection result corresponding to the target user under the determined liveness detection index; and
generating indication information indicating successful registration in response to the third detection result indicating that the detection is passed.
8. The method according to any one of claims 1 to 7, wherein, when the first detection result indicates that the detection is passed, the respectively matching the target video frame corresponding to the target user in the first video to be detected with the candidate face images in the face library corresponding to any target area, to obtain the matching result, includes:
when the first detection result indicates that the detection is passed, performing liveness detection on a plurality of frames of face images included in the first detection result, and determining a second detection result corresponding to the target user; wherein each frame of face image matches one detection index in the liveness detection index; and
when the second detection result indicates that the target user belongs to a living body, respectively matching the target video frame in the first video to be detected with the candidate face images in the face library corresponding to any target area, to obtain the matching result.
9. The method according to claim 8, wherein the performing liveness detection on the plurality of frames of face images included in the first detection result and determining the second detection result corresponding to the target user includes:
performing liveness detection on each frame of face image included in the first detection result, to obtain an intermediate detection result corresponding to each frame of face image; and
determining that the second detection result indicates that the target user belongs to a living body when the number of face images whose intermediate detection results indicate that the target user belongs to a living body is greater than or equal to a target number.
10. An information detection apparatus, comprising:
an acquisition module, configured to acquire, in response to a target user arriving at an image acquisition area in a hospital scene, a first video to be detected of the target user under a determined liveness detection index; wherein the target user includes at least one of a visiting user and an accompanying user; the image acquisition area is an acquisition area corresponding to an image acquisition device in any target area in the hospital scene; and the liveness detection index includes at least one of a detection index sequence composed of a plurality of detection colors and a detection index sequence composed of a plurality of detection actions;
a determining module, configured to perform target detection on the first video to be detected and determine a first detection result corresponding to the target user under the liveness detection index;
a matching module, configured to, when the first detection result indicates that the detection is passed, respectively match a target video frame corresponding to the target user in the first video to be detected with candidate face images in a face library corresponding to any target area, to obtain a matching result; and
a control module, configured to control a target device to execute a target operation in response to the matching result indicating a successful match.
11. An electronic device, comprising: a processor, a memory, and a bus, wherein the memory stores machine-readable instructions executable by the processor; when the electronic device runs, the processor and the memory communicate via the bus; and the machine-readable instructions, when executed by the processor, perform the steps of the information detection method according to any one of claims 1 to 9.
12. A computer-readable storage medium, on which a computer program is stored, wherein the computer program, when executed by a processor, performs the steps of the information detection method according to any one of claims 1 to 9.
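As a minimal illustration of the per-frame aggregation described in claim 9 (the function name and the boolean representation of intermediate detection results are assumptions of this sketch):

```python
def aggregate_liveness(intermediate_results, target_number):
    """Claim-9 style aggregation: the target user is determined to belong
    to a living body when the number of face-image frames whose
    intermediate detection result indicates a living body is greater than
    or equal to the target number.

    intermediate_results: list of booleans, one per face-image frame.
    """
    live_frame_count = sum(1 for is_live in intermediate_results if is_live)
    return live_frame_count >= target_number
```

For example, with five collected frames and a target number of three, at least three frames must be judged live for the second detection result to indicate a living body.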
CN202210404600.9A 2022-04-18 2022-04-18 Information detection method and device, electronic equipment and storage medium Pending CN114724071A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210404600.9A CN114724071A (en) 2022-04-18 2022-04-18 Information detection method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210404600.9A CN114724071A (en) 2022-04-18 2022-04-18 Information detection method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN114724071A true CN114724071A (en) 2022-07-08

Family

ID=82243892

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210404600.9A Pending CN114724071A (en) 2022-04-18 2022-04-18 Information detection method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114724071A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024077971A1 (en) * 2022-10-10 2024-04-18 京东科技控股股份有限公司 Liveness detection method and apparatus


Similar Documents

Publication Publication Date Title
US10885306B2 (en) Living body detection method, system and non-transitory computer-readable recording medium
CN111899878B (en) Old person health detection system, method, computer device and readable storage medium
CN110705507B (en) Identity recognition method and device
CN107742100B (en) A kind of examinee's auth method and terminal device
CN110968855B (en) Occlusion detection during a facial recognition process
GB2500823A (en) Method, system and computer program for comparing images
CN112700572A (en) Health-care-based personnel access control method, device, equipment and storage medium
Wang et al. Using opportunistic face logging from smartphone to infer mental health: challenges and future directions
KR102243890B1 (en) Method and apparatus for managing visitor of hospital
CN110807117B (en) User relation prediction method and device and computer readable storage medium
CN105957172A (en) Photograph attendance application system of intelligent photograph electrical screen
KR20180031552A (en) Appratus, system and method for facial recognition
CN114724071A (en) Information detection method and device, electronic equipment and storage medium
CN111640477A (en) Identity information unifying method and device and electronic equipment
CN109543635A (en) Biopsy method, device, system, unlocking method, terminal and storage medium
CN113111846A (en) Diagnosis method, device, equipment and storage medium based on face recognition
CN108154070A (en) Face identification method and device
CN111783714A (en) Coercion face recognition method, device, equipment and storage medium
WO2023178997A1 (en) Interaction control method and apparatus, and computer device and storage medium
CN110248181A (en) External equipment self-resetting method, device, system and computer-readable medium
CN112927152B (en) CT image denoising processing method, device, computer equipment and medium
CN109543562A (en) Identity identifying method, insurance institution's server and the terminal of insured people
Czajka Is that eye dead or alive? Detecting new iris biometrics attacks
CN110866292A (en) Interface display method and device, terminal equipment and server
CN108304563A (en) Image processing method, device and equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination