CN112364714B - Face recognition method, device, computer equipment and storage medium - Google Patents

Face recognition method, device, computer equipment and storage medium

Info

Publication number
CN112364714B
CN112364714B (application CN202011145855.5A)
Authority
CN
China
Prior art keywords
test
target
current
video
equipment part
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011145855.5A
Other languages
Chinese (zh)
Other versions
CN112364714A (en)
Inventor
李树波
张福利
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China General Nuclear Power Corp
CGN Power Co Ltd
Daya Bay Nuclear Power Operations and Management Co Ltd
Lingdong Nuclear Power Co Ltd
Guangdong Nuclear Power Joint Venture Co Ltd
Lingao Nuclear Power Co Ltd
Original Assignee
China General Nuclear Power Corp
CGN Power Co Ltd
Daya Bay Nuclear Power Operations and Management Co Ltd
Lingdong Nuclear Power Co Ltd
Guangdong Nuclear Power Joint Venture Co Ltd
Lingao Nuclear Power Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China General Nuclear Power Corp, CGN Power Co Ltd, Daya Bay Nuclear Power Operations and Management Co Ltd, Lingdong Nuclear Power Co Ltd, Guangdong Nuclear Power Joint Venture Co Ltd, Lingao Nuclear Power Co Ltd filed Critical China General Nuclear Power Corp
Priority to CN202011145855.5A
Publication of CN112364714A
Application granted
Publication of CN112364714B
Legal status: Active

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/23Clustering techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • G06V20/41Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items


Abstract

The application relates to a face recognition method, a face recognition device, computer equipment and a storage medium. The method comprises the following steps: acquiring test information of a current nuclear power test and a test video recorded for a target test area of the current nuclear power test within a preset time period; determining a current test procedure in the current nuclear power test according to the test information and the test video; screening one or more face images corresponding to the current test procedure from the test video; clustering the face images to obtain at least one cluster; when the number of clusters does not match the number of target test personnel assigned to execute the current test procedure, determining the personnel identification corresponding to each cluster based on a face library; and determining non-target test personnel in the target test area according to the personnel identifications corresponding to the clusters and the personnel identifications of the target test personnel. By adopting the method, the management efficiency of test personnel can be improved.

Description

Face recognition method, device, computer equipment and storage medium
Technical Field
The application relates to the technical field of nuclear power informatization construction, in particular to a face recognition method, a face recognition device, computer equipment and a storage medium.
Background
A nuclear power plant generates electricity from the thermal energy produced by nuclear fuel in a nuclear reactor. To ensure the basic safety of a nuclear power plant, test personnel regularly perform nuclear power tests on the plant's nuclear power equipment. A nuclear power test is a process of executing preset test procedures item by item and recording the results. Because of the specific nature of nuclear power tests, different test procedures often need to be performed by different testers. For example, when the test procedure is to test the nuclear reactor in nuclear power equipment A, the required testers are professional nuclear power personnel; when the test procedure is to test the condensed water in nuclear power equipment A, the required testers are professional chemical personnel.
To ensure that a test procedure is executed normally, the nuclear power plant manager manages the testers entering and leaving the test area while the current test procedure is being executed. For example, during testing of the nuclear reactor in a nuclear power plant, professional nuclear power personnel are allowed to enter the test area, while professional chemical personnel are prohibited from entering it. However, with the development of nuclear power technology, the execution time of test procedures has grown longer and longer, so the nuclear power plant manager must spend a great deal of time managing the testers, and the management efficiency of the testers is therefore low.
Disclosure of Invention
In view of the foregoing, it is desirable to provide a face recognition method, apparatus, computer device, and storage medium that can improve the management efficiency of test persons.
A face recognition method, the method comprising:
Acquiring test information of a current nuclear power test and a test video recorded for a target test area of the current nuclear power test within a preset time period;
determining a current test procedure in the current nuclear power test according to the test information and the test video;
Screening more than one frame of face image corresponding to the current test procedure from the test video;
clustering the more than one frame of face images to obtain at least one cluster;
when the number of the clusters is not matched with the number of the personnel of the target test personnel for executing the current test procedure, determining personnel identification corresponding to each cluster based on a face library;
And determining non-target test personnel in the target test area according to the personnel identification corresponding to each cluster and the personnel identification of the target test personnel.
In one embodiment, the obtaining the test information of the current nuclear power test includes:
acquiring a test identification code associated with a current nuclear power test;
Determining a test work order corresponding to the current nuclear power test based on the test identification code;
and analyzing the test work order to obtain the test information of the current nuclear power test.
In one embodiment, the test information includes equipment part keywords corresponding to each test procedure;
determining a current test procedure in the current nuclear power test according to the test information and the test video, wherein the current test procedure comprises the following steps:
traversing at least one video frame in the test video, and identifying a current operating equipment part in each traversed video frame;
When the current operation equipment part in the traversed video frame belongs to a target equipment part corresponding to the equipment part keyword, determining the traversed video frame as a target video frame;
Screening out target equipment part keywords corresponding to the current operation equipment parts in the target video frame from the equipment part keywords corresponding to each test procedure;
and determining the current test procedure based on the target equipment part keywords.
In one embodiment, the traversing at least one video frame in the test video, and identifying a current operating equipment part in each traversed video frame, includes:
Traversing at least one video frame in the test video;
Performing hand detection on the video frames in the current traversal sequence, and cutting out an equipment part image contacted with a hand area from the video frames in the current traversal sequence based on the detection result of the hand detection;
and identifying the equipment part image to obtain the current operation equipment part in the video frame of the current traversal sequence.
In one embodiment, the screening more than one frame of face image corresponding to the current test procedure from the test video includes:
traversing the target video frames in the test video according to the acquisition time of each target video frame;
When the current operation equipment part in the target video frame of the current traversal sequence is inconsistent with the current operation equipment part in the target video frame of the next sequence, taking the acquisition time of the target video frame of the current traversal sequence as a segmentation time point;
dividing the test video according to the dividing time points to obtain at least one video segment;
Screening out target video clips from the at least one video clip according to the test information, the acquisition time period of each video clip and the target video frames contained in each video clip;
and carrying out face recognition on the video frames in the target video segment, and determining more than one frame of face image corresponding to the current test procedure according to the face recognition result.
In one embodiment, the screening the target video segment from the at least one video segment according to the test information, the acquisition time period of each video segment, and the target video frame included in each video segment includes:
determining target equipment part keywords corresponding to the current test procedure according to the equipment part keywords corresponding to each test procedure in the test information;
determining target equipment parts corresponding to the target equipment part keywords;
when the current operation equipment part in the target video frame belongs to the target equipment part, setting the target video frame as a current test procedure video frame;
Screening candidate video clips of which target video frames are all the video frames of the current test procedure from the video clips;
and determining the acquisition time period of each video segment, and screening target video segments from the candidate video segments based on the acquisition time period.
In one embodiment, the method further comprises:
determining the acquisition time of each face image and the position coordinates in the target test area;
forming more than one face movement track based on a plurality of face images with adjacent acquisition times and the difference of the position coordinates smaller than a first threshold value;
And when it is determined from the face movement track that a non-target person shows no tendency to leave the target test area, sending corresponding alarm information.
A face recognition device, the device comprising:
The acquisition module is used for acquiring test information of the current nuclear power test and test videos recorded in a target test area of the current nuclear power test within a preset duration.
And the current test procedure determining module is used for determining the current test procedure in the current nuclear power test according to the test information and the test video.
The non-target test person determining module is used for screening more than one frame of face image corresponding to the current test procedure from the test video; clustering the more than one frame of face images to obtain at least one cluster; when the number of the clusters is not matched with the number of the personnel of the target test personnel for executing the current test procedure, determining personnel identification corresponding to each cluster based on a face library; and determining non-target test personnel in the target test area according to the personnel identification corresponding to each cluster and the personnel identification of the target test personnel.
A computer device comprising a memory storing a computer program and a processor which when executing the computer program performs the steps of:
Acquiring test information of a current nuclear power test and a test video recorded for a target test area of the current nuclear power test within a preset time period;
determining a current test procedure in the current nuclear power test according to the test information and the test video;
Screening more than one frame of face image corresponding to the current test procedure from the test video;
clustering the more than one frame of face images to obtain at least one cluster;
when the number of the clusters is not matched with the number of the personnel of the target test personnel for executing the current test procedure, determining personnel identification corresponding to each cluster based on a face library;
And determining non-target test personnel in the target test area according to the personnel identification corresponding to each cluster and the personnel identification of the target test personnel.
A computer readable storage medium having stored thereon a computer program which when executed by a processor performs the steps of:
Acquiring test information of a current nuclear power test and a test video recorded for a target test area of the current nuclear power test within a preset time period;
determining a current test procedure in the current nuclear power test according to the test information and the test video;
Screening more than one frame of face image corresponding to the current test procedure from the test video;
clustering the more than one frame of face images to obtain at least one cluster;
when the number of the clusters is not matched with the number of the personnel of the target test personnel for executing the current test procedure, determining personnel identification corresponding to each cluster based on a face library;
And determining non-target test personnel in the target test area according to the personnel identification corresponding to each cluster and the personnel identification of the target test personnel.
According to the face recognition method, the face recognition device, the computer equipment and the storage medium, by acquiring the test information and the test video of the current nuclear power test, the current test procedure being executed at the current moment can be determined based on the test information and the test video. Once the current test procedure is determined, the face images corresponding to the current test procedure can be screened out from the test video, and these face images can be clustered to obtain at least one cluster. The number of clusters is then compared with the number of target test personnel, and the personnel identification corresponding to each cluster is determined only when the two numbers do not match; this reduces the probability of performing personnel identification on the clusters when all testers in the target test area are target test personnel, and thereby improves the efficiency of personnel identification. By determining the personnel identification corresponding to each cluster, the non-target test personnel in the target test area can be determined from the personnel identifications corresponding to the clusters and the personnel identifications of the target test personnel executing the current test procedure, thereby achieving the purpose of managing the testers in the target test area. Because non-target test personnel in the target test area can be identified automatically, the management efficiency of the testers is greatly improved compared with traditional manual management.
Drawings
FIG. 1 is an application environment diagram of a face recognition method in one embodiment;
FIG. 2 is a flow chart of a face recognition method in one embodiment;
FIG. 3 is a flow chart of the current test procedure determination step in one embodiment;
Fig. 4 is a block diagram of a face recognition device in one embodiment;
fig. 5 is a block diagram of a face recognition device according to another embodiment;
Fig. 6 is an internal structural diagram of a computer device in one embodiment.
Detailed Description
The present application will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present application more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the application.
The face recognition method provided by the application can be applied to an application environment shown in fig. 1. Wherein the video recorder 102 communicates with the server 104 over a network. The video recorder 102 is used for recording a test process of a nuclear power test to obtain a test video, sending the test video to the server 104, and the server 104 performs face recognition on the test video to determine non-target test staff in a test area of the nuclear power test. The server 104 may be implemented as a stand-alone server or a server cluster including a plurality of servers.
In one embodiment, as shown in fig. 2, a face recognition method is provided, and the method is applied to the server in fig. 1 for illustration, and includes the following steps:
Step S202, test information of a current nuclear power test is obtained, and test videos recorded for a target test area of the current nuclear power test within a preset time period are obtained.
The nuclear power test is a process of executing preset test procedures item by item and recording the results; a complete nuclear power test may include multiple test procedures, and a test procedure may include one or more executable steps. The nuclear power test manager can divide the executable steps of a nuclear power test in advance to obtain the test procedures. The test information is information related to the nuclear power test and may include the test procedures included in the current nuclear power test, the key equipment parts corresponding to each test procedure, the test area of the current nuclear power test, and the like. The test area is the geographical area where the testers perform the nuclear power test; it may be a virtual geographical boundary enclosed by a virtual fence at the nuclear power plant.
Specifically, when a nuclear power test needs to be executed, a tester can set up a video recorder in the test area, record the test process through the recorder to obtain a test video stream, and send the test video stream in real time to the server corresponding to the test manager. When the target testers in the target test area of the current nuclear power test need to be managed, the server acquires the test information of the current nuclear power test, determines the target test area according to the test information, screens the test video stream recorded for the target test area out of all received test video streams, and cuts the test video recorded within the preset time period out of that stream. The preset time period refers to the window of that length immediately preceding the current time; for example, when the preset time period is 10 minutes and the current time is 10:00, the acquisition period of the test video recorded for the target test area within the preset time period is 9:50-10:00. The preset time period can be set freely as required, for example according to the execution time of the test procedures. If the preset time period is too long, the collected test video spans multiple test procedures; if it is too short, the video contains too little content for face recognition. The length of the preset time period therefore needs to be set reasonably, so that the test video collected within it records a single test procedure and the server can perform face recognition based on the video content.
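The acquisition window described above (the preset duration immediately preceding the current time) can be sketched as follows. This is a minimal Python illustration of the worked example in the text, not the patent's implementation; the helper name `acquisition_window` is invented for the sketch.

```python
from datetime import datetime, timedelta

def acquisition_window(current_time, preset_minutes):
    """Return the (start, end) acquisition window of the test video:
    the preset duration immediately preceding the current time."""
    start = current_time - timedelta(minutes=preset_minutes)
    return start, current_time

# Example from the text: preset duration 10 minutes, current time 10:00
start, end = acquisition_window(datetime(2020, 1, 1, 10, 0), 10)
print(start.strftime("%H:%M"), end.strftime("%H:%M"))  # 09:50 10:00
```

The server would then cut the frames whose timestamps fall inside this window out of the stored test video stream.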
In one embodiment, the server may generate a corresponding test identification code based on the test information before the nuclear power test is performed, for example from the test name and the test area of the nuclear power test. When a tester needs to perform the nuclear power test, the tester can go to a designated place to pick up a video recorder and scan the image code corresponding to the test identification code with the recorder, thereby binding the recorder to the test identification code. When uploading the test video stream to the server, the recorder can embed the test identification code in the stream, so that the server can acquire the target test identification code corresponding to the current nuclear power test and take the video stream embedded with that code as the test video recorded for the target test area of the current nuclear power test.
In one embodiment, the test information of the current nuclear power test may be input by a test manager or pulled from the network according to the test identifier of the current nuclear power test, which is not limited herein.
Step S204, determining a current test procedure in the current nuclear power test according to the test information and the test video.
The test information includes execution time periods corresponding to the test procedures and equipment part keywords corresponding to the test procedures. The current test procedure is a test procedure being executed at the current time.
Specifically, the nuclear power test is performed item by item according to the test procedures. During actual execution, various emergencies may occur, so the actual execution period of each test procedure is not necessarily the same as the execution period marked in the test information. The current test procedure being executed at the current time therefore cannot be determined simply from the execution periods in the test information, and must instead be confirmed with the help of the test video.
The server determines the acquisition period of the test video recorded for the target test area within the preset time period, and determines the target execution period containing that acquisition period from the execution periods corresponding to the test procedures. The server sorts the test procedures in ascending order of execution period, obtaining a test procedure sequence and the corresponding execution period sequence. From the execution period sequence, the server screens out a preset number of preceding execution periods located before the target execution period and a preset number of subsequent execution periods located after it. The preset number can be set freely as required. Because the test procedure being executed at the current time may be a subsequent test procedure when the test goes smoothly, or a preceding test procedure when an emergency occurs during the test, both the preceding test procedures before the target test procedure and the subsequent test procedures after it need to be considered.
For example, when the preset number is 1 and the current nuclear power test includes 4 test procedures, with execution periods of 10:00-10:30 for test procedure 1, 10:30-11:00 for test procedure 2, 11:00-11:30 for test procedure 3, and 11:30-12:00 for test procedure 4, the target execution period containing the acquisition period 11:00-11:10 is 11:00-11:30, the preceding execution period determined according to the preset number 1 is 10:30-11:00, and the subsequent execution period is 11:30-12:00.
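The selection of the target execution period plus its preceding and subsequent neighbors can be sketched in Python. This is a minimal illustration under assumptions: procedure IDs are integers, times are encoded as minutes since midnight, and the function name `select_candidate_procedures` is invented for the sketch.

```python
def select_candidate_procedures(periods, capture, n=1):
    """periods: list of (procedure_id, start, end), sorted by start time.
    capture: (start, end) of the test video's acquisition period.
    Returns the procedure whose execution period contains the capture
    period, plus the n preceding and n subsequent procedures."""
    idx = next(i for i, (_, s, e) in enumerate(periods)
               if s <= capture[0] and capture[1] <= e)
    return [pid for pid, _, _ in periods[max(0, idx - n):idx + n + 1]]

# Example from the text, with times as minutes since midnight:
# procedure 1: 10:00-10:30, 2: 10:30-11:00, 3: 11:00-11:30, 4: 11:30-12:00
periods = [(1, 600, 630), (2, 630, 660), (3, 660, 690), (4, 690, 720)]
print(select_candidate_procedures(periods, (660, 670)))  # [2, 3, 4]
```

With acquisition period 11:00-11:10 the target is procedure 3, and procedures 2 and 4 are returned as its preceding and subsequent candidates, matching the worked example.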
Further, the server determines the test procedures corresponding to the target execution period, the preceding execution periods and the subsequent execution periods from the execution periods of the test procedures. For example, in the above example, the test procedure corresponding to the target execution period is test procedure 3, the test procedure corresponding to the preceding execution period is test procedure 2, and the test procedure corresponding to the subsequent execution period is test procedure 4. For convenience of description, these test procedures are hereinafter referred to as candidate test procedures. The server then determines the equipment part keywords corresponding to each candidate test procedure from the equipment part keywords of the test procedures; these are hereinafter referred to as candidate equipment part keywords. The key equipment parts corresponding to a test procedure are the equipment parts on which the test is performed while that procedure is executed.
Further, the server parses the test video into video frames, traverses the video frames, and identifies the currently operated equipment part in each traversed video frame through a preset key equipment part recognition model. The server judges whether the currently operated equipment part in the traversed video frame belongs to a target key equipment part corresponding to the equipment part keywords; if so, the traversed video frame is taken as a target video frame, and if not, it is not taken as a target video frame. The currently operated equipment part is the equipment part currently being operated by the tester. The key equipment part recognition model is a model built from an artificial neural network. For example, when the currently operated equipment part in the traversed video frame is a water pump and the equipment part keyword is "water pump", the traversed video frame is taken as a target video frame.
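The target-frame screening step can be sketched as follows. This is a minimal Python illustration, not the patent's implementation: `recognize_part` stands in for the key equipment part recognition model (here a trivial stub), and frames are represented by their part labels for the example.

```python
def select_target_frames(frames, recognize_part, target_parts):
    """Keep only frames whose currently operated equipment part belongs
    to the target parts derived from the equipment part keywords.
    recognize_part stands in for the neural recognition model."""
    targets = []
    for i, frame in enumerate(frames):
        part = recognize_part(frame)   # e.g. "water pump"
        if part in target_parts:
            targets.append((i, part))  # frame index and recognized part
    return targets

# Stub "model" for illustration: each frame is just its part label.
frames = ["water pump", "valve", "water pump", "cable tray"]
targets = select_target_frames(frames, lambda f: f, {"water pump", "valve"})
print(targets)  # [(0, 'water pump'), (1, 'valve'), (2, 'water pump')]
```

In practice each element of `frames` would be a decoded image and `recognize_part` a trained recognition model.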
Further, the server matches the candidate equipment part keywords corresponding to the candidate test procedures against the currently operated equipment parts in the target video frames, and takes the successfully matched candidate test procedure as the current test procedure of the current nuclear power test. For example, when the equipment part keywords corresponding to the preceding test procedure are "part A" and "part B", the equipment part keywords corresponding to the target test procedure are "part C" and "part D", and the currently operated equipment parts in the target video frames are part A and part B, the candidate test procedure that matches the target video frames is the preceding test procedure, and the server takes the preceding test procedure as the current test procedure.
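The keyword-matching step can be sketched in Python. This is a minimal illustration under one stated assumption: a candidate procedure "matches" when its keyword set covers every operated part; the function name and procedure labels are invented for the example.

```python
def match_current_procedure(candidate_keywords, operated_parts):
    """candidate_keywords: {procedure_id: set of equipment part keywords}.
    operated_parts: set of parts operated in the target video frames.
    A candidate matches when its keywords cover every operated part."""
    for proc_id, keywords in candidate_keywords.items():
        if operated_parts <= keywords:
            return proc_id
    return None  # no match: the test video may need to be re-acquired

candidates = {"preceding": {"part A", "part B"},
              "target":    {"part C", "part D"},
              "following": {"part E"}}
print(match_current_procedure(candidates, {"part A", "part B"}))  # preceding
```

With the operated parts A and B from the worked example, the preceding procedure is selected as the current test procedure; when nothing matches, `None` signals the fallback of lengthening the preset duration.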
In one embodiment, when the matching is unsuccessful, the duration of the test video recorded for the target test area of the current nuclear power test within the preset duration may be considered too short, and the server may increase the preset duration and reacquire the test video based on the increased preset duration.
By screening candidate test procedures out of the full set of test procedures based on the test information in advance, the amount of computation needed to determine the current test procedure from the test video can be reduced, which improves the efficiency of that determination. Because a tester operates the key equipment parts corresponding to a test procedure while executing it, the current test procedure can be screened out of the candidate test procedures based on the currently operated equipment parts in the test video.
In one embodiment, when executing the current test procedure, a tester can send the procedure identifier of the current test procedure to the server as test information through the tester's terminal, so that the server can determine the current test procedure based on the procedure identifier; after the current test procedure is determined from the procedure identifier, its correctness is then verified through the test video.
Step S206, more than one frame of face image corresponding to the current test procedure is screened out from the test video.
The face image refers to a local image of an area where a face is located in a video frame. The region where the face is located is the position of the face in the video frame.
Specifically, the server parses the test video into video frames and can identify the face region in a video frame through a face detection algorithm. The face detection algorithm can be chosen as required, such as the OpenCV face detection algorithm, the face detection algorithms built into iOS or Android, or a top-view face detection algorithm. The face detection algorithm can return whether a video frame contains a face and the specific face region, for example by marking the position of the face with a rectangular box. After determining the face region in a video frame, the server can crop the video frame along the face region to obtain the face image. One or more face images may be cropped from a single video frame. In this embodiment, the face image may include only the image of the face region.
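The cropping of face images out of a video frame can be sketched as follows. This is a minimal NumPy illustration, not the patent's implementation; in practice the rectangles would come from a detector such as OpenCV's `CascadeClassifier.detectMultiScale`, which is only mentioned here, not invoked.

```python
import numpy as np

def crop_faces(frame, boxes):
    """Cut face images out of a video frame (an H x W x 3 array).
    boxes are (x, y, w, h) rectangles as typically returned by a
    face detector, e.g. cv2.CascadeClassifier.detectMultiScale."""
    return [frame[y:y + h, x:x + w] for (x, y, w, h) in boxes]

frame = np.zeros((480, 640, 3), dtype=np.uint8)   # stand-in video frame
faces = crop_faces(frame, [(100, 50, 64, 64), (300, 200, 80, 80)])
print([f.shape for f in faces])  # [(64, 64, 3), (80, 80, 3)]
```

Each cropped array contains only the face region, matching the statement that a face image may include only the image of that region.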
Step S208, clustering more than one frame of face images to obtain at least one cluster.
Wherein clustering is a process of dividing more than one data object into a plurality of classes composed of similar data objects, a cluster generated by clustering is a set of data objects, the data objects in the same cluster are similar to each other, and the data objects in different clusters are different from each other. The "data object" in this embodiment refers to a face image in a video frame.
Specifically, after the server acquires the face image, the face features in the face image can be extracted according to a preset face feature extraction strategy. Wherein the face features are data for reflecting facial features of the person. The facial features may reflect one or more of the sex of the person, the contour of the face, the hairstyle, the glasses, the nose, the mouth, and the distance between the facial organs. In one embodiment, the facial features may include facial texture features. The facial texture features may reflect pixel depths of facial organs, including the nose, ears, eyebrows, cheeks, or lips, etc. The facial texture features may include a color value distribution and a luminance value distribution of facial image pixels.
The server can cluster the face images according to the face features corresponding to each face image, gathering similar face images into the same cluster. The clustering algorithm used for the face features may be a Rank-Order clustering algorithm, a K-means clustering algorithm, a mean-shift clustering algorithm, or a density-based clustering method, which is not limited herein. Taking the K-means clustering algorithm as an example, the server can randomly partition the face features into initial groups and randomly initialize a center point for each group. Each face feature is then assigned to the cluster whose center point is nearest to it, after which the mean of the face features in each cluster is computed as its new center point. These assignment and update steps are repeated iteratively until the center point of each cluster no longer changes between iterations, at which point one or more stable clusters are obtained.
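The K-means loop described above can be sketched as follows — a pure-Python toy on 2-D stand-in features; a real system would run a library implementation over high-dimensional face embeddings, and the feature values here are illustrative.

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Minimal K-means: assign each point to its nearest center,
    recompute centers as cluster means, repeat until stable."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)          # random initial center points
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # Assign the point to the cluster of its nearest center.
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2
                                      for a, b in zip(p, centers[c])))
            clusters[i].append(p)
        # Recompute each center as the mean of its cluster.
        new_centers = [
            tuple(sum(dim) / len(cl) for dim in zip(*cl)) if cl else centers[i]
            for i, cl in enumerate(clusters)
        ]
        if new_centers == centers:           # centers stable -> stop iterating
            break
        centers = new_centers
    return clusters

# Two well-separated groups of toy "face features".
feats = [(0.0, 0.0), (0.1, 0.2), (5.0, 5.0), (5.2, 4.9)]
clusters = [cl for cl in kmeans(feats, 2) if cl]
```

With well-separated features the loop converges to the expected two clusters regardless of which points are drawn as initial centers.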
Step S210, when the number of the cluster clusters is not matched with the number of the personnel of the target test personnel for executing the current test procedure, the personnel identification corresponding to each cluster is determined based on the face library.
And S212, determining non-target test persons in the target test area according to the person identification corresponding to each cluster and the person identification of the target test person.
The face database is a database deployed based on face images of all employees in the nuclear power plant. The test information contains the personnel identification and the personnel number of the corresponding test personnel in each test procedure.
Specifically, the server determines the number of target testers for executing the current test procedure according to the test information, and judges whether the number of clusters matches that number. If they match, the number of testers in the target test area is consistent with the number of target testers, and the server judges that no non-target testers exist in the target test area; if they do not match, the server judges that non-target testers may exist in the target test area.
Further, when it is determined that non-target test persons may exist in the target test area, the server performs weighted summation or weighted averaging on feature vectors of a plurality of face features of the face images in the cluster, so as to fuse the face features and obtain identity features of the corresponding cluster. The server compares the obtained identity features with each face feature in the face library, and determines the face features matched with the identity features, namely, determines the personnel identification corresponding to each cluster. And the server matches the personnel identification corresponding to each cluster with the personnel identification of the target test personnel, and determines non-target test personnel in the target test area according to the matching result.
In one embodiment, determining, based on the face library, a person identification corresponding to a face feature that matches an identity feature of the cluster comprises: calculating the similarity between the identity characteristic and each face characteristic in the face library; determining a maximum value of the calculated similarities; when the maximum value of the similarity reaches a similarity threshold value, determining the face feature with the maximum similarity as the face feature matched with the identity feature; and acquiring a person identifier corresponding to the face feature matched with the identity feature.
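The feature fusion and library matching described in the two paragraphs above can be sketched as follows. Cosine similarity and the 0.8 threshold are illustrative choices, and the employee identifiers and feature vectors are hypothetical — the embodiment only requires some similarity measure with a threshold, not these particular ones.

```python
def fuse(features, weights=None):
    """Weighted average of several face-feature vectors from one cluster,
    yielding the cluster's identity feature."""
    weights = weights or [1.0] * len(features)
    total = sum(weights)
    return tuple(
        sum(w * f[i] for w, f in zip(weights, features)) / total
        for i in range(len(features[0]))
    )

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(x * x for x in b) ** 0.5
    return dot / (na * nb)

def match_identity(identity, face_library, threshold=0.8):
    """Return the person identifier whose library feature has the maximum
    similarity to the identity feature, or None if that maximum is below
    the similarity threshold."""
    best_id, best_sim = None, -1.0
    for person_id, feat in face_library.items():
        sim = cosine(identity, feat)
        if sim > best_sim:
            best_id, best_sim = person_id, sim
    return best_id if best_sim >= threshold else None

# Hypothetical two-person face library with toy 2-D features.
library = {"emp001": (1.0, 0.0), "emp002": (0.0, 1.0)}
identity = fuse([(0.9, 0.1), (1.0, 0.0)])
who = match_identity(identity, library)
```

Returning `None` when the maximum similarity is below the threshold corresponds to the case where no face feature in the library matches the identity feature.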
In the face recognition method, by acquiring the test information and the test video of the current nuclear power test, the current test procedure being executed at the current moment can be determined based on the test information and the test video; by determining the current test procedure, the face images corresponding to the current test procedure can be screened out from the test video, so that the face images can be clustered to obtain at least one cluster; by judging whether the number of clusters matches the number of target testers and determining the person identification corresponding to each cluster only when the numbers do not match, the probability of still performing person identification on the clusters when all testers in the target test area are target testers can be reduced, improving the efficiency of person identification; by determining the person identification corresponding to each cluster, non-target testers in the target test area can be determined according to the person identification corresponding to each cluster and the person identification of the target testers executing the current test procedure, achieving the purpose of managing the testers in the target test area. Because non-target testers in the target test area can be identified automatically, the management efficiency of testers is greatly improved compared with conventional manual management.
In one embodiment, obtaining test information for a current nuclear power test includes: acquiring a test identification code associated with a current nuclear power test; determining a test work order corresponding to the current nuclear power test based on the test identification code; analyzing the test worksheet to obtain the test information of the current nuclear power experiment.
The test worksheet is a form recorded with process information of each test process in the nuclear power test; the process information can specifically comprise executable steps of the current test process, corresponding equipment part keywords and the like, namely, the test work order records test information of the nuclear power test.
Specifically, before a nuclear power test is conducted, the server may generate a corresponding test identification code according to the test information of the nuclear power test, for example according to the test name of the nuclear power test and the work order identifier of the test work order. When the test identification code of the current nuclear power test is obtained, the server parses the test identification code to obtain the work order identifier of the test work order, determines the test work order associated with the current nuclear power test according to the work order identifier, and reads the test information associated with the current nuclear power test from that test work order.
In one embodiment, the test identifier code records the total number of test sub-identifiers; the step of determining a test worksheet associated with the current nuclear power test based on the test identification code includes: analyzing the test identification codes to obtain a plurality of test sub-identification codes; when the number of the plurality of test sub-identification codes accords with the total number of the test sub-identification codes, determining the analysis mode corresponding to each test sub-identification code according to the analysis serial number in the test sub-identification code; and analyzing the test sub-identification code in a corresponding analysis mode to obtain a test work order associated with the current nuclear power test.
Specifically, when generating the corresponding test identifier according to the test information of the nuclear power experiment, the server can determine the component parts of the test information, encode each component part by using different encoding modes to obtain the test sub-identifier corresponding to each component part, and combine the test sub-identifiers to obtain the test identifier. For example, when the test information comprises a test name of a nuclear power test and a work order identifier of a test work order, the server encodes the test name according to an A encoding rule, and adds an analysis serial number corresponding to the A encoding rule to the encoded test name to obtain a first test sub-identifier code; the server encodes the work order identifier according to the B encoding rule, and adds the analysis serial number corresponding to the B encoding rule to the encoded work order identifier to obtain a second test sub-identifier code; the server combines the first test sub-identification code and the second test sub-identification code to obtain a test identification code, and adds the total number of the test sub-identification codes to the test identification code.
Further, when the test identification code needs to be parsed, the server splits the test identification code into a plurality of test sub-identification codes, counts the number of split test sub-identification codes, and parses the test sub-identification codes when that number equals the recorded total number of test sub-identification codes. The server determines the analysis serial number in the current test sub-identification code, determines the corresponding analysis mode according to the analysis serial number, and parses the current test sub-identification code in the determined analysis mode, thereby obtaining the work order identifier.
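The patent does not specify a concrete code layout, so the sketch below assumes a hypothetical format in which the recorded total and each serial-numbered sub-code are separated by `|`, with the analysis serial number prefixed to its payload by `:`; the two decoding rules stand in for the "A" and "B" encoding rules of the example.

```python
# Hypothetical layout: "<total>|<serial>:<payload>|<serial>:<payload>"
def parse_test_code(code):
    parsers = {                        # analysis serial number -> analysis mode
        "1": lambda s: s.lower(),      # stand-in for the "A" rule (test name)
        "2": lambda s: s.strip("0"),   # stand-in for the "B" rule (work-order id)
    }
    head, *subs = code.split("|")
    if len(subs) != int(head):         # count must match the recorded total
        raise ValueError("sub-code count mismatch")
    out = {}
    for sub in subs:
        serial, payload = sub.split(":", 1)
        out[serial] = parsers[serial](payload)
    return out

fields = parse_test_code("2|1:PUMPTEST|2:00734")
```

Checking the sub-code count against the recorded total before parsing mirrors the integrity check described above.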
In this embodiment, the nuclear power test and the test work order are bound in advance on the basis of the test identification code, so that the test work order associated with the current nuclear power test can be accurately determined according to the test identification code, and the corresponding test information can be read from the test work order.
In one embodiment, as shown in fig. 3, the test information includes equipment part keywords corresponding to each test procedure; according to the test information and the test video, determining a current test procedure in the current nuclear power test comprises the following steps:
s302, traversing at least one video frame in the test video, and identifying the current operation equipment part in each traversed video frame.
S304, when the current operation equipment part in the traversed video frame belongs to the target equipment part corresponding to the equipment part keyword, determining the traversed video frame as the target video frame.
S306, screening out target equipment part keywords corresponding to the current operation equipment parts in the target video frame from the equipment part keywords corresponding to each test procedure.
S308, determining the current test procedure based on the target equipment part keywords.
Specifically, because of the particularities of nuclear power testing, the start time and execution duration of a test procedure are often variable. It is therefore necessary to determine the test procedure currently being executed by means of the equipment part keywords corresponding to each test procedure in the test information together with the test video.
The test video consists of a plurality of video frames, and the server traverses the plurality of video frames in the test video and identifies the current operation equipment parts in each traversed video frame. The server can detect the current operation equipment part in the video frame based on a preset equipment part detection algorithm, so that the equipment part operated by the tester is determined, and the equipment part identification of the current operation equipment part is output. The equipment part identifier of the current operation equipment part may be an equipment part keyword of the current operation equipment part, for example, the equipment part identifier of the current operation equipment part may be a part name of the current operation equipment part. The equipment part detection algorithm can be customized according to requirements, for example, the current operation equipment part can be identified based on an image identification algorithm in matlab, or the current operation equipment part can be identified based on an image identification algorithm in OpenCV.
Further, the server judges whether the currently operated equipment part in the currently traversed video frame belongs to a target equipment part corresponding to the equipment part keywords; if so, the currently traversed video frame is taken as a target video frame, and if not, the currently traversed video frame is not taken as a target video frame for the time being. For example, when the currently operated equipment part is obtained based on the equipment part identification model, the model may also output an operated-part keyword corresponding to the currently operated equipment part, such as its part name, so that the server may match this keyword against the equipment part keywords and, when the matching succeeds, determine that the currently operated equipment part in the traversed video frame belongs to the target equipment part corresponding to the equipment part keywords.
Further, the server matches the equipment part identifier of the currently operated equipment part in each target video frame against the equipment part keywords corresponding to each test procedure, and screens out the target equipment part keywords corresponding to the currently operated equipment parts in the target video frames according to the matching result. For example, when the equipment part keywords corresponding to test procedure 1 are A and B, the equipment part keywords corresponding to test procedure 2 are C and D, the equipment part identifier of the currently operated equipment part in target video frame 1 is A, and the equipment part identifier of the currently operated equipment part in target video frame 2 is B, the target equipment part keywords corresponding to the target video frames are A and B.
Further, the server determines a correspondence between the test procedure and the equipment part keyword according to the equipment part keyword corresponding to each test procedure, determines a test procedure corresponding to the target equipment part keyword according to the correspondence, and determines the test procedure corresponding to the target equipment part keyword as the current test procedure. For example, in the above example, the server uses the test procedure 1 as the current test procedure.
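The keyword-to-procedure lookup in the example above can be sketched as follows; the procedure names and keywords are the illustrative A/B/C/D values from the example, not real test data.

```python
def find_current_procedure(procedure_keywords, target_keywords):
    """procedure_keywords: mapping of test procedure -> its equipment part
    keywords. Returns the procedure whose keyword set covers every target
    equipment part keyword observed in the target video frames."""
    for proc, kws in procedure_keywords.items():
        if set(target_keywords) <= set(kws):
            return proc
    return None

# Correspondence between test procedures and equipment part keywords.
procedures = {"procedure 1": {"A", "B"}, "procedure 2": {"C", "D"}}

# Target keywords A and B were observed in the target video frames,
# so procedure 1 is determined as the current test procedure.
current = find_current_procedure(procedures, {"A", "B"})
```

Returning `None` covers the case where the observed keywords match no known test procedure.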
In one embodiment, when training of the equipment part identification model is required, a developer obtains a large number of equipment part pictures from the network, and takes the equipment part pictures as sample pictures. The developer marks the part keywords corresponding to the equipment parts in the sample pictures to obtain image labels of the sample pictures, for example, the part names of the equipment parts are used as the image labels, the sample pictures and the corresponding image labels are input into an equipment part identification model to be trained, the equipment part identification model to be trained learns to obtain the equipment part characteristics in the sample images based on the image labels, and therefore the current operation equipment parts in the video frame can be identified according to the learned equipment part characteristics.
In the above embodiment, by identifying the current operation device part in the video frame, the server can determine the current test procedure based on the identified current operation device part without being limited by the duration of the test procedure.
In one embodiment, traversing at least one video frame in the test video and identifying the currently operated equipment part in each traversed video frame comprises: traversing at least one video frame in the test video; performing hand detection on the video frame in the current traversal order, and cutting out an equipment part image in contact with the hand area from that video frame based on the result of the hand detection; and identifying the equipment part image to obtain the currently operated equipment part in the video frame of the current traversal order.
Specifically, the server traverses at least one video frame in the target video, and performs hand detection on the video frames in the current traversal sequence based on a preset hand detection algorithm. When the hand area in the video frame of the current traversal sequence is determined through the hand detection algorithm, the server selects the image area contacted with the hand area through the geometric frame with the preset size, namely, cuts out the equipment part image contacted with the hand area based on the geometric frame with the preset size. The server inputs the cut equipment part images into a pre-trained equipment part recognition model, and the equipment part images are recognized through the pre-trained equipment part recognition model, so that the current operation equipment parts in the video frames of the current traversal sequence are obtained. Note that the geometry of the preset geometry box can be scaled freely according to the hand detection result.
In another embodiment, when the equipment part recognition model fails to recognize the currently operated equipment part in the equipment part image, the server expands the size of the geometric frame according to a preset size expansion rule, for example doubles the preset-size geometric frame, so that the expanded geometric frame frames an equipment part image of larger size. The pre-trained equipment part recognition model can then extract richer equipment part features from the larger image and determine the currently operated equipment part from those features.
In another embodiment, when the equipment part recognition model recognizes a plurality of equipment parts in the equipment part image, the server reduces the size of the geometric frame according to a preset size reduction rule, for example halves the preset-size geometric frame, so that the reduced geometric frame more precisely frames the equipment part image actually in contact with the hand, allowing the pre-trained equipment part recognition model to accurately recognize a single currently operated equipment part.
In the above embodiment, since the tester must touch a certain equipment part in the test equipment by hand during the test of the equipment part, by performing the hand detection on the video frame, the equipment part currently operated by the tester can be determined based on the hand detection result.
In one embodiment, screening more than one frame of face image from the test video corresponding to the current test procedure includes: traversing the target video frames in the test video according to the acquisition time of each target video frame; when the current operation equipment part in the target video frame of the current traversal sequence is inconsistent with the current operation equipment part in the target video frame of the next sequence, taking the acquisition time of the target video frame of the current traversal sequence as a segmentation time point; dividing the test video according to the dividing time points to obtain at least one video segment; screening out target video clips from at least one video clip according to the test information, the acquisition time period of each video clip and the target video frames contained in each video clip; and carrying out face recognition on the video frames in the target video clips, and determining more than one frame of face image corresponding to the current test procedure according to the face recognition result.
The target video clip is a video clip recorded for the current test procedure.
Specifically, since the test video recorded for the target test area in the preset time period may be the test video recorded for two or more test procedures, more than one frame of face image corresponding to the current test procedure needs to be screened from the test video.
The server sorts all target video frames according to the ascending sequence of the acquisition time of the video frames to obtain a target video frame sequence, and traverses all target video frames in the target video frame sequence according to the arrangement sequence. The server determines the current operation equipment part in the target video frame in the current traversal order and the current operation equipment part in the target video frame in the next traversal order, judges whether the current operation equipment part in the target video frame in the current traversal order is consistent with the current operation equipment part in the target video frame in the next traversal order, and if not, takes the acquisition time of the target video frame in the current traversal order as a segmentation time point. When the target video frame sequence is traversed, the server segments the test video according to all the determined segmentation time points to obtain at least one video segment, so that the current operation equipment parts in the target video frames in the same video segment are consistent.
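The split-point and segmentation logic above can be sketched as follows, with each target video frame reduced to a toy `(acquisition_time, operated_part)` pair; the times and part names are illustrative.

```python
def split_points(frames):
    """frames: time-ordered (acquisition_time, operated_part) pairs.
    A split time point is the acquisition time of any frame whose
    operated part differs from that of the next frame in the sequence."""
    return [
        t for (t, part), (_, nxt) in zip(frames, frames[1:])
        if part != nxt
    ]

def segment(frames):
    """Cut the ordered frame sequence at the split points so that the
    operated equipment parts within each segment are consistent."""
    segments, current = [], [frames[0]]
    for frame in frames[1:]:
        if frame[1] != current[-1][1]:   # part changed -> start a new segment
            segments.append(current)
            current = [frame]
        else:
            current.append(frame)
    segments.append(current)
    return segments

frames = [(0, "pump"), (1, "pump"), (2, "valve"), (3, "valve"), (4, "pump")]
cuts = split_points(frames)
segs = segment(frames)
```

Each resulting segment operates a single equipment part, matching the requirement that the currently operated equipment parts within one video segment be consistent.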
Further, the server screens out target video clips from at least one video clip according to the test information, the acquisition time period of each video clip and the target video frames contained in each video clip, and performs face recognition on each video frame in the target video clip, and takes the video frame with the face as a face image corresponding to the current test procedure.
In this embodiment, since the target key equipment component corresponding to the current test procedure is operated when the tester executes the current test procedure, the target video clip corresponding to the current test procedure can be determined based on the target key equipment component, so that the face image during the execution of the current test procedure can be screened from the target video clip.
In one embodiment, screening the target video clip from at least one video clip according to the test information, the acquisition time period of each video clip, and the target video frames included in each video clip comprises: determining the target equipment part keyword corresponding to the current test procedure according to the equipment part keywords corresponding to each test procedure in the test information; determining the target equipment part corresponding to the target equipment part keyword; when the currently operated equipment part in a target video frame belongs to the target equipment part, setting that target video frame as a current-test-procedure video frame; screening out candidate video clips whose target video frames are all current-test-procedure video frames from the video clips; and determining the acquisition time period of each candidate video clip, and screening the target video clip from the candidate video clips based on the acquisition time periods.
Specifically, the server determines a target equipment part keyword corresponding to the current test procedure according to the equipment part keywords corresponding to each test procedure in the test information, and determines a target equipment part corresponding to the target equipment part keyword, for example, when the target equipment part keyword corresponding to the current test procedure is "water pump", the equipment part corresponding to the "water pump" is the water pump. The server judges whether the current operation equipment part in the target video frame in the test video belongs to the target equipment part, and if so, the server sets the target video frame as the current test procedure video frame. Further, the server judges whether target video frames in the video clips are all video frames of the current test procedure, if all the target video frames are the video frames of the current test procedure, the video clips are possibly recorded for the current test procedure, and the video clips are set to be candidate video clips. When the test video contains a plurality of candidate video clips, the server determines the acquisition time period of each candidate video clip, and takes the candidate video clip with the acquisition time period nearest to the current moment as a target video clip, namely, the candidate video clip with the acquisition time period nearest to the current moment as a video clip recorded for the current test procedure.
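A sketch of the screening step, assuming each clip is summarized by the operated parts appearing in its target frames and the end time of its acquisition period; the clip records and timestamps are illustrative.

```python
def pick_target_clip(clips, target_part, now):
    """clips: list of dicts with 'parts' (equipment parts operated in the
    clip's target video frames) and 'end' (end of its acquisition period).
    Keep candidate clips whose target frames all operate the target part,
    then take the candidate whose acquisition period is nearest to now."""
    candidates = [c for c in clips if set(c["parts"]) == {target_part}]
    if not candidates:
        return None
    return min(candidates, key=lambda c: abs(now - c["end"]))

clips = [
    {"id": 1, "parts": ["water pump"], "end": 10},
    {"id": 2, "parts": ["water pump", "valve"], "end": 40},  # mixed -> excluded
    {"id": 3, "parts": ["water pump"], "end": 50},
]
target = pick_target_clip(clips, "water pump", now=55)
```

Clip 2 is excluded because not all of its target frames are current-test-procedure frames; of the remaining candidates, clip 3 is closest to the current moment and is taken as the target video clip.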
In this embodiment, since the target key equipment component corresponding to the current test procedure is operated when the tester executes the current test procedure, the video segments of the target video frames, which are all video frames of the current test procedure, can be regarded as the video recorded for the current test procedure; because the closer the video clip to the current moment is, the greater the possibility of being the video recorded for the current test procedure is, in order to further improve the confidence that the extracted video clip is the target video clip, the target video clip can be screened out from the candidate video clips according to the acquisition time.
In one embodiment, the method further comprises: determining the acquisition time of each face image and the position coordinates in a target test area; forming more than one face movement track based on a plurality of face images with adjacent acquisition times and position coordinates with difference smaller than a first threshold value; and when judging that the non-target person does not have a trend of leaving the target test area according to the face movement track, sending out corresponding alarm information.
The human face moving track refers to track data formed by sampling the motion process of one or more persons appearing in a target test area under a space-time environment and arranging human face data obtained by sampling for multiple times according to the sampling sequence. The position coordinates of the face image in the target test area are coordinate values of coordinate points when a coordinate system is built based on a video frame and a certain pixel point in the face area where the face image is located is selected as the coordinate point of the face image in the coordinate system.
Specifically, the server identifies face images which can be determined to be the same person to a certain confidence degree according to the position change continuity of the face images relative to the video frame. The server determines the position coordinates of each face image in the video frame, and calculates the difference of the position coordinates of the face images adjacent to the acquisition time. The server divides a large number of face images into a plurality of groups of face images with adjacent acquisition time and position coordinates, wherein the difference of the position coordinates is smaller than a first threshold value, and each group of face images is arranged according to the acquisition time to form a plurality of corresponding face movement tracks. It can be understood that the acquisition times corresponding to the plurality of face images in the face movement track formed based on the position change continuity are continuous. The server screens out the human face track of the non-target test person from the plurality of human face tracks based on the human face features of the non-target test person, and judges whether the non-target test person has a trend of leaving the target test area according to the human face track of the non-target test person. If the trend of leaving the target test area does not exist, the server sends out corresponding alarm information. For example, when the face track of the non-target test person is close to the key equipment part, the server sends out corresponding alarm information.
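A minimal greedy sketch of track formation based on position-change continuity, assuming unit-spaced acquisition times and an illustrative distance threshold; real tracking would also handle missed detections and appearance cues.

```python
def build_tracks(detections, dist_threshold=1.5):
    """detections: (time, x, y) face observations sorted by time.
    A detection extends a track when it occurs at the next time step and
    its position lies within dist_threshold of the track's last position
    (i.e. the position-coordinate difference is below the first threshold);
    otherwise it starts a new track."""
    tracks = []
    for t, x, y in detections:
        for track in tracks:
            lt, lx, ly = track[-1]
            if t == lt + 1 and ((x - lx) ** 2 + (y - ly) ** 2) ** 0.5 < dist_threshold:
                track.append((t, x, y))
                break
        else:
            tracks.append([(t, x, y)])   # no nearby predecessor -> new track
    return tracks

# Two people observed at times 0 and 1, far apart from each other.
dets = [(0, 0.0, 0.0), (0, 9.0, 9.0), (1, 0.5, 0.2), (1, 9.3, 8.8)]
tracks = build_tracks(dets)
```

Each resulting track is a time-ordered sequence of face positions for one person, which can then be checked against the target test area boundary for a leaving trend.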
In this embodiment, by determining the face track of the non-target test person, whether the non-target test person has a trend of leaving the target test area or not may be determined based on the face track of the non-target test person, and when the non-target test person does not have a corresponding trend, alarm information is sent, so that the security of the target test area may be improved.
It should be understood that, although the steps in the flowcharts of fig. 2 and 3 are shown in the order indicated by the arrows, these steps are not necessarily performed in that order. Unless explicitly stated herein, the execution order of the steps is not strictly limited, and the steps may be executed in other orders. Moreover, at least some of the steps in fig. 2 and 3 may include a plurality of sub-steps or stages that are not necessarily performed at the same time but may be performed at different times; the execution order of these sub-steps or stages is not necessarily sequential, and they may be performed in turn or alternately with at least some of the other steps or stages.
In one embodiment, as shown in fig. 4, there is provided a face recognition apparatus 400 including: an acquisition module 402, a current test procedure determining module 404, and a non-target test personnel determining module 406, wherein:
the acquisition module 402 is configured to acquire test information of a current nuclear power test, and a test video recorded for a target test area of the current nuclear power test within a preset duration.
The current test procedure determining module 404 is configured to determine a current test procedure in a current nuclear power test according to the test information and the test video.
A non-target test personnel determining module 406, configured to screen out more than one frame of face image corresponding to the current test procedure from the test video; cluster the more than one frame of face images to obtain at least one cluster; when the number of clusters does not match the number of target testers executing the current test procedure, determine the person identification corresponding to each cluster based on the face library; and determine non-target testers in the target test area according to the person identification corresponding to each cluster and the person identification of the target testers.
In one embodiment, as shown in fig. 5, the acquisition module 402 further includes a test identification code determining module 4021 configured to acquire a test identification code associated with the current nuclear power test; determine a test work order corresponding to the current nuclear power test based on the test identification code; and analyze the test work order to obtain the test information of the current nuclear power test.
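The identification-code lookup above is essentially a keyed retrieval followed by keyword extraction. The sketch below assumes a hypothetical work-order format and keyword list; none of the field names or codes appear in the patent.

```python
# Illustrative sketch: resolve a test identification code to a work order,
# then pull per-procedure equipment-part keywords out of the procedure text.
# The work-order structure and all identifiers here are invented.

WORK_ORDERS = {
    "NPT-2020-001": {
        "title": "periodic pump test",
        "procedures": [
            {"step": 1, "text": "open isolation valve V-101"},
            {"step": 2, "text": "start pump P-201 and record pressure"},
        ],
    },
}

EQUIPMENT_KEYWORDS = ("valve", "pump", "breaker", "gauge")

def parse_work_order(test_code):
    """Return {step: [equipment-part keywords found in that procedure's text]}."""
    order = WORK_ORDERS[test_code]
    info = {}
    for proc in order["procedures"]:
        words = proc["text"].lower().split()
        info[proc["step"]] = [w for w in words if w in EQUIPMENT_KEYWORDS]
    return info
```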
In one embodiment, the test information includes equipment part keywords corresponding to each test procedure; the current test procedure determining module 404 further includes a target video frame determining module 4041 configured to traverse at least one video frame in the test video and identify a current operating device part in each traversed video frame; when the current operation equipment part in the traversed video frame belongs to the target equipment part corresponding to the equipment part keyword, determining the traversed video frame as a target video frame; screening out target equipment part keywords corresponding to the current operation equipment parts in the target video frame from the equipment part keywords corresponding to each test procedure; and determining the current test procedure based on the target equipment part keywords.
In one embodiment, the target video frame determination module 4041 is further configured to traverse at least one video frame in the trial video; performing hand detection on the video frames in the current traversal sequence, and cutting out an equipment part image contacted with the hand area from the video frames in the current traversal sequence based on the detection result of the hand detection; and identifying the equipment part image to obtain the current operation equipment part in the video frame of the current traversal sequence.
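The hand-contact step above, after detection, comes down to geometry: find the equipment-part region that overlaps the detected hand region and crop it for recognition. The sketch below takes bounding boxes as inputs (a real system would get them from detectors, which are not specified in the patent) and only shows the overlap selection.

```python
# Sketch of selecting the equipment part touched by the hand: given a hand
# bounding box and candidate part boxes in a frame, return the label of the
# part box the hand overlaps. Boxes are (x1, y1, x2, y2); labels are invented.

def boxes_overlap(a, b):
    """Axis-aligned bounding-box overlap test."""
    return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]

def part_touched_by_hand(hand_box, part_boxes):
    """Return the label of the first part box that overlaps the hand box, else None."""
    for label, box in part_boxes.items():
        if boxes_overlap(hand_box, box):
            return label
    return None
```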
In one embodiment, the target video frame determining module 4041 is further configured to traverse the target video frames in the test video according to the acquisition time of each target video frame; when the current operation equipment part in the target video frame of the current traversal sequence is inconsistent with the current operation equipment part in the target video frame of the next sequence, taking the acquisition time of the target video frame of the current traversal sequence as a segmentation time point; dividing the test video according to the dividing time points to obtain at least one video segment; screening out target video clips from at least one video clip according to the test information, the acquisition time period of each video clip and the target video frames contained in each video clip; and carrying out face recognition on the video frames in the target video clips, and determining more than one frame of face image corresponding to the current test procedure according to the face recognition result.
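The segmentation logic above can be shown on abstract data: each target frame is reduced to a (timestamp, operating part) pair, and a split point is placed wherever the operated part changes between consecutive frames. This is a sketch under that simplification; actual frames and timestamps would come from the recorded test video.

```python
# Sketch of splitting the test video at part-change boundaries.
# target_frames: list of (timestamp, part) pairs, sorted by timestamp.

def split_points(target_frames):
    """Timestamps where the operated part differs from the next frame's part."""
    points = []
    for (t, part), (_, next_part) in zip(target_frames, target_frames[1:]):
        if part != next_part:
            points.append(t)
    return points

def segments(target_frames):
    """Group consecutive frames that share the same operating part into segments."""
    segs, current = [], []
    for frame in target_frames:
        if current and frame[1] != current[-1][1]:
            segs.append(current)
            current = []
        current.append(frame)
    if current:
        segs.append(current)
    return segs
```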
In one embodiment, the target video frame determining module 4041 is further configured to determine a target equipment part keyword corresponding to the current test procedure according to the equipment part keywords corresponding to each test procedure in the test information; determine the target equipment part corresponding to the target equipment part keyword; when the current operation equipment part in the target video frame belongs to the target equipment part, set the target video frame as a current test procedure video frame; screen out, from all the video clips, candidate video clips whose target video frames are all current test procedure video frames; and determine the acquisition time period of each video clip, and screen out the target video clip from the candidate video clips based on the acquisition time period.
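The clip-screening step above can be sketched as a filter followed by a selection. The patent does not specify how the acquisition time period ranks the candidates; choosing the latest candidate below is one plausible reading, stated here as an assumption.

```python
# Sketch of screening the target video clip: keep only clips all of whose
# target frames show the target equipment part, then pick the candidate
# whose acquisition period starts latest (an assumed selection rule).
# clips: list of {"frames": [(timestamp, part), ...]} dicts.

def target_clip(clips, target_part):
    candidates = [
        clip for clip in clips
        if all(part == target_part for _, part in clip["frames"])
    ]
    if not candidates:
        return None
    return max(candidates, key=lambda clip: clip["frames"][0][0])
```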
In one embodiment, the face recognition apparatus 400 further includes an alarm module 408 configured to determine the acquisition time of each face image and its position coordinates in the target test area; form more than one face movement track based on face images whose acquisition times are adjacent and whose position coordinates differ by less than a first threshold; and send out corresponding alarm information when it is judged, according to the face movement tracks, that a non-target test person shows no tendency to leave the target test area.
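The track-building and alarm logic above can be sketched as nearest-neighbour chaining of timestamped positions followed by a distance-to-exit trend check. The exit point, the linking threshold, and the trend criterion (distance to the exit shrinking from the start of the track to its end) are illustrative assumptions, not taken from the patent.

```python
# Sketch of forming face movement tracks and checking the leaving trend.
# detections: list of (timestamp, (x, y)) pairs, sorted by timestamp.

def build_tracks(detections, max_step=5.0):
    """Chain detections into tracks when the step to a track's last point is small."""
    tracks = []
    for t, pos in detections:
        for track in tracks:
            last_t, last_pos = track[-1]
            step = ((pos[0] - last_pos[0]) ** 2 + (pos[1] - last_pos[1]) ** 2) ** 0.5
            if t > last_t and step < max_step:
                track.append((t, pos))
                break
        else:
            tracks.append([(t, pos)])
    return tracks

def leaving_trend(track, exit_point):
    """True if the track's distance to the exit decreases from start to end."""
    def dist(p):
        return ((p[0] - exit_point[0]) ** 2 + (p[1] - exit_point[1]) ** 2) ** 0.5
    return dist(track[-1][1]) < dist(track[0][1])
```

Per the embodiment, the alarm fires for a non-target person whose track does *not* show a leaving trend, i.e. when `leaving_trend(...)` is false.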
For specific limitations of the face recognition apparatus, reference may be made to the limitations of the face recognition method above, which are not repeated here. The modules in the above face recognition apparatus may be implemented in whole or in part by software, hardware, or a combination thereof. The modules may be embedded in, or independent of, a processor of the computer device in hardware form, or may be stored in a memory of the computer device in software form, so that the processor can call and execute the operations corresponding to the modules.
In one embodiment, a computer device is provided, which may be a terminal whose internal structure may be as shown in fig. 6. The computer device includes a processor, a memory, a communication interface, a display screen, and an input device connected by a system bus. The processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for running the operating system and the computer program in the non-volatile storage medium. The communication interface of the computer device is used for wired or wireless communication with an external terminal; the wireless mode may be realized through Wi-Fi, an operator network, near field communication (NFC), or other technologies. The computer program, when executed by the processor, implements a face recognition method. The display screen of the computer device may be a liquid crystal display or an electronic ink display. The input device of the computer device may be a touch layer covering the display screen, may be a key, a trackball, or a touch pad provided on the housing of the computer device, or may be an external keyboard, touch pad, mouse, or the like.
It will be appreciated by those skilled in the art that the structure shown in fig. 6 is merely a block diagram of part of the structure related to the solution of the present application and does not limit the computer device to which the solution is applied; a specific computer device may include more or fewer components than shown, combine certain components, or have a different arrangement of components.
In one embodiment, a computer device is provided comprising a memory and a processor, the memory having stored therein a computer program, the processor when executing the computer program performing the steps of:
Acquiring test information of a current nuclear power test and a test video recorded for a target test area of the current nuclear power test within a preset time period;
Determining a current test procedure in a current nuclear power test according to the test information and the test video;
screening more than one frame of face image corresponding to the current test procedure from the test video;
clustering more than one frame of face image to obtain at least one cluster;
When the number of the clusters is not matched with the number of the personnel of the target test personnel for executing the current test procedure, determining personnel identification corresponding to each cluster based on a face library;
and determining non-target test personnel in the target test area according to the personnel identification corresponding to each cluster and the personnel identification of the target test personnel.
In one embodiment, the processor when executing the computer program further performs the steps of:
acquiring a test identification code associated with a current nuclear power test;
determining a test work order corresponding to the current nuclear power test based on the test identification code;
analyzing the test work order to obtain the test information of the current nuclear power test.
In one embodiment, the test information includes equipment part keywords corresponding to each test procedure; the processor when executing the computer program also implements the steps of:
Traversing at least one video frame in the test video, and identifying a current operating equipment part in each traversed video frame;
when the current operation equipment part in the traversed video frame belongs to the target equipment part corresponding to the equipment part keyword, determining the traversed video frame as a target video frame;
Screening out target equipment part keywords corresponding to the current operation equipment parts in the target video frame from the equipment part keywords corresponding to each test procedure;
and determining the current test procedure based on the target equipment part keywords.
in one embodiment, the processor when executing the computer program further performs the steps of:
Traversing at least one video frame in the test video;
Performing hand detection on the video frames in the current traversal sequence, and cutting out an equipment part image contacted with the hand area from the video frames in the current traversal sequence based on the detection result of the hand detection;
and identifying the equipment part image to obtain the current operation equipment part in the video frame of the current traversal sequence.
In one embodiment, the processor when executing the computer program further performs the steps of:
Traversing the target video frames in the test video according to the acquisition time of each target video frame;
when the current operation equipment part in the target video frame of the current traversal sequence is inconsistent with the current operation equipment part in the target video frame of the next sequence, taking the acquisition time of the target video frame of the current traversal sequence as a segmentation time point;
Dividing the test video according to the dividing time points to obtain at least one video segment;
screening out target video clips from at least one video clip according to the test information, the acquisition time period of each video clip and the target video frames contained in each video clip;
And carrying out face recognition on the video frames in the target video clips, and determining more than one frame of face image corresponding to the current test procedure according to the face recognition result.
In one embodiment, the processor when executing the computer program further performs the steps of:
determining target equipment part keywords corresponding to the current test procedure according to the equipment part keywords corresponding to each test procedure in the test information;
determining a target equipment part corresponding to the target equipment part keyword;
when the current operation equipment part in the target video frame belongs to the target equipment part, setting the target video frame as a current test procedure video frame;
screening candidate video clips of which target video frames are all the video frames of the current test procedure from all the video clips;
And determining the acquisition time period of each video segment, and screening target video segments from the candidate video segments based on the acquisition time period.
In one embodiment, the processor when executing the computer program further performs the steps of:
determining the acquisition time of each face image and the position coordinates in a target test area;
Forming more than one face movement track based on a plurality of face images with adjacent acquisition times and position coordinates with difference smaller than a first threshold value;
and when judging that the non-target person does not have a trend of leaving the target test area according to the face movement track, sending out corresponding alarm information.
In one embodiment, a computer readable storage medium is provided having a computer program stored thereon, which when executed by a processor, performs the steps of:
Acquiring test information of a current nuclear power test and a test video recorded for a target test area of the current nuclear power test within a preset time period;
Determining a current test procedure in a current nuclear power test according to the test information and the test video;
screening more than one frame of face image corresponding to the current test procedure from the test video;
clustering more than one frame of face image to obtain at least one cluster;
When the number of the clusters is not matched with the number of the personnel of the target test personnel for executing the current test procedure, determining personnel identification corresponding to each cluster based on a face library;
and determining non-target test personnel in the target test area according to the personnel identification corresponding to each cluster and the personnel identification of the target test personnel.
In one embodiment, the computer program when executed by the processor further performs the steps of:
acquiring a test identification code associated with a current nuclear power test;
determining a test work order corresponding to the current nuclear power test based on the test identification code;
analyzing the test work order to obtain the test information of the current nuclear power test.
In one embodiment, the test information includes equipment part keywords corresponding to each test procedure; the computer program when executed by the processor also performs the steps of:
Traversing at least one video frame in the test video, and identifying a current operating equipment part in each traversed video frame;
when the current operation equipment part in the traversed video frame belongs to the target equipment part corresponding to the equipment part keyword, determining the traversed video frame as a target video frame;
Screening out target equipment part keywords corresponding to the current operation equipment parts in the target video frame from the equipment part keywords corresponding to each test procedure;
and determining the current test procedure based on the target equipment part keywords.
In one embodiment, the computer program when executed by the processor further performs the steps of:
Traversing at least one video frame in the test video;
Performing hand detection on the video frames in the current traversal sequence, and cutting out an equipment part image contacted with the hand area from the video frames in the current traversal sequence based on the detection result of the hand detection;
and identifying the equipment part image to obtain the current operation equipment part in the video frame of the current traversal sequence.
In one embodiment, the computer program when executed by the processor further performs the steps of:
Traversing the target video frames in the test video according to the acquisition time of each target video frame;
when the current operation equipment part in the target video frame of the current traversal sequence is inconsistent with the current operation equipment part in the target video frame of the next sequence, taking the acquisition time of the target video frame of the current traversal sequence as a segmentation time point;
Dividing the test video according to the dividing time points to obtain at least one video segment;
screening out target video clips from at least one video clip according to the test information, the acquisition time period of each video clip and the target video frames contained in each video clip;
And carrying out face recognition on the video frames in the target video clips, and determining more than one frame of face image corresponding to the current test procedure according to the face recognition result.
In one embodiment, the computer program when executed by the processor further performs the steps of:
determining target equipment part keywords corresponding to the current test procedure according to the equipment part keywords corresponding to each test procedure in the test information;
determining a target equipment part corresponding to the target equipment part keyword;
when the current operation equipment part in the target video frame belongs to the target equipment part, setting the target video frame as a current test procedure video frame;
screening candidate video clips of which target video frames are all the video frames of the current test procedure from all the video clips;
And determining the acquisition time period of each video segment, and screening target video segments from the candidate video segments based on the acquisition time period.
In one embodiment, the computer program when executed by the processor further performs the steps of:
determining the acquisition time of each face image and the position coordinates in a target test area;
Forming more than one face movement track based on a plurality of face images with adjacent acquisition times and position coordinates with difference smaller than a first threshold value;
and when judging that the non-target person does not have a trend of leaving the target test area according to the face movement track, sending out corresponding alarm information.
Those skilled in the art will appreciate that all or part of the processes of the above method embodiments may be implemented by a computer program instructing relevant hardware; the computer program may be stored on a non-volatile computer-readable storage medium and, when executed, may include the processes of the above method embodiments. Any reference to memory, storage, database, or other medium used in the embodiments provided herein may include at least one of non-volatile and volatile memory. The non-volatile memory may include read-only memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, or the like. The volatile memory may include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in various forms, such as static random access memory (SRAM) and dynamic random access memory (DRAM).
The technical features of the above embodiments may be combined arbitrarily. For brevity of description, not all possible combinations of the technical features in the above embodiments are described; however, as long as a combination of technical features contains no contradiction, it should be considered within the scope of this description.
The above examples express only a few embodiments of the application; their description is relatively specific and detailed, but it should not be construed as limiting the scope of the application. It should be noted that those skilled in the art can make several variations and modifications without departing from the concept of the application, and these all fall within the protection scope of the application. Accordingly, the protection scope of the application is subject to the appended claims.

Claims (10)

1. A method of face recognition, the method comprising:
Acquiring test information of a current nuclear power test and a test video recorded for a target test area of the current nuclear power test within a preset time period;
determining a current test procedure in the current nuclear power test according to the test information and the test video;
Screening more than one frame of face image corresponding to the current test procedure from the test video;
clustering the more than one frame of face images to obtain at least one cluster;
when the number of the clusters is not matched with the number of the personnel of the target test personnel for executing the current test procedure, determining personnel identification corresponding to each cluster based on a face library;
determining non-target test personnel in the target test area according to the personnel identification corresponding to each cluster and the personnel identification of the target test personnel;
The test information comprises equipment part keywords corresponding to each test procedure;
determining a current test procedure in the current nuclear power test according to the test information and the test video, wherein the current test procedure comprises the following steps:
traversing at least one video frame in the test video, and identifying a current operating equipment part in each traversed video frame;
When the current operation equipment part in the traversed video frame belongs to a target equipment part corresponding to the equipment part keyword, determining the traversed video frame as a target video frame;
Screening out target equipment part keywords corresponding to the current operation equipment parts in the target video frame from the equipment part keywords corresponding to each test procedure;
and determining the current test procedure based on the target equipment part keywords.
2. The method of claim 1, wherein the obtaining test information for the current nuclear power test comprises:
acquiring a test identification code associated with a current nuclear power test;
Determining a test work order corresponding to the current nuclear power test based on the test identification code;
and analyzing the test work order to obtain the test information of the current nuclear power test.
3. The method of claim 1, wherein traversing at least one video frame of the trial video and identifying a current operating device part in each video frame traversed comprises:
Traversing at least one video frame in the test video;
Performing hand detection on the video frames in the current traversal sequence, and cutting out an equipment part image contacted with a hand area from the video frames in the current traversal sequence based on the detection result of the hand detection;
and identifying the equipment part image to obtain the current operation equipment part in the video frame of the current traversal sequence.
4. The method of claim 1, wherein the screening out more than one frame of face images from the test video corresponding to the current test procedure comprises:
traversing the target video frames in the test video according to the acquisition time of each target video frame;
When the current operation equipment part in the target video frame of the current traversal sequence is inconsistent with the current operation equipment part in the target video frame of the next sequence, taking the acquisition time of the target video frame of the current traversal sequence as a segmentation time point;
dividing the test video according to the dividing time points to obtain at least one video segment;
Screening out target video clips from the at least one video clip according to the test information, the acquisition time period of each video clip and the target video frames contained in each video clip;
and carrying out face recognition on the video frames in the target video segment, and determining more than one frame of face image corresponding to the current test procedure according to the face recognition result.
5. The method of claim 4, wherein the screening out the target video clip from the at least one video clip based on the test information, the acquisition time period of each video clip, and the target video frame included in each video clip, comprises:
determining target equipment part keywords corresponding to the current test procedure according to the equipment part keywords corresponding to each test procedure in the test information;
determining target equipment parts corresponding to the target equipment part keywords;
when the current operation equipment part in the target video frame belongs to the target equipment part, setting the target video frame as a current test procedure video frame;
Screening candidate video clips of which target video frames are all the video frames of the current test procedure from the video clips;
and determining the acquisition time period of each video segment, and screening target video segments from the candidate video segments based on the acquisition time period.
6. The method according to any one of claims 1 to 5, further comprising:
determining the acquisition time of each face image and the position coordinates in the target test area;
forming more than one face movement track based on a plurality of face images with adjacent acquisition times and the difference of the position coordinates smaller than a first threshold value;
and when judging that the non-target test personnel do not have a tendency to leave the target test area according to the face movement track, sending corresponding alarm information.
7. A face recognition device, the device comprising:
The acquisition module is used for acquiring test information of a current nuclear power test and a test video recorded in a preset time aiming at a target test area of the current nuclear power test; the test information comprises equipment part keywords corresponding to each test procedure;
The current test procedure determining module is used for traversing at least one video frame in the test video and identifying current operation equipment parts in each traversed video frame; when the current operation equipment part in the traversed video frame belongs to a target equipment part corresponding to the equipment part keyword, determining the traversed video frame as a target video frame; screening out target equipment part keywords corresponding to the current operation equipment parts in the target video frame from the equipment part keywords corresponding to each test procedure; determining a current test procedure based on the target equipment part keywords;
The non-target test person determining module is used for screening more than one frame of face image corresponding to the current test procedure from the test video; clustering the more than one frame of face images to obtain at least one cluster; when the number of the clusters is not matched with the number of the personnel of the target test personnel for executing the current test procedure, determining personnel identification corresponding to each cluster based on a face library; and determining non-target test personnel in the target test area according to the personnel identification corresponding to each cluster and the personnel identification of the target test personnel.
8. The apparatus of claim 7, wherein the acquisition module further comprises a test identification code determination module to acquire a test identification code associated with a current nuclear power test; determining a test work order corresponding to the current nuclear power test based on the test identification code; and analyzing the test work order to obtain the test information of the current nuclear power test.
9. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor implements the steps of the method of any of claims 1 to 6 when the computer program is executed.
10. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the steps of the method of any of claims 1 to 6.
CN202011145855.5A 2020-10-23 2020-10-23 Face recognition method, device, computer equipment and storage medium Active CN112364714B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011145855.5A CN112364714B (en) 2020-10-23 2020-10-23 Face recognition method, device, computer equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011145855.5A CN112364714B (en) 2020-10-23 2020-10-23 Face recognition method, device, computer equipment and storage medium

Publications (2)

Publication Number Publication Date
CN112364714A CN112364714A (en) 2021-02-12
CN112364714B true CN112364714B (en) 2024-07-19

Family

ID=74511830

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011145855.5A Active CN112364714B (en) 2020-10-23 2020-10-23 Face recognition method, device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112364714B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112967016A (en) * 2021-03-02 2021-06-15 岭东核电有限公司 Nuclear power test procedure confirmation method and device, computer equipment and storage medium

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110263703A (en) * 2019-06-18 2019-09-20 腾讯科技(深圳)有限公司 Personnel's flow statistical method, device and computer equipment

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130148898A1 (en) * 2011-12-09 2013-06-13 Viewdle Inc. Clustering objects detected in video
US20140198998A1 (en) * 2013-01-14 2014-07-17 Samsung Electronics Co., Ltd. Novel criteria for gaussian mixture model cluster selection in scalable compressed fisher vector (scfv) global descriptor
CN108509896B (en) * 2018-03-28 2020-10-13 腾讯科技(深圳)有限公司 Trajectory tracking method and device and storage medium

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110263703A (en) * 2019-06-18 2019-09-20 腾讯科技(深圳)有限公司 Personnel's flow statistical method, device and computer equipment

Also Published As

Publication number Publication date
CN112364714A (en) 2021-02-12

Similar Documents

Publication Publication Date Title
Zhang et al. Too far to see? Not really!—Pedestrian detection with scale-aware localization policy
CN110490078B (en) Monitoring video processing method, device, computer equipment and storage medium
CN108733819A (en) A kind of personnel's archives method for building up and device
CN108537119B (en) Small sample video identification method
CN111680672B (en) Face living body detection method, system, device, computer equipment and storage medium
CN111160275B (en) Pedestrian re-recognition model training method, device, computer equipment and storage medium
CN112183468A (en) Pedestrian re-identification method based on multi-attention combined multi-level features
KR101996371B1 (en) System and method for creating caption for image and computer program for the same
KR20220076398A (en) Object recognition processing apparatus and method for ar device
CN112507860A (en) Video annotation method, device, equipment and storage medium
CN112380922B (en) Method, device, computer equipment and storage medium for determining multiple video frames
CN112364714B (en) Face recognition method, device, computer equipment and storage medium
CN112101154B (en) Video classification method, apparatus, computer device and storage medium
CN112580616B (en) Crowd quantity determination method, device, equipment and storage medium
CN113705650A (en) Processing method, device, medium and computing equipment for face picture set
CN113705293A (en) Image scene recognition method, device, equipment and readable storage medium
CN116189063B (en) Key frame optimization method and device for intelligent video monitoring
CN111783725A (en) Face recognition method, face recognition device and storage medium
CN111539390A (en) Small target image identification method, equipment and system based on Yolov3
CN114332716B (en) Clustering method and device for scenes in video, electronic equipment and storage medium
CN114495188B (en) Image data processing method and device and related equipment
CN112258513A (en) Nuclear power test video segmentation method and device, computer equipment and storage medium
CN113592902A (en) Target tracking method and device, computer equipment and storage medium
CN111191563A (en) Face recognition method and system based on data sample and test data set training
Cheng et al. Class attendance checking system based on deep learning and global optimization

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant