CN111652139A - Face snapshot method, snapshot device and storage device - Google Patents


Info

Publication number
CN111652139A
CN111652139A
Authority
CN
China
Prior art keywords
face
face image
person
snapshotted
snapshot
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010495794.9A
Other languages
Chinese (zh)
Inventor
谢凡凡
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Dahua Technology Co Ltd
Original Assignee
Zhejiang Dahua Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Dahua Technology Co Ltd filed Critical Zhejiang Dahua Technology Co Ltd
Priority to CN202010495794.9A priority Critical patent/CN111652139A/en
Publication of CN111652139A publication Critical patent/CN111652139A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161: Detection; Localisation; Normalisation
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/04: Architecture, e.g. interconnection topology
    • G06N3/045: Combinations of networks
    • G06N3/08: Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Biomedical Technology (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Computational Linguistics (AREA)
  • Artificial Intelligence (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The application provides a face snapshot method, a snapshot device and a storage device. The face snapshot method includes: acquiring a face image from a video frame of a monitoring video; judging whether the quality of the face image meets a first preset condition; if yes, caching the face image into the face image cache queue corresponding to the same person to be snapshotted; otherwise, discarding the face image; judging whether any currently snapshotted person has disappeared from the monitoring video; and if so, selecting, from all face images in the face image cache queue of the disappeared person to be snapshotted, the face image that best meets a second preset condition and outputting it as the face snapshot image corresponding to that person. In this way, a face image with good image quality can be selected and output over the time period from when the snapshotted person enters the monitoring video until the person finally leaves it.

Description

Face snapshot method, snapshot device and storage device
Technical Field
The present application relates to the field of monitoring technologies, and in particular, to a face snapshot method, a snapshot apparatus, and a storage apparatus.
Background
Video monitoring technology is now widely applied. Various intelligent security projects based on artificial intelligence (AI) technology have been implemented to safeguard people's lives and property, and they also make a great contribution to fighting crime and capturing fugitives. Effective monitoring and management of personnel is a key point, and face snapshot technology is an effective means to that end.
However, in surveillance video application scenarios the quality of captured faces is often low, being affected by various factors such as face pose, expression, blur, brightness, occlusion and so on. There is therefore a need for a face snapshot method capable of improving the quality of the snapshot face image.
Disclosure of Invention
The technical problem mainly solved by the application is to provide a face snapshot method, a snapshot device and a storage device, so that a face image with good image quality can be selected and output within the time period from when a person to be snapshotted enters the monitoring video until that person leaves it.
In order to solve the above problem, a first aspect of the present application provides a face snapshot method, including: acquiring a face image from a video frame of a monitoring video; judging whether the quality of the face image meets a first preset condition; if yes, caching the face image into the face image cache queue corresponding to the same person to be snapshotted; otherwise, discarding the face image; judging whether any currently snapshotted person has disappeared from the monitoring video; and if so, selecting, from all face images in the face image cache queue of the disappeared person to be snapshotted, the face image that best meets a second preset condition and outputting it as the face snapshot image corresponding to that person.
In order to solve the above technical problem, a second aspect of the present application provides a face snapshot apparatus, including a memory and a processor, which are coupled to each other, where the memory stores program instructions, and the processor is configured to execute the program instructions to implement the face snapshot method described in any one of the above embodiments.
In order to solve the above technical problem, a third aspect of the present application provides a storage device, which stores program instructions capable of being executed by a processor, where the program instructions are used to implement the face snapshot method in any one of the above embodiments.
The application thus provides a two-stage face snapshot method. In the first stage, the face images whose quality meets the first preset condition in each video frame of the monitoring video are cached into the face image cache queue of the corresponding person to be snapshotted, which saves storage space in the system. In the second stage, when a person to be snapshotted disappears from the monitoring video, the face image that best meets the second preset condition is selected from all face images in the cache queue of that person and output as the corresponding face snapshot image. In this way the best face is captured from a global perspective, i.e. a face image with better image quality is selected and output within the time period from when the snapshotted person enters the monitoring video until the person finally leaves it.
In addition, four face quality evaluation indexes are selected according to the practical application scene of face snapshot: face size, definition, integrity and pose. Definition, integrity and pose are each scored by models trained with deep learning, which are both accurate and fast, making them particularly suitable for snapshot scenes.
Drawings
In order to illustrate the technical solutions in the embodiments of the present application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. The drawings described here are only some embodiments of the present application; other drawings can be obtained from them by those skilled in the art without creative effort. Wherein:
FIG. 1 is a schematic flow chart of an embodiment of a face snapshot method according to the present application;
fig. 2 is a schematic structural diagram of an embodiment of a CNN network model;
FIG. 3 is a flowchart illustrating an embodiment corresponding to step S103 in FIG. 1;
FIG. 4 is a flowchart illustrating an embodiment corresponding to step S106 in FIG. 1;
FIG. 5 is a schematic diagram of a frame of an embodiment of a face capture device according to the present application;
fig. 6 is a schematic structural diagram of an embodiment of a face capture device according to the present application;
fig. 7 is a schematic structural diagram of an embodiment of a storage device according to the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Referring to fig. 1, fig. 1 is a schematic flow chart of an embodiment of a face snapshot method according to the present application, where the face snapshot method includes:
s101: and obtaining a face image from a video frame of the monitoring video.
Specifically, the monitoring video may be acquired by any monitoring camera. The camera may be located at high-traffic locations such as subway entrances, airports, stations, highway intersections and passenger terminals, or at the entrances and exits of supermarkets, residential areas, movie theaters, schools, libraries, scenic spots, industrial parks and the like. In general, the monitoring camera acquires video continuously, so step S101 may likewise run continuously: the step is performed whenever a video frame acquired by the monitoring camera is received.
In addition, any face detection algorithm in the prior art may be used to perform face detection on each video frame of the surveillance video, for example the ACF algorithm, the DPM algorithm, a CNN-based algorithm, etc., so as to obtain the face images corresponding to all face frames in each video frame. A face frame may be a rectangle, in which case the corresponding face image obtained from it is also rectangular.
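As a minimal sketch of this step, the helper below crops rectangular face frames out of a video frame. The detector that produces the boxes (ACF, DPM, CNN or any other) is outside the sketch, and the function name `crop_faces` and the `(x, y, w, h)` box convention are assumptions for illustration, not part of the application.

```python
import numpy as np

def crop_faces(frame: np.ndarray, boxes):
    """Cut each rectangular face frame (x, y, w, h) out of a video frame.

    The face detector that produces `boxes` is not specified here; any
    prior-art detector (ACF, DPM, CNN, ...) could fill that role.
    """
    crops = []
    frame_h, frame_w = frame.shape[:2]
    for (x, y, w, h) in boxes:
        # Clamp the box to the frame so boxes at the image border stay valid.
        x0, y0 = max(0, x), max(0, y)
        x1, y1 = min(frame_w, x + w), min(frame_h, y + h)
        crops.append(frame[y0:y1, x0:x1])
    return crops
```

Each crop then proceeds to the quality judgment of step S102.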
S102: and judging whether the quality of the face image meets a first preset condition or not.
Specifically, in this embodiment, step S102 may be performed separately for each face image obtained in step S101, and it specifically includes: judging whether the definition and the integrity of the face image respectively meet corresponding preset conditions; that is, the first preset condition covers both definition and integrity. Of course, in other embodiments the first preset condition may differ, for example covering definition and face pose.
Further, in one application scenario, judging whether the definition and the integrity of the face image respectively satisfy the corresponding preset conditions includes: obtaining a definition score of the face image using a face definition deep learning model, and obtaining an integrity score of the face image using a face integrity deep learning model; and judging whether the definition score meets a first threshold and the integrity score meets a second threshold. The specific values of the first and second thresholds can be set by the user according to the monitoring scene.
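The two-threshold gate just described fits in a few lines. The 0.5 defaults below are illustrative placeholders for the user-set thresholds, and the function name is an invention of this sketch; the application only fixes the structure of the check.

```python
def meets_first_condition(definition_score, integrity_score,
                          definition_threshold=0.5, integrity_threshold=0.5):
    """First preset condition: both scores, each in [0, 1] and produced by
    the respective deep learning model, must reach their threshold.

    The 0.5 defaults are illustrative; in practice the user tunes both
    thresholds to the monitoring scene.
    """
    return (definition_score >= definition_threshold
            and integrity_score >= integrity_threshold)
```

A face image failing the gate is discarded (step S104); one passing it is cached (step S103).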
Both the face definition deep learning model and the face integrity deep learning model may be CNN network models. As shown in fig. 2, fig. 2 is a schematic structural diagram of an embodiment of a CNN network model. The CNN network model is a convolutional neural network, a common model in deep learning, mainly composed of convolutional layers, pooling layers and fully connected layers. For example, in this embodiment the CNN network model includes three convolutional layers, three pooling layers and one fully connected layer, and finally outputs a definition score or an integrity score.
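To make the three-conv/three-pool structure concrete, the helper below tracks how the spatial size of a square input shrinks before the fully connected layer. The 3x3 kernels, stride 1, no padding and 2x2 pooling are assumptions for illustration; the application fixes only the layer counts, not these hyperparameters.

```python
def spatial_size_after_cnn(size, stages=3, kernel=3, pool=2):
    """Spatial size of a square feature map after `stages` repetitions of
    a valid 3x3 convolution followed by 2x2 max pooling.

    Kernel size, stride and padding are illustrative assumptions; the
    text only fixes three convolutional layers, three pooling layers
    and one fully connected layer producing the final score.
    """
    for _ in range(stages):
        size -= kernel - 1   # valid convolution, stride 1
        size //= pool        # non-overlapping max pooling
    return size
```

For a 64x64 face crop this leaves a 6x6 map, which the fully connected layer maps to a single score in [0, 1].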
Therefore, before step S102, the face snapshot method provided by the present application further includes: constructing an initial face definition deep learning model based on a CNN network model and constructing a face definition data set; and training the initial face definition deep learning model with the face definition data set to obtain a trained face definition deep learning model. Face definition expresses whether a face image is clear or blurred; a snapshot face with high definition is more distinguishable. The face definition data set may contain face images at several levels of definition (e.g., 5 levels), with definition scores in the range [0, 1]: the closer the score is to 1, the clearer the face image, and the closer to 0, the more blurred it is.
Meanwhile, before step S102, the face snapshot method provided by the present application further includes: constructing an initial face integrity deep learning model based on a CNN network model and constructing a face integrity data set, in which the ratio of the visible face region to the complete face is used as the regression label score; and training the initial face integrity deep learning model with the face integrity data set to obtain a trained face integrity deep learning model. Face integrity indicates whether the face is intact or parts of it are missing, and the integrity score lies in the range [0, 1]: the closer the score is to 1, the more complete the face image; the closer to 0, the more incomplete it is.
S103: if so, caching the face image into a corresponding face image cache queue of the same person to be snapshotted.
Specifically, in this embodiment, please refer to fig. 3, which is a flowchart illustrating an embodiment corresponding to step S103 in fig. 1. Step S103 specifically includes:
S201: judging whether a face image cache queue of the same person to be snapshotted already exists.
Specifically, in this embodiment, all face frames in each video frame may be tracked using a face tracking technique (e.g., the FHOG algorithm, the GoTurn algorithm, etc.). For each video frame and each face frame, the tracker outputs a flag indicating whether tracking succeeded together with the tracked position, and the same face ID is set for face images of the same tracked face, i.e. for the same person to be snapshotted. The judgment in step S201 may therefore be: judging whether a face ID corresponding to the current face image already exists.
S202: if it exists, caching the face image into the face image cache queue of that person to be snapshotted.
Specifically, if a face ID corresponding to the face image already exists, the person to be snapshotted has appeared in the monitoring video before and a corresponding face image cache queue has already been created, so it is only necessary to find that queue and cache the current face image into it.
S203: otherwise, creating a face image cache queue for the person to be snapshotted corresponding to the face image, and caching the face image into it.
Specifically, if no face ID corresponding to the face image exists, the person to be snapshotted appears in the monitoring video for the first time. In this case a face image cache queue for that person is created first, and the face image is then cached into the newly created queue.
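Steps S201 to S203 condense into a small per-ID buffer. The class and method names below are invented for this sketch, and the tracker that supplies `face_id` is assumed to have run beforehand.

```python
from collections import defaultdict

class FaceCacheQueues:
    """One face image cache queue per tracked face ID (one per person).

    push() covers steps S202/S203: a queue is created implicitly the
    first time an ID is seen, and appended to on later frames.
    pop_person() hands back (and clears) a person's queue once the
    person has disappeared from the video.
    """
    def __init__(self):
        self._queues = defaultdict(list)

    def has_queue(self, face_id):          # step S201
        return face_id in self._queues

    def push(self, face_id, face_image):   # steps S202/S203
        self._queues[face_id].append(face_image)

    def pop_person(self, face_id):
        return self._queues.pop(face_id, [])
```

Only images that already passed the first preset condition are pushed, which is what keeps the queues small.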
S104: otherwise, the face image is discarded.
S105: and judging whether the current snapshotted person disappears from the monitoring video.
Specifically, whether a person to be snapshotted has disappeared from the monitoring video can be determined by judging whether the number of consecutive video frames in which that person is absent exceeds a set value. The specific value may be set according to the actual scene, for example 16 frames or 20 frames.
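The consecutive-absence check can be sketched as a per-ID counter. The function name and the `missed` dictionary are assumptions of this sketch; 16 is taken as the set value because it is one of the example values mentioned above (20 would behave the same way).

```python
def update_disappearance(missed, ids_in_frame, max_missed=16):
    """Track consecutive absence per face ID and report who has left.

    `missed` maps face ID -> number of consecutive frames without a
    detection of that person. A person is declared disappeared once the
    count exceeds `max_missed` consecutive frames.
    """
    gone = []
    for face_id in list(missed):
        if face_id in ids_in_frame:
            missed[face_id] = 0            # seen again: reset the counter
        else:
            missed[face_id] += 1
            if missed[face_id] > max_missed:
                gone.append(face_id)       # disappeared from the video
                del missed[face_id]
    for face_id in ids_in_frame:
        missed.setdefault(face_id, 0)      # start tracking new arrivals
    return gone
```

Each ID returned in `gone` triggers step S106 on that person's cache queue.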
S106: if so, selecting the face image that best meets a second preset condition from all face images in the face image cache queue of the disappeared person to be snapshotted, and outputting it as the face snapshot image corresponding to that person.
Specifically, by the time step S106 runs, all face images meeting the first preset condition (e.g., on definition and integrity) have been stored in the corresponding face image cache queue over the whole period from when the now-disappeared snapshotted person first appeared in the monitoring video until the person finally left it.
Further, in this embodiment, please refer to fig. 4, which is a flowchart illustrating an embodiment corresponding to step S106 in fig. 1. Step S106 specifically includes:
S301: obtaining the pose scores of all face images in the face image cache queue of the currently disappeared person to be snapshotted.
Specifically, step S301 includes: obtaining the pitch angle, yaw angle and roll angle corresponding to each face image using a face pose deep learning model, where the pitch angle ranges from -70 to 60 degrees and the yaw and roll angles each range from -90 to 90 degrees; and obtaining a corresponding pose score from the sum of the absolute values of the pitch, yaw and roll angles of each face image, the pose score being inversely related to that sum: the larger the sum of the three absolute angles, the lower the pose score. The pose score measures whether the face is turned sideways, lowered, and so on. It may lie in [0, 1]: the closer the score is to 1, the more frontal the face image; the closer to 0, the less frontal it is.
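One way to realize such a score is the linear mapping below. The text only requires the score to fall as the sum of the absolute angles grows; the linear form and the normalizer (the largest possible absolute pitch plus yaw plus roll) are illustrative choices of this sketch.

```python
def pose_score(pitch, yaw, roll, max_sum=70.0 + 90.0 + 90.0):
    """Pose score in [0, 1] from head angles in degrees: 1 means frontal.

    Inversely related to |pitch| + |yaw| + |roll|, as the text requires;
    the linear mapping and the normalizing constant are illustrative
    assumptions, not prescribed by the application.
    """
    total = abs(pitch) + abs(yaw) + abs(roll)
    return max(0.0, 1.0 - total / max_sum)
```

Any monotonically decreasing function of the angle sum would serve equally well here.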
In addition, the above-mentioned face pose deep learning model may also be a CNN network model, with a structure as shown in fig. 2. Before step S301, the face snapshot method provided by the present application further includes: constructing an initial face pose deep learning model based on a CNN network model; and training the initial face pose deep learning model with a training set to obtain a trained face pose deep learning model.
S302: selecting, from the face images with the highest pose scores, the one with the largest face, and outputting it as the face snapshot image corresponding to the person to be snapshotted.
Specifically, a face snapshot scene should capture as large a face as possible, and the sizes of all face images are available once face detection has been performed on the video frames. For example, if the face image is a rectangle, its face size is the product of its width and height in pixels.
In addition, in this embodiment, step S302 may proceed as follows: the face images are sorted in descending order of pose score, and the face image with the largest face is selected from the top portion of that ordering (e.g. the top 1/2 or 1/3) and output as the face snapshot image corresponding to the person to be snapshotted. The video frame containing that face image may also be output.
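The two-step selection can be sketched directly. The tuple layout and function name are inventions of this sketch; 1/2 is used as the kept fraction because it is one of the two example fractions mentioned above.

```python
def select_snapshot(faces, keep_fraction=0.5):
    """faces: list of (pose_score, width_px, height_px, image) tuples.

    Mirrors step S302: sort by pose score descending, keep the top
    fraction (at least one image; 1/2 here, 1/3 being the other example
    in the text), then return the entry with the largest face area,
    i.e. width times height in pixels.
    """
    ranked = sorted(faces, key=lambda f: f[0], reverse=True)
    top = ranked[:max(1, int(len(ranked) * keep_fraction))]
    return max(top, key=lambda f: f[1] * f[2])
```

The returned entry is the face snapshot image output for the disappeared person.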
In summary, the present application provides a two-stage face snapshot method. In the first stage, the face images whose quality meets the first preset condition in each video frame of the surveillance video are cached into the face image cache queue of the corresponding person to be snapshotted, which saves storage space in the system. In the second stage, when a person to be snapshotted disappears from the monitoring video, the face image that best meets the second preset condition is selected from all face images in the cache queue of that person and output as the corresponding face snapshot image. In this way the best face is captured from a global perspective, i.e. a face image with better image quality is selected and output within the time period from when the captured person enters the monitoring video until the person finally leaves it. In addition, four face quality evaluation indexes are selected according to the practical application scene of face snapshot: face size, definition, integrity and pose. Definition, integrity and pose are each scored by models trained with deep learning, which are both accurate and fast, making them particularly suitable for snapshot scenes.
Referring to fig. 5, fig. 5 is a schematic frame diagram of an embodiment of a face capturing device according to the present application. The face capturing device includes: an obtaining module 10, a first judging module 12, a first executing module 14, a second executing module 16, a second judging module 18 and a third executing module 11.
The obtaining module 10 is configured to obtain a face image from a video frame of a surveillance video. The first judging module 12 is configured to judge whether the quality of the face image meets a first preset condition. The first executing module 14 is configured to cache the face image into the face image cache queue of the corresponding person to be snapshotted when the first judging module 12 determines that the condition is met. The second executing module 16 is configured to discard the face image when the first judging module 12 determines that the condition is not met. The second judging module 18 is configured to judge whether any currently snapshotted person has disappeared from the monitored video, for example by judging whether the number of consecutive video frames in which the person is absent exceeds a set value. The third executing module 11 is configured to, when the second judging module 18 determines that a person has disappeared, pick out the face image that best meets a second preset condition from all face images in that person's face image cache queue and output it as the face snapshot image corresponding to that person.
In an embodiment, the first judging module 12 is specifically configured to judge whether the definition and the integrity of the face image respectively satisfy corresponding preset conditions. For example, the first judging module 12 includes a first obtaining submodule and a first judging submodule. The first obtaining submodule is configured to obtain the definition score of the face image with the face definition deep learning model and the integrity score with the face integrity deep learning model. The first judging submodule is coupled to the first obtaining submodule and configured to judge whether the definition score meets a first threshold and the integrity score meets a second threshold.
In another embodiment, the third executing module 11 includes a second obtaining submodule and a first selecting submodule. The second obtaining submodule is configured to obtain the pose scores of all face images in the face image cache queue of the currently disappeared person to be snapshotted, for example by obtaining the pitch, yaw and roll angles corresponding to each face image with a face pose deep learning model and deriving the pose score from the sum of the absolute values of those three angles, the pose score being inversely related to that sum. The first selecting submodule is configured to select, from the face images with the highest pose scores, the one with the largest face and output it as the face snapshot image corresponding to the person to be snapshotted, where the face image is rectangular and its face size is the product of its width and height in pixels.
In another embodiment, the first executing module 14 specifically includes a second judging submodule and a second executing submodule. The second judging submodule is configured to judge whether a face image cache queue of the same person to be snapshotted already exists. The second executing submodule is configured to cache the face image into that queue when the second judging submodule determines that it exists; otherwise, to create a face image cache queue for the person to be snapshotted corresponding to the face image and cache the face image into it.
Referring to fig. 6, fig. 6 is a schematic structural diagram of an embodiment of a face capture device according to the present application. The capturing device comprises a memory 20 and a processor 22 which are coupled to each other, wherein the memory 20 stores program instructions, and the processor 22 is used for executing the program instructions to implement the capturing method in any one of the above embodiments.
In particular, the processor 22 is configured to control itself and the memory 20 to implement the steps in any of the above described embodiments of the snap-shot method. The processor 22 may also be referred to as a CPU (Central Processing Unit). The processor 22 may be an integrated circuit chip having signal processing capabilities. The Processor 22 may also be a general purpose Processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other Programmable logic device, discrete Gate or transistor logic, discrete hardware components. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like. In addition, processor 22 may be commonly implemented by a plurality of integrated circuit chips.
Referring to fig. 7, fig. 7 is a schematic structural diagram of an embodiment of a storage device according to the present application. The storage device 30 stores program instructions 300 that can be executed by a processor, the program instructions 300 being used to implement the steps in any of the above embodiments of the face snapshot method.
In the several embodiments provided in the present application, it should be understood that the disclosed method and apparatus may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, a division of a module or a unit is merely a logical division, and an actual implementation may have another division, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of devices or units through some interfaces, and may be in an electrical, mechanical or other form.
Units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application may be substantially implemented or contributed to by the prior art, or all or part of the technical solution may be embodied in a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor (processor) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
The above description is only for the purpose of illustrating embodiments of the present application and is not intended to limit the scope of the present application, and all modifications of equivalent structures and equivalent processes, which are made by the contents of the specification and the drawings of the present application or are directly or indirectly applied to other related technical fields, are also included in the scope of the present application.

Claims (10)

1. A face snapshot method, characterized by comprising the following steps:
acquiring a face image from a video frame of a monitoring video;
judging whether the quality of the face image meets a first preset condition or not;
if yes, caching the face image into the face image cache queue corresponding to the same snapshotted person; otherwise, discarding the face image;
judging whether a currently snapshotted person has disappeared from the monitoring video;
if yes, selecting, from all face images in the face image cache queue of the disappeared snapshotted person, the face image that best satisfies a second preset condition, and outputting it as the face snapshot image corresponding to that snapshotted person.
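As an illustrative, non-normative sketch, the pipeline of claim 1 can be expressed in Python as follows. The face representation, the upstream detector/tracker that yields `(person_id, face)` pairs per frame, and the `quality_ok`, `select_best` and `gone` callables are assumptions for the sketch, not part of the claim:

```python
from collections import defaultdict

def run_snapshot(frames, quality_ok, select_best, gone):
    """Sketch of claim 1: gate each detected face by quality, cache it
    per person, and when a person disappears from the video, output one
    best face from that person's cache queue."""
    queues = defaultdict(list)   # person id -> cached face images
    snapshots = {}               # person id -> chosen snapshot image
    for frame in frames:
        # frame is assumed to be a list of (person_id, face) detections
        for person_id, face in frame:
            if quality_ok(face):             # first preset condition
                queues[person_id].append(face)
        # flush the queue of anyone who has left the video
        for person_id in list(queues):
            if gone(person_id, frame):
                snapshots[person_id] = select_best(queues.pop(person_id))
    return snapshots
```

In practice `quality_ok` would wrap the clarity/integrity checks of claims 2-3 and `select_best` the pose-and-size selection of claims 4-6; here any callables of the right shape will do.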
2. The face snapshot method according to claim 1, wherein the determining whether the quality of the face image meets a first preset condition includes:
and judging whether the clarity and the integrity of the face image respectively satisfy corresponding preset conditions.
3. The face snapshot method according to claim 2, wherein the judging whether the clarity and the integrity of the face image respectively satisfy corresponding preset conditions comprises:
obtaining a clarity score of the face image by using a face clarity deep learning model, and obtaining an integrity score of the face image by using a face integrity deep learning model;
and judging whether the clarity score meets a first threshold and whether the integrity score meets a second threshold.
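The two-threshold quality gate of claims 2-3 reduces to a short predicate. In this sketch the two deep learning models are stood in for by arbitrary scoring callables, and the threshold values are illustrative assumptions:

```python
def quality_ok(face, clarity_model, integrity_model,
               clarity_threshold=0.5, integrity_threshold=0.5):
    """Claims 2-3 sketch: a face image passes the first preset
    condition only if BOTH its clarity score and its integrity score
    reach their respective thresholds."""
    return (clarity_model(face) >= clarity_threshold and
            integrity_model(face) >= integrity_threshold)
```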
4. The face snapshot method according to claim 2, wherein the selecting, from all face images in the face image cache queue of the currently disappeared snapshotted person, the face image that best satisfies a second preset condition, and outputting it as the face snapshot image corresponding to the snapshotted person comprises:
obtaining pose scores of all face images in the face image cache queue of the currently disappeared snapshotted person;
and selecting, from among the face images with higher pose scores, the face image with the largest face size, and outputting it as the face snapshot image corresponding to the snapshotted person.
5. The face snapshot method according to claim 4, wherein the obtaining pose scores of all face images in the face image cache queue of the currently disappeared snapshotted person comprises:
obtaining a pitch angle, a yaw angle and a roll angle corresponding to each face image by using a face pose deep learning model;
and obtaining the corresponding pose score from the sum of the absolute values of the pitch angle, the yaw angle and the roll angle of each face image, wherein the pose score is inversely proportional to the sum.
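Claim 5 only requires the pose score to be inversely proportional to the angle sum; the exact mapping is not specified. One convenient illustrative choice is `1 / (1 + sum)`, which gives a perfectly frontal face a score of 1.0:

```python
def pose_score(pitch, yaw, roll):
    """Claim 5 sketch: the score falls as the head turns away from
    frontal. The 1/(1 + s) mapping is an assumption; any decreasing
    function of s would satisfy the claim."""
    s = abs(pitch) + abs(yaw) + abs(roll)
    return 1.0 / (1.0 + s)
```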
6. The face snapshot method of claim 4,
the face image is rectangular, and the face size of the face image is the product of the width pixel value and the height pixel value of the face image.
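Claims 4 and 6 together describe a two-stage selection: keep the faces with higher pose scores, then pick the one with the largest pixel area. This sketch assumes each face is a `(pose_score, width, height)` tuple and uses a `top_k` cutoff as one possible reading of "face images with higher pose scores":

```python
def select_best(faces, top_k=3):
    """Claims 4 and 6 sketch: among the top_k faces by pose score,
    return the one with the largest face size, where face size is the
    product of width and height pixel values (claim 6)."""
    candidates = sorted(faces, key=lambda f: f[0], reverse=True)[:top_k]
    return max(candidates, key=lambda f: f[1] * f[2])
```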
7. The face snapshot method according to claim 1, wherein the caching the face image into the face image cache queue corresponding to the same snapshotted person comprises:
judging whether a face image cache queue of the same snapshotted person already exists;
if it exists, caching the face image into that face image cache queue; otherwise, creating a face image cache queue for the snapshotted person corresponding to the face image, and caching the face image into the newly created queue.
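The create-if-absent behaviour of claim 7 maps directly onto a keyed dictionary of lists; `dict.setdefault` performs the existence check and queue creation in one step. The queue-per-`person_id` layout is an assumption of this sketch:

```python
def cache_face(queues, person_id, face):
    """Claim 7 sketch: reuse the person's existing face image cache
    queue if one exists, otherwise create it first, then append the
    new face image to it."""
    queues.setdefault(person_id, []).append(face)
```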
8. The face snapshot method according to claim 1, wherein the judging whether the currently snapshotted person has disappeared from the monitoring video comprises:
judging whether the number of consecutive video frames in which the snapshotted person is absent from the monitoring video exceeds a set value.
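The consecutive-absence test of claim 8 is a per-person counter that resets whenever the person is seen again. The threshold value below is an illustrative assumption; the claim leaves the set value open:

```python
class DisappearanceDetector:
    """Claim 8 sketch: a person is considered to have disappeared once
    they have been absent from more than max_missing consecutive
    frames of the monitoring video."""

    def __init__(self, max_missing=5):
        self.max_missing = max_missing
        self.missing = {}   # person id -> consecutive missed frames

    def update(self, person_id, seen_in_frame):
        """Call once per frame per tracked person; returns True when
        the person is deemed to have disappeared."""
        if seen_in_frame:
            self.missing[person_id] = 0   # reset on reappearance
            return False
        self.missing[person_id] = self.missing.get(person_id, 0) + 1
        return self.missing[person_id] > self.max_missing
```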
9. A face snapshot device, comprising a memory and a processor coupled to each other, wherein the memory stores program instructions, and the processor is configured to execute the program instructions to implement the face snapshot method according to any one of claims 1 to 8.
10. A storage device storing program instructions executable by a processor to implement the face snapshot method according to any one of claims 1 to 8.
CN202010495794.9A 2020-06-03 2020-06-03 Face snapshot method, snapshot device and storage device Pending CN111652139A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010495794.9A CN111652139A (en) 2020-06-03 2020-06-03 Face snapshot method, snapshot device and storage device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010495794.9A CN111652139A (en) 2020-06-03 2020-06-03 Face snapshot method, snapshot device and storage device

Publications (1)

Publication Number Publication Date
CN111652139A true CN111652139A (en) 2020-09-11

Family

ID=72347205

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010495794.9A Pending CN111652139A (en) 2020-06-03 2020-06-03 Face snapshot method, snapshot device and storage device

Country Status (1)

Country Link
CN (1) CN111652139A (en)


Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103927520A (en) * 2014-04-14 2014-07-16 中国华戎控股有限公司 Method for detecting human face under backlighting environment
US20170161553A1 (en) * 2015-12-08 2017-06-08 Le Holdings (Beijing) Co., Ltd. Method and electronic device for capturing photo
CN107346426A (en) * 2017-07-10 2017-11-14 深圳市海清视讯科技有限公司 A kind of face information collection method based on video camera recognition of face
CN108629284A (en) * 2017-10-28 2018-10-09 深圳奥瞳科技有限责任公司 The method and device of Real- time Face Tracking and human face posture selection based on embedded vision system
CN110968719A (en) * 2019-11-25 2020-04-07 浙江大华技术股份有限公司 Face clustering method and device
CN111161206A (en) * 2018-11-07 2020-05-15 杭州海康威视数字技术股份有限公司 Image capturing method, monitoring camera and monitoring system


Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113297423A (en) * 2021-05-24 2021-08-24 深圳市优必选科技股份有限公司 Pushing method, pushing device and electronic equipment
WO2022247118A1 (en) * 2021-05-24 2022-12-01 深圳市优必选科技股份有限公司 Pushing method, pushing apparatus and electronic device
CN117135443A (en) * 2023-02-22 2023-11-28 荣耀终端有限公司 Image snapshot method and electronic equipment

Similar Documents

Publication Publication Date Title
WO2021042682A1 (en) Method, apparatus and system for recognizing transformer substation foreign mattter, and electronic device and storage medium
US8559670B2 (en) Moving object detection detection within a video stream using object texture
US20150339831A1 (en) Multi-mode video event indexing
Wang et al. Adaptive flame detection using randomness testing and robust features
US20080170751A1 (en) Identifying Spurious Regions In A Video Frame
Venetianer et al. Stationary target detection using the objectvideo surveillance system
CN113223046B (en) Method and system for identifying prisoner behaviors
CN107563299B (en) Pedestrian detection method using RecNN to fuse context information
CN111652139A (en) Face snapshot method, snapshot device and storage device
US20180260964A1 (en) System and method for detecting moving object in an image
CN110557628A (en) Method and device for detecting shielding of camera and electronic equipment
Wang et al. Experiential sampling for video surveillance
CN115861915A (en) Fire fighting access monitoring method, fire fighting access monitoring device and storage medium
CN111932596A (en) Method, device and equipment for detecting camera occlusion area and storage medium
CN112990057A (en) Human body posture recognition method and device and electronic equipment
CN110210274A (en) Safety cap detection method, device and computer readable storage medium
CN117315551B (en) Method and computing device for flame alerting
KR101270718B1 (en) Video processing apparatus and method for detecting fire from video
US11348338B2 (en) Methods and systems for crowd motion summarization via tracklet based human localization
CN111008609B (en) Traffic light and lane matching method and device and electronic equipment
WO2020129176A1 (en) Image processing system, image processing method, and image processing program
Gaborski et al. VENUS: A System for Novelty Detection in Video Streams with Learning.
CN113177917B (en) Method, system, equipment and medium for optimizing snap shot image
CN116266358A (en) Target shielding detection and tracking recovery method and device and computer equipment
Zhang et al. Nonparametric on-line background generation for surveillance video

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination