CN110059576A - Picture screening method and apparatus, and electronic device - Google Patents
Picture screening method and apparatus, and electronic device. Download PDF. Info
- Publication number
- CN110059576A CN110059576A CN201910230209.XA CN201910230209A CN110059576A CN 110059576 A CN110059576 A CN 110059576A CN 201910230209 A CN201910230209 A CN 201910230209A CN 110059576 A CN110059576 A CN 110059576A
- Authority
- CN
- China
- Prior art keywords
- picture
- frame
- user
- video
- image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/217—Validation; Performance evaluation; Active pattern learning techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
- G06V20/41—Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- General Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- Life Sciences & Earth Sciences (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- Bioinformatics & Computational Biology (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Artificial Intelligence (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Multimedia (AREA)
- Software Systems (AREA)
- Computational Linguistics (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Human Computer Interaction (AREA)
- Image Analysis (AREA)
Abstract
The disclosure provides a picture screening method and apparatus, and an electronic device. The picture screening method includes: acquiring a video image from an image source, the video image comprising at least one video frame; acquiring the first video frame that includes a face image; generating a first prompt signal that prompts the user to perform a predetermined action; acquiring a picture of the user in response to detecting the predetermined action; and screening a target picture from the first video frame and the picture of the user through a first calculation. By collecting an initial picture of the face and pictures of the face after the predetermined action, and screening out the pictures that meet a standard, the method solves the prior-art technical problem that randomly collected pictures may not meet quality requirements.
Description
Technical Field
The present disclosure relates to the field of image processing, and in particular, to a method and an apparatus for screening pictures, and an electronic device.
Background
With the development of society, the demand for fast and effective automatic identity authentication has become increasingly urgent. Biometric features are intrinsic attributes of a person with strong stability and individual distinctiveness, which makes them an ideal basis for identity verification. Among them, authentication based on facial features is the most natural and direct means: compared with other biometric features it is direct, friendly, and convenient, and is easily accepted by users.
An important application scenario of current identity authentication is financial institutions such as banks. Typically, when a user registers a personal account through a registration program, a personal photo of the user needs to be retained.
Disclosure of Invention
According to one aspect of the present disclosure, the following technical solutions are provided:
a picture screening method comprises the following steps:
acquiring a video image from an image source, wherein the video image comprises at least one frame of video frame;
acquiring a first frame video frame including a face image in the video frame;
generating a first prompt signal, wherein the first prompt signal prompts a user to perform a preset action;
in response to detecting the predetermined action, capturing a picture of the user;
and screening a target picture from the first frame of video frame and the picture of the user through a first calculation.
Further, the acquiring a video image from an image source, the video image comprising at least one frame of video frame, comprises:
acquiring the video image from a video capture device, wherein the video image comprises at least one video frame.
Further, the acquiring a first frame video frame including a face image in the video frame includes:
detecting a face image in the video image;
and when the face appears in the video image, intercepting a first frame video frame in which the face appears.
Further, the generating a first prompt signal, where the first prompt signal prompts a user to perform a predetermined action, includes:
and in response to the detection of the face image, displaying a first prompt signal in the display device, wherein the first prompt signal prompts a user to do a preset action.
Further, the capturing a picture of the user in response to detecting the predetermined action includes:
in response to detecting a predetermined action in the video image, capturing a video frame of the user after the predetermined action as a picture of the user.
Further, the capturing a picture of the user in response to detecting the predetermined action includes:
and acquiring a picture of the user every time the predetermined action is detected within a predetermined time.
Further, the filtering, by the first calculation, a target picture from the first frame of video frame and the picture of the user includes:
inputting the first frame of video frame and the picture of the user into a first calculation model;
and obtaining a first frame of video frame and a picture meeting a first standard in the picture of the user as a target picture through a first calculation model.
Further, the method further comprises:
and responding to the plurality of screened target pictures, and selecting one picture meeting a second standard from the plurality of target pictures as a target picture.
Further, after the filtering, by the first computation, a target picture from the first frame of video frame and the picture of the user, the method further includes:
and if the target picture is not screened out, generating a second prompt signal.
According to another aspect of the present disclosure, the following technical solutions are also provided:
a picture screening apparatus, comprising:
the system comprises a video image acquisition module, a video image acquisition module and a video image processing module, wherein the video image acquisition module is used for acquiring a video image from an image source, and the video image comprises at least one frame of video frame;
the video frame acquisition module is used for acquiring a first frame video frame comprising a face image in the video frame;
the first prompt signal generation module is used for generating a first prompt signal, and the first prompt signal prompts a user to perform a preset action;
the user picture acquisition module is used for acquiring a picture of a user in response to the detection of the preset action;
and the first screening module is used for screening a target picture from the first frame of video frame and the picture of the user through a first calculation.
Further, the video image obtaining module is further configured to:
acquire the video image from a video capture device, wherein the video image comprises at least one video frame.
Further, the video frame acquiring module further includes:
the face detection module is used for detecting a face image in the video image;
the video frame intercepting module is used for intercepting the first frame video frame in which a human face appears when the human face appears in the video image.
further, the first prompt signal generating module is further configured to:
in response to detecting the face image, displaying a first prompt signal in a display device, wherein the first prompt signal prompts a user for a predetermined action.
Further, the user picture collecting module further includes:
a video frame capture first sub-module for capturing a video frame of the user following the predetermined action as a picture of the user in response to the predetermined action being detected in the video image.
Further, the user picture collecting module further includes:
and the video frame acquisition second sub-module is used for acquiring pictures of the user every time the preset action is detected within a preset time.
Further, the first screening module further includes:
the input module is used for inputting the first frame of video frame and the picture of the user into a first calculation model;
and the first screening submodule is used for obtaining a first frame of video frame and a picture which meets a first standard in the pictures of the user as a target picture through a first calculation model.
Further, the apparatus further comprises:
and the second screening module is used for responding to screening of a plurality of target pictures and selecting one picture meeting a second standard from the plurality of target pictures as the target picture.
Further, the apparatus further comprises:
and the second prompt signal generation module is used for generating a second prompt signal if the target picture is not screened out.
According to still another aspect of the present disclosure, there is also provided the following technical solution:
an electronic device, comprising: a memory for storing non-transitory computer readable instructions; and the processor is used for executing the computer readable instructions, so that the processor realizes the steps in any picture screening method when executing.
According to still another aspect of the present disclosure, there is also provided the following technical solution:
a computer-readable storage medium for storing non-transitory computer-readable instructions, which, when executed by a computer, cause the computer to perform the steps of any of the above-mentioned picture screening methods.
The disclosure discloses a picture screening method and device and electronic equipment. The picture screening method comprises the following steps: acquiring a video image from an image source, wherein the video image comprises at least one frame of video frame; acquiring a first frame video frame including a face image in the video frame; generating a first prompt signal, wherein the first prompt signal prompts a user to perform a preset action; acquiring a picture of a user in response to detecting the predetermined action; and screening a target picture from the first frame of video frame and the picture of the user through a first calculation. According to the picture screening method, the initial picture of the face and the picture of the face after the preset action are collected, and the picture meeting the standard is screened out on the picture, so that the technical problem that the quality of the randomly collected picture possibly does not meet the requirement in the prior art is solved.
The foregoing is a summary of the present disclosure. For a clear understanding of its technical means, note that the present disclosure may be embodied in other specific forms without departing from its spirit or essential attributes.
Drawings
Fig. 1 is a schematic flow chart of a method for screening pictures according to an embodiment of the present disclosure;
FIG. 2 is a schematic diagram of face keypoints, according to one embodiment of the present disclosure;
fig. 3 is a schematic structural diagram of a picture screening apparatus according to an embodiment of the present disclosure;
fig. 4 is a schematic structural diagram of an electronic device provided according to an embodiment of the present disclosure.
Detailed Description
The embodiments of the present disclosure are described below with specific examples, and other advantages and effects of the present disclosure will be readily apparent to those skilled in the art from the disclosure in the specification. It is to be understood that the described embodiments are merely illustrative of some, and not restrictive, of the embodiments of the disclosure. The disclosure may be embodied or carried out in various other specific embodiments, and various modifications and changes may be made in the details within the description without departing from the spirit of the disclosure. It is to be noted that the features in the following embodiments and examples may be combined with each other without conflict. All other embodiments, which can be derived by a person skilled in the art from the embodiments disclosed herein without making any creative effort, shall fall within the protection scope of the present disclosure.
It is noted that various aspects of the embodiments are described below within the scope of the appended claims. It should be apparent that the aspects described herein may be embodied in a wide variety of forms and that any specific structure and/or function described herein is merely illustrative. Based on the disclosure, one skilled in the art should appreciate that one aspect described herein may be implemented independently of any other aspects and that two or more of these aspects may be combined in various ways. For example, an apparatus may be implemented and/or a method practiced using any number of the aspects set forth herein. Additionally, such an apparatus may be implemented and/or such a method may be practiced using other structure and/or functionality in addition to one or more of the aspects set forth herein.
It should be noted that the drawings provided in the following embodiments are only for illustrating the basic idea of the present disclosure, and the drawings only show the components related to the present disclosure rather than the number, shape and size of the components in actual implementation, and the type, amount and ratio of the components in actual implementation may be changed arbitrarily, and the layout of the components may be more complicated.
In addition, in the following description, specific details are provided to facilitate a thorough understanding of the examples. However, it will be understood by those skilled in the art that the aspects may be practiced without these specific details.
The embodiment of the disclosure provides a picture screening method. The image screening method provided by this embodiment may be executed by a computing device, where the computing device may be implemented as software, or implemented as a combination of software and hardware, and the computing device may be integrally disposed in a server, a terminal device, or the like. As shown in fig. 1, the method for screening pictures mainly includes the following steps S101 to S105. Wherein:
step S101: acquiring a video image from an image source, wherein the video image comprises at least one frame of video frame;
in the present disclosure, the image source is a local storage space or a network storage space, and acquiring a video image from the image source includes acquiring it from either of the two. Preferably, the storage address of the video image is obtained first, and the video image is then fetched from that address. The video image includes at least one image frame and may be a video or a picture with a dynamic effect: any image with multiple frames may serve as the video image in the present disclosure. The local storage space or the network storage space may typically be a cache of an image sensor, an image relay memory, or the like.
In the present disclosure, the image source may also be an image sensor, and acquiring the video image from the image source then includes capturing the video image from the image sensor. An image sensor refers to any device capable of acquiring images; typical image sensors are video cameras, still cameras, and the like. In this embodiment, the image sensor may be a camera on a mobile terminal, such as the front-facing or rear-facing camera on a smartphone, and the video image acquired by the camera may be displayed directly on the phone's display screen.
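The frame acquisition described above can be sketched as a small loop over a capture source. This is an illustrative sketch, not the patent's code; `FakeCapture` is a hypothetical stand-in that mimics the `read()` interface of a typical camera API (for example, OpenCV's `VideoCapture`) so the sketch runs without hardware:

```python
# Illustrative sketch only (not from the patent): pulling frames from an
# image source that exposes a VideoCapture-style interface, where read()
# returns a (success, frame) pair.

def read_frames(capture, max_frames=None):
    """Yield frames from `capture` until it is exhausted or the cap is hit."""
    count = 0
    while max_frames is None or count < max_frames:
        ok, frame = capture.read()
        if not ok:
            break
        yield frame
        count += 1

class FakeCapture:
    """Hypothetical stand-in for a camera/file source; serves fixed frames."""
    def __init__(self, frames):
        self._frames = list(frames)

    def read(self):
        if self._frames:
            return True, self._frames.pop(0)
        return False, None
```

A real deployment would replace `FakeCapture` with the actual capture device; the consuming loop stays the same.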
Step S102: acquiring a first frame video frame including a face image in the video frame;
in the disclosure, the acquiring a first frame video frame including a face image in the video frame includes detecting the face image in the video image; and when the face appears in the video image, intercepting a first frame video frame in which the face appears.
In the above steps, the face image in the video image first needs to be detected. Face detection is the process of, given an arbitrary image or a group of image sequences, searching it with a certain strategy to determine the positions and areas of all faces: whether faces exist, and, if so, their number and spatial distribution. Conventional face detection methods can generally be classified into four types:
- (1) Knowledge-based methods, which encode typical faces into a rule base and locate a face through the relationships among facial features;
- (2) Feature-invariant methods, which find features that remain stable under changes in pose, viewing angle, or illumination, and then use those features to determine a face;
- (3) Template matching methods, which store several standard face patterns describing the whole face and the facial features separately, and then compute the correlation between an input image and the stored patterns for detection;
- (4) Appearance-based methods, which are the inverse of template matching: models are learned from a set of training images and then used for detection.
Optionally, one implementation of method (4) may be used to describe the face detection process. First, features must be extracted to complete the modeling; in this embodiment, Haar features are used as the key features for judging a face. Haar features are simple rectangular features that are fast to extract: the feature template used to compute a typical Haar feature is composed of two or more congruent rectangles, containing both black and white rectangles, combined in a simple arrangement. The AdaBoost algorithm is then used to find, among the large number of Haar features, the subset that plays a key role, and these features are used to build an effective classifier that detects faces in the image. During face detection, a number of face key points can also be detected; typically, 106 key points may be used for face recognition. Optionally, a CNN model for face detection may instead be trained with a deep learning method, and a video frame of the video image input into the CNN model to obtain a result indicating whether the image contains a face. It should be understood that virtually any face detection method may be applied to the present disclosure, particularly methods with fast detection speed and low computational complexity; the methods in the foregoing embodiment serve only as examples and do not limit the present disclosure.
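The Haar-feature arithmetic described above is fast because of the integral image (summed-area table), which turns any rectangle sum into four table lookups. A minimal sketch of that mechanism, assuming a simple horizontal two-rectangle feature; this illustrates the arithmetic only, not the patent's implementation or a full cascade:

```python
import numpy as np

# Sketch of the Haar-feature arithmetic (illustrative, not the patent's
# code): an integral image makes any rectangle sum an O(1) lookup, and a
# two-rectangle Haar feature is the difference of two adjacent sums.

def integral_image(img):
    """Summed-area table with a zero row/column prepended for clean lookups."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.int64)
    ii[1:, 1:] = np.cumsum(np.cumsum(img, axis=0), axis=1)
    return ii

def rect_sum(ii, top, left, height, width):
    """Sum of pixels in a rectangle via four table lookups."""
    return (ii[top + height, left + width] - ii[top, left + width]
            - ii[top + height, left] + ii[top, left])

def haar_two_rect(ii, top, left, height, width):
    """Horizontal two-rectangle feature: left half minus right half."""
    half = width // 2
    return (rect_sum(ii, top, left, height, half)
            - rect_sum(ii, top, left + half, height, half))
```

In a real detector, thousands of such features are evaluated and AdaBoost selects the discriminative ones; the integral image keeps each evaluation constant-time regardless of rectangle size.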
When a face is detected, the first video frame containing the face is stored as one of the candidate pictures for the target picture. This first video frame is stored at a preset storage location so that it can be retrieved in a subsequent step.
Step S103: generating a first prompt signal, wherein the first prompt signal prompts a user to perform a preset action;
in this disclosure, generating a first prompt signal that prompts a user to perform a predetermined action includes: in response to detecting the face image, displaying the first prompt signal on the display device, where the first prompt signal prompts the user to make a predetermined action. In one example, when a facial image of the user is detected, a prompt message is displayed on a display device such as a display screen. The prompt message may include a piece of text and an animation: the text prompts the user to make the predetermined action, and the animation shows the user the correct way to perform it. The predetermined action may be blinking, smiling, mouth opening, and so on. For example, if the predetermined action is blinking, then when a face is detected in the video image, "please blink" is displayed on the screen together with a blinking animation to guide the user to blink accordingly.
Step S104: acquiring a picture of a user in response to detecting the predetermined action;
in the present disclosure, capturing a picture of the user in response to detecting the predetermined action includes: in response to detecting the predetermined action in the video image, capturing a video frame of the user after the predetermined action as a picture of the user. Optionally, after the predetermined action is detected, the first video frame following it is collected as the picture of the user. Optionally, n video frames following the predetermined action are acquired as pictures of the user, where n is an integer greater than 1; the acquisition may follow a certain frequency, such as once every 2 frames or once every 10 ms, which is not elaborated further here.
In this disclosure, capturing a picture of the user in response to detecting the predetermined action further includes: acquiring a picture of the user every time the predetermined action is detected within a predetermined time. In this embodiment, a time period is preset, and within it a picture of the user is acquired whenever the user is detected performing the predetermined action; the specific acquisition process may follow that of the above embodiment. Typically, with a 10 s window, if the user blinks 3 times within the 10 s and each blink causes the first video frame after the blink to be captured as a picture of the user, a total of 3 pictures of the user are collected.
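The timed capture loop above can be sketched as follows. All names are illustrative rather than the patent's; the event stream stands in for per-frame action detection, and, for simplicity, the frame at the moment of detection is captured rather than the one after it:

```python
# Illustrative sketch of the timed capture loop: within a fixed window,
# every detected action causes a frame to be kept as a picture of the user.

def collect_action_frames(events, window):
    """events: iterable of (timestamp, action_detected, frame) tuples.

    Returns the frames captured at each detected action whose timestamp
    falls inside the window.
    """
    captured = []
    for t, action_detected, frame in events:
        if t > window:          # predetermined time elapsed; stop collecting
            break
        if action_detected:
            captured.append(frame)
    return captured
```

Mirroring the text's example: three blinks inside a 10 s window yield three captured pictures, while actions after the window are ignored.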
Step S105: and screening a target picture from the first frame of video frame and the picture of the user through a first calculation.
In this disclosure, screening a target picture from the first frame of video frame and the picture of the user through a first calculation includes: inputting the first frame of video frame and the picture of the user into a first calculation model; and obtaining, through the first calculation model, the pictures among them that meet a first standard as target pictures. Optionally, the first calculation model is a convolution calculation model: the first frame of video frame and the picture of the user are converted into gray-scale images, each gray-scale image is convolved with a first convolution kernel to obtain a first convolution image, the variance of the first convolution image is calculated, and the pictures whose convolution images have a variance larger than a first threshold are selected as target pictures. (The original gives a worked example here, with a specific kernel, the gray-scale map of one candidate picture, the resulting convolved matrix, and its variance; these matrices did not survive extraction and are not reproduced.) In that example, with a first threshold of 200, the picture is judged to be a blurred image and discarded; only pictures whose convolved images have a variance above the first threshold are retained.
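Since the patent's actual kernel is not recoverable above, the variance-based sharpness screen can be illustrated with a common stand-in, the 3x3 Laplacian kernel: it responds strongly to edges and weakly to smooth (blurred) regions, so blurred pictures yield a low-variance convolution image. A hedged sketch under that assumption:

```python
import numpy as np

# Sketch of the variance-based sharpness screen. The Laplacian kernel is
# an assumed stand-in for the patent's (unreproduced) first convolution
# kernel; the threshold of 200 is the patent's example value.

LAPLACIAN = np.array([[0,  1, 0],
                      [1, -4, 1],
                      [0,  1, 0]], dtype=np.float64)

def convolve2d_valid(img, kernel):
    """Plain 'valid' 2-D convolution (no padding), via sliding windows."""
    kh, kw = kernel.shape
    out_h = img.shape[0] - kh + 1
    out_w = img.shape[1] - kw + 1
    out = np.empty((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def is_sharp(gray, threshold=200.0):
    """Keep a picture when the variance of its convolved image exceeds
    the first threshold; otherwise it is judged blurred and discarded."""
    conv = convolve2d_valid(gray.astype(np.float64), LAPLACIAN)
    return conv.var() > threshold
```

A flat (featureless) image produces a zero-variance convolution image and is discarded, while a high-contrast image passes easily.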
Alternatively, the first calculation model may be a deep learning model trained in advance as a binary classifier that sorts every picture passing through it into two types, blurred and clear. Specifically, the pictures in a training set are first labeled by type and then fed into the deep learning model; the model's output can be normalized to the range 0 to 1 with a sigmoid function, and a threshold set on it. When the sigmoid output is larger than 0.5, the picture is considered a blurred image; when it is smaller than or equal to 0.5, the picture is considered a clear image. Clear images are retained and blurred images are discarded.
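The threshold rule of this alternative can be sketched independently of the deep model itself. `score_fn` is a hypothetical stand-in for the trained model's raw output on a picture; per the text, sigmoid outputs above 0.5 mean blurred and are discarded:

```python
import math

# Sketch of the decision rule only (the deep model is not reproduced):
# a raw model score is squashed with a sigmoid and thresholded at 0.5.

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def keep_clear(pictures, score_fn, threshold=0.5):
    """Return the pictures whose sigmoid-normalized score is at or below
    the threshold, i.e. the ones classified as clear."""
    return [p for p in pictures if sigmoid(score_fn(p)) <= threshold]
```

In practice `score_fn` would wrap the trained network's forward pass; the filtering logic around it stays this simple.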
In the application scenario of the present disclosure, only the face image of the user needs to be clear. To improve the accuracy of the judgment, the face region may therefore be segmented first, and the segmented face image either convolved with the convolution kernel above or input into the deep learning model to judge the sharpness of the image.
In an embodiment, step S105 may screen out a plurality of target pictures, that is, under the first criterion there may be several target pictures that all meet it. In that case, the first criterion may be tightened to a second criterion, the target pictures meeting the first criterion further screened with the second criterion, and so on until only one target picture meeting the criteria remains. Alternatively, the target pictures meeting the first or second criterion may be displayed on the display device, and one of them selected through a received selection signal. The second criterion may also be of another kind, for example that certain parts of the face must not be occluded (hair must not cover the forehead, and so on) and that the face must be straight rather than tilted; for each such criterion, a trained deep learning model may be used to perform the corresponding image screening.
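The progressive tightening of criteria can be sketched as a small filter cascade. The predicates are illustrative, not the patent's actual checks; the sketch also makes one design choice explicit: a criterion that would eliminate every remaining candidate is skipped, so the cascade never ends with nothing to select from:

```python
# Illustrative sketch of progressive criterion tightening: apply each
# criterion in turn, stopping once a single candidate survives.

def screen_until_one(candidates, criteria):
    """criteria: list of predicates ordered from loose to strict.

    Returns the surviving candidates once tightening stops (ideally one).
    A criterion that would reject everything is skipped, so the result
    is never empty when the input is non-empty.
    """
    survivors = list(candidates)
    for criterion in criteria:
        if len(survivors) <= 1:
            break
        passed = [c for c in survivors if criterion(c)]
        if passed:              # never tighten down to zero candidates
            survivors = passed
    return survivors
```

With real checks, the predicates would wrap the trained screening models mentioned above (occlusion check, pose check, and so on) in order of increasing strictness.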
As shown in fig. 2, after step S105, the method may further include:
step S201: and if the target picture is not screened out, generating a second prompt signal.
If none of the collected pictures meets the first standard, a second prompt signal is generated on the display device to prompt the user to collect pictures again. Typically, a countdown is displayed, for example indicating that re-acquisition starts after 10 s; when the countdown ends, the method returns to step S101 and is executed again.
The disclosure discloses a picture screening method and device and electronic equipment. The picture screening method comprises the following steps: acquiring a video image from an image source, wherein the video image comprises at least one frame of video frame; acquiring a first frame video frame including a face image in the video frame; generating a first prompt signal, wherein the first prompt signal prompts a user to perform a preset action; acquiring a picture of a user in response to detecting the predetermined action; and screening a target picture from the first frame of video frame and the picture of the user through a first calculation. According to the picture screening method, the initial picture of the face and the picture of the face after the preset action are collected, and the picture meeting the standard is screened out on the picture, so that the technical problem that the quality of the randomly collected picture possibly does not meet the requirement in the prior art is solved.
Although the steps in the above method embodiments are described in the above order, those skilled in the art will appreciate that the steps of the embodiments of the present disclosure need not be performed in that order and may also be performed in other orders, such as reversed, in parallel, or interleaved. Moreover, on the basis of the above steps, those skilled in the art may add further steps; these obvious variations and equivalents also fall within the protection scope of the present disclosure and are not repeated here.
For convenience of description, only the parts relevant to the embodiments of the present disclosure are shown; for specific technical details that are not disclosed here, please refer to the method embodiments of the present disclosure.
The embodiment of the disclosure provides a screening device for pictures. The device can execute the steps described in the above picture screening method embodiments. As shown in fig. 3, the apparatus 300 mainly includes: a video image acquisition module 301, a video frame acquisition module 302, a first prompt signal generation module 303, a user picture acquisition module 304 and a screening module 305. Wherein,
a video image obtaining module 301, configured to obtain a video image from an image source, where the video image includes at least one frame of video frame;
a video frame acquiring module 302, configured to acquire a first frame video frame including a face image in the video frame;
a first prompt signal generating module 303, configured to generate a first prompt signal, where the first prompt signal prompts a user to perform a predetermined action;
a user picture acquisition module 304, configured to acquire a picture of a user in response to detecting the predetermined action;
a screening module 305, configured to screen a target picture from the first frame video frame and the picture of the user through a first calculation.
Further, the video image obtaining module 301 is further configured to:
the method comprises the steps of obtaining a video image from a video acquisition device, wherein the video image comprises at least one video frame.
Further, the video frame obtaining module 302 further includes:
a face detection module, configured to detect a face image in the video image; and
a video frame intercepting module, configured to intercept, when a human face appears in the video image, the first frame video frame in which the face appears.
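The face detection and first-frame interception steps can be sketched as a simple scan over the frame stream. The detector here is a stand-in callable (an assumption), since the disclosure does not fix a particular detection algorithm.

```python
def first_face_frame(frames, has_face):
    """Scan the video frames in order and return the first frame in which
    the detector reports a face, mirroring the 'intercept the first frame
    video frame in which the face appears' step. `has_face` stands in for
    a real face detector (an assumption for illustration)."""
    for frame in frames:
        if has_face(frame):
            return frame
    return None  # no face appeared in any frame

# Toy frame stream and stand-in detector (illustrative only).
frames = ["frame0_empty", "frame1_face", "frame2_face"]
first = first_face_frame(frames, lambda f: "face" in f)
```

In practice the frame stream would come from a video acquisition device and the detector from a trained model; both are abstracted here so the control flow stays visible.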
further, the first prompt signal generating module 303 is further configured to:
in response to detecting the face image, display a first prompt signal on a display device, wherein the first prompt signal prompts the user to perform the predetermined action.
Further, the user picture capturing module 304 further includes:
a video frame capture first sub-module for capturing a video frame of the user following the predetermined action as a picture of the user in response to the predetermined action being detected in the video image.
Further, the user picture capturing module 304 further includes:
a second video frame acquisition sub-module, configured to capture a picture of the user each time the predetermined action is detected within a predetermined time.
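The time-windowed capture performed by this second sub-module can be sketched as follows; the event-tuple representation of the detection stream and the capture callable are assumptions made for illustration.

```python
def pictures_in_window(events, window_s, capture):
    """Capture a picture each time the predetermined action is detected
    within the predetermined time window. `events` is a list of
    (timestamp_s, is_action) tuples -- a simplified stand-in for a real
    detection stream (an assumption)."""
    pictures = []
    for t, is_action in events:
        if t > window_s:            # predetermined time has elapsed
            break
        if is_action:               # predetermined action detected
            pictures.append(capture(t))
    return pictures

# Toy detection stream: actions at 1 s and 4 s fall inside a 10 s window;
# the action at 12 s falls outside it.
events = [(1.0, True), (3.0, False), (4.0, True), (12.0, True)]
pics = pictures_in_window(events, 10.0, lambda t: f"picture@{t}s")
```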
Further, the screening module 305 further includes:
an input module, configured to input the first frame video frame and the picture of the user into a first calculation model; and
a first screening sub-module, configured to obtain, through the first calculation model, a picture meeting a first criterion from among the first frame video frame and the pictures of the user, as the target picture.
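A minimal sketch of the first-calculation step follows, with a generic scoring callable standing in for the trained first calculation model; the disclosure does not specify the model's internals, so the score function and threshold are assumptions.

```python
def screen_by_model(first_frame, user_pictures, score, threshold):
    """Run every candidate (the first frame video frame plus the captured
    user pictures) through a scoring model and keep those meeting the
    first criterion. `score` stands in for the trained first calculation
    model (an assumption); `threshold` encodes the first criterion."""
    candidates = [first_frame] + list(user_pictures)
    return [p for p in candidates if score(p) >= threshold]

# Toy scores assigned to opaque picture handles (illustrative only).
scores = {"first_frame": 0.9, "user_pic_1": 0.4, "user_pic_2": 0.7}
targets = screen_by_model("first_frame", ["user_pic_1", "user_pic_2"],
                          scores.get, 0.6)
```

Note that this may return several targets, which is exactly the case the second criterion (second screening module) handles.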
Further, the apparatus 300 further comprises:
a second screening module 306, configured to select, in response to a plurality of target pictures being screened out, one picture meeting a second criterion from the plurality of target pictures as the target picture.
Further, the apparatus 300 further comprises:
a second prompt signal generation module, configured to generate a second prompt signal if no target picture is screened out.
The apparatus shown in fig. 3 can perform the methods of the embodiments shown in fig. 1 and fig. 2; for details not described in this embodiment, as well as the implementation process and technical effect of the technical solution, refer to the description of the embodiments shown in fig. 1 and fig. 2, which is not repeated here.
Referring now to FIG. 4, a block diagram of an electronic device 400 suitable for use in implementing embodiments of the present disclosure is shown. The electronic devices in the embodiments of the present disclosure may include, but are not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), in-vehicle terminals (e.g., car navigation terminals), and the like, and fixed terminals such as digital TVs, desktop computers, and the like. The electronic device shown in fig. 4 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 4, electronic device 400 may include a processing device (e.g., a central processing unit, a graphics processor, etc.) 401 that may perform various appropriate actions and processes in accordance with a program stored in a read-only memory (ROM) 402 or a program loaded from a storage device 408 into a random access memory (RAM) 403. The RAM 403 also stores various programs and data necessary for the operation of the electronic device 400. The processing device 401, the ROM 402, and the RAM 403 are connected to each other via a bus 404. An input/output (I/O) interface 405 is also connected to the bus 404.
Generally, the following devices may be connected to the I/O interface 405: input devices 406 including, for example, a touch screen, touch pad, keyboard, mouse, image sensor, microphone, accelerometer, gyroscope, etc.; an output device 407 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage 408 including, for example, tape, hard disk, etc.; and a communication device 409. The communication means 409 may allow the electronic device 400 to communicate wirelessly or by wire with other devices to exchange data. While fig. 4 illustrates an electronic device 400 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication device 409, or from the storage device 408, or from the ROM 402. The computer program performs the above-described functions defined in the methods of the embodiments of the present disclosure when executed by the processing device 401.
It should be noted that the computer readable medium in the present disclosure can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
The computer readable medium may be embodied in the electronic device; or may exist separately without being assembled into the electronic device.
The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: acquiring a video image from an image source, wherein the video image comprises at least one frame of video frame; acquiring a first frame video frame including a face image in the video frame; generating a first prompt signal, wherein the first prompt signal prompts a user to perform a preset action; acquiring a picture of a user in response to detecting the predetermined action; and screening a target picture from the first frame of video frame and the picture of the user through a first calculation.
Computer program code for carrying out operations of the present disclosure may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java, Smalltalk, or C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the latter case, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented by software or hardware. Where the name of an element does not in some cases constitute a limitation on the element itself.
The foregoing description is only exemplary of the preferred embodiments of the disclosure and is illustrative of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the disclosure herein is not limited to the particular combination of features described above, but also encompasses other embodiments in which any combination of the features described above or their equivalents does not depart from the spirit of the disclosure. For example, the above features and (but not limited to) the features disclosed in this disclosure having similar functions are replaced with each other to form the technical solution.
Claims (12)
1. A picture screening method comprises the following steps:
acquiring a video image from an image source, wherein the video image comprises at least one frame of video frame;
acquiring a first frame video frame including a face image in the video frame;
generating a first prompt signal, wherein the first prompt signal prompts a user to perform a preset action;
in response to detecting the predetermined action, capturing a picture of the user;
and screening a target picture from the first frame of video frame and the picture of the user through a first calculation.
2. The method for screening pictures according to claim 1, wherein said obtaining a video image from an image source, said video image comprising at least one frame of video, comprises:
the method comprises the steps of obtaining a video image from a video acquisition device, wherein the video image comprises at least one video frame.
3. The method for screening pictures according to claim 1, wherein said obtaining a first video frame including a face image in said video frame comprises:
detecting a face image in the video image;
and when the face appears in the video image, intercepting a first frame video frame in which the face appears.
4. The method for screening pictures according to claim 1, wherein the generating a first prompt signal for prompting a user to perform a predetermined action comprises:
and in response to the detection of the face image, displaying a first prompt signal in the display device, wherein the first prompt signal prompts a user to do a preset action.
5. The method for screening pictures according to claim 1, wherein the capturing a picture of the user in response to detecting the predetermined action comprises:
in response to detecting a predetermined action in the video image, capturing a video frame of the user after the predetermined action as a picture of the user.
6. The method for screening pictures according to claim 1, wherein said capturing a picture of a user in response to detecting said predetermined action comprises:
and acquiring a picture of the user every time the predetermined action is detected within a predetermined time.
7. The method for filtering pictures according to claim 1, wherein the filtering a target picture from the first frame of video frame and the picture of the user by a first calculation comprises:
inputting the first frame of video frame and the picture of the user into a first calculation model;
and obtaining a first frame of video frame and a picture meeting a first standard in the picture of the user as a target picture through a first calculation model.
8. The method for screening pictures according to claim 7, wherein the method further comprises:
and responding to the plurality of screened target pictures, and selecting one picture meeting a second standard from the plurality of target pictures as a target picture.
9. The picture screening method according to claim 1, further comprising, after the screening, by the first calculation, a target picture from the first frame of video frame and the picture of the user:
and if the target picture is not screened out, generating a second prompt signal.
10. A picture screening apparatus, comprising:
the system comprises a video image acquisition module, a video image acquisition module and a video image processing module, wherein the video image acquisition module is used for acquiring a video image from an image source, and the video image comprises at least one frame of video frame;
the video frame acquisition module is used for acquiring a first frame video frame comprising a face image in the video frame;
the first prompt signal generation module is used for generating a first prompt signal, and the first prompt signal prompts a user to perform a preset action;
the user picture acquisition module is used for acquiring a picture of a user in response to the detection of the preset action;
and the screening module is used for screening a target picture from the first frame of video frame and the picture of the user through first calculation.
11. An electronic device, comprising:
a memory for storing computer readable instructions; and
a processor for executing the computer readable instructions, so that the processor when running implements the method for screening pictures according to any one of claims 1 to 9.
12. A non-transitory computer-readable storage medium storing computer-readable instructions which, when executed by a computer, cause the computer to perform the method of filtering a picture according to any one of claims 1 to 9.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910230209.XA CN110059576A (en) | 2019-03-26 | 2019-03-26 | Screening technique, device and the electronic equipment of picture |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910230209.XA CN110059576A (en) | 2019-03-26 | 2019-03-26 | Screening technique, device and the electronic equipment of picture |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110059576A true CN110059576A (en) | 2019-07-26 |
Family
ID=67315959
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910230209.XA Pending CN110059576A (en) | 2019-03-26 | 2019-03-26 | Screening technique, device and the electronic equipment of picture |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110059576A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112382390A (en) * | 2020-11-09 | 2021-02-19 | 北京沃东天骏信息技术有限公司 | Method, system and storage medium for generating health assessment report |
CN112906435A (en) * | 2019-12-03 | 2021-06-04 | 杭州海康威视数字技术股份有限公司 | Video frame optimization method and device |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103942525A (en) * | 2013-12-27 | 2014-07-23 | 高新兴科技集团股份有限公司 | Real-time face optimal selection method based on video sequence |
US20140355883A1 (en) * | 2013-06-03 | 2014-12-04 | Alipay.com Co., Ltd. | Method and system for recognizing information |
CN105138954A (en) * | 2015-07-12 | 2015-12-09 | 上海微桥电子科技有限公司 | Image automatic screening, query and identification system |
CN106454112A (en) * | 2016-11-21 | 2017-02-22 | 上海斐讯数据通信技术有限公司 | Photographing method and system |
CN107578000A (en) * | 2017-08-25 | 2018-01-12 | 百度在线网络技术(北京)有限公司 | For handling the method and device of image |
CN107644219A (en) * | 2017-10-10 | 2018-01-30 | 广东欧珀移动通信有限公司 | Face registration method and related product |
CN107944378A (en) * | 2017-11-20 | 2018-04-20 | 广东金赋科技股份有限公司 | The personal identification method and self-help serving system of a kind of Self-Service |
CN108986094A (en) * | 2018-07-20 | 2018-12-11 | 南京开为网络科技有限公司 | For the recognition of face data automatic update method in training image library |
- 2019-03-26: CN application CN201910230209.XA filed; publication CN110059576A (en), status: active, pending
Patent Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20140355883A1 (en) * | 2013-06-03 | 2014-12-04 | Alipay.com Co., Ltd. | Method and system for recognizing information |
CN103942525A (en) * | 2013-12-27 | 2014-07-23 | 高新兴科技集团股份有限公司 | Real-time face optimal selection method based on video sequence |
CN105138954A (en) * | 2015-07-12 | 2015-12-09 | 上海微桥电子科技有限公司 | Image automatic screening, query and identification system |
CN106454112A (en) * | 2016-11-21 | 2017-02-22 | 上海斐讯数据通信技术有限公司 | Photographing method and system |
CN107578000A (en) * | 2017-08-25 | 2018-01-12 | 百度在线网络技术(北京)有限公司 | For handling the method and device of image |
CN107644219A (en) * | 2017-10-10 | 2018-01-30 | 广东欧珀移动通信有限公司 | Face registration method and related product |
CN107944378A (en) * | 2017-11-20 | 2018-04-20 | 广东金赋科技股份有限公司 | The personal identification method and self-help serving system of a kind of Self-Service |
CN108986094A (en) * | 2018-07-20 | 2018-12-11 | 南京开为网络科技有限公司 | For the recognition of face data automatic update method in training image library |
Non-Patent Citations (2)
Title |
---|
MINJAE KIM ET AL: "Image quality improvement based on inter-frame motion compensation for photoacoustic imaging: A preliminary study", 《2013 IEEE INTERNATIONAL ULTRASONICS SYMPOSIUM (IUS)》 * |
LI Yuelong et al.: "Blurred Face Image Identification Based on Independent Components", Journal of Computer-Aided Design & Computer Graphics *
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112906435A (en) * | 2019-12-03 | 2021-06-04 | 杭州海康威视数字技术股份有限公司 | Video frame optimization method and device |
CN112906435B (en) * | 2019-12-03 | 2024-03-01 | 杭州海康威视数字技术股份有限公司 | Video frame optimization method and device |
CN112382390A (en) * | 2020-11-09 | 2021-02-19 | 北京沃东天骏信息技术有限公司 | Method, system and storage medium for generating health assessment report |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111368685B (en) | Method and device for identifying key points, readable medium and electronic equipment | |
CN111696176B (en) | Image processing method, image processing device, electronic equipment and computer readable medium | |
CN109670444B (en) | Attitude detection model generation method, attitude detection device, attitude detection equipment and attitude detection medium | |
CN110072047B (en) | Image deformation control method and device and hardware device | |
US11367310B2 (en) | Method and apparatus for identity verification, electronic device, computer program, and storage medium | |
CN110062157B (en) | Method and device for rendering image, electronic equipment and computer readable storage medium | |
Vazquez-Fernandez et al. | Built-in face recognition for smart photo sharing in mobile devices | |
CN111292272B (en) | Image processing method, image processing apparatus, image processing medium, and electronic device | |
CN111368944B (en) | Method and device for recognizing copied image and certificate photo and training model and electronic equipment | |
CN111488759A (en) | Image processing method and device for animal face | |
CN110349161B (en) | Image segmentation method, image segmentation device, electronic equipment and storage medium | |
CN115311178A (en) | Image splicing method, device, equipment and medium | |
CN108289176B (en) | Photographing question searching method, question searching device and terminal equipment | |
US20220207917A1 (en) | Facial expression image processing method and apparatus, and electronic device | |
CN110069996A (en) | Headwork recognition methods, device and electronic equipment | |
CN110059576A (en) | Screening technique, device and the electronic equipment of picture | |
US20240095886A1 (en) | Image processing method, image generating method, apparatus, device, and medium | |
CN110349108B (en) | Method, apparatus, electronic device, and storage medium for processing image | |
CN110232417B (en) | Image recognition method and device, computer equipment and computer readable storage medium | |
CN111507143B (en) | Expression image effect generation method and device and electronic equipment | |
CN110222576B (en) | Boxing action recognition method and device and electronic equipment | |
US11810336B2 (en) | Object display method and apparatus, electronic device, and computer readable storage medium | |
CN111507139A (en) | Image effect generation method and device and electronic equipment | |
CN111292247A (en) | Image processing method and device | |
KR20140134844A (en) | Method and device for photographing based on objects |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||