CN109447006A - Image processing method, device, equipment and storage medium - Google Patents

Image processing method, device, equipment and storage medium

Info

Publication number
CN109447006A
Authority
CN
China
Prior art keywords
facial image
image
point
face
boundary point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201811297964.1A
Other languages
Chinese (zh)
Inventor
郭哲
郭一哲
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Megvii Technology Co Ltd
Original Assignee
Beijing Megvii Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Megvii Technology Co Ltd filed Critical Beijing Megvii Technology Co Ltd
Priority to CN201811297964.1A priority Critical patent/CN109447006A/en
Publication of CN109447006A publication Critical patent/CN109447006A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G06V40/164 Detection; Localisation; Normalisation using holistic features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G06V20/41 Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items

Abstract

The image processing method, device, equipment and storage medium provided by the invention belong to the technical field of image processing. The image processing method includes: performing face detection on a facial image of a target object to obtain the face key points corresponding to the facial image; determining whether the face key points corresponding to the facial image meet a preset requirement; and, if so, taking at least one frame of the facial images whose face key points meet the preset requirement as a target image. In this way, frames that are distorted or blurred because of motion are effectively filtered out, a comparatively clear facial image is picked out from the multiple frames of facial images, the quality of the detected pictures is improved, and the accuracy of the algorithms used in the corresponding scenes is improved.

Description

Image processing method, device, equipment and storage medium
Technical field
The present invention relates to the field of image processing, and in particular to an image processing method, device, equipment and storage medium.
Background technique
At present, during face unlocking and face data acquisition, a certain frame in the sequence of consecutive frames may be blurred by the person's motion. Such motion blur lowers the accuracy of recognition, liveness detection and similar algorithms, and thus degrades the user experience in unlocking and other application scenarios that demand high face-recognition accuracy.
Summary of the invention
The image processing method, device, equipment and storage medium provided in the embodiments of the present invention can solve the technical problem in the prior art that algorithm quality declines because frames of a face are blurred by motion.
To achieve the above goals, the technical solutions used in the embodiments of the present invention are as follows:
In a first aspect, an embodiment of the present invention provides an image processing method, comprising: performing face detection on a facial image of a target object to obtain the face key points corresponding to the facial image; determining whether the face key points corresponding to the facial image meet a preset requirement; and, if so, taking at least one frame of the facial images whose face key points meet the preset requirement as a target image.
With reference to the first aspect, an embodiment of the present invention provides a first possible implementation of the first aspect, in which determining whether the face key points corresponding to the facial image meet the preset requirement comprises: determining whether the number of face key points corresponding to each frame of the facial image is greater than a preset quantity and whether the facial image corresponding to the face key points includes a complete facial contour; and, if the number of face key points is greater than the preset quantity and the facial image includes a complete facial contour, determining that the face key points meet the preset requirement.
With reference to the first possible implementation of the first aspect, an embodiment of the present invention provides a second possible implementation of the first aspect, in which determining whether the facial image corresponding to the face key points includes a complete facial contour comprises: scanning the facial image according to a preset rule and finding the first pixel on the facial image, recorded as a first boundary point; determining a second boundary point according to the first boundary point; determining a third boundary point according to the second boundary point; and, if the third boundary point coincides with the first boundary point, determining that the facial image includes a complete facial contour.
With reference to the second possible implementation of the first aspect, an embodiment of the present invention provides a third possible implementation of the first aspect, in which scanning the facial image according to the preset rule and finding the first pixel on the facial image, recorded as the first boundary point, comprises: obtaining the pixel set corresponding to the facial image; determining the matrix corresponding to the pixels according to the resolution of the facial image; and taking the pixel in the first row and first column of the matrix as the first boundary point on the facial image.
With reference to the third possible implementation of the first aspect, an embodiment of the present invention provides a fourth possible implementation of the first aspect, in which determining the second boundary point according to the first boundary point comprises: obtaining a first target pixel adjacent to the first boundary point in a first preset direction; obtaining the position information of the first target pixel in the matrix; and, if the position information meets the preset requirement, determining that the first target pixel is the second boundary point.
With reference to the third possible implementation of the first aspect, an embodiment of the present invention provides a fifth possible implementation of the first aspect, in which determining the third boundary point according to the second boundary point comprises: obtaining a second target pixel adjacent to the second boundary point in a second preset direction; obtaining the position information of the second target pixel in the matrix; and, if the position information meets the preset requirement, determining that the second target pixel is the third boundary point.
With reference to the first aspect, an embodiment of the present invention provides a sixth possible implementation of the first aspect, further comprising: if the face key points do not meet the preset requirement, deleting the facial image.
With reference to any one of the first through sixth possible implementations of the first aspect, an embodiment of the present invention provides a seventh possible implementation of the first aspect, in which taking at least one frame of the facial images whose face key points meet the preset requirement as the target image comprises: storing the facial images whose face key points meet the preset requirement to a storage medium; determining whether the number of frames of facial images stored in the storage medium exceeds a preset threshold; and, if so, taking at least one frame of the facial images in the storage medium as the target image.
With reference to the seventh possible implementation of the first aspect, an embodiment of the present invention provides an eighth possible implementation of the first aspect, in which the preset threshold satisfies: the preset threshold is determined according to the number of blurred pictures generated by the image acquisition device during the period corresponding to its motion.
With reference to the seventh possible implementation of the first aspect, an embodiment of the present invention provides a ninth possible implementation of the first aspect, in which taking at least one frame of the facial images in the storage medium as the target image comprises: obtaining the last frame of the facial images stored in the storage medium in chronological order, and taking that facial image as the target image.
In a second aspect, an embodiment of the present invention provides an image processing apparatus, comprising: a face detection module, configured to perform face detection on a facial image of a target object to obtain the face key points corresponding to the facial image; a processing module, configured to determine whether the face key points corresponding to the facial image meet a preset requirement; and a marking module, configured to, if so, take at least one frame of the facial images whose face key points meet the preset requirement as a target image.
In a third aspect, an embodiment of the present invention provides a terminal device, comprising: a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the steps of the image processing method according to any one of the first aspect.
In a fourth aspect, an embodiment of the present invention provides a storage medium on which instructions are stored; when the instructions are run on a computer, the computer executes the image processing method according to any one of the first aspect.
Compared with the prior art, the embodiments of the present invention bring the following beneficial effects:
The image processing method, device, equipment and storage medium provided in the embodiments of the present invention perform face detection on a facial image of a target object to obtain the face key points corresponding to the facial image; determine whether the face key points corresponding to the facial image meet a preset requirement; and, if so, take at least one frame of the facial images whose face key points meet the preset requirement as a target image. In this way, frames that are distorted or blurred because of motion are effectively filtered out, a comparatively clear facial image is picked out from the multiple frames of facial images, the quality of the detected pictures is improved, and the accuracy of the algorithms used in the corresponding scenes is improved, so that the user's experience with applications in those scenes is significantly improved. The technical problem in the prior art that algorithm quality declines because frames of a face are blurred by motion is thereby effectively overcome.
Other features and advantages of the disclosure will be set forth in the following description; alternatively, some features and advantages can be deduced or unambiguously determined from the description, or learned by implementing the above techniques of the disclosure.
To make the above objects, features and advantages of the present invention clearer and easier to understand, preferred embodiments are described in detail below with reference to the accompanying drawings.
Detailed description of the invention
To explain the technical solutions of the embodiments of the present invention more clearly, the drawings needed in the embodiments are briefly described below. It should be understood that the following drawings show only certain embodiments of the present invention and therefore should not be regarded as limiting its scope; those of ordinary skill in the art can obtain other relevant drawings from these drawings without creative effort.
Fig. 1 is a flowchart of the image processing method provided by the first embodiment of the invention;
Fig. 2 is a functional block diagram of the image processing apparatus provided by the second embodiment of the invention;
Fig. 3 is a schematic diagram of a terminal device provided by the third embodiment of the invention.
Specific embodiment
To make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only a part of the embodiments of the present invention, not all of them. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative effort shall fall within the protection scope of the present invention. The following detailed description of the embodiments of the present invention provided in the accompanying drawings is therefore not intended to limit the scope of the claimed invention, but merely represents selected embodiments of the invention.
Some embodiments of the present invention are described in detail below with reference to the accompanying drawings. In the absence of conflict, the features in the following embodiments can be combined with each other.
First embodiment
As noted above, during face unlocking and face data acquisition, a certain frame of the consecutive frames may be blurred by the person's motion, which causes the accuracy of recognition, liveness detection and similar algorithms to decline. To improve the quality of the detected pictures, and thereby the accuracy of the algorithms in the corresponding scenes, this embodiment first provides an image processing method. It should be noted that the steps shown in the flowchart of the accompanying drawing can be executed in a computer system such as a set of computer-executable instructions, and, although a logical order is shown in the flowchart, in some cases the steps shown or described can be executed in an order different from the one given here. The embodiment is described in detail below.
Referring to Fig. 1, which is the flowchart of the image processing method provided by the embodiment of the present invention, the detailed process shown in Fig. 1 is described below.
Step S101: perform face detection on the facial image of the target object to obtain the face key points corresponding to the facial image.
In the embodiments of the present invention, the facial image is a real-time data frame captured by an image acquisition device (such as a camera).
Optionally, the facial image is a real-time data frame of the target object captured by the image acquisition device (such as a camera) while the target object is in motion.
Optionally, the multiple frames of facial images can be consecutive frames of facial images.
Of course, the multiple frames of facial images can also be frames taken at a certain regular interval, for example every two frames. No specific limitation is imposed here.
Optionally, the face key points are the key points in each frame of the facial image used to determine the cheek contour, the eyebrow regions, the eye regions, the nose region, the mouth region and so on of the face.
In one implementation, step S101 includes: performing face detection on the consecutive frames of facial images of the target object based on Active Shape Models (ASM) to obtain the face key points corresponding to each frame of the facial image.
In practice, face detection can also be performed on the consecutive frames of facial images of the target object in other ways to obtain the face key points corresponding to each frame of the facial image, for example with a neural network or with methods based on local binary features.
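As an illustration only, the following sketch obtains per-frame face key points with dlib's pretrained 68-point landmark model; dlib, the model file name and the frame source are assumptions introduced here, and this regression-tree predictor merely stands in for the ASM or neural-network detectors mentioned above.

```python
import dlib

# Hedged sketch: dlib and the pretrained model file are assumptions, not part of the patent.
detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def detect_face_keypoints(frame_rgb):
    """Return a list of (x, y) face key points for the first detected face, or None."""
    faces = detector(frame_rgb, 1)          # upsample once to catch smaller faces
    if not faces:
        return None
    shape = predictor(frame_rgb, faces[0])  # 68 landmarks: contour, brows, eyes, nose, mouth
    return [(shape.part(i).x, shape.part(i).y) for i in range(shape.num_parts)]
```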
Step S102: determine whether the face key points corresponding to the facial image meet a preset requirement.
In one implementation, step S102 comprises: determining whether the number of face key points corresponding to each frame of the facial image is greater than a preset quantity and whether the facial image corresponding to the face key points includes a complete facial contour; and, if the number of face key points corresponding to each frame of the facial image is greater than the preset quantity and the facial image includes a complete facial contour, determining that the face key points meet the preset requirement.
In this embodiment, whether the facial image corresponding to the face key points includes a complete facial contour can be determined with a contour-following algorithm for closed edges in a binary image.
Optionally, determining whether the facial image corresponding to the face key points includes a complete facial contour comprises: scanning the facial image according to a preset rule and finding the first pixel on the facial image, recorded as a first boundary point; determining a second boundary point according to the first boundary point; determining a third boundary point according to the second boundary point; and, if the third boundary point coincides with the first boundary point, determining that the facial image includes a complete facial contour.
The first boundary point, the second boundary point and the third boundary point are used to characterize the boundary of the facial contour.
Optionally, scanning the facial image according to the preset rule and finding the first pixel on the facial image, recorded as the first boundary point, comprises: obtaining the pixel set corresponding to the facial image; determining the matrix corresponding to the pixels according to the resolution of the facial image; and taking the pixel in the first row and first column of the matrix as the first boundary point on the facial image. For example, if the resolution of the facial image is 320*240, the pixel set corresponding to the facial image forms a matrix with 320 rows and 240 columns, and the first pixel in the first row and first column of the matrix is taken as the first boundary point on the facial image.
As an example, suppose the pixel set corresponding to the facial image is {a1, a2, a3, a4} and the resolution of the facial image is 2*2; the matrix A determined from the pixels according to the resolution of the facial image is then A = [[a1, a2], [a3, a4]], where the pixel in the first row and first column of the matrix is a1, i.e. the first boundary point is a1.
The coordinates of the pixels are two-dimensional coordinates, and the positional relationship of each pixel in the two-dimensional plane is expressed through its two-dimensional coordinates.
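A minimal sketch of the pixel-set-to-matrix step, assuming NumPy and reusing the 2*2 toy values from the example above; the variable names are illustrative only.

```python
import numpy as np

pixels = np.array(["a1", "a2", "a3", "a4"])  # pixel set of the facial image (toy values)
height, width = 2, 2                          # taken from the image resolution
matrix = pixels.reshape(height, width)        # A = [[a1, a2], [a3, a4]]
first_boundary_point = matrix[0, 0]           # pixel in the first row, first column -> "a1"
```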
Optionally, determining the second boundary point according to the first boundary point comprises: obtaining the first target pixel adjacent to the first boundary point in a first preset direction; obtaining the position information of the first target pixel in the matrix; and, if the position information meets the preset requirement, determining that the first target pixel is the second boundary point.
Optionally, the first preset direction can be below the first boundary point, or to the right of the first boundary point.
In actual use, the first preset direction can be chosen according to the numbers of rows and columns of the matrix.
Optionally, if the position of the first target pixel in the matrix is the first or last element of a row of the matrix, the position information is determined to meet the preset requirement. For example, for a 2*3 matrix B, the positions that meet the preset requirement are B(1,1), B(1,3), B(2,1) and B(2,3).
Continuing the above example, suppose the first preset direction is below the first boundary point; the first target pixel found is then a3, whose position in the matrix is A(2,1), the first element of the second row of matrix A. The position information of a3 in the matrix therefore meets the preset requirement, and the first target pixel is determined to be the second boundary point.
Optionally, determining the third boundary point according to the second boundary point comprises: obtaining the second target pixel adjacent to the second boundary point in a second preset direction; obtaining the position information of the second target pixel in the matrix; and, if the position information meets the preset requirement, determining that the second target pixel is the third boundary point.
Optionally, the second preset direction can be to the right of the second boundary point, to the lower right, below, to the lower left, to the left, to the upper left, above, to the upper right, and so on.
In actual use, the second preset direction can be chosen according to the numbers of rows and columns of the matrix. No specific limitation is imposed here.
Continuing the above example, suppose the pixel set corresponding to the facial image is {a1, a2, a3, a4} and the resolution of the facial image is 2*2, so that the matrix A determined from the pixels according to the resolution is A = [[a1, a2], [a3, a4]]; the pixel in the first row and first column of the matrix is a1, i.e. the first boundary point is a1. Suppose the first preset direction is below the first boundary point; the first target pixel found is then a3, whose position in the matrix is A(2,1), the first element of the second row of matrix A. The position information of a3 therefore meets the preset requirement, and a3 is determined to be the second boundary point. Next, the second target pixel adjacent to the second boundary point in the second preset direction is obtained; suppose the second preset direction is to the right, so the second target pixel is a4. The position of a4 in the matrix is A(2,2), the last element of the second row, so A(2,2) meets the preset requirement and a4 is taken as the third boundary point. It is then judged whether the third boundary point coincides with the first boundary point, i.e. the position information of the third boundary point is compared with that of the first boundary point, and they coincide if the positions are identical. Here the position of the third boundary point is A(2,2) and the position of the first boundary point is A(1,1); they are not equal, so the third boundary point does not coincide with the first boundary point. Taking the third boundary point as the new starting point, the third target pixel adjacent to the third boundary point in a third preset direction is obtained; suppose the third preset direction is above, so the third target pixel obtained is a2. The position of a2 in the matrix is A(1,2), the last element of the first row, so A(1,2) meets the preset requirement and a2 is determined to be the fourth boundary point. The fourth boundary point does not coincide with the first boundary point, so the search continues from the fourth boundary point for a fifth boundary point: the fourth target pixel adjacent to the fourth boundary point in a preset direction is obtained; suppose that direction is to the left, so the fourth target pixel is a1. The position of a1 in the matrix meets the preset requirement, so a1 is determined to be the fifth boundary point. Since the fifth boundary point coincides with the first boundary point, it is determined that the facial image includes a complete facial contour.
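The following is a minimal sketch of the contour-following idea illustrated above, assuming the face region is available as a NumPy binary mask; the helper name, the clockwise 8-neighbour search order and the step limit are illustrative assumptions rather than the patent's exact rule.

```python
import numpy as np

# Clockwise 8-neighbour offsets, starting from "up".
NEIGHBOURS = [(-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1), (-1, -1)]

def has_closed_contour(mask: np.ndarray, max_steps: int = 10000) -> bool:
    """Scan the mask in raster order for the first foreground pixel (the first boundary
    point), then follow the boundary; the contour is considered complete if the walk
    returns to the starting point."""
    rows, cols = np.nonzero(mask)
    if rows.size == 0:
        return False
    start = (int(rows[0]), int(cols[0]))              # first boundary point
    current, prev_dir = start, 0
    for _ in range(max_steps):
        for k in range(8):                            # look for the next boundary pixel
            d = (prev_dir + k) % 8
            nxt = (current[0] + NEIGHBOURS[d][0], current[1] + NEIGHBOURS[d][1])
            if (0 <= nxt[0] < mask.shape[0] and 0 <= nxt[1] < mask.shape[1]
                    and mask[nxt]):
                current, prev_dir = nxt, (d + 5) % 8  # restart the search roughly behind us
                break
        else:
            return False                              # isolated pixel, no closed contour
        if current == start:
            return True                               # a later boundary point coincides with the first
    return False
```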
Of course, in actual use, other methods, such as an edge-following algorithm based on dynamic weights or a "worm following" algorithm, can also be used to determine whether the facial image corresponding to the face key points includes a complete facial contour.
In the embodiments of the present invention, it is judged separately for each frame of the multiple frames of facial images whether the face key points corresponding to that frame meet the preset requirement; the facial images whose face key points meet the preset requirement are stored into the storage medium, and the facial image frames that do not meet the preset requirement are deleted.
Optionally, the preset quantity can be set according to the numbers of eyebrow feature points, eye feature points, mouth-contour feature points and face-edge feature points on the facial contour; for example, the preset quantity can be 68 feature points, or 74, and so on. No specific limitation is imposed here.
In actual use, the specific numbers of eyebrow feature points, eye feature points, mouth-contour feature points and face-edge feature points can be set separately according to user requirements; for example, the number of eyebrow feature points can be set to 10 or to 14. When the number of obtained face key points is greater than or equal to the preset quantity and the facial image includes a complete facial contour, the obtained face key points are all of the key points on the facial contour, and it is therefore determined that the face key points meet the preset requirement.
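Putting the two conditions of step S102 together, a hedged sketch might look as follows; `landmarks` comes from the detector sketched earlier, `face_mask` is an assumed binary face mask, `has_closed_contour` is the tracing helper sketched above, and the preset quantity of 68 is only the illustrative value mentioned in the text.

```python
PRESET_QUANTITY = 68   # illustrative value from the 68-feature-point example above

def meets_preset_requirement(landmarks, face_mask) -> bool:
    """Step S102: enough key points AND a complete (closed) facial contour."""
    if landmarks is None:
        return False
    return len(landmarks) >= PRESET_QUANTITY and has_closed_contour(face_mask)
```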
In one possible implementation, after step S102, the image processing method provided by the embodiment of the present invention further includes: if the face key points do not meet the preset requirement, deleting the facial image and re-executing step S101 until a frame that meets the preset requirement is detected.
A preset temporary data container is used to store the facial image corresponding to the current frame collected by the image acquisition device.
Optionally, the preset temporary data container can be a storage container based on a Map data type.
Step S103: if so, take at least one frame of the facial images whose face key points meet the preset requirement as the target image.
In one implementation, step S103 includes: storing the facial images whose face key points meet the preset requirement to a storage medium; determining whether the number of frames of facial images stored in the storage medium exceeds a preset threshold; and, if so, taking at least one frame of the facial images in the storage medium as the target image.
That is, when the face key points meet the preset requirement, the facial image corresponding to the face key points that meet the preset requirement is stored into the storage medium.
Optionally, the storage medium can be a cache or a hard disk.
Optionally, the preset threshold can be set according to the number of blurred pictures generated by the image acquisition device during the period corresponding to its motion. For example, if N blurred pictures are generated during a period t1 in which the image acquisition device is in motion, the preset threshold can be set to N+1 or N+M, where N is a positive integer greater than 1.
Optionally, determining the preset threshold according to the number of blurred pictures generated by the image acquisition device during the period corresponding to its motion includes: setting the preset threshold to the number of blurred pictures generated by the image acquisition device during the period corresponding to its motion.
Continuing with the above example, suppose N blurred pictures are generated during the period t1 in which the image acquisition device is in motion; the preset threshold can then be set to N.
Whether the image acquisition device is in motion can be determined from the degree of blur of the images acquired by the image acquisition device.
Optionally, the degree of blur of an image can be configured according to actual needs; no specific limitation is imposed here.
Optionally, each blurred picture acquired is generally stored into the storage medium, so that the number of blurred pictures can be obtained from the number of stored pictures; alternatively, a counter in the image acquisition device counts the number of blurred pictures generated during the period corresponding to the motion.
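The patent leaves the blur measure open; one common choice, offered here only as an assumption, is the variance of the Laplacian. Counting the frames this flags during a motion window would give the N used above to set the preset threshold.

```python
import cv2

BLUR_THRESHOLD = 100.0   # illustrative cutoff, not a value from the patent

def is_blurred(frame_bgr) -> bool:
    """Low variance of the Laplacian indicates a blurred frame."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    return cv2.Laplacian(gray, cv2.CV_64F).var() < BLUR_THRESHOLD
```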
Optionally, M is an integer greater than or equal to zero.
In actual use, M can generally be configured through tuning.
Optionally, taking at least one frame of the facial images in the storage medium as the target image includes: taking each frame of the multiple frames of facial images in the storage medium as a target image.
Optionally, in order to obtain a facial image of higher clarity, taking at least one frame of the facial images in the storage medium as the target image includes: obtaining the last frame of the facial images stored in the storage medium in chronological order, and taking that facial image as the target image.
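A minimal sketch of step S103 under stated assumptions: an in-memory list stands in for the "storage medium", PRESET_THRESHOLD is the value derived from the blurred-frame count, and `meets_preset_requirement` is the check sketched earlier.

```python
PRESET_THRESHOLD = 5   # illustrative value; in the patent it is derived from the blur count N
stored_frames = []     # stands in for the "storage medium"

def process_frame(frame_bgr, landmarks, face_mask):
    """Store qualifying frames; once enough have accumulated, return the last one as the target image."""
    if meets_preset_requirement(landmarks, face_mask):
        stored_frames.append(frame_bgr)       # store the qualifying frame
    # frames that fail the check are simply dropped (the deletion step described above)
    if len(stored_frames) > PRESET_THRESHOLD:
        return stored_frames[-1]              # last stored frame, taken as the target image
    return None                               # keep collecting frames
```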
In the embodiments of the present invention, the number of frames of facial images stored in the storage medium is used to detect the stability of the current frames; for example, if there are N frames of data in the storage medium, it can be concluded that the data acquired by the image acquisition device during the current N-frame period is in a relatively stable state. The target image can therefore be used for purposes with relatively high quality requirements, such as building an urban-population face database or face recognition (e.g. unlocking), thereby improving the efficiency of face recognition.
The image processing method provided by the embodiment of the present invention performs face detection on the facial image of the target object to obtain the face key points corresponding to the facial image; determines whether the face key points corresponding to the facial image meet a preset requirement; and, if so, takes at least one frame of the facial images whose face key points meet the preset requirement as the target image. In this way, frames that are distorted or blurred because of motion are effectively filtered out, a comparatively clear facial image is picked out from the multiple frames of facial images, the quality of the detected pictures is improved, and the accuracy of the algorithms used in the corresponding scenes is improved, so that the user's experience with applications in those scenes is significantly improved; the technical problem in the prior art that algorithm quality declines because frames of a face are blurred by motion is thereby effectively overcome.
Second embodiment
Corresponding to the image processing method in the first embodiment, this embodiment provides an image processing apparatus that corresponds one-to-one to the image processing method shown in the first embodiment. As shown in Fig. 2, the image processing apparatus 400 includes a face detection module 410, a processing module 420 and a marking module 430. The functions implemented by the face detection module 410, the processing module 420 and the marking module 430 correspond one-to-one to the corresponding steps in the first embodiment; to avoid repetition, they are not described in detail one by one in this embodiment.
The face detection module 410 is configured to perform face detection on the facial image of the target object to obtain the face key points corresponding to the facial image.
In one possible implementation, the image processing apparatus 400 further includes, before the face detection module 410, an acquisition module configured to obtain the multiple frames of facial images of the target object while it is in motion.
The processing module 420 is configured to determine whether the face key points corresponding to the facial image meet a preset requirement. Optionally, the processing module 420 is also configured to determine whether the number of face key points corresponding to the facial image is greater than the preset quantity and whether the facial image corresponding to the face key points includes a complete facial contour, and, if the number of face key points is greater than the preset quantity and the facial image includes a complete facial contour, to determine that the face key points meet the preset requirement.
Optionally, determining whether the facial image corresponding to the face key points includes a complete facial contour comprises: scanning the facial image according to a preset rule and finding the first pixel on the facial image, recorded as a first boundary point; determining a second boundary point according to the first boundary point; determining a third boundary point according to the second boundary point; and, if the third boundary point coincides with the first boundary point, determining that the facial image includes a complete facial contour.
Optionally, scanning the facial image according to the preset rule and finding the first pixel on the facial image, recorded as the first boundary point, comprises: obtaining the pixel set corresponding to the facial image; determining the matrix corresponding to the pixels according to the resolution of the facial image; and taking the pixel in the first row and first column of the matrix as the first boundary point on the facial image.
Optionally, determining the second boundary point according to the first boundary point comprises: obtaining the first target pixel adjacent to the first boundary point in the first preset direction; obtaining the position information of the first target pixel in the matrix; and, if the position information meets the preset requirement, determining that the first target pixel is the second boundary point.
Optionally, determining the third boundary point according to the second boundary point comprises: obtaining the second target pixel adjacent to the second boundary point in the second preset direction; obtaining the position information of the second target pixel in the matrix; and, if the position information meets the preset requirement, determining that the second target pixel is the third boundary point.
In one possible implementation, the image processing apparatus 400 further includes a clearing execution module configured to delete the facial image if the face key points do not meet the preset requirement.
The marking module 430 is configured to, if so, take at least one frame of the facial images whose face key points meet the preset requirement as the target image. Optionally, the marking module 430 is also configured to take any one frame of the multiple frames of facial images in the storage medium as the target image when the number of frames of facial images stored in the storage medium exceeds the preset threshold. Optionally, the marking module 430 is also configured to store the facial images whose face key points meet the preset requirement to the storage medium, to determine whether the number of frames of facial images stored in the storage medium exceeds the preset threshold, and, if so, to take at least one frame of the facial images in the storage medium as the target image.
Optionally, taking at least one frame of the facial images in the storage medium as the target image comprises: obtaining the last frame of the facial images stored in the storage medium in chronological order, and taking that facial image as the target image.
Optionally, the preset threshold satisfies: the preset threshold is determined according to the number of blurred pictures generated by the image acquisition device during the period corresponding to its motion.
Optionally, determining the preset threshold according to the number of blurred pictures generated by the image acquisition device during the period corresponding to its motion comprises: setting the preset threshold to the number of blurred pictures generated by the image acquisition device during the period corresponding to its motion.
3rd embodiment
As shown in Fig. 3, which is a schematic diagram of a terminal device 500, the terminal device 500 includes a memory 502, a processor 504, and a computer program 503 stored in the memory 502 and executable on the processor 504. When the computer program 503 is executed by the processor 504, the image processing method of the first embodiment is implemented; to avoid repetition, it is not described again here. Alternatively, when the computer program 503 is executed by the processor 504, the functions of the modules/units of the image processing apparatus of the second embodiment are implemented; to avoid repetition, they are not described again here.
Illustratively, the computer program 503 can be divided into one or more modules/units, which are stored in the memory 502 and executed by the processor 504 to carry out the present invention. The one or more modules/units can be a series of computer program instruction segments capable of completing specific functions, and the instruction segments describe the execution process of the computer program 503 in the terminal device 500. For example, the computer program 503 can be divided into the face detection module 410, the processing module 420 and the marking module 430 of the second embodiment; the specific functions of each module are as described in the first or second embodiment and are not repeated here.
The terminal device 500 can be a computing device such as a desktop computer, a notebook, a palmtop computer or a cloud server.
The memory 502 may be, but is not limited to, a random access memory (RAM), a read-only memory (ROM), a programmable read-only memory (PROM), an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (EEPROM), and so on. The memory 502 is used to store a program, and the processor 504 executes the program after receiving an execution instruction. The method defined by the flow disclosed in any of the foregoing embodiments of the present invention can be applied in the processor 504 or implemented by the processor 504.
The processor 504 may be an integrated circuit chip with signal processing capability. The processor 504 may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), etc.; it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component. It can implement or execute the methods, steps and logic diagrams disclosed in the embodiments of the present invention. The general-purpose processor may be a microprocessor, or any conventional processor.
It can be understood that the structure shown in Fig. 3 is only a schematic structural diagram of the terminal device 500, and the terminal device 500 may also include more or fewer components than shown in Fig. 3. Each component shown in Fig. 3 can be implemented in hardware, software, or a combination of the two.
Fourth embodiment
The embodiment of the present invention also provides a storage medium on which instructions are stored. When the instructions run on a computer, the image processing method of the first embodiment is implemented; to avoid repetition, it is not described again here. Alternatively, when the computer program is executed by a processor, the functions of the modules/units of the image processing apparatus of the second embodiment are implemented; to avoid repetition, they are not described again here.
Through the above description of the embodiments, those skilled in the art can clearly understand that the present invention can be implemented by hardware, or by software plus a necessary general hardware platform. Based on this understanding, the technical solution of the present invention can be embodied in the form of a software product, which can be stored in a non-volatile storage medium (such as a CD-ROM, a USB flash drive or a removable hard disk) and includes a number of instructions for enabling a computer device (which can be a personal computer, a server, a network device, etc.) to execute the methods of the various implementation scenarios of the present invention.
The foregoing is only a preferred embodiment of the present invention and is not intended to limit the present invention; for those skilled in the art, the present invention may have various modifications and changes. Any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the present invention shall be included in the protection scope of the present invention. It should also be noted that similar reference numerals and letters denote similar items in the following drawings; therefore, once an item has been defined in one drawing, it does not need to be further defined and explained in subsequent drawings.

Claims (13)

1. An image processing method, characterized by comprising:
performing face detection on a facial image of a target object to obtain face key points corresponding to the facial image;
determining whether the face key points corresponding to the facial image meet a preset requirement;
if so, taking at least one frame of the facial images whose face key points meet the preset requirement as a target image.
2. The method according to claim 1, characterized in that determining whether the face key points corresponding to the facial image meet the preset requirement comprises:
determining whether the number of face key points corresponding to the facial image is greater than a preset quantity and whether the facial image corresponding to the face key points includes a complete facial contour;
if the number of face key points is greater than the preset quantity and the facial image includes a complete facial contour, determining that the face key points meet the preset requirement.
3. The method according to claim 2, characterized in that determining whether the facial image corresponding to the face key points includes a complete facial contour comprises:
scanning the facial image according to a preset rule and finding the first pixel on the facial image, recorded as a first boundary point;
determining a second boundary point according to the first boundary point;
determining a third boundary point according to the second boundary point;
if the third boundary point coincides with the first boundary point, determining that the facial image includes a complete facial contour.
4. The method according to claim 3, characterized in that scanning the facial image according to the preset rule and finding the first pixel on the facial image, recorded as the first boundary point, comprises:
obtaining the pixel set corresponding to the facial image;
determining the matrix corresponding to the pixels according to the resolution of the facial image;
taking the pixel in the first row and first column of the matrix as the first boundary point on the facial image.
5. The method according to claim 4, characterized in that determining the second boundary point according to the first boundary point comprises:
obtaining a first target pixel adjacent to the first boundary point in a first preset direction;
obtaining position information of the first target pixel in the matrix;
if the position information meets the preset requirement, determining that the first target pixel is the second boundary point.
6. The method according to claim 4, characterized in that determining the third boundary point according to the second boundary point comprises:
obtaining a second target pixel adjacent to the second boundary point in a second preset direction;
obtaining position information of the second target pixel in the matrix;
if the position information meets the preset requirement, determining that the second target pixel is the third boundary point.
7. The method according to claim 1, characterized by further comprising:
if the face key points do not meet the preset requirement, deleting the facial image.
8. The method according to any one of claims 1-7, characterized in that taking at least one frame of the facial images whose face key points meet the preset requirement as the target image comprises:
storing the facial images whose face key points meet the preset requirement to a storage medium;
determining whether the number of frames of facial images stored in the storage medium exceeds a preset threshold;
if so, taking at least one frame of the facial images in the storage medium as the target image.
9. The method according to claim 8, characterized in that the preset threshold satisfies:
the preset threshold is determined according to the number of blurred pictures generated by the image acquisition device during the period corresponding to its motion.
10. The method according to claim 8, characterized in that taking at least one frame of the facial images in the storage medium as the target image comprises:
obtaining the last frame of the facial images stored in the storage medium in chronological order;
taking that facial image as the target image.
11. An image processing apparatus, characterized by comprising:
a face detection module, configured to perform face detection on a facial image of a target object to obtain face key points corresponding to the facial image;
a processing module, configured to determine whether the face key points corresponding to the facial image meet a preset requirement;
a marking module, configured to, if so, take at least one frame of the facial images whose face key points meet the preset requirement as a target image.
12. A terminal device, characterized by comprising: a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the steps of the image processing method according to any one of claims 1 to 10.
13. A storage medium, characterized in that instructions are stored on the storage medium, and when the instructions are run on a computer, the computer executes the image processing method according to any one of claims 1 to 10.
CN201811297964.1A 2018-11-01 2018-11-01 Image processing method, device, equipment and storage medium Pending CN109447006A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811297964.1A CN109447006A (en) 2018-11-01 2018-11-01 Image processing method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811297964.1A CN109447006A (en) 2018-11-01 2018-11-01 Image processing method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN109447006A true CN109447006A (en) 2019-03-08

Family

ID=65550169

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811297964.1A Pending CN109447006A (en) 2018-11-01 2018-11-01 Image processing method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN109447006A (en)

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100608595B1 (en) * 2004-11-16 2006-08-03 삼성전자주식회사 Face identifying method and apparatus
CN102542299A (en) * 2011-12-07 2012-07-04 惠州Tcl移动通信有限公司 Face recognition method, device and mobile terminal capable of recognizing face
CN104469092A (en) * 2013-09-18 2015-03-25 联想(北京)有限公司 Image acquisition method and electronic equipment
CN107483834A (en) * 2015-02-04 2017-12-15 广东欧珀移动通信有限公司 A kind of image processing method, continuous shooting method and device and related media production
CN105893963A (en) * 2016-03-31 2016-08-24 南京邮电大学 Method for screening out optimal easily-recognizable frame of single pedestrian target in video
CN106250851A (en) * 2016-08-01 2016-12-21 徐鹤菲 A kind of identity identifying method, equipment and mobile terminal
CN106845331A (en) * 2016-11-18 2017-06-13 深圳云天励飞技术有限公司 A kind of image processing method and terminal
CN107633209A (en) * 2017-08-17 2018-01-26 平安科技(深圳)有限公司 Electronic installation, the method and storage medium of dynamic video recognition of face
CN107729736A (en) * 2017-10-27 2018-02-23 广东欧珀移动通信有限公司 Face identification method and Related product
CN108268864A (en) * 2018-02-24 2018-07-10 达闼科技(北京)有限公司 Face identification method, system, electronic equipment and computer program product
CN108537787A (en) * 2018-03-30 2018-09-14 中国科学院半导体研究所 A kind of quality judging method of facial image

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
向元平 et al.: "基于正面人脸图像的人脸轮廓的提取" (Extraction of the Face Contour Based on Frontal Face Images), 《微计算机信息》 (Microcomputer Information) *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111754410A (en) * 2019-03-27 2020-10-09 浙江宇视科技有限公司 Image processing method and device, electronic equipment and readable storage medium
CN111754410B (en) * 2019-03-27 2024-04-09 浙江宇视科技有限公司 Image processing method and device, electronic equipment and readable storage medium
CN111860475A (en) * 2019-04-28 2020-10-30 北京市商汤科技开发有限公司 Image processing method and device, electronic device and storage medium
CN111860475B (en) * 2019-04-28 2023-12-19 北京市商汤科技开发有限公司 Image processing method and device, electronic equipment and storage medium
CN110298229A (en) * 2019-04-29 2019-10-01 星河视效文化传播(北京)有限公司 Method of video image processing and device
CN111915567A (en) * 2020-07-06 2020-11-10 浙江大华技术股份有限公司 Image quality evaluation method, device, equipment and medium
CN112907803A (en) * 2021-01-14 2021-06-04 湖南海讯供应链有限公司 Automatic AI (Artificial Intelligence) adjustment intelligent access control system and access control detection method
CN112907803B (en) * 2021-01-14 2021-09-28 湖南海讯供应链有限公司 Automatic AI (Artificial Intelligence) adjustment intelligent access control system and access control detection method

Similar Documents

Publication Publication Date Title
CN110532984B (en) Key point detection method, gesture recognition method, device and system
CN110807385B (en) Target detection method, target detection device, electronic equipment and storage medium
CN109447006A (en) Image processing method, device, equipment and storage medium
US20210158023A1 (en) System and Method for Generating Image Landmarks
US20180018503A1 (en) Method, terminal, and storage medium for tracking facial critical area
CN108875537B (en) Object detection method, device and system and storage medium
CN108520229A (en) Image detecting method, device, electronic equipment and computer-readable medium
US10395094B2 (en) Method and apparatus for detecting glasses in a face image
CN111008935B (en) Face image enhancement method, device, system and storage medium
CN109934065A (en) A kind of method and apparatus for gesture identification
CN108921131B (en) Method and device for generating face detection model and three-dimensional face image
CN109409962A (en) Image processing method, device, electronic equipment, computer readable storage medium
CN110858316A (en) Classifying time series image data
CN111935479A (en) Target image determination method and device, computer equipment and storage medium
CN111626163A (en) Human face living body detection method and device and computer equipment
CN108921070A (en) Image processing method, model training method and corresponding intrument
CN112528908A (en) Living body detection method, living body detection device, electronic apparatus, and storage medium
US10791321B2 (en) Constructing a user's face model using particle filters
CN109961103B (en) Training method of feature extraction model, and image feature extraction method and device
CN112906571B (en) Living body identification method and device and electronic equipment
CN105354833B (en) A kind of method and apparatus of shadow Detection
CN108509828A (en) A kind of face identification method and face identification device
CN110390344B (en) Alternative frame updating method and device
US10861174B2 (en) Selective 3D registration
CN111860057A (en) Face image blurring and living body detection method and device, storage medium and equipment

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20190308

RJ01 Rejection of invention patent application after publication