CN105975961B - Face recognition method, apparatus and terminal - Google Patents

Face recognition method, apparatus and terminal

Info

Publication number
CN105975961B
CN105975961B (application CN201610491320.0A)
Authority
CN
China
Prior art keywords
face
area
target image
region
probability
Prior art date
Legal status
Active
Application number
CN201610491320.0A
Other languages
Chinese (zh)
Other versions
CN105975961A (en)
Inventor
陈志军
龙飞
张旭华
Current Assignee
Beijing Xiaomi Mobile Software Co Ltd
Original Assignee
Beijing Xiaomi Mobile Software Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Xiaomi Mobile Software Co Ltd
Priority to CN201610491320.0A
Publication of CN105975961A
Application granted
Publication of CN105975961B
Status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The present disclosure provides a face recognition method, apparatus and terminal. In one embodiment, the method includes: obtaining a first region, the first region being a partial region of a target image to be recognized and containing all the faces in the target image; taking the region corresponding to each face detected in the first region as a target area; and determining the position information of each face in the target image according to the target areas. Without reducing recognition accuracy, the embodiment avoids precise recognition of every region of the whole image, thereby saving time and improving the efficiency of face recognition.

Description

Face recognition method, apparatus and terminal
Technical field
The present disclosure relates to the field of computer technology, and in particular to a face recognition method, apparatus and terminal.
Background
Face recognition is a rapidly developing computer technology that can be widely applied in fields such as government, the military, banking, social welfare, e-commerce and security. As the technology matures further and social acceptance grows, face recognition is being applied in ever more fields and has broad application prospects.
In the related art, convolutional neural network techniques can be used to perform face recognition on an image in order to improve recognition accuracy. However, when an image is recognized, every region of the whole image usually has to be recognized precisely before the face positions in the image can be located accurately, which wastes time and reduces the efficiency of face recognition.
Summary of the invention
In order to solve the above technical problem, the present disclosure provides a face recognition method, apparatus and terminal.
According to a first aspect of the embodiments of the present disclosure, a face recognition method is provided, comprising:
obtaining a first region, the first region being a partial region of a target image to be recognized and containing all the faces in the target image;
taking the region corresponding to each face detected in the first region as a target area;
determining position information of each face in the target image according to the target areas.
Optionally, obtaining the first region comprises:
obtaining distribution information of a face probability corresponding to the target image, the face probability being the probability of having facial features;
obtaining a partial region from the target image as the first region based on the distribution information of the face probability corresponding to the target image.
Optionally, obtaining the distribution information of the face probability corresponding to the target image comprises:
obtaining the face probability of each detection region in the target image based on a fully convolutional network;
determining the distribution information of the face probability corresponding to the target image according to the face probability and position information of each detection region.
Optionally, obtaining the face probability of each detection region in the target image based on the fully convolutional network comprises:
recognizing the target image at a first scale using a pre-trained fully convolutional network face recognition model to obtain the face probability of each detection region in the target image, wherein the first scale is smaller than the original scale of the target image.
Optionally, obtaining a partial region from the target image as the first region based on the distribution information of the face probability corresponding to the target image comprises:
taking the regions of the target image whose face probability is greater than or equal to a first probability as a second region, and obtaining from the target image the smallest rectangular region containing the second region as the first region.
Optionally, taking the region corresponding to each face detected in the first region as a target area comprises:
obtaining distribution information of the face probability corresponding to the first region;
detecting the target areas in the first region based on the distribution information of the face probability corresponding to the first region.
Optionally, obtaining the distribution information of the face probability corresponding to the first region comprises:
recognizing the first region at a second scale using the pre-trained fully convolutional network face recognition model to obtain the distribution information of the face probability corresponding to the first region, wherein the second scale is smaller than the original scale of the target image and larger than the first scale.
Optionally, the following detection step is performed repeatedly to detect each target area in the first region:
obtaining the set of detection regions in the first region that have not been marked with a face identifier;
taking the detection region in the set that satisfies a first predetermined condition as a center detection region;
taking the detection regions around the center detection region that satisfy a second predetermined condition as corresponding peripheral detection regions;
taking the connected region formed by the center detection region and its peripheral detection regions as the target area corresponding to one face, and marking the center detection region and its peripheral detection regions with the same face identifier.
Optionally, taking the detection regions around the center detection region that satisfy the second predetermined condition as the corresponding peripheral detection regions comprises:
starting from the detection regions adjacent to the center detection region, successively obtaining the detection regions that satisfy the second predetermined condition as the corresponding peripheral detection regions.
Optionally, satisfying the first predetermined condition comprises: having the highest face probability, the face probability being greater than or equal to a second probability.
Satisfying the second predetermined condition comprises: the face probability being greater than or equal to the second probability and smaller than the face probability of every detection region between the region and the center detection region.
Optionally, determining the position information of each face in the target image according to the target areas comprises:
determining a precise region corresponding to each face in the target image based on the target areas;
determining the position information of each face in the target image based on the precise regions.
Optionally, determining the precise region corresponding to each face in the target image based on the target areas comprises:
if adjacent target areas exist, taking the smallest rectangular region containing the adjacent target areas as a candidate region;
recognizing the candidate region at a third scale using the pre-trained fully convolutional network face recognition model to obtain distribution information of the face probability corresponding to the candidate region, wherein the third scale is smaller than or equal to the original scale of the target image and larger than the second scale;
performing the detection step repeatedly to obtain the precise region corresponding to each face in the target image.
Optionally, for each face, determining the position information of the face in the target image comprises:
obtaining a circular region centered at the center of the precise region of the face, such that the proportion of the face's precise region falling inside the circular region is greater than or equal to a predetermined ratio;
obtaining position information of the circumscribed rectangle of the circular region as the position information of the face.
According to a second aspect of the embodiments of the present disclosure, a face recognition apparatus is provided, comprising:
an obtaining module configured to obtain a first region, the first region being a partial region of a target image to be recognized and containing all the faces in the target image;
a detection module configured to take the region corresponding to each face detected in the first region obtained by the obtaining module as a target area;
a determining module configured to determine position information of each face in the target image according to the target areas detected by the detection module.
Optionally, the obtaining module comprises:
a first obtaining submodule configured to obtain distribution information of a face probability corresponding to the target image, the face probability being the probability of having facial features;
a second obtaining submodule configured to obtain a partial region from the target image as the first region based on the distribution information of the face probability corresponding to the target image obtained by the first obtaining submodule.
Optionally, the first obtaining submodule comprises:
a probability obtaining submodule configured to obtain the face probability of each detection region in the target image based on a fully convolutional network;
an information determining submodule configured to determine the distribution information of the face probability corresponding to the target image according to the face probability and position information of each detection region obtained by the probability obtaining submodule.
Optionally, the probability obtaining submodule comprises:
a first recognition submodule configured to recognize the target image at a first scale using a pre-trained fully convolutional network face recognition model to obtain the face probability of each detection region in the target image, wherein the first scale is smaller than the original scale of the target image.
Optionally, the second obtaining submodule comprises:
a region obtaining submodule configured to take the regions of the target image whose face probability is greater than or equal to the first probability as a second region and obtain from the target image the smallest rectangular region containing the second region as the first region.
Optionally, the detection module comprises:
a third obtaining submodule configured to obtain distribution information of the face probability corresponding to the first region;
a detection submodule configured to detect the target areas in the first region based on the distribution information of the face probability corresponding to the first region obtained by the third obtaining submodule.
Optionally, the third obtaining submodule comprises:
a second recognition submodule configured to recognize the first region at a second scale using the pre-trained fully convolutional network face recognition model to obtain the distribution information of the face probability corresponding to the first region, wherein the second scale is smaller than the original scale of the target image and larger than the first scale.
Optionally, the detection submodule comprises:
a set obtaining submodule configured to obtain the set of detection regions in the first region that have not been marked with a face identifier;
a center detection region determining submodule configured to take the detection region satisfying a first predetermined condition in the set obtained by the set obtaining submodule as a center detection region;
a peripheral detection region determining submodule configured to take the detection regions around the center detection region determined by the center detection region determining submodule that satisfy a second predetermined condition as corresponding peripheral detection regions;
a marking submodule configured to take the connected region formed by the center detection region and its peripheral detection regions as the target area corresponding to one face and to mark the center detection region and its peripheral detection regions with the same face identifier.
Optionally, the peripheral detection region determining submodule comprises:
a condition detection submodule configured to, starting from the detection regions adjacent to the center detection region, successively obtain the detection regions satisfying the second predetermined condition as the corresponding peripheral detection regions.
Optionally, satisfying the first predetermined condition comprises: having the highest face probability, the face probability being greater than or equal to a second probability.
Satisfying the second predetermined condition comprises: the face probability being greater than or equal to the second probability and smaller than the face probability of every detection region between the region and the center detection region.
Optionally, the determining module comprises:
a precise region determining submodule configured to determine a precise region corresponding to each face in the target image based on the target areas;
a position information determining submodule configured to determine the position information of each face in the target image based on the precise regions determined by the precise region determining submodule.
Optionally, the precise region determining submodule comprises:
a selection submodule configured to, when adjacent target areas exist, take the smallest rectangular region containing the adjacent target areas as a candidate region;
a third recognition submodule configured to recognize the candidate region at a third scale using the pre-trained fully convolutional network face recognition model to obtain distribution information of the face probability corresponding to the candidate region, wherein the third scale is smaller than or equal to the original scale of the target image and larger than the second scale;
an execution submodule configured to perform the detection step repeatedly to obtain the precise region corresponding to each face in the target image.
Optionally, the position information determining submodule comprises:
a circular region obtaining submodule configured to obtain a circular region centered at the center of the precise region of the face, such that the proportion of the face's precise region falling inside the circular region is greater than or equal to a predetermined ratio;
a position information obtaining submodule configured to obtain position information of the circumscribed rectangle of the circular region obtained by the circular region obtaining submodule as the position information of the face.
According to a third aspect of the embodiments of the present disclosure, a terminal is provided, comprising:
a processor; and
a memory for storing instructions executable by the processor;
wherein the processor is configured to:
obtain a first region, the first region being a partial region of a target image to be recognized and containing all the faces in the target image;
take the region corresponding to each face detected in the first region as a target area;
determine position information of each face in the target image according to the target areas.
The technical solutions provided by the embodiments of the present disclosure may include the following beneficial effects:
In the face recognition method provided by the above embodiments of the disclosure, a partial region containing all the faces is first obtained from the target image to be recognized as the first region; the first region is then recognized, and the region corresponding to each face detected in it is taken as a target area; finally, the position information of each face in the target image is determined according to the target areas. Without reducing recognition accuracy, precise recognition of every region of the whole image is thus avoided, which saves time and improves the efficiency of face recognition.
In the face recognition method provided by the above embodiments of the disclosure, the distribution information of the face probability corresponding to the target image is obtained, a partial region of the target image is obtained as the first region based on that distribution information, the region corresponding to each face detected in the first region is taken as a target area, and the position information of each face in the target image is determined according to the target areas. Time is thereby further saved and the efficiency of face recognition is further improved.
In the face recognition method provided by the above embodiments of the disclosure, after the first region is obtained, the distribution information of the face probability corresponding to the first region is obtained, the target areas are detected in the first region based on that distribution information, and the position information of each face in the target image is determined according to the target areas. Time is thereby further saved and the efficiency of face recognition is further improved.
In the face recognition method provided by the above embodiments of the disclosure, after the first region is obtained, the region corresponding to each face detected in the first region is taken as a target area, the precise region corresponding to each face in the target image is determined based on the target areas, and the position information of each face in the target image is determined based on the precise regions. The accuracy and efficiency of face recognition are thereby improved.
It should be understood that the above general description and the following detailed description are merely exemplary and explanatory and do not limit the present disclosure.
Brief description of the drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure.
Figure 1A is a flowchart of a face recognition method according to an exemplary embodiment of the disclosure;
Figure 1B is a schematic diagram of selecting a first region from a target image according to an exemplary embodiment of the disclosure;
Fig. 2 is a flowchart of another face recognition method according to an exemplary embodiment of the disclosure;
Fig. 3A is a flowchart of another face recognition method according to an exemplary embodiment of the disclosure;
Fig. 3B is a flowchart of a method of detecting target areas in a first region according to an exemplary embodiment of the disclosure;
Fig. 4 is a flowchart of another face recognition method according to an exemplary embodiment of the disclosure;
Fig. 5 is a block diagram of a face recognition apparatus according to an exemplary embodiment of the disclosure;
Fig. 6 is a block diagram of another face recognition apparatus according to an exemplary embodiment of the disclosure;
Fig. 7 is a block diagram of another face recognition apparatus according to an exemplary embodiment of the disclosure;
Fig. 8 is a block diagram of another face recognition apparatus according to an exemplary embodiment of the disclosure;
Fig. 9 is a block diagram of another face recognition apparatus according to an exemplary embodiment of the disclosure;
Figure 10 is a block diagram of another face recognition apparatus according to an exemplary embodiment of the disclosure;
Figure 11 is a block diagram of another face recognition apparatus according to an exemplary embodiment of the disclosure;
Figure 12 is a block diagram of another face recognition apparatus according to an exemplary embodiment of the disclosure;
Figure 13 is a block diagram of another face recognition apparatus according to an exemplary embodiment of the disclosure;
Figure 14 is a block diagram of another face recognition apparatus according to an exemplary embodiment of the disclosure;
Figure 15 is a block diagram of another face recognition apparatus according to an exemplary embodiment of the disclosure;
Figure 16 is a block diagram of another face recognition apparatus according to an exemplary embodiment of the disclosure;
Figure 17 is a structural schematic diagram of a face recognition apparatus according to an exemplary embodiment of the disclosure.
Specific embodiments
Exemplary embodiments will be described in detail here, examples of which are illustrated in the accompanying drawings. Where the following description refers to the drawings, the same numbers in different drawings denote the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present disclosure; rather, they are merely examples of apparatuses and methods consistent with some aspects of the disclosure as recited in the appended claims.
The terminology used in the present disclosure is for the purpose of describing particular embodiments only and is not intended to limit the disclosure. The singular forms "a", "said" and "the" used in the disclosure and the appended claims are also intended to include the plural forms unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It will be appreciated that although the terms first, second, third and so on may be used in the present disclosure to describe various pieces of information, such information should not be limited by these terms; the terms are only used to distinguish information of the same type from one another. For example, without departing from the scope of the disclosure, the first information could also be called the second information and, similarly, the second information could also be called the first information. Depending on the context, the word "if" as used herein may be interpreted as "when", "while" or "in response to determining".
As shown in Figure 1A, Figure 1A is a flowchart of a face recognition method according to an exemplary embodiment. The method can be applied to a terminal device or a server. Those skilled in the art will understand that the terminal device may include, but is not limited to, a mobile terminal device such as a smartphone, a smart wearable device, a tablet computer, a personal digital assistant, a laptop portable computer, a desktop computer, and the like. The method includes the following steps:
In step 101, a first region is obtained.
In this embodiment, an image to be recognized may first be obtained as the target image. An image may be captured directly by a camera device and used as the target image, an image may be selected from locally stored images as the target image, or an image transmitted by another terminal device or a server may be received as the target image. It will be understood that the target image may also be obtained in other ways; the disclosure does not limit the specific manner of obtaining the target image.
Then, the first region may be obtained from the target image. The first region contains all the faces in the target image and is a partial region of the target image. As shown in Figure 1B, image 111 is the target image and region 112 is the face region in the target image. Region 113, which contains all the faces in the target image, may be used as the first region; region 114, which also contains all the faces in the target image, may likewise be used as the first region. It will be understood that any partial region of the target image containing all the faces may be used as the first region; the disclosure does not limit the specific choice of the first region.
In this embodiment, the approximate area of the faces in the target image is first detected roughly, and the first region is obtained according to this approximate area. In one implementation, the target image may be recognized directly using an existing, less accurate method to roughly obtain the face probability of each detection region in the target image. In another implementation, a fully convolutional network algorithm may be applied to the target image at a reduced scale to roughly obtain the face probability of each detection region in the target image. It will be understood that the face probability of each detection region in the target image may also be obtained in other ways; the disclosure does not limit the specific manner of obtaining it.
In step 102, the region corresponding to each face detected in the first region is taken as a target area.
In step 103, the position information of each face in the target image is determined according to the target areas.
In this embodiment, the regions of the target image outside the first region can be ignored; only the first region is analyzed, so that the region corresponding to each face is detected in the first region as that face's target area. Since the target area corresponding to a face may be a circular region or a roughly circular irregular region, while an image consists of pixels arranged in rows and columns, for ease of description and marking a rectangular region containing the target area may be determined from each face's target area, and this rectangle is used to describe and mark the position of the corresponding face in the target image.
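Since a detected target area is typically an irregular, roughly circular set of detection cells, describing it by its enclosing rectangle is a simple mask operation. The following is a minimal sketch, not taken from the patent, which assumes the target area is represented as a 2-D boolean mask over detection cells; the function name and the NumPy-based representation are illustrative assumptions.

```python
import numpy as np

def bounding_rect(mask):
    """Return (top, left, bottom, right) of the smallest rectangle
    enclosing all True cells in a 2-D boolean mask."""
    rows = np.any(mask, axis=1)
    cols = np.any(mask, axis=0)
    top, bottom = np.where(rows)[0][[0, -1]]
    left, right = np.where(cols)[0][[0, -1]]
    return top, left, bottom, right

# a roughly circular target area inside a 10x10 grid of detection cells
mask = np.zeros((10, 10), dtype=bool)
yy, xx = np.ogrid[:10, :10]
mask[(yy - 5) ** 2 + (xx - 4) ** 2 <= 6] = True
print(bounding_rect(mask))  # (3, 2, 7, 6)
```

The returned rectangle is in cell coordinates; multiplying by the cell size maps it back to pixel coordinates in the target image.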
In the face recognition method provided by the above embodiment of the disclosure, a partial region containing all the faces is first obtained from the target image to be recognized as the first region; the first region is then recognized, and the region corresponding to each face detected in it is taken as a target area; finally, the position information of each face in the target image is determined according to the target areas. Without reducing recognition accuracy, precise recognition of every region of the whole image is thus avoided, which saves time and improves the efficiency of face recognition.
As shown in Fig. 2, Fig. 2 is a flowchart of another face recognition method according to an exemplary embodiment, which describes the process of obtaining the first region in detail. The method can be applied to a terminal device or a server and includes the following steps:
In step 201, distribution information of the face probability corresponding to the target image is obtained.
In this embodiment, the face probability is the probability of having facial features. After the target image to be recognized is obtained, the distribution information of the face probability corresponding to the target image can be obtained. In one implementation of this embodiment, the target image may be recognized directly using an existing, less accurate method to roughly obtain the face probability of each detection region in the target image.
In another implementation of this embodiment, the distribution information of the face probability corresponding to the target image may be obtained based on a fully convolutional network, which yields higher accuracy. Specifically, the face probability of each detection region in the target image may first be obtained based on the fully convolutional network. The target image consists of pixels arranged in rows and columns, and it may be divided evenly, according to this arrangement, into multiple regions serving as detection regions, each containing a rectangular array of pixels. For example, the target image may be divided into detection regions in units of 16x16 rectangular pixel arrays; assuming the target image has 1600x3200 pixels, 100x200 detection regions can be obtained. Each detection region is one detection unit, and the face probability of each detection region is computed separately. It will be understood that the target image may also be divided in units of 32x32 rectangular pixel arrays; the disclosure does not limit the specific division of detection regions.
In this embodiment, the target image at a first scale may be recognized using a pre-trained fully convolutional network face recognition model to obtain the face probability of each detection region in the target image, wherein the first scale is smaller than the original scale of the target image. Specifically, the target image may first be shrunk to the first scale, so that it contains fewer pixels than the original target image. The shrunken target image is then divided into multiple detection regions, and the pre-trained fully convolutional network face recognition model is applied to the target image at the first scale; the recognition result is the face probability of each detection region in the target image. Because the shrunken target image has fewer pixels than the original, recognizing it at the first scale with the pre-trained model requires less computation than recognizing it at the original scale, which increases the computation speed.
Then, the distribution information of the face probability corresponding to the target image can be determined according to the face probability and position information of each detection region. Specifically, the position information of each detection region may be obtained and associated with the corresponding face probability to generate the distribution information of the face probability corresponding to the target image. For example, a heat map of the face probability corresponding to the target image may be generated; the heat map can be used to describe the distribution information of the face probability corresponding to the target image.
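As an illustration of how such a distribution could be assembled, the sketch below divides an already downscaled image into 16x16 detection cells and fills in a per-cell probability map. The patent's trained fully convolutional network is not disclosed, so `score_cell` is a stand-in scoring function; in practice a fully convolutional network would produce the whole map in a single forward pass rather than cell by cell.

```python
import numpy as np

CELL = 16  # assumed detection-cell size in pixels, per the 16x16 example above

def face_probability_map(image, score_cell):
    """Divide `image` (H x W, already shrunk to the first scale) into
    CELL x CELL detection cells and return a 2-D array whose entry (i, j)
    is the face probability of cell (i, j)."""
    h, w = image.shape[:2]
    rows, cols = h // CELL, w // CELL
    prob = np.empty((rows, cols), dtype=np.float32)
    for i in range(rows):
        for j in range(cols):
            cell = image[i * CELL:(i + 1) * CELL, j * CELL:(j + 1) * CELL]
            prob[i, j] = score_cell(cell)  # stand-in for the trained model
    return prob

# toy stand-in scorer: brighter cells get a higher "face probability"
toy_scorer = lambda cell: float(cell.mean()) / 255.0
image = np.random.randint(0, 256, size=(1600 // 4, 3200 // 4), dtype=np.uint8)
heat_map = face_probability_map(image, toy_scorer)
print(heat_map.shape)  # (25, 50) detection cells for this reduced image
```

Together with each cell's row and column index, `heat_map` plays the role of the distribution information (heat map) of the face probability described above.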
In step 202, a partial region is obtained from the target image as the first region based on the distribution information of the face probability corresponding to the target image.
In this embodiment, statistics may be computed over the face probabilities of the detection regions in the target image, and the regions of the target image whose face probability is greater than or equal to a first probability are obtained as the second region, which can be regarded as the roughly detected face region. The first probability is a preset probability threshold and may be an empirical value; it will be understood that the disclosure does not limit its specific value.
In this embodiment, any region of the target image containing the second region may be used as the first region. Further, the smallest rectangular region containing the second region may be obtained from the target image as the first region. It will be understood that the smaller the region chosen around the second region, the smaller the amount of computation, so choosing the smallest enclosing rectangle increases the computation speed.
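A possible realization of steps 201-202, under the assumption that the distribution information is the per-cell probability map sketched above: threshold the map to obtain the second region, then take its smallest enclosing rectangle as the first region. The 0.5 value for the first probability is an assumed empirical threshold, not taken from the patent.

```python
import numpy as np

def first_region(prob_map, first_probability=0.5):
    """Cells with face probability >= first_probability form the second
    region; the smallest rectangle enclosing them is taken as the first
    region, returned as (top, left, bottom, right) in cell coordinates."""
    second_region = prob_map >= first_probability
    if not second_region.any():
        return None  # no face-like cells at all
    rows = np.where(second_region.any(axis=1))[0]
    cols = np.where(second_region.any(axis=0))[0]
    return rows[0], cols[0], rows[-1], cols[-1]
```

Scaling the returned cell coordinates by the cell size and by the downscale factor maps the first region back to pixel coordinates in the original target image.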
In step 203, the region corresponding to each face detected in the first region is taken as a target area.
In step 204, the position information of each face in the target image is determined according to the target areas.
It should be noted that the steps identical to those of the embodiment of Figure 1A are not described again in the embodiment of Fig. 2; for the relevant content, refer to the embodiment of Figure 1A.
In the face recognition method provided by the above embodiment of the disclosure, the distribution information of the face probability corresponding to the target image is obtained, a partial region of the target image is obtained as the first region based on that distribution information, the region corresponding to each face detected in the first region is taken as a target area, and the position information of each face in the target image is determined according to the target areas. Time is thereby further saved and the efficiency of face recognition is improved.
As shown in Figure 3A, Fig. 3A is a flowchart of another face recognition method according to an exemplary embodiment, which describes in detail the process of taking the region corresponding to each face detected in the first region as a target area. The method can be used in a terminal device or a server and includes the following steps:
In step 301, a first region is obtained.
In step 302, distribution information of the face probability corresponding to the first region is obtained.
In this embodiment, after the first region is obtained, the distribution information of the face probability corresponding to the first region is obtained. The first region at a second scale may be recognized using the pre-trained fully convolutional network face recognition model to obtain the distribution information of the face probability corresponding to the first region, wherein the second scale is smaller than the original scale of the target image and larger than the first scale of the embodiment of Fig. 2.
Specifically, the target image may first be shrunk to the second scale, so that it contains fewer pixels than the original target image but more pixels than the image used in the embodiment of Fig. 2. The shrunken first region is then taken from the shrunken target image and divided into multiple detection regions, and the pre-trained fully convolutional network face recognition model is applied to the first region at the second scale; the recognition result is the face probability of each detection region in the first region.
Because the shrunken first region contains fewer pixels than the original first region, recognizing it at the second scale with the pre-trained model requires less computation than recognizing it at the original scale, which increases the computation speed. At the same time, because it contains more pixels than the image used in the embodiment of Fig. 2, the face probability of each detection region is computed with higher accuracy than the rough computation in that embodiment.
It should be noted that the first region may also be obtained from the original target image first, and only the obtained first region is then shrunk to the second scale; the disclosure does not limit this.
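The sketch below illustrates this second, finer pass under the same assumptions as before: the first region is cropped from the original target image, shrunk to the second scale, and only the crop is passed to the fully convolutional model. The nearest-neighbour shrink, the fractional representation of the second scale and the `run_fcn` callable are illustrative placeholders, not the patent's implementation.

```python
def probability_map_second_scale(image, first_region_px, second_scale, run_fcn):
    """Crop the first region (top, left, bottom, right, in pixels) from the
    original image, shrink it to the second scale (a fraction of the original
    size) and run the stand-in fully convolutional model on the crop only;
    `run_fcn` is expected to return a per-cell face-probability map."""
    top, left, bottom, right = first_region_px
    crop = image[top:bottom + 1, left:right + 1]
    # naive nearest-neighbour shrink; a real implementation would use a
    # proper image-resize routine
    step = max(1, int(round(1.0 / second_scale)))
    small = crop[::step, ::step]
    return run_fcn(small)
```

Because only the first region is processed, the amount of computation depends on the size of the first region rather than on the whole target image.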
In step 303, the target areas are detected in the first region based on the distribution information of the face probability corresponding to the first region.
In this embodiment, the target areas can be detected in the first region according to the distribution information of the face probability corresponding to the first region. As shown in Figure 3B, Fig. 3B is a flowchart of a method of detecting target areas in the first region according to an exemplary embodiment. By performing detection steps 311-314 repeatedly, each target area can be detected in the first region; each execution of the detection step detects the region of one face. The detection step may include:
In step 311, the set of detection regions in the first region that have not been marked with a face identifier is obtained.
In this embodiment, after a detection region in the first region is confirmed to be a face region, it can be marked with a face identifier, and the detection regions corresponding to the same face are marked with the same face identifier. Therefore, first, the set of detection regions in the first region that have not been marked with a face identifier is obtained.
In step 312, the detection region in the set that satisfies a first predetermined condition is taken as the center detection region.
In this embodiment, satisfying the first predetermined condition may include: having the highest face probability, the face probability being greater than or equal to a second probability. The second probability is a preset probability threshold and may be an empirical value; it will be understood that the disclosure does not limit its specific value. It should be noted that the second probability should be larger than the first probability of the embodiment of Fig. 2.
Accordingly, among the detection regions in the set whose face probability is greater than or equal to the second probability, the detection region with the highest face probability may be obtained and taken as the center detection region.
In step 313, the detection regions around the center detection region that satisfy a second predetermined condition are taken as the corresponding peripheral detection regions.
In this embodiment, satisfying the second predetermined condition includes: the face probability being greater than or equal to the second probability and smaller than the face probability of every detection region between the region and the center detection region. For example, if the detection regions lying between detection region SA and the center detection region S0 are SB, SC and SD, and the face probability of SA is smaller than the face probabilities of SB, SC and SD, then SA can serve as a peripheral detection region corresponding to the center detection region S0.
Specifically, starting from the detection regions adjacent to the center detection region and moving outward away from it, each detection region is judged in turn as to whether it satisfies the second predetermined condition, until a detection region that does not satisfy the second predetermined condition is found; the regions adjacent to a detection region that fails the condition are no longer examined. The detection regions that satisfy the second predetermined condition are obtained as the corresponding peripheral detection regions.
For example, it may first be judged whether the four detection regions adjacent to the center detection region satisfy the second predetermined condition; if they do, these four detection regions are taken as corresponding peripheral detection regions. It is then judged whether the detection regions adjacent to these four peripheral detection regions satisfy the second predetermined condition; those that do are also taken as corresponding peripheral detection regions, and so on, until a detection region that does not satisfy the second predetermined condition is found, at which point its adjacent detection regions are no longer examined.
In step 314, the connected region formed by the center detection region and its corresponding peripheral detection regions is taken as the target area corresponding to one face, and the center detection region and the corresponding peripheral detection regions are marked with the same face identifier.
In this embodiment, the connected region formed by the center detection region and its corresponding peripheral detection regions can be taken as the target area corresponding to one face, and the center detection region and the corresponding peripheral detection regions are marked with the same face identifier.
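Steps 311-314 amount to a region-growing procedure over the per-cell probability map. The sketch below is one possible reading, with two explicit assumptions: 4-connectivity between detection cells, and the "smaller than every detection region between it and the center" test approximated by requiring the probability not to increase along the growth path; the 0.6 value of the second probability is also only an assumed threshold.

```python
import numpy as np
from collections import deque

def detect_target_areas(prob, second_probability=0.6):
    """Label the target areas in a per-cell face-probability map.  Each area
    is grown outward from an unlabeled cell of maximal probability (the
    center detection region); a neighbouring cell joins the area only if its
    probability reaches second_probability and does not exceed that of the
    cell it is reached from, so probabilities fall monotonically away from
    the center.  Cells sharing a label belong to the same face."""
    labels = np.zeros(prob.shape, dtype=int)  # 0 means "no face identifier yet"
    next_id = 1
    while True:
        candidates = (labels == 0) & (prob >= second_probability)
        if not candidates.any():
            break
        # step 312: unlabeled cell with the highest face probability -> center
        center = np.unravel_index(np.argmax(np.where(candidates, prob, -1.0)),
                                  prob.shape)
        labels[center] = next_id
        # steps 313-314: breadth-first growth over 4-adjacent cells
        queue = deque([center])
        while queue:
            ci, cj = queue.popleft()
            for ni, nj in ((ci - 1, cj), (ci + 1, cj), (ci, cj - 1), (ci, cj + 1)):
                if (0 <= ni < prob.shape[0] and 0 <= nj < prob.shape[1]
                        and labels[ni, nj] == 0
                        and second_probability <= prob[ni, nj] <= prob[ci, cj]):
                    labels[ni, nj] = next_id
                    queue.append((ni, nj))
        next_id += 1
    return labels
```

Each connected set of cells carrying the same label corresponds to one face's target area; its enclosing rectangle can then be obtained as sketched earlier.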
In step 304, the position information of each face in the target image is determined according to the target areas.
It should be noted that the steps identical to those of the embodiments of Figure 1A and Fig. 2 are not described again in the embodiment of Fig. 3A; for the relevant content, refer to the embodiments of Figure 1A and Fig. 2.
In the face recognition method provided by the above embodiment of the disclosure, after the first region is obtained, the distribution information of the face probability corresponding to the first region is obtained, the target areas are detected in the first region based on that distribution information, and the position information of each face in the target image is determined according to the target areas. Time is thereby further saved and the efficiency of face recognition is improved.
As shown in Fig. 4, Fig. 4 is a flowchart of another face recognition method according to an exemplary embodiment, which describes in detail the process of determining the position information of each face in the target image according to the target areas. The method can be used in a terminal device or a server and includes the following steps:
In step 401, a first region is obtained.
In step 402, the region corresponding to each face detected in the first region is taken as a target area.
In step 403, the precise region corresponding to each face in the target image is determined based on the target areas.
In this embodiment, if the target areas are independent, that is, not adjacent to one another, each independent target area can be taken directly as the precise region corresponding to one face in the target image, and no further refinement is needed. If adjacent target areas exist, two faces may be close to each other, and further refinement is needed to determine the precise region corresponding to each face in the target image.
Specifically, the smallest rectangular region containing the adjacent target areas may be taken as a candidate region, and the candidate region at a third scale is recognized using the pre-trained fully convolutional network face recognition model to obtain the distribution information of the face probability corresponding to the candidate region, wherein the third scale is smaller than or equal to the original scale of the target image and larger than the second scale. Then, detection steps 311-314 are performed repeatedly to obtain the precise region corresponding to each face in the target image.
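A small helper, assuming target areas are kept as (top, left, bottom, right) rectangles in cell coordinates, showing how adjacent target areas could be merged into the candidate region that is re-examined at the third scale; the merged rectangle would then be rescaled and fed back through the detection step sketched above. The adjacency test and the rectangle representation are assumptions made for illustration.

```python
def are_adjacent(rect_a, rect_b):
    """True if two (top, left, bottom, right) cell rectangles touch or overlap."""
    return not (rect_a[2] + 1 < rect_b[0] or rect_b[2] + 1 < rect_a[0] or
                rect_a[3] + 1 < rect_b[1] or rect_b[3] + 1 < rect_a[1])

def candidate_region(rect_a, rect_b):
    """Smallest rectangle covering both adjacent target areas; this becomes
    the candidate region that is re-detected at the third scale."""
    return (min(rect_a[0], rect_b[0]), min(rect_a[1], rect_b[1]),
            max(rect_a[2], rect_b[2]), max(rect_a[3], rect_b[3]))
```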
In step 404, the position information of each face in the target image is determined based on the precise regions.
In this embodiment, for each face, the position information of the face in the target image can be determined as follows: first, a circular region centered at the center of the precise region of the face is obtained, such that the proportion of the face's precise region falling inside the circular region is greater than or equal to a predetermined ratio; then, the position information of the circumscribed rectangle of the circular region is obtained as the position information of the face.
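One way to realize this, assuming the precise region is available as a boolean mask in pixel coordinates: grow a circle about the region's center until it covers the predetermined ratio of the region, then report the circle's bounding square. The 0.95 ratio and the use of the bounding-box center as "the center of the precise region" are assumptions made for this sketch.

```python
import numpy as np

def face_location(precise_mask, ratio=0.95):
    """Return (top, left, bottom, right) of the square circumscribing the
    smallest circle, centered on the precise region, that contains at least
    `ratio` of the region's pixels.  Assumes a non-empty mask."""
    ys, xs = np.nonzero(precise_mask)
    cy, cx = (ys.min() + ys.max()) / 2.0, (xs.min() + xs.max()) / 2.0
    dist = np.hypot(ys - cy, xs - cx)
    total = len(ys)
    for r in np.sort(dist):
        if (dist <= r).sum() >= ratio * total:
            break
    return int(cy - r), int(cx - r), int(cy + r), int(cx + r)
```

The returned rectangle is the position information reported for the face in the target image.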
It should be noted that the steps identical to those of the embodiments of Figures 1A-3A are not described again in the embodiment of Fig. 4; for the relevant content, refer to the embodiments of Figures 1A-3A.
In the face recognition method provided by the above embodiment of the disclosure, after the first region is obtained, the region corresponding to each face detected in the first region is taken as a target area, the precise region corresponding to each face in the target image is determined based on the target areas, and the position information of each face in the target image is determined based on the precise regions. The accuracy and efficiency of face recognition are thereby improved.
It should be noted that although the operations of the method of the present invention are described in the drawings in a particular order, this does not require or imply that the operations must be performed in that particular order, or that all of the illustrated operations must be performed, to achieve the desired result. On the contrary, the steps described in the flowcharts may be executed in a different order. Additionally or alternatively, certain steps may be omitted, multiple steps may be combined into one step, and/or one step may be decomposed into multiple steps for execution.
Corresponding to the foregoing embodiments of the face recognition method, the present disclosure also provides embodiments of a face recognition apparatus and of a terminal to which it is applied.
As shown in Fig. 5, Fig. 5 is a block diagram of a face recognition apparatus according to an exemplary embodiment of the disclosure. The apparatus includes an obtaining module 501, a detection module 502 and a determining module 503.
The obtaining module 501 is configured to obtain a first region, the first region being a partial region of a target image to be recognized and containing all the faces in the target image.
In this embodiment, an image to be recognized may first be obtained as the target image. An image may be captured directly by a camera device and used as the target image, an image may be selected from locally stored images as the target image, or an image transmitted by another terminal device or a server may be received as the target image. It will be understood that the target image may also be obtained in other ways; the disclosure does not limit the specific manner of obtaining the target image.
Then, the first region may be obtained from the target image; the first region contains all the faces in the target image and is a partial region of the target image. In this embodiment, the approximate area of the faces in the target image is first detected roughly, and the first region is obtained according to this approximate area. In one implementation, the target image may be recognized directly using an existing, less accurate method to roughly obtain the face probability of each detection region in the target image. In another implementation, a fully convolutional network algorithm may be applied to the target image at a reduced scale to roughly obtain the face probability of each detection region in the target image. It will be understood that the face probability of each detection region in the target image may also be obtained in other ways; the disclosure does not limit the specific manner of obtaining it.
The detection module 502 is configured to take the region corresponding to each face detected in the first region obtained by the obtaining module 501 as a target area.
The determining module 503 is configured to determine the position information of each face in the target image according to the target areas detected by the detection module 502.
In this embodiment, the regions of the target image outside the first region can be ignored; only the first region is analyzed, so that the region corresponding to each face is detected in the first region as that face's target area. Since the target area corresponding to a face may be a circular region or a roughly circular irregular region, while an image consists of pixels arranged in rows and columns, for ease of description and marking a rectangular region containing the target area may be determined from each face's target area, and this rectangle is used to describe and mark the position of the corresponding face in the target image.
The apparatus shown in Fig. 5 implements the method flow shown in Figure 1A. In the above apparatus embodiment, the obtaining module 501 first obtains a partial region containing all the faces from the target image to be recognized as the first region; the detection module 502 then recognizes the first region and takes the region corresponding to each face detected in it as a target area; finally, the determining module 503 determines the position information of each face in the target image according to the target areas. Without reducing recognition accuracy, precise recognition of every region of the whole image is thus avoided, which saves time and improves the efficiency of face recognition.
As shown in Fig. 6, Fig. 6 is a block diagram of another face recognition apparatus according to an exemplary embodiment of the disclosure. On the basis of the foregoing embodiment of Fig. 5, the obtaining module 501 may include a first obtaining submodule 601 and a second obtaining submodule 602.
The first obtaining submodule 601 is configured to obtain distribution information of the face probability corresponding to the target image, the face probability being the probability of having facial features.
In this embodiment, after the target image to be recognized is obtained, the distribution information of the face probability corresponding to the target image can be obtained. In one implementation of this embodiment, the target image may be recognized directly using an existing, less accurate method to roughly obtain the face probability of each detection region in the target image.
In another implementation of this embodiment, the distribution information of the face probability corresponding to the target image may be obtained based on a fully convolutional network, which yields higher accuracy.
The second obtaining submodule 602 is configured to obtain a partial region from the target image as the first region based on the distribution information of the face probability corresponding to the target image obtained by the first obtaining submodule 601.
In this embodiment, statistics may be computed over the face probabilities of the detection regions in the target image, and the regions of the target image whose face probability is greater than or equal to the first probability are obtained as the second region, which can be regarded as the roughly detected face region. The first probability is a preset probability threshold and may be an empirical value; it will be understood that the disclosure does not limit its specific value.
In the above apparatus embodiment, the first obtaining submodule 601 obtains the distribution information of the face probability corresponding to the target image, and the second obtaining submodule 602 obtains a partial region from the target image as the first region based on that distribution information; the region corresponding to each face detected in the first region is taken as a target area, and the position information of each face in the target image is determined according to the target areas. Time is thereby further saved and the efficiency of face recognition is improved.
As shown in fig. 7, Fig. 7 is the device frame of the disclosure another recognition of face shown according to an exemplary embodiment Figure, for the embodiment on the basis of aforementioned embodiment illustrated in fig. 6, the first acquisition submodule 601 may include: that probability obtains submodule Block 701 and information determine submodule 702.
Wherein, probability acquisition submodule 701 is configured as obtaining each detection zone in target image based on full convolutional network The face probability in domain.
In the present embodiment, it for target image, is arranged by the pixel of multiple row and columns, it can be by target Image is evenly dividing into multiple regions as detection zone according to the arrangement of pixel, and each detection zone includes one group of rectangle picture The array of vegetarian refreshments.For example, detection zone can be divided to target image as unit of the rectangular pixels lattice array of 16*16, it is false If the pixel of target image is 1600*3200, then 100*200 detection zone can be marked off from target image.With each Detection zone is a detection unit, calculates separately the face probability of each detection zone.It is appreciated that can also be with 32*32 Rectangular pixels lattice array be unit, detection zone is divided to target image, the disclosure is to the specific division of detection zone aspect It does not limit.
Information determines submodule 702, is configured as the people of each detection zone obtained according to probability acquisition submodule 701 Face probability and location information determine the distributed intelligence of the corresponding face probability of target image.
In the present embodiment, the location information of available each detection zone, then by the position of each detection zone Information joins with corresponding face probability correlation, to generate the distributed intelligence of the corresponding face probability of target image.For example, can be with The temperature figure of the corresponding face probability of target image is generated, which can be used for describing the corresponding face probability of target image Distributed intelligence.
In the above device embodiment of face recognition, the probability acquisition submodule 701 obtains the face probability of each detection region in the target image based on a fully convolutional network, and the information determination submodule 702 determines the distribution information of the face probability corresponding to the target image according to the face probability and location information of each detection region. Time is therefore further saved and the efficiency of face recognition is improved.
As shown in Fig. 8, Fig. 8 is a block diagram of another face recognition device according to an exemplary embodiment of the disclosure. On the basis of the embodiment shown in Fig. 7, the probability acquisition submodule 701 may include a first recognition submodule 801.
The first recognition submodule 801 is configured to recognize the target image at a first scale using a pre-trained fully convolutional network face recognition model, so as to obtain the face probability of each detection region in the target image, wherein the first scale is smaller than the original scale of the target image.
In this embodiment, specifically, the target image is first scaled down to the first scale, so that its pixel count is smaller than that of the original target image. The scaled-down target image is then divided into multiple detection regions, and the target image at the first scale is recognized using the pre-trained fully convolutional network face recognition model; the recognition result is the face probability of each detection region in the target image.
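A hedged sketch of this first-scale pass is shown below. `fcn_face_model` is a placeholder for the pre-trained model the text refers to, and the stride-based shrink stands in for proper image resizing; a real implementation would use an interpolating resize.

```python
# Sketch of the first-scale pass: shrink the target image, run a fully
# convolutional face model on it, and read one face probability per detection region.
import numpy as np

def face_prob_map_first_scale(image: np.ndarray, fcn_face_model, factor: int = 4) -> np.ndarray:
    """Downscale `image` by `factor` and return the grid of face probabilities
    produced by the fully convolutional model (one value per detection region)."""
    small = image[::factor, ::factor]  # crude downscale to the first scale
    # A fully convolutional model preserves spatial layout, so its output can
    # be read directly as a per-detection-region probability map.
    return fcn_face_model(small)
```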
In the above device embodiment of face recognition, since the scaled-down target image has fewer pixels than the original target image, recognizing the target image at the first scale with the pre-trained fully convolutional network face recognition model requires less computation than recognizing the target image at its original scale with the same model, so the computation speed is improved.
As shown in Fig. 9, Fig. 9 is a block diagram of another face recognition device according to an exemplary embodiment of the disclosure. On the basis of the embodiment shown in Fig. 6, the second acquisition submodule 602 may include a region acquisition submodule 901.
The region acquisition submodule 901 is configured to take the regions of the target image whose face probability is greater than or equal to the first probability as a second area, and to acquire from the target image the minimum rectangular area containing the second area as the first area.
In this embodiment, any region of the target image that contains the second area may be used as the first area. Further, the minimum rectangular area containing the second area may be acquired from the target image as the first area.
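A minimal sketch of the minimum rectangular area, working in detection-region coordinates (the function name, the (x0, y0, x1, y1) convention and the cell size of 16 are assumptions):

```python
# Smallest axis-aligned rectangle containing every detection region of the
# second area, used here as the first area.
import numpy as np

def min_bounding_rect(mask: np.ndarray, cell: int = 16):
    """Return (x0, y0, x1, y1) in pixel coordinates for the smallest rectangle
    covering all True cells of `mask`, or None if the mask is empty."""
    rows, cols = np.nonzero(mask)
    if rows.size == 0:
        return None
    return (cols.min() * cell, rows.min() * cell,
            (cols.max() + 1) * cell, (rows.max() + 1) * cell)
```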
In the above device embodiment of face recognition, since the first area chosen to contain the second area is the smallest possible, the amount of computation is also minimal, which improves the computation speed.
As shown in Fig. 10, Fig. 10 is a block diagram of another face recognition device according to an exemplary embodiment of the disclosure. On the basis of the embodiment shown in Fig. 5, the detection module 502 may include a third acquisition submodule 1001 and a detection submodule 1002.
The third acquisition submodule 1001 is configured to obtain the distribution information of the face probability corresponding to the first area.
In this embodiment, after the first area is obtained, the distribution information of the face probability corresponding to the first area is obtained. The first area at a second scale may be recognized using the pre-trained fully convolutional network face recognition model to obtain this distribution information, wherein the second scale is smaller than the original scale of the target image and larger than the first scale of the embodiment of Fig. 2.
The detection submodule 1002 is configured to detect the target areas from the first area based on the distribution information of the face probability corresponding to the first area obtained by the third acquisition submodule 1001.
In this embodiment, the target areas can be detected from the first area according to the distribution information of the face probability corresponding to the first area. The detection submodule 1002 may repeatedly execute the detection steps shown in Fig. 3B to detect each target area from the first area; each execution of the detection steps detects the region of one face.
The device shown in Fig. 10 is used to implement the method flow shown in Fig. 3A. In this device embodiment of face recognition, after the first area is obtained, the distribution information of the face probability corresponding to the first area is obtained, the target areas are detected from the first area based on that distribution information, and the location information of each face in the target image is determined from the target areas. Time is therefore further saved and the efficiency of face recognition is improved.
As shown in Fig. 11, Fig. 11 is a block diagram of another face recognition device according to an exemplary embodiment of the disclosure. On the basis of the embodiment shown in Fig. 10, the third acquisition submodule 1001 may include a second recognition submodule 1101.
The second recognition submodule 1101 is configured to recognize the first area at the second scale using the pre-trained fully convolutional network face recognition model, so as to obtain the distribution information of the face probability corresponding to the first area, wherein the second scale is smaller than the original scale of the target image and larger than the first scale.
Specifically, the target image is first scaled down to the second scale, so that it has fewer pixels than the original target image but more than the image in the embodiment of Fig. 2. The scaled-down first area is then taken from the scaled-down target image and divided into multiple detection regions, and the first area at the second scale is recognized using the pre-trained fully convolutional network face recognition model; the recognition result is the face probability of each detection region in the first area.
In the above device embodiment of face recognition, since the scaled-down first area has fewer pixels than the original first area, recognizing the first area at the second scale with the pre-trained fully convolutional network face recognition model requires less computation than recognizing the first area at its original scale with the same model, which improves the computation speed. At the same time, since the pixel count is larger than that of the image in the embodiment of Fig. 2, the face probability of each detection region is computed more accurately than in the rough computation of the embodiment of Fig. 2.
As shown in Fig. 12, Fig. 12 is a block diagram of another face recognition device according to an exemplary embodiment of the disclosure. On the basis of the embodiment shown in Fig. 10, the detection submodule 1002 may include a set acquisition submodule 1201, a center detection region determination submodule 1202, a peripheral detection region determination submodule 1203 and a marking submodule 1204.
The set acquisition submodule 1201 is configured to obtain the set of detection regions in the first area that have not been marked with a face identifier.
In this embodiment, after a detection region in the first area has been determined to be a face region, it can be marked with a face identifier, and the detection regions corresponding to the same face are marked with the same face identifier. Therefore, the set of detection regions in the first area that have not yet been marked with a face identifier is obtained first.
The center detection region determination submodule 1202 is configured to take, from the set obtained by the set acquisition submodule 1201, a detection region that satisfies a first predetermined condition as the center detection region.
In this embodiment, satisfying the first predetermined condition may include: the face probability is the maximum, and the face probability is greater than or equal to a second probability. The second probability is a preset probability threshold and may be an empirical value; the disclosure does not limit its specific value. It should be noted that the second probability should be greater than the first probability.
Therefore, among the detection regions in the above set whose face probability is greater than or equal to the second probability, the detection region with the maximum face probability can be obtained and taken as the center detection region.
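A minimal sketch of this selection, assuming the face probabilities and face identifiers are kept in two arrays indexed by detection region (the names, the 0-means-unlabeled convention and the 0.6 value for the second probability are illustrative, not from the patent):

```python
# Choose the center detection region: among unlabeled regions whose probability
# reaches the second probability, take the one with the maximum probability.
import numpy as np

def pick_center_region(prob: np.ndarray, labels: np.ndarray, second_prob: float = 0.6):
    """`labels` holds a face identifier per detection region (0 = unlabeled).
    Returns the (row, col) of the center detection region, or None."""
    candidates = (labels == 0) & (prob >= second_prob)
    if not candidates.any():
        return None
    masked = np.where(candidates, prob, -1.0)
    return np.unravel_index(np.argmax(masked), prob.shape)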
The peripheral detection region determination submodule 1203 is configured to take the detection regions around the center detection region determined by the center detection region determination submodule 1202 that satisfy a second predetermined condition as the corresponding peripheral detection regions.
In this embodiment, satisfying the second predetermined condition includes: the face probability is greater than or equal to the second probability, and the face probability is less than the face probabilities of all detection regions between the detection region and the center detection region. Specifically, starting from the detection regions adjacent to the center detection region and moving outward away from the center detection region, whether each detection region satisfies the second predetermined condition is judged in turn, until a detection region that does not satisfy the second predetermined condition is found, at which point the regions adjacent to that detection region are no longer examined. The detection regions that satisfy the second predetermined condition are taken as the corresponding peripheral detection regions.
For example, it may first be judged whether the four detection regions adjacent to the center detection region satisfy the second predetermined condition; those that do are taken as corresponding peripheral detection regions. It is then judged, for each of these peripheral detection regions, whether its adjacent detection regions satisfy the second predetermined condition; the adjacent detection regions that do are also taken as corresponding peripheral detection regions. This continues until a detection region that does not satisfy the second predetermined condition is found, after which its adjacent detection regions are no longer tested.
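The outward growth can be sketched as a breadth-first search over the detection-region grid. This is not the patent's own code: 4-connectivity, the names, and the simplification of the "less than all regions in between" test to a comparison with the region it was reached from are assumptions; the center is assumed to have already passed the first predetermined condition.

```python
# Grow one face's target area outward from the center detection region: a
# neighbour is kept if its probability is at least the second probability and
# strictly below that of the region it was reached from, so probabilities fall
# monotonically away from the center.
from collections import deque
import numpy as np

def grow_face_region(prob: np.ndarray, center: tuple, second_prob: float = 0.6):
    """Return the set of (row, col) cells forming one face's target area."""
    h, w = prob.shape
    region = {center}
    queue = deque([center])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w and (nr, nc) not in region:
                if second_prob <= prob[nr, nc] < prob[r, c]:
                    region.add((nr, nc))
                    queue.append((nr, nc))
    return region
```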
The marking submodule 1204 is configured to take the connected region formed by the center detection region and the corresponding peripheral detection regions as the target area corresponding to one face, and to mark the center detection region and the corresponding peripheral detection regions with the same face identifier.
In this embodiment, the connected region formed by the center detection region and the corresponding peripheral detection regions is taken as the target area corresponding to one face, and the center detection region and the corresponding peripheral detection regions are marked with the same face identifier.
The device shown in Fig. 12 is used to implement the method flow shown in Fig. 3B. This device embodiment of face recognition can detect the region of every face, which further saves time and improves the efficiency of face recognition.
As shown in Fig. 13, Fig. 13 is a block diagram of another face recognition device according to an exemplary embodiment of the disclosure. On the basis of the embodiment shown in Fig. 12, the peripheral detection region determination submodule 1203 may include a condition detection submodule 1301.
The condition detection submodule 1301 is configured to successively obtain, starting from the detection regions adjacent to the center detection region, the detection regions that satisfy the second predetermined condition as the corresponding peripheral detection regions.
In some optional embodiments, satisfying the first predetermined condition includes: the face probability is the maximum, and the face probability is greater than or equal to the second probability.
Satisfying the second predetermined condition includes: the face probability is greater than or equal to the second probability, and the face probability is less than the face probabilities of all detection regions between the detection region and the center detection region.
As shown in Fig. 14, Fig. 14 is a block diagram of another face recognition device according to an exemplary embodiment of the disclosure. On the basis of the embodiment shown in Fig. 5, the determining module 503 may include a precise region determination submodule 1401 and a location information determination submodule 1402.
The precise region determination submodule 1401 is configured to determine the precise region corresponding to each face in the target image based on the target areas.
In this embodiment, if the target areas are independent, that is, not adjacent to one another, each independent target area is taken as the precise region corresponding to one face in the target image, and no further refinement is needed. If there are adjacent target areas, two faces may be close to each other, and further refinement is needed to determine the precise region corresponding to each face in the target image.
Specifically, the minimum rectangular area containing the adjacent target areas is taken as a candidate region, and the candidate region at a third scale is recognized using the pre-trained fully convolutional network face recognition model to obtain the distribution information of the face probability corresponding to the candidate region, wherein the third scale is smaller than or equal to the original scale of the target image and larger than the second scale. Detection steps 311 to 314 are then repeated to obtain the precise region corresponding to each face in the target image.
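As a small illustration of the merging step (the function name and the (x0, y0, x1, y1) rectangle convention are assumptions, not from the patent), adjacent target areas can be combined into one candidate region like this; a real implementation would also need a test for whether two target areas are actually adjacent on the detection-region grid, which is omitted here.

```python
# Minimum rectangle containing two adjacent target areas; this rectangle is the
# candidate region that is then re-recognized at the third (finer) scale.
def merge_to_candidate(rect_a, rect_b):
    return (min(rect_a[0], rect_b[0]), min(rect_a[1], rect_b[1]),
            max(rect_a[2], rect_b[2]), max(rect_a[3], rect_b[3]))
```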
The location information determination submodule 1402 is configured to determine the location information corresponding to each face in the target image based on the precise regions determined by the precise region determination submodule 1401.
In this embodiment, for each face, the location information corresponding to that face in the target image can be determined as follows: first, a circular area is obtained with the center of the precise region of the face as its center, such that within the circular area the proportion occupied by the precise region of the face is greater than or equal to a predetermined ratio; then, the location information of the bounding rectangle of the circular area is obtained as the location information corresponding to the face.
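A hedged sketch of this step is given below. The growth strategy and the exact interpretation of the ratio condition are my assumptions; the patent only states that the precise region's share of the circular area must reach the predetermined ratio and that the circle's bounding rectangle gives the location information.

```python
# Turn a face's precise region into location information: grow a circle from the
# region's centroid while the precise region still occupies at least `ratio` of
# the circle, then report the circle's bounding rectangle.
import numpy as np

def face_location(precise_mask: np.ndarray, ratio: float = 0.5):
    """`precise_mask` is a boolean pixel mask of one face's precise region.
    Returns (x0, y0, x1, y1), the bounding rectangle of the chosen circle."""
    ys, xs = np.nonzero(precise_mask)
    cy, cx = ys.mean(), xs.mean()                  # circle center
    yy, xx = np.indices(precise_mask.shape)
    dist = np.hypot(yy - cy, xx - cx)
    best_r = 1.0
    for r in range(1, max(precise_mask.shape)):
        inside = dist <= r
        if precise_mask[inside].mean() < ratio:    # precise region's share of the circle
            break
        best_r = float(r)
    return (int(cx - best_r), int(cy - best_r), int(cx + best_r), int(cy + best_r))
```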
The device shown in Fig. 14 is used to implement the method flow shown in Fig. 4. In this device embodiment of face recognition, after the first area is obtained, the region corresponding to each face detected from the first area is taken as a target area, the precise region corresponding to each face in the target image is determined based on the target areas, and the location information corresponding to each face is determined based on the precise regions. The accuracy and efficiency of face recognition are thereby improved.
As shown in Fig. 15, Fig. 15 is a block diagram of another face recognition device according to an exemplary embodiment of the disclosure. On the basis of the embodiment shown in Fig. 14, the precise region determination submodule 1401 may include a selection submodule 1501, a third recognition submodule 1502 and an execution submodule 1503.
The selection submodule 1501 is configured to take, when there are adjacent target areas, the minimum rectangular area containing the adjacent target areas as the candidate region.
The third recognition submodule 1502 is configured to recognize the candidate region at the third scale using the pre-trained fully convolutional network face recognition model, so as to obtain the distribution information of the face probability corresponding to the candidate region, wherein the third scale is smaller than or equal to the original scale of the target image and larger than the second scale.
The execution submodule 1503 is configured to repeat the detection steps to obtain the precise region corresponding to each face in the target image.
As shown in Fig. 16, Fig. 16 is a block diagram of another face recognition device according to an exemplary embodiment of the disclosure. On the basis of the embodiment shown in Fig. 14, the location information determination submodule 1402 may include a circular area acquisition submodule 1601 and a location information acquisition submodule 1602.
The circular area acquisition submodule 1601 is configured to obtain a circular area with the center of the precise region of the face as its center, such that within the circular area the proportion occupied by the precise region of the face is greater than or equal to the predetermined ratio.
The location information acquisition submodule 1602 is configured to obtain the location information of the bounding rectangle of the circular area obtained by the circular area acquisition submodule 1601 as the location information corresponding to the face.
In the above device embodiment of face recognition, a circular area is obtained with the center of the precise region of the face as its center, such that within the circular area the proportion occupied by the precise region of the face is greater than or equal to the predetermined ratio, and the location information of the bounding rectangle of the circular area is then obtained as the location information corresponding to the face. The accuracy and efficiency of face recognition are thereby further improved.
It should be understood that the above device may be preset in a terminal device or a server, or may be loaded into the terminal device or server by downloading or other means. The corresponding modules in the above device cooperate with modules in the terminal device or server to implement the face recognition scheme.
For the device embodiments, since they substantially correspond to the method embodiments, reference may be made to the description of the method embodiments for relevant details. The device embodiments described above are merely illustrative; the units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units, that is, they may be located in one place or distributed over multiple network units. Some or all of the modules may be selected as needed to achieve the purpose of the disclosed scheme, which those of ordinary skill in the art can understand and implement without creative effort.
Correspondingly, the disclosure also provides a terminal, the terminal including a processor and a memory for storing processor-executable instructions, wherein the processor is configured to:
acquire a first area, the first area being a partial region in a target image to be identified, and the first area including all faces in the target image;
take the region corresponding to each face detected from the first area as a target area;
determine the location information corresponding to each face in the target image according to the target areas.
Figure 17 is a schematic structural diagram of a device 9900 for face recognition according to an exemplary embodiment. For example, the device 9900 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, a personal digital assistant, or the like.
Referring to Fig. 17, the device 9900 may include one or more of the following components: a processing component 9902, a memory 9904, a power component 9906, a multimedia component 9908, an audio component 9910, an input/output (I/O) interface 9912, a sensor component 9914, and a communication component 9916.
The processing component 9902 generally controls the overall operation of the device 9900, such as operations associated with display, telephone calls, data communication, camera operation and recording. The processing component 9902 may include one or more processors 9920 to execute instructions so as to perform all or part of the steps of the above method. In addition, the processing component 9902 may include one or more modules to facilitate interaction between the processing component 9902 and other components; for example, it may include a multimedia module to facilitate interaction between the multimedia component 9908 and the processing component 9902.
The memory 9904 is configured to store various types of data to support operation of the device 9900. Examples of such data include instructions of any application or method operated on the device 9900, contact data, phonebook data, messages, pictures, video, and so on. The memory 9904 may be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, a magnetic disk or an optical disk.
The power component 9906 supplies power to the various components of the device 9900. The power component 9906 may include a power management system, one or more power supplies, and other components associated with generating, managing and distributing power for the device 9900.
The multimedia component 9908 includes a screen providing an output interface between the device 9900 and the user. In some embodiments, the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, slides and gestures on the touch panel; the touch sensors may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 9908 includes a front camera and/or a rear camera. When the device 9900 is in an operation mode, such as a shooting mode or a video mode, the front camera and/or the rear camera can receive external multimedia data. Each front camera and rear camera may be a fixed optical lens system or have focusing and optical zoom capability.
The audio component 9910 is configured to output and/or input audio signals. For example, the audio component 9910 includes a microphone (MIC), which is configured to receive external audio signals when the device 9900 is in an operation mode such as a call mode, a recording mode or a voice recognition mode. The received audio signals may be further stored in the memory 9904 or transmitted via the communication component 9916. In some embodiments, the audio component 9910 further includes a speaker for outputting audio signals.
The I/O interface 9912 provides an interface between the processing component 9902 and peripheral interface modules, which may be a keyboard, a click wheel, buttons, and the like. These buttons may include, but are not limited to, a home button, a volume button, a start button and a lock button.
The sensor component 9914 includes one or more sensors for providing status assessments of various aspects of the device 9900. For example, the sensor component 9914 may detect the open/closed state of the device 9900 and the relative positioning of components, such as the display and keypad of the device 9900; the sensor component 9914 may also detect a change in position of the device 9900 or of a component of the device 9900, the presence or absence of user contact with the device 9900, the orientation or acceleration/deceleration of the device 9900, and a change in the temperature of the device 9900. The sensor component 9914 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor component 9914 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor component 9914 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, a microwave sensor or a temperature sensor.
The communication component 9916 is configured to facilitate wired or wireless communication between the device 9900 and other devices. The device 9900 can access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof. In one exemplary embodiment, the communication component 9916 receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In one exemplary embodiment, the communication component 9916 further includes a near-field communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on radio frequency identification (RFID) technology, Infrared Data Association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology and other technologies.
In an exemplary embodiment, the device 9900 may be implemented by one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors or other electronic components, for performing the above method.
In an exemplary embodiment, there is also provided a non-transitory computer-readable storage medium including instructions, such as the memory 9904 including instructions, which can be executed by the processor 9920 of the device 9900 to complete the above method. For example, the non-transitory computer-readable storage medium may be a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
Other embodiments of the disclosure will readily occur to those skilled in the art after considering the specification and practicing the invention disclosed herein. This application is intended to cover any variations, uses or adaptations of the disclosure that follow its general principles and include common knowledge or conventional techniques in the art not disclosed herein. The specification and examples are to be considered exemplary only, and the true scope and spirit of the disclosure are indicated by the following claims.
It should be understood that the disclosure is not limited to the precise structures that have been described above and shown in the drawings, and that various modifications and changes may be made without departing from its scope. The scope of the disclosure is limited only by the appended claims.

Claims (17)

1. A method of face recognition, characterized in that the method comprises:
acquiring a first area, the first area being a partial region in a target image to be identified, and the first area including all faces in the target image;
taking the region corresponding to each face detected from the first area as a target area;
determining location information corresponding to each face in the target image according to the target areas;
wherein the acquiring a first area comprises:
obtaining distribution information of a face probability corresponding to the target image, the face probability being a probability of having a face characteristic;
acquiring a partial region from the target image as the first area based on the distribution information of the face probability corresponding to the target image;
wherein the obtaining distribution information of the face probability corresponding to the target image comprises:
obtaining a face probability of each detection region in the target image based on a fully convolutional network;
determining the distribution information of the face probability corresponding to the target image according to the face probability and location information of each detection region;
wherein the obtaining a face probability of each detection region in the target image based on a fully convolutional network comprises:
recognizing the target image at a first scale using a pre-trained fully convolutional network face recognition model to obtain the face probability of each detection region in the target image, wherein the first scale is smaller than an original scale of the target image;
wherein the taking the region corresponding to each face detected from the first area as a target area comprises:
obtaining distribution information of a face probability corresponding to the first area;
detecting the target areas from the first area based on the distribution information of the face probability corresponding to the first area;
wherein the following detection steps are repeatedly executed to detect each target area from the first area:
obtaining a set of detection regions in the first area that are not marked with a face identifier;
taking a detection region in the set that satisfies a first predetermined condition as a center detection region;
taking detection regions around the center detection region that satisfy a second predetermined condition as corresponding peripheral detection regions;
taking a connected region formed by the center detection region and the corresponding peripheral detection regions as the target area corresponding to one face, and marking the center detection region and the corresponding peripheral detection regions with the same face identifier.
2. The method according to claim 1, wherein the acquiring a partial region from the target image as the first area based on the distribution information of the face probability corresponding to the target image comprises:
taking regions of the target image whose face probability is greater than or equal to a first probability as a second area, and acquiring from the target image a minimum rectangular area containing the second area as the first area.
3. The method according to claim 1, wherein the obtaining distribution information of the face probability corresponding to the first area comprises:
recognizing the first area at a second scale using the pre-trained fully convolutional network face recognition model to obtain the distribution information of the face probability corresponding to the first area, wherein the second scale is smaller than the original scale of the target image and larger than the first scale.
4. The method according to claim 1, wherein the taking detection regions around the center detection region that satisfy a second predetermined condition as corresponding peripheral detection regions comprises:
successively obtaining, starting from detection regions adjacent to the center detection region, detection regions that satisfy the second predetermined condition as the corresponding peripheral detection regions.
5. The method according to claim 1, wherein satisfying the first predetermined condition comprises: the face probability being the maximum and being greater than or equal to a second probability;
and satisfying the second predetermined condition comprises: the face probability being greater than or equal to the second probability and being less than the face probabilities of all detection regions between the detection region and the center detection region.
6. The method according to claim 3, wherein the determining location information corresponding to each face in the target image according to the target areas comprises:
determining a precise region corresponding to each face in the target image based on the target areas;
determining the location information corresponding to each face in the target image based on the precise regions.
7. The method according to claim 6, wherein the determining a precise region corresponding to each face in the target image based on the target areas comprises:
if there are adjacent target areas, taking a minimum rectangular area containing the adjacent target areas as a candidate region;
recognizing the candidate region at a third scale using the pre-trained fully convolutional network face recognition model to obtain distribution information of a face probability corresponding to the candidate region, wherein the third scale is smaller than or equal to the original scale of the target image and larger than the second scale;
repeatedly executing the detection steps to obtain the precise region corresponding to each face in the target image.
8. The method according to claim 7, wherein, for each face, the location information corresponding to the face in the target image is determined by:
obtaining a circular area with the center of the precise region of the face as its center, so that within the circular area the proportion occupied by the precise region of the face is greater than or equal to a predetermined ratio;
obtaining location information of a bounding rectangle of the circular area as the location information corresponding to the face.
9. A device for face recognition, characterized in that the device comprises:
an acquisition module configured to acquire a first area, the first area being a partial region in a target image to be identified, and the first area including all faces in the target image;
a detection module configured to take, as a target area, the region corresponding to each face detected from the first area acquired by the acquisition module;
a determining module configured to determine location information corresponding to each face in the target image according to the target areas detected by the detection module;
wherein the acquisition module comprises:
a first acquisition submodule configured to obtain distribution information of a face probability corresponding to the target image, the face probability being a probability of having a face characteristic;
a second acquisition submodule configured to acquire a partial region from the target image as the first area based on the distribution information of the face probability corresponding to the target image obtained by the first acquisition submodule;
wherein the first acquisition submodule comprises:
a probability acquisition submodule configured to obtain a face probability of each detection region in the target image based on a fully convolutional network;
an information determination submodule configured to determine the distribution information of the face probability corresponding to the target image according to the face probability and location information of each detection region obtained by the probability acquisition submodule;
wherein the probability acquisition submodule comprises:
a first recognition submodule configured to recognize the target image at a first scale using a pre-trained fully convolutional network face recognition model to obtain the face probability of each detection region in the target image, wherein the first scale is smaller than an original scale of the target image;
wherein the detection module comprises:
a third acquisition submodule configured to obtain distribution information of a face probability corresponding to the first area;
a detection submodule configured to detect the target areas from the first area based on the distribution information of the face probability corresponding to the first area obtained by the third acquisition submodule;
wherein the detection submodule detects each target area from the first area by repeatedly executing detection steps, and the detection submodule comprises:
a set acquisition submodule configured to obtain a set of detection regions in the first area that are not marked with a face identifier;
a center detection region determination submodule configured to take, from the set obtained by the set acquisition submodule, a detection region that satisfies a first predetermined condition as a center detection region;
a peripheral detection region determination submodule configured to take detection regions around the center detection region determined by the center detection region determination submodule that satisfy a second predetermined condition as corresponding peripheral detection regions;
a marking submodule configured to take a connected region formed by the center detection region and the corresponding peripheral detection regions as the target area corresponding to one face, and to mark the center detection region and the corresponding peripheral detection regions with the same face identifier.
10. The device according to claim 9, wherein the second acquisition submodule comprises:
a region acquisition submodule configured to take regions of the target image whose face probability is greater than or equal to a first probability as a second area, and to acquire from the target image a minimum rectangular area containing the second area as the first area.
11. The device according to claim 9, wherein the third acquisition submodule comprises:
a second recognition submodule configured to recognize the first area at a second scale using the pre-trained fully convolutional network face recognition model to obtain the distribution information of the face probability corresponding to the first area, wherein the second scale is smaller than the original scale of the target image and larger than the first scale.
12. The device according to claim 9, wherein the peripheral detection region determination submodule comprises:
a condition detection submodule configured to successively obtain, starting from detection regions adjacent to the center detection region, detection regions that satisfy the second predetermined condition as the corresponding peripheral detection regions.
13. The device according to claim 9, wherein satisfying the first predetermined condition comprises: the face probability being the maximum and being greater than or equal to a second probability;
and satisfying the second predetermined condition comprises: the face probability being greater than or equal to the second probability and being less than the face probabilities of all detection regions between the detection region and the center detection region.
14. The device according to claim 11, wherein the determining module comprises:
a precise region determination submodule configured to determine a precise region corresponding to each face in the target image based on the target areas;
a location information determination submodule configured to determine the location information corresponding to each face in the target image based on the precise regions determined by the precise region determination submodule.
15. The device according to claim 14, wherein the precise region determination submodule comprises:
a selection submodule configured to take, when there are adjacent target areas, a minimum rectangular area containing the adjacent target areas as a candidate region;
a third recognition submodule configured to recognize the candidate region at a third scale using the pre-trained fully convolutional network face recognition model to obtain distribution information of a face probability corresponding to the candidate region, wherein the third scale is smaller than or equal to the original scale of the target image and larger than the second scale;
an execution submodule configured to instruct the detection submodule to repeatedly execute the detection steps to obtain the precise region corresponding to each face in the target image.
16. The device according to claim 15, wherein the location information determination submodule comprises:
a circular area acquisition submodule configured to obtain a circular area with the center of the precise region of the face as its center, so that within the circular area the proportion occupied by the precise region of the face is greater than or equal to a predetermined ratio;
a location information acquisition submodule configured to obtain location information of a bounding rectangle of the circular area obtained by the circular area acquisition submodule as the location information corresponding to the face.
17. A terminal, characterized by comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to:
acquire a first area, the first area being a partial region in a target image to be identified, and the first area including all faces in the target image;
take the region corresponding to each face detected from the first area as a target area;
determine location information corresponding to each face in the target image according to the target areas;
wherein the acquiring a first area comprises:
obtaining distribution information of a face probability corresponding to the target image, the face probability being a probability of having a face characteristic;
acquiring a partial region from the target image as the first area based on the distribution information of the face probability corresponding to the target image;
wherein the obtaining distribution information of the face probability corresponding to the target image comprises:
obtaining a face probability of each detection region in the target image based on a fully convolutional network;
determining the distribution information of the face probability corresponding to the target image according to the face probability and location information of each detection region;
wherein the obtaining a face probability of each detection region in the target image based on a fully convolutional network comprises:
recognizing the target image at a first scale using a pre-trained fully convolutional network face recognition model to obtain the face probability of each detection region in the target image, wherein the first scale is smaller than an original scale of the target image;
wherein the taking the region corresponding to each face detected from the first area as a target area comprises:
obtaining distribution information of a face probability corresponding to the first area;
detecting the target areas from the first area based on the distribution information of the face probability corresponding to the first area;
wherein the following detection steps are repeatedly executed to detect each target area from the first area:
obtaining a set of detection regions in the first area that are not marked with a face identifier;
taking a detection region in the set that satisfies a first predetermined condition as a center detection region;
taking detection regions around the center detection region that satisfy a second predetermined condition as corresponding peripheral detection regions;
taking a connected region formed by the center detection region and the corresponding peripheral detection regions as the target area corresponding to one face, and marking the center detection region and the corresponding peripheral detection regions with the same face identifier.
CN201610491320.0A 2016-06-28 2016-06-28 The method, apparatus and terminal of recognition of face Active CN105975961B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610491320.0A CN105975961B (en) 2016-06-28 2016-06-28 The method, apparatus and terminal of recognition of face

Publications (2)

Publication Number Publication Date
CN105975961A CN105975961A (en) 2016-09-28
CN105975961B true CN105975961B (en) 2019-06-28

Family

ID=57020916

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610491320.0A Active CN105975961B (en) 2016-06-28 2016-06-28 The method, apparatus and terminal of recognition of face

Country Status (1)

Country Link
CN (1) CN105975961B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106446862A (en) * 2016-10-11 2017-02-22 厦门美图之家科技有限公司 Face detection method and system
CN107145833A (en) * 2017-04-11 2017-09-08 腾讯科技(上海)有限公司 The determination method and apparatus of human face region
CN107844785B (en) * 2017-12-08 2019-09-24 浙江捷尚视觉科技股份有限公司 A kind of method for detecting human face based on size estimation
CN109990893B (en) * 2017-12-29 2021-03-09 浙江宇视科技有限公司 Photometric parameter calculation method and device and terminal equipment
CN108537208A (en) * 2018-04-24 2018-09-14 厦门美图之家科技有限公司 A kind of multiple dimensioned method for detecting human face and computing device
CN109558839A (en) * 2018-11-29 2019-04-02 徐州立讯信息科技有限公司 Adaptive face identification method and the equipment and system for realizing this method

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102592147A (en) * 2011-12-30 2012-07-18 深圳市万兴软件有限公司 Method and device for detecting human face
CN104408435A (en) * 2014-12-05 2015-03-11 浙江大学 Face identification method based on random pooling convolutional neural network
CN104794462A (en) * 2015-05-11 2015-07-22 北京锤子数码科技有限公司 Figure image processing method and device
CN105046231A (en) * 2015-07-27 2015-11-11 小米科技有限责任公司 Face detection method and device

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Face Detection with the Faster R-CNN; Huaizu Jiang et al.; arXiv preprint arXiv:1606.03473; 2016-06-10; pp. 1-6
R-FCN: Object Detection via Region-based Fully Convolutional Networks; Jifeng Dai et al.; Advances in Neural Information Processing Systems (NIPS); 2016-06-21; abstract, Section 2 paragraphs 1-2, 5 and 11, Section 4.1 paragraph 11, Section 4.2 paragraph 2


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant