CN106407909B - Face recognition method, device and system - Google Patents

Face recognition method, device and system

Info

Publication number
CN106407909B
CN106407909B (application CN201610794770.7A)
Authority
CN
China
Prior art keywords
face
pixel
coordinate
point coordinate
skin
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201610794770.7A
Other languages
Chinese (zh)
Other versions
CN106407909A (en)
Inventor
张勇
何茜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Beta Technology Co ltd
Original Assignee
Beijing Fotoable Technology Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Fotoable Technology Ltd
Priority to CN201610794770.7A
Publication of CN106407909A
Application granted
Publication of CN106407909B
Legal status: Active


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G06V40/171 Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/255 Detecting or recognising potential candidate objects based on visual cues, e.g. shapes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/56 Extraction of image or video features relating to colour
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G06V40/162 Detection; Localisation; Normalisation using pixel segmentation or colour matching

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The embodiments of the present application provide a face recognition method, a face recognition device and a face recognition system. Because the average RGB color value is obtained from the image to be recognized itself, the recognition result is not affected by errors that would otherwise be caused by differences in the skin tone of the faces in the image or by differences in lighting. Whether each pixel belongs to the skin region is judged by converting the image to be recognized into an intermediate image in a preset color space, converting the average RGB color value of the face-skin confidence region into an intermediate average color in that space, and calculating the weighted distance from each pixel of the intermediate image to the intermediate average color. The judgment criterion is: the larger the weighted distance, the higher the probability that the pixel does not belong to the skin region. The face-skin recognition method provided by the embodiments of the present application is highly robust.

Description

Face recognition method, device and system
Technical field
The present application relates to the technical field of image processing, and more particularly to a face recognition method, device and system.
Background art
Many image processing and computer vision applications that deal with portraits, such as automatic portrait whitening, automatic blemish removal and explicit-image detection, rely on an accurate face-skin recognition method.
Current face-skin recognition methods are based on prior statistical knowledge of skin color: the skin-color distribution probabilities of a set of skin-color samples are counted in a certain color space, and the resulting probability distribution map is used to compute the probability that a given color belongs to a skin region. Such methods are fast, but their precision is poor, and it is difficult for them to obtain accurate skin recognition results for images of different ethnic groups or under different lighting conditions.
For this reason, a face-skin recognition method with higher robustness is needed.
Summary of the invention
In view of this, the present invention provides a face recognition method, device and system, to overcome the problem that face-skin recognition methods in the prior art have low robustness.
To achieve the above object, the invention provides the following technical scheme:
A face-skin recognition method, comprising:
obtaining facial feature point coordinates in an image to be recognized, the facial feature point coordinates including face outer-contour feature point coordinates and facial-feature point coordinates (feature points of the facial features, such as the eyes, eyebrows and lips);
determining a face confidence region based on the facial feature point coordinates, the face confidence region including a face-skin confidence region and a facial-feature confidence region;
calculating the average RGB color value of all pixels located in the face-skin confidence region of the image to be recognized;
converting the image to be recognized into an intermediate image in a preset color space, and converting the average RGB color value into an intermediate average color value in the preset color space;
calculating the weighted distance between each pixel in the intermediate image and the intermediate average color value, to obtain a weighted-distance map that takes each weighted distance as a pixel value;
mapping the pixel value of each pixel in the weighted-distance map back to 0 to 255 according to a monotonically decreasing preset mapping function, to obtain a final face-skin region map;
determining the region formed by the pixels whose pixel value in the final face-skin region map is greater than or equal to a preset value as the face-skin region.
Wherein, determining the face-skin confidence region based on the facial feature point coordinates includes:
shrinking the face outer-contour feature point coordinates inward by a first preset multiple to obtain inward-shrunk face outer-contour feature point coordinates;
determining, according to the inward-shrunk face outer-contour feature point coordinates, the face confidence region enclosed by the inward-shrunk face outer-contour feature point coordinates;
expanding the facial-feature point coordinates outward by a second preset multiple to obtain outward-expanded facial-feature point coordinates;
determining, according to the outward-expanded facial-feature point coordinates, the facial-feature confidence region enclosed by the outward-expanded facial-feature point coordinates;
determining the region of the face confidence region that is not within the facial-feature confidence region as the face-skin confidence region.
Wherein, shrinking the face outer-contour feature point coordinates inward by the first preset multiple to obtain the inward-shrunk face outer-contour feature point coordinates includes:
calculating the inward-shrunk face outer-contour feature point coordinates according to Pnew1 = P1*(1 - rate1) + Pcenter1*rate1,
where P1 is a face outer-contour feature point coordinate, Pcenter1 is the centroid of the face outer-contour feature point coordinates, Pnew1 is the inward-shrunk face outer-contour feature point coordinate, rate1 is the first preset multiple, and the first preset multiple is greater than 0;
and expanding the facial-feature point coordinates outward by the second preset multiple to obtain the outward-expanded facial-feature point coordinates includes:
calculating the outward-expanded facial-feature point coordinates according to Pnew2 = P2*(1 - rate2) + Pcenter2*rate2,
where P2 is a facial-feature point coordinate, Pcenter2 is the centroid of the facial-feature points, Pnew2 is the outward-expanded facial-feature point coordinate, rate2 is the second preset multiple, and the second preset multiple is less than 0.
Wherein, calculating the weighted distance between each pixel in the intermediate image and the intermediate average color value, and obtaining the weighted-distance map that takes each weighted distance as a pixel value, includes:
calculating the weighted distance between each pixel and the intermediate average color value according to the following formula:
where W_L, W_a and W_b are the weights of the three channels of the preset color space, (L_{i,j}, a_{i,j}, b_{i,j}) is the color value of pixel (i, j) of the intermediate image in the three channels, (L_mean, a_mean, b_mean) is the intermediate average color value, and d_{i,j} is the pixel value of pixel (i, j) in the weighted-distance map.
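The formula itself appears only as an image in the original publication and is not reproduced in this text. From the definitions above, a weighted distance of roughly the following form is implied (an assumed reconstruction; the exact form, for example squared versus absolute channel differences, cannot be confirmed from the text):

$$d_{i,j} = \sqrt{W_L\,(L_{i,j}-L_{mean})^2 + W_a\,(a_{i,j}-a_{mean})^2 + W_b\,(b_{i,j}-b_{mean})^2}$$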
A face-skin recognition device, comprising:
an acquisition module, configured to obtain facial feature point coordinates in an image to be recognized, the facial feature point coordinates including face outer-contour feature point coordinates and facial-feature point coordinates;
a first determining module, configured to determine a face confidence region based on the facial feature point coordinates, the face confidence region including a face-skin confidence region and a facial-feature confidence region;
a first computing module, configured to calculate the average RGB color value of all pixels located in the face-skin confidence region of the image to be recognized;
a conversion module, configured to convert the image to be recognized into an intermediate image in a preset color space, and to convert the average RGB color value into an intermediate average color value in the preset color space;
a second computing module, configured to calculate the weighted distance between each pixel in the intermediate image and the intermediate average color value, to obtain a weighted-distance map that takes each weighted distance as a pixel value;
a mapping module, configured to map the pixel value of each pixel in the weighted-distance map back to 0 to 255 according to a monotonically decreasing preset mapping function, to obtain a final face-skin region map.
Wherein, the first determining module includes:
a first acquisition unit, configured to shrink the face outer-contour feature point coordinates inward by a first preset multiple to obtain inward-shrunk face outer-contour feature point coordinates;
a first determination unit, configured to determine, according to the inward-shrunk face outer-contour feature point coordinates, the face confidence region enclosed by the inward-shrunk face outer-contour feature point coordinates;
a second acquisition unit, configured to expand the facial-feature point coordinates outward by a second preset multiple to obtain outward-expanded facial-feature point coordinates;
a second determination unit, configured to determine, according to the outward-expanded facial-feature point coordinates, the facial-feature confidence region enclosed by the outward-expanded facial-feature point coordinates;
a third determination unit, configured to determine the region of the face confidence region that is not within the facial-feature confidence region as the face-skin confidence region.
Wherein, the first acquisition unit includes:
a first computing subunit, configured to calculate the inward-shrunk face outer-contour feature point coordinates according to Pnew1 = P1*(1 - rate1) + Pcenter1*rate1,
where P1 is a face outer-contour feature point coordinate, Pcenter1 is the centroid of the face outer-contour feature point coordinates, Pnew1 is the inward-shrunk face outer-contour feature point coordinate, rate1 is the first preset multiple, and the first preset multiple is greater than 0;
and the second acquisition unit includes:
a second computing subunit, configured to calculate the outward-expanded facial-feature point coordinates according to Pnew2 = P2*(1 - rate2) + Pcenter2*rate2,
where P2 is a facial-feature point coordinate, Pcenter2 is the centroid of the facial-feature points, Pnew2 is the outward-expanded facial-feature point coordinate, rate2 is the second preset multiple, and the second preset multiple is less than 0.
Wherein, the second computing module includes:
a third computing subunit, configured to calculate the weighted distance between each pixel and the intermediate average color value according to the following formula:
where W_L, W_a and W_b are the weights of the three channels of the preset color space, (L_{i,j}, a_{i,j}, b_{i,j}) is the color value of pixel (i, j) of the intermediate image in the three channels, (L_mean, a_mean, b_mean) is the intermediate average color value, and d_{i,j} is the pixel value of pixel (i, j) in the weighted-distance map.
A face recognition system, comprising:
a processor; and
a memory for storing instructions executable by the processor;
wherein the processor is configured to:
obtain facial feature point coordinates in an image to be recognized, the facial feature point coordinates including face outer-contour feature point coordinates and facial-feature point coordinates;
determine a face confidence region based on the facial feature point coordinates, the face confidence region including a face-skin confidence region and a facial-feature confidence region;
calculate the average RGB color value of all pixels located in the face-skin confidence region of the image to be recognized;
convert the image to be recognized into an intermediate image in a preset color space, and convert the average RGB color value into an intermediate average color value in the preset color space;
calculate the weighted distance between each pixel in the intermediate image and the intermediate average color value, to obtain a weighted-distance map that takes each weighted distance as a pixel value;
map the pixel value of each pixel in the weighted-distance map back to 0 to 255 according to a monotonically decreasing preset mapping function, to obtain a final face-skin region map;
determine the region formed by the pixels whose pixel value in the final face-skin region map is greater than or equal to a preset value as the face-skin region.
It can be seen from the above technical solution that, compared with the prior art, in the face recognition method provided by the embodiments of the present application, facial feature point coordinates, such as facial-feature point coordinates, are first obtained in the image to be recognized, so that the current image to be recognized is determined to contain a face. The face-skin confidence region, i.e. the approximate range of the face skin, is then determined based on the facial feature point coordinates. To avoid error, the average RGB color value of all pixels located in the face-skin confidence region of the image to be recognized is calculated and used as a reference skin pixel value. Because the average RGB color value is obtained from the image to be recognized itself, differences in the skin tone of the faces in the image, or differences in skin appearance caused by lighting, do not introduce errors into the recognition result.
In order to determine more accurately which pixels in the image to be recognized belong to the skin region, the image to be recognized is converted into an intermediate image in the preset color space, and the average RGB color value is converted into the intermediate average color value in the preset color space. The weighted distance between each pixel in the intermediate image and the intermediate average color value can then be calculated to obtain the weighted-distance map. Since a larger pixel value in the weighted-distance map indicates a larger weighted distance, i.e. a higher probability that the pixel does not belong to the skin region, the pixel value of each pixel in the weighted-distance map can be mapped back to 0 to 255 according to the monotonically decreasing preset mapping function, to obtain the final face-skin region map.
The face-skin recognition method provided by the embodiments of the present application is highly robust.
Brief description of the drawings
In order to explain the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present invention; for those of ordinary skill in the art, other drawings can be obtained from the provided drawings without creative effort.
Fig. 1 is a flow chart of a face-skin recognition method provided by an embodiment of the present application;
Fig. 2 is a schematic diagram of obtaining facial feature point coordinates by the ASM method in an embodiment of the present application;
Fig. 3 is a schematic diagram of a face-skin confidence region provided by an embodiment of the present application;
Fig. 4 is a final face-skin region map provided by an embodiment of the present application;
Fig. 5 is a schematic flow chart of an implementation of determining the face-skin confidence region based on the facial feature point coordinates in a face-skin recognition method provided by an embodiment of the present application;
Fig. 6 is a schematic structural diagram of a face recognition device provided by an embodiment of the present application;
Fig. 7 is a schematic structural diagram of an implementation of the first determining module in a face recognition device provided by an embodiment of the present application;
Fig. 8 is a schematic structural diagram of a face recognition system provided by an embodiment of the present application.
Detailed description of the embodiments
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings in the embodiments of the present invention. Obviously, the described embodiments are only some, rather than all, of the embodiments of the present invention. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative effort shall fall within the protection scope of the present invention.
Referring to Fig. 1, which is a flow chart of a face-skin recognition method provided by an embodiment of the present application, the method includes:
Step S101: obtain facial feature point coordinates in an image to be recognized.
The image to be recognized may be an RGB color-space image.
The facial feature point coordinates include face outer-contour feature point coordinates and facial-feature point coordinates.
There are many implementations of methods for obtaining facial feature point coordinates, such as the ASM (Active Shape Model) method or neural-network methods.
Fig. 2 is a schematic diagram of obtaining facial feature point coordinates by the ASM method in an embodiment of the present application.
It can be seen from Fig. 2 that the facial feature point coordinates include the face outer-contour feature point coordinates (i.e. the feature points on curve 20) and the facial-feature point coordinates.
The facial-feature point coordinates refer to the outer-contour feature point coordinates 21 of the two eyes and the outer-contour feature point coordinates 22 of the lips; the facial-feature point coordinates may also include the eyebrow outer-contour feature point coordinates 23. The facial-feature point coordinates may exclude the nose outer-contour feature point coordinates 24, because the nose region also belongs to the face skin; of course, the facial-feature point coordinates may also include the nose outer-contour feature point coordinates 24.
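As an illustration only of obtaining comparable landmark coordinates (not the claimed ASM model or any specific neural network), the sketch below uses the dlib library and its pretrained 68-point predictor; the model file path and the index ranges follow the common 68-point convention and are assumptions, not part of the patent.

```python
# Illustrative sketch: obtaining face outer-contour and facial-feature point
# coordinates with dlib's 68-point landmark predictor (an assumption; the
# embodiment may instead use ASM or a neural-network method).
import cv2
import dlib

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")  # assumed model path

img = cv2.imread("to_recognize.jpg")                 # image to be recognized
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
faces = detector(gray, 1)                            # detected face rectangles

shape = predictor(gray, faces[0])
points = [(shape.part(i).x, shape.part(i).y) for i in range(shape.num_parts)]

# 68-point convention: 0-16 face outer contour, 17-26 eyebrows,
# 27-35 nose, 36-47 eyes, 48-67 lips.
contour_pts = points[0:17]                           # face outer-contour feature points
eye_pts     = points[36:48]                          # eye outer-contour feature points
lip_pts     = points[48:68]                          # lip outer-contour feature points
brow_pts    = points[17:27]                          # eyebrow outer-contour feature points
```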
Step S102: determine a face confidence region based on the facial feature point coordinates.
The face confidence region includes a face-skin confidence region and a facial-feature confidence region.
Fig. 3 is a schematic diagram of a face-skin confidence region provided by an embodiment of the present application.
It can be seen from Fig. 3 that the face-skin confidence region 31 can be approximated as the inner region enclosed by the face outer-contour feature point coordinates, minus the regions enclosed by the facial-feature point coordinates (such as the eyes, the lips and the eyebrows).
Since the forehead is easily disturbed by hair (for example, some people wear bangs), the embodiment of the present application may use the eyebrow outer-contour feature points, i.e. the region above the eyebrows (including the eyebrows) is not included in the face-skin confidence region.
The pixel values of the face-skin confidence region may all be set to 255, and the pixel values of the pixels at other positions may all be set to 0; as shown in Fig. 3, the region whose pixel value is 255 is the face-skin confidence region 31.
Step S103: calculate the average RGB color value of all pixels located in the face-skin confidence region of the image to be recognized.
The image to be recognized is an image in the RGB color space. The average RGB color value of the pixels within the face-skin confidence region 31 of the image to be recognized is calculated. It can be understood that the face-skin confidence region 31 consists entirely of skin on the face, so the average RGB color value is the average of the pixel values of all skin pixels of that face. In other words, the closer the pixel value at a position is to the average RGB color value, the more likely that position belongs to the skin of the face.
Since the average RGB color value is obtained from the image to be recognized itself, differences in the skin tone of the faces in the image, or differences in skin appearance caused by lighting, do not introduce errors into the recognition result.
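A minimal sketch of this step with NumPy, assuming the face-skin confidence region is available as a 0/255 mask of the same height and width as the image (the function and variable names are illustrative, not from the patent):

```python
import numpy as np

def average_rgb(image_rgb: np.ndarray, skin_confidence_mask: np.ndarray) -> np.ndarray:
    """Average RGB color over the pixels whose mask value is 255."""
    skin_pixels = image_rgb[skin_confidence_mask == 255]   # shape (N, 3)
    return skin_pixels.mean(axis=0)                        # (R_mean, G_mean, B_mean)
```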
Step S104: convert the image to be recognized into an intermediate image in a preset color space, and convert the average RGB color value into an intermediate average color value in the preset color space.
The intermediate image refers to the image obtained after the image to be recognized is converted into the preset color space. The intermediate average color value refers to the color value obtained after the average RGB color value is converted into the preset color space.
The preset color space is a color space that is more sensitive for skin recognition; for example, it may be the Lab color space or the YCrCb color space.
Step S105: calculate the weighted distance between each pixel in the intermediate image and the intermediate average color value, to obtain a weighted-distance map that takes each weighted distance as a pixel value.
Since the preset color space has three channels and the weight of each channel is different, the weighted distance between each pixel in the intermediate image and the intermediate average color value can be calculated according to the weights corresponding to the three channels.
The weighted distance between each pixel and the intermediate average color value is calculated according to the following formula:
where W_L, W_a and W_b are the weights of the three channels of the preset color space, whose values can be chosen according to the actual situation, for example 1, 4 and 4 respectively; (L_{i,j}, a_{i,j}, b_{i,j}) is the color value of pixel (i, j) of the intermediate image in the three channels; (L_mean, a_mean, b_mean) is the intermediate average color value; and d_{i,j} is the pixel value of pixel (i, j) in the weighted-distance map.
It can be understood that the smaller the weighted distance, the closer the color of the pixel is to the color of the face skin.
It should be noted that the above formula does not constitute a limitation of the present invention; those skilled in the art can design their own formula according to the technical idea provided by the present invention in combination with actual application requirements.
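A minimal sketch of Steps S104 and S105 with OpenCV and NumPy, assuming Lab as the preset color space and channel weights of 1, 4 and 4; because the patent's formula image is not reproduced here, the weighted Euclidean distance below is one plausible choice, not the claimed formula:

```python
import cv2
import numpy as np

def weighted_distance_map(image_bgr: np.ndarray, mean_rgb, weights=(1.0, 4.0, 4.0)) -> np.ndarray:
    """Weighted distance of every pixel to the average skin color in Lab space."""
    # Intermediate image: the image to be recognized converted to the preset (Lab) color space.
    lab = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2LAB).astype(np.float32)

    # Intermediate average color value: the average RGB color converted to Lab.
    mean_bgr = np.uint8([[list(mean_rgb)[::-1]]])            # 1x1 BGR "image"
    mean_lab = cv2.cvtColor(mean_bgr, cv2.COLOR_BGR2LAB).astype(np.float32)[0, 0]

    w = np.asarray(weights, dtype=np.float32)
    diff = lab - mean_lab                                    # per-channel differences
    return np.sqrt((w * diff ** 2).sum(axis=2))              # weighted-distance map d_ij
```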
Step S106: map the pixel value of each pixel in the weighted-distance map back to 0 to 255 according to a monotonically decreasing preset mapping function, to obtain the final face-skin region map.
The preset mapping function should have the following characteristics:
the smaller the weighted distance, the closer the pixel value of the pixel is to the skin color, and the closer the mapped value of the weighted distance is to 255, meaning that the probability that the pixel belongs to the final face-skin region is higher; conversely, the larger the weighted distance, the lower the probability that the pixel belongs to the final face-skin region, and the closer the mapped value is to 0. In practical applications, the preset mapping function may be a linear function or a spline function.
In summary, the preset mapping function may be a monotonically decreasing function in which the maximum weighted distance maps to 0 and the minimum weighted distance maps to 255.
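A minimal sketch of such a linear, monotonically decreasing mapping, together with the thresholding step described in the method summary (the threshold value is illustrative):

```python
import numpy as np

def map_to_skin_region(dist_map: np.ndarray, preset_value: int = 128):
    """Map weighted distances to 0-255 (max -> 0, min -> 255) and threshold."""
    d_min, d_max = float(dist_map.min()), float(dist_map.max())
    skin_map = 255.0 * (d_max - dist_map) / max(d_max - d_min, 1e-6)
    skin_map = skin_map.astype(np.uint8)            # final face-skin region map
    skin_region = skin_map >= preset_value          # pixels >= the preset value
    return skin_map, skin_region
```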
Fig. 4 shows a final face-skin region map provided by an embodiment of the present application.
It can be seen from Fig. 4 that the pixel values of the pixels in the final face-skin region 41 clearly differ from those of the background region, the eye regions and the lip region, so the final face-skin region can be recognized accurately.
In the face recognition method provided by the embodiments of the present application, facial feature point coordinates are first obtained in the image to be recognized, so that the current image to be recognized is determined to contain a face. The face-skin confidence region, i.e. the approximate range of the face skin, is then determined based on the facial feature point coordinates. To avoid error, the average RGB color value of all pixels in the face-skin confidence region of the image to be recognized is calculated and used as a reference skin pixel value. Because the average RGB color value is obtained from the image to be recognized itself, differences in the skin tone of the faces in the image, or differences in skin appearance caused by lighting, do not introduce errors into the recognition result.
In order to determine more accurately which pixels in the image to be recognized belong to the skin region, the image to be recognized is converted into the intermediate image in the preset color space, and the average RGB color value is converted into the intermediate average color value in the preset color space. The weighted distance between each pixel in the intermediate image and the intermediate average color value can then be calculated to obtain the weighted-distance map. Since a larger pixel value in the weighted-distance map indicates a larger weighted distance, i.e. a higher probability that the pixel does not belong to the skin region, the pixel value of each pixel in the weighted-distance map can be mapped back to 0 to 255 according to the monotonically decreasing preset mapping function, to obtain the final face-skin region map.
The face-skin recognition method provided by the embodiments of the present application is highly robust.
It can be understood that the face-skin confidence region could simply be taken as the part enclosed by the face outer-contour feature point coordinates minus the parts enclosed by the facial-feature points, i.e. the part enclosed by curve 20 in Fig. 2 minus the parts enclosed by the facial-feature point coordinates. However, the face outer-contour feature point coordinates may be inaccurate, so a face-skin confidence region determined in this way may include non-face regions; for example, if the determined face outer-contour feature point coordinates are located on curve 24, the region between curve 20 and curve 24 is a non-face region. The facial-feature point coordinates may also be determined inaccurately; the part enclosed by the eye outer-contour feature points in Fig. 2 may lie somewhat toward the inside, so that part of the eye area is not included in the range enclosed by the eye outer-contour feature point coordinates. To avoid adding non-face-skin regions and thereby making the average RGB color value inaccurate, the embodiment of the present application determines the face-skin confidence region in the following way.
Referring to Fig. 5, which is a schematic flow chart of an implementation of determining the face-skin confidence region based on the facial feature point coordinates in a face-skin recognition method provided by an embodiment of the present application, the method includes:
Step S501: shrink the face outer-contour feature point coordinates inward by a first preset multiple to obtain inward-shrunk face outer-contour feature point coordinates.
The inward-shrunk face outer-contour feature point coordinates are calculated according to Pnew1 = P1*(1 - rate1) + Pcenter1*rate1, where P1 is a face outer-contour feature point coordinate, Pcenter1 is the centroid of the face outer-contour feature point coordinates, Pnew1 is the inward-shrunk face outer-contour feature point coordinate, rate1 is the first preset multiple, and the first preset multiple is greater than 0.
The centroid Pcenter1 of the face outer-contour feature point coordinates refers to the center of mass of the face.
It should be noted that the above formula does not constitute a limitation of the present invention; those skilled in the art can design their own formula according to the technical idea provided by the present invention in combination with actual application requirements.
Step S502: determine, according to the inward-shrunk face outer-contour feature point coordinates, the face confidence region enclosed by the inward-shrunk face outer-contour feature point coordinates.
Step S503: expand the facial-feature point coordinates outward by a second preset multiple to obtain outward-expanded facial-feature point coordinates.
The outward-expanded facial-feature point coordinates are calculated according to Pnew2 = P2*(1 - rate2) + Pcenter2*rate2, where P2 is a facial-feature point coordinate, Pcenter2 is the centroid of the facial-feature points, Pnew2 is the outward-expanded facial-feature point coordinate, rate2 is the second preset multiple, and the second preset multiple is less than 0.
The facial-feature point coordinates include the outer-contour feature point coordinates of the two eyes and the outer-contour feature point coordinates of the lips; they may also include the nose outer-contour feature point coordinates and the eyebrow outer-contour feature point coordinates.
The corresponding outward-expanded facial-feature point coordinates include the outward-expanded eye outer-contour feature point coordinates corresponding to the eye outer-contour feature point coordinates and the outward-expanded lip outer-contour feature point coordinates corresponding to the lip outer-contour feature point coordinates; they may also include the outward-expanded nose outer-contour feature point coordinates corresponding to the nose outer-contour feature point coordinates and the outward-expanded eyebrow outer-contour feature point coordinates corresponding to the eyebrow outer-contour feature point coordinates.
When different outward-expanded feature point coordinates are calculated, the centroid of the facial-feature points is different: when the outward-expanded eye outer-contour feature point coordinates are calculated, the centroid is the centroid of the eye; when the outward-expanded lip outer-contour feature point coordinates are calculated, the centroid is the centroid of the lips; when the outward-expanded nose outer-contour feature point coordinates are calculated, the centroid is the centroid of the nose; and when the outward-expanded eyebrow outer-contour feature point coordinates are calculated, the centroid is the centroid of the eyebrow.
It should be noted that the above formula does not constitute a limitation of the present invention; those skilled in the art can design their own formula according to the technical idea provided by the present invention in combination with actual application requirements.
Step S504: determine, according to the outward-expanded facial-feature point coordinates, the facial-feature confidence region enclosed by the outward-expanded facial-feature point coordinates.
Step S505: determine the region of the face confidence region that is not within the facial-feature confidence region as the face-skin confidence region.
The region whose pixel value is 255 in Fig. 3 is the face-skin confidence region.
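A minimal sketch of the inward-shrink/outward-expand operation and of building the face-skin confidence mask from the resulting polygons with OpenCV (the rate values and function names are illustrative):

```python
import cv2
import numpy as np

def scale_towards_centroid(points, rate: float) -> np.ndarray:
    """Pnew = P*(1 - rate) + Pcenter*rate; rate > 0 shrinks the points towards
    their centroid, rate < 0 expands them away from it."""
    pts = np.asarray(points, dtype=np.float32)
    center = pts.mean(axis=0)
    return pts * (1.0 - rate) + center * rate

def skin_confidence_mask(image_shape, contour_pts, feature_groups,
                         rate1: float = 0.05, rate2: float = -0.15) -> np.ndarray:
    """Face confidence region minus the facial-feature confidence regions."""
    mask = np.zeros(image_shape[:2], dtype=np.uint8)
    shrunk = scale_towards_centroid(contour_pts, rate1).astype(np.int32)
    cv2.fillPoly(mask, [shrunk], 255)               # face confidence region (value 255)
    for group in feature_groups:                    # eyes, lips, eyebrows, ...
        expanded = scale_towards_centroid(group, rate2).astype(np.int32)
        cv2.fillPoly(mask, [expanded], 0)           # carve out facial-feature regions
    return mask
```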
In order to let those skilled in the art better understand the recognition speed of the face recognition method provided by the embodiments of the present application, an image of size 1280*800 in which the face occupies most of the area is taken as an example.
Measured on a MacBook Pro (Retina, 15-inch, Mid 2015) with OS X 10.11 and XCode 7.3, the average computation time per image is about 9.34 ms.
Referring to Fig. 6, which is a schematic structural diagram of a face recognition device provided by an embodiment of the present application, the face recognition device includes: an acquisition module 61, a first determining module 62, a first computing module 63, a conversion module 64, a second computing module 65 and a mapping module 66, wherein:
the acquisition module 61 is configured to obtain facial feature point coordinates in an image to be recognized, the facial feature point coordinates including face outer-contour feature point coordinates and facial-feature point coordinates.
The image to be recognized may be an RGB color-space image.
There are many implementations of methods for obtaining facial feature point coordinates, such as the ASM (Active Shape Model) method or neural-network methods.
For a detailed description, please refer to the description of Fig. 2, which is not repeated here.
The first determining module 62 is configured to determine a face confidence region based on the facial feature point coordinates, the face confidence region including a face-skin confidence region and a facial-feature confidence region.
For a detailed description, please refer to the description of Fig. 3, which is not repeated here.
The first computing module 63 is configured to calculate the average RGB color value of all pixels located in the face-skin confidence region of the image to be recognized.
The image to be recognized is an image in the RGB color space. The average RGB color value of the pixels within the face-skin confidence region 31 of the image to be recognized is calculated. It can be understood that the face-skin confidence region 31 consists entirely of skin on the face, so the average RGB color value is the average of the pixel values of all skin pixels of that face. In other words, the closer the pixel value at a position is to the average RGB color value, the more likely that position belongs to the skin of the face.
Since the average RGB color value is obtained from the image to be recognized itself, differences in the skin tone of the faces in the image, or differences in skin appearance caused by lighting, do not introduce errors into the recognition result.
The conversion module 64 is configured to convert the image to be recognized into an intermediate image in a preset color space, and to convert the average RGB color value into an intermediate average color value in the preset color space.
The preset color space is a color space that is more sensitive for skin recognition; for example, it may be the Lab color space or the YCrCb color space.
The intermediate image refers to the image obtained after the image to be recognized is converted into the preset color space. The intermediate average color value refers to the color value obtained after the average RGB color value is converted into the preset color space.
The second computing module 65 is configured to calculate the weighted distance between each pixel in the intermediate image and the intermediate average color value, to obtain a weighted-distance map that takes each weighted distance as a pixel value.
Since the preset color space has three channels and the weight of each channel is different, the weighted distance between each pixel in the intermediate image and the intermediate average color value can be calculated according to the weights corresponding to the three channels.
The second computing module includes a third computing subunit, configured to calculate the weighted distance between each pixel and the intermediate average color value according to the following formula:
where W_L, W_a and W_b are the weights of the three channels of the preset color space, whose values can be chosen according to the actual situation, for example 1, 4 and 4 respectively; (L_{i,j}, a_{i,j}, b_{i,j}) is the color value of pixel (i, j) of the intermediate image in the three channels; (L_mean, a_mean, b_mean) is the intermediate average color value; and d_{i,j} is the pixel value of pixel (i, j) in the weighted-distance map.
It can be understood that the smaller the weighted distance, the closer the color of the pixel is to the color of the face skin.
It should be noted that the above formula does not constitute a limitation of the present invention; those skilled in the art can design their own formula according to the technical idea provided by the present invention in combination with actual application requirements.
The mapping module 66 is configured to map the pixel value of each pixel in the weighted-distance map back to 0 to 255 according to a monotonically decreasing preset mapping function, to obtain the final face-skin region map.
The preset mapping function should have the following characteristics:
the smaller the weighted distance, the closer the pixel value of the pixel is to the skin color, and the closer the mapped value of the weighted distance is to 255, meaning that the probability that the pixel belongs to the final face-skin region is higher; conversely, the larger the weighted distance, the lower the probability that the pixel belongs to the final face-skin region, and the closer the mapped value is to 0. In practical applications, the preset mapping function may be a linear function or a spline function.
In summary, the preset mapping function may be a monotonically decreasing function in which the maximum weighted distance maps to 0 and the minimum weighted distance maps to 255.
For a detailed description, refer to Fig. 4, which is not repeated here.
In the face recognition device provided by the embodiments of the present application, the acquisition module 61 first obtains the facial feature point coordinates in the image to be recognized, so that the current image to be recognized is determined to contain a face. The first determining module 62 then determines the face-skin confidence region, i.e. the approximate range of the face skin, based on the facial feature point coordinates. To avoid error, the first computing module 63 calculates the average RGB color value of all pixels in the face-skin confidence region of the image to be recognized and uses it as a reference skin pixel value. Because the average RGB color value is obtained from the image to be recognized itself, differences in the skin tone of the faces in the image, or differences in skin appearance caused by lighting, do not introduce errors into the recognition result.
In order to determine more accurately which pixels in the image to be recognized belong to the skin region, the conversion module 64 converts the image to be recognized into the intermediate image in the preset color space and converts the average RGB color value into the intermediate average color value in the preset color space, so that the second computing module 65 can calculate the weighted distance between each pixel in the intermediate image and the intermediate average color value to obtain the weighted-distance map. Since a larger pixel value in the weighted-distance map indicates a larger weighted distance, i.e. a higher probability that the pixel does not belong to the skin region, the mapping module 66 maps the pixel value of each pixel in the weighted-distance map back to 0 to 255 according to the monotonically decreasing preset mapping function, to obtain the final face-skin region map.
The face-skin recognition device provided by the embodiments of the present application is highly robust.
It can be understood that the face-skin confidence region could simply be taken as the part enclosed by the face outer-contour feature point coordinates minus the parts enclosed by the facial-feature points, i.e. the part enclosed by curve 20 in Fig. 2 minus the parts enclosed by the facial-feature point coordinates. However, the face outer-contour feature point coordinates may be inaccurate, so a face-skin confidence region determined in this way may include non-face regions; for example, if the determined face outer-contour feature point coordinates are located on curve 24, the region between curve 20 and curve 24 is a non-face region. The facial-feature point coordinates may also be determined inaccurately; the part enclosed by the eye outer-contour feature points in Fig. 2 may lie somewhat toward the inside, so that part of the eye area is not included in the range enclosed by the eye outer-contour feature point coordinates. To avoid adding non-face-skin regions and thereby making the average RGB color value inaccurate, the embodiment of the present application determines the face-skin confidence region in the following way.
Referring to Fig. 7, which is a schematic structural diagram of an implementation of the first determining module in a face recognition device provided by an embodiment of the present application, the first determining module may include: a first acquisition unit 71, a first determination unit 72, a second acquisition unit 73, a second determination unit 74 and a third determination unit 75, wherein:
the first acquisition unit 71 is configured to shrink the face outer-contour feature point coordinates inward by a first preset multiple to obtain inward-shrunk face outer-contour feature point coordinates.
The first acquisition unit includes:
a first computing subunit, configured to calculate the inward-shrunk face outer-contour feature point coordinates according to Pnew1 = P1*(1 - rate1) + Pcenter1*rate1,
where P1 is a face outer-contour feature point coordinate, Pcenter1 is the centroid of the face outer-contour feature point coordinates, Pnew1 is the inward-shrunk face outer-contour feature point coordinate, rate1 is the first preset multiple, and the first preset multiple is greater than 0.
The centroid Pcenter1 of the face outer-contour feature point coordinates refers to the center of mass of the face.
It should be noted that the above formula does not constitute a limitation of the present invention; those skilled in the art can design their own formula according to the technical idea provided by the present invention in combination with actual application requirements.
The first determination unit 72 is configured to determine, according to the inward-shrunk face outer-contour feature point coordinates, the face confidence region enclosed by the inward-shrunk face outer-contour feature point coordinates.
The second acquisition unit 73 is configured to expand the facial-feature point coordinates outward by a second preset multiple to obtain outward-expanded facial-feature point coordinates.
The second acquisition unit includes:
a second computing subunit, configured to calculate the outward-expanded facial-feature point coordinates according to Pnew2 = P2*(1 - rate2) + Pcenter2*rate2,
where P2 is a facial-feature point coordinate, Pcenter2 is the centroid of the facial-feature points, Pnew2 is the outward-expanded facial-feature point coordinate, rate2 is the second preset multiple, and the second preset multiple is less than 0.
The facial-feature point coordinates include the outer-contour feature point coordinates of the two eyes and the outer-contour feature point coordinates of the lips; they may also include the nose outer-contour feature point coordinates and the eyebrow outer-contour feature point coordinates.
The corresponding outward-expanded facial-feature point coordinates include the outward-expanded eye outer-contour feature point coordinates corresponding to the eye outer-contour feature point coordinates and the outward-expanded lip outer-contour feature point coordinates corresponding to the lip outer-contour feature point coordinates; they may also include the outward-expanded nose outer-contour feature point coordinates corresponding to the nose outer-contour feature point coordinates and the outward-expanded eyebrow outer-contour feature point coordinates corresponding to the eyebrow outer-contour feature point coordinates.
When different outward-expanded feature point coordinates are calculated, the centroid of the facial-feature points is different: when the outward-expanded eye outer-contour feature point coordinates are calculated, the centroid is the centroid of the eye; when the outward-expanded lip outer-contour feature point coordinates are calculated, the centroid is the centroid of the lips; when the outward-expanded nose outer-contour feature point coordinates are calculated, the centroid is the centroid of the nose; and when the outward-expanded eyebrow outer-contour feature point coordinates are calculated, the centroid is the centroid of the eyebrow.
It should be noted that the above formula does not constitute a limitation of the present invention; those skilled in the art can design their own formula according to the technical idea provided by the present invention in combination with actual application requirements.
The second determination unit 74 is configured to determine, according to the outward-expanded facial-feature point coordinates, the facial-feature confidence region enclosed by the outward-expanded facial-feature point coordinates.
The third determination unit 75 is configured to determine the region of the face confidence region that is not within the facial-feature confidence region as the face-skin confidence region.
The region whose pixel value is 255 in Fig. 3 is the face-skin confidence region.
Referring to Fig. 8, which is a schematic structural diagram of a face recognition system provided by an embodiment of the present application, the face recognition system includes a processor 81 and a memory 82, the processor 81 and the memory 82 being connected by a communication bus 83.
The memory 82 is configured to store instructions executable by the processor.
Wherein, the processor is configured to:
obtain facial feature point coordinates in an image to be recognized, the facial feature point coordinates including face outer-contour feature point coordinates and facial-feature point coordinates;
determine a face confidence region based on the facial feature point coordinates, the face confidence region including a face-skin confidence region and a facial-feature confidence region;
calculate the average RGB color value of all pixels located in the face-skin confidence region of the image to be recognized;
convert the image to be recognized into an intermediate image in a preset color space, and convert the average RGB color value into an intermediate average color value in the preset color space;
calculate the weighted distance between each pixel in the intermediate image and the intermediate average color value, to obtain a weighted-distance map that takes each weighted distance as a pixel value;
map the pixel value of each pixel in the weighted-distance map back to 0 to 255 according to a monotonically decreasing preset mapping function, to obtain a final face-skin region map;
determine the region formed by the pixels whose pixel value in the final face-skin region map is greater than or equal to a preset value as the face-skin region.
It should be noted that the embodiments in this specification are described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and the same or similar parts between the embodiments can be referred to one another.
The foregoing description of the disclosed embodiments enables those skilled in the art to implement or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the general principles defined herein can be realized in other embodiments without departing from the spirit or scope of the present invention. Therefore, the present invention is not intended to be limited to the embodiments shown herein, but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (9)

1. A face-skin recognition method, characterized by comprising:
obtaining facial feature point coordinates in an image to be recognized, the facial feature point coordinates including face outer-contour feature point coordinates and facial-feature point coordinates;
determining a face confidence region based on the facial feature point coordinates, the face confidence region including a face-skin confidence region and a facial-feature confidence region;
calculating the average RGB color value of all pixels located in the face-skin confidence region of the image to be recognized;
converting the image to be recognized into an intermediate image in a preset color space, and converting the average RGB color value into an intermediate average color value in the preset color space;
calculating the weighted distance between each pixel in the intermediate image and the intermediate average color value, to obtain a weighted-distance map that takes each weighted distance as a pixel value;
mapping the pixel value of each pixel in the weighted-distance map back to 0 to 255 according to a monotonically decreasing preset mapping function, to obtain a final face-skin region map; and
determining the region formed by the pixels whose pixel value in the final face-skin region map is greater than or equal to a preset value as the face-skin region.
2. The face-skin recognition method according to claim 1, characterized in that determining the face-skin confidence region based on the facial feature point coordinates comprises:
shrinking the face outer-contour feature point coordinates inward by a first preset multiple to obtain inward-shrunk face outer-contour feature point coordinates;
determining, according to the inward-shrunk face outer-contour feature point coordinates, the face confidence region enclosed by the inward-shrunk face outer-contour feature point coordinates;
expanding the facial-feature point coordinates outward by a second preset multiple to obtain outward-expanded facial-feature point coordinates;
determining, according to the outward-expanded facial-feature point coordinates, the facial-feature confidence region enclosed by the outward-expanded facial-feature point coordinates; and
determining the region of the face confidence region that is not within the facial-feature confidence region as the face-skin confidence region.
3. The face-skin recognition method according to claim 2, characterized in that shrinking the face outer-contour feature point coordinates inward by the first preset multiple to obtain the inward-shrunk face outer-contour feature point coordinates comprises:
calculating the inward-shrunk face outer-contour feature point coordinates according to Pnew1 = P1*(1 - rate1) + Pcenter1*rate1,
where P1 is a face outer-contour feature point coordinate, Pcenter1 is the centroid of the face outer-contour feature point coordinates, Pnew1 is the inward-shrunk face outer-contour feature point coordinate, rate1 is the first preset multiple, and the first preset multiple is greater than 0; and
expanding the facial-feature point coordinates outward by the second preset multiple to obtain the outward-expanded facial-feature point coordinates comprises:
calculating the outward-expanded facial-feature point coordinates according to Pnew2 = P2*(1 - rate2) + Pcenter2*rate2,
where P2 is a facial-feature point coordinate, Pcenter2 is the centroid of the facial-feature points, Pnew2 is the outward-expanded facial-feature point coordinate, rate2 is the second preset multiple, and the second preset multiple is less than 0.
4. The face-skin recognition method according to claim 1, characterized in that calculating the weighted distance between each pixel in the intermediate image and the intermediate average color value, and obtaining the weighted-distance map that takes each weighted distance as a pixel value, comprises:
calculating the weighted distance between each pixel and the intermediate average color value according to the following formula:
where W_L, W_a and W_b are the weights of the three channels of the preset color space, (L_{i,j}, a_{i,j}, b_{i,j}) is the color value of pixel (i, j) of the intermediate image in the three channels, (L_mean, a_mean, b_mean) is the intermediate average color value, and d_{i,j} is the pixel value of pixel (i, j) in the weighted-distance map.
5. a kind of face skin identification device characterized by comprising
Module is obtained, for obtaining human face characteristic point coordinate in images to be recognized, the human face characteristic point coordinate includes outside face Contour feature point coordinate and five features point coordinate;
First determining module determines face confidence region, the face confidence region for being based on the human face characteristic point coordinate Including face skin confidence region and face confidence region;
First computing module, for calculating the face skin confidence region being located in the images to be recognized, all pixels point Average RGB color value;
Conversion module, for the images to be recognized to be converted to the intermediate image in pre-set color space, and will be described average RGB color value is converted to the intermediate means color value in the pre-set color space;
Second computing module, for calculate the weighting of each pixel and the intermediate means color value in the intermediate image away from From obtaining using each Weighted distance as the Weighted distance figure of pixel value;
Mapping block, for presetting mapping function according to monotone decreasing, by the pixel of each pixel in the Weighted distance figure Value maps back 0 to 255, obtains face skin final area.
6. The face skin identification device according to claim 5, characterized in that the first determining module comprises:
a first obtaining unit, configured to contract the face outer-contour feature point coordinates inward by a first preset multiple to obtain inward-contracted face outer-contour feature point coordinates;
a first determining unit, configured to determine, from the inward-contracted face outer-contour feature point coordinates, the face confidence region enclosed by the inward-contracted face outer-contour feature point coordinates;
a second obtaining unit, configured to expand the facial-feature point coordinates outward by a second preset multiple to obtain outward-expanded facial-feature point coordinates;
a second determining unit, configured to determine, from the outward-expanded facial-feature point coordinates, the facial-feature confidence region enclosed by the outward-expanded facial-feature point coordinates;
a third determining unit, configured to determine the region of the face confidence region that does not belong to the facial-feature confidence region as the face skin confidence region.
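As an illustration of how the three determining steps could fit together, the sketch below rasterizes the two point sets into masks and subtracts the facial-feature region from the face region. The use of OpenCV polygon filling and the grouping of facial-feature points into polygons are assumptions made for the example.

```python
import numpy as np
import cv2

def skin_confidence_mask(image_shape, inner_contour, expanded_feature_polys):
    """Face skin confidence region = face confidence region (inside the
    inward-contracted outer contour) minus the facial-feature confidence
    region (inside the outward-expanded facial-feature polygons)."""
    h, w = image_shape[:2]
    face_mask = np.zeros((h, w), dtype=np.uint8)
    cv2.fillPoly(face_mask, [np.asarray(inner_contour, dtype=np.int32)], 255)
    feature_mask = np.zeros((h, w), dtype=np.uint8)
    polys = [np.asarray(p, dtype=np.int32) for p in expanded_feature_polys]
    cv2.fillPoly(feature_mask, polys, 255)
    return cv2.bitwise_and(face_mask, cv2.bitwise_not(feature_mask))
```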
7. The face skin identification device according to claim 6, characterized in that the first obtaining unit comprises:
a first computing subunit, configured to calculate the inward-contracted face outer-contour feature point coordinates according to Pnew1 = P1 × (1 - rate1) + Pcenter1 × rate1;
wherein P1 is a face outer-contour feature point coordinate, Pcenter1 is the centroid of the face outer-contour feature point coordinates, Pnew1 is the corresponding inward-contracted face outer-contour feature point coordinate, rate1 is the first preset multiple, and the first preset multiple is greater than 0;
and the second obtaining unit comprises:
a second computing subunit, configured to calculate the outward-expanded facial-feature point coordinates according to Pnew2 = P2 × (1 - rate2) + Pcenter2 × rate2;
wherein P2 is a facial-feature point coordinate, Pcenter2 is the centroid of the facial-feature point coordinates, Pnew2 is the corresponding outward-expanded facial-feature point coordinate, rate2 is the second preset multiple, and the second preset multiple is less than 0.
8. The face skin identification device according to claim 5, characterized in that the second computing module comprises:
a third computing subunit, configured to calculate the weighted distance between each pixel and the intermediate mean color value according to the following formula:
wherein W_L, W_a and W_b are the weights of the three channels of the preset color space; (L_(i,j), a_(i,j), b_(i,j)) is the three-channel color value of pixel (i, j) in the intermediate image; (L_mean, a_mean, b_mean) is the intermediate mean color value; and d_(i,j) is the pixel value of pixel (i, j) in the weighted-distance map.
9. A face recognition system, characterized by comprising:
a processor;
a memory for storing instructions executable by the processor;
wherein the processor is configured to:
obtain face feature point coordinates in an image to be recognized, the face feature point coordinates comprising face outer-contour feature point coordinates and facial-feature point coordinates;
determine a face confidence region based on the face feature point coordinates, the face confidence region comprising a face skin confidence region and a facial-feature confidence region;
calculate the average RGB color value of all pixels located in the face skin confidence region of the image to be recognized;
convert the image to be recognized into an intermediate image in a preset color space, and convert the average RGB color value into an intermediate mean color value in the preset color space;
calculate the weighted distance between each pixel in the intermediate image and the intermediate mean color value, and obtain a weighted-distance map in which each weighted distance serves as a pixel value;
map the pixel value of each pixel in the weighted-distance map back into the range 0 to 255 according to a preset monotonically decreasing mapping function, so as to obtain the final face skin region;
determine the region formed by the pixels whose pixel values in the final face skin region are greater than or equal to a preset value as the face skin region.
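To show how the processor steps of claim 9 chain together, here is a minimal end-to-end sketch that reuses the helper functions sketched after claims 3 to 6 (shift_towards_centroid, skin_confidence_mask, weighted_distance_map, map_distance_to_gray). Landmark detection, the channel weights, d_max and the final threshold are all assumed inputs, not values prescribed by the patent.

```python
import numpy as np
import cv2

def detect_face_skin(image_bgr, skin_mask, weights=(1.0, 1.0, 1.0),
                     d_max=80.0, threshold=128):
    """Sketch of the claim 9 pipeline once a skin confidence mask is available."""
    # Average RGB (here BGR) color over the face skin confidence region ...
    mean_bgr = cv2.mean(image_bgr, mask=skin_mask)[:3]
    # ... converted to the intermediate mean color value in Lab space.
    mean_lab = cv2.cvtColor(np.uint8([[mean_bgr]]), cv2.COLOR_BGR2LAB)[0, 0].astype(np.float64)

    # Weighted-distance map and monotonically decreasing remapping to 0..255.
    distances = weighted_distance_map(image_bgr, mean_lab, weights)
    skin_likelihood = map_distance_to_gray(distances, d_max)

    # Final face skin region: pixels whose mapped value is >= the preset value.
    return skin_likelihood >= threshold
```

The result is a boolean mask; multiplying it by 255 and casting to uint8 would give an image-like output if one is needed for visualization.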
CN201610794770.7A 2016-08-31 2016-08-31 Face recognition method, device and system Active CN106407909B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610794770.7A CN106407909B (en) 2016-08-31 2016-08-31 Face recognition method, device and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610794770.7A CN106407909B (en) 2016-08-31 2016-08-31 Face recognition method, device and system

Publications (2)

Publication Number Publication Date
CN106407909A CN106407909A (en) 2017-02-15
CN106407909B true CN106407909B (en) 2019-04-02

Family

ID=58001673

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610794770.7A Active CN106407909B (en) 2016-08-31 2016-08-31 Face recognition method, device and system

Country Status (1)

Country Link
CN (1) CN106407909B (en)

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106991379B (en) * 2017-03-09 2020-07-10 Oppo广东移动通信有限公司 Human skin recognition method and device combined with depth information and electronic device
CN107818543B (en) * 2017-11-09 2021-03-30 北京小米移动软件有限公司 Image processing method and device
CN108961189B (en) * 2018-07-11 2020-10-30 北京字节跳动网络技术有限公司 Image processing method, image processing device, computer equipment and storage medium
CN109697095A (en) * 2018-11-26 2019-04-30 量子云未来(北京)信息科技有限公司 A kind of method, apparatus and terminal device promoting user's sleep
CN112037162B (en) * 2019-05-17 2022-08-02 荣耀终端有限公司 Facial acne detection method and equipment
CN110414333A (en) * 2019-06-20 2019-11-05 平安科技(深圳)有限公司 A kind of detection method and device of image boundary
CN110503659B (en) * 2019-07-09 2021-09-28 浙江浩腾电子科技股份有限公司 Moving object extraction method for video sequence
CN110555929B (en) * 2019-08-19 2020-08-14 北京戴纳实验科技有限公司 Laboratory entrance guard verification system and verification method
CN113551772B (en) * 2020-04-07 2023-09-15 武汉高德智感科技有限公司 Infrared temperature measurement method, infrared temperature measurement system and storage medium
CN113762010A (en) * 2020-11-18 2021-12-07 北京沃东天骏信息技术有限公司 Image processing method, device, equipment and storage medium
CN113947568B (en) * 2021-09-26 2024-03-29 北京达佳互联信息技术有限公司 Image processing method and device, electronic equipment and storage medium

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6711286B1 (en) * 2000-10-20 2004-03-23 Eastman Kodak Company Method for blond-hair-pixel removal in image skin-color detection
US8031936B2 (en) * 2006-03-20 2011-10-04 Accenture Global Services Limited Image processing system for skin detection and localization
CN101763502B (en) * 2008-12-24 2012-07-25 中国科学院自动化研究所 High-efficiency method and system for sensitive image detection
CN102496002A (en) * 2011-11-22 2012-06-13 上海大学 Facial beauty evaluation method based on images

Also Published As

Publication number Publication date
CN106407909A (en) 2017-02-15

Similar Documents

Publication Publication Date Title
CN106407909B (en) Face recognition method, device and system
US11030455B2 (en) Pose recognition method, device and system for an object of interest to human eyes
US8988317B1 (en) Depth determination for light field images
US8879801B2 (en) Image-based head position tracking method and system
CN104992402B (en) A kind of U.S. face processing method and processing device
US20150029322A1 (en) Method and computations for calculating an optical axis vector of an imaged eye
US9697415B2 (en) Recording medium, image processing method, and information terminal
CN106377264A (en) Human body height measuring method, human body height measuring device and intelligent mirror
CN104809424B (en) Method for realizing sight tracking based on iris characteristics
WO2018082388A1 (en) Skin color detection method and device, and terminal
CN111854620B (en) Monocular camera-based actual pupil distance measuring method, device and equipment
CN106469288A (en) A kind of reminding method and terminal
JP2014194617A (en) Visual line direction estimating device, visual line direction estimating method, and visual line direction estimating program
CN109146769A (en) Image processing method and device, image processing equipment and storage medium
KR101854991B1 (en) System and method for correcting color of digital image based on the human sclera and pupil
CN113358231A (en) Infrared temperature measurement method, device and equipment
KR20210084347A (en) Image processing method and apparatus, image processing apparatus and storage medium
WO2023273247A1 (en) Face image processing method and device, computer readable storage medium, terminal
KR102553141B1 (en) Method and device for providing alopecia information
EP3756164B1 (en) Methods of modeling a 3d object, and related devices and computer program products
EP3937074A1 (en) Method and apparatus for blood pressure measurement processing, and electronic device
CN112711984B (en) Fixation point positioning method and device and electronic equipment
KR101507410B1 (en) Live make-up photograpy method and apparatus of mobile terminal
CN104866808B (en) Human-eye positioning method and device
CN116704125A (en) Mapping method, device, chip and module equipment based on three-dimensional point cloud

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 100192, C, room 4, building B-6, building No. 403, Zhongguancun Dongsheng science and Technology Park, Dongsheng Road, Haidian District, 66, Beijing,

Applicant after: Beijing beta Polytron Technologies Inc

Address before: 100000, C, building 4, building B6, Dongsheng Science Park, No. 66 Xiao Dong Road, Beijing, Haidian District

Applicant before: Beijing Yuntu Weidong Technology Co.,Ltd.

GR01 Patent grant
CP01 Change in the name or title of a patent holder

Address after: 100192 rooms c402 and 403, 4 / F, building C, building B-6, Dongsheng Science Park, Zhongguancun, No. 66, xixiaokou Road, Haidian District, Beijing

Patentee after: Beijing beta Technology Co.,Ltd.

Address before: 100192 rooms c402 and 403, 4 / F, building C, building B-6, Dongsheng Science Park, Zhongguancun, No. 66, xixiaokou Road, Haidian District, Beijing

Patentee before: BEIJING FOTOABLE TECHNOLOGY LTD.