CN108446639A - Low-power consumption augmented reality equipment - Google Patents

Low-power consumption augmented reality equipment

Info

Publication number
CN108446639A
CN108446639A
Authority
CN
China
Prior art keywords
face
image
recognition
data
boundary
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810236568.1A
Other languages
Chinese (zh)
Inventor
张悠
陈熹
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sichuan Ico Huizhi Technology Co Ltd
Original Assignee
Sichuan Ico Huizhi Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sichuan Ico Huizhi Technology Co Ltd
Priority to CN201810236568.1A
Publication of CN108446639A
Legal status: Pending


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • G06V40/166Detection; Localisation; Normalisation using acquisition arrangements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/20Image enhancement or restoration by the use of local operators
    • G06T5/30Erosion or dilatation, e.g. thinning
    • G06T5/70Denoising; Smoothing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/13Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/40Analysis of texture
    • G06T7/41Analysis of texture based on statistical description of texture
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172Classification, e.g. identification
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18Eye characteristics, e.g. of the iris
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20024Filtering details
    • G06T2207/20032Median filtering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30196Human being; Person
    • G06T2207/30201Face

Abstract

To reduce the complexity of face recognition, the present invention provides a low-power-consumption augmented reality device. The invention accurately performs face recognition data processing and facial texture feature extraction; at the same time, the algorithm is relatively simple to implement, so that low-power distributed AR data transmission based on face recognition can be achieved.

Description

Low-power consumption augmented reality equipment
Technical field
The invention belongs to the technical field of face recognition, and in particular relates to a low-power-consumption augmented reality device.
Background technology
Face recognition refers specifically to computer techniques that analyze and compare facial features. It is a popular research field in computer technology, covering face tracking and detection, automatic image zoom adjustment, infrared night detection, and automatic exposure adjustment. It belongs to biometric identification technology, which distinguishes individual organisms (generally referring to people) by the biological characteristics of the organisms themselves.
Face recognition technology, based on human facial features, processes an input facial image or video stream. It first judges whether a face is present; if so, it further provides the position and size of each face and the location information of the major facial organs. Based on this information, the identity features contained in each face are extracted and compared with known faces, so as to identify the identity of each face.
There are many existing low-power-consumption augmented reality devices, but each has its own shortcomings, which are analyzed one by one below:
(1) Face recognition methods based on geometric features. Geometric features generally refer to the shapes of the eyes, nose, mouth, etc. and the geometric relationships between them, such as their mutual distances. Recognition with this type of algorithm is fast, but the recognition rate is relatively low.
(2) Face recognition methods based on eigenfaces (PCA): The eigenface method is a face recognition method based on the KL transform, an optimal orthogonal transform for image compression. After the KL transform, the high-dimensional image space yields a new set of orthogonal bases; by retaining the important bases, a low-dimensional linear subspace is spanned. If the projections of faces onto this low-dimensional subspace are assumed to be separable, these projection coefficients can be used as feature vectors for recognition; this is the basic idea of the eigenface method. Such methods require many training samples and are based entirely on the statistical properties of image gray levels.
(3) Face recognition methods based on neural networks: The input to the neural network can be a reduced-resolution facial image, the autocorrelation function of a local region, the second-order moments of local texture, etc. Such methods also need many samples for training, while in many applications the sample size is very limited.
(4) Face recognition methods based on elastic graph matching: Elastic graph matching defines, in a two-dimensional space, a distance that has a certain invariance to common facial deformations and represents faces by attributed topological graphs. Every vertex of the graph contains a feature vector recording information about the face near that vertex position. This method combines gray-level characteristics with geometric factors, allows elastic deformation of the image during comparison, achieves good results in overcoming the influence of expression changes on recognition, and no longer requires multiple training samples for a single person; however, the algorithm is relatively complex.
(5) Face recognition methods based on support vector machines (SVM): The support vector machine is a new hot spot in the field of statistical pattern recognition. It seeks a compromise between empirical risk and generalization ability, so as to improve the performance of the learning machine. The SVM mainly solves two-class classification problems; its basic idea is to convert a linearly inseparable problem in a low-dimensional space into a linearly separable problem in a higher-dimensional space. Experimental results show that SVMs achieve good recognition rates, but they require a large number of training samples (about 300 per class), which is often impractical in real applications. Moreover, SVM training is slow, the method is complex to implement, and there is no unified theory for choosing the kernel function.
Invention content
In view of the above analysis, the main purpose of the present invention is to provide a low-power-consumption augmented reality device, comprising:
a face recognition data obtaining unit, configured to perform face recognition on distributed user images located around a predetermined user and, when the image information in the recognition result meets a preset condition, obtain dynamic and/or static image information of the predetermined user's surroundings as distributed AR data;
a preset time acquiring unit, configured to obtain a first preset time and a second preset time;
a low-power transmission unit, configured to collect and transmit the distributed AR data in a first predetermined manner related to the first preset time when the per-unit-time data volume transmitted for the distributed AR data is less than a preset per-unit-time transmitted data volume, and otherwise to collect and transmit the distributed AR data in a second predetermined manner related to the second preset time.
Further, the first preset time is longer than the second preset time, and the first predetermined manner is a sampling mode whose sampling frequency is higher than that of the second predetermined manner.
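The mode-selection rule above can be sketched as a small decision function. This is an illustrative sketch only: the function name, the concrete time values, and the byte-rate units are assumptions, not the patent's actual implementation.

```python
def choose_sampling_mode(bytes_transmitted, elapsed_seconds,
                         preset_rate_bytes_per_s,
                         first_preset_time_s=2.0,   # longer preset time, higher sampling frequency
                         second_preset_time_s=1.0): # shorter preset time, lower sampling frequency
    """Return (collection_period_s, mode_name) for distributed AR data.

    When the measured per-unit-time data volume is below the preset
    per-unit-time volume, the first predetermined manner (tied to the
    first, longer preset time) is used; otherwise the second manner is used.
    """
    measured_rate = bytes_transmitted / elapsed_seconds
    if measured_rate < preset_rate_bytes_per_s:
        return first_preset_time_s, "first_predetermined_manner"
    return second_preset_time_s, "second_predetermined_manner"
```

A device would call this periodically with the volume actually transmitted, switching to the lower-frequency mode whenever the link is already saturated.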
Further, the low-power AR transmission unit includes:
a face data acquisition subunit, configured to read facial image data of a person: first, multiple cameras at multiple angles capture facial images with background;
a face recognition subunit, configured to detect faces in the complex background images captured above and extract the facial image of a person by confirming the facial attributes of the detected object;
wherein extracting the facial image of a person includes calculating and identifying its boundary, comprising the following calculation process:
where k_mn denotes the gray value of image pixel (m, n), K = max(k_mn), and the shooting angle θ_mn ∈ [0, 1];
a radian gray-level transformation is applied to the image using the Tr formula, where N is a natural number greater than 2,
and θ_c is the boundary recognition threshold, determined experimentally through face boundary recognition; the following is then calculated:
transformation coefficient k′_mn = (K − 1)θ_mn;
the image boundary is then extracted, the extracted image boundary matrix being
Edges = [k″_mn]
where
k″_mn = |k′_mn − min{k′_ij}|, (i, j) ∈ W
and W is the 3 × 3 window centered on pixel (m, n);
the boundary judgment result is then verified: if it suffices for identification, the process ends; otherwise the boundary recognition threshold is adjusted and the process is repeated until a good boundary recognition result is obtained, the boundary recognition threshold taking values in the range [0.3, 0.8];
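The boundary-extraction step can be sketched in pure Python as follows. The Tr radian transform producing θ is not reproduced in the source text, so θ is taken here as a given input; everything else follows the formulas above under that assumption.

```python
def extract_boundary(theta, K):
    """Boundary matrix per the text: k' = (K - 1) * theta per pixel, then
    each boundary value is |k'(m,n) - min of k' over the 3x3 window
    centered on (m,n)|.

    theta: 2-D list of values in [0, 1]; K: maximum gray value of the image.
    """
    rows, cols = len(theta), len(theta[0])
    kp = [[(K - 1) * theta[m][n] for n in range(cols)] for m in range(rows)]
    edges = [[0.0] * cols for _ in range(rows)]
    for m in range(rows):
        for n in range(cols):
            # minimum of k' over the 3x3 window (clipped at image borders)
            window = [kp[i][j]
                      for i in range(max(0, m - 1), min(rows, m + 2))
                      for j in range(max(0, n - 1), min(cols, n + 2))]
            edges[m][n] = abs(kp[m][n] - min(window))
    return edges
```

On a step image the boundary response is large exactly where the bright region meets the dark region, which is the behavior the verification loop above then checks against the boundary recognition threshold.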
a first judgment subunit, configured to make a first-pass judgment on the recognition image; the judgment factors include facial pose, illuminance, occlusion, and face distance. First, facial pose is judged: symmetry and completeness checks are performed on the recognition image. The symmetry of the image acquired in the above step is analyzed; if the symmetry meets a predetermined threshold requirement, the horizontal pose of the face is considered correct; if it exceeds the predetermined threshold, the horizontal pose is considered incorrect, i.e., the face is turned or tilted too far to the side. The specific judgment algorithm binarizes the obtained image with a threshold of 80: pixels greater than 80 are set to 0 and the rest to 1; the binarized image is split into left and right halves, the horizontal projection of each half is computed to obtain two histograms, and the chi-square distance between the histograms is calculated; the larger the chi-square distance, the worse the symmetry. Facial completeness is then judged: the facial elements within the recognized face contour are inspected to check whether the eyes, eyebrows, mouth, and chin appear completely; if an element is missing or incomplete, the pitch angle at recognition time is considered too large. Occlusion is then judged, and processing continues when the face is unoccluded. Finally, the face distance is judged for suitability; when the distance is suitable for recognition and all conditions are satisfied, the following steps are performed.
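The binarization, projection, and chi-square steps of the pose-symmetry check can be sketched as below. The chi-square distance formula used here is a common symmetric variant; the patent does not spell out the exact formula, so that detail is an assumption.

```python
def binarize(img, threshold=80):
    # As described: pixels greater than 80 take 0, the rest take 1.
    return [[0 if p > threshold else 1 for p in row] for row in img]

def horizontal_projection(half):
    # Horizontal projection: sum of each row of the half-image.
    return [sum(row) for row in half]

def symmetry_chi_square(img):
    """Split the binarized image into left/right halves and return the
    chi-square distance between their horizontal-projection histograms.
    Larger values indicate worse symmetry (worse horizontal pose)."""
    b = binarize(img)
    mid = len(b[0]) // 2
    left = horizontal_projection([row[:mid] for row in b])
    right = horizontal_projection([row[mid:] for row in b])
    return sum((l - r) ** 2 / (l + r)
               for l, r in zip(left, right) if l + r > 0)
```

A perfectly symmetric face image yields a distance of 0; a face turned far to one side concentrates dark pixels in one half and drives the distance up.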
a second judgment subunit, configured to search for the positions of key facial feature points in specific regions of the facial image. Using the gray-level histogram of the eye candidate regions in the recognition image, threshold segmentation sets the pixels with the lowest gray values to 255 and all other pixels to 0. The second judgment subunit further includes a pupil center locating subunit for detecting the reflection point in the two eye regions: eye-block detection uses position and brightness information, the brighter connected blocks are deleted from the binarized images of the left and right eye regions, and the connected block at the lowest position is selected as the eye block. The pupil locating subunit further includes a low-power processing subunit, configured to perform a chroma-space conversion that retains only the luminance component, obtaining a luminance image of the eye region; histogram equalization and contrast enhancement are applied to the luminance image, followed by threshold transformation; erosion and dilation are applied to the thresholded image; Gaussian and median smoothing filters are then applied to the processed binary eye region; the smoothed image is thresholded again, then edge detection and ellipse fitting are performed and circles in the contour are detected; the circle with the largest detected radius gives the center of the pupil;
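The eye-block selection rule ("the connected block at the lowest position is the eye block") can be sketched with a simple flood fill. The patent does not specify the connectivity, so 4-connectivity is an assumption here.

```python
from collections import deque

def lowest_connected_block(binary):
    """Return the set of (row, col) pixels of the connected block of 1s
    whose bottom-most pixel has the largest row index (rows grow downward,
    so this is the block at the lowest position in the image)."""
    rows, cols = len(binary), len(binary[0])
    seen = [[False] * cols for _ in range(rows)]
    best, best_bottom = None, -1
    for r in range(rows):
        for c in range(cols):
            if binary[r][c] == 1 and not seen[r][c]:
                # BFS flood fill over the 4-connected block
                block, q = [], deque([(r, c)])
                seen[r][c] = True
                while q:
                    y, x = q.popleft()
                    block.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < rows and 0 <= nx < cols \
                                and binary[ny][nx] == 1 and not seen[ny][nx]:
                            seen[ny][nx] = True
                            q.append((ny, nx))
                bottom = max(y for y, _ in block)
                if bottom > best_bottom:
                    best, best_bottom = block, bottom
    return set(best) if best else set()
```

In the full pipeline this runs after the brighter connected blocks have already been deleted, so the surviving lowest block is taken as the eye block.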
a texture feature information obtaining subunit, which processes the face recognition data after the above positioning: a high-pass filter normalizes the image to a Gaussian distribution with zero mean and unit variance; the image is then divided into sub-blocks and reduced in dimension; next, the binary relationship between the gray value of each image pixel and its neighboring pixels is computed, the corresponding binary values are multiplied by weights and summed to form the local binary pattern (LBP) code; finally, multi-region histograms are used as the texture features of the image. The local texture feature is calculated as follows:
H_{i,j} = Σ_{x,y} I{h(x, y) = i} · I{(x, y) ∈ R_j},  i = 0, 1, …, n−1;  j = 0, 1, …, D−1
where H_{i,j} denotes the count of pixels in region R_j of the image that fall into the i-th histogram bin, n is the number of statistical pattern features of the local binary pattern, and D is the number of regions of the facial image. The above information for the key and non-key facial regions is counted and then concatenated, synthesizing the texture feature information of the whole facial image;
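The multi-region LBP histogram H_{i,j} above can be sketched in pure Python. The bit ordering of the neighborhood and the region grid layout are illustrative assumptions; the formula itself (one 256-bin histogram per region, concatenated) matches the text.

```python
def lbp_code(img, y, x):
    # Threshold the 8 neighbors of (y, x) against the center pixel and
    # sum the resulting bits weighted by powers of two: the LBP code h(x, y).
    center = img[y][x]
    neighbors = [img[y - 1][x - 1], img[y - 1][x], img[y - 1][x + 1],
                 img[y][x + 1], img[y + 1][x + 1], img[y + 1][x],
                 img[y + 1][x - 1], img[y][x - 1]]
    return sum((1 << k) for k, v in enumerate(neighbors) if v >= center)

def region_histograms(img, n_regions_per_side=2):
    """Split the image interior into a grid of regions R_j and return one
    256-bin LBP histogram per region (H_{i,j}, one list per j)."""
    rows, cols = len(img), len(img[0])
    hists = [[0] * 256 for _ in range(n_regions_per_side ** 2)]
    rh = (rows - 2) / n_regions_per_side   # region height over the interior
    cw = (cols - 2) / n_regions_per_side   # region width over the interior
    for y in range(1, rows - 1):
        for x in range(1, cols - 1):
            j = int((y - 1) // rh) * n_regions_per_side + int((x - 1) // cw)
            hists[j][lbp_code(img, y, x)] += 1
    return hists
```

Concatenating the per-region histograms gives the texture feature vector that the comparison subunit then matches against the face archive database.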
a comparison and transmission subunit, configured to compare the texture feature information of the whole facial image obtained above with the facial texture feature information in a face archive database and to transmit the result, thereby realizing augmented reality data acquisition and transmission based on low-power face recognition.
The technical solution of the present invention has the following advantages:
It accurately realizes face recognition data processing and facial texture feature extraction; at the same time, the algorithm is relatively simple to implement and can perform face recognition on crowds based on recognition conditions, thereby reducing the power consumption required for distributed AR data transmission while ensuring effective AR data resolution.
Description of the drawings
Fig. 1 shows a block diagram of the composition of the augmented reality device according to the present invention.
Specific embodiments
As shown in Fig. 1, the low-power-consumption augmented reality device of the present invention includes:
a face recognition data obtaining unit, configured to perform face recognition on distributed user images located around a predetermined user and, when the image information in the recognition result meets a preset condition, obtain dynamic and/or static image information of the predetermined user's surroundings as distributed AR data;
a preset time acquiring unit, configured to obtain a first preset time and a second preset time;
a low-power transmission unit, configured to collect and transmit the distributed AR data in a first predetermined manner related to the first preset time when the per-unit-time data volume transmitted for the distributed AR data is less than a preset per-unit-time transmitted data volume, and otherwise to collect and transmit the distributed AR data in a second predetermined manner related to the second preset time.
Preferably, the first preset time is longer than the second preset time, and the first predetermined manner is a sampling mode whose sampling frequency is higher than that of the second predetermined manner.
Preferably, the low-power AR transmission unit includes:
a face data acquisition subunit, configured to read facial image data of a person: first, multiple cameras at multiple angles capture facial images with background;
a face recognition subunit, configured to detect faces in the complex background images captured above and extract the facial image of a person by confirming the facial attributes of the detected object;
wherein extracting the facial image of a person includes calculating and identifying its boundary, comprising the following calculation process:
where k_mn denotes the gray value of image pixel (m, n), K = max(k_mn), and the shooting angle θ_mn ∈ [0, 1];
a radian gray-level transformation is applied to the image using the Tr formula, where N is a natural number greater than 2,
and θ_c is the boundary recognition threshold, determined experimentally through face boundary recognition; the following is then calculated:
transformation coefficient k′_mn = (K − 1)θ_mn;
the image boundary is then extracted, the extracted image boundary matrix being
Edges = [k″_mn]
where
k″_mn = |k′_mn − min{k′_ij}|, (i, j) ∈ W
and W is the 3 × 3 window centered on pixel (m, n);
the boundary judgment result is then verified: if it suffices for identification, the process ends; otherwise the boundary recognition threshold is adjusted and the process is repeated until a good boundary recognition result is obtained, the boundary recognition threshold taking values in the range [0.3, 0.8];
a first judgment subunit, configured to make a first-pass judgment on the recognition image; the judgment factors include facial pose, illuminance, occlusion, and face distance. First, facial pose is judged: symmetry and completeness checks are performed on the recognition image. The symmetry of the image acquired in the above step is analyzed; if the symmetry meets a predetermined threshold requirement, the horizontal pose of the face is considered correct; if it exceeds the predetermined threshold, the horizontal pose is considered incorrect, i.e., the face is turned or tilted too far to the side. The specific judgment algorithm binarizes the obtained image with a threshold of 80: pixels greater than 80 are set to 0 and the rest to 1; the binarized image is split into left and right halves, the horizontal projection of each half is computed to obtain two histograms, and the chi-square distance between the histograms is calculated; the larger the chi-square distance, the worse the symmetry. Facial completeness is then judged: the facial elements within the recognized face contour are inspected to check whether the eyes, eyebrows, mouth, and chin appear completely; if an element is missing or incomplete, the pitch angle at recognition time is considered too large. Occlusion is then judged, and processing continues when the face is unoccluded. Finally, the face distance is judged for suitability; when the distance is suitable for recognition and all conditions are satisfied, the following steps are performed.
a second judgment subunit, configured to search for the positions of key facial feature points in specific regions of the facial image. Using the gray-level histogram of the eye candidate regions in the recognition image, threshold segmentation sets the pixels with the lowest gray values to 255 and all other pixels to 0. The second judgment subunit further includes a pupil center locating subunit for detecting the reflection point in the two eye regions: eye-block detection uses position and brightness information, the brighter connected blocks are deleted from the binarized images of the left and right eye regions, and the connected block at the lowest position is selected as the eye block. The pupil locating subunit further includes a low-power processing subunit, configured to perform a chroma-space conversion that retains only the luminance component, obtaining a luminance image of the eye region; histogram equalization and contrast enhancement are applied to the luminance image, followed by threshold transformation; erosion and dilation are applied to the thresholded image; Gaussian and median smoothing filters are then applied to the processed binary eye region; the smoothed image is thresholded again, then edge detection and ellipse fitting are performed and circles in the contour are detected; the circle with the largest detected radius gives the center of the pupil;
a texture feature information obtaining subunit, which processes the face recognition data after the above positioning: a high-pass filter normalizes the image to a Gaussian distribution with zero mean and unit variance; the image is then divided into sub-blocks and reduced in dimension; next, the binary relationship between the gray value of each image pixel and its neighboring pixels is computed, the corresponding binary values are multiplied by weights and summed to form the local binary pattern (LBP) code; finally, multi-region histograms are used as the texture features of the image. The local texture feature is calculated as follows:
H_{i,j} = Σ_{x,y} I{h(x, y) = i} · I{(x, y) ∈ R_j},  i = 0, 1, …, n−1;  j = 0, 1, …, D−1
where H_{i,j} denotes the count of pixels in region R_j of the image that fall into the i-th histogram bin, n is the number of statistical pattern features of the local binary pattern, and D is the number of regions of the facial image. The above information for the key and non-key facial regions is counted and then concatenated, synthesizing the texture feature information of the whole facial image;
a comparison and transmission subunit, configured to compare the texture feature information of the whole facial image obtained above with the facial texture feature information in a face archive database and to transmit the result, thereby realizing augmented reality data acquisition and transmission based on low-power face recognition.
The foregoing are merely preferred embodiments of the present invention and are not intended to limit the invention; any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the present invention shall be included in the protection scope of the present invention.

Claims (3)

1. A low-power-consumption augmented reality device, characterized by comprising:
a face recognition data obtaining unit, configured to perform face recognition on distributed user images located around a predetermined user and, when the image information in the recognition result meets a preset condition, obtain dynamic and/or static image information of the predetermined user's surroundings as distributed AR data;
a preset time acquiring unit, configured to obtain a first preset time and a second preset time;
a low-power transmission unit, configured to collect and transmit the distributed AR data in a first predetermined manner related to the first preset time when the per-unit-time data volume transmitted for the distributed AR data is less than a preset per-unit-time transmitted data volume, and otherwise to collect and transmit the distributed AR data in a second predetermined manner related to the second preset time.
2. The device according to claim 1, characterized in that the first preset time is longer than the second preset time, and the first predetermined manner is a sampling mode whose sampling frequency is higher than that of the second predetermined manner.
3. The device according to claim 2, characterized in that the low-power AR transmission unit comprises:
a face data acquisition subunit, configured to read facial image data of a person: first, multiple cameras at multiple angles capture facial images with background;
a face recognition subunit, configured to detect faces in the complex background images captured above and extract the facial image of a person by confirming the facial attributes of the detected object;
wherein extracting the facial image of a person includes calculating and identifying its boundary, comprising the following calculation process:
where k_mn denotes the gray value of image pixel (m, n), K = max(k_mn), and the shooting angle θ_mn ∈ [0, 1];
a radian gray-level transformation is applied to the image using the Tr formula, where r = 2, …, N and N is a natural number greater than 2,
and θ_c is the boundary recognition threshold, determined experimentally through face boundary recognition; the following is then calculated:
transformation coefficient k′_mn = (K − 1)θ_mn;
the image boundary is then extracted, the extracted image boundary matrix being
Edges = [k″_mn]
where
k″_mn = |k′_mn − min{k′_ij}|, (i, j) ∈ W
and W is the 3 × 3 window centered on pixel (m, n);
the boundary judgment result is then verified: if it suffices for identification, the process ends; otherwise the boundary recognition threshold is adjusted and the process is repeated until a good boundary recognition result is obtained, the boundary recognition threshold taking values in the range [0.3, 0.8];
a first judgment subunit, configured to make a first-pass judgment on the recognition image; the judgment factors include facial pose, illuminance, occlusion, and face distance. First, facial pose is judged: symmetry and completeness checks are performed on the recognition image. The symmetry of the image acquired in the above step is analyzed; if the symmetry meets a predetermined threshold requirement, the horizontal pose of the face is considered correct; if it exceeds the predetermined threshold, the horizontal pose is considered incorrect, i.e., the face is turned or tilted too far to the side. The specific judgment algorithm binarizes the obtained image with a threshold of 80: pixels greater than 80 are set to 0 and the rest to 1; the binarized image is split into left and right halves, the horizontal projection of each half is computed to obtain two histograms, and the chi-square distance between the histograms is calculated; the larger the chi-square distance, the worse the symmetry. Facial completeness is then judged: the facial elements within the recognized face contour are inspected to check whether the eyes, eyebrows, mouth, and chin appear completely; if an element is missing or incomplete, the pitch angle at recognition time is considered too large. Occlusion is then judged, and processing continues when the face is unoccluded. Finally, the face distance is judged for suitability; when the distance is suitable for recognition and all conditions are satisfied, the following steps are performed.
a second judgment subunit, configured to search for the positions of key facial feature points in specific regions of the facial image. Using the gray-level histogram of the eye candidate regions in the recognition image, threshold segmentation sets the pixels with the lowest gray values to 255 and all other pixels to 0. The second judgment subunit further includes a pupil center locating subunit for detecting the reflection point in the two eye regions: eye-block detection uses position and brightness information, the brighter connected blocks are deleted from the binarized images of the left and right eye regions, and the connected block at the lowest position is selected as the eye block. The pupil locating subunit further includes a low-power processing subunit, configured to perform a chroma-space conversion that retains only the luminance component, obtaining a luminance image of the eye region; histogram equalization and contrast enhancement are applied to the luminance image, followed by threshold transformation; erosion and dilation are applied to the thresholded image; Gaussian and median smoothing filters are then applied to the processed binary eye region; the smoothed image is thresholded again, then edge detection and ellipse fitting are performed and circles in the contour are detected; the circle with the largest detected radius gives the center of the pupil;
The texture feature information acquisition sub-unit processes the face recognition data after the above positioning is performed. Using a high-pass filter, the image is normalized to a Gaussian distribution with zero mean and unit variance; the image is then divided into sub-blocks and dimension-reduced. Next, for each pixel, the binary relationship between its grey value and those of its adjacent points is computed; the resulting bits are multiplied by their corresponding weights and summed to form the local binary pattern code. Finally, the histograms of multiple regions are used as the texture features of the image. The local texture feature is calculated as follows:
H_{i,j} = Σ_{x,y} I{h(x, y) = i} · I{(x, y) ∈ R_j},  i = 0, 1, …, n−1;  j = 0, 1, …, D−1
where H_{i,j} denotes the number of pixels in region R_j of the image that fall into the i-th histogram bin, n is the number of local binary pattern features, and D is the number of regions of the face image. The above information is computed for both the key and non-key regions of the face and then concatenated, synthesizing the texture feature information of the whole face image;
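The multi-region histogram of the formula above can be sketched as follows. The 8-neighbour LBP variant, the 2×2 region grid, and the function names are assumptions for illustration — the description fixes neither the neighbourhood nor the region partition.

```python
import numpy as np

def lbp_image(gray):
    """Basic 8-neighbour local binary pattern: each interior pixel is
    encoded by comparing it with its eight neighbours and weighting the
    resulting bits, as described above (border pixels are skipped)."""
    gray = np.asarray(gray)
    h, w = gray.shape
    codes = np.zeros((h - 2, w - 2), dtype=np.uint8)
    center = gray[1:-1, 1:-1]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = gray[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes |= (neighbour >= center).astype(np.uint8) << bit
    return codes

def regional_lbp_histogram(gray, grid=(2, 2), n_patterns=256):
    """H[i, j] counts pixels in region R_j whose LBP code is i; the
    per-region histograms are concatenated into one feature vector,
    matching the formula above with n = n_patterns and D regions."""
    codes = lbp_image(gray)
    gh, gw = grid
    rh, rw = codes.shape[0] // gh, codes.shape[1] // gw
    feats = []
    for j in range(gh * gw):
        r, c = divmod(j, gw)
        block = codes[r * rh:(r + 1) * rh, c * rw:(c + 1) * rw]
        hist, _ = np.histogram(block, bins=n_patterns, range=(0, n_patterns))
        feats.append(hist)
    return np.concatenate(feats)
```

With a 2×2 grid and 256 patterns, a face image yields a 1024-dimensional feature vector whose entries sum to the number of coded pixels.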
The comparison and transmission sub-unit is used to compare the texture feature information of the whole face image obtained above with the face texture feature information in the face archive database, and to transmit the result, thereby realizing augmented-reality data acquisition and transmission based on low-power-consumption face recognition.
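The comparison step can be illustrated with a chi-square distance over the concatenated texture histograms (the same distance the first judgment sub-unit uses for symmetry). The archive layout (a dict mapping identity to stored feature vector) and the function names are hypothetical; the description specifies only comparison and transmission.

```python
import numpy as np

def chi_square_distance(f1, f2, eps=1e-10):
    """Chi-square distance between two texture feature vectors;
    smaller values mean a closer match."""
    f1 = np.asarray(f1, dtype=np.float64)
    f2 = np.asarray(f2, dtype=np.float64)
    # eps guards against zero bins in both histograms.
    return float((((f1 - f2) ** 2) / (f1 + f2 + eps)).sum())

def best_match(query, archive):
    """Return the archive identity whose stored feature vector is
    closest to the query feature, or None for an empty archive.
    `archive` is assumed to be a dict of name -> feature vector."""
    if not archive:
        return None
    return min(archive, key=lambda name: chi_square_distance(query, archive[name]))
```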
CN201810236568.1A 2018-03-21 2018-03-21 Low-power consumption augmented reality equipment Pending CN108446639A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810236568.1A CN108446639A (en) 2018-03-21 2018-03-21 Low-power consumption augmented reality equipment


Publications (1)

Publication Number Publication Date
CN108446639A true CN108446639A (en) 2018-08-24

Family

ID=63196236

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810236568.1A Pending CN108446639A (en) 2018-03-21 2018-03-21 Low-power consumption augmented reality equipment

Country Status (1)

Country Link
CN (1) CN108446639A (en)



Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103914683A (en) * 2013-12-31 2014-07-09 闻泰通讯股份有限公司 Gender identification method and system based on face image
CN105530533A (en) * 2014-10-21 2016-04-27 霍尼韦尔国际公司 Low latency augmented reality display
CN104916250A (en) * 2015-06-26 2015-09-16 合肥鑫晟光电科技有限公司 Data transmission method and device and display device
CN105930835A (en) * 2016-04-13 2016-09-07 无锡东游华旅文化传媒有限公司 Enhanced image identification system and method
CN106919262A (en) * 2017-03-20 2017-07-04 广州数娱信息科技有限公司 Augmented reality equipment

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
REN YUEQING: "Research on Iris Image Segmentation Algorithms", China Master's Theses Full-text Database, Information Science and Technology *
ZHOU KAI et al.: "Face Recognition Method Based on Multi-threshold Local Binary Patterns", Computer Engineering *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108520208A (en) * 2018-03-23 2018-09-11 四川意高汇智科技有限公司 Localize face recognition method
CN112597911A (en) * 2020-12-25 2021-04-02 百果园技术(新加坡)有限公司 Buffing processing method and device, mobile terminal and storage medium
CN115035520A (en) * 2021-11-22 2022-09-09 荣耀终端有限公司 Character recognition method for image, electronic device and storage medium
CN115035520B (en) * 2021-11-22 2023-04-18 荣耀终端有限公司 Character recognition method for image, electronic device and storage medium

Similar Documents

Publication Publication Date Title
WO2020151489A1 (en) Living body detection method based on facial recognition, and electronic device and storage medium
US6661907B2 (en) Face detection in digital images
Lin Face detection in complicated backgrounds and different illumination conditions by using YCbCr color space and neural network
US6611613B1 (en) Apparatus and method for detecting speaking person's eyes and face
Lin et al. Estimation of number of people in crowded scenes using perspective transformation
Faraji et al. Face recognition under varying illuminations using logarithmic fractal dimension-based complete eight local directional patterns
CN109101865A A pedestrian re-identification method based on deep learning
CN108446642A A distributed face recognition system
CN104123543B An eye movement recognition method based on face recognition
CN109410026A Identity authentication method, apparatus, device and storage medium based on face recognition
EP1271394A2 (en) Method for automatically locating eyes in an image
CN111191573A (en) Driver fatigue detection method based on blink rule recognition
CN108334870A Remote monitoring system for AR device data server states
Celik et al. Facial feature extraction using complex dual-tree wavelet transform
CN107844736A Iris locating method and device
CN107066969A A face recognition method
CN108446690B (en) Human face in-vivo detection method based on multi-view dynamic features
CN110008793A (en) Face identification method, device and equipment
CN108446639A (en) Low-power consumption augmented reality equipment
CN109711309A A method for automatically identifying whether the eyes in a portrait image are closed
Sari et al. Indonesian traditional food image identification using random forest classifier based on color and texture features
CN108520208A (en) Localize face recognition method
CN108491798A Face recognition method based on personalized features
CN107145820B (en) Binocular positioning method based on HOG characteristics and FAST algorithm
Fathy et al. Benchmarking of pre-processing methods employed in facial image analysis

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20180824