CN107239764A - A face recognition method with dynamic noise reduction - Google Patents

A face recognition method with dynamic noise reduction

Info

Publication number
CN107239764A
Authority
CN
China
Prior art keywords
identified
people
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201710422219.4A
Other languages
Chinese (zh)
Inventor
方引
杨洋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chengdu Technology Co Ltd
Original Assignee
Chengdu Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chengdu Technology Co Ltd filed Critical Chengdu Technology Co Ltd
Priority to CN201710422219.4A
Publication of CN107239764A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00: Scenes; Scene-specific elements
    • G06V 20/60: Type of objects
    • G06V 20/64: Three-dimensional objects
    • G06V 40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/161: Detection; Localisation; Normalisation
    • G06V 40/172: Classification, e.g. identification
    • G06V 40/60: Static or dynamic means for assisting the user to position a body part for biometric acquisition
    • G06V 40/67: Static or dynamic means for assisting the user to position a body part for biometric acquisition by interactive indications to the user

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Image Analysis (AREA)

Abstract

To overcome the adverse effect of weather conditions on the recognition accuracy of three-dimensional face recognition, the invention provides a face recognition method with dynamic noise reduction, comprising: collecting a voice signal of the person to be identified and obtaining from this voice signal the identity the person claims; collecting information on the falling of rain or snow in the natural environment around the person, so as to obtain the density of the rain or snow in front of the person; collecting an actual three-dimensional facial image of the person; determining an initial position for three-dimensional recognition from the falling information of the rain or snow, the claimed identity, and the actual three-dimensional facial image; recognizing the three-dimensional facial information of the person according to the initial position; and determining the person's actual identity from the facial information. By means of dynamic analysis and repeated sampling, the invention reduces the error that rainfall and/or snowfall introduce into face recognition and improves recognition accuracy.

Description

A face recognition method with dynamic noise reduction
Technical field
The present invention relates to the field of three-dimensional face recognition, and in particular to a face recognition method with dynamic noise reduction.
Background
A face recognition system, with face recognition technology at its core, is an emerging biometric identification technology and a cutting-edge research topic in science and technology worldwide. Because a face cannot be duplicated, is convenient to capture, and does not require the cooperation of the person being photographed, face recognition systems have a wide range of applications. Today, face recognition technology is widely used in security fields such as access control.
Face recognition is still mainly based on two-dimensional images: facial features are extracted and recognized from a two-dimensional frontal silhouette or from a particular viewing angle. This approach is unreliable and is strongly affected by the pose of the person being identified and by illumination. By contrast, three-dimensional face recognition offers higher accuracy, better adaptability, and stronger resistance to attacks and spoofing, and is therefore more dependable than recognition based on two-dimensional images.
However, existing three-dimensional face recognition work focuses mainly on how to model the key facial recognition sites and how to overcome the influence of lighting. In practice, because of the particular environments in which access control is deployed, the person to be identified may be exposed to bad weather such as rain, snow, or haze, which blurs local facial features and in turn degrades the accuracy of three-dimensional face recognition.
Summary of the invention
To overcome the adverse effect of weather conditions on the recognition accuracy of three-dimensional face recognition, the present invention provides a face recognition method with dynamic noise reduction, comprising:
(1) collecting a voice signal of the person to be identified, and obtaining from this voice signal the identity the person claims;
(2) collecting information on the falling of rain or snow in the natural environment around the person to be identified, so as to obtain the density of the rain or snow in front of the person;
(3) collecting an actual three-dimensional facial image of the person to be identified;
(4) determining an initial position for three-dimensional recognition from the falling information of the rain or snow, the identity claimed by the person, and the actual three-dimensional facial image;
(5) recognizing the three-dimensional facial information of the person according to the initial position;
(6) determining the actual identity of the person from the facial information.
Further, step (1) comprises:
(11) playing a voice prompt question to the person to be identified and capturing the person's voice within a predetermined time;
(12) extracting the voiceprint of the captured voice;
(13) comparing the voiceprint against a preset set of voiceprints and determining, from the comparison result, the identity the person claims.
Further, the prompt question is provided at random.
Further, step (2) comprises:
(21) locating the spatial position of the source of the person's voice;
(22) measuring the precipitation amount of rain or snow at the person's feet;
(23) when the precipitation amount exceeds a preset precipitation threshold, deploying a mesh shield above the person's head so that the precipitation falling onto the person's head is kept below the preset threshold;
(24) collecting the density of the rain or snow in front of the person.
Further, step (24) comprises:
(241) capturing multiple frontal images of the person between the person and a front scanning camera;
(242) determining, from the identity claimed by the person, the claimed frontal facial image;
(243) within a preset analysis region, determining sharpness values c_j between the multiple captured frontal images and the claimed frontal facial image. The preset analysis region is a vertically oriented region of radius R (a preset length) centered on a preset collection point; the Z coordinate of the collection point lies a predetermined distance above the crown of the person's head, and when the mesh shield is deployed this predetermined distance is smaller than the distance from the crown to the shield; the X and Y coordinates of the collection point are those of the spatial position of the voice source. Each sharpness value c_j corresponds to the physiological feature of one facial site of the person, where j indexes the facial sites and j >= 5.
Further, step (3) comprises capturing the actual three-dimensional facial image of the person with the image centered at the spatial position of the voice source.
Further, step (4) comprises:
(411) determining, from the identity claimed by the person, the claimed frontal facial image;
(412) superimposing the sharpness values onto the claimed frontal facial image to obtain a reference identification image;
(413) comparing the reference identification image with the actual three-dimensional facial image to determine the facial region within the actual three-dimensional facial image;
(414) within that facial region, determining matching information between the actual three-dimensional facial image and the claimed frontal facial image, the matching information comprising i deformation regions, each corresponding to the physiological feature of one facial site, where i > 10;
(415) assigning a deformation coefficient α_i to each of the i deformation regions; evaluating the expression below over multiple neighborhoods of radius r (the deformation-analysis radius) centered on each deformation region; and identifying the deformation region A_min at whose center the computed value is smallest and the deformation region A_max at whose center it is largest:

$$\frac{\sum_{j=1}^{M} c_j}{\sum_{i=1}^{M} \alpha_i} \times \left( \sum_{k=1}^{M} \frac{c_k}{\alpha_k} \right)$$

where r is the distance between a deformation region and the deformation regions surrounding it, and the deformation coefficient α_i is, for each physiological-feature region of the claimed frontal facial image, the number of times pixels of that feature's contour appear at the corresponding position of the reference identification image;
(416) with period T, capturing p times the actual three-dimensional facial image centered on the location of A_min, extracting from each capture a two-dimensional facial image, and determining sharpness values (denoted here c'_j) between the extracted two-dimensional facial image and the claimed frontal facial image, each c'_j corresponding to the physiological feature of one facial site, where j indexes the facial sites and j >= 5;
(417) building a sharpness discrimination matrix from the c'_j obtained at each capture: each row of the matrix holds the sharpness values of all facial sites from one capture, and each column holds the sharpness values of one facial site across the captures; that is, the values from the first capture form the first row, those from the second capture form the second row, and so on;
(418) computing the variance D_q of each row of the sharpness discrimination matrix over its q columns;
(419) removing the row of the sharpness discrimination matrix that contains the sharpness values with the largest variance D_q, to obtain the processed sharpness discrimination matrix;
(420) for each row of the processed sharpness discrimination matrix, performing the following reprocessing in turn: evaluating the same expression as in step (415) over multiple neighborhoods of radius r centered on each deformation region, and determining the minimum value obtained;
(421) taking the geometric center of the deformation region at whose center the minimum value of the reprocessing was obtained as the initial position; the position information of that geometric center is the initial position information.
Further, the precipitation amount is graded according to the meteorological classification of precipitation over a preceding period of time.
Further, the method also comprises determining the security level of the access control system from the identity claimed by the person to be identified and the person's actual identity.
The beneficial effects of the invention are:
(1) recognition accuracy in rain or snow is improved;
(2) the initial position for three-dimensional recognition is determined dynamically, based on repeated analysis of the rain or snow, rather than fixed in advance; compared with the prior art, which simply defaults to common sites such as the nose tip or the point between the eyebrows, this is more broadly applicable and more reliable, and it avoids recognition failures caused by the person's own physiological defects.
Brief description of the drawings
Fig. 1 shows the flowchart of the face recognition method with dynamic noise reduction according to the present invention.
Detailed description of the embodiments
As shown in Fig. 1, the present invention provides a face recognition method with dynamic noise reduction, comprising the following steps (an illustrative end-to-end sketch follows this list):
(1) collecting a voice signal of the person to be identified, and obtaining from this voice signal the identity the person claims;
(2) collecting information on the falling of rain or snow in the natural environment around the person to be identified, so as to obtain the density of the rain or snow in front of the person;
(3) collecting an actual three-dimensional facial image of the person to be identified;
(4) determining an initial position for three-dimensional recognition from the falling information of the rain or snow, the identity claimed by the person, and the actual three-dimensional facial image;
(5) recognizing the three-dimensional facial information of the person according to the initial position;
(6) determining the actual identity of the person from the facial information.
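For orientation, steps (1) through (6) can be chained into a single pipeline. The following Python sketch is purely illustrative: every callable it accepts (voice capture, precipitation sensing, 3-D capture, and the identity and position routines) is a hypothetical placeholder supplied by the caller, not an interface defined in this application.

```python
from typing import Any, Callable, Dict, Tuple

def recognize_with_dnr(collect_voice: Callable[[], Any],
                       collect_precipitation: Callable[[], float],
                       collect_3d_face: Callable[[], Any],
                       identity_from_voice: Callable[[Any], str],
                       initial_position: Callable[[float, str, Any], Tuple[float, float, float]],
                       recognize_face: Callable[[Any, Tuple[float, float, float]], Any],
                       identity_from_face: Callable[[Any], str]) -> Dict[str, str]:
    """Illustrative top-level flow of steps (1)-(6); all callables are placeholders."""
    voice = collect_voice()                                       # step (1): voice signal
    claimed = identity_from_voice(voice)                          #           claimed identity
    precipitation = collect_precipitation()                       # step (2): rain/snow in front of the person
    face_3d = collect_3d_face()                                   # step (3): actual 3-D facial image
    init_pos = initial_position(precipitation, claimed, face_3d)  # step (4): initial recognition position
    face_info = recognize_face(face_3d, init_pos)                 # step (5): 3-D recognition from that position
    actual = identity_from_face(face_info)                        # step (6): actual identity
    return {"claimed": claimed, "actual": actual}
```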
Step (1) comprises the following sub-steps (a code sketch of the voiceprint comparison follows this list):
(11) playing a voice prompt question to the person to be identified and capturing the person's voice within a predetermined time;
(12) extracting the voiceprint of the captured voice;
(13) comparing the voiceprint against a preset set of voiceprints and determining, from the comparison result, the identity the person claims.
The prompt question is provided at random.
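Sub-steps (11)-(13) amount to a nearest-neighbour search over a preset voiceprint set. The sketch below assumes, for illustration only, that each voiceprint has already been reduced to a fixed-length feature vector (for example an MFCC-based embedding) and that cosine similarity with a threshold decides the claimed identity; the function names and the matching rule are not specified by this application.

```python
from typing import Dict, Optional
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two voiceprint feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def claimed_identity(voiceprint: np.ndarray,
                     enrolled: Dict[str, np.ndarray],
                     threshold: float = 0.8) -> Optional[str]:
    """Sub-step (13): compare the captured voiceprint against the preset set
    and return the identity the person claims, or None if nothing is close enough."""
    best_id, best_score = None, -1.0
    for person_id, reference in enrolled.items():
        score = cosine_similarity(voiceprint, reference)
        if score > best_score:
            best_id, best_score = person_id, score
    return best_id if best_score >= threshold else None
```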
Step (2) comprises the following sub-steps (a sketch of the shielding logic follows this list):
(21) locating the spatial position of the source of the person's voice;
(22) measuring the precipitation amount of rain or snow at the person's feet;
(23) when the precipitation amount exceeds a preset precipitation threshold, deploying a mesh shield above the person's head so that the precipitation falling onto the person's head is kept below the preset threshold;
(24) collecting the density of the rain or snow in front of the person.
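Sub-steps (22)-(23) compare the precipitation measured at the person's feet against a preset threshold and, if it is exceeded, deploy the mesh shield overhead. The sketch below illustrates only the decision logic; the threshold value and the attenuation assumed for a deployed shield are hypothetical.

```python
PRECIPITATION_THRESHOLD = 2.5  # hypothetical preset threshold, e.g. mm/h

def control_mesh_shield(measured: float, shield_attenuation: float = 0.2):
    """Sub-steps (22)-(23): decide whether to deploy the mesh shield and estimate
    the precipitation that then still reaches the person's head.

    `measured` is the precipitation sampled at the person's feet;
    `shield_attenuation` is the assumed fraction of rain or snow that still
    passes through a deployed shield. Both values are illustrative.
    """
    deploy = measured > PRECIPITATION_THRESHOLD
    effective = measured * shield_attenuation if deploy else measured
    return deploy, effective
```

With these illustrative numbers, a measurement of 4.0 mm/h would deploy the shield and leave an effective 0.8 mm/h above the person's head.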
Step (24) comprises the following sub-steps (a sketch of the per-site sharpness computation follows this list):
(241) capturing multiple frontal images of the person between the person and a front scanning camera;
(242) determining, from the identity claimed by the person, the claimed frontal facial image;
(243) within a preset analysis region, determining sharpness values c_j between the multiple captured frontal images and the claimed frontal facial image. The preset analysis region is a vertically oriented region of radius R (a preset length) centered on a preset collection point; the Z coordinate of the collection point lies a predetermined distance above the crown of the person's head, and when the mesh shield is deployed this predetermined distance is smaller than the distance from the crown to the shield; the X and Y coordinates of the collection point are those of the spatial position of the voice source. Each sharpness value c_j corresponds to the physiological feature of one facial site of the person, where j indexes the facial sites and j >= 5.
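Sub-step (243) requires a sharpness value c_j for at least five facial sites. The application defines c_j relative to the claimed frontal facial image but does not fix a sharpness measure; the sketch below simplifies this to a no-reference measure (variance of the gradient magnitude inside each site's bounding box) on the captured image, and the site names and bounding boxes are hypothetical.

```python
from typing import Dict
import numpy as np

# Hypothetical facial sites (j >= 5) with bounding boxes (row0, row1, col0, col1)
FACIAL_SITES = {
    "left_eye":  (40, 60, 30, 60),
    "right_eye": (40, 60, 70, 100),
    "nose":      (60, 90, 50, 80),
    "mouth":     (95, 120, 45, 85),
    "chin":      (120, 140, 50, 80),
}

def sharpness_per_site(gray_image: np.ndarray) -> Dict[str, float]:
    """Return a sharpness value c_j for each facial site: the variance of the
    gradient magnitude inside the site's bounding box, one common proxy for
    local image sharpness."""
    c = {}
    for name, (r0, r1, c0, c1) in FACIAL_SITES.items():
        patch = gray_image[r0:r1, c0:c1].astype(float)
        gy, gx = np.gradient(patch)                # gradients along rows and columns
        c[name] = float(np.var(np.hypot(gx, gy)))  # variance of gradient magnitude
    return c
```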
Step (3) comprises capturing the actual three-dimensional facial image of the person with the image centered at the spatial position of the voice source.
Step (4) comprises the following sub-steps (a sketch of the scoring and matrix processing in sub-steps (415)-(421) follows this list):
(411) determining, from the identity claimed by the person, the claimed frontal facial image;
(412) superimposing the sharpness values onto the claimed frontal facial image to obtain a reference identification image;
(413) comparing the reference identification image with the actual three-dimensional facial image to determine the facial region within the actual three-dimensional facial image;
(414) within that facial region, determining matching information between the actual three-dimensional facial image and the claimed frontal facial image, the matching information comprising i deformation regions, each corresponding to the physiological feature of one facial site, where i > 10;
(415) assigning a deformation coefficient α_i to each of the i deformation regions; evaluating the expression below over multiple neighborhoods of radius r (the deformation-analysis radius) centered on each deformation region; and identifying the deformation region A_min at whose center the computed value is smallest and the deformation region A_max at whose center it is largest:

$$\frac{\sum_{j=1}^{M} c_j}{\sum_{i=1}^{M} \alpha_i} \times \left( \sum_{k=1}^{M} \frac{c_k}{\alpha_k} \right)$$

where r is the distance between a deformation region and the deformation regions surrounding it, and the deformation coefficient α_i is, for each physiological-feature region of the claimed frontal facial image, the number of times pixels of that feature's contour appear at the corresponding position of the reference identification image;
(416) with period T, capturing p times the actual three-dimensional facial image centered on the location of A_min, extracting from each capture a two-dimensional facial image, and determining sharpness values (denoted here c'_j) between the extracted two-dimensional facial image and the claimed frontal facial image, each c'_j corresponding to the physiological feature of one facial site, where j indexes the facial sites and j >= 5;
(417) building a sharpness discrimination matrix from the c'_j obtained at each capture: each row of the matrix holds the sharpness values of all facial sites from one capture, and each column holds the sharpness values of one facial site across the captures; that is, the values from the first capture form the first row, those from the second capture form the second row, and so on;
(418) computing the variance D_q of each row of the sharpness discrimination matrix over its q columns;
(419) removing the row of the sharpness discrimination matrix that contains the sharpness values with the largest variance D_q, to obtain the processed sharpness discrimination matrix;
(420) for each row of the processed sharpness discrimination matrix, performing the following reprocessing in turn: evaluating the same expression as in sub-step (415) over multiple neighborhoods of radius r centered on each deformation region, and determining the minimum value obtained;
(421) taking the geometric center of the deformation region at whose center the minimum value of the reprocessing was obtained as the initial position; the position information of that geometric center is the initial position information.
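Sub-steps (415)-(421) boil down to: score each deformation region's neighborhood with (Σ_j c_j / Σ_i α_i) × Σ_k (c_k / α_k), take the region with the smallest score as A_min, resample around it p times, build the sharpness discrimination matrix (one row per capture, one column per facial site), drop the row with the largest variance D_q, and re-apply the score to pick the geometric center used as the initial position. The sketch below renders that arithmetic under simplifying assumptions: each neighborhood is represented directly by its c and α vectors of equal length M, and the geometric centers are supplied as coordinates; none of these data layouts are prescribed by the application.

```python
from typing import List, Sequence, Tuple
import numpy as np

def region_score(c: np.ndarray, alpha: np.ndarray) -> float:
    """Evaluate (sum_j c_j / sum_i alpha_i) * sum_k (c_k / alpha_k)
    over the M sites of one neighborhood (sub-steps (415) and (420))."""
    return float(c.sum() / alpha.sum() * (c / alpha).sum())

def pick_extreme_regions(neigh_c: Sequence[Sequence[float]],
                         neigh_alpha: Sequence[Sequence[float]]) -> Tuple[int, int]:
    """Sub-step (415): indices of A_min and A_max, the deformation regions whose
    neighborhoods give the smallest and largest score."""
    scores = [region_score(np.asarray(c, float), np.asarray(a, float))
              for c, a in zip(neigh_c, neigh_alpha)]
    return int(np.argmin(scores)), int(np.argmax(scores))

def drop_noisiest_capture(sharpness_matrix: np.ndarray) -> np.ndarray:
    """Sub-steps (417)-(419): rows are the p captures around A_min, columns are
    the facial sites; remove the row whose sharpness values have the largest
    variance D_q, i.e. the capture most disturbed by rain or snow."""
    variances = sharpness_matrix.var(axis=1)       # one variance per row (capture)
    return np.delete(sharpness_matrix, int(np.argmax(variances)), axis=0)

def initial_position(sharpness_matrix: np.ndarray,
                     neigh_alpha: Sequence[Sequence[float]],
                     centers: List[Tuple[float, float, float]]) -> Tuple[float, float, float]:
    """Sub-steps (420)-(421): after dropping the noisiest capture, re-score every
    (capture row, deformation region) pair and return the geometric center of
    the region attaining the overall minimum score."""
    cleaned = drop_noisiest_capture(np.asarray(sharpness_matrix, dtype=float))
    best_region, best_score = 0, np.inf
    for row in cleaned:                            # one row per remaining capture
        for idx, alpha in enumerate(neigh_alpha):
            a = np.asarray(alpha, dtype=float)
            m = min(row.size, a.size)              # assume comparable lengths
            score = region_score(row[:m], a[:m])
            if score < best_score:
                best_region, best_score = idx, score
    return centers[best_region]
```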
The precipitation amount is graded according to the meteorological classification of precipitation over a preceding period of time.
The method also comprises determining the security level of the access control system from the identity claimed by the person to be identified and the person's actual identity.
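The application does not specify how the comparison between the claimed identity and the recognized actual identity maps to a security level; the sketch below shows one plausible policy, with level names chosen purely for illustration.

```python
def security_level(claimed_id, actual_id):
    """Map the claimed identity and the recognized actual identity to an
    access-control security level (illustrative policy only)."""
    if actual_id is None:
        return "alert"          # no face could be recognized
    if claimed_id is None:
        return "manual_review"  # no usable voice claim was obtained
    if claimed_id == actual_id:
        return "normal"         # the claim is confirmed by 3-D recognition
    return "alarm"              # the claim contradicts the recognition result
```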
The concrete steps of three-dimensional face recognition performed after the initial position information has been determined are not described in this application: they can be carried out with various models and algorithms known in the prior art, they are not the main technical scheme of this application, and they do not affect its implementation, so they are not repeated here.
The above description of the preferred embodiments of the present invention is given for the purpose of illustration and is not intended to limit the invention to the precise form disclosed; modifications and variations are possible in light of the above teaching or may be acquired from practice of the invention. The embodiments were chosen and described to explain the principles of the invention and to enable those skilled in the art to use the invention in various embodiments and practical applications; the technical scope of the invention is intended to be defined by the claims and their equivalents.

Claims (8)

1. A face recognition method with dynamic noise reduction, comprising:
(1) collecting a voice signal of the person to be identified, and obtaining from this voice signal the identity the person claims;
(2) collecting information on the falling of rain or snow in the natural environment around the person to be identified, so as to obtain the density of the rain or snow in front of the person;
(3) collecting an actual three-dimensional facial image of the person to be identified;
(4) determining an initial position for three-dimensional recognition from the falling information of the rain or snow, the identity claimed by the person, and the actual three-dimensional facial image;
(5) recognizing the three-dimensional facial information of the person according to the initial position;
(6) determining the actual identity of the person from the facial information.
2. The method according to claim 1, wherein step (1) comprises:
(11) playing a voice prompt question to the person to be identified and capturing the person's voice within a predetermined time;
(12) extracting the voiceprint of the captured voice;
(13) comparing the voiceprint against a preset set of voiceprints and determining, from the comparison result, the identity the person claims.
3. The method according to claim 1, wherein the prompt question is provided at random, and wherein step (2) comprises:
(21) locating the spatial position of the source of the person's voice;
(22) measuring the precipitation amount of rain or snow at the person's feet;
(23) when the precipitation amount exceeds a preset precipitation threshold, deploying a mesh shield above the person's head so that the precipitation falling onto the person's head is kept below the preset threshold;
(24) collecting the density of the rain or snow in front of the person.
4. The method according to claim 3, wherein step (24) comprises:
(241) capturing multiple frontal images of the person between the person and a front scanning camera;
(242) determining, from the identity claimed by the person, the claimed frontal facial image;
(243) within a preset analysis region, determining sharpness values c_j between the multiple captured frontal images and the claimed frontal facial image, wherein the preset analysis region is a vertically oriented region of radius R (a preset length) centered on a preset collection point; the Z coordinate of the collection point lies a predetermined distance above the crown of the person's head, and when the mesh shield is deployed this predetermined distance is smaller than the distance from the crown to the shield; the X and Y coordinates of the collection point are those of the spatial position of the voice source; and each sharpness value c_j corresponds to the physiological feature of one facial site of the person, where j indexes the facial sites and j >= 5.
5. The method according to claim 4, wherein step (3) comprises capturing the actual three-dimensional facial image of the person with the image centered at the spatial position of the voice source.
6. The method according to claim 5, wherein step (4) comprises:
(411) determining, from the identity claimed by the person, the claimed frontal facial image;
(412) superimposing the sharpness values onto the claimed frontal facial image to obtain a reference identification image;
(413) comparing the reference identification image with the actual three-dimensional facial image to determine the facial region within the actual three-dimensional facial image;
(414) within that facial region, determining matching information between the actual three-dimensional facial image and the claimed frontal facial image, the matching information comprising i deformation regions, each corresponding to the physiological feature of one facial site, where i > 10;
(415) assigning a deformation coefficient α_i to each of the i deformation regions; evaluating the following expression over multiple neighborhoods of radius r (the deformation-analysis radius) centered on each deformation region; and identifying the deformation region A_min at whose center the computed value is smallest and the deformation region A_max at whose center it is largest:
$$\frac{\sum_{j=1}^{M} c_j}{\sum_{i=1}^{M} \alpha_i} \times \left( \sum_{k=1}^{M} \frac{c_k}{\alpha_k} \right)$$
where r is the distance between a deformation region and the deformation regions surrounding it, and the deformation coefficient α_i is, for each physiological-feature region of the claimed frontal facial image, the number of times pixels of that feature's contour appear at the corresponding position of the reference identification image;
(416) with period T, capturing p times the actual three-dimensional facial image centered on the location of A_min, extracting from each capture a two-dimensional facial image, and determining sharpness values (denoted here c'_j) between the extracted two-dimensional facial image and the claimed frontal facial image, each c'_j corresponding to the physiological feature of one facial site, where j indexes the facial sites and j >= 5;
(417) building a sharpness discrimination matrix from the c'_j obtained at each capture, wherein each row of the matrix holds the sharpness values of all facial sites from one capture and each column holds the sharpness values of one facial site across the captures; that is, the values from the first capture form the first row, those from the second capture form the second row, and so on;
(418) computing the variance D_q of each row of the sharpness discrimination matrix over its q columns;
(419) removing the row of the sharpness discrimination matrix that contains the sharpness values with the largest variance D_q, to obtain the processed sharpness discrimination matrix;
(420) for each row of the processed sharpness discrimination matrix, performing the following reprocessing in turn: evaluating the following expression over multiple neighborhoods of radius r centered on each deformation region, and determining the minimum value obtained:
$$\frac{\sum_{j=1}^{M} c_j}{\sum_{i=1}^{M} \alpha_i} \times \left( \sum_{k=1}^{M} \frac{c_k}{\alpha_k} \right)$$
(421) taking the geometric center of the deformation region at whose center the minimum value of the reprocessing was obtained as the initial position, the position information of that geometric center being the initial position information.
7. The method according to claim 3, wherein the precipitation amount is graded according to the meteorological classification of precipitation over a preceding period of time.
8. The method according to claim 1, further comprising determining the security level of the access control system from the identity claimed by the person to be identified and the person's actual identity.
CN201710422219.4A 2017-06-07 2017-06-07 A kind of face identification method of DNR dynamic noise reduction Pending CN107239764A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710422219.4A CN107239764A (en) 2017-06-07 2017-06-07 A kind of face identification method of DNR dynamic noise reduction

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710422219.4A CN107239764A (en) 2017-06-07 2017-06-07 A kind of face identification method of DNR dynamic noise reduction

Publications (1)

Publication Number Publication Date
CN107239764A true CN107239764A (en) 2017-10-10

Family

ID=59986097

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710422219.4A Pending CN107239764A (en) 2017-06-07 2017-06-07 A kind of face identification method of DNR dynamic noise reduction

Country Status (1)

Country Link
CN (1) CN107239764A (en)


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2005301722A (en) * 2004-04-13 2005-10-27 Matsushita Electric Ind Co Ltd Face region detector
CN101512549A (en) * 2006-08-11 2009-08-19 快图影像有限公司 Real-time face tracking in a digital image acquisition device
CN102663354A (en) * 2012-03-26 2012-09-12 腾讯科技(深圳)有限公司 Face calibration method and system thereof
CN102799877A (en) * 2012-09-11 2012-11-28 上海中原电子技术工程有限公司 Method and system for screening face images

Similar Documents

Publication Publication Date Title
CN110909690B (en) Method for detecting occluded face image based on region generation
CN102375970B (en) A kind of identity identifying method based on face and authenticate device
CN105095856A (en) Method for recognizing human face with shielding based on mask layer
CN105426870A (en) Face key point positioning method and device
CN106845330A (en) A kind of training method of the two-dimension human face identification model based on depth convolutional neural networks
CN104615996B (en) A kind of various visual angles two-dimension human face automatic positioning method for characteristic point
CN105893946A (en) Front face image detection method
CN105426882B (en) The method of human eye is quickly positioned in a kind of facial image
CN102831408A (en) Human face recognition method
CN105741326B (en) A kind of method for tracking target of the video sequence based on Cluster-Fusion
CN106845425A (en) A kind of visual tracking method and tracks of device
CN111553310A (en) Security inspection image acquisition method and system based on millimeter wave radar and security inspection equipment
CN110634131A (en) Crack image identification and modeling method
CN106778574A (en) For the detection method and device of facial image
CN106980845B (en) Face key point positioning method based on structured modeling
CN117315547A (en) Visual SLAM method for solving large duty ratio of dynamic object
Wu et al. Block-based hough transform for recognition of zebra crossing in natural scene images
CN107239764A (en) A kind of face identification method of DNR dynamic noise reduction
CN104166847A (en) 2DLDA (two-dimensional linear discriminant analysis) face recognition method based on ULBP (uniform local binary pattern) feature sub-spaces
Latha et al. A novel method for person authentication using retinal images
CN107154096A (en) Gate identification system based on 3-D scanning
CN107239765A (en) 3 D scanning system for recognition of face
CN108734709A (en) A kind of identification of insulator flange shape parameter and destructive test method
CN104766085B (en) A kind of multiple dimensioned pattern recognition method
CN113989789A (en) Face fatigue detection method based on multi-feature fusion in teaching scene

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20171010