CN102682297B - Pulse coupled neural network (PCNN) face image segmenting method simulating visual cells to feel field property - Google Patents


Info

Publication number
CN102682297B
CN102682297B (application CN201210137335.9A)
Authority
CN
China
Prior art keywords
receptive field
neuron
pulse
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201210137335.9A
Other languages
Chinese (zh)
Other versions
CN102682297A (en)
Inventor
杨娜 (Yang Na)
王浩全 (Wang Haoquan)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
North University of China
Original Assignee
North University of China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by North University of China filed Critical North University of China
Priority to CN201210137335.9A priority Critical patent/CN102682297B/en
Publication of CN102682297A publication Critical patent/CN102682297A/en
Application granted granted Critical
Publication of CN102682297B publication Critical patent/CN102682297B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Analysis (AREA)

Abstract

The invention relates to a pulse coupled neural network (PCNN) method for segmenting facial images of disabled persons that simulates the receptive field characteristics of visual cells. The method uses a visual-cell receptive field model to optimize the structure of the feedback-domain connection matrix of a pulse coupled neural network, yielding a PCNN model with directionality and scale; the model parameters are then adjusted to the characteristics of the disabled person's facial image; finally, the brightness-channel information of the facial image is fed into the model to produce a face segmentation result that simulates human vision. Because the receptive field model optimizes the connection matrix, the network gains directionality and scale, segmentation accuracy is improved, and face segmentation becomes more robust under natural lighting. Compared with other methods, the method also offers good separation between different image regions, well-preserved image detail, and fast segmentation.

Description

A PCNN facial image segmentation method simulating the receptive field characteristics of visual cells
Technical field
The present invention relates to the field of image processing, and in particular to a method for segmenting facial images of disabled persons using a pulse coupled neural network (IG-PCNN) that simulates the receptive field characteristics of visual cells.
Background technology
As the most distinctive surface feature of human beings, facial image detection and recognition has become a research hotspot in artificial intelligence, with broad application prospects in national security, public security and civil administration, finance and customs, insurance, and other fields. Whether for face detection (determining that a face exists and locating its position in the image) or recognition (identifying the face once it has been detected), the facial image must first be segmented. Segmentation is the first step of image processing, and its quality is critical to feature extraction and recognition. Facial image segmentation is therefore a fundamental problem in face detection and recognition and the basis of target feature extraction, recognition, and tracking.
Existing face segmentation, detection, and recognition algorithms all target the faces of able-bodied people. The main methods include skin-color-based face detection, iris-based face detection and recognition, Haar-feature face detection, face detection combining skin color with AdaBoost, support-vector-machine kernel-function face detection, and wavelet-transform feature extraction for detection and recognition.
Because the faces of disabled persons have unique features, current facial detection and recognition algorithms are not applicable to this group. For example, the eye region of a blind person does not show the rich gray-level variation found around a sighted person's eyes, so locating the eyes by regional contrast is difficult, and the eye region may adhere to other facial regions, complicating later feature extraction. Wounds and scars on a face can serve as good recognition features; the challenge for existing segmentation methods is therefore to express the difference between scar regions and intact facial regions as fully as possible, splitting the scar region out completely without adhesion to the adjacent intact skin. Likewise, the expressions of mentally disabled persons vary greatly, and the facial features may be drawn very close together by twitching; the features must be segmented into independent regions to lay a foundation for subsequent feature extraction and to reduce the difficulty of recognition. The present invention therefore proposes an image segmentation algorithm for the faces of disabled persons.
Biological vision has long been a focus of image processing research, and image segmentation based on the pulse coupled neural network (PCNN) has outstanding properties: a time-varying threshold, nonlinear modulation, synchronous pulse firing with neuron capture, and dynamic pulse emission with auto-wave propagation. Because the neuron model was proposed on the basis of mammalian visual cortex activity, a PCNN-based facial image segmentation method depends entirely on the natural qualities of the image and needs no pre-selected spatial processing range, making it a more natural segmentation approach. By adjusting the neurons' linking strength, an image can easily be segmented at different levels, and segmentation is fast. The method is particularly advantageous for image segmentation tasks.
As shown in Figure 1, a single neuron in the traditional PCNN model consists of three parts: the feedback input domain, the coupled linking domain, and the pulse generator; together the neurons form a single-layer, two-dimensionally laterally connected neural network. A neuron's firing is influenced by the neurons in its neighborhood, and the range and degree of this influence are expressed by connection coefficient matrices. The PCNN model contains two such matrices, M_ijkl and W_ijkl, both of which represent how strongly the central neuron is affected by surrounding neurons, i.e., the strength with which neighboring neurons transmit information to the center. W_ijkl lies in the coupled linking domain of the PCNN and mainly represents the internal linking strength between the central neuron and its neighbors. M_ijkl lies in the feedback input domain; its main function is to acquire external gray-level information and express the influence of external neurons on the central neuron.
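The traditional three-part neuron described above can be sketched directly. The following is a minimal NumPy rendering of the standard full PCNN update; the decay constants, amplification factors, and 3 × 3 kernels used here are illustrative placeholders, not values from the patent:

```python
import numpy as np

def neigh_sum(Y, K):
    """Correlate pulse map Y with a 3x3 kernel K (zero padding at the border)."""
    P = np.pad(Y, 1)
    out = np.zeros_like(Y, dtype=float)
    for dk in range(3):
        for dl in range(3):
            out += K[dk, dl] * P[dk:dk + Y.shape[0], dl:dl + Y.shape[1]]
    return out

def pcnn_step(S, Y, F, L, E, M, W, beta=0.2,
              aF=0.1, aL=0.3, aE=0.2, VF=0.5, VL=0.5, VE=20.0):
    """One iteration of the classical PCNN neuron.

    S: external stimulus (normalized gray levels); Y: previous pulse map;
    F, L, E: feeding, linking, and threshold state; M, W: the two 3x3
    connection coefficient matrices. All parameter values are illustrative.
    """
    F = np.exp(-aF) * F + VF * neigh_sum(Y, M) + S   # feedback input domain (uses M)
    L = np.exp(-aL) * L + VL * neigh_sum(Y, W)       # coupled linking domain (uses W)
    U = F * (1.0 + beta * L)                         # nonlinear modulation
    Ynew = (U > E).astype(float)                     # pulse generator
    E = np.exp(-aE) * E + VE * Ynew                  # dynamic threshold: decay + reset
    return Ynew, F, L, E
```

Starting from zero state, every neuron whose stimulus exceeds the (initially zero) threshold fires on the first iteration, after which the threshold jumps by VE and decays over subsequent steps, producing the synchronous-firing behavior the text describes.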
The literature on the PCNN model generally holds that M_ijkl and W_ijkl play the same role and are numerically equal. In the simplified PCNN model in particular, the role of the feedback connection matrix M_ijkl is weakened further: the feedback input domain is reduced to the external stimulus received by the central neuron, i.e., F[n] = S_ij. By analyzing the working mechanism of the PCNN model, the present invention concludes that the feedback connection matrix M_ijkl is the key structure through which the PCNN acquires external image information: its size directly determines the extent of the neuron's coupling domain, and the connection coefficients of a large coupling domain directly affect the speed and distance of auto-wave propagation, determining whether the central neuron can be captured by more distant neurons. However, the connection matrix has no directionality and cannot strengthen the links between the central neuron and its neighborhood in a specified direction.
Based on the above analysis, the present invention proposes a method for segmenting facial images of disabled persons based on a PCNN model with receptive field characteristics.
Summary of the invention
To address the problems of traditional image segmentation methods based on the PCNN model, the present invention proposes a PCNN model with receptive field characteristics for segmenting facial images of disabled persons.
A PCNN facial image segmentation method simulating the receptive field characteristics of visual cells: each neuron in the PCNN comprises three parts, a receiving domain, a modulation domain, and a pulse generator. The receiving domain comprises a feedback receiving domain and a linking receiving domain. The feedback receiving domain accepts the image gray value S_ij and the output pulses Y_kl of the neighboring neurons in the receptive field as input and, transformed through the receptive field matrix IG, outputs F_ij as the neuron's feedback input; the linking receiving domain accepts the pulse outputs Y_kl of the neighboring neurons in the receptive field as input and, transformed through the connection matrix W, outputs L_ij as the neuron's coupled linking input. The method comprises the following steps:
a) convert the captured facial image of the disabled person from RGB space to HSV space and extract the brightness-channel information of the image;
b) treat each pixel of the brightness channel as a neuron, with the pixel's gray value S_ij as that neuron's external input value, where S_ij is the normalized pixel value;
c) take the receptive field to be the neuron array of size K × L centered on the current neuron, where K and L are odd; determine the receptive field matrix IG and connection matrix W of dimension K × L; and initialize the initial value of the dynamic threshold of the pulse generator, the decay time constant, and the number of iterations. The connection matrix W is determined by the inverse of the squared Euclidean distance between the central neuron and each neighboring neuron in the receptive field; IG is determined by the following formula:

IG(k, l) = S_{K×L}(k, l) · cos(2πx/λ + φ) · exp[ −(1/2)(x²/σ_x² + γ²y²/σ_y²) ]

where

S_{K×L} =
| S_{i−(K−1)/2, j−(L−1)/2}  …  S_{i−(K−1)/2, j}  …  S_{i−(K−1)/2, j+(L−1)/2} |
|            …                        …                       …              |
| S_{i, j−(L−1)/2}          …  S_{ij}            …  S_{i, j+(L−1)/2}         |
|            …                        …                       …              |
| S_{i+(K−1)/2, j−(L−1)/2}  …  S_{i+(K−1)/2, j}  …  S_{i+(K−1)/2, j+(L−1)/2} |

x = (k − (K+1)/2)·cosθ + (l − (L+1)/2)·sinθ
y = −(k − (K+1)/2)·sinθ + (l − (L+1)/2)·cosθ

Here IG(k, l) is the element in row k, column l of the IG matrix; S_{K×L} is the receptive field amplitude matrix, whose element S_{K×L}(k, l) in row k, column l is the normalized gray value of the corresponding neuron in the receptive field; K and L give the receptive field scale; σ_x and σ_y are the standard deviations of the Gaussian envelope in the x and y directions; λ is the wavelength of the receptive field function; φ is the phase offset; θ is the optimal direction; γ is the aspect ratio; and the subscripts i, j are the planar position coordinates of the current neuron;
d) obtain the neuron's feedback input F_ij[n] and coupled linking input L_ij[n] for the n-th iteration from IG and W;
e) determine the linking strength coefficient β and obtain the internal activity of the n-th iteration by U_ij[n] = F_ij[n](1 + βL_ij[n]); when U_ij[n] exceeds the dynamic threshold, the neuron fires and emits a pulse output, and the dynamic threshold is updated;
f) repeat steps d and e until the maximum number of iterations is reached; the neurons' pulse outputs form the image segmentation result.
By optimizing the feedback-domain connection matrix with a Gabor function, the neurons in the pulse coupled neural network acquire directionality and scale. When the central neuron is influenced by its neighborhood, it now has a scale and a preferred direction, strengthening its links with surrounding neurons along the optimal direction and ensuring that neurons of the same region fire simultaneously. At the same time, adding the amplitude matrix to the receptive field model better expresses the excitation of the central neuron by surrounding neurons. The PCNN segments an image by computing each central neuron's internal activity through the connection matrix: when the internal activity exceeds the dynamic threshold, the central neuron fires; otherwise it does not. Within a region, gray values vary relatively slowly, and the amplitude matrix strengthens the separation between different regions, thereby improving the segmentation result.
Brief description of the drawings:
Fig. 1: neuron structure of the traditional pulse coupled neural network
Fig. 2: neuron structure of the pulse coupled neural network of the present invention
Fig. 3: overall framework of the disabled-person face segmentation method of the present invention
Fig. 4: comparison of segmentation results on a disabled person's facial image between the present algorithm and other segmentation algorithms
(a) brightness-channel image, (b) OTSU segmentation, (c) traditional PCNN segmentation, (d) segmentation by the present invention
Detailed description of the embodiments:
The present invention uses a receptive field model to optimize the structure of the connection matrix and proposes a pulse coupled neural network model with directionality and scale (improved Gabor PCNN, IG-PCNN), improving the neurons' image segmentation ability; the IG-PCNN neuron structure is shown in Figure 2.
The principle is as follows:
Referring to Fig. 2, within the receptive field formed by the K × L neuron array centered on the neuron at coordinates (i, j), the working mechanism of a single neuron in the IG-PCNN model can be described as:
F_ij[n] = S_ij + Σ_{k,l} IG(k, l)·Y_kl[n−1]    (1)

IG(k, l) = S_{K×L}(k, l) · cos(2πx/λ + φ) · exp[ −(1/2)(x²/σ_x² + γ²y²/σ_y²) ]    (2)

S_{K×L} =
| S_{i−(K−1)/2, j−(L−1)/2}  …  S_{i−(K−1)/2, j}  …  S_{i−(K−1)/2, j+(L−1)/2} |
|            …                        …                       …              |
| S_{i, j−(L−1)/2}          …  S_{ij}            …  S_{i, j+(L−1)/2}         |
|            …                        …                       …              |
| S_{i+(K−1)/2, j−(L−1)/2}  …  S_{i+(K−1)/2, j}  …  S_{i+(K−1)/2, j+(L−1)/2} |    (3)

x = (k − (K+1)/2)·cosθ + (l − (L+1)/2)·sinθ
y = −(k − (K+1)/2)·sinθ + (l − (L+1)/2)·cosθ    (4)

L_ij[n] = Σ_{k,l} W(k, l)·Y_kl[n−1]    (5)

U_ij[n] = F_ij[n](1 + βL_ij[n])    (6)

Y_ij[n] = 1 if U_ij[n] > E_ij[n], 0 otherwise    (7)

E_ij[n] = e^{−α_E}·E_ij[n−1] + V_E·Y_ij[n]    (8)
In formulas (1)–(8), the connection matrix M of the feedback input domain is replaced by the receptive field matrix IG. F_ij is the feedback input of neuron (i, j), and F_ij[n] is its value at the n-th iteration; IG(k, l) is the element in row k, column l of the receptive field matrix IG; S_ij is the external stimulus received by the neuron, represented by the gray value of the pixel at the neuron's coordinates (i, j); in the receptive field model, S_{K×L} is the receptive field amplitude matrix, represented by the normalized gray values of the neurons in the region; K and L give the receptive field scale; L_ij[n] is the linking input at the n-th iteration; σ_x and σ_y are the standard deviations of the Gaussian envelope in the x and y directions; λ is the wavelength of the receptive field function; φ is the phase offset; θ is the optimal direction; γ is the aspect ratio; Y_kl[n−1] is the output of a neighboring neuron at iteration n−1; W is the connection matrix; β is the linking strength coefficient between synapses; α_E is the decay time constant; and U_ij[n] is the internal activity, whose magnitude determines whether the neuron emits a pulse: when the internal activity U_ij exceeds the dynamic threshold E_ij, the neuron fires and emits a pulse output Y_ij; otherwise it does not.
Implementation steps of the segmentation algorithm for disabled persons' facial images:
Step 1: Convert the captured facial image of the disabled person from RGB space to HSV space, extract the brightness-channel information of the image, and use it as the external input of the pulse coupled neural network model simulating visual-cell receptive field characteristics.
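Step 1's color-space conversion reduces to taking the HSV value (brightness) channel; a minimal sketch, computing only the V channel since the other HSV channels are not used:

```python
import numpy as np

def luminance_channel(rgb):
    """HSV value (brightness) channel of an RGB image, normalized to [0, 1].

    In the RGB-to-HSV conversion, the value channel is simply the
    per-pixel maximum of R, G, and B.
    """
    arr = np.asarray(rgb, dtype=np.float64) / 255.0
    return arr.max(axis=-1)
```

The returned array already holds the normalized gray values S_ij that Step 2 feeds to the neurons.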
Step 2: Treat each pixel of the brightness channel as a neuron, with the pixel's gray value as that neuron's external input value, where S_ij is the normalized pixel value.
Step 3: According to the characteristics of the image, determine the parameters of the receptive field model: optimal direction θ = 0°, optimal scale K = L = 3, λ = 20, phase offset φ = 0, γ = 0.5.
Step 4: The receptive field model optimizes the feedback-domain connection matrix of the pulse coupled neural network. The neuron receptive field matrix IG is calculated by formula (2), with the amplitude matrix obtained from formula (3):

S_{3×3} =
| S_{i−1,j−1}  S_{i−1,j}  S_{i−1,j+1} |
| S_{i,j−1}    S_{ij}     S_{i,j+1}   |
| S_{i+1,j−1}  S_{i+1,j}  S_{i+1,j+1} |

The feedback-domain connection matrix is then

IG(k, l) = S_{3×3}(k, l) · cos(2πx/20) · exp[ −(1/2)(x²/σ_x² + 0.5²·y²/σ_y²) ]

and F_ij[n] is calculated according to formula (1).
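The Gabor-modulated receptive field matrix of Step 4 can be sketched as follows; σ_x, σ_y, and the default phase φ are illustrative choices, since the patent leaves them unspecified at this point:

```python
import numpy as np

def ig_matrix(S_patch, theta=0.0, lam=20.0, gamma=0.5,
              sigma_x=1.0, sigma_y=1.0, phi=0.0):
    """Receptive-field (Gabor-modulated) feedback matrix IG for one neuron.

    S_patch is the K x L amplitude matrix of normalized gray values around
    the central neuron. sigma_x, sigma_y, and phi are illustrative defaults.
    """
    K, L = S_patch.shape
    # Offsets from the patch centre (equivalent to k - (K+1)/2 with 1-based k)
    k = np.arange(K).reshape(-1, 1) - (K - 1) / 2.0
    l = np.arange(L).reshape(1, -1) - (L - 1) / 2.0
    x = k * np.cos(theta) + l * np.sin(theta)    # rotate into the optimal direction
    y = -k * np.sin(theta) + l * np.cos(theta)
    envelope = np.exp(-0.5 * (x**2 / sigma_x**2 + (gamma * y)**2 / sigma_y**2))
    carrier = np.cos(2.0 * np.pi * x / lam + phi)
    return S_patch * carrier * envelope
```

With the defaults (θ = 0°, λ = 20, γ = 0.5), this reproduces the concrete 3 × 3 feedback matrix given above; the amplitude matrix S_patch weights each Gabor coefficient by the neighbor's gray value.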
Step 5: Initialize the other parameters of the PCNN model: the initial threshold E_ij[0] is the optimal threshold of the image; the decay time constant α_E = 0.185; V_E = 2; n = 4. The linking-domain weight matrix W is determined by the inverse of the squared Euclidean distance to each neighboring neuron:

W =
| 1/2  1  1/2 |
|  1   0   1  |
| 1/2  1  1/2 |

L_ij[n] is then calculated.
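The linking matrix W of Step 5 follows mechanically from the inverse-squared-distance rule; a short sketch that reproduces the 3 × 3 matrix shown above:

```python
import numpy as np

def link_matrix(K=3, L=3):
    """Linking kernel W: inverse squared Euclidean distance to the centre.

    The centre weight is 0 (a neuron does not link to itself), matching
    the 3x3 matrix given in Step 5.
    """
    W = np.zeros((K, L))
    ck, cl = (K - 1) // 2, (L - 1) // 2
    for k in range(K):
        for l in range(L):
            d2 = (k - ck) ** 2 + (l - cl) ** 2   # squared Euclidean distance
            W[k, l] = 1.0 / d2 if d2 > 0 else 0.0
    return W
```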
Step 6: Calculate the linking strength coefficient β for each neuron over its 3 × 3 neighborhood [the formula for β is reproduced only as an image in the original], where x_kl denotes the gray value of a neighborhood neuron centered on (i, j), x̄ denotes the mean gray value of the neurons in the K × L region centered on (i, j), and m denotes the number of neurons in the K × L region. U_ij[1] is then calculated: U_ij[1] = F_ij[1](1 + βL_ij[1]).
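The exact formula for β survives only as an image in the source. As a hedged illustration, the sketch below uses one plausible contrast-based choice built from the quantities the text names (x_kl, the regional mean, and m): the neighborhood standard deviation normalized by the mean. This is an assumption, not the patent's formula:

```python
import numpy as np

def link_strength(patch):
    """Illustrative local linking strength beta for the patch's centre neuron.

    ASSUMPTION: the patent's beta formula is not recoverable; this sketch
    uses the neighborhood standard deviation divided by the neighborhood
    mean, built from the quantities the text defines (x_kl, mean, m).
    """
    x = np.asarray(patch, dtype=np.float64)
    m = x.size                                  # number of neurons in the region
    mean = x.sum() / m                          # regional mean gray value
    if mean == 0.0:
        return 0.0
    std = np.sqrt(((x - mean) ** 2).sum() / m)  # regional standard deviation
    return std / mean
```

A uniform neighborhood yields β = 0 (no extra coupling needed where gray values already agree), while high local contrast yields a larger β.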
Step 7: Compare U_ij[1] with E_ij[0]. If U_ij[1] > E_ij[0], then Y_ij[1] = 1, i.e., neuron I_ij fires; the neuron is marked as fired and remains in the fired state.
Step 8: Increment the iteration count, n = n + 1, calculate the new dynamic threshold E_ij[n], and repeat Steps 4 to 7 until the specified iteration count n is reached. Here n = 4 is used, and the resulting Y_ij[n] is the final segmentation of the disabled person's facial image.
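The iteration of Steps 4 to 8 can be combined into one loop. Since formula (1) survives only as an image in the source, the feedback input below is assumed to take the simplified form F = S_ij + Σ IG·Y suggested by the surrounding text; the loop is written per pixel for clarity, not speed:

```python
import numpy as np

def ig_pcnn_segment(S, ig_func, W, beta, E0, alpha_E=0.185, V_E=2.0, n_iter=4):
    """IG-PCNN iteration loop (a sketch of Steps 4-8) for a 3x3 receptive field.

    S is the normalized luminance image; ig_func(patch) returns the
    receptive-field matrix IG for one 3x3 amplitude patch.
    ASSUMPTION: feedback input F = S_ij + sum(IG * Y), the simplified form.
    """
    H, Wd = S.shape
    Y = np.zeros((H, Wd))
    E = np.full((H, Wd), E0, dtype=float)
    Sp = np.pad(S, 1)
    for _ in range(n_iter):
        Yp = np.pad(Y, 1)
        Ynew = np.zeros_like(Y)
        for i in range(H):
            for j in range(Wd):
                s_patch = Sp[i:i + 3, j:j + 3]
                y_patch = Yp[i:i + 3, j:j + 3]
                F = S[i, j] + (ig_func(s_patch) * y_patch).sum()  # feedback input
                L = (W * y_patch).sum()                           # linking input
                U = F * (1.0 + beta * L)                          # internal activity
                Ynew[i, j] = 1.0 if U > E[i, j] else 0.0          # pulse generator
        E = np.exp(-alpha_E) * E + V_E * Ynew  # dynamic threshold, formula (8)
        Y = Ynew
    return Y
```

The binary map Y after the final iteration is the segmentation result; β and E0 would come from Steps 5 and 6.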
Comparison of segmentation methods:
Figure 4 compares the segmentation results of three methods on a disabled person's facial image: OTSU, the traditional pulse coupled neural network, and the segmentation method of the present invention, i.e., two representative classical segmentation methods against the present algorithm. As can be seen from the figure, the core of the OTSU threshold method (the maximum between-class variance method) is to divide the image into background and target according to its gray-level characteristics; but when part of the target is misclassified as background, or part of the background as target, the difference between the two parts shrinks, so the face region is easily segmented poorly or parts of the background are segmented out with it. The traditional PCNN method, through the interaction of neurons within a region, removes the background well and preserves the information of the face region, but the separation between adjacent regions is unsatisfactory, leaving segmented targets adhered to one another and complicating subsequent feature extraction. The method of the present invention uses the primary-visual-cortex receptive field model to optimize the structure of the connection matrix, enabling the selection of direction and scale in the pulse coupled neural network model and strengthening the interaction among neurons of the same region, so that the segmentation result is closer to that of human vision; it resolves the over-segmentation and under-segmentation problems that commonly arise in segmentation, and has broad application prospects.
Beneficial effects
1) The method addresses the sensitivity to illumination, poor preservation of image detail, and the over-segmentation and under-segmentation that commonly occur with general image segmentation methods.
2) The pulse coupled neural network model with receptive field characteristics segments images in a way that simulates the cell-level segmentation function of human vision.
3) Simulation experiments on a large number of disabled persons' facial images have verified the effectiveness of the method for segmenting such images.
4) Evaluating the results against segmentation criteria further verifies good separation between facial regions, well-preserved image detail, and high edge-segmentation accuracy.
5) The under-segmentation and over-segmentation problems that commonly occur in segmentation are resolved, laying a foundation for the application of identity authentication systems for disabled persons.
6) The invention is a key technology for acquiring and processing the facial information of disabled persons; it solves the segmentation of disabled persons' faces and is also applicable to segmenting the faces of able-bodied persons.

Claims (2)

1. A pulse coupled neural network facial image segmentation method simulating the receptive field characteristics of visual cells, wherein each neuron in the pulse coupled neural network comprises three parts, a receiving domain, a modulation domain, and a pulse generator; the receiving domain comprises a feedback receiving domain and a linking receiving domain; the feedback receiving domain receives the image gray value S_ij and the output pulses Y_kl of the neighboring neurons in the receptive field as input and, transformed through the receptive field matrix IG, outputs F_ij as the neuron's feedback input; the linking receiving domain receives the pulse outputs Y_kl of the neighboring neurons in the receptive field as input and, transformed through the connection matrix W, outputs L_ij as the neuron's coupled linking input; the method comprising the following steps:
a) converting the captured facial image of the disabled person from RGB space to HSV space and extracting the brightness-channel information of the image;
b) treating each pixel of the brightness channel as a neuron, with the pixel's gray value S_ij as that neuron's external input value, where S_ij is the normalized pixel value;
c) taking the receptive field to be the neuron array of size K × L centered on the current neuron, where K and L are odd; determining the receptive field matrix IG and connection matrix W of dimension K × L; and initializing the initial value of the dynamic threshold of the pulse generator, the decay time constant, and the number of iterations, wherein the connection matrix W is determined by the inverse of the squared Euclidean distance between the central neuron and each neighboring neuron in the receptive field, and IG is determined by the following formula:

IG(k, l) = S_{K×L}(k, l) · cos(2πx/λ + φ) · exp[ −(1/2)(x²/σ_x² + γ²y²/σ_y²) ]

where

S_{K×L} =
| S_{i−(K−1)/2, j−(L−1)/2}  …  S_{i−(K−1)/2, j}  …  S_{i−(K−1)/2, j+(L−1)/2} |
|            …                        …                       …              |
| S_{i, j−(L−1)/2}          …  S_{ij}            …  S_{i, j+(L−1)/2}         |
|            …                        …                       …              |
| S_{i+(K−1)/2, j−(L−1)/2}  …  S_{i+(K−1)/2, j}  …  S_{i+(K−1)/2, j+(L−1)/2} |

x = (k − (K+1)/2)·cosθ + (l − (L+1)/2)·sinθ
y = −(k − (K+1)/2)·sinθ + (l − (L+1)/2)·cosθ

in which IG(k, l) is the element in row k, column l of the IG matrix; S_{K×L} is the receptive field amplitude matrix, whose element S_{K×L}(k, l) in row k, column l is the normalized gray value of the corresponding neuron in the receptive field; K and L give the receptive field scale; σ_x and σ_y are the standard deviations of the Gaussian envelope in the x and y directions; λ is the wavelength of the receptive field function; φ is the phase offset; θ is the optimal direction; γ is the aspect ratio; and the subscripts i, j are the planar position coordinates of the current neuron;
d) obtaining the neuron's feedback input F_ij[n] and coupled linking input L_ij[n] for the n-th iteration from IG and W;
e) calculating the linking strength coefficient β and obtaining the internal activity of the n-th iteration by U_ij[n] = F_ij[n](1 + βL_ij[n]); when U_ij[n] exceeds the dynamic threshold, the neuron fires and emits a pulse output, and the dynamic threshold is updated;
f) repeating steps d and e until the maximum number of iterations is reached; the neurons' pulse outputs form the image segmentation result.
2. The method according to claim 1, characterized in that the optimal direction θ = 0° and the optimal scale K = L = 3.
CN201210137335.9A 2012-05-07 2012-05-07 Pulse coupled neural network (PCNN) face image segmenting method simulating visual cells to feel field property Expired - Fee Related CN102682297B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201210137335.9A CN102682297B (en) 2012-05-07 2012-05-07 Pulse coupled neural network (PCNN) face image segmenting method simulating visual cells to feel field property


Publications (2)

Publication Number Publication Date
CN102682297A CN102682297A (en) 2012-09-19
CN102682297B true CN102682297B (en) 2014-05-14

Family

ID=46814193

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201210137335.9A Expired - Fee Related CN102682297B (en) 2012-05-07 2012-05-07 Pulse coupled neural network (PCNN) face image segmenting method simulating visual cells to feel field property

Country Status (1)

Country Link
CN (1) CN102682297B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10096121B2 (en) 2014-05-23 2018-10-09 Watrix Technology Human-shape image segmentation method
CN104915960A (en) * 2015-06-08 2015-09-16 哈尔滨工程大学 PCNN text image segmentation method based on bacteria foraging optimization algorithm
CN106250981B (en) * 2015-06-10 2022-04-01 三星电子株式会社 Spiking neural network with reduced memory access and bandwidth consumption within the network
CN106127740B (en) * 2016-06-16 2018-12-07 杭州电子科技大学 One kind being based on the associated profile testing method of the more orientation of sensory field of visual pathway
CN107330900A (en) * 2017-06-22 2017-11-07 成都品果科技有限公司 A kind of automatic portrait dividing method
CN108416391B (en) * 2018-03-16 2020-04-24 重庆大学 Image classification method based on visual cortex processing mechanism and pulse supervised learning
CN111932440A (en) * 2020-07-09 2020-11-13 中国科学院微电子研究所 Image processing method and device

Citations (2)

Publication number Priority date Publication date Assignee Title
CN102048621A (en) * 2010-12-31 2011-05-11 重庆邮电大学 Human-computer interaction system and method of intelligent wheelchair based on head posture
CN102096950A (en) * 2010-12-10 2011-06-15 汉王科技股份有限公司 Face recognition device and recognition method for ticketing system

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
US8482626B2 (en) * 2009-04-07 2013-07-09 Mediatek Inc. Digital camera and image capturing method

Patent Citations (2)

Publication number Priority date Publication date Assignee Title
CN102096950A (en) * 2010-12-10 2011-06-15 汉王科技股份有限公司 Face recognition device and recognition method for ticketing system
CN102048621A (en) * 2010-12-31 2011-05-11 重庆邮电大学 Human-computer interaction system and method of intelligent wheelchair based on head posture

Also Published As

Publication number Publication date
CN102682297A (en) 2012-09-19


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20140514

Termination date: 20150507

EXPY Termination of patent right or utility model