CN1271560C - Attribute normalizing method for human face image collecting apparatus - Google Patents

Attribute normalizing method for human face image collecting apparatus

Info

Publication number
CN1271560C
CN1271560C CN200410047919A
Authority
CN
China
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN 200410047919
Other languages
Chinese (zh)
Other versions
CN1584916A (en)
Inventor
苏光大
章柏幸
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tsinghua University
Original Assignee
Tsinghua University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tsinghua University filed Critical Tsinghua University
Priority to CN 200410047919 priority Critical patent/CN1271560C/en
Publication of CN1584916A publication Critical patent/CN1584916A/en
Application granted granted Critical
Publication of CN1271560C publication Critical patent/CN1271560C/en
Anticipated expiration legal-status Critical
Expired - Fee Related legal-status Critical Current

Landscapes

  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The present invention belongs to the technical field of image processing and relates to a method for normalizing the attributes of face image acquisition devices. The method comprises the following steps: three kinds of face image acquisition devices, namely video cameras, photographic cameras and digital cameras, are each used to collect N face images, which are first converted to gray-scale images; the N collected face images are used for training, computing the mode-class-center distribution functions to obtain the cumulative mode-class-center charts of the acquisition devices; a standard device and a device to be converted are designated; histogram statistics are performed on one face image from the device to be converted to obtain the cumulative histogram chart of that image; the gray serial number of each pixel of the face image collected by the device to be converted is computed; the gray-scale values of the pixels of the face image are reassigned to form a transition image; and the face image collected by the device to be converted is fused with the transition image at the pixel level, finally forming an image whose acquisition-device attributes are normalized. The present invention improves the recognition rate of face recognition performed on face images collected by different acquisition devices.

Description

A method for normalizing the attributes of face image acquisition devices
Technical field
The invention belongs to the technical field of image processing, and in particular concerns a method for improving the face recognition rate.
Background technology
Face recognition involves many disciplines, including image processing, computer vision and pattern recognition, and is also closely related to physiological and biological research on the structure of the human brain. The generally acknowledged difficulties of face recognition are:
(1) changes in the face caused by aging;
(2) diversity of facial appearance caused by pose;
(3) plastic deformation of the face caused by expression;
(4) multiplicity of facial patterns caused by factors such as glasses and makeup;
(5) differences among face images caused by illumination.
Beyond these problems, the inventors have also observed the following fact in face recognition research: if a person's face image collected with a video camera is put into the database, and a face image of the same person collected with a video camera is then used for identification, the recognition rate is very high; but if the person's current photograph is put into the database and a current face image of the same person collected with a video camera is used for identification, the recognition rate drops. In other words, for the same person, face images collected with the same acquisition device yield a high recognition rate, while face images collected with different devices yield a lower one.
In practical applications, recognition across face images collected with different devices is quite common: for example, in online fugitive pursuit there are photographs of escaped criminals in the database, while at an airport passengers' faces are taken with video cameras and matched against the faces stored in the database. For this reason, research on the image attributes of different acquisition devices is highly significant.
Summary of the invention
The object of the present invention is to solve the problem of the low recognition rate for face images of the same person caused by the differences between images collected by different acquisition devices. A method for normalizing the attributes of face image acquisition devices is proposed which, by analyzing, training on and normalizing the information inherent in face images, can improve the recognition rate of face recognition performed on face images collected with different acquisition devices.
The present invention proposes a method for normalizing the attributes of face image acquisition devices, characterized in that it comprises two parts, a training process and a normalization process. The training process comprises the following steps:
1) First use each kind of face image acquisition device (video camera, photographic camera, digital camera) to collect N face images, with N greater than 15. If an input image is a color image, first convert it to a gray-scale image; the method of converting the color image to a gray-scale image is:
Y = 0.299R + 0.587G + 0.114B
where Y is the gray-scale value, and R, G, B are the red, green and blue primary color components of the color image;
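As a concrete illustration, the conversion above can be sketched in Python (a minimal sketch; the function name is ours, and the green weight is taken as the standard luminance value 0.587):

```python
def rgb_to_gray(r, g, b):
    """Weighted luminance gray-scale value of one pixel, Y = 0.299R + 0.587G + 0.114B."""
    return int(round(0.299 * r + 0.587 * g + 0.114 * b))

print(rgb_to_gray(255, 255, 255))  # 255: pure white maps to the top gray level
print(rgb_to_gray(255, 0, 0))      # 76: a pure red pixel keeps only its red weight
```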
2) Train on the N face images collected by each face image acquisition device, compute the mode-class-center distribution function, and obtain the cumulative mode-class-center chart of each face image acquisition device. The concrete method is:
(1) The gray-level histogram of each training face image collected by this kind of device is expressed as:
p^(y)Gray(g), g = 0, 1, ..., 255; y = 1, 2, ..., N
where y is the serial number of the training face image and g is the gray-scale value of a face image pixel;
(2) Define the mode-class-center cumulative distribution function as:
pDevice(g), g = 0, 1, ..., 255;
(3) Define the mode-class-center distribution value at gray level g as B(g); the mode-class-center cumulative distribution function of each device is then:
pDevice(g) = B(g) + pDevice(g-1)
with pDevice(g-1) = 0 when g = 0;
The method of computing B(g) is:
(31) First compute the mode-class-center value B(0) at g = 0, as follows:
Let the class-center value of the 1st and 2nd images at gray level 0 be k1, with class radius r1; then:
k1 = (p^(1)Gray(0) + p^(2)Gray(0))/2; r1 = |p^(1)Gray(0) - p^(2)Gray(0)|/2. The deviation T of the 3rd image from the class-center value k1 of the 1st and 2nd images at gray level 0 is T = |p^(3)Gray(0) - k1|. When T ≤ r1, the class-center value k2 of the 1st, 2nd and 3rd images is unchanged, i.e. k2 = k1, and the class radius r2 is unchanged, i.e. r2 = r1; when T > r1, k2 = (k1 + p^(3)Gray(0))/2 and r2 = |k1 - p^(3)Gray(0)|/2. Computing in this way through the 1st, 2nd, ..., (N-1)th and Nth images yields the final class-center value k_{N-1}; then B(0) = k_{N-1}, i.e. the mode-class-center cumulative distribution value at gray level 0 is pDevice(0) = B(0).
(32) Then, following the method used to compute the mode-class-center distribution value at g = 0, compute the mode-class-center value B(1) at g = 1 to obtain the cumulative distribution value at gray level 1, pDevice(1) = B(1) + pDevice(0). By the same rule, with B(g) the mode-class-center distribution value at gray level g, the mode-class-center cumulative distribution function is:
pDevice(g) = B(g) + pDevice(g-1)
(4) Apply steps (1), (2) and (3) to each face image acquisition device, obtaining the mode-class-center cumulative distribution functions of the three classes of face image acquisition devices (video camera, photographic camera, digital camera):
Video camera: pDeviceCCD(g), g = 0, 1, ..., 255
Photographic camera: pDeviceScanner(g), g = 0, 1, ..., 255
Digital camera: pDeviceDigital(g), g = 0, 1, ..., 255;
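The training steps (1)-(4) above can be sketched as follows. This is a hedged illustration, not the patent's reference code: `histograms` stands for the N per-image gray-level histograms of one device, each a 256-entry sequence, and the function names are ours:

```python
def class_center(values):
    """Sequential class-center value over the N training histogram values at one
    gray level: start from the mean and half-range of the first two values, then
    fold in each later value only when it falls outside the current class radius."""
    k = (values[0] + values[1]) / 2.0          # initial class-center value k1
    r = abs(values[0] - values[1]) / 2.0       # initial class radius r1
    for v in values[2:]:
        if abs(v - k) > r:                     # deviation T exceeds the radius
            k, r = (k + v) / 2.0, abs(k - v) / 2.0
    return k                                   # B(g) for this gray level

def mode_class_cdf(histograms):
    """pDevice(g) = B(g) + pDevice(g-1), with pDevice(-1) = 0."""
    cdf, acc = [], 0.0
    for g in range(256):
        acc += class_center([h[g] for h in histograms])
        cdf.append(acc)
    return cdf
```

For example, with three identical uniform histograms the class center at every gray level is simply the common value, and the cumulative function rises to 1.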
The normalization process comprises the following steps:
3) Determine the purpose of attribute normalization, designating the standard device and the device to be converted;
4) Perform histogram statistics on one face image collected by the device to be converted to obtain the cumulative histogram chart of this image. The histogram cumulative distribution function of the image is:
pDevice(g) = pGray(g) + pDevice(g-1)
where pGray(g), g = 0, 1, ..., 255, is the histogram of this face image, and pDevice(g-1) = 0 when g = 0;
Then compute the gray serial number of each pixel of the face image collected by the device to be converted. The concrete method is:
(1) Define a pixel B(i, j, g, t), where i, j are the plane coordinates of pixel B, g is the gray-scale value at this point, and t is the serial number of this point within the set of points in the whole image whose gray-scale value is g;
(2) Traverse the whole image and count the pixels whose gray-scale value equals g, denoting the count K; the gray-scale value g ranges over 0-255;
(3) For each point whose gray-scale value equals g, compute the mean of the M × N neighborhood centered on that point, with M, N greater than or equal to 2 (e.g. 3 × 3);
(4) Sort these K means from small to large; if means are equal, the order is decided by the value of i*j from small to large. The serial number of a point's mean is the gray serial number t of the corresponding pixel B(i, j, g, t);
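A sketch of the gray-serial-number computation of steps (1)-(4), assuming NumPy and an 8-bit single-channel image; border pixels are handled here by edge replication, a detail the text above leaves open:

```python
import numpy as np

def gray_serial_numbers(img, m=3, n=3):
    """For every gray level g, order the points of value g by the mean of the
    m x n neighborhood centred on them; ties are broken by i*j, as in step (4).
    Returns a dict mapping g to the list of (i, j) points in serial-number order."""
    h, w = img.shape
    pad = np.pad(img.astype(float), ((m // 2,) * 2, (n // 2,) * 2), mode="edge")
    means = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            means[i, j] = pad[i:i + m, j:j + n].mean()  # neighborhood mean at (i, j)
    order = {}
    for g in range(256):
        pts = [(i, j) for i in range(h) for j in range(w) if img[i, j] == g]
        pts.sort(key=lambda p: (means[p], p[0] * p[1]))
        order[g] = pts
    return order
```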
5) According to the cumulative histogram chart of step 4), the cumulative mode-class-center chart of the standard device obtained in step 2), and the gray serial numbers of the pixels of the face image collected by the device to be converted computed in step 4), reassign the gray-scale values of the pixels of that face image to form a transition image. The concrete procedure for forming the transition image is:
(1) In the cumulative histogram chart of the face image collected by the device to be converted, search upward from gray-scale value 0: if the cumulative histogram value at gray-scale value 0 is 0, check the cumulative histogram value at gray-scale value 1, and so on, until a gray-scale value whose cumulative histogram value is not 0 is found. Denote this gray-scale value g1 and its cumulative histogram value P1;
(2) Then, in the cumulative mode-class-center chart of the standard device, find the gray-scale value G1 whose class-center cumulative value is P1. Examining the class-center cumulative values of the gray-scale values below G1 in the standard device's chart, find the first gray-scale value G0 whose class-center cumulative value is not 0;
(3) Reassign the face image collected by the device to be converted according to the concrete class-center cumulative values of the segment G0-G1 of the standard device's cumulative mode-class-center chart. The concrete assignment method is:
(31) Suppose the face image collected by the device to be converted is a W × H array, and the first non-zero cumulative histogram value, at gray-scale value g1, is P1; then the number of pixels of gray level g1 in the whole image is N = P1 × W × H, and the number of pixels in the segment G0-G1 of the cumulative mode-class-center chart is also N;
(32) Assign the pixels of gray level g1 in the face image, in order of gray serial number from small to large, the gray-scale values of the segment G0-G1 of the cumulative mode-class-center chart;
(4) Next find the cumulative histogram value P2 of gray-scale value g2 (g2 = g1 + 1), then find in the standard device's cumulative mode-class-center chart the gray-scale value G2 whose class-center cumulative value is P2, and reassign the face image according to the concrete class-center cumulative values of the segment G1-G2. The concrete assignment method is: in the face image collected by the device to be converted, the number of pixels of gray level g2 is N = (P2 - P1) × W × H, and the number of pixels in the segment G1-G2 of the cumulative mode-class-center chart is also N; assign the pixels of gray level g2, in order of gray serial number from small to large, the gray-scale values of the segment G1-G2 of the cumulative mode-class-center chart;
(5) Proceeding in this way, the reassignment of every gray-scale value of the pixels of the face image collected by the device to be converted is completed, forming a transition image;
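Read end to end, the segment-wise steps (1)-(5) assign the pixels, taken in order of gray level and then gray serial number, consecutive runs of standard-device gray levels whose cumulative class-center counts match the source image's cumulative histogram. A sketch under that reading (the names and the rank-based shortcut are ours; `std_cdf` is the standard device's 256-entry cumulative fraction and `serial_order` is the output of the gray-serial-number step):

```python
import numpy as np

def transition_image(img, std_cdf, serial_order):
    """Give each pixel, in (gray level, serial number) order, the smallest
    standard-device gray level whose cumulative class-center count covers the
    pixel's global rank -- a compact equivalent of the segment-wise assignment."""
    h, w = img.shape
    counts = np.asarray(std_cdf) * (h * w)   # cumulative counts scaled to this image
    out = np.empty_like(img)
    rank = 0
    for g1 in range(256):
        for (i, j) in serial_order.get(g1, []):
            rank += 1
            # first standard gray level whose cumulative count reaches this rank
            out[i, j] = min(int(np.searchsorted(counts, rank)), 255)
    return out
```

With a uniform standard distribution and a flat source image, the pixels spread evenly over the standard gray range, as a histogram-specification step should.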
6) Fuse the face image collected by the device to be converted with the transition image at the pixel level, finally forming the acquisition-device attribute-normalized image E(i, j):
E(i,j)=(C(i,j)+D(i,j))/2
where C(i, j) is the face image collected by the device to be converted and D(i, j) is the transition image.
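A sketch of the fusion of step 6), assuming 8-bit NumPy arrays (the widening to uint16 before averaging is ours, to avoid overflow):

```python
import numpy as np

def fuse(c, d):
    """Pixel-level fusion E(i, j) = (C(i, j) + D(i, j)) / 2 of the source image C
    and the transition image D."""
    return ((c.astype(np.uint16) + d.astype(np.uint16)) // 2).astype(np.uint8)

C = np.array([[100, 255]], dtype=np.uint8)
D = np.array([[200, 255]], dtype=np.uint8)
print(fuse(C, D).tolist())  # [[150, 255]]
```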
The present invention improves the recognition rate of face recognition performed on face images collected with different acquisition devices, adapting it to the varied situations of practical application.
Description of drawings
Fig. 1 is an example of face images of one person in embodiment 1 of the present invention.
Fig. 2 is an example of face images of one person in embodiment 2 of the present invention.
Embodiment
The method for normalizing the attributes of face image acquisition devices proposed by the present invention is described in detail below in conjunction with the embodiments:
Embodiment 1
In this embodiment the photographic camera is designated the standard device and the video camera the device to be converted. First, training samples are made: 20 people are photographed with a camera in a photo studio. Under the same studio illumination, face images of the same 20 people are collected with a video camera. The photographs are then input to a computer with a scanner, forming the camera-input face images. Training on the 20 camera-input face images yields the camera's mode-class-center cumulative distribution chart. The face images of the 20 people collected by the video camera are then gray-scale normalized by this method, forming 20 new face images.
This embodiment specifically comprises two parts, a training process and a normalization process. The training process comprises the following steps:
1) First collect 20 face images each with the two kinds of face image acquisition devices, the video camera and the photographic camera. If an input image is a color image, first convert it to a gray-scale image; the gray value Y of the conversion is:
Y = 0.299R + 0.587G + 0.114B, where R, G, B are the red, green and blue primary color components of the color image;
2) Train on the 20 face images collected by each face image acquisition device, compute the mode-class-center distribution function, and obtain the cumulative mode-class-center chart of each face image acquisition device. The concrete method is:
(1) The gray-level histogram of each training face image collected by this kind of device is expressed as:
p^(y)Gray(g), g = 0, 1, ..., 255; y = 1, 2, ..., 20
where y is the serial number of the training face image and g is the gray-scale value of a face image pixel;
(2) Define the mode-class-center cumulative distribution function as:
pDevice(g), g = 0, 1, ..., 255;
(3) Define the mode-class-center distribution value at gray level g as B(g); the mode-class-center cumulative distribution function of each device is then:
pDevice(g) = B(g) + pDevice(g-1)
with pDevice(g-1) = 0 when g = 0;
The method of computing B(g) is:
(31) First compute the mode-class-center value B(0) at g = 0, as follows:
Let the class-center value of the 1st and 2nd images at gray level 0 be k1, with class radius r1; then:
k1 = (p^(1)Gray(0) + p^(2)Gray(0))/2; r1 = |p^(1)Gray(0) - p^(2)Gray(0)|/2. The deviation T of the 3rd image from the class-center value k1 of the 1st and 2nd images at gray level 0 is T = |p^(3)Gray(0) - k1|. When T ≤ r1, the class-center value k2 of the 1st, 2nd and 3rd images is unchanged, i.e. k2 = k1, and the class radius r2 is unchanged, i.e. r2 = r1; when T > r1, k2 = (k1 + p^(3)Gray(0))/2 and r2 = |k1 - p^(3)Gray(0)|/2. Computing in this way through the 1st, 2nd, ..., (N-1)th and Nth images yields the final class-center value k_{N-1}; at this point B(0) = k_{N-1}, so the mode-class-center cumulative distribution value at gray level 0 is pDevice(0) = B(0).
(32) Then, following the method used to compute the mode-class-center distribution value at g = 0, compute the mode-class-center value B(1) at g = 1 to obtain the cumulative distribution value at gray level 1, pDevice(1) = B(1) + pDevice(0). By the same rule, with B(g) the mode-class-center distribution value at gray level g, the mode-class-center cumulative distribution function is:
pDevice(g) = B(g) + pDevice(g-1)
(4) Apply steps (1), (2) and (3) to each face image acquisition device, obtaining the mode-class-center cumulative distribution functions of the two classes of face image acquisition devices:
Video camera: pDeviceCCD(g), g = 0, 1, ..., 255
Photographic camera: pDeviceScanner(g), g = 0, 1, ..., 255
The normalization process comprises the following steps:
3) Determine the purpose of attribute normalization: the photographic camera is designated the standard device and the video camera the device to be converted;
4) Perform histogram statistics on one face image collected by the video camera to obtain the cumulative histogram chart of this image. The histogram cumulative distribution function of the image is:
pDevice(g) = pGray(g) + pDevice(g-1)
where pGray(g), g = 0, 1, ..., 255, is the histogram of this face image, and pDevice(g-1) = 0 when g = 0;
Then compute the gray serial number of each pixel of the face image collected by the video camera. The concrete method is:
(1) Define a pixel B(i, j, g, t), where i, j are the plane coordinates of pixel B, g is the gray-scale value at this point, and t is the serial number of this point within the set of points in the whole image whose gray-scale value is g;
(2) Traverse the whole image and count the pixels whose gray-scale value equals g, denoting the count K; the gray-scale value g ranges over 0-255;
(3) For each point whose gray-scale value equals g, compute the mean of the M × N neighborhood centered on that point, with M, N greater than or equal to 2 (e.g. 3 × 3);
(4) Sort these K means from small to large; if means are equal, the order is decided by the value of i*j from small to large. The serial number of a point's mean is the gray serial number t of the corresponding pixel B(i, j, g, t);
5) According to the cumulative histogram chart of step 4), the cumulative mode-class-center chart of the photographic camera obtained in step 2), and the gray serial numbers of the pixels of the face image collected by the video camera, reassign the gray-scale values of the pixels to form a transition image. In this embodiment the concrete procedure for forming the transition image is: in the face image collected by the device to be converted, the first non-zero cumulative histogram value is 0.01%, the whole image is a 512 × 512 array, and the corresponding gray-scale value is 10, so the number of pixels of gray level 10 in the whole image is 0.01% × 512 × 512 ≈ 26. In the cumulative mode-class-center chart the segment G0-G1 is 20-23, and the distribution of its class-center cumulative values is: 4 pixels at gray-scale value 20, 5 pixels at 21, 7 pixels at 22, and 10 pixels at 23. The 26 pixels of gray level 10 in the face image collected by the device to be converted are therefore reassigned: the points of gray level 10 with serial numbers 1, 2, 3, 4 have their gray-scale value rewritten to 20; the points with serial numbers 5-9 are rewritten to 21; the points with serial numbers 10-16 are rewritten to 22; and the points with serial numbers 17-26 are rewritten to 23.
Proceeding in this way, the reassignment of the face image collected by the device to be converted is completed, forming a transition image.
6) Fuse the face image collected by the device to be converted with the transition image formed in steps 4)-5) at the pixel level, finally forming the acquisition-device attribute-normalized image.
Let the face image collected by the device to be converted be C(i, j), the transition image D(i, j), and the acquisition-device attribute-normalized image E(i, j). The fusion algorithm selected in this embodiment is:
E(i,j)=(C(i,j)+D(i,j))/2.
Face recognition was then performed against the camera images with the original images and with the 20 normalized face images, using the eigenface recognition mode of the multi-mode face recognition method based on component principal component analysis. The recognition results obtained over a database of 430,000 people are shown in Table 1.
Table 1. Face image recognition rates of the embodiment (database of 430,000 people)

                        Before processing    After processing
First-choice rate       40%                  40%
Top-10 rate             65%                  80%
Top-50 rate             70%                  85%
Fig. 1 gives an example of face images of one person using this embodiment. Fig. 1(1) and Fig. 1(4) are face images taken on the same day under the same studio illumination: Fig. 1(1) was taken with the video camera, and Fig. 1(4) was formed by scanner input after the photograph was taken with the camera. Fig. 1(2) is the transition image obtained by processing the image of Fig. 1(1), and Fig. 1(3) is the final image formed by fusing Fig. 1(1) and Fig. 1(2). Table 2 gives the results of recognizing Fig. 1(4) with Fig. 1(1), (2) and (3), respectively.
Table 2. Recognition of Fig. 1(4) by Figs. 1(1), (2) and (3) (database of 430,000 people)

             Rank
Fig. 1(1)    22
Fig. 1(2)    >100
Fig. 1(3)    1
As Table 2 shows, in the database of 430,000 people, the Fig. 1(3) image obtained by this embodiment identifies the same person at rank 1, whereas the Fig. 1(1) image taken by the video camera identifies the same person only at rank 22.
Embodiment 2:
This embodiment designates the photographic camera as the standard device and the video camera as the device to be converted, and uses the camera's mode-class-center cumulative distribution chart obtained above. 100 people were photographed with a camera, and the photographs were then input with a scanner to form face images of 100 different people; in the laboratory, face images of the same 100 people were taken with a video camera. Using the concrete method of embodiment 1, the transition images and the final normalized face images of the 100 people were obtained. Each person's camera image was then recognized with the 100 people's transition images and final normalized face images respectively, using the eigenface recognition method of the multi-mode face recognition method based on component principal component analysis; the face recognition rates shown in Table 3 were obtained.
Table 3. Face image recognition rates of embodiment 2 (database of 430,000 people)

                        Before processing    After processing
First-choice rate       31.9%                39.1%
Top-10 rate             46.4%                55.1%
Top-50 rate             63.7%                66.7%
Comparing Table 3 with Table 1, the improvement in recognition rate in embodiment 2 is not as large as in embodiment 1. The reason is that the embodiments differ in conditions such as illumination (the laboratory illumination differs considerably from the studio illumination), and the two classes of images of the same person were collected at different times, with variations in pose and the like.
Fig. 2 is an example of face images of one person using this embodiment. Fig. 2(1) and Fig. 2(4) are face images of the same person: Fig. 2(1) was taken with the video camera, and Fig. 2(4) was formed by scanner input after the same person was photographed in the photo studio. Fig. 2(2) is the transition image obtained by processing the image of Fig. 2(1), and Fig. 2(3) is the final image formed by fusing Fig. 2(1) and Fig. 2(2). Table 4 gives the results of recognizing Fig. 2(4) with Fig. 2(1), (2) and (3), respectively.
Table 4. Recognition of Fig. 2(4) by Figs. 2(1), (2) and (3) (database of 430,000 people)

             Rank
Fig. 2(1)    >100
Fig. 2(2)    >100
Fig. 2(3)    1
As Table 4 shows, in the database of 430,000 people, the Fig. 2(3) image obtained by the present invention identifies the same person at rank 1, whereas for the Fig. 2(1) image taken by the video camera the same person does not appear among the 100 most similar entries returned from the database of 430,000 people.
From the recognition results shown in Tables 1-4, the effect of the present invention is clear.

Claims (1)

1. A method for normalizing the attributes of face image acquisition devices, characterized in that it comprises two parts, a training process and a normalization process. The training process comprises the following steps:
1) First collect N face images with each kind of face image acquisition device, with N greater than 15. If an input image is a color image, first convert it to a gray-scale image; the method of converting the color image to a gray-scale image is:
Y = 0.299R + 0.587G + 0.114B
where Y is the gray-scale value, and R, G, B are the red, green and blue primary color components of the color image;
2) Train on the N face images collected by each face image acquisition device, compute the mode-class-center distribution function, and obtain the cumulative mode-class-center chart of each face image acquisition device. The concrete method is:
(1) The gray-level histogram of each training face image collected by this kind of device is expressed as:
p^(y)Gray(g), g = 0, 1, ..., 255; y = 1, 2, ..., N
where y is the serial number of the training face image and g is the gray-scale value of a face image pixel;
(2) Define the mode-class-center cumulative distribution function as:
pDevice(g), g = 0, 1, ..., 255;
(3) Define the mode-class-center distribution value at gray level g as B(g); the mode-class-center cumulative distribution function of each device is then:
pDevice(g) = B(g) + pDevice(g-1)
with pDevice(g-1) = 0 when g = 0;
The method of computing B(g) is:
(31) First compute the mode-class-center value B(0) at g = 0, as follows:
Let the class-center value of the 1st and 2nd images at gray level 0 be k1, with class radius r1; then:
k1 = (p^(1)Gray(0) + p^(2)Gray(0))/2; r1 = |p^(1)Gray(0) - p^(2)Gray(0)|/2. The deviation T of the 3rd image from the class-center value k1 of the 1st and 2nd images at gray level 0 is T = |p^(3)Gray(0) - k1|. When T ≤ r1, the class-center value k2 of the 1st, 2nd and 3rd images is unchanged, i.e. k2 = k1, and the class radius r2 is unchanged, i.e. r2 = r1; when T > r1, k2 = (k1 + p^(3)Gray(0))/2 and r2 = |k1 - p^(3)Gray(0)|/2. Computing in this way through the 1st, 2nd, ..., (N-1)th and Nth images yields the final class-center value k_{N-1}; then B(0) = k_{N-1}, i.e. the mode-class-center cumulative distribution value at gray level 0 is pDevice(0) = B(0);
(32) Then, following the method used to compute the mode-class-center distribution value at g = 0, compute the mode-class-center value B(1) at g = 1 to obtain the cumulative distribution value at gray level 1, pDevice(1) = B(1) + pDevice(0). By the same rule, with B(g) the mode-class-center distribution value at gray level g, the mode-class-center cumulative distribution function is:
pDevice(g) = B(g) + pDevice(g-1)
The normalization process may further comprise the steps:
3) determine attribute normalization purpose, specified value equipment and equipment to be converted;
4) treat the width of cloth facial image that conversion equipment gathers and carry out statistics with histogram, obtain the histogrammic accumulative total figure of this width of cloth image; The histogram cumulative distribution function of this width of cloth image is:
pDevice(g)=pGray(g)+pDevice(g-1)
where pGray(g), g = 0, 1, ..., 255, is the histogram of the face image;
when g = 0, let pDevice(g-1) = 0.
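The cumulative histogram of step 4) can be computed as below. The histogram is normalized to frequencies here, which the later pixel-count formula N = P1 × W × H appears to assume, although the patent does not state the normalization explicitly; `cumulative_histogram` is a hypothetical name:

```python
def cumulative_histogram(gray_image):
    """gray_image: 2-D list of gray values in 0..255.
    Returns pDevice(g) = pGray(g) + pDevice(g-1), with pGray(g)
    taken as the frequency of gray level g (count / total pixels)."""
    hist = [0] * 256
    total = 0
    for row in gray_image:
        for g in row:
            hist[g] += 1
            total += 1
    cdf, acc = [], 0.0
    for g in range(256):
        acc += hist[g] / total    # pGray(g) as a frequency
        cdf.append(acc)
    return cdf
```

By construction the last entry, pDevice(255), equals 1.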
Calculate the gray-scale sequence number of each pixel of the face image captured by the device to be converted, as follows:
(1) Define pixel B(i, j, g, t), where i, j are the plane coordinates of pixel B, g is the gray value of that point, and t is its sequence number within the set of points in the whole image whose gray value is g;
(2) Traverse the whole image and count the pixels whose gray value equals g, denoting the count K; the gray value g ranges over 0-255;
(3) For each point whose gray value equals g, calculate the mean of the M × N neighborhood centered on that point, where M and N are each greater than or equal to 2;
(4) Sort the K means in ascending order; if two means are equal, their order is decided by the value of i*j, from small to large. The rank of a mean is the gray-scale sequence number t of the corresponding pixel B(i, j, g, t);
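Steps (1)-(4) above can be sketched as follows. The 3 × 3 neighborhood size and the clamping of the neighborhood at image borders are illustrative assumptions (the patent only requires M, N ≥ 2 and does not specify border handling), and `gray_sequence_numbers` is a hypothetical name:

```python
def gray_sequence_numbers(img, m=3, n=3):
    """For each pixel (i, j) with gray value g, compute its sequence
    number t by ranking the pixels of equal gray value in ascending
    order of their m x n neighborhood mean, ties broken by i*j as in
    step (4). Returns a dict mapping (i, j) -> t."""
    h, w = len(img), len(img[0])

    def neighborhood_mean(i, j):
        vals = []
        for di in range(-(m // 2), m // 2 + 1):
            for dj in range(-(n // 2), n // 2 + 1):
                ii = min(max(i + di, 0), h - 1)   # clamp at the border
                jj = min(max(j + dj, 0), w - 1)
                vals.append(img[ii][jj])
        return sum(vals) / len(vals)

    seq = {}
    for g in range(256):
        pts = [(i, j) for i in range(h) for j in range(w) if img[i][j] == g]
        pts.sort(key=lambda p: (neighborhood_mean(p[0], p[1]), p[0] * p[1]))
        for t, p in enumerate(pts):
            seq[p] = t
    return seq
```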
5) Using the cumulative histogram obtained in step 4), the mode class center cumulative figure of the standard device obtained in step 2), and the gray-scale sequence numbers of the pixels of the face image captured by the device to be converted, reassign the gray values of those pixels to form a transition image. The specific procedure for forming the transition image is:
(1) In the cumulative histogram of the face image captured by the device to be converted, start searching from gray value "0". If the cumulative histogram value at gray value "0" is "0", search the cumulative value at gray value "1", and so on, until a gray value whose cumulative histogram value is not "0" is found; denote that gray value g1 and its cumulative histogram value P1;
(2) In the mode class center cumulative figure of the standard device, find the gray value G1 whose class center cumulative value equals P1; then examine the mode class center cumulative values of the gray values lower than G1 in the standard device's figure and find the first gray value G0 whose mode class center cumulative value is not "0";
(3) Reassign the face image captured by the device to be converted according to the specific mode class center cumulative values of the section G0~G1 of the standard device's mode class center cumulative figure. The reassignment method is:
(31) Let the full face image captured by the device to be converted be a W × H lattice, and let the first nonzero cumulative histogram value be P1, corresponding to gray value g1; then the number of pixels with gray value g1 in the full image is N = P1 × W × H, and the number of pixels covered by the section G0~G1 of the mode class center cumulative figure is also N;
(32) In the face image captured by the device to be converted, take the pixels of gray value g1 in ascending order of gray-scale sequence number and assign them the gray values G0~G1 according to the mode class center cumulative figure;
(4) Next search gray value g2, where g2 = g1 + 1, with corresponding cumulative histogram value P2; in the mode class center cumulative figure of the standard device, find the gray value G2 whose class center cumulative value equals P2, and reassign the face image captured by the device to be converted according to the specific mode class center cumulative values of the section G1~G2 of the standard device's figure. The reassignment method is: in the face image captured by the device to be converted, the number of pixels with gray value g2 is N = (P2 - P1) × W × H, and the number of pixels covered by the section G1~G2 of the mode class center cumulative figure is also N; take the pixels of gray value g2 in ascending order of gray-scale sequence number and assign them the gray values G1~G2 according to the mode class center cumulative figure;
(5) Proceeding in this manner, complete the reassignment of every gray value of the pixels of the face image captured by the device to be converted, forming the transition image;
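Read together, steps (1)-(5) amount to an exact histogram specification: all pixels are ordered by (gray value, gray-scale sequence number) and walked through the sections G0~G1, G1~G2, ... of the standard device's cumulative figure. A compact sketch under that reading, assuming both cumulative functions are normalized to [0, 1] (`transition_image` is a hypothetical name):

```python
def transition_image(img, seq, std_cdf):
    """Reassign gray values so the image's cumulative histogram follows
    std_cdf, the standard device's mode class center cumulative figure.

    img:     2-D list of gray values (the image of the device to convert)
    seq:     dict (i, j) -> gray-scale sequence number t
    std_cdf: 256 cumulative values normalized to [0, 1]"""
    h, w = len(img), len(img[0])
    total = h * w
    # order pixels as the patent does: ascending gray value, then
    # ascending gray-scale sequence number within each gray value
    pixels = sorted((img[i][j], seq[(i, j)], i, j)
                    for i in range(h) for j in range(w))
    out = [[0] * w for _ in range(h)]
    rank, target_g = 0, 0
    for g, t, i, j in pixels:
        rank += 1
        # advance the target gray level until its cumulative share
        # covers this pixel's rank (the successive G0~G1, G1~G2 sections)
        while target_g < 255 and std_cdf[target_g] * total < rank:
            target_g += 1
        out[i][j] = target_g
    return out
```

With a uniform standard distribution, for example, the four pixels of a 2 × 2 image are spread evenly across the gray range.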
6) Fuse the face image captured by the device to be converted with the transition image at the pixel level to finally form the collection-device attribute-normalized image E(i, j):
E(i,j)=(C(i,j)+D(i,j))/2
where C(i, j) is the face image captured by the device to be converted and D(i, j) is the transition image.
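The pixel-level fusion of step 6) is a plain average of the two images. A minimal sketch (`fuse` is a hypothetical name; integer division is an assumption, since the patent does not specify rounding for odd sums):

```python
def fuse(c, d):
    """Step 6): E(i, j) = (C(i, j) + D(i, j)) / 2, computed per pixel
    over two same-sized 2-D lists of gray values."""
    return [[(cv + dv) // 2 for cv, dv in zip(crow, drow)]
            for crow, drow in zip(c, d)]
```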
CN 200410047919 2004-06-11 2004-06-11 Attribute normalizing method for human face image collecting apparatus Expired - Fee Related CN1271560C (en)

Publications (2)

Publication Number Publication Date
CN1584916A CN1584916A (en) 2005-02-23
CN1271560C true CN1271560C (en) 2006-08-23


