CN107742094A - Image processing method for improving face-to-ID comparison results - Google Patents

Image processing method for improving face-to-ID comparison results

Info

Publication number
CN107742094A
Authority
CN
China
Prior art keywords
image
face
facial image
dynamic
facial
Prior art date
Legal status (as listed by Google Patents; not a legal conclusion)
Pending
Application number
CN201710867260.2A
Other languages
Chinese (zh)
Inventor
何煜埕
陈涛
李家洪
过陈晨
林志
Current Assignee (as listed by Google Patents; may be inaccurate)
Jiangsu Aerospace Polytron Technologies Inc
Original Assignee
Jiangsu Aerospace Polytron Technologies Inc
Priority date (as listed by Google Patents; not a legal conclusion)
Filing date
Publication date
Application filed by Jiangsu Aerospace Polytron Technologies Inc
Priority to CN201710867260.2A
Publication of CN107742094A
Legal status: Pending


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 - Feature extraction; Face representation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 - Image enhancement or restoration
    • G06T5/40 - Image enhancement or restoration by the use of histogram techniques
    • G06T5/70
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/10 - Segmentation; Edge detection
    • G06T7/11 - Region-based segmentation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/20 - Image preprocessing
    • G06V10/24 - Aligning, centring, orientation detection or correction of the image
    • G06V10/242 - Aligning, centring, orientation detection or correction of the image by image rotation, e.g. by 90 degrees

Abstract

The present invention provides an image processing method for improving face-to-ID comparison results. The ID-card photograph and the dynamically captured face image are first preprocessed, including: applying translation and rotation transformations to the dynamically captured face image to correct the position of the face in the picture; then scaling the ID photograph and the captured face image; then cropping the face region from each image and converting the two cropped face images to grayscale; then applying histogram equalization; then smoothing the two equalized grayscale face images; and finally binarizing them to obtain two binary face images. Feature extraction and comparison of the face images are then performed. The present invention reduces the influence of the external environment on facial feature extraction.

Description

Image processing method for improving face-to-ID comparison results
Technical field
The present invention relates to the technical field of face recognition, and in particular to an image processing method for improving face-to-ID comparison results.
Background art
Face recognition is an emerging biometric identification technology. Driven by the demand for fast and effective automatic identity verification, face recognition combined with ID-card reading has given rise to face-to-ID comparison technology: a computer collects and processes face information (the inherent physiological features of a person, such as face shape and facial appearance) and compares it with the photograph stored in the ID-card chip to verify the person's identity. Because the photograph in the ID-card chip is a low-resolution image, relatively little facial feature information can be extracted from it, which degrades the comparison result. In addition, the position and size of the face in a dynamically captured image are not fixed, which also affects the result. Some image processing must therefore be applied to both the ID-card photograph and the dynamically captured face image to improve the recovery rate of facial features and thereby raise the correct recognition rate of face-to-ID comparison.
Face-to-ID comparison is currently used in security management systems for major protected areas such as station security checks, enterprise visitor management, and hotel check-in. An integrated ID-verification terminal reads valid ID-card information and photographs the person on site, then automatically compares the ID photograph with the on-site photograph to confirm the person's identity. Face-to-ID comparison is thus one of the more popular techniques in current face recognition.
The key to face-to-ID comparison is to reduce, as far as possible, the influence of environmental factors on feature extraction from the low-resolution ID-card photograph and the dynamically captured face image. In other words, with the comparison algorithm held fixed, reducing the environmental factors that affect facial feature extraction allows the extracted feature values to effectively improve comparison accuracy.
Summary of the invention
The object of the present invention is to overcome the deficiencies of the prior art by providing an image processing method that improves face-to-ID comparison results, reducing the influence of environmental factors on the final feature extraction and recognition result so that face comparison accuracy can be improved by more than 10% over the baseline. The technical solution adopted by the present invention is as follows:
An image processing method for improving face-to-ID comparison results comprises two steps: preprocessing of the face images, and feature extraction and comparison of the face images.
First, the ID-card photograph and the dynamically captured face image are preprocessed, including:
Applying translation and rotation transformations to the dynamically captured face image to correct the position of the face in the picture; then scaling the ID photograph and the captured face image to the same size, normalizing the face images;
Then cropping the face region from the ID photograph and the captured face image respectively, and converting the two cropped face images to grayscale;
Applying histogram equalization to the two converted grayscale face images; smoothing the two equalized grayscale face images; and finally binarizing the two smoothed images, yielding two binary face images;
Finally, extracting feature points from the two preprocessed face images, building the eigenface space of the two images, and comparing the two to verify whether the person matches the ID.
The advantage of the invention is that it reduces the influence of environmental factors on the final feature extraction and recognition result, so that face comparison accuracy can be improved by more than 10% over the baseline.
Brief description of the drawings
Fig. 1 is a schematic diagram of the rotation transformation applied to the dynamically captured face image.
Fig. 2 is a schematic diagram of obtaining output-image pixel values by bilinear interpolation.
Fig. 3 is a schematic diagram of histogram equalization.
Fig. 4 is a schematic diagram of the mask used when smoothing the grayscale face image.
Fig. 5 is a flow chart of the method of the present invention.
Embodiment
The invention will be further described below with reference to the drawings and specific embodiments.
Improving face-to-ID comparison accuracy mainly involves two steps: preprocessing of the face images, and feature extraction and comparison of the face images.
Given the current situation, the best way to improve comparison accuracy is to start from the preprocessing of the face images: with the comparison algorithm held fixed, reducing as far as possible the influence of external environmental factors on facial feature extraction improves the comparison accuracy.
First, the ID-card photograph and the dynamically captured face image are preprocessed, including:
Applying translation and rotation transformations to the dynamically captured face image to correct the position of the face in the picture; then scaling the ID photograph and the captured face image to the same size, normalizing the face images;
Then cropping the face region from the ID photograph and the captured face image respectively, and converting the two cropped face images to grayscale;
Applying histogram equalization to the converted grayscale face images to highlight facial feature points; smoothing the equalized images to reduce the influence of image noise; and finally binarizing the smoothed images, yielding two binary face images;
Performing this preprocessing reduces, as far as possible, the influence of the external environment on facial feature extraction.
Finally, feature points are extracted from the two preprocessed face images, the eigenface space of the two images is built, and the two are compared to verify whether the person matches the ID.
The individual steps of the invention are detailed below.
(1) Preprocessing of the face images
(1.1) Input the ID-card photograph and the dynamically captured face image.
The ID-card photograph is obtained through an ID-card reader, which reads out the person's information, including the ID photograph. The dynamically captured face image is obtained through the camera, by reading each camera frame in real time and capturing the face from it.
(1.2) Apply a translation transformation to the dynamically captured face image.
Because the position of the face in the image captured by the camera during dynamic capture is not fixed, the final recognition result may be affected. The ID photograph read from a second-generation Chinese ID card does not have this problem, so to reduce the influence of left-right offset of the dynamically captured face on the final recognition result, a translation transformation is applied to the captured face image. Let the original coordinate system of the image be XOY and move the origin to a point O1(dx, dy) of the image, giving the new coordinate system X1O1Y1. For any pixel p of the original image, with coordinates (X, Y) and (XT, YT) in the two systems respectively, the transformation between them is formula ①:
XT = X - dx, YT = Y - dy ①
Applying the translation transformation of formula ① to the dynamically captured face image reduces the influence of left-right face offset on the final recognition result.
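The translation of formula ① can be sketched as follows; the function name `translate_image`, the list-of-rows image representation, and the border fill policy are assumptions made for illustration:

```python
def translate_image(img, dx, dy, fill=0):
    """Shift image content by (dx, dy) using formula (1): a pixel at (X, Y)
    in the new frame comes from (X - dx, Y - dy) in the original frame."""
    h, w = len(img), len(img[0])
    out = [[fill] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            xs, ys = x - dx, y - dy  # formula (1)
            if 0 <= xs < w and 0 <= ys < h:
                out[y][x] = img[ys][xs]
    return out
```

Pixels shifted in from outside the frame are filled with `fill`.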
(1.3) Apply a rotation transformation to the dynamically captured face image.
When a face is captured dynamically by the camera, the face in the captured image may not be level; if the line between the two eyes is not kept horizontal, the final face recognition result is affected. The ID photograph read from the second-generation Chinese ID card does not have this problem, so to reduce the influence of a non-horizontal eye line in the dynamically captured face image on the recognition result, a rotation transformation is applied: all pixels of the image are rotated about the midpoint between the two eyes. Assume that on the XOY coordinate system the midpoint between the eyes is the origin, the point before rotation is (x0, y0), the point after rotation is (x1, y1), the angle with the X axis is b before rotation, the image is rotated by an angle a (so that the angle with the X axis after rotation is b - a), and the distance from the point to the origin (0, 0) is r before and after rotation, as shown in Fig. 1.
(x0, y0) is converted to (x1, y1) by formulas ② and ③:
x1 = r cos(b-a) = r cos(b) cos(a) + r sin(b) sin(a) = x0 cos(a) + y0 sin(a) ②
y1 = r sin(b-a) = r sin(b) cos(a) - r cos(b) sin(a) = y0 cos(a) - x0 sin(a) ③
Rotating the dynamically captured face image with formulas ② and ③ reduces the influence of a non-horizontal eye line on the recognition result.
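Formulas ② and ③ can be sketched as a rotation of a single point about the eye midpoint; the function name and the degree-based interface are assumptions:

```python
import math

def rotate_point(x0, y0, a_deg):
    """Rotate (x0, y0) about the origin (the eye midpoint) by angle a,
    per formulas (2) and (3); the distance r to the origin is preserved."""
    a = math.radians(a_deg)
    x1 = x0 * math.cos(a) + y0 * math.sin(a)  # formula (2)
    y1 = y0 * math.cos(a) - x0 * math.sin(a)  # formula (3)
    return x1, y1
```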
(1.4) The face images from the ID photograph and the dynamic capture differ in size. Since the eyes are the most important part of a face image, differing in size here means that the interpupillary distances differ. Comparing two photographs of different sizes would affect the final recognition result, so to reduce this influence the ID photograph and the dynamically captured face image are scaled to the same size.
Suppose a point (x, y) in the image before scaling maps to the new coordinate (X, Y) after enlargement or reduction; the relation between them is formula ④:
X = a*x
Y = b*y ④
where a and b are the magnification factors in the x and y directions respectively; the image is enlarged when a, b are greater than 1 and reduced when they are less than 1.
This scales the ID photograph and the dynamically captured face image, but the scaling must deal with the rounding of the mapped image coordinates. A "reverse mapping" method is used to obtain the output-image pixel values: starting from the output image, for each output pixel the corresponding pixel value in the input image is looked up. The original coordinate found by this mapping may not be an integer, yet only integer coordinates of the input image carry pixel values, so bilinear interpolation is used to obtain the output pixel value. Suppose the required pixel value at the floating-point coordinate is f, and the pixel values of the 4 surrounding points are T1, T2, T3, T4, as shown in Fig. 2. The pixel value at f is obtained by formula ⑤:
f1 = T1×(1-u) + T2×u
f2 = T3×(1-u) + T4×u
f = f1×(1-v) + f2×v ⑤
where u and v are the horizontal and vertical distances from f to the coordinate of T1. When the original coordinate is not an integer, the output pixel value is obtained by bilinear interpolation with formula ⑤; when it is an integer, the pixel value is read directly. Scaling the ID photograph and the dynamically captured face image to the same size reduces the influence of image size on the final recognition result.
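The interpolation of formula ⑤ can be sketched as follows; boundary handling is simplified, so the sketch is valid for 0 <= x < w-1 and 0 <= y < h-1 (the function name is an assumption):

```python
import math

def bilinear(img, x, y):
    """Sample a grayscale image at a floating-point (x, y) via formula (5).
    T1, T2 are the two pixels on the row above the point, T3, T4 the two
    below; u, v are the horizontal and vertical offsets from T1."""
    x0, y0 = int(math.floor(x)), int(math.floor(y))
    u, v = x - x0, y - y0
    T1, T2 = img[y0][x0], img[y0][x0 + 1]
    T3, T4 = img[y0 + 1][x0], img[y0 + 1][x0 + 1]
    f1 = T1 * (1 - u) + T2 * u
    f2 = T3 * (1 - u) + T4 * u
    return f1 * (1 - v) + f2 * v  # formula (5)
```

When (x, y) is already an integer coordinate, u = v = 0 and the formula returns the input pixel directly, matching the text above.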
(1.5) Crop the face region from the ID photograph and the dynamically captured face image respectively.
According to the face picture rule: let the interpupillary distance be d and the midpoint between the pupils be O. Then the width of the face is 2d (twice the interpupillary distance), the distance from the forehead to O is 0.5d, and the distance from the chin to O is 1.5d. The interpupillary distance differs from person to person, so once the interpupillary distance of the face in an image is known, the extent of the face in the image can be calculated. Summarizing the previous steps: formulas ①, ② and ③ give the midpoint of the two eyes after translation and rotation, i.e. the coordinates of O; then, by the face picture rule, the rows and columns of the upper, lower, left and right face boundaries in the image can be calculated, and the image is cropped along them to obtain the cropped face image. Because the distance between the camera and the person varies during dynamic capture, the cropped face images differ: when the person is close to the camera the face area is larger and the interpupillary distance larger, and vice versa; recognizing such images directly would affect the result. The ID photograph and the captured face image are therefore scaled with formulas ④ and ⑤ so that the interpupillary distances of the two faces are the same before comparison, reducing the influence on the recognition result. The benefit of face cropping is that it reduces the influence of the hair and the photo background on the final recognition result.
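The face picture rule can be sketched as a crop-box computation from the two pupil coordinates; the function name and the (left, top, right, bottom) return convention are assumptions:

```python
def face_crop_box(left_eye, right_eye):
    """Crop rectangle from the face picture rule: interpupillary distance d,
    face width 2d, forehead 0.5d above and chin 1.5d below the midpoint O."""
    (x1, y1), (x2, y2) = left_eye, right_eye
    d = ((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5   # interpupillary distance
    ox, oy = (x1 + x2) / 2, (y1 + y2) / 2           # midpoint O
    return (ox - d, oy - 0.5 * d, ox + d, oy + 1.5 * d)
```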
(1.6) Convert the two cropped face images, obtained from the ID photograph and the dynamically captured face image respectively, to grayscale.
To speed up the algorithm, the RGB face images obtained by cropping are converted to grayscale. A grayscale image is in fact one in which the red, green and blue components of each RGB pixel are equal. Let the grayscale component value be gray; the gray value is computed by the weighted-average method, i.e. the red, green and blue components are given different weights and summed, as shown in formula ⑥:
Gray = R = G = B = 0.299R + 0.587G + 0.114B ⑥
Converting the two cropped face images obtained from the ID photograph and the dynamic capture to grayscale speeds up the subsequent feature extraction and recognition.
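Formula ⑥ can be sketched as follows; pixels are represented as (R, G, B) tuples, an assumed representation:

```python
def to_gray(rgb_img):
    """Weighted-average grayscale per formula (6): 0.299R + 0.587G + 0.114B."""
    return [[round(0.299 * r + 0.587 * g + 0.114 * b) for (r, g, b) in row]
            for row in rgb_img]
```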
(1.7) Apply histogram equalization to the two grayscale face images obtained after conversion.
At this point the cropped grayscale face images obtained from the ID photograph and the dynamic capture are available. Histogram equalization redistributes the gray values, stretching the densely populated part of the gray-value distribution and compressing the sparse part. This strengthens the contrast of the face image, makes it clearer, and highlights details and features of the face, which benefits the subsequent extraction of facial feature points. Histogram equalization operates on every pixel: the number of pixels at the current gray level and below, divided by the total number of pixels, gives a fraction that is in fact the desired gray value in the range 0 to 1; multiplying by 255 converts it to the range 0 to 255. Replacing each original gray value by the value so obtained achieves histogram equalization of the grayscale image, as shown in Fig. 3.
The histogram-equalization formula ⑦ is:
p = 255 × (n0 + n1 + … + ni) / n ⑦
where q denotes the maximum gray level, ni denotes the number of pixels at gray level i (i ∈ [0, q]), n denotes the total number of pixels in the image, and p denotes the pixel value after equalization. Applying histogram equalization to the grayscale face images with formula ⑦ highlights the details of the face, which benefits the later extraction of facial feature points.
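Formula ⑦ can be sketched as a lookup table built from the cumulative histogram (256 gray levels assumed):

```python
def equalize(gray_img, levels=256):
    """Histogram equalization per formula (7): each gray level i maps to
    round((levels - 1) * (n_0 + ... + n_i) / n)."""
    hist = [0] * levels
    pixels = [px for row in gray_img for px in row]
    for px in pixels:
        hist[px] += 1
    n, total, lut = len(pixels), 0, [0] * levels
    for i in range(levels):
        total += hist[i]                       # cumulative count up to level i
        lut[i] = round((levels - 1) * total / n)
    return [[lut[px] for px in row] for row in gray_img]
```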
(1.8) Smooth the two equalized grayscale face images.
To reduce the influence of image noise in the face images on the final recognition result, median filtering is applied to the grayscale face images. Median filtering is a nonlinear filter: a mask is selected, consisting of a pixel and some of its neighbours; the pixels in the mask are sorted, and the original pixel is replaced by the median of the mask. For example, with the 3×3 mask shown in Fig. 4, the values of pixels 1 to 9 are first stored, then sorted, the middle value is obtained, and it replaces the centre pixel of the mask. This reduces the influence of image noise in the face image on the final recognition result.
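The 3×3 median filtering of Fig. 4 can be sketched as follows; leaving the border pixels unchanged is an assumed border policy:

```python
def median3x3(img):
    """Replace each interior pixel by the median of its 3x3 neighbourhood
    (the Fig. 4 mask); border pixels are copied through unchanged."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            window = sorted(img[yy][xx]
                            for yy in (y - 1, y, y + 1)
                            for xx in (x - 1, x, x + 1))
            out[y][x] = window[4]  # median of the 9 sorted values
    return out
```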
(1.9) Binarize the two smoothed grayscale face images.
The smoothed grayscale face images are binarized; binarization converts a grayscale image into a monochrome (black and white) image, separating the hair, eyes and facial contour in the face image from the background and the bright regions of the face. The binarization formula ⑧ is:
gb(x, y) = 1 if g(x, y) > T, else gb(x, y) = 0 ⑧
where g(x, y) is the gray value of the pixel at (x, y) in the original image and gb(x, y) is its value after binarization, which can only be 1 (white) or 0 (black). In face-image binarization, pixels with value 0 are background and pixels with value 1 represent the face; T is the threshold of the binarization process: values above the threshold become 1 and values below it become 0. This yields the binarized face images of the dynamically captured face and of the ID photograph.
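Formula ⑧ can be sketched as follows; the default threshold value T = 128 is an assumption, as the patent does not fix T:

```python
def binarize(gray_img, T=128):
    """Formula (8): g_b(x, y) = 1 (face) if g(x, y) > T, else 0 (background)."""
    return [[1 if px > T else 0 for px in row] for row in gray_img]
```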
(2) Feature extraction and comparison of the face images.
(2.1) Build the eigenface spaces of the two preprocessed face images obtained from the ID photograph and the dynamically captured face image respectively.
(2.1.1) Assume first that there are N training face images; let di be the difference vector between the features of the i-th face and the features of the mean face, and let the matrix A = [d1, d2, d3, …, dN].
(2.1.2) The eigenfaces of the face images are then computed from the matrix A. The first c (c ≤ N) eigenvalues are obtained from the covariance matrix, formula ⑨:
C = (1/N)·A·A^T ⑨
Because formula ⑨ involves high-dimensional vectors, its direct computation is very expensive; A·A^T in formula ⑨ is therefore replaced by A^T·A, i.e. the eigenproblem of A·A^T is transposed into that of A^T·A. Since A^T·A is low-dimensional, its eigenvectors ui are computed first; having obtained ui, the eigenvalues λi follow from formula ⑨.
(2.1.3) After the eigenvalues and eigenvectors are computed, each eigenface Vi is obtained by formula ⑩:
Vi = A·ui / √λi ⑩
(2.1.4) The eigenfaces Vi so obtained form the final eigenface space W, i.e. W = {V1, V2, …, Vc}.
The eigenface spaces of the two face images are computed identically, as in steps (2.1.1) to (2.1.4).
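Steps (2.1.1) to (2.1.4) can be sketched with NumPy (an assumed dependency; function and variable names are illustrative). The A^T·A trick solves the small N×N eigenproblem and lifts its eigenvectors to eigenfaces:

```python
import numpy as np

def eigenfaces(faces, c):
    """Build the eigenface space W per formulas (9)-(10).
    faces: (N, m) array of N flattened training face images."""
    mean = faces.mean(axis=0)
    A = (faces - mean).T               # columns are the d_i difference vectors
    S = A.T @ A                        # small N x N surrogate for A A^T
    lam, U = np.linalg.eigh(S)         # eigenvalues in ascending order
    order = np.argsort(lam)[::-1][:c]  # keep the c largest eigenvalues
    V = (A @ U[:, order]) / np.sqrt(lam[order])  # V_i = A u_i / sqrt(lambda_i)
    return mean, V                     # columns of V span W = {V_1, ..., V_c}
```

Dividing by √λi makes each eigenface a unit vector, since ‖A·ui‖ = √(ui^T·A^T·A·ui) = √λi.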
(2.2) Compare the eigenface-space representations of the two face images obtained from the ID photograph and the dynamically captured face image.
Two eigenface spaces have now been obtained; after feature extraction, each face image ultimately becomes a point in the eigenface space, and comparing the similarity between these points realizes the comparison between the dynamically captured face image and the face image in the ID photograph, i.e. the face-to-ID comparison. A similarity threshold decides whether the person matches the ID: if the comparison result of the two eigenface representations exceeds the set similarity threshold, the person and the ID are judged consistent; if it is below the threshold, they are judged inconsistent.
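The threshold decision can be sketched as a cosine similarity between the two eigenface-space projection vectors; the choice of cosine similarity and the threshold value 0.8 are assumptions, as the patent fixes neither:

```python
def cosine_similarity(w1, w2):
    """Similarity of two eigenface-space projection vectors."""
    dot = sum(a * b for a, b in zip(w1, w2))
    n1 = sum(a * a for a in w1) ** 0.5
    n2 = sum(b * b for b in w2) ** 0.5
    return dot / (n1 * n2)

def same_person(w1, w2, threshold=0.8):
    """Person and ID are judged consistent when the similarity
    meets or exceeds the set threshold."""
    return cosine_similarity(w1, w2) >= threshold
```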

Claims (10)

  1. An image processing method for improving face-to-ID comparison results, characterized by comprising:
    first preprocessing the ID-card photograph and the dynamically captured face image, including:
    applying translation and rotation transformations to the dynamically captured face image to correct the position of the face in the picture; then scaling the ID photograph and the captured face image to normalize the face images;
    then cropping the face region from the ID photograph and the captured face image respectively, and converting the two cropped face images to grayscale;
    applying histogram equalization to the two converted grayscale face images; smoothing the two equalized grayscale face images; and finally binarizing the two smoothed images to obtain two binary face images;
    finally, extracting feature points from the two preprocessed face images, building the eigenface space of the two images, and comparing the two to verify whether the person matches the ID.
  2. The image processing method for improving face-to-ID comparison results of claim 1, characterized in that
    the translation transformation applied to the dynamically captured face image comprises: letting the original coordinate system of the image be XOY and moving the origin to a point O1(dx, dy) of the image, giving the new coordinate system X1O1Y1; for any pixel p of the original image, with coordinates (X, Y) and (XT, YT) in the two systems respectively, transforming between them by formula ①:
    XT = X - dx, YT = Y - dy ①
    the translation transformation of the dynamically captured face image being performed by formula ①.
  3. The image processing method for improving face-to-ID comparison results of claim 1, characterized in that
    the rotation transformation applied to the dynamically captured face image comprises:
    rotating all pixels of the image about the midpoint between the two eyes, assuming that on the XOY coordinate system the midpoint between the eyes is the origin, the point before rotation is (x0, y0), the point after rotation is (x1, y1), the angle with the X axis is b before rotation, the image is rotated by an angle a, and the distance from the point to the origin (0, 0) is r before and after rotation;
    (x0, y0) being converted to (x1, y1) by formulas ② and ③:
    x1 = r cos(b-a) = r cos(b) cos(a) + r sin(b) sin(a) = x0 cos(a) + y0 sin(a) ②
    y1 = r sin(b-a) = r sin(b) cos(a) - r cos(b) sin(a) = y0 cos(a) - x0 sin(a) ③
    the dynamically captured face image being rotated by formulas ② and ③.
  4. The image processing method for improving face-to-ID comparison results of claim 1, characterized in that
    scaling the ID photograph and the dynamically captured face image so that the interpupillary distances of the faces in the two images are the same comprises:
    supposing that a point (x, y) in the image before scaling maps to the new coordinate (X, Y) after enlargement or reduction, the relation between them being formula ④:
    X = a*x
    Y = b*y ④
    where a and b are the magnification factors in the x and y directions respectively, the image being enlarged when a, b are greater than 1 and reduced when they are less than 1;
    the scaling must deal with the rounding of the mapped image coordinates; a "reverse mapping" method obtains the output-image pixel values: starting from the output image, for each output pixel the corresponding pixel value in the input image is looked up; the original coordinate found by this mapping may not be an integer, yet only integer coordinates of the input image carry pixel values, so bilinear interpolation obtains the output pixel value; supposing the required pixel value at the floating-point coordinate is f and the pixel values of the 4 surrounding points are T1, T2, T3, T4,
    the pixel value at f is obtained by formula ⑤:
    f1 = T1×(1-u) + T2×u
    f2 = T3×(1-u) + T4×u
    f = f1×(1-v) + f2×v ⑤
    where u and v are the horizontal and vertical distances from f to the coordinate of T1; when the original coordinate is not an integer, the output pixel value is obtained by bilinear interpolation with formula ⑤; when it is an integer, the pixel value of the output image is read directly.
  5. The image processing method for improving person-certificate comparison results as claimed in claim 1, characterized in that
    performing face cropping on the certificate photograph image and on the dynamically captured facial image respectively comprises:
    according to the interpupillary distance of the face in the image and the proportions of a standard face picture, determining the rows and columns of the upper, lower, left and right face boundaries in the image, and cropping the image by the obtained rows and columns to obtain the cropped facial image.
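A minimal sketch of interpupillary-distance-based cropping. The claim does not fix the boundary proportions, so the constants `k_w`, `k_up`, `k_down` below are illustrative assumptions, as are the function name and the assumption that eye centers are already known.

```python
import numpy as np

def crop_face(img, left_eye, right_eye, k_w=2.0, k_up=1.0, k_down=2.2):
    """Crop a face region whose rows and columns are set proportionally
    to the interpupillary distance d (proportions k_* are illustrative)."""
    (lx, ly), (rx, ry) = left_eye, right_eye
    d = ((rx - lx) ** 2 + (ry - ly) ** 2) ** 0.5   # interpupillary distance
    cx, cy = (lx + rx) / 2.0, (ly + ry) / 2.0      # midpoint between the eyes
    h, w = img.shape[:2]
    # rows and columns of the face boundary, clamped to the image
    top    = max(int(cy - k_up * d), 0)
    bottom = min(int(cy + k_down * d), h)
    left   = max(int(cx - k_w * d / 2), 0)
    right  = min(int(cx + k_w * d / 2), w)
    return img[top:bottom, left:right]
```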
  6. The image processing method for improving person-certificate comparison results as claimed in claim 1, characterized in that
    converting into a grayscale image specifically comprises:
    letting gray denote the component value of the grayscale image, the gray value is calculated by the weighted-average method, that is, the red, green and blue components of the image are given different weights and then summed, as shown in formula ⑥:
    Gray = 0.299R + 0.587G + 0.114B (with R = G = B = Gray in the resulting image)    ⑥
    the cropped facial images obtained respectively from the certificate photograph image and from the dynamic capture are each converted into a grayscale image using formula ⑥.
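The weighted-average conversion of formula ⑥ is a one-liner in NumPy; this sketch (function name assumed) returns the floating-point gray value per pixel.

```python
import numpy as np

def to_gray(rgb):
    """Weighted-average grayscale conversion, formula 6:
    Gray = 0.299 R + 0.587 G + 0.114 B."""
    r = rgb[..., 0].astype(np.float64)
    g = rgb[..., 1].astype(np.float64)
    b = rgb[..., 2].astype(np.float64)
    return 0.299 * r + 0.587 * g + 0.114 * b
```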
  7. The image processing method for improving person-certificate comparison results as claimed in claim 1, characterized in that
    the two face grayscale images obtained after conversion are subjected to histogram equalization; the formula ⑦ of histogram equalization is shown below:
    p = 255 × Σ_{i=0}^{q} (n_i / n)    ⑦
    where q represents the gray value, n_i represents the number of pixels whose gray level is i (i ∈ [0, q]), n represents the total number of pixels of the image, and p represents the pixel value after equalization; the face grayscale images are thus histogram-equalized according to formula ⑦.
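A sketch of the equalization step using the symbols of formula ⑦ (n_i from a histogram, n the pixel count, p the cumulative fraction rescaled to the 8-bit range; the function name and the 8-bit assumption are mine, not the claim's).

```python
import numpy as np

def equalize(gray):
    """Histogram equalization per formula 7: the equalized value p for
    gray level q is the cumulative fraction of pixels with level <= q,
    rescaled to the 8-bit range."""
    gray = gray.astype(np.uint8)
    n = gray.size                                    # total number of pixels
    hist = np.bincount(gray.ravel(), minlength=256)  # n_i for each level i
    cdf = np.cumsum(hist) / n                        # sum over i <= q of n_i / n
    lut = np.round(cdf * 255).astype(np.uint8)       # p as an 8-bit value
    return lut[gray]
```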
  8. The image processing method for improving person-certificate comparison results as claimed in claim 1, characterized in that
    smoothing the face grayscale images specifically comprises applying median filtering to the face grayscale images.
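Median filtering replaces each pixel by the median of its neighbourhood, which removes impulse noise while keeping edges. A sketch with an assumed 3×3 window (the claim does not fix the window size):

```python
import numpy as np

def median_filter3(gray):
    """3x3 median filter: each pixel becomes the median of its 3x3
    neighbourhood; edges are handled by reflection padding."""
    padded = np.pad(gray, 1, mode='reflect')
    h, w = gray.shape
    # stack the 9 shifted views and take the per-pixel median
    windows = np.stack([padded[i:i + h, j:j + w]
                        for i in range(3) for j in range(3)])
    return np.median(windows, axis=0)
```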
  9. The image processing method for improving person-certificate comparison results as claimed in claim 1, characterized in that
    constructing the eigenface spaces of the two facial images specifically comprises:
    (2.1) constructing the eigenface space of each of the two preprocessed facial images obtained respectively from the certificate photograph image and from the dynamic capture;
    (2.1.1) first assume there are N training face feature vectors, let d_i be the difference vector between the i-th face feature and the average-face feature, and form the matrix A = [d_1, d_2, d_3, …, d_N];
    (2.1.2) then calculate the eigenfaces of the facial image from matrix A; the first c (c ≤ N) eigenvalues are needed, obtained from the covariance matrix, whose formula ⑨ is as follows:
    C = (1/N) A Aᵀ    ⑨
    because formula ⑨ involves high-dimensional vectors, its computational cost is very large; therefore A Aᵀ in formula ⑨ is transposed into Aᵀ A; since Aᵀ A is low-dimensional, its eigenvectors u_i can be computed first from Aᵀ A u_i = λ_i u_i, and after obtaining u_i the eigenvalues λ_i are obtained according to formula ⑨;
    (2.1.3) after the eigenvalues and eigenvectors are calculated, the eigenfaces V_i are obtained according to formula ⑩:
    V_i = (1/√λ_i) A u_i    ⑩
    (2.1.4) the eigenfaces V_i thus obtained form the final eigenface space W, i.e. W = {V_1, V_2, …, V_c};
    the eigenface spaces of the two facial images are computed by the same method, as shown in steps (2.1.1)~(2.1.4).
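Steps (2.1.1)–(2.1.4) can be sketched in NumPy as below. The function name is assumed; the 1/N factor of the covariance matrix is omitted since it only rescales the eigenvalues, and formula ⑩ then makes the columns of V unit-norm eigenfaces.

```python
import numpy as np

def eigenface_space(faces, c):
    """Build the eigenface space W = {V_1, ..., V_c} from N flattened
    training faces (the rows of `faces`), using the A^T A trick of step
    (2.1.2): eigenvectors u_i of the small N x N matrix A^T A are lifted
    to eigenfaces V_i = A u_i / sqrt(lambda_i) of A A^T."""
    mean = faces.mean(axis=0)            # the average face
    A = (faces - mean).T                 # columns are the vectors d_i
    small = A.T @ A                      # N x N instead of D x D
    lam, U = np.linalg.eigh(small)       # eigenvalues in ascending order
    order = np.argsort(lam)[::-1][:c]    # keep the c largest
    lam, U = lam[order], U[:, order]
    V = (A @ U) / np.sqrt(lam)           # formula 10, one column per eigenface
    return mean, V
```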
  10. The image processing method for improving person-certificate comparison results as claimed in claim 1, characterized in that
    comparing the two again is specifically:
    comparing the eigenface-space representations of the two facial images obtained respectively from the certificate photograph image and from the dynamic capture, and judging whether the person matches the certificate by setting a similarity threshold: if the comparison result of the two is greater than the set similarity threshold, the person and the certificate are judged consistent; if it is less than the similarity threshold, they are judged inconsistent.
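The claim fixes only the thresholding, not the similarity measure. In this sketch both faces are projected into the eigenface space W of claim 9 and compared by cosine similarity, which stands in here as an assumed measure; the function name, signature and default threshold are likewise assumptions.

```python
import numpy as np

def verify(id_face, live_face, mean, V, threshold=0.8):
    """Project both preprocessed face vectors into the eigenface space
    (columns of V) and compare the projections; returns True when the
    similarity exceeds the set threshold, i.e. person and certificate
    are judged consistent."""
    w1 = V.T @ (id_face - mean)      # projection of the certificate face
    w2 = V.T @ (live_face - mean)    # projection of the captured face
    sim = (w1 @ w2) / (np.linalg.norm(w1) * np.linalg.norm(w2))
    return sim > threshold
```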
CN201710867260.2A 2017-09-22 2017-09-22 Image processing method for improving person-certificate comparison results Pending CN107742094A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710867260.2A CN107742094A (en) 2017-09-22 2017-09-22 Image processing method for improving person-certificate comparison results


Publications (1)

Publication Number Publication Date
CN107742094A (en) 2018-02-27


Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108875556A (en) * 2018-04-25 2018-11-23 Beijing Megvii Technology Co., Ltd. Person-certificate verification method, apparatus, system and computer storage medium
CN108875623A (en) * 2018-06-11 2018-11-23 Liaoning University of Technology Face recognition method based on a multi-feature fusion comparison technique
CN109087429A (en) * 2018-09-19 2018-12-25 Chongqing University of Education Library-card person-certificate consistency checking method based on face recognition technology
CN111339943A (en) * 2020-02-26 2020-06-26 Chongqing Zhongke CloudWalk Technology Co., Ltd. Object management method, system, platform, equipment and medium
CN111931771A (en) * 2020-09-16 2020-11-13 Shenzhen OneConnect Smart Technology Co., Ltd. Bill content identification method, device, medium and electronic equipment
WO2021218183A1 (en) * 2020-04-30 2021-11-04 Ping An Technology (Shenzhen) Co., Ltd. Certificate edge detection method and apparatus, and device and medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101159015A (en) * 2007-11-08 2008-04-09 Tsinghua University Two-dimensional human face image recognition method
CN101908121A (en) * 2010-06-01 2010-12-08 Fujian Newland Computer Co., Ltd. Two-dimensional coordinate scanning device for bar code image processing
CN102902959A (en) * 2012-04-28 2013-01-30 Wang Hao Face recognition method and system based on the second-generation identity card storing an identification photo
CN103914683A (en) * 2013-12-31 2014-07-09 Wingtech Communications Co., Ltd. Gender identification method and system based on face images
CN104866862A (en) * 2015-04-27 2015-08-26 Central South University Strip steel surface area-type defect identification and classification method
US20160217320A1 * 2014-04-14 2016-07-28 International Business Machines Corporation Facial recognition with biometric pre-filters
CN106650623A (en) * 2016-11-18 2017-05-10 Guangdong University of Technology Face-detection-based method for verifying person and identity document for exit and entry



Similar Documents

Publication Publication Date Title
CN107742094A (en) Image processing method for improving person-certificate comparison results
Adouani et al. Comparison of Haar-like, HOG and LBP approaches for face detection in video sequences
CN104766063B (en) A living-body face recognition method
CN111401372B (en) Method for extracting and identifying image-text information of scanned document
WO2015149534A1 (en) Gabor binary pattern-based face recognition method and device
CN105956578A (en) Face verification method based on identity document information
Anand et al. An improved local binary patterns histograms techniques for face recognition for real time application
CN107066969A (en) A face recognition method
CN109740572A (en) A face liveness detection method based on local color texture features
CN111339897B (en) Living body identification method, living body identification device, computer device, and storage medium
CN110008793A (en) Face identification method, device and equipment
CN111209873A (en) High-precision face key point positioning method and system based on deep learning
CN111178130A (en) Face recognition method, system and readable storage medium based on deep learning
Devadethan et al. Face detection and facial feature extraction based on a fusion of knowledge based method and morphological image processing
CN107368819A (en) Face identification method and system
CN111259792A (en) Face living body detection method based on DWT-LBP-DCT characteristics
CN106886771A (en) Modular-PCA-based image principal information extraction and face recognition method
Liang et al. PKLNet: Keypoint localization neural network for touchless palmprint recognition based on edge-aware regression
Fathy et al. Benchmarking of pre-processing methods employed in facial image analysis
Ribarić et al. Personal recognition based on the Gabor features of colour palmprint images
KR20180092453A (en) Face recognition method Using convolutional neural network and stereo image
CN110223273A (en) Image inpainting forensics method combining discrete cosine transform and neural networks
CN106228163B (en) Feature-selection-based local difference ternary sequence image feature description method
Das et al. Person identification through IRIS recognition
CN107463864A (en) Face recognition system incorporating sequence-image super-resolution reconstruction

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20180227