CN105100547A - Liveness testing methods and apparatuses and image processing methods and apparatuses - Google Patents


Info

Publication number
CN105100547A
Authority
CN
China
Prior art keywords
image
pixel
diffusion
characteristic
input picture
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201510208889.7A
Other languages
Chinese (zh)
Other versions
CN105100547B (en)
Inventor
金沅俊
徐成住
韩在濬
黄元俊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Samsung Electronics Co Ltd
Original Assignee
Samsung Electronics Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from KR1020140077333A (KR102257897B1)
Application filed by Samsung Electronics Co Ltd
Publication of CN105100547A
Application granted
Publication of CN105100547B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/50 Depth or shape recovery
    • G06T7/55 Depth or shape recovery from multiple images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/40 Spoof detection, e.g. liveness detection
    • G06V40/45 Detection of the body part being alive
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10004 Still image; Photographic image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10028 Range image; Depth image; 3D point clouds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30196 Human being; Person
    • G06T2207/30201 Face

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Quality & Reliability (AREA)
  • Collating Specific Patterns (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)
  • Investigating Materials By The Use Of Optical Means Adapted For Particular Applications (AREA)

Abstract

A liveness testing apparatus includes a testing circuit. The testing circuit is configured to test a liveness of an object included in a received input image based on whether an image of the object has a characteristic indicative of a flat surface or a characteristic indicative of a three-dimensional (3D) structure.

Description

Liveness testing methods and apparatuses and image processing methods and apparatuses
This application claims priority to Korean Patent Application No. 10-2014-0055687, filed in the Korean Intellectual Property Office on May 9, 2014, and Korean Patent Application No. 10-2014-0077333, filed in the Korean Intellectual Property Office on June 24, 2014, the entire contents of which are incorporated herein by reference.
Technical field
One or more example embodiments relate to liveness testing methods, liveness testing apparatuses, image processing methods, image processing apparatuses, and/or electronic devices including such methods and apparatuses.
Background
Biometric technology can identify a person based on biological characteristics unique to each user. Among conventional biometric technologies, face recognition systems can identify a user naturally, based on the user's face, without requiring the user to contact a sensor (e.g., a fingerprint scanner). However, conventional face recognition systems may be vulnerable to spoofing with a picture of the face of a registered target.
Summary of the invention
At least one example embodiment provides a liveness testing method, including: testing a liveness of an object included in a received input image based on whether an image of the object has a characteristic indicative of a flat surface or a characteristic indicative of a three-dimensional (3D) structure.
The image of the object included in the received input image may correspond to a face.
According to at least some example embodiments, the method may further include: determining whether the image of the object has the characteristic indicative of a flat surface or the characteristic indicative of the 3D structure based on a distribution of light energy among a plurality of pixels corresponding to the image of the object.
According to at least some example embodiments, the method may further include: determining whether the image of the object has the characteristic indicative of a flat surface or the characteristic indicative of the 3D structure based on a degree of uniformity of the distribution of light energy among a plurality of pixels corresponding to the image of the object.
According to at least some example embodiments, the method may further include: determining whether the image of the object has the characteristic indicative of a flat surface or the characteristic indicative of the 3D structure based on statistical information related to diffusion speeds of a plurality of pixels corresponding to the image of the object.
According to at least some example embodiments, the method may further include: iteratively calculating values of the plurality of pixels based on a diffusion equation; and calculating a diffusion speed of each of the plurality of pixels based on a difference between the pixel value before each iteration and the pixel value after each iteration. The statistical information related to the diffusion speeds may include at least one of: a number of pixels, among the plurality of pixels, whose diffusion speed is greater than or equal to a threshold; a distribution of the pixels whose diffusion speed is greater than or equal to the threshold; an amount of noise components included in a small-scale region extracted based on the magnitudes of the diffusion speeds; an average of the diffusion speeds; a standard deviation of the diffusion speeds; and a filter response based on the diffusion speeds.
According to at least some example embodiments, the method may further include: filtering the received input image to generate a filtered image; and determining whether the image of the object has the characteristic indicative of a flat surface or the characteristic indicative of the 3D structure based on statistical information related to changes in pixel values, wherein the changes in pixel values are changes between the values of a plurality of pixels corresponding to the image of the object in the received input image and the values of the corresponding pixels in the filtered image.
The filtering may include: diffusing the received input image to generate a diffused image; and calculating a diffusion speed of each of the plurality of pixels corresponding to the image of the object based on a difference between the value of each pixel in the received input image and the value of the corresponding pixel in the diffused image; wherein the determining determines, based on the calculated diffusion speeds, whether the image of the object has the characteristic indicative of a flat surface or the characteristic indicative of the 3D structure.
According to at least some example embodiments, the method may further include at least one of: outputting a signal corresponding to a failed test when the object is determined to have the characteristic indicative of a flat surface; and outputting a signal corresponding to a successful test when the object is determined to have the characteristic indicative of the 3D structure.
For example, the input image may be a single image including a face.
At least one other example embodiment provides a liveness testing method, including: filtering a received image, which includes an image of an object, to generate a filtered image; determining magnitudes of changes in pixel values between the image of the object in the received image and in the filtered image; and testing a liveness of the object based on the magnitudes of the changes.
The filtering may include diffusing pixels corresponding to the image of the object in the received image to generate a diffused image. The method may further include calculating diffusion speeds of the pixels corresponding to the image of the object based on the values of those pixels in the received image and in the diffused image, and the testing may test the liveness of the object based on the calculated diffusion speeds.
According to at least some example embodiments, the diffusing may include iteratively updating the values of the pixels based on a diffusion equation. The iterative updating may update the values of the pixels by applying an additive operator splitting (AOS) scheme to the diffusion equation.
According to at least some example embodiments, the testing may include: estimating a surface property of the object based on the diffusion speeds; and testing the liveness of the object based on the estimated surface property.
The surface property may include at least one of: a reflective property of a surface of the object; a dimension of the surface of the object; and a material of the surface of the object.
According to at least some example embodiments, the estimating may include: analyzing, based on the diffusion speeds, a distribution of light energy included in the image of the object to estimate the surface property.
According to at least some example embodiments, the method may further include at least one of: outputting a signal corresponding to a failed test when the estimated surface property corresponds to a surface property of a medium displaying a face; and outputting a signal corresponding to a successful test when the estimated surface property corresponds to a surface property of an actual face.
According to at least some example embodiments, the testing may further include: calculating statistical information related to the diffusion speeds; and testing the liveness of the object based on the calculated statistical information.
According to at least some example embodiments, the calculating of the statistical information may include at least one of: calculating a number of pixels whose diffusion speed is greater than or equal to a threshold; calculating a distribution of the pixels whose diffusion speed is greater than or equal to the threshold; calculating at least one of an average and a standard deviation of the diffusion speeds; and calculating a filter response based on the diffusion speeds.
According to at least some example embodiments, the calculating of the statistical information may further include: extracting a small-scale region from the received image based on the magnitudes of the diffusion speeds; and extracting a characteristic of the small-scale region; wherein the testing tests the liveness of the object based on the extracted characteristic.
The characteristic of the small-scale region may include an amount of noise components included in the small-scale region, and the noise components may be calculated based on a difference between the small-scale region and a result of applying median filtering to the small-scale region.
According to at least some example embodiments, the method may further include: outputting a signal corresponding to a failed test when the statistical information corresponds to statistical information associated with a medium displaying a face; and outputting a signal corresponding to a successful test when the statistical information corresponds to statistical information associated with an actual face.
According to at least some example embodiments, the calculating of the diffusion speeds may include: calculating the diffusion speed of each pixel based on the original value of the pixel before diffusion and the diffused value of the pixel after diffusion. The calculated diffusion speed of a pixel increases as the difference between the original value and the diffused value increases, and decreases as that difference decreases.
At least one other example embodiment provides an image processing method, including: receiving a first image including an illumination component and a non-illumination component; filtering a plurality of pixels included in the first image to generate a second image associated with the illumination component; and generating a third image associated with the non-illumination component based on the first image and the second image.
The second image may be a diffused image, and the filtering may include diffusing the plurality of pixels included in the first image to generate the diffused image.
According to at least some example embodiments, the method may further include at least one of: recognizing a face based on the third image; and verifying a user based on the third image.
The diffusing may include iteratively updating the values of the plurality of pixels by applying an additive operator splitting (AOS) scheme to a diffusion equation.
The third image may be generated based on at least one of: a ratio of the first image to the second image; and a difference between the first image and the second image in a log domain.
The non-illumination component may be included in a small-scale region and the illumination component may be included in a large-scale region, wherein the small-scale region is relatively unaffected by lighting changes and the large-scale region is sensitive to lighting changes.
Diffusion speeds of pixels corresponding to the non-illumination component may be higher than diffusion speeds of pixels corresponding to the illumination component.
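To make the relationship between the three images concrete, the following is a minimal sketch of generating the third image, assuming grayscale floating-point arrays; the function name and the small eps stabilizer are illustrative assumptions rather than details from the patent.

import numpy as np

def non_illumination_image(first: np.ndarray, second: np.ndarray,
                           eps: float = 1e-6) -> np.ndarray:
    # Third image from the ratio of the first image to the (filtered) second image.
    # The log-domain variant would be np.log(first + eps) - np.log(second + eps).
    first = first.astype(np.float64)
    second = second.astype(np.float64)
    return first / (second + eps)  # eps (assumed) avoids division by zero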
At least one other example embodiment provides a non-transitory computer-readable medium including a program that, when executed on a computing device, causes the computing device to perform a liveness testing method including: testing a liveness of an object included in a received input image based on whether an image of the object has a characteristic indicative of a flat surface or a characteristic indicative of a three-dimensional (3D) structure.
At least one other example embodiment provides a non-transitory computer-readable medium including a program that, when executed on a computing device, causes the computing device to perform a liveness testing method including: filtering a received image, which includes an image of an object, to generate a filtered image; determining magnitudes of changes in pixel values between the image of the object in the received image and in the filtered image; and testing a liveness of the object based on the magnitudes of the changes.
At least one other example embodiment provides a non-transitory computer-readable medium including a program that, when executed on a computing device, causes the computing device to perform an image processing method including: receiving a first image including an illumination component and a non-illumination component; filtering a plurality of pixels included in the first image to generate a second image associated with the illumination component; and generating a third image associated with the non-illumination component based on the first image and the second image.
At least one other example embodiment provides a liveness testing apparatus, including: a testing circuit configured to test a liveness of an object included in a received input image based on whether an image of the object has a characteristic indicative of a flat surface or a characteristic indicative of a three-dimensional (3D) structure.
The testing circuit may be further configured to: filter the received input image to generate a filtered image; and determine whether the image of the object has the characteristic indicative of a flat surface or the characteristic indicative of the 3D structure based on statistical information related to changes between the values of a plurality of pixels corresponding to the image of the object in the received input image and the values of the corresponding pixels in the filtered image.
The testing circuit may be further configured to: diffuse the received input image to generate a diffused image; calculate a diffusion speed of each of the plurality of pixels corresponding to the image of the object based on a difference between the value of each pixel in the received input image and the value of the corresponding pixel in the diffused image; and determine, based on the calculated diffusion speeds, whether the image of the object has the characteristic indicative of a flat surface or the characteristic indicative of the 3D structure.
At least one other example embodiment provides an image processing apparatus, including: a receiver circuit configured to receive a first image including an illumination component and a non-illumination component; a filter circuit configured to filter a plurality of pixels included in the first image to generate a second image associated with the illumination component; and a generator circuit configured to generate a third image associated with the non-illumination component based on the first image and the second image.
Brief description of the drawings
Example embodiments will become more apparent and more readily appreciated from the following description of the example embodiments shown in the accompanying drawings, in which:
Figures 1A and 1B illustrate liveness testing according to example embodiments;
Figure 2 illustrates a principle of liveness testing according to example embodiments;
Figure 3 illustrates a liveness testing apparatus according to example embodiments;
Figure 4 illustrates a diffusion process according to example embodiments;
Figure 5 illustrates example small-scale region (SR) maps according to example embodiments;
Figure 6 illustrates a liveness testing apparatus according to example embodiments;
Figure 7 illustrates an example input image and example images processed according to example embodiments;
Figure 8 illustrates example changes in an input image resulting from lighting changes, according to example embodiments;
Figure 9 illustrates an image processing apparatus according to example embodiments;
Figure 10 illustrates a liveness testing method according to example embodiments;
Figure 11 illustrates an image processing method according to example embodiments;
Figure 12 illustrates an image processing and authentication/verification method according to another example embodiment;
Figure 13 is a block diagram illustrating an electronic system according to example embodiments.
Detailed description
Various example embodiments will now be described more fully with reference to the accompanying drawings, in which some example embodiments are shown.
Detailed illustrative embodiments are disclosed herein. However, specific structural and functional details disclosed herein are merely representative for purposes of describing example embodiments. Example embodiments may be embodied in many alternative forms and should not be construed as limited to only the embodiments set forth herein.
Accordingly, while example embodiments are capable of various modifications and alternative forms, embodiments thereof are shown by way of example in the drawings and will herein be described in detail. It should be understood, however, that there is no intent to limit example embodiments to the particular forms disclosed. On the contrary, example embodiments are to cover all modifications, equivalents, and alternatives falling within the scope of the disclosure. Like numbers refer to like elements throughout the description of the figures.
It will be understood that, although the terms "first", "second", etc. may be used herein to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another. For example, a first element could be termed a second element, and similarly, a second element could be termed a first element, without departing from the scope of the disclosure. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.
It will be understood that when an element is referred to as being "connected" or "coupled" to another element, it can be directly connected or coupled to the other element, or intervening elements may be present. In contrast, when an element is referred to as being "directly connected" or "directly coupled" to another element, there are no intervening elements present. Other words used to describe relationships between elements should be interpreted in a like fashion (e.g., "between" versus "directly between", "adjacent" versus "directly adjacent", etc.).
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting. As used herein, the singular forms are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising", when used herein, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It should also be noted that, in some alternative implementations, the functions/acts noted may occur out of the order noted in the figures. For example, two figures shown in succession may in fact be executed substantially concurrently, or may sometimes be executed in the reverse order, depending on the functions/acts involved.
Specific details are provided in the following description to provide a thorough understanding of example embodiments. However, it will be understood by one of ordinary skill in the art that example embodiments may be practiced without these specific details. For example, systems may be shown in block diagram form so as not to obscure example embodiments in unnecessary detail. In other instances, well-known processes, structures, and techniques may be shown without unnecessary detail in order to avoid obscuring example embodiments.
In the following description, example embodiments are described with reference to acts and symbolic representations of operations (e.g., in the form of flow charts, flow diagrams, data flow diagrams, structure diagrams, block diagrams, etc.) that may be implemented as program modules or functional processes and may be implemented using existing hardware in existing electronic devices (e.g., smart phones, personal digital assistants, laptop computers, tablet computers, etc.). Program modules or functional processes include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. Such existing hardware may include one or more central processing units (CPUs), graphics processing units (GPUs), image processors, system-on-chip (SoC) devices, digital signal processors (DSPs), application-specific integrated circuits (ASICs), field-programmable gate arrays (FPGAs), computers, or the like.
Although a flow chart may describe operations as a sequential process, many of the operations may be performed in parallel, concurrently, or simultaneously. In addition, the order of the operations may be rearranged. A process may be terminated when its operations are completed, but may also have additional steps not included in the figure. A process may correspond to a method, a function, a procedure, a subroutine, a subprogram, etc. When a process corresponds to a function, its termination may correspond to a return of the function to the calling function or the main function.
Reference will now be made in detail to the example embodiments illustrated in the accompanying drawings, wherein like reference numerals refer to like elements throughout. Example embodiments are described below with reference to the drawings in order to explain the disclosure. One or more example embodiments described below are applicable to various fields, such as smart phones, laptop or tablet computers, smart televisions (TVs), smart home systems, smart cars, surveillance systems, etc. For example, one or more example embodiments may be used to test the liveness of an input image and/or authenticate a user to log in to a smart phone or other device. In addition, one or more example embodiments may be used to test the liveness of an input image and/or authenticate a user in order to control and/or monitor public areas and/or security areas.
Liveness testing according to example embodiments
Figures 1A and 1B illustrate liveness testing according to example embodiments.
According to at least some example embodiments, liveness testing refers to testing (or determining) whether an object included in an input image corresponds to a genuine three-dimensional object. In one example, liveness testing may verify whether a face included in an input image corresponds to (or was obtained from) a genuine three-dimensional (3D) object, such as an actual face, or a fake two-dimensional (2D) representation of the object, such as a picture of the face. Through liveness testing, attempts to spoof verification of another person's face using a forged and/or falsified picture can be effectively rejected.
Referring to Figures 1A and 1B, according to at least one example embodiment, a liveness testing apparatus 110 receives an input image including the face of a user 120, and tests the liveness of the face included in the received input image. In one example, the liveness testing apparatus 110 may be (or be included in) a mobile device, such as a mobile phone, a smart phone, a personal digital assistant (PDA), a tablet computer, a laptop computer, etc. In another example, the liveness testing apparatus 110 may be (or be included in) a computing device, such as a personal computer (PC) or an electronic product (e.g., a TV, a security device for a gate, etc.). The liveness testing apparatus 110 may receive the input image from an image sensor 115 that photographs the face of the user 120. The image sensor 115 may also be a part of the liveness testing apparatus 110.
In one example, as shown in Figure 1A, the input image is generated by photographing the actual face of the user 120. In this example, the liveness testing apparatus 110 determines that the face included in the input image corresponds to a genuine (or live) three-dimensional object, and outputs a signal indicating that the face included in the input image corresponds to a genuine three-dimensional object. That is, for example, the liveness testing apparatus 110 tests whether the face included in the input image corresponds to a genuine (or live) three-dimensional object, and because the face in the input image does correspond to a genuine (or live) three-dimensional object, outputs a signal indicating that the test succeeded.
In another example, as shown in Figure 1B, the input image is generated by photographing a face displayed on a display medium 125 rather than the actual face of the user 120. In this example at least, a display medium refers to a medium that displays an object (e.g., a face) two-dimensionally. The display medium 125 may include, for example, a page (e.g., a photograph) on which the user's face is printed, an electronic device displaying the user's face, etc. In one example scenario, the user 120 attempts to log in to an electronic device (e.g., a smart phone, etc.) using the account of another user by directing the face displayed on the display medium 125 toward the image sensor 115. In Figure 1B, the face displayed on the display medium 125 is marked with broken lines to indicate that the face displayed on the display medium 125, and not the user 120, is directed toward the image sensor 115. In this example, the liveness testing apparatus 110 determines that the face included in the input image corresponds to a fake two-dimensional representation of the object, and outputs a signal indicating as much. That is, for example, the liveness testing apparatus 110 tests whether the face included in the input image corresponds to a genuine (or live) three-dimensional object, and because the face in the input image does not correspond to a genuine (or live) three-dimensional object, but rather to a fake two-dimensional representation of the object, outputs a signal indicating that the test failed. In some cases, the term "fake object" may be used to refer to a fake two-dimensional representation of an object.
The liveness testing apparatus 110 may detect a facial region from the input image. In this example, the liveness testing methods and apparatuses are applied to the facial region detected from the input image.
Figure 2 illustrates a principle of liveness testing according to example embodiments.
A liveness testing apparatus according to this example embodiment tests the liveness of an object included in an input image based at least on whether the object has one or more characteristics of a flat (two-dimensional) surface or one or more characteristics of a three-dimensional (3D) structure.
Referring to Figure 2, the liveness testing apparatus distinguishes between a face 211 displayed on a medium 210 and the actual face 220 of a user. The face 211 displayed on the medium 210 corresponds to a two-dimensional (2D) plane. When the input image is generated by photographing the face 211 displayed on the medium 210, the object included in the input image has one or more characteristics of a flat surface. Because the surface of the medium 210 corresponds to a 2D plane, light 215 incident on the face 211 displayed on the medium 210 is reflected relatively uniformly by the surface of the medium 210. As a result, light energy is distributed relatively uniformly over the object included in the input image. Even if the medium 210 is bent, the surface of the bent display still corresponds to a 2D plane and retains the characteristics of a 2D plane.
In contrast, the actual face 220 of the user is a 3D structure. When the input image is generated by photographing the actual face 220 of the user, the object included in the input image has the characteristics of a 3D structure. Because the actual face 220 of the user corresponds to a 3D structure with various 3D curves and shapes, light 225 incident on the actual face 220 of the user is reflected relatively non-uniformly (or unevenly) by the surface of the actual face 220. As a result, light energy is distributed relatively non-uniformly (or unevenly) over the object included in the input image.
According to at least one example embodiment, the liveness testing apparatus tests the liveness of the object included in the input image based on the distribution of light energy in the object. In one example, the liveness testing apparatus analyzes the distribution of light energy in the object included in the input image to determine whether the object has the characteristics of a 2D plane or the characteristics of a 3D structure.
Still referring to Figure 2, in one example, the input image is generated by photographing the face 211 displayed on the medium 210. In this case, the liveness testing apparatus analyzes the distribution of light energy in the face in the input image and determines that the face in the input image has one or more characteristics of a flat surface. The liveness testing apparatus then determines that the face in the input image corresponds to a fake object, and outputs a signal indicating that the face in the input image corresponds to a fake object (e.g., indicating that the test failed).
In another example, the input image is generated by photographing the actual face 220 of the user. In this case, the liveness testing apparatus analyzes the distribution of light energy in the face in the input image and determines that the face in the input image has one or more characteristics of a 3D structure. The liveness testing apparatus then determines that the face in the input image corresponds to a genuine 3D object, and outputs a signal indicating that the face in the input image corresponds to a genuine 3D object (e.g., indicating that the test succeeded).
According to at least some example embodiments, the liveness testing apparatus may determine the liveness of the object included in the input image based on the degree of uniformity of the distribution of light energy in the object. Referring again to Figure 2, in one example, because the light 215 incident on the face 211 displayed on the medium 210 is reflected substantially uniformly, the light energy in the face in the input image is distributed substantially uniformly. When the degree of uniformity of the distribution of light energy in the face in the input image is greater than or equal to a given (or alternatively, desired or predetermined) threshold level of uniformity, the liveness testing apparatus determines that the face in the input image has one or more characteristics of a 2D plane. In this case, the liveness testing apparatus determines that the face in the input image corresponds to a fake object, and outputs a signal indicating that the face in the input image corresponds to a fake object (e.g., indicating that the test failed).
In another example with regard to Figure 2, because the light 225 incident on the actual face 220 of the user is reflected relatively non-uniformly (or unevenly), the light energy in the face in the input image has a relatively non-uniform (or uneven) distribution. When the degree of uniformity of the distribution of light energy in the face in the input image is less than the given threshold level of uniformity, the liveness testing apparatus determines that the face in the input image has one or more characteristics of a 3D structure. In this case, the liveness testing apparatus determines that the face in the input image corresponds to a genuine 3D object, and outputs a signal indicating that the face in the input image corresponds to a genuine 3D object (e.g., indicating that the test succeeded).
In one example, the threshold level of uniformity may be a value corresponding to a case in which about 50% or more of the pixels included in an image portion (e.g., more generally, a region indicated by a frame corresponding to the facial region) correspond to the facial region.
The liveness testing apparatus may test the liveness of the object based on a single input image. The single input image may correspond to a single picture, a single still image, a single frame, etc. The liveness testing apparatus tests the liveness of the object included in the single input image by determining whether the object has one or more characteristics of a 2D plane or one or more characteristics of a 3D structure. In more detail, for example, the liveness testing apparatus tests the liveness of the object included in the single input image by calculating the degree of uniformity of the distribution of light energy in the object.
Figure 3 illustrates a liveness testing apparatus 310 according to example embodiments.
Referring to Figure 3, the liveness testing apparatus 310 includes a receiver 311 and a tester 312.
In an example operation, the receiver 311 receives an input image. The receiver 311 may receive an input image generated by an image sensor (not shown). The receiver 311 may be connected to the image sensor by wire, wirelessly, or via a network. Alternatively, the receiver 311 may receive the input image from a storage device, such as a main memory, a cache memory, a hard disk drive (HDD), a solid-state drive (SSD), a flash memory device, a network drive, etc.
The tester 312 tests the liveness of the object included in the input image. As discussed above, the tester 312 tests the liveness of the object by determining whether the object has one or more characteristics of a 2D plane or one or more characteristics of a 3D structure. In one example, a successful test is a test in which the object is determined to have one or more characteristics of a 3D structure, and a failed test is a test in which the object is determined to have one or more characteristics of a 2D plane. In more detail, for example, the tester 312 tests the liveness of the object by analyzing the distribution of light energy in the object in the input image. In a more specific example, the tester 312 tests the liveness of the object by calculating the degree of uniformity of the distribution of light energy in the object in the input image and comparing that degree of uniformity with a threshold. If the determined degree of uniformity is greater than or equal to the threshold, the object in the input image is determined to correspond to (to have been obtained from) a 2D plane. On the other hand, if the determined degree of uniformity is less than the threshold, the object in the input image is determined to correspond to (to have been obtained from) a 3D structure.
According to at least some example embodiments, the tester 312 may filter a plurality of pixels corresponding to the object included in the input image in order to analyze the distribution of light energy in those pixels. In one example, the tester 312 may filter the plurality of pixels using a diffusion process. In this example, the tester 312 may diffuse the plurality of pixels corresponding to the object included in the input image to analyze the distribution of light energy in those pixels. An example diffusion process is described in more detail below with reference to Figure 4.
Although example embodiments may be discussed in detail with regard to a diffusion process, it should be understood that any suitable filtering process may be used in conjunction with example embodiments. In one example, example embodiments may utilize bilateral filtering. Because bilateral filtering is generally well known, a detailed description is omitted. In addition, any suitable filtering that preserves edge regions and blurs non-edge regions in a manner similar or substantially similar to diffusion and bilateral filtering may be used in conjunction with the example embodiments discussed herein.
Figure 4 illustrates a diffusion process according to example embodiments.
According to at least some example embodiments, the liveness testing apparatus may diffuse a plurality of pixels corresponding to the object included in the input image. The liveness testing apparatus may iteratively update the values of the plurality of pixels using a diffusion equation. In one example, the liveness testing apparatus may diffuse the plurality of pixels corresponding to the object included in the input image according to Equation 1 below.
[Equation 1]
u^(k+1) = u^k + div(d(|∇u^k|) ∇u^k)
In Equation 1, k denotes the iteration count, u^k denotes the pixel value after the k-th iteration, and u^(k+1) denotes the pixel value after the (k+1)-th iteration. The pixel value in the input image is denoted u^0.
Still referring to Equation 1, ∇ denotes the gradient operator, div(·) denotes the divergence operator, and d(·) denotes the diffusivity function.
The diffusivity function d(·) may be a given (or alternatively, desired or predetermined) function. In one example, the diffusivity function may be defined as shown in Equation 2 below.
[Equation 2]
d(|∇u|) = 1 / (|∇u| + β)
In Equation 2, β denotes a relatively small positive number (e.g., a small value such as about 10^-6). When the diffusivity function defined as in Equation 2 is used, the boundaries of the object can be preserved relatively well during the diffusion process. When the diffusivity function is a function of the pixel gradient as in Equation 2, the diffusion equation is a nonlinear diffusion equation.
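For concreteness, the following is a minimal sketch of iterating Equations 1 and 2 with a simple explicit update, assuming a grayscale floating-point image; the adaptive step size and the periodic boundary handling via np.roll are implementation assumptions. With β as small as about 10^-6, the stable explicit step becomes tiny, which illustrates why the AOS scheme described next is attractive.

import numpy as np

def diffuse_explicit(u0: np.ndarray, iterations: int = 20, beta: float = 1e-6) -> np.ndarray:
    # Iterates u_(k+1) = u_k + div(d(|grad u_k|) * grad u_k)  (Equations 1 and 2).
    u = u0.astype(np.float64)
    for _ in range(iterations):
        gx = np.roll(u, -1, axis=1) - u                # forward difference in x
        gy = np.roll(u, -1, axis=0) - u                # forward difference in y
        d = 1.0 / (np.sqrt(gx * gx + gy * gy) + beta)  # Equation 2
        fx, fy = d * gx, d * gy
        div = (fx - np.roll(fx, 1, axis=1)) + (fy - np.roll(fy, 1, axis=0))
        tau = 0.25 / d.max()                           # step small enough to keep the explicit scheme stable
        u = u + tau * div
    return u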
The liveness testing apparatus may apply an additive operator splitting (AOS) scheme to solve Equation 1, and may diffuse the plurality of pixels corresponding to the object included in the input image according to Equation 3 below.
[Equation 3]
u^(k+1) = (1/2) ((I - 2τA_x(u^k))^(-1) + (I - 2τA_y(u^k))^(-1)) u^k
In Equation 3, I denotes the identity matrix, A_x denotes the horizontal diffusion matrix, A_y denotes the vertical diffusion matrix, and τ denotes the time step. The final iteration count L and the time step τ may be given (or alternatively, desired or predetermined). In general, when the time step τ is set relatively small and the final iteration count L is set relatively large, the reliability of u^L, the final diffused pixel values, increases.
According to at least some example embodiments, the liveness testing apparatus may use the AOS scheme to solve Equation 1 in order to reduce the final iteration count L. When the AOS scheme is used, the reliability of u^L can be sufficiently high even with a sizable time step τ. The liveness testing apparatus may use the AOS scheme for solving the diffusion equation to improve the computational efficiency of the diffusion process, and may perform the diffusion process using a relatively small amount of processor and/or memory resources.
The liveness testing apparatus may use the AOS scheme for solving the diffusion equation to effectively preserve the texture of the input image. Even in relatively low-light and backlit environments, the liveness testing apparatus can effectively preserve the original texture of the input image.
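The following sketch shows one AOS iteration of Equation 3 for a grayscale floating-point image; the tridiagonal assembly, the Neumann-style boundary handling, and the default τ are implementation assumptions.

import numpy as np
from scipy.linalg import solve_banded

def aos_iteration(u: np.ndarray, tau: float = 5.0, beta: float = 1e-6) -> np.ndarray:
    # One step of Equation 3: u_(k+1) = 0.5 * ((I - 2τAx)^-1 + (I - 2τAy)^-1) u_k.
    gx = np.roll(u, -1, axis=1) - u
    gy = np.roll(u, -1, axis=0) - u
    d = 1.0 / (np.sqrt(gx * gx + gy * gy) + beta)  # diffusivity, Equation 2

    def solve_lines(lines: np.ndarray, diff: np.ndarray) -> np.ndarray:
        # Solves (I - 2*tau*A) v = f along every 1D line, with A tridiagonal from diff.
        out = np.empty_like(lines)
        n = lines.shape[1]
        for i in range(lines.shape[0]):
            w = 0.5 * (diff[i, :-1] + diff[i, 1:])  # diffusivity between neighbors
            ab = np.zeros((3, n))
            ab[0, 1:] = -2.0 * tau * w              # superdiagonal
            ab[2, :-1] = -2.0 * tau * w             # subdiagonal
            ab[1, :] = 1.0
            ab[1, :-1] += 2.0 * tau * w
            ab[1, 1:] += 2.0 * tau * w
            out[i] = solve_banded((1, 1), ab, lines[i])
        return out

    vx = solve_lines(u, d)        # row-wise solve: (I - 2τAx)^-1 u
    vy = solve_lines(u.T, d.T).T  # column-wise solve: (I - 2τAy)^-1 u
    return 0.5 * (vx + vy)

def diffuse_aos(u0: np.ndarray, iterations: int = 20, tau: float = 5.0) -> np.ndarray:
    u = u0.astype(np.float64)
    for _ in range(iterations):   # e.g., L = 20 as in the Figure 4 example
        u = aos_iteration(u, tau=tau)
    return u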
Referring to Figure 4, image 410 corresponds to the input image, image 420 corresponds to an intermediate diffused image, and image 430 corresponds to the final diffused image. In this example, the final iteration count L is set to 20. Image 420 is obtained after the values of the pixels included in the input image are iteratively updated five times based on Equation 3. Image 430 is obtained after the values of the pixels included in the input image are iteratively updated 20 times based on Equation 3.
According to at least this example embodiment, the liveness testing apparatus may use diffusion speeds to determine whether the object included in the input image has one or more characteristics of a flat surface or one or more characteristics of a 3D structure. The diffusion speed represents the speed at which each pixel value is diffused. The diffusion speed may be defined as shown in Equation 4 below.
[Equation 4]
s(x, y) = |u^L(x, y) - u^0(x, y)|
In Equation 4, s(x, y) denotes the diffusion speed of the pixel at coordinates (x, y), u^0(x, y) denotes the value of the pixel at coordinates (x, y) in the input image, and u^L(x, y) denotes the value of the pixel at coordinates (x, y) in the final diffused image. As indicated by Equation 4, the calculated diffusion speed increases as the difference between the pixel value before diffusion and the pixel value after diffusion increases, and decreases as that difference decreases.
More broadly, the liveness testing apparatus may determine whether the object included in the input image has one or more characteristics of a flat surface or one or more characteristics of a 3D structure based on the magnitudes of the changes in pixel values after L iterations of the filtering process discussed above.
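In code, Equation 4 reduces to a per-pixel absolute difference between the input image and the final diffused image; a one-function sketch:

import numpy as np

def diffusion_speed(u0: np.ndarray, uL: np.ndarray) -> np.ndarray:
    # Equation 4: s(x, y) = |u^L(x, y) - u^0(x, y)|
    return np.abs(uL.astype(np.float64) - u0.astype(np.float64))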
A facial image may be divided into small-scale regions and large-scale regions. A small-scale region may represent a region in which feature points or feature lines exist. In one example, small-scale regions may include the eyes, eyebrows, nose, and mouth of the face. A large-scale region may represent a region in which a relatively large portion is occupied by the skin of the face. In one example, large-scale regions may include the forehead and cheeks of the face.
The diffusion speeds of pixels belonging to small-scale regions may be greater than the diffusion speeds of pixels belonging to large-scale regions. Referring back to the example shown in Figure 4, the pixel 411 corresponding to the eyeglass frame in image 410 differs from its neighboring pixels corresponding to skin, and therefore the value of pixel 411 can change substantially (e.g., relatively significantly) as a result of the diffusion. The value of pixel 411 in image 410 is updated by the diffusion to the value of pixel 431 in image 430. In contrast, the pixel 412 corresponding to the cheek in image 410 is similar to its neighboring pixels, and therefore the value of pixel 412 changes less due to the diffusion (e.g., changes relatively slightly) compared with the value of pixel 411. The value of pixel 412 in image 410 is updated as a result of the diffusion to the value of pixel 432 in image 430.
Differences in diffusion speed can also be caused by the distribution of light energy in the image. When the light energy in the image is distributed relatively uniformly, relatively small diffusion speeds are calculated. In addition, when the light energy is distributed relatively uniformly, the probability that neighboring pixels have similar values may be higher (e.g., relatively high). In contrast, when the light energy in the image is distributed relatively non-uniformly (or unevenly), relatively high diffusion speeds are observed, and the probability that neighboring pixels have different values may be relatively high.
According to at least some example embodiments, the liveness testing apparatus may calculate the degree of uniformity of the distribution of light energy in the image based on statistical information related to the diffusion speeds, and may test the liveness of the object in the image based on that statistical information. To calculate the statistical information related to the diffusion speeds, the liveness testing apparatus may extract small-scale regions from the image according to Equation 5 below.
[Equation 5]
SR(x, y) = 1, if s(x, y) > θ; SR(x, y) = 0, otherwise
In Equation 5, SR(x, y) is an indicator of whether the pixel at coordinates (x, y) belongs to a small-scale region. In this example, when the value of SR(x, y) is 1, the pixel at coordinates (x, y) belongs to a small-scale region, and when the value of SR(x, y) is 0, the pixel at coordinates (x, y) does not belong to a small-scale region.
The value of SR(x, y) may be determined based on the diffusion speed of the pixel at coordinates (x, y). For example, when the diffusion speed s(x, y) is greater than a given (or alternatively, desired or predetermined) threshold θ, the value of SR(x, y) is determined to be 1; otherwise, the value of SR(x, y) is determined to be 0. The threshold may be set based on the average value μ and the standard deviation of the whole image. The average value μ of the whole image may correspond to the average of the diffusion speeds of the pixels included in the whole image, and the standard deviation of the whole image may correspond to the standard deviation of the diffusion speeds of the pixels included in the whole image.
Hereinafter, an image in which the value of the pixel at coordinates (x, y) corresponds to SR(x, y) will be referred to as a small-scale region (SR) map. Because each pixel included in the SR map has a value of 0 or 1, the SR map may also be referred to as a binary map. The SR map can effectively represent the fine structure of the face under various lighting environments.
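A sketch of computing the SR map per Equation 5 follows; the patent derives the threshold from the mean and standard deviation of the diffusion speeds over the whole image, and the specific form μ + α·σ used here is an assumption.

import numpy as np

def sr_map(speed: np.ndarray, alpha: float = 1.0) -> np.ndarray:
    # Equation 5: SR(x, y) = 1 where s(x, y) exceeds the threshold, 0 otherwise.
    theta = speed.mean() + alpha * speed.std()  # assumed threshold form from mean/std
    return (speed > theta).astype(np.uint8)     # binary SR map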
Figure 5 illustrates two example SR maps according to example embodiments.
Referring to Figure 5, as shown, the SR map 510 obtained when photographing a medium displaying a user's face differs from the SR map 520 obtained when photographing the actual face of the same user. In SR map 510 and SR map 520, the portions marked in black correspond to pixels satisfying SR(x, y) = 1, and the portions marked in white correspond to pixels satisfying SR(x, y) = 0. In this example, the black portions of SR map 510 and SR map 520 have relatively fast diffusion speeds, and the white portions have relatively slow diffusion speeds.
According to at least one example embodiment, the liveness testing apparatus tests the liveness of the face in the image by analyzing the SR map. For example, the liveness testing apparatus tests the liveness of the face in the image by extracting various features from the SR map.
When the actual face of a user is photographed, various reflections of light can occur due to the curves of the user's actual face.
When the light energy in the image is distributed relatively non-uniformly (or unevenly), the SR map may include a relatively large number of pixels having the value 1. In this example, the liveness testing apparatus may determine the liveness of the face in the image based on Equation 6 below.
[Equation 6]
In Equation 6, N(SR(x, y) = 1) denotes the number of pixels satisfying SR(x, y) = 1, and ξ denotes a threshold, which may be given or, alternatively, desired or predetermined. When the condition of Equation 6 is met, the liveness testing apparatus can determine that the face in the image corresponds to a fake object.
In another example, when the light energy in the image is distributed relatively non-uniformly (or unevenly), the SR map may include a relatively large amount of noise components. In this case, the liveness testing apparatus may determine the liveness of the face in the image based on Equation 7 below.
[Equation 7]
In Equation 7, SR_m(x, y) denotes the value of the pixel at coordinates (x, y) in the map obtained by applying median filtering to the SR map, and ξ denotes a threshold, which may be given or, alternatively, desired or predetermined. The noise components are calculated based on the difference between the SR map and the result of applying median filtering to the SR map; as the amount of noise components increases, the number of pixels at which the value of SR(x, y) differs from the value of SR_m(x, y) increases. In this example, when the condition of Equation 7 is met, the face in the image is determined to be a fake object.
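The two statistics compared against the threshold ξ in Equations 6 and 7 can be computed as below; because the exact inequalities are not reproduced in this text, the sketch computes only the statistics, leaving the comparison direction to the decision rule.

import numpy as np
from scipy.ndimage import median_filter

def sr_pixel_count(sr: np.ndarray) -> int:
    # N(SR(x, y) = 1): the number of small-scale-region pixels (Equation 6).
    return int(sr.sum())

def sr_noise_amount(sr: np.ndarray, size: int = 3) -> int:
    # Number of pixels where the SR map disagrees with its median-filtered
    # version SR_m (the noise measure of Equation 7); the 3x3 window is assumed.
    sr_m = median_filter(sr, size=size)
    return int(np.count_nonzero(sr != sr_m))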
Equations 6 and 7 are provided as examples only. The liveness testing apparatus may test the liveness of the object in the image based on various statistics of the diffusion speeds. In one example, the liveness testing apparatus may use the distribution of the pixels whose diffusion speed is greater than or equal to a given or, alternatively, desired or predetermined threshold.
In more detail, the liveness testing apparatus may determine the liveness of the face in the image based on Equation 8 below.
[Equation 8]
When the condition of Equation 8 is met, the liveness testing apparatus can determine that the face in the image corresponds to a genuine 3D object.
In another example, the liveness testing apparatus may determine the liveness of the face in the image based on Equation 9 below.
[Equation 9]
In this example, when the condition of Equation 9 is met, the face in the image is determined to be a genuine 3D object.
The liveness testing apparatus may also use statistics based on the diffusion speeds without using an SR map. In this example, the liveness testing apparatus may use values such as the average of the diffusion speeds of all pixels and the standard deviation of the diffusion speeds of all pixels. The liveness testing apparatus may also use a filter response based on the diffusion speeds, for example, the result of applying median filtering to the diffusion speeds of all pixels.
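As a sketch, such statistics can be collected into a feature vector; the particular selection of features below is illustrative, since the text does not fix an exact set.

import numpy as np
from scipy.ndimage import median_filter

def speed_features(speed: np.ndarray) -> np.ndarray:
    # Diffusion-speed statistics usable as classifier features: mean, standard
    # deviation, and the mean of a median-filter response (assumed selection).
    response = median_filter(speed, size=3)
    return np.array([speed.mean(), speed.std(), response.mean()])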
According at least some example embodiment, active testing equipment can extract various feature according to the statistical information based on diffusion velocity, and learns the feature extracted.At learning phase, active testing equipment can from the statistical information of various training image calculating based on diffusion velocity, and can make grader learn the feature extracted from statistical information.Training image can comprise the image of true 3D object and the image of pseudo-2D object.
A classifier of a simple structure may compute a distance between vectors (e.g., a Euclidean distance) or a similarity (e.g., a normalized correlation), and compare the distance or similarity with a threshold. A neural network, a Bayesian classifier, a support vector machine (SVM), or an adaptive boosting (AdaBoost) learning classifier may be used as a more accurate classifier.
The liveness test apparatus may compute statistical information based on diffusion speed from the input image, and may extract features from the statistical information using a given, desired, or alternatively predetermined method. The method may correspond to the method used in the learning phase. The liveness test apparatus may input the extracted features and the learned parameters into the classifier to test liveness of the object included in the input image. Based on the extracted features and the learned parameters, the classifier may output a signal indicating whether the object included in the input image corresponds to a genuine object or a fake object.
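As a minimal sketch of the learning phase just described, the following Python example trains an SVM on diffusion-speed statistics. The feature choices in `extract_features`, the parameter `xi`, and the data layout (`speed_maps`, `labels`) are hypothetical names introduced for the example; the embodiment does not prescribe them.

```python
import numpy as np
from sklearn.svm import SVC

def extract_features(speed_map, xi):
    """Hypothetical feature vector from per-pixel diffusion speeds:
    count of fast pixels, mean speed, and standard deviation."""
    return np.array([
        np.count_nonzero(speed_map >= xi),
        speed_map.mean(),
        speed_map.std(),
    ])

def train_liveness_classifier(speed_maps, labels, xi):
    """labels: 1 = genuine 3D object, 0 = fake 2D object."""
    X = np.stack([extract_features(m, xi) for m in speed_maps])
    clf = SVC(kernel="rbf")
    clf.fit(X, labels)
    return clf
```

At test time, the same `extract_features` would be applied to the input image's diffusion speeds and the resulting vector passed to `clf.predict`, mirroring the requirement that the extraction method correspond to the one used in the learning phase.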
Fig. 6 illustrates a liveness test apparatus 600 according to an example embodiment.
Referring to Fig. 6, the liveness test apparatus 600 includes a receiver 611, a diffuser 612, and a tester 613. The receiver 611 may correspond to the receiver 311 shown in Fig. 3.
In an example operation, the receiver 611 receives an input image and outputs the input image to the diffuser 612 and the tester 613.
Although element 612 is referred to as a diffuser, and the example embodiment shown in Fig. 6 is described with respect to a diffusion operation, element 612 may be referred to more generally as a filter or filter circuit 612. Moreover, any suitable filtering operation may be used, as desired.
The diffuser 612 diffuses a plurality of pixels corresponding to the object included in the input image by iteratively updating the values of the pixels based on a diffusion equation. In one example, the diffuser 612 may use Equation 1 discussed above to diffuse the plurality of pixels corresponding to the object included in the input image.
The diffuser 612 may iteratively update the values of the plurality of pixels corresponding to the object included in the input image by applying an additive operator splitting (AOS) scheme to the diffusion equation. In one example, the diffuser 612 may use Equation 3 discussed above to diffuse the plurality of pixels. The diffuser 612 may output the diffusion image produced when the plurality of pixels is diffused.
Still referring to Fig. 6, the tester 613 tests liveness of the object included in the input image based on the diffusion speeds of the plurality of pixels. In one example, the tester 613 may test liveness of the object by estimating, based on the diffusion speeds, a surface property related to the object. The surface property represents a characteristic related to the surface of the object, and may include, for example, the reflection characteristics of the surface of the object, the dimensionality of the surface of the object, and/or the material of the surface of the object.
The tester 613 may analyze the distribution of light energy included in the input image to estimate the surface property related to the object included in the input image. In one example, the tester 613 may analyze the distribution of light energy included in the input image to determine whether the object included in the input image has the surface property (one or more characteristics) of a medium displaying a face (e.g., a 2D plane) or the surface property (one or more characteristics) of an actual user face (e.g., a 3D structure).
When the object included in the input image is determined to have the surface property of a medium displaying a face (e.g., a 2D plane), the tester 613 may output a signal corresponding to a failed test. That is, for example, when the tester 613 determines that the object included in the input image has the surface property of a medium displaying a face (e.g., a 2D plane), the tester 613 may output a signal indicating that the test failed. When the object included in the input image is determined to have the surface property of an actual user face (e.g., a 3D structure), the tester 613 may output a signal corresponding to a successful test. That is, for example, when the tester 613 determines that the object included in the input image has the surface property of an actual user face (e.g., a 3D structure), the tester 613 may output a signal indicating that the test succeeded.
In another example, the tester 613 may test liveness of the object by computing statistical information related to diffusion speed. As discussed above, 2D objects and 3D objects have different reflection characteristics, and this difference between 2D objects and 3D objects may be modeled based on diffusion speed.
In this example, the tester 613 may compute diffusion speeds based on the input image from the receiver 611 and the diffusion image from the diffuser 612. In one example, the tester 613 may use Equation 4 discussed above to compute the diffusion speed of each pixel. To compute the statistical information related to diffusion speed, the tester 613 may use Equation 5 to extract a small-scale region. The extracted small-scale region may be represented as an SR map. The tester 613 may then use Equation 6, 7, 8, or 9 to determine liveness of the object included in the input image.
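A compact sketch of this flow is given below. Because Equations 4 and 5 appear earlier in the document, the diffusion-speed measure used here (the per-pixel absolute difference between the original and diffused images) and the SR-map rule (thresholding that speed) are stand-in assumptions consistent with the surrounding description, not the embodiment's exact formulas.

```python
import numpy as np

def diffusion_speed(original, diffused):
    # Stand-in for Equation 4: per-pixel magnitude of change under diffusion.
    return np.abs(original.astype(float) - diffused.astype(float))

def sr_map(speed, threshold):
    # Stand-in for Equation 5: mark fast-diffusing (small-scale) pixels with 1.
    return (speed >= threshold).astype(np.uint8)
```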
The tester 613 may test liveness of the object included in the input image based on various statistical information related to diffusion speed. The tester 613 may use the distribution of pixels whose diffusion speed is greater than or equal to a given, desired, or alternatively predetermined threshold. The tester 613 may also use statistical information based on diffusion speed without using an SR map.
When the computed statistical information corresponds to statistical information related to a medium displaying a face, the tester 613 may output a signal corresponding to a failed test. When the computed statistical information corresponds to statistical information related to an actual user face, the tester 613 may output a signal corresponding to a successful test. That is, for example, when the computed statistical information indicates a medium displaying a face, the tester 613 may output a signal indicating that the test failed, and when the computed statistical information indicates an actual user face, the tester 613 may output a signal indicating that the test succeeded.
The liveness test apparatus 600 may test liveness of an object based on a single input image. The single input image may correspond to a single photograph, a single image, or a still image of a single frame.
Image Processing According to Example Embodiments
Fig. 7 is a flowchart illustrating image processing according to an example embodiment. Fig. 8 illustrates examples of how an input image changes with illumination, according to an example embodiment.
Referring to Fig. 7, an input image 710 includes a user's face. The user's face included in the input image 710 is affected by illumination (e.g., significantly or substantially affected by illumination). For example, referring to Fig. 8, although the same user's face is photographed, different images may be produced depending on the illumination. When the input image is affected by illumination changes, the reliability of face recognition and/or user verification may decrease (e.g., significantly or substantially decrease), and/or the computational complexity may increase (e.g., substantially or significantly increase).
Image processing methods and/or apparatuses according to one or more example embodiments may produce an image that is not affected by the illumination changes in the input image. When an input image is obtained, image processing methods and/or apparatuses according to one or more example embodiments may also produce an image that is substantially unaffected by the illumination on the object. One or more example embodiments may provide technology for producing an image that is not easily affected (e.g., is unaffected) by illumination changes, thereby improving the reliability of face recognition and/or user verification, and/or reducing the computational complexity of face recognition and/or user verification.
Still referring to Fig. 7, the input image 710 includes an illumination component 715 and a non-illumination component. In this example, the illumination component 715 represents the component, among the components forming a pixel value, that is affected (e.g., substantially affected) by external illumination. The non-illumination component represents the component, among the components forming a pixel value, that is substantially unaffected by external illumination. An image processing apparatus may separate the illumination component 715 from the input image 710 to produce an image that is not easily affected (e.g., is unaffected) by illumination changes.
The image processing apparatus may detect a face region from the input image. In this example, example embodiments may be applied to the face region detected from the input image. Hereinafter, the term "face image" refers to an input image including a face, or to a face region extracted from an input image.
A face image may be represented based on an illumination component and a non-illumination component. A face image may be modeled based on the Lambertian model, as shown in Equation 10.
[Equation 10]
I = w · v
In Equation 10, I denotes the face image, w denotes the illumination component, and v denotes the non-illumination component. For the example shown in Fig. 7, I corresponds to the input image 710, w corresponds to the image 720 related to the illumination component, and v corresponds to the image 730 related to the non-illumination component.
The image 720 related to the illumination component may include the illumination component 715, whereas the image 730 related to the non-illumination component may not include the illumination component 715. Accordingly, the image 730 related to the non-illumination component may be an image that is not easily affected (e.g., is unaffected) by illumination changes. The image 730 related to the non-illumination component may also be referred to as a canonical image.
The illumination component 715 may have a relatively high probability of being distributed in a large-scale region of the image. Accordingly, the image 720 related to the illumination component may be an image corresponding to the large-scale region. The illumination component 715 may have a relatively low probability of being distributed in a small-scale region. Accordingly, the image 730 related to the non-illumination component may be an image corresponding to the small-scale region.
The image processing apparatus may produce the image 730 related to the non-illumination component based on the input image 710. In one example, the image processing apparatus may receive the input image 710 and produce the image 720 related to the illumination component based on the input image 710. The image processing apparatus may then use Equation 10 above to compute the image 730 related to the non-illumination component based on the input image 710 and the image 720 related to the illumination component.
According to at least one example embodiment, the image processing apparatus may diffuse the input image 710 to produce the image 720 related to the illumination component. The diffusion speed of pixels belonging to the small-scale region may be greater than the diffusion speed of pixels belonging to the large-scale region. The image processing apparatus may separate the small-scale region and the large-scale region based on this difference in diffusion speed. The image processing apparatus may diffuse the plurality of pixels included in the input image 710 a number of times corresponding to a given, desired, or alternatively predetermined iteration count (e.g., about 20), to produce the image 720 related to the illumination component, which corresponds to the large-scale region.
According to at least one example embodiment, the image processing apparatus may use a diffusion equation to iteratively update the values of the plurality of pixels. In one example, the image processing apparatus may use Equation 11 below to diffuse the plurality of pixels corresponding to the face included in the input image 710.
[Equation 11]
u^(k+1) = u^k + div(d(|∇u^k|)∇u^k)
In Equation 11, k denotes the iteration count, u^k denotes the pixel value after the k-th iteration, and u^(k+1) denotes the pixel value after the (k+1)-th iteration. Here, u^k corresponds to u^k(x, y), the value of the pixel at coordinates (x, y) in the image after the k-th diffusion, and u^(k+1) corresponds to u^(k+1)(x, y), the value of the pixel at coordinates (x, y) in the image after the (k+1)-th diffusion. In this example, u^0 denotes the value of the pixel in the input image 710. When the final iteration count corresponds to "L", u^L denotes the value of the pixel in the image 720 related to the illumination component.
As before, ∇ denotes the gradient operator, div() denotes the divergence function, and d() denotes the diffusivity function. The diffusivity function may be given, desired, or alternatively predetermined. In one example, the image processing apparatus may define the diffusivity function as shown in Equation 12.
[Equation 12]
d(|∇u|) = 1 / (|∇u| + β)
In Equation 12, β denotes a small positive number. When the diffusivity function defined in Equation 12 is used, the boundary of the face may be preserved relatively well during the diffusion processing. When the diffusivity function is a function of the pixel gradient, as shown in Equation 12, the diffusion equation is nonlinear. Here, the image produced by the diffusion is referred to as a diffusion image. When the diffusivity function is nonlinear, the image produced by the diffusion is referred to as a nonlinear diffusion image.
Equation 12 is provided as one example of a diffusivity function; example embodiments may utilize other diffusivity functions. For example, one of a plurality of candidate diffusivity functions may be selected based on the input image.
Moreover, although example embodiments are discussed with respect to diffusivity functions, other filter functions may also be used, as mentioned above.
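As a concrete illustration of Equations 11 and 12, the sketch below performs the explicit nonlinear diffusion update in Python. The step size `dt` (Equation 11 as written corresponds to dt = 1) and the default iteration count are assumptions; the embodiment only suggests an iteration count on the order of 20.

```python
import numpy as np

def nonlinear_diffusion(u0, iterations=20, beta=1e-3, dt=0.2):
    """Explicit iteration of Equation 11 with the diffusivity of Equation 12.
    Returns the diffused image u^L (the illumination-related image)."""
    u = u0.astype(float).copy()
    for _ in range(iterations):
        gy, gx = np.gradient(u)                  # components of grad(u)
        d = 1.0 / (np.hypot(gx, gy) + beta)      # d(|grad u|), Equation 12
        # div(d * grad u) = d(d*gx)/dx + d(d*gy)/dy
        div = np.gradient(d * gx, axis=1) + np.gradient(d * gy, axis=0)
        u = u + dt * div                         # Equation 11 update
    return u
```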
According to at least some example embodiments, the image processing apparatus may apply the AOS scheme to solve Equation 11. In one example, the image processing apparatus may use Equation 13 below to diffuse the plurality of pixels corresponding to the face included in the input image 710.
[Equation 13]
u^(k+1) = (1/2)((I − 2τA_x(u^k))^(−1) + (I − 2τA_y(u^k))^(−1))u^k
In Equation 13, I denotes the identity matrix, A_x denotes the horizontal diffusion matrix, A_y denotes the vertical diffusion matrix, and τ denotes the time step. The final iteration count L and the time step τ may be given, desired, or alternatively predetermined. In general, when the time step τ is set relatively small and the final iteration count L is set relatively large, the reliability of u^L, the final diffused pixel values, may increase.
Using the AOS scheme to solve Equation 11 may enable the image processing apparatus to reduce the final iteration count L. When the AOS scheme is used, the reliability of the final diffused pixel values u^L may be sufficiently high even with a time step τ of a given, desired, or alternatively predetermined size. An image processing apparatus according to one or more example embodiments may use the AOS scheme to solve the diffusion equation, thereby improving the computational efficiency of the diffusion processing.
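The following sketch illustrates one way to carry out the AOS update of Equation 13 in Python, solving the tridiagonal systems (I − 2τA_x)^(−1) and (I − 2τA_y)^(−1) row by row and column by column with a banded solver. The diffusivity construction reuses Equation 12; the boundary handling and parameter values are assumptions made for the example.

```python
import numpy as np
from scipy.linalg import solve_banded

def _implicit_axis(u, tau, beta, axis):
    """Solve (I - 2*tau*A) x = u along one axis, where A encodes 1-D
    diffusion with diffusivity d = 1/(|grad u| + beta) (Equation 12)."""
    u = np.moveaxis(u, axis, 0).astype(float)
    out = np.empty_like(u)
    n = u.shape[0]
    for j in range(u.shape[1]):
        col = u[:, j]
        g = 1.0 / (np.abs(np.gradient(col)) + beta)
        w = 2.0 * tau * 0.5 * (g[:-1] + g[1:])   # inter-pixel conductivities
        ab = np.zeros((3, n))                    # banded form of (I - 2*tau*A)
        ab[0, 1:] = -w                           # superdiagonal
        ab[2, :-1] = -w                          # subdiagonal
        ab[1, :] = 1.0
        ab[1, :-1] += w
        ab[1, 1:] += w
        out[:, j] = solve_banded((1, 1), ab, col)
    return np.moveaxis(out, 0, axis)

def aos_step(u, tau=5.0, beta=1e-3):
    """One AOS iteration of Equation 13: the average of the two
    semi-implicit 1-D diffusions in the x and y directions."""
    return 0.5 * (_implicit_axis(u, tau, beta, axis=0) +
                  _implicit_axis(u, tau, beta, axis=1))
```

Because each 1-D solve is unconditionally stable, `aos_step` may be called only a few times with a comparatively large τ, which is the efficiency benefit described above.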
The image processing apparatus may produce the image 730 related to the non-illumination component based on the input image 710 and the image 720 related to the illumination component. In one example, the image processing apparatus may use Equation 14 or 15 to produce the image 730 related to the non-illumination component. Because "w" in Equation 10 corresponds to u^L, Equations 14 and 15 may be derived from Equation 10.
[Equation 14]
v = I / u^L
[Equation 15]
log v = log I − log u^L
In Equations 14 and 15, I denotes the face image, and may correspond to, for example, the input image 710. The face image I may also correspond to u^0. The final diffused pixel values u^L represent the large-scale region, and may correspond to, for example, the image 720 related to the illumination component. Still referring to Equations 14 and 15, v represents the small-scale region, and may correspond to, for example, the image 730 related to the non-illumination component.
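Putting Equations 14 and 15 into code is straightforward. In the sketch below, `canonical_image` is a hypothetical helper name, and the small constant `eps` guarding against division by zero and log(0) is an assumption for the example.

```python
import numpy as np

def canonical_image(face, u_L, eps=1e-6):
    """Compute the non-illumination image v from the face image I and its
    diffused version u^L (Equations 14 and 15)."""
    face = face.astype(float)
    u_L = u_L.astype(float)
    v = face / (u_L + eps)                            # Equation 14
    log_v = np.log(face + eps) - np.log(u_L + eps)    # Equation 15
    return v, log_v
```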
Fig. 9 illustrates an image processing apparatus 910 according to an example embodiment. Fig. 9 also shows a face recognition and/or user verification circuit 920, which is discussed in more detail below.
Referring to Fig. 9, the image processing apparatus 910 includes a receiver 911, a diffuser 912, and a generator 913.
As with Fig. 6, although element 912 in Fig. 9 is referred to as a diffuser and the example embodiment shown in Fig. 9 is described with respect to a diffusion operation, element 912 may be referred to more generally as a filter or filter circuit 912. Moreover, as mentioned above, the filter 912 may utilize any suitable filtering operation, rather than the diffusion processing discussed here.
In an example operation, the receiver 911 may receive an input image. In a more specific example, the receiver 911 may receive an input image produced by an image sensor (not shown). The receiver 911 may be connected to the image sensor by wire, wirelessly, or through a network. Alternatively, the receiver 911 may receive the input image from a storage device, such as a main memory, a cache memory, a hard disk drive (HDD), a solid-state drive (SSD), a flash memory device, a network drive, or the like.
The diffuser 912 may diffuse a plurality of pixels corresponding to an object included in the input image. In one example, the object may correspond to a user's face. The diffuser 912 may iteratively update the values of the plurality of pixels corresponding to the object included in the input image based on a diffusion equation. In one example, the diffuser 912 may diffuse the plurality of pixels corresponding to the object included in the input image according to Equation 11.
The diffuser 912 may iteratively update the values of the plurality of pixels corresponding to the object included in the input image by applying the AOS scheme to the diffusion equation. In one example, the diffuser 912 may use Equation 13 to diffuse the plurality of pixels corresponding to the object included in the input image. The diffuser 912 may output the diffusion image produced when the plurality of pixels is diffused. The diffusion image may correspond to the image related to the illumination component (e.g., 720 in Fig. 7).
Still referring to Fig. 9, the generator 913 may produce an output image based on the input image and the diffusion image. The generator 913 may use Equation 14 or 15 to produce the output image. The output image may correspond to the image related to the non-illumination component (e.g., 730 in Fig. 7). The generator 913 may output the output image to the face recognition and/or user verification circuit 920. The face recognition and/or user verification circuit 920 may perform any well-known face recognition and/or user verification operations, some details of which are discussed below. Alternatively, the generator 913 may output the output image to a memory (not shown).
According to at least some example embodiments, the image processing apparatus 910 may produce an output image based on a single input image. The single input image may correspond to a single photograph, a single image, or a still image of a single frame.
Still referring to Fig. 9, in one example the face recognition and/or user verification circuit 920 may recognize the face included in the input image based on the output image from the generator 913. The output image may correspond to an image that is related to the non-illumination component and is not easily affected (e.g., is unaffected) by illumination changes. The face recognition and/or user verification circuit 920 may recognize the face included in the input image based on this illumination-robust image. Accordingly, the accuracy and/or reliability of the face recognition may improve. When an image that is not easily affected (e.g., is unaffected) by illumination changes is used, the performance of the recognition operation may also improve in relatively low-brightness environments.
In another example, the face recognition and/or user verification circuit 920 may verify a user based on the output image from the generator 913. The image processing apparatus 910 may verify the user by recognizing the user's face based on the output image. The output image may correspond to an image that is related to the non-illumination component and is not easily affected (e.g., is unaffected) by illumination changes. The image processing apparatus 910 may verify the user based on this illumination-robust image. Accordingly, the accuracy and/or reliability of the user verification may improve.
Flowcharts According to Example Embodiments
Fig. 10 is a flowchart illustrating a liveness test method according to an example embodiment. Where appropriate, the flowchart shown in Fig. 10 is discussed with respect to the liveness test apparatuses shown in Fig. 3 and Fig. 6.
Referring to Fig. 10, in operation 1010 the liveness test apparatus receives an input image. In operation 1020, the liveness test apparatus tests liveness of an object included in the received input image.
For example, with respect to the liveness test apparatus 310 shown in Fig. 3, in operation 1010 the receiver 311 receives an input image, and in operation 1020 the tester 312 tests liveness of the object included in the received input image. The tester 312 may test liveness of the object included in the input image received at the receiver 311 based on whether the object in the input image has one or more characteristics of a plane or one or more characteristics of a 3D structure. Details of the operations performed by the tester 312 are discussed above with respect to Fig. 3, and thus a detailed discussion is not repeated here.
For example, with respect to the liveness test apparatus 600 shown in Fig. 6, in operation 1010 the receiver 611 receives an input image. In operation 1020, the diffuser 612 diffuses a plurality of pixels corresponding to the object included in the input image received at the receiver 611, and the tester 613 tests liveness of the object based on the diffusion speeds of the plurality of pixels. Details of the operations performed by the diffuser 612 and the tester 613 are discussed above with respect to Fig. 6, and thus a detailed discussion is not repeated here.
Fig. 11 is a flowchart illustrating an image processing method according to an example embodiment. For illustrative purposes, the image processing method shown in Fig. 11 is discussed with respect to the image processing apparatus shown in Fig. 9.
Referring to Fig. 11, in operation 1110 the image processing apparatus receives a first image. In operation 1120, the image processing apparatus produces a second image, and in operation 1130, the image processing apparatus produces a third image.
The first image may correspond to an input image (e.g., 710 in Fig. 7), the second image may correspond to the image related to the illumination component (e.g., 720 in Fig. 7), and the third image may correspond to the image related to the non-illumination component (e.g., 730 in Fig. 7).
For example, in more detail with respect to Fig. 9 and Fig. 11, in operation 1110 the receiver 911 receives the first image (the input image).
In operation 1120, the diffuser 912 produces the second image based on the first image (the input image). In this example, the second image is the image related to the illumination component (e.g., 720 in Fig. 7).
In operation 1130, the generator 913 produces the third image (the output image) based on the first image (the input image) and the second image produced by the diffuser 912. In this case, the third image (the output image) is the image related to the non-illumination component (e.g., 730 in Fig. 7). Details of the operations performed by the diffuser 912 and the generator 913 are discussed above with respect to Fig. 9, and thus a detailed discussion is not repeated here.
More generally, the descriptions provided with reference to Figs. 1A through 9 are applicable to the operations of Fig. 10 and Fig. 11; accordingly, a more detailed description is omitted for brevity.
Fig. 12 illustrates an image processing method according to another example embodiment.
The example embodiment shown in Fig. 12 combines the liveness test method of Fig. 10 with the image processing method of Fig. 11. For illustrative purposes, the method shown in Fig. 12 is described with respect to the image processing apparatus shown in Fig. 9. Details of the operations described with respect to Fig. 12 are provided above with respect to, for example, Figs. 3, 6, 9, 10, and 11, and thus a detailed discussion is not repeated here.
Referring to Fig. 12, in operation 1210 the receiver 911 of the image processing apparatus 910 receives a first image. The first image may correspond to an input image including a user's face. The receiver 911 outputs the first image to the diffuser 912 and the generator 913.
In operation 1220, the diffuser 912 produces a second image based on the first image received at the receiver 911. The diffuser 912 may produce the second image by diffusing the first image from the receiver 911. The second image may be the image related to the illumination component.
In operation 1240, the generator 913 computes the diffusion speed of each pixel based on the first image and the second image. The diffusion speed of each pixel may be computed based on the difference between the pixel value in the second image and the corresponding pixel value in the first image.
In operation 1250, the generator 913 extracts statistical information based on the diffusion speeds. For example, the generator 913 may count the number of pixels whose diffusion speed is greater than a given, desired, or alternatively predetermined threshold.
In operation 1270, the generator 913 performs a liveness test based on the statistical information related to diffusion speed. In one example, the generator 913 determines whether the input image corresponds to a genuine 3D object based on the number of pixels whose diffusion speed is greater than the threshold.
If the generator 913 determines that the input image does not correspond to a genuine 3D object (the liveness test fails), then in operation 1260 the face recognition and/or user verification circuit 920 does not perform face recognition and/or user verification, and the process stops.
Returning to operation 1270, if the generator 913 determines that the input image corresponds to a genuine 3D object (the liveness test succeeds), then face recognition and/or user verification is performed. In this example, an image that is not easily affected (e.g., is unaffected) by illumination changes may be produced for use in the face recognition and/or user verification operations.
Still referring to Fig. 12, in operation 1230 the generator 913 produces a third image based on the first image and the second image. In one example, the generator 913 computes the third image as the ratio of the first image to the second image, as discussed above with respect to Equation 14, or as the difference between the first image and the second image in the log domain, as discussed above with respect to Equation 15. The third image may be an image that is related to the non-illumination component and is unaffected by illumination changes.
In operation 1260, the face recognition and/or user verification circuit 920 performs face recognition and/or user verification based on the third image. In the example embodiment shown in Fig. 12, the face recognition and/or user verification circuit 920 performs face recognition and/or user verification in operation 1260 only when the input image corresponds to a genuine 3D object (the liveness test succeeds). In this example, face recognition and/or user verification may be performed based on the third image, which corresponds to an image that is not easily affected (e.g., is unaffected) by illumination changes.
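A high-level sketch of the Fig. 12 flow, reusing the hypothetical helpers sketched earlier (`nonlinear_diffusion`, `diffusion_speed`, `canonical_image`), might look as follows. The liveness decision rule is left abstract as `liveness_fn`, since the exact criterion applied to the pixel count is not fixed at this level of the description.

```python
def process(first_image, liveness_fn, recognize_fn, speed_threshold):
    # Operation 1220: second image = diffused (illumination-related) image.
    second_image = nonlinear_diffusion(first_image)
    # Operation 1240: per-pixel diffusion speed from the first and second images.
    speed = diffusion_speed(first_image, second_image)
    # Operation 1250: statistic = number of pixels above the speed threshold.
    n_fast = int((speed > speed_threshold).sum())
    # Operation 1270: liveness decision on the statistic.
    if not liveness_fn(n_fast):
        return None          # liveness test failed; operation 1260 is skipped
    # Operation 1230: third image = canonical (non-illumination) image.
    third_image, _ = canonical_image(first_image, second_image)
    # Operation 1260: recognition/verification on the illumination-robust image.
    return recognize_fn(third_image)
```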
The details of the descriptions provided with reference to Figs. 1A through 11 are applicable to the operations of Fig. 12; accordingly, repeated descriptions are omitted for brevity.
Fig. 13 is a block diagram illustrating an electronic system according to an example embodiment.
Referring to Fig. 13, the electronic system includes, for example, an image sensor 1300, an image signal processor (ISP) 1302, a display 1304, and a memory 1308. The image sensor 1300, the ISP 1302, the display 1304, and the memory 1308 communicate with one another through a bus 1306.
The image sensor 1300 may be the image sensor 115 described above with respect to Figs. 1A and 1B. The image sensor 1300 is configured to capture an image (also referred to as image data) in any well-known manner (e.g., by converting an optical image into an electrical signal). The image is output to the ISP 1302.
The ISP 1302 may include one or more of the apparatuses discussed above with respect to Figs. 1A through 12, and/or may perform one or more of the methods discussed above with respect to Figs. 1A through 12. The ISP 1302 may also include the face recognition and/or user verification circuit 920 to perform the face recognition and/or user verification operations discussed above with respect to Figs. 1A through 12. In a more specific example, the ISP 1302 may include the liveness test apparatus 310 shown in Fig. 3, the liveness test apparatus 600 shown in Fig. 6, the image processing apparatus 910 shown in Fig. 9, and/or the face recognition and/or user verification circuit 920 shown in Fig. 9. The memory 1308 may store images captured by the image sensor 1300 and/or images produced by the liveness test apparatus and/or the image processing apparatus. The memory 1308 may be any suitable volatile or non-volatile memory. The display 1304 may display images captured by the image sensor 1300 and/or images produced by the liveness test apparatus and/or the image processing apparatus.
The ISP 1302 may also be configured to execute programs and control the electronic system. Program code executed by the ISP 1302 may be stored in the memory 1308.
The electronic system shown in Fig. 13 may be connected to an external device (e.g., a personal computer or a network) through an input/output device (not shown), and may exchange data with the external device.
The electronic system shown in Fig. 13 may embody various electronic systems, including: mobile devices, such as mobile phones, smartphones, personal digital assistants (PDAs), tablet computers, laptop computers, and the like; computing devices, such as personal computers (PCs), tablet PCs, and netbooks; or electronic products, such as televisions (TVs) or smart TVs, and security devices for gate control.
One or more example embodiments described herein (e.g., the liveness test apparatuses, image processing apparatuses, electronic systems, etc.) may be implemented using hardware components and software components. For example, the hardware components may include microphones, amplifiers, band-pass filters, audio-to-digital converters, and processing devices. A processing device may be implemented using one or more special-purpose computers, such as a processor, a controller and an arithmetic logic unit, an application-specific integrated circuit (ASIC), a system-on-chip device, a digital signal processor, a microcomputer, a field-programmable gate array, a programmable logic unit, a microprocessor, or any other device capable of responding to and executing instructions in a defined manner. The processing device may run an operating system (OS) and one or more software applications that run on the OS. The processing device may also access, store, manipulate, process, and create data in response to the execution of the software. For purposes of brevity, the processing device is described in the singular; however, one skilled in the art will appreciate that a processing device may include multiple processing elements and multiple types of processing elements. For example, a processing device may include multiple processors, or a processor and a controller. In addition, different processing configurations are possible, such as parallel processors.
In addition, example embodiments may be implemented by hardware, software, firmware, middleware, microcode, hardware description languages, or any combination thereof. When implemented in software, firmware, middleware, or microcode, the program code or code segments to perform the necessary tasks may be stored in a machine-readable or computer-readable medium (e.g., a computer-readable storage medium). When implemented in software, a processor or processors will perform the necessary tasks.
A code segment may represent a procedure, a function, a subprogram, a program, a routine, a subroutine, a module, a software package, a class, or any combination of instructions, data structures, or program statements. A code segment may be coupled to another code segment or a hardware circuit by passing and/or receiving information, data, arguments, parameters, or memory contents. Information, arguments, parameters, data, and the like may be passed, forwarded, or transmitted via any suitable means, including memory sharing, message passing, token passing, network transmission, and the like.
The software may include a computer program, a piece of code, an instruction, or some combination thereof, to independently or collectively instruct or configure the processing device to operate as desired. Software and data may be embodied permanently or temporarily in any type of machine, component, physical or virtual equipment, computer storage medium or device, or in a propagated signal wave capable of providing instructions or data to, or being interpreted by, the processing device. The software may also be distributed over networked computer systems so that it is stored and executed in a distributed fashion. The software and data may be stored on one or more non-transitory computer-readable recording media.
Example embodiments described herein may be recorded on non-transitory computer-readable media including program instructions to implement various operations embodied by a computer. The media may also include, alone or in combination with the program instructions, data files, data structures, and the like. The program instructions recorded on the media may be those specially designed and constructed for the purposes of the example embodiments, or they may be of the kind well-known and available to those having skill in the computer software arts. Examples of non-transitory computer-readable media include: magnetic media, such as hard disks, floppy disks, and magnetic tape; optical media, such as CD-ROM discs and DVDs; magneto-optical media, such as optical discs; and hardware devices specially configured to store and perform program instructions, such as read-only memory (ROM), random access memory (RAM), flash memory, and the like. Examples of program instructions include both machine code, such as that produced by a compiler, and files containing higher-level code that may be executed by the computer using an interpreter. The above-described devices may be configured to act as one or more software modules in order to perform the operations of the above-described example embodiments, or vice versa.
A number of examples have been described above. Nevertheless, it should be understood that various modifications may be made. For example, suitable results may be achieved if the described techniques are performed in a different order, and/or if components in a described system, architecture, device, or circuit are combined in a different manner and/or replaced or supplemented by other components or their equivalents. Accordingly, other implementations are within the scope of the following claims.

Claims (42)

1. A liveness test method, comprising:
receiving an input image; and
testing liveness of an object included in the received input image based on whether an image of the object included in the received input image has a characteristic indicating a plane or a characteristic indicating a three-dimensional structure.
2. The liveness test method of claim 1, wherein the image of the object included in the received input image corresponds to a face.
3. The liveness test method of claim 1, further comprising:
determining whether the image of the object has the characteristic indicating a plane or the characteristic indicating a three-dimensional structure based on a distribution of light energy included in a plurality of pixels corresponding to the image of the object.
4. The liveness test method of claim 1, further comprising:
determining whether the image of the object has the characteristic indicating a plane or the characteristic indicating a three-dimensional structure based on a degree of uniformity of the distribution of light energy included in a plurality of pixels corresponding to the image of the object.
5. The liveness test method of claim 1, further comprising:
determining whether the image of the object has the characteristic indicating a plane or the characteristic indicating a three-dimensional structure based on statistical information related to diffusion speeds of a plurality of pixels corresponding to the image of the object.
6. The liveness test method of claim 5, further comprising:
iteratively computing values of the plurality of pixels based on a diffusion equation; and
computing a diffusion speed of each of the plurality of pixels based on a difference between the pixel value before each iterative computation and the pixel value after each iterative computation.
7. The liveness test method of claim 5, wherein the statistical information related to diffusion speed includes at least one of:
a number of pixels, among the plurality of pixels, whose diffusion speed is greater than or equal to a threshold;
a distribution of pixels, among the plurality of pixels, whose diffusion speed is greater than or equal to the threshold;
an amount of noise components included in a specific scale region extracted based on the diffusion speeds of the plurality of pixels;
a mean of the diffusion speeds of the plurality of pixels;
a standard deviation of the diffusion speeds of the plurality of pixels; and
a filter response based on the diffusion speeds of the plurality of pixels.
8. The liveness test method of claim 1, further comprising:
filtering the received input image to produce a filtered image; and
determining whether the image of the object has the characteristic indicating a plane or the characteristic indicating a three-dimensional structure based on statistical information related to changes in pixel values, wherein the changes in pixel values are changes between values of a plurality of pixels corresponding to the image of the object in the received input image and values of a plurality of pixels corresponding to the image of the object in the filtered image.
9. The liveness test method of claim 8, wherein the filtering comprises:
diffusing the received input image to produce a diffusion image; and
computing a diffusion speed of each of the plurality of pixels corresponding to the image of the object based on a difference between the value of each pixel in the received input image and the corresponding value of the pixel in the diffusion image,
wherein the determining determines whether the image of the object has the characteristic indicating a plane or the characteristic indicating a three-dimensional structure based on the computed diffusion speeds.
10. The liveness test method of claim 1, further comprising at least one of:
outputting a signal corresponding to a failed test when the object is determined to have the characteristic indicating a plane; and
outputting a signal corresponding to a successful test when the object is determined to have the characteristic indicating a three-dimensional structure.
11. The liveness test method of claim 1, wherein the input image is a single image.
12. A liveness test method, comprising:
filtering a received image including an image of an object to produce a filtered image;
determining magnitudes of changes in values of a plurality of pixels corresponding to the image of the object between the received image and the filtered image; and
testing liveness of the object based on the magnitudes of the changes.
13. The liveness test method of claim 12, wherein the filtering comprises diffusing a plurality of pixels corresponding to the image of the object in the received image to produce a diffusion image,
wherein the method further comprises:
computing diffusion speeds of the pixels corresponding to the image of the object based on values of the plurality of pixels corresponding to the image of the object in the received image and in the diffusion image,
and wherein the testing tests liveness of the object based on the computed diffusion speeds.
14. The liveness test method of claim 12, wherein the object corresponds to a face.
15. The liveness test method of claim 13, wherein the diffusing comprises:
iteratively updating the values of the plurality of pixels based on a diffusion equation.
16. The liveness test method of claim 15, wherein the iteratively updating comprises iteratively updating the values of the plurality of pixels by applying an additive operator splitting scheme to the diffusion equation.
17. The liveness test method of claim 13, wherein the testing comprises:
estimating a surface property related to the object based on the diffusion speeds; and
testing liveness of the object based on the estimated surface property.
18. The liveness test method of claim 17, wherein the surface property includes at least one of:
a reflection characteristic of a surface of the object;
a dimensionality of the surface of the object; and
a material of the surface of the object.
19. The liveness test method of claim 17, wherein the estimating comprises:
analyzing a distribution of light energy included in the image of the object based on the diffusion speeds to estimate the surface property.
20. The liveness test method of claim 17, further comprising at least one of:
outputting a signal corresponding to a failed test when the estimated surface property corresponds to a surface property of a medium displaying a face; and
outputting a signal corresponding to a successful test when the estimated surface property corresponds to a surface property of an actual face.
21. The liveness test method of claim 13, wherein the testing comprises:
computing statistical information related to the diffusion speeds; and
testing liveness of the object based on the computed statistical information.
22. The liveness test method of claim 21, wherein the computing of the statistical information comprises at least one of:
computing a number of pixels, among the plurality of pixels corresponding to the image of the object, whose diffusion speed is greater than or equal to a threshold;
computing a distribution of pixels, among the plurality of pixels corresponding to the image of the object, whose diffusion speed is greater than or equal to the threshold;
computing at least one of a mean and a standard deviation of the diffusion speeds of the plurality of pixels corresponding to the image of the object; and
computing a filter response based on the diffusion speeds of the plurality of pixels corresponding to the image of the object.
23. The liveness test method of claim 21, wherein the computing of the statistical information comprises:
extracting a first scale region from the received image based on the diffusion speeds; and
extracting a characteristic of the first scale region,
wherein the testing tests liveness of the object based on the extracted characteristic.
24. The liveness test method of claim 23, wherein:
the characteristic of the first scale region includes an amount of noise components included in the first scale region; and
the noise components are computed based on a difference between the first scale region and a result of applying median filtering to the first scale region.
25. The liveness test method of claim 21, further comprising:
outputting a signal corresponding to a failed test when the statistical information corresponds to statistical information related to a medium displaying a face; and
outputting a signal corresponding to a successful test when the statistical information corresponds to statistical information related to an actual face.
26. The liveness test method of claim 13, wherein the computing of the diffusion speeds comprises:
computing the diffusion speed of each pixel based on an original value of the pixel before diffusion and a diffused value of the pixel after diffusion.
27. The liveness test method of claim 26, wherein the computed diffusion speed of a pixel increases as the difference between the original value and the diffused value increases, and decreases as the difference between the original value and the diffused value decreases.
28. The liveness test method of claim 13, wherein the input image corresponds to a single image of a user's face.
29. An image processing method, comprising:
receiving a first image including an illumination component and a non-illumination component;
filtering a plurality of pixels included in the first image to produce a second image related to the illumination component; and
producing a third image related to the non-illumination component based on the first image and the second image.
30. The image processing method of claim 29, wherein the second image is a diffusion image, and the filtering comprises:
diffusing the plurality of pixels included in the first image to produce the diffusion image.
31. The image processing method of claim 29, further comprising at least one of:
recognizing a face based on the third image; and
verifying a user based on the third image.
32. The image processing method of claim 30, wherein the diffusing comprises:
iteratively updating values of the plurality of pixels by applying an additive operator splitting scheme to a diffusion equation.
33. The image processing method of claim 29, wherein the third image is produced based on at least one of: a ratio of the first image to the second image; and a difference between the first image and the second image in a log domain.
34. The image processing method of claim 29, wherein the non-illumination component is included in a first scale region and the illumination component is included in a second scale region, the first scale region being unaffected by illumination changes and the second scale region being sensitive to illumination changes.
35. The image processing method of claim 30, wherein a diffusion speed of pixels corresponding to the non-illumination component is higher than a diffusion speed of pixels corresponding to the illumination component.
36. The image processing method of claim 29, wherein the first image corresponds to a single image of a face.
37. A liveness test apparatus, comprising:
a receiver circuit configured to receive an input image; and
a test circuit configured to test liveness of an object included in the received input image based on whether an image of the object included in the received input image has a characteristic indicating a plane or a characteristic indicating a three-dimensional structure.
38. The liveness test apparatus of claim 37, wherein the object corresponds to a face.
39. The liveness test apparatus of claim 37, wherein the test circuit is further configured to:
filter the received input image to produce a filtered image; and
determine whether the image of the object has the characteristic indicating a plane or the characteristic indicating a three-dimensional structure based on statistical information related to changes in pixel values, wherein the changes in pixel values are changes between values of a plurality of pixels corresponding to the image of the object in the received input image and values of a plurality of pixels corresponding to the image of the object in the filtered image.
40. The liveness test apparatus of claim 39, wherein the test circuit is further configured to:
diffuse the received input image to produce a diffusion image;
compute a diffusion speed of each of the plurality of pixels corresponding to the image of the object based on a difference between the value of each pixel in the received input image and the corresponding value of the pixel in the diffusion image; and
determine whether the image of the object has the characteristic indicating a plane or the characteristic indicating a three-dimensional structure based on the computed diffusion speeds.
41. An image processing apparatus, comprising:
a receiver circuit configured to receive a first image including an illumination component and a non-illumination component;
a filter circuit configured to filter a plurality of pixels included in the first image to produce a second image related to the illumination component; and
a generator circuit configured to produce a third image related to the non-illumination component based on the first image and the second image.
42. The image processing apparatus of claim 41, wherein the first image corresponds to a single image including a face.
CN201510208889.7A 2014-05-09 2015-04-28 Activity test method and equipment and image processing method and equipment Active CN105100547B (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
KR20140055687 2014-05-09
KR10-2014-0055687 2014-05-09
KR1020140077333A KR102257897B1 (en) 2014-05-09 2014-06-24 Apparatus and method for liveness test,and apparatus and method for image processing
KR10-2014-0077333 2014-06-24

Publications (2)

Publication Number Publication Date
CN105100547A true CN105100547A (en) 2015-11-25
CN105100547B CN105100547B (en) 2019-10-18

Family

ID=52997256

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510208889.7A Active CN105100547B (en) 2014-05-09 2015-04-28 Activity test method and equipment and image processing method and equipment

Country Status (4)

Country Link
US (3) US9679212B2 (en)
EP (2) EP2942736A3 (en)
JP (1) JP6629513B2 (en)
CN (1) CN105100547B (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107766786A (en) * 2016-08-23 2018-03-06 三星电子株式会社 Activity test method and active testing computing device
CN108496184A (en) * 2018-04-17 2018-09-04 深圳市汇顶科技股份有限公司 Image processing method, device and electronic equipment
CN110069970A (en) * 2018-01-22 2019-07-30 三星电子株式会社 Activity test method and equipment
CN113569708A (en) * 2021-07-23 2021-10-29 北京百度网讯科技有限公司 Living body recognition method, living body recognition device, electronic apparatus, and storage medium

Families Citing this family (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9679212B2 (en) * 2014-05-09 2017-06-13 Samsung Electronics Co., Ltd. Liveness testing methods and apparatuses and image processing methods and apparatuses
US9934443B2 (en) * 2015-03-31 2018-04-03 Daon Holdings Limited Methods and systems for detecting head motion during an authentication transaction
US10049287B2 (en) * 2015-05-22 2018-08-14 Oath Inc. Computerized system and method for determining authenticity of users via facial recognition
US10275684B2 (en) * 2015-11-04 2019-04-30 Samsung Electronics Co., Ltd. Authentication method and apparatus, and method and apparatus for training a recognizer
KR20180102637A (en) * 2016-01-12 2018-09-17 프린스톤 아이덴티티, 인크. Systems and methods of biometric analysis
CN105740778B (en) * 2016-01-25 2020-01-03 北京眼神智能科技有限公司 Improved three-dimensional human face in-vivo detection method and device
CN107135348A (en) 2016-02-26 2017-09-05 阿里巴巴集团控股有限公司 Recognition methods, device, mobile terminal and the camera of reference object
US10635894B1 (en) * 2016-10-13 2020-04-28 T Stamp Inc. Systems and methods for passive-subject liveness verification in digital media
US11373449B1 (en) * 2016-10-13 2022-06-28 T Stamp Inc. Systems and methods for passive-subject liveness verification in digital media
US10430638B2 (en) * 2016-11-10 2019-10-01 Synaptics Incorporated Systems and methods for spoof detection relative to a template instead of on an absolute scale
US10726244B2 (en) 2016-12-07 2020-07-28 Samsung Electronics Co., Ltd. Method and apparatus detecting a target
KR102370063B1 (en) * 2017-03-28 2022-03-04 삼성전자주식회사 Method and apparatus for verifying face
KR102455633B1 (en) * 2017-12-21 2022-10-17 삼성전자주식회사 Liveness test method and apparatus
CN108154111B (en) * 2017-12-22 2021-11-26 泰康保险集团股份有限公司 Living body detection method, living body detection system, electronic device, and computer-readable medium
JP6984724B2 (en) 2018-02-22 2021-12-22 日本電気株式会社 Spoofing detection device, spoofing detection method, and program
US20190286885A1 (en) * 2018-03-13 2019-09-19 Kneron Inc. Face identification system for a mobile device
US11093771B1 (en) 2018-05-04 2021-08-17 T Stamp Inc. Systems and methods for liveness-verified, biometric-based encryption
US11496315B1 (en) 2018-05-08 2022-11-08 T Stamp Inc. Systems and methods for enhanced hash transforms
JP7131118B2 (en) * 2018-06-22 2022-09-06 富士通株式会社 Authentication device, authentication program, authentication method
US11138302B2 (en) * 2019-02-27 2021-10-05 International Business Machines Corporation Access control using multi-authentication factors
US11301586B1 (en) 2019-04-05 2022-04-12 T Stamp Inc. Systems and processes for lossy biometric representations
SG10201906721SA (en) * 2019-07-19 2021-02-25 Nec Corp Method and system for chrominance-based face liveness detection
US11475714B2 (en) * 2020-02-19 2022-10-18 Motorola Solutions, Inc. Systems and methods for detecting liveness in captured image data
US11967173B1 (en) 2020-05-19 2024-04-23 T Stamp Inc. Face cover-compatible biometrics and processes for generating and using same
EP4325429A4 (en) 2021-04-12 2024-05-08 NEC Corporation Information processing device, information processing method, and recording medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101710383A (en) * 2009-10-26 2010-05-19 北京中星微电子有限公司 Method and device for identity authentication
US20100158319A1 (en) * 2008-12-22 2010-06-24 Electronics And Telecommunications Research Institute Method and apparatus for fake-face detection using range information
CN101999900A (en) * 2009-08-28 2011-04-06 南京壹进制信息技术有限公司 Living body detecting method and system applied to human face recognition

Family Cites Families (45)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4209852A (en) * 1974-11-11 1980-06-24 Hyatt Gilbert P Signal processing and memory arrangement
JP2662856B2 (en) * 1995-01-11 1997-10-15 株式会社エイ・ティ・アール通信システム研究所 Apparatus and method for measuring shape characteristics
US6879341B1 (en) * 1997-07-15 2005-04-12 Silverbrook Research Pty Ltd Digital camera system containing a VLIW vector processor
JP4085470B2 (en) * 1998-05-29 2008-05-14 オムロン株式会社 Personal identification device, personal identification method, and recording medium recording personal identification program
US6263113B1 (en) 1998-12-11 2001-07-17 Philips Electronics North America Corp. Method for detecting a face in a digital image
JP2001126091A (en) * 1999-10-27 2001-05-11 Toshiba Corp Occupant face picture processing system and toll receiving system
KR100421221B1 (en) 2001-11-05 2004-03-02 삼성전자주식회사 Illumination invariant object tracking method and image editing system adopting the method
US7256818B2 (en) * 2002-05-20 2007-08-14 Simmonds Precision Products, Inc. Detecting fire using cameras
US7620212B1 (en) * 2002-08-13 2009-11-17 Lumidigm, Inc. Electro-optical sensor
ATE349739T1 (en) 2002-12-20 2007-01-15 Koninkl Philips Electronics Nv Lighting-independent face detection
US20060122834A1 (en) * 2004-12-03 2006-06-08 Bennett Ian M Emotion detection device & method for use in distributed systems
US8503800B2 (en) 2007-03-05 2013-08-06 DigitalOptics Corporation Europe Limited Illumination detection using classifier chains
JP4734980B2 (en) * 2005-03-15 2011-07-27 Omron Corporation Face authentication device and control method therefor, electronic device equipped with face authentication device, face authentication device control program, and recording medium recording the program
JP4696610B2 (en) * 2005-03-15 2011-06-08 Omron Corporation Subject authentication device, face authentication device, mobile phone, and subject authentication method
EP2194509A1 (en) 2006-05-07 2010-06-09 Sony Computer Entertainment Inc. Method for providing affective characteristics to computer generated avatar during gameplay
WO2008091401A2 (en) * 2006-09-15 2008-07-31 Retica Systems, Inc Multimodal ocular biometric system and methods
WO2008035745A1 (en) * 2006-09-20 2008-03-27 Panasonic Corporation Monitor system, camera and video image coding method
US20130212655A1 (en) * 2006-10-02 2013-08-15 Hector T. Hoyos Efficient prevention of fraud
KR100887183B1 (en) 2007-03-21 2009-03-10 Korea Advanced Institute of Science and Technology Preprocessing apparatus and method for illumination-invariant face recognition
JP4857170B2 (en) 2007-04-10 2012-01-18 Dia-Nitrix Co., Ltd. Cationic polymer flocculant and sludge treatment method using the same
US8463006B2 (en) * 2007-04-17 2013-06-11 Francine J. Prokoski System and method for using three dimensional infrared imaging to provide detailed anatomical structure maps
JP2009187130A (en) * 2008-02-04 2009-08-20 Panasonic Electric Works Co Ltd Face authentication device
WO2009123640A1 (en) * 2008-04-04 2009-10-08 Hewlett-Packard Development Company, L.P. Virtual machine manager system and methods
WO2010036554A2 (en) * 2008-09-25 2010-04-01 Dolby Laboratories Licensing Corporation Improved illumination and light recycling in projection systems
GB0901084D0 (en) * 2009-01-22 2009-03-11 Trayner David J Autostereoscopic display
CN101499164B (en) 2009-02-27 2011-02-09 Xi'an Jiaotong University Image interpolation reconstruction method based on single low-resolution image
JP5106459B2 (en) * 2009-03-26 2012-12-26 Toshiba Corporation Three-dimensional object determination device, three-dimensional object determination method, and three-dimensional object determination program
KR20110092752A (en) 2010-02-10 2011-08-18 Lee In-jeong A method for detecting image blurring and the camera using the method
US8675926B2 (en) * 2010-06-08 2014-03-18 Microsoft Corporation Distinguishing live faces from flat surfaces
US8542898B2 (en) * 2010-12-16 2013-09-24 Massachusetts Institute Of Technology Bayesian inference of particle motion and dynamics from single particle tracking and fluorescence correlation spectroscopy
US9916538B2 (en) * 2012-09-15 2018-03-13 Z Advanced Computing, Inc. Method and system for feature detection
JP5035467B2 (en) * 2011-10-24 2012-09-26 NEC Corporation Three-dimensional authentication method, three-dimensional authentication device, and three-dimensional authentication program
US9075975B2 (en) * 2012-02-21 2015-07-07 Andrew Bud Online pseudonym verification and identity validation
US8817120B2 (en) 2012-05-31 2014-08-26 Apple Inc. Systems and methods for collecting fixed pattern noise statistics of image data
JP5955133B2 (en) * 2012-06-29 2016-07-20 Secom Co., Ltd. Face image authentication device
CN102938144B (en) 2012-10-15 2016-04-13 Shenzhen Institutes of Advanced Technology Face relighting method based on total variation model
CN104904197B (en) 2012-12-05 2016-12-28 Fujifilm Corporation Imaging device and abnormal oblique incident light detection method
US8914837B2 (en) * 2012-12-14 2014-12-16 Biscotti Inc. Distributed infrastructure
US9495526B2 (en) * 2013-03-15 2016-11-15 Eyelock Llc Efficient prevention of fraud
US20140270404A1 (en) * 2013-03-15 2014-09-18 Eyelock, Inc. Efficient prevention of fraud
US20140270409A1 (en) * 2013-03-15 2014-09-18 Eyelock, Inc. Efficient prevention of fraud
US20150124072A1 (en) 2013-11-01 2015-05-07 Datacolor, Inc. System and method for color correction of a microscope image
US9361700B2 (en) 2014-05-08 2016-06-07 Tandent Vision Science, Inc. Constraint relationship for use in an image segregation
US9679212B2 (en) * 2014-05-09 2017-06-13 Samsung Electronics Co., Ltd. Liveness testing methods and apparatuses and image processing methods and apparatuses
US20160085958A1 (en) * 2014-09-22 2016-03-24 Intel Corporation Methods and apparatus for multi-factor user authentication with two dimensional cameras

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100158319A1 (en) * 2008-12-22 2010-06-24 Electronics And Telecommunications Research Institute Method and apparatus for fake-face detection using range information
CN101999900A (en) * 2009-08-28 2011-04-06 Nanjing Yijinzhi Information Technology Co., Ltd. Living body detecting method and system applied to human face recognition
CN101710383A (en) * 2009-10-26 2010-05-19 Beijing Vimicro Corporation Method and device for identity authentication

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
BUYSSENS P., REVENU M.: "Label diffusion on graph for face identification", IAPR International Conference on Biometrics *
XIAOYANG TAN et al.: "Face Liveness Detection from a Single Image with Sparse Low Rank Bilinear Discriminative Model", European Conference on Computer Vision *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107766786A (en) * 2016-08-23 2018-03-06 Samsung Electronics Co., Ltd. Activity test method and activity test computing device
CN107766786B (en) * 2016-08-23 2023-11-24 Samsung Electronics Co., Ltd. Activity test method and activity test computing device
CN110069970A (en) * 2018-01-22 2019-07-30 Samsung Electronics Co., Ltd. Activity test method and equipment
CN108496184A (en) * 2018-04-17 2018-09-04 Shenzhen Goodix Technology Co., Ltd. Image processing method, device and electronic equipment
CN108496184B (en) * 2018-04-17 2022-06-21 Shenzhen Goodix Technology Co., Ltd. Image processing method and device, and electronic equipment
CN113569708A (en) * 2021-07-23 2021-10-29 Beijing Baidu Netcom Science and Technology Co., Ltd. Living body recognition method, living body recognition device, electronic apparatus, and storage medium

Also Published As

Publication number Publication date
CN105100547B (en) 2019-10-18
EP2942736A3 (en) 2015-12-23
US20150324629A1 (en) 2015-11-12
EP3699817A3 (en) 2020-10-21
EP3699817A2 (en) 2020-08-26
US20170228609A1 (en) 2017-08-10
US10360465B2 (en) 2019-07-23
JP6629513B2 (en) 2020-01-15
US11151397B2 (en) 2021-10-19
US20160328623A1 (en) 2016-11-10
US9679212B2 (en) 2017-06-13
JP2015215876A (en) 2015-12-03
EP2942736A2 (en) 2015-11-11

Similar Documents

Publication Publication Date Title
CN105100547A (en) Liveness testing methods and apparatuses and image processing methods and apparatuses
US11830230B2 (en) Living body detection method based on facial recognition, and electronic device and storage medium
Liu et al. Cross-ethnicity face anti-spoofing recognition challenge: A review
CN105469034B (en) Face recognition method based on weighted discriminative sparsity-constrained non-negative matrix factorization
CN111814620B (en) Face image quality evaluation model establishment method, optimization method, medium and device
US9152888B2 (en) System and method for automated object detection in an image
US20140341443A1 (en) Joint modeling for facial recognition
KR102257897B1 (en) Apparatus and method for liveness test, and apparatus and method for image processing
CN110414350A (en) Face anti-spoofing detection method using two-way convolutional neural networks based on an attention model
CN104680119A (en) Image identity recognition method, related device and identity recognition system
AbuNaser et al. Underwater image enhancement using particle swarm optimization
CN105989331A (en) Facial feature extraction apparatus, facial feature extraction method, image processing equipment and image processing method
CN103714340B (en) Adaptive feature extraction method based on image partitioning
CN102314598A (en) Retinex theory-based method for detecting human eyes under complex illumination
Wang et al. An interconnected feature pyramid networks for object detection
Liu et al. Presentation attack detection for face in mobile phones
CN111626212A (en) Method and device for identifying object in picture, storage medium and electronic device
CN114283087A (en) Image denoising method and related equipment
Kaur et al. Improved Facial Biometric Authentication Using MobileNetV2
US9224097B2 (en) Nonlinear classification of data
Yu et al. Efficient object detection based on selective attention
KR100711223B1 (en) Face recognition method using Zernike/LDA and recording medium storing the method
CN109063761A (en) Diffuser detachment detection method, device and electronic equipment
CN116012248B (en) Image processing method, device, computer equipment and computer storage medium
CN113128289B (en) Face recognition feature extraction calculation method and equipment

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant